Thursday 15 January 2015

Industry voice: What can be done to promote trust in electronic banking systems?

Introduction


It wasn't so long ago that a PIN and a personal password were your guarantee of secure internet banking. Then along came digital signatures, and personalised images or phrases to confirm that the website is genuine. These were followed by single-use Transaction Authentication Numbers (TANs) and two-factor authentication, where the TAN is generated by an individual security token or transmitted independently by email or SMS. Then chipTAN generators added transaction data to outwit man-in-the-middle attacks, and now there are calls for a further layer of biometric identification for added security.
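Those single-use codes generated by a security token typically follow the one-time-password pattern standardised as HOTP (RFC 4226): a shared secret and a counter fed through an HMAC, truncated to a short decimal code. A minimal sketch in Python, for illustration only rather than production use:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226-style dynamic truncation)."""
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The same secret and counter always yield the same code, but each
# new counter value produces an independent single-use code.
print(hotp(b"12345678901234567890", 0))
print(hotp(b"12345678901234567890", 1))
```

Because bank and token advance the counter in step (or derive it from the clock, as in TOTP), each code works exactly once.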


Does all this mean that, year-on-year, the public is growing ever more confident of the safety and security of internet banking? Probably not – any more than a house surrounded by a high wall with razor wire, electric fencing, motion detectors, security cameras and armed response warnings makes you feel confident that this must be a safe neighbourhood to live in.


Layer upon layer


Adding many layers of security is the obvious bit – the criminal may have discovered my PIN code and retrieved a bank statement from the refuse bin, but still might not know my birth date and mother's maiden name.


When there is a certain amount of human interaction, as in telephone banking, you can even allow a bit of leeway on getting these answers exactly right. Sometimes the call centre asks for more details than I can provide: I have remembered to take my debit card and PIN, reminded myself of all my security answers – and then they ask for the amount of a monthly standing order and I simply cannot remember. But does that mean they will slam the phone down on me? No, they go on asking other questions and see how I manage.


Even though I failed one security test, I get another chance because a human operator has time and the social skills to judge how I react to being told I have failed a test, how I explain or justify my failure, and how I respond to further questioning. A human operator has a human brain that can make very many more subtle decisions based on further layers of information. It can also be wrong.


If, however, the whole transaction takes place via a keypad, there is vastly less corroborating data and far greater reliance on mechanical answers. If the PIN or keyword is wrong, it is wrong, and it would be unwise to allow too many further attempts – because we might be under attack from a system using an algorithm to generate a series of likely PINs.
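That hard limit on attempts can be sketched as a simple counter. The names here are hypothetical, and a real system would also hash the stored PIN and apply time-based back-off rather than a bare counter:

```python
class PinChecker:
    """Locks an account after a fixed number of wrong PIN attempts.

    Illustrative only: the point is that a hard limit defeats
    algorithmic guessing, because the attacker runs out of tries.
    """
    MAX_ATTEMPTS = 3

    def __init__(self, correct_pin: str):
        self._pin = correct_pin
        self._failures = 0
        self.locked = False

    def try_pin(self, pin: str) -> bool:
        if self.locked:
            return False                 # no attempts accepted once locked
        if pin == self._pin:
            self._failures = 0           # success resets the counter
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.locked = True           # stop algorithmic guessing cold
        return False
```

The cost, of course, is that the same mechanism locks out the genuine but absent-minded customer – which is exactly the trade-off the next section turns to.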


Judgement calls


But what if the keypad entry system were so sophisticated that it could, like the call centre staff, make judgements about such mistakes – whether, for example, the entry pattern looks like a mechanised attack, an absent-minded but genuine customer, or a hacker trying out a series of likely guesses? Google searches, for example, are pretty good at guessing what was really meant when terms are misspelled – they don't just shut down on you.


Similar intelligence might help decide whether a mistaken password was a slip or a fraud attempt and, like a human operator, it might actually identify the attacker, raise an alarm and help catch them, instead of simply blocking them to try again later.
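One crude signal such a system could use to separate a slip from an unrelated guess is the edit distance between what was typed and what was expected – workable for non-secret prompts such as a memorable word, though never for passwords, which should only ever be stored hashed. A sketch:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_a_slip(entered: str, expected: str) -> bool:
    # One or two character edits suggests a typo; anything
    # further away reads more like an outright guess.
    return edit_distance(entered, expected) <= 2
```

A mistyped "pasword" sits one edit from "password", whereas a guessed "letmein" is nowhere near – a distinction a blunt right/wrong check throws away.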


We're talking the future here – artificial intelligence may be sufficiently advanced to provide some interesting screening attempts, but not yet enough to be trusted with anything as sensitive and precious as real-world customers who are paying for the bank's services.


There are, however, recent developments that could bring that future closer.


A fuzzy approach


So what can be done right now to increase trust in banking systems?


Today's most advanced automated security tests throw every known attack at the system under every likely operating condition and – being cloud-based – the tests are kept up to date with new attacks as soon as they are recognised. This is a powerful way of reassuring the bank's management that their systems are indeed secure and trustworthy, but it is hard to explain to the customer in a way that builds their trust. They might even wonder why, if the system was properly designed in the first place, it now needs so much additional testing.


The human factor in telephone banking raises the question of whether better trust might be built around a more organic test approach – one that builds up layers of testing that are not so rigidly defined. You could describe these test criteria as being "fuzzy", meaning that the correct responses are not so sharply delineated around the edges. The point is that today's sophisticated test procedures do include a form of "fuzz testing" as a way of addressing unknown security threats.


Fuzz testing bombards the system – anywhere applications and devices receive input – with semi-random data instead of known attack profiles. This is one way to find out whether any irregular input can crash or hang an application, bring down a website or put a device into a compromised state – the sort of thing that might happen when someone types the letter 'O' where a zero was required, or accidentally hits an adjacent key.
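A toy fuzzing harness makes the idea concrete: throw a mix of random strings and known edge cases at an input handler and check that it only ever fails cleanly. The `parse_amount` handler below is a made-up stand-in for the system under test:

```python
import random
import string

def fuzz_inputs(n: int, max_len: int = 32, seed: int = 1):
    """Yield n semi-random strings, mixing noise with classic edge cases."""
    rng = random.Random(seed)                      # seeded: runs are repeatable
    edge_cases = ["", "0", "O", "\x00", "'; DROP TABLE--", "9" * 1000]
    for _ in range(n):
        if rng.random() < 0.2:
            yield rng.choice(edge_cases)
        else:
            length = rng.randrange(max_len + 1)
            yield "".join(rng.choice(string.printable) for _ in range(length))

def parse_amount(text: str) -> int:
    """Toy handler under test: accepts an amount in pence, digits only."""
    if not text.isdigit():
        raise ValueError("amount must be digits")
    return int(text)

# Harness: a robust handler rejects bad input cleanly; any other
# exception (or a hang) is exactly the kind of bug fuzzing hunts for.
unexpected = 0
for candidate in fuzz_inputs(1000):
    try:
        parse_amount(candidate)
    except ValueError:
        pass            # clean rejection is the desired behaviour
    except Exception:
        unexpected += 1  # a crash worth investigating
print("unexpected failures:", unexpected)
```

Real fuzzing tools work the same way at vastly greater scale, instrumenting the target so that crashes, hangs and memory corruption are all caught automatically.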


Zero-day attacks


Another goal of fuzz testing is to anticipate "zero-day" attacks – i.e. those that hit you before they hit the news. Hackers assume that you have thoroughly tested your system with traditional functional testing, but there are countless permutations of invalid random input that functional tests will never have covered.


As David Newman, President of Benchmarking Consultancy Network Test, explains: "Attackers have long exploited the fact that even subtle variations in protocols can cause compromise or failure of networked devices. Fuzzing technology helps level the playing field, giving implementers a chance to subject their systems to millions of variations in traffic patterns before the bad guys get a chance to".


All it might take is one random string of input to cause a crash or hang, and so hackers use automated software to keep throwing random input at your network in the hope of striking lucky. "It takes a thief to catch a thief", as they say – so fuzz testing does the same thing, but under controlled conditions. Again, fuzz testing relies heavily on automation to get sufficient test coverage. Today's fuzzing tools generate millions of permutations – not only making the network much more secure, but also saving manual work and keeping the testing fast and efficient.
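Alongside purely random input, fuzzers commonly mutate a known-good input – flipping, inserting or deleting single bytes – to generate exactly those "near miss" permutations. A sketch, using a hypothetical message template:

```python
import random

def mutate(valid: bytes, n_variants: int, seed: int = 7):
    """Generate variants of a known-good input by flipping, inserting
    or deleting a single byte -- the 'near miss' style of fuzzing."""
    rng = random.Random(seed)
    for _ in range(n_variants):
        data = bytearray(valid)
        op = rng.choice(("flip", "insert", "delete"))
        pos = rng.randrange(len(data))
        if op == "flip":
            data[pos] ^= 1 << rng.randrange(8)   # flip one bit in place
        elif op == "insert":
            data.insert(pos, rng.randrange(256)) # splice in a random byte
        elif len(data) > 1:                      # delete, keeping non-empty
            del data[pos]
        yield bytes(data)

template = b"AMOUNT=100;ACCT=12345678"
for variant in mutate(template, 3):
    print(variant)
```

Because each variant differs from a valid message by only a byte or a bit, these inputs probe precisely the edges of the parser's assumptions – the subtle protocol variations the quote above describes.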


The immediate benefit of fuzz testing is that it increases the bank's trust in its own system security. But does that help the customer to build trust?


I suggest that it does, for the following reasons. One of the things that supports trust in Google is the way it handles silly mistakes: if a user misspells a search term, Google comes up with intelligent suggestions, and that gives the feel of a well-designed system. By analogy, if a customer makes a small slip when logging in to the bank, and the system responds stupidly or even crashes, it suggests that the system is fragile, and that does not build customer confidence.


So the greater resilience to error resulting from repeated fuzz testing does make the system seem less fragile – and that is the first step in building confidence.


What lies ahead?


Today's functional test systems can do a lot to reassure network managers that their systems are defended as well as possible against attacks and faults, but then the task is to pass on that confidence to the customer without over-explaining and sounding "defensive" in the negative sense.


Fuzz tests go further along the same lines by adding confidence against unknown and unexpected threats, but I suggest that their application could also make the system begin to feel more solid and trustworthy to the customer.


Can we go further? Can we build into a mechanised entry system the equivalent of human intelligence that can assess the personality of the applicant and make good decisions about the credibility of their responses, and what further questions to ask? Instead of just dumbly closing down, can the system flag a danger signal and then escalate authentication with further security checks?
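Such escalation can be modelled as a small state machine: each failed check steps the customer up to a stronger one, and only the final rung raises the alarm. The states and the ladder below are hypothetical:

```python
from enum import Enum, auto

class AuthState(Enum):
    PASSWORD = auto()    # normal login check
    CHALLENGE = auto()   # extra security question
    OTP = auto()         # one-time code sent to the customer's phone
    BLOCKED = auto()     # alarm raised, held for manual review

# Hypothetical escalation ladder: each failure steps up to a
# stronger check instead of simply slamming the door.
ESCALATION = {
    AuthState.PASSWORD: AuthState.CHALLENGE,
    AuthState.CHALLENGE: AuthState.OTP,
    AuthState.OTP: AuthState.BLOCKED,
}

def after_failure(state: AuthState) -> AuthState:
    """Escalate to the next, stronger check after a failed attempt."""
    return ESCALATION.get(state, AuthState.BLOCKED)
```

The genuine customer who fumbles a password merely faces one extra question; the attacker who keeps failing climbs the ladder into an alarm rather than a quiet lockout they can retry tomorrow.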


To the customer, such an intelligent response would suggest that the system really is alert to danger and "knows what it is doing" – as scary, and yet as comforting, as a community police officer with good local knowledge and experience.


We still have a long way to go before computers can match those skills, but recent advances in real time big data analysis could help clarify understanding of human behaviour patterns, and suggest more subtle tests to identify fraudulent behaviour. Couple that with fuzzing techniques that extend response testing to embrace the infinite variety of possible near misses, and this could point the way ahead.
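A first taste of what such behavioural analysis might look like: compare an observed measurement – say, the interval between keystrokes – against the customer's own history, and flag anything many standard deviations out. A deliberately simple sketch; real behavioural analytics would combine many such signals:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float,
                 threshold: float = 3.0) -> bool:
    """Flag a measurement far outside this customer's own baseline.

    A crude stand-in for behavioural analytics: a bot typing at
    machine speed sits many standard deviations from a human's
    usual keystroke intervals (given here in milliseconds).
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold
```

A typing interval of 5 ms against a history hovering around 200 ms would be flagged instantly, while ordinary human variation would pass unnoticed.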


Because the real challenge is two-fold: both to make the system resilient to attack and, at the same time, to build the customers' trust that the system is truly resilient.







from TechRadar: All latest feeds http://ift.tt/1G0Q6FW


