Friday 13 March 2015

In Depth: Automated cars and AI: reasons why the tech industry must consider ethics

Introduction and taking responsibility


Imagine you're in an autonomous car on the motorway when a lorry jack-knives in front of you just as a cyclist appears alongside. The computer inside your car now has to choose between swerving out of the way and killing the cyclist, or remaining where it is and risking your life. What would it do?


That would depend on the software's algorithms, decisions initially made by a computer engineer, which casts doubt on the idea that the tech industry is neutral. With AI and automation on the horizon, are ethics and philosophy about to become as important to computer engineers as noughts and ones?


Does the self-driving dilemma above have a correct answer? No, says philosopher Patrick Lin, PhD, director of the Ethics and Emerging Sciences Group at California Polytechnic State University and editor of the book Robot Ethics: The Ethical and Social Implications of Robotics. But with lawsuits bound to follow, the way engineers work will have to change.


Transparent ethics


"Make the ethics programming in an autonomous car transparent in order to set expectations with users and society, and be able to defend those programming decisions very well," is Lin's advice, but the actual morality boundaries are up for grabs.


"It wouldn't be unreasonable to put the safety of the bicyclist over that of drivers," he suggests, adding that the moral and legal principle might be that if you introduce a risk to society, such as a new kind of 'robot car', then you should be the one who bears the brunt of that risk.


On the other hand, if the autonomous car could calculate that the choice was between simply knocking the cyclist off, with only minor injuries likely, or driving off the road or into oncoming traffic, then perhaps the car should make 'least harm to humans' its priority.
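
Neither Lin nor the article describes how such a calculation would be coded, but a minimal sketch of a 'least harm to humans' rule, with invented manoeuvres and harm scores, might look something like this in Python:

# Illustrative sketch only: a hypothetical "least harm to humans" chooser.
# The manoeuvres, harm scores and helper names are invented for this example.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    estimated_harm: float  # 0.0 = no injury expected, 1.0 = likely fatality

def least_harm_choice(options: list[Manoeuvre]) -> Manoeuvre:
    """Pick whichever candidate action minimises expected harm to humans."""
    return min(options, key=lambda m: m.estimated_harm)

options = [
    Manoeuvre("stay in lane and brake", 0.9),       # high risk to the occupant
    Manoeuvre("swerve into the cyclist", 0.3),      # minor injuries likely
    Manoeuvre("swerve into oncoming traffic", 0.95),
]
print(least_harm_choice(options).name)  # "swerve into the cyclist"

Even a toy version like this makes the ethical question concrete: someone has to decide how those harm estimates are produced and weighted.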


Dr Kevin Curran


Taking responsibility


For now, refined questions such as these are moot; all of this assumes better sensor and computing technology than we have so far. "The best we can do today is to program the car to brake hard, or swerve toward the smaller object, or some other simple-minded action, and this could work for a good number of cases," says Lin. "But for those cases where that reflex is the wrong action, car manufacturers will have some explaining to do."
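
As a rough illustration of the kind of 'simple-minded action' Lin describes, the hypothetical reflex below brakes hard or swerves toward the smaller detected object; the sensor inputs and numbers are invented, not taken from any real vehicle system:

# Hypothetical reflex policy in the spirit of Lin's "brake hard, or swerve
# toward the smaller object" description; all names and numbers are invented.

def reflex_action(obstacle_ahead: bool,
                  left_object_size: float,
                  right_object_size: float) -> str:
    """Return a crude emergency action with no attempt at ethical reasoning."""
    if not obstacle_ahead:
        return "continue"
    if left_object_size == right_object_size:
        return "brake hard"
    # Swerve toward whichever side holds the smaller detected object.
    return "swerve left" if left_object_size < right_object_size else "swerve right"

print(reflex_action(True, left_object_size=1.2, right_object_size=8.0))  # swerve left

The sketch's blind spots are the point: a rule this blunt cannot tell a cyclist from a cardboard box, which is exactly why Lin expects manufacturers to have some explaining to do when the reflex turns out to be the wrong action.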


The point is that the tech industry will soon have to start making, and taking responsibility for, life or death decisions embedded in algorithms. "Technology is evolving at a pace and scale that we have not seen before," says Dr Kevin Curran, IEEE Technical Expert and group leader for the Ambient Intelligence Research Group at the University of Ulster. "It's leaving a void where society is struggling to keep up with the social and moral implications technology is creating."


Dirty secrets


Morality and ethics aren't new to technology. They're everywhere. Is the tracking, monitoring and data harvesting of internet users – enabled by computer engineers – at all ethical? What about spam? Should the designers of phones and tablets, and the programmers who design apps for them, feel bad about the terrible waste of packaging, the low wages paid to assemble these gadgets, or the depletion of natural resources that they undoubtedly cause?


Curran thinks that morality has been an issue in the tech industry for yonks. "The first computer ever was built to calculate the trajectories of missiles," he points out, "and planes have been flying via computer guidance for many decades."


The truth is that most engineers have dealt with moral issues at some point in their careers. "We tend to instinctively believe that technology is neutral, but that humans can repurpose it for evil," he says. "When was the last time you heard someone blame Tim Berners-Lee for child pornography online, as opposed to thanking him for the World Wide Web?" Engineers who struggle with the ethical implications of what they're asked to do, says Curran, can simply move jobs.


World Wide Web creator Tim Berners-Lee


"The question is never whether an algorithm is neutral but whether the outcome of applying that algorithm is neutral," says John Everhard, European CTO at Pegasystems, who thinks that in the era of big data it's software that becomes the deciding factor in whether the outcome is ethical, or not.


"It is increasingly important that the organisations that genuinely care about their customers and wish to exhibit strong moral values are provided with software systems to prove they made decisions on a morally sound basis," he says. Software systems designed for banks, for instance, can only be considered strictly ethical if they're completely transparent and use an individual's personal history to calculate borrowing rates rather than blind statistics.


Unethical technology


There are obviously some technologies and products that are created for completely unethical ends. "Ethics, or a lack thereof, may be found in the design process and intrinsically linked to the creation," says Lin. Think gas chambers, torture devices, missiles and robotic weaponry.


"Computer viruses and malware are 'evil'," says Curran, adding another to the long list. "They have no positive uses whatsoever and are a clear example of a non-ambiguous piece of technology with nothing but evil as its payload."


Some smartphone apps are almost accidentally unethical. A prime example is Facebook's 'Year in Review' feature from last year, which pushed a recap of the user's most popular posted photos, unintentionally forcing some users to confront recent tragedies like the death of their child.


Into the same category of badly thought-out ideas goes the recent swathe of fitness apps. "Without any conscious sexism, there's an all-too-familiar trend of health monitoring apps that fail to consider tracking a woman's menstrual cycle even though women make up half the world," says Lin, adding that humanoid robots are typically designed to resemble men. "Those might not be unethical in the sense of evil, but they're certainly challenging if you care about social impact and the complicated role of gender."


What about the internet?


But the ultimate unethical, badly thought-through modern creation? It has to be the internet. Its core architecture makes possible cyberbullying, copyright theft, spam email and all kinds of other activities that could be defined as unethical. "If ethics weren't an afterthought but part of the internet's design, then perhaps we could have headed off a lot of the problems we face today – not just security vulnerabilities, but also intellectual property issues, cyberbullying, privacy issues, and so on," says Lin.


Robot floor scrubber


Automated morality: who decides?


An automated car's core ethics need to be well understood, but to be acceptable those ethics need to come from the societies they'll operate in. "The work of programming moral decision-making into a product must include input from broader society, not just in a bubble of a particular company or even Silicon Valley, which may have radically different values than the rest of the world," says Lin.


Artificial intelligence can also be dangerous; last month a woman was 'attacked' by her robot vacuum cleaner and had to call someone to free her. "It's recommended we always have a managed kill switch that has protected code built-in and secured by design, which realises the machine shouldn't continue trying to hoover-up this poor lady," says Neil Thacker, Information Security & Strategy Officer at Websense. "When building and modelling AI behaviour, it remains vital to build in the kill switch to protect against rare anomalies such as this."
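
Thacker gives no technical detail, but the idea of a kill switch built in 'by design' can be pictured as a guard evaluated before every action in the machine's control loop, as in this invented sketch:

# Invented sketch of a kill-switch check wrapped around a robot's control loop.
# The RobotVacuum class and its behaviour are hypothetical, not a real API.
import time

class RobotVacuum:
    def __init__(self) -> None:
        self.kill_switch_engaged = False

    def engage_kill_switch(self) -> None:
        """Hardware or remote trigger: everything after this call must stop."""
        self.kill_switch_engaged = True

    def step(self) -> None:
        print("hoovering...")

    def run(self, max_steps: int = 5) -> None:
        # The guard is checked before every action, not just once at start-up.
        for _ in range(max_steps):
            if self.kill_switch_engaged:
                print("kill switch engaged: motors stopped")
                return
            self.step()
            time.sleep(0.1)

robot = RobotVacuum()
robot.engage_kill_switch()  # e.g. triggered when someone is caught by the machine
robot.run()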


It's a similar story with driverless cars. "Google initially stated they wanted their driverless cars to have no steering wheel or foot controls because they were driverless and accurate, but in the worst-case scenario, where the AI fails or cannot make a decision based on the data, safety remains paramount," says Thacker. "People are not ready to change or trust AI in the next ten years, but we will see future generations challenge this rule."


Incentivising ethics


The incentive to be 'moral' is the threat of litigation. "The motivation to build rigorous and secure systems should be there because it is quite possible that all involved in its design could be held liable if a defect caused or even contributed to a collision," says Curran, who thinks that as computer programmers come to play a bigger part than drivers in how vehicles move, manufacturers will build the cost of litigation and insurance into their vehicles.


Lin thinks that the social impact of any product should be part of the product's launch plan. "Even if the creators don't care about responsibility and ethics, they should care about how these issues might harm their brand, product adoption, and financial bottom line, for instance, if legal troubles arise," he says. "This is particularly true with artificial intelligence."


With automation, artificial intelligence and the Internet of Things on the horizon, the ethics and intentions of the creators and programmers of such devices are becoming more important. Does the tech industry need a code of ethics?


Actually, there already is one: the IEEE Code of Ethics. But what will force companies to take responsibility for pre-programmed ethics and morality is the threat of litigation, since people aren't going to buy automated cars, robots or even control apps if they themselves are liable for the machines' decisions. No one is going to want an automated car without knowing what the risks are.






from TechRadar: All latest feeds http://ift.tt/18GkYvM

via IFTTT
