by Alan Winfield
January 26, 2017
Part 1: Autonomous Systems and Safety
We all rely on machines. All aspects of modern life, from transport to energy, work to welfare, play to politics, depend on a complex infrastructure of physical and virtual systems. How many of us understand how all of this stuff works? Very few, I suspect. But it doesn’t matter, does it? We trust the good men and women (the much-maligned experts) who build, manage and maintain the infrastructure of life. If something goes wrong, they will know why, and (we hope) make sure it doesn’t happen again.
All well and good, you might think. But the infrastructure of life is increasingly autonomous – many decisions are now made not by a human but by the systems themselves. When you search for a restaurant near you, the recommendation isn’t made by a human but by an algorithm. Many financial decisions are made not by people but by algorithms; and I don’t just mean city investments – it’s likely that your loan application will be decided by an algorithm. Machine legal advice is already available – a trend that is likely to grow. And of course, if you take a ride in a driverless car, algorithms decide when the car turns, brakes and so on. I could go on.
These are not trivial decisions. They affect lives. Their real-world impacts are human and economic, even political (search engine results may well influence how someone votes). In engineering terms, these systems are safety critical. Examples of safety-critical systems we all rely on from time to time include aircraft autopilots and train braking systems. But – and this may surprise you – the demanding engineering techniques used to prove the safety of such systems are not applied to search engines, automated trading systems, medical diagnosis AIs, assistive living robots or (I’ll wager) driverless car autopilots.
Why is this? Well, it’s partly because the field of AI and autonomous systems is moving so fast. But I suspect it has much more to do with an incompatibility between the way we have traditionally designed safety critical systems, and the design of modern AI systems. There is, I believe, one key problem: learning. There is a very good reason that current safety critical systems (like aircraft autopilots) don’t learn. Current safety assurance approaches assume that the system being certified will never change its behaviour, but a system that learns does – by definition – change its behaviour, so any certification is rendered invalid after the system has learned.
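To make the point concrete, here is a minimal, entirely hypothetical sketch in Python: a toy “braking controller” whose single parameter is verified against a safety limit at certification time, and which then adjusts that parameter by learning. The function names, the safety property and the numbers are all invented for illustration; the point is only that a property verified against fixed behaviour stops holding once the behaviour changes.

```python
# Hypothetical sketch: certification assumes fixed behaviour;
# learning changes behaviour, invalidating the certificate.

def braking_output(speed, weight):
    # A toy controller: braking command proportional to speed.
    return weight * speed

weight = 0.5  # parameter value fixed at certification time

# "Certification": check the safety property against the fixed behaviour.
# (Toy property: the braking command never exceeds a limit of 50.)
certified = all(braking_output(s, weight) <= 50 for s in range(0, 101))
print(certified)  # True: the property holds for the certified system

# The deployed system learns online and adjusts its parameter.
weight = 0.6

# The same property no longer holds for the updated behaviour.
still_holds = all(braking_output(s, weight) <= 50 for s in range(0, 101))
print(still_holds)  # False: the earlier certificate no longer describes
                    # the system that is actually running
```

The certificate described the system as it was at certification time; after learning, it describes a system that no longer exists.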
And as if that were not bad enough, the particular method of learning that has caused such excitement – and rapid progress – over the last few years is based on Artificial Neural Networks (more often referred to these days as Deep Learning). A characteristic of ANNs is that, after the network has been trained on datasets, it is in practice impossible to examine its internal structure in order to understand why and how it makes a particular decision. The decision-making process of an ANN is opaque. We call this the black box problem.
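A toy illustration of that opacity, with random numbers standing in for real trained weights (assumed, not taken from any real system): the tiny network below computes a decision perfectly well, but inspecting its weight matrices tells a human nothing about why that decision was made.

```python
import numpy as np

# Toy sketch of the black box problem. The weights below are random
# placeholders for the values training would produce; in either case
# they carry no individually interpretable meaning.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def decide(x):
    hidden = np.tanh(x @ W1)    # hidden-layer activations
    score = float(hidden @ W2)  # scalar output
    return score > 0            # the "decision"

x = np.array([0.2, -1.0, 0.5, 0.3])
print(decide(x))     # a definite yes/no decision -- but *why*? The answer
print(W1.round(2))   # is smeared across dozens of numbers, none of which
                     # corresponds to a human-readable reason.
```

A rule-based system could be read and audited line by line; here there is nothing to read.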
Does this mean we cannot assure the safety of learning autonomous/AI systems at all? No, it doesn’t. The problem of safety assurance for systems that learn is hard but not intractable, and is the subject of current research*. The black box problem may be intractable for ANNs, but it could be avoided by using approaches to AI that do not rely on ANNs.
But – here’s the rub. This involves slowing down the juggernaut of autonomous systems and AI development. It means taking a much more cautious and incremental approach, and it almost certainly involves regulation (that, for instance, makes it illegal to run a driverless car unless the car’s autopilot has been certified as safe – and that would require standards that don’t yet exist). Yet the commercial and political pressure is to be more permissive, not less; no country wants to be left behind in the race to cash in on these new technologies.
This is why work toward AI/autonomous systems standards is so vital, together with the political pressure to ensure our policymakers fully understand the public safety risks of unregulated AI.
The next few years of swimming against the tide is going to be hard work.
In my next blog post I will describe one current standards initiative aimed at introducing transparency into AI and autonomous systems, based on the simple principle that it should always be possible to find out why an AI/AS made a particular decision.
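By way of illustration only – the initiative itself is the subject of the next post – here is a hypothetical sketch of what that principle might look like in code: every decision is recorded together with the inputs and the rule that produced it, so the question “why?” can always be answered after the fact. The function names and the toy braking rule are invented for this example.

```python
import datetime
import json

# Hypothetical sketch of the transparency principle: log every decision
# with the inputs and the rule that produced it.

decision_log = []

def decide_and_log(sensor_readings):
    # Toy decision rule for a driverless car: brake if an obstacle is near.
    obstacle_distance = sensor_readings["obstacle_distance_m"]
    action = "brake" if obstacle_distance < 10.0 else "continue"
    decision_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": sensor_readings,
        "rule": "brake if obstacle_distance_m < 10.0",
        "action": action,
    })
    return action

print(decide_and_log({"obstacle_distance_m": 7.5}))  # brake
print(json.dumps(decision_log[-1], indent=2))        # the auditable "why"
```

The log plays the same role for an autonomous system that a flight data recorder plays for an aircraft: it does not make the system safe by itself, but it makes investigation possible when something goes wrong.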
Alan Winfield is Professor in robotics at UWE Bristol. He communicates about science on his personal blog.