Monday 10 October 2016

Autonomous Vehicles in the UK: Beware the Robotised Drunk Driver



[Disclaimer: I have had no involvement with the development of autonomous vehicle software. Any concerns raised here are based on my generic software development knowledge, and might (hopefully) be completely unfounded with respect to autonomous vehicles in the UK.]

Much has been written about the interesting ethical questions that arise with the control systems in autonomous vehicles. Most of this has been a rephrasing of the various age-old gedanken experiments that could confront an autonomous AI. For example: the car is driving at speed and a pedestrian steps out in front of it. The AI can either take evasive action into a lane of oncoming traffic (posing a risk to the driver and oncoming vehicles), or plough on into the pedestrian and kill them. What action should, or can, it take, and who is at fault when someone dies?

Fundamentally, this tends to elicit either a technical response (the car would employ some technology to prevent such a situation from ever arising) or a legalistic one (the driver should always have their hands hovering over the wheel, so that this remains ultimately their responsibility).

However, in my view there is another ethical question that cannot be shrugged off so lightly. Previous thought exercises have assumed that the software behaves correctly according to the rules set out by its developers. The software could, however, easily contain bugs. One can imagine a pathological case where a bug leads to a car veering off course, yet also prevents any interference from the driver.

If this is possible, or even probable, is it ethical to expose drivers and the wider UK public to such vehicles?

Software systems within cars are enormously complex. As an illustrative example, well before the era of autonomous vehicles, the software that controlled just the brake and the throttle of a 2005 Toyota Camry amounted to 295 thousand non-commented lines of C code. Move on a decade to the software that controls a modern autonomous car, and it is orders of magnitude more complex.

Highly complex software systems become difficult to manage and understand. They become especially difficult to test rigorously, and impossible to verify. This is particularly true if they involve a lot of concurrency and non-determinism, incorporate machine-learning components, and rely upon complex sensor inputs (as is invariably the case with autonomous vehicles).
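To make the concurrency point concrete, here is a deliberately tiny, hypothetical sketch in C. None of the names or values come from any real vehicle codebase; it simply shows how a defect that depends on thread interleaving can sit quietly through thousands of test runs.

    /* Hypothetical sketch (not from any real vehicle code): two threads
     * share an unprotected variable. The resulting data race only shows
     * up under particular interleavings, so a test suite can pass
     * thousands of times without ever exposing it. */
    #include <pthread.h>
    #include <stdio.h>

    static long obstacle_distance_cm = 10000;  /* shared, NOT protected by a lock */

    static void *sensor_thread(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            /* Non-atomic, unsynchronised update: a reader may observe a
             * stale or half-written value on some platforms.            */
            obstacle_distance_cm = (i % 2) ? 150 : 9000;
        }
        return NULL;
    }

    static void *control_thread(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            long d = obstacle_distance_cm;      /* unsynchronised read    */
            if (d < 200) { /* a real system would command braking here */ }
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t s, c;
        pthread_create(&s, NULL, sensor_thread, NULL);
        pthread_create(&c, NULL, control_thread, NULL);
        pthread_join(s, NULL);
        pthread_join(c, NULL);
        /* The program "works" on most runs; tools such as ThreadSanitizer
         * are needed to reveal the race reliably.                        */
        puts("finished without visible error (which is exactly the problem)");
        return 0;
    }

Dedicated tooling can catch this particular class of defect, but the machine-learning components and noisy sensor inputs in an autonomous vehicle offer far less of a safety net.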

The “pathological” example of a software bug causing a car to veer off course, beyond the control of a driver, is perhaps not as pathological as it seems. This is what happened with the Toyota Camry mentioned above (along with a range of Toyota and Lexus models up to 2010). Even though the software was merely in charge of the brake and throttle, it led to circumstances where the brake became unresponsive and the driver was unable to slow down, and it has been linked to “at least 89” deaths in the US. Subsequent inspection of the software showed it to be incomprehensible, untestable, and a probable hotbed of bugs. Since then, most major car manufacturers have regularly recalled hundreds of thousands of cars due to software defects.

There is also no sign that this trend is about to abate in the case of autonomous vehicles. An autonomous vehicle with software bugs is akin to a drunk driver in robot form.
We have already witnessed collisions, and even fatalities, caused by bugs in autonomous vehicle software. Google's autonomous car manoeuvred itself into a bus. There have been several reports of Tesla crashes: a Tesla "autopiloted" into an SUV, killing its passengers; an auto-piloted Tesla crashed into a bus full of tourists; and another auto-piloted into a truck, killing its driver.

This is not necessarily due to poor practice by Google or Tesla. It is simply a brutal reflection of the fact that software is inevitably defect-prone. This has been shown to be especially the case with machine-learning-oriented autonomous car software.

This leads to an ethical conundrum, not just for car manufacturers, but for governments who choose to offer themselves up as a testbed for this technology. By providing “permissive regulations” for these vehicles to be trialled in cities across the UK, the government is exposing an unwitting British public to robots that are not under the control of their drivers and that run software which almost inevitably contains bugs.

It is important to emphasise this: it is not just drivers themselves, but pedestrians, cyclists, and families with children crossing roads, who are being exposed. In academia, if this were an experiment and the general public were the subjects, it would not come close to passing an ethics committee.

My personal view is that our processes for quality assurance are not yet mature enough to provide adequate confidence in the correctness of such software systems.

I am completely unfamiliar with the QA processes mandated by the UK government in this instance. But, if the genie is to be released from the bottle, one has to hope that they have at the very least:

  1. Established quality and trust models that specifically factor in the various properties of autonomous software that make it so particularly difficult to reason about.
  2. Mandated that the software artefacts in autonomous vehicles are inspected and verified by an independent third party, cognisant of the specific UK driving conditions that might not have been factored into QA activities abroad, and that the reports from these verification activities are made openly available to the public.
  3. Begun compiling a large set of test scenarios and associated data to be applied to all autonomous vehicles driven in the UK (a minimal sketch of what such a scenario test might look like follows this list).
  4. Maintained a database of all autonomous vehicles in the UK so that, if (when) dangerous software defects are detected, the affected vehicles are mandatorily recalled to prevent them from causing harm.
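As an illustration of point 3, here is a minimal, hypothetical sketch in C of a table-driven scenario test. Every name and number in it is invented for this post; a real conformance suite would exercise the vehicle's actual control stack, or a high-fidelity simulator, rather than a toy stub like planned_action().

    /* Hypothetical sketch of a table-driven scenario test. planned_action()
     * is a stand-in for the system under test, not any real control code. */
    #include <stdio.h>

    typedef enum { ACTION_CONTINUE, ACTION_BRAKE } action_t;

    typedef struct {
        const char *name;        /* human-readable scenario label          */
        double speed_kmh;        /* vehicle speed at the start of the test */
        double obstacle_dist_m;  /* distance to the obstacle ahead         */
        action_t expected;       /* action the regulator expects           */
    } scenario_t;

    static action_t planned_action(double speed_kmh, double obstacle_dist_m)
    {
        /* Toy rule: brake if a crudely approximated stopping distance
         * exceeds the gap to the obstacle.                               */
        double stopping_dist_m = (speed_kmh / 10.0) * (speed_kmh / 10.0);
        return (stopping_dist_m >= obstacle_dist_m) ? ACTION_BRAKE : ACTION_CONTINUE;
    }

    int main(void)
    {
        const scenario_t scenarios[] = {
            { "pedestrian steps out at 30 km/h",  30.0,   8.0, ACTION_BRAKE    },
            { "clear motorway at 100 km/h",      100.0, 500.0, ACTION_CONTINUE },
            { "queueing traffic at 50 km/h",      50.0,  20.0, ACTION_BRAKE    },
        };
        int failures = 0;
        for (size_t i = 0; i < sizeof scenarios / sizeof scenarios[0]; i++) {
            action_t got = planned_action(scenarios[i].speed_kmh,
                                          scenarios[i].obstacle_dist_m);
            if (got != scenarios[i].expected) {
                printf("FAIL: %s\n", scenarios[i].name);
                failures++;
            }
        }
        printf("%d scenario(s) failed\n", failures);
        return failures ? 1 : 0;
    }

The point of such a table is that the scenarios (and their expected outcomes) can be curated centrally, versioned, and applied uniformly to every vehicle seeking approval, rather than left to each manufacturer's internal judgement.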



