Saturday 21 July 2018

How safe are driverless cars?

The Uber fatality raises questions about the testing of driverless cars and how they mix with people, writes Geraldine Herbert

FATAL COLLISION: US National Transportation Safety Board NTSB investigators examine a self-driving Uber vehicle involved in a fatal accident in Tempe, Arizona, last week.

Uber has suspended all tests of self-driving cars after a fatal accident in Arizona last week. It was not the first fatality linked to a driverless car, but it was the first time a pedestrian had been killed in a collision with one.

Questions are still being asked about what exactly happened, but recently released video footage shows the Volvo XC90 travelling in autonomous mode at approximately 64km/h when it failed to slow down and hit a woman walking across the street. A video taken inside the car indicates that the "safety driver", a person whose role is to intervene and take control of the car if necessary, was not looking at the road at the time of the collision.

The accident has revived the debate about the pace at which autonomous vehicles are deployed and, crucially, what level of risk is acceptable. One of the motivations behind the technology is to make driving safer as 90pc of accidents are caused by human error.

But problems associated with partial automation, such as the process of handing back control to the human, are becoming such a significant challenge that many car companies favour waiting for autonomous capability that will take the driver completely out of the driving process.

Is there convincing evidence that partially autonomous cars that rely on partially engaged human operators have the capacity to be safe? Is there a solution to keeping drivers alert and ready to engage if necessary?

Maybe it is simply that they need to be rigorously tested before they embark on public roads.

Both Tesla and Uber have taken a gung-ho attitude to testing, opting for a "roll it out and see what happens" strategy in contrast to the cautious approach taken by incumbent car makers.

In 2016, a Tesla Model S set to Autopilot collided with a truck in Florida, killing the Tesla driver. The response of the company was that Tesla drivers were aware the new technology was in a beta phase and that they were, in essence, its testers.

Controlling the conditions around the vehicle is proving more problematic than the technology itself. Autonomy requires excellent infrastructure, which is rare, and any level of autonomy currently requires near-perfect conditions. Even issues such as a lack of road markings, road works or adverse weather are a challenge.

Each country presents new obstacles: Volvo's self-driving technology is struggling to identify kangaroos on Australian roads. The Swedish car-maker uses a Large Animal Detection system to monitor the road for deer, elk and caribou but this is confounded by the unusual way kangaroos move.

The recent collision also raises questions of legal liability: does the blame lie with the self-driving car's owner, the software programmer, the manufacturer, or a combination of all three?

And what of the social and ethical aspects of driving? Driverless cars may not get distracted, text or fall asleep behind the wheel, but can they be programmed to react to ambiguous situations with the deftness of an alert human driver?

With pedestrians, cyclists, other cars and large vehicles all jostling for road space, how do you programme a car to choose the least-worst outcome when a collision is unavoidable?

Car makers from Audi to Volvo are developing autonomous cars and the industry is expected to balloon to a $42bn global market by 2025, according to the Boston Consulting Group. Is the prospect of this multi-billion-euro bounty causing many to rush prematurely towards autonomous driving?

Self-driving technology is rated on a scale from Level 0 (no automation) to Level 5 (full automation). A recent report from consulting group Gartner predicts that Level 3 driverless cars are unlikely to achieve a 20pc market penetration in the next five to 10 years, while Level 4 (full autonomy in some circumstances) will take more than 10 years.

Many also believe that Level 4 and Level 5 (autonomy at all times, in all conditions and places) will be separated by decades, so human-controlled vehicles are likely to be around for many years to come.

In response, we need a robust regulatory framework to manage the mix of people and driverless cars.

Sunday Independent
