Friday 23 October 2015 9:01 am

Tesla Autopilot fail: Driverless car developers need to stop prioritising speed to market over safety

Clara Guibourg is an Online Writer at City A.M. She is particularly passionate about women in business, technology and telecoms.


Last week, amid a wave of anticipation, Tesla released a software update that propels us one step closer to a world of driverless consumer vehicles. The Autopilot software will allow cars to change lanes at the flick of a switch, detect other vehicles, adjust speeds in response to traffic and even park themselves.

How do we ensure that the extraordinary pace of technological innovation does not come at the expense of driver safety?

Tesla says that Autopilot is designed to self-improve: real-time feedback from each vehicle is used to refine performance across the entire fleet, effectively turning every driver into an instructor “teaching” the fleet how to drive. This raises the question of whether we should be using public roads as a testing ground for safety-critical software. Recent videos indicate that some drivers are already pushing the new software to its limits, exposing dangerous safety flaws in the process.


More worrying still, because all cars with Autopilot “learn” from each other, transforming the whole fleet into a network of test drivers, erroneous data from one vehicle could have a domino effect across an entire fleet interlinked by electronic feedback.

Could this offer a gateway for a future hacker to manipulate the data from the cameras or radar in one car in order to affect the performance of all the others in the fleet?

Recent stories indicate that the safety-critical software upon which modern vehicles depend is not being designed to be sufficiently trustworthy prior to release.

In August, Tesla had to issue an emergency ‘patch’ for its Model S after hackers found software flaws in the vehicle that allowed them to shut down the engine, while Fiat Chrysler recently had to recall 1.4m cars to fix a software flaw that left them vulnerable to hackers.

This raises the concern that, in the race to add ever more exciting ‘updates’ to their vehicles, car manufacturers are rushing software to market without testing it to a rigorous standard.

‘Over-the-air’ software updates enable manufacturers to upgrade vehicles remotely in an instant, yet the competition to deliver a stream of new functionality to every driver must not come at the expense of public safety.

This is particularly dangerous as we enter an era of semi-autonomous cars, where software is increasingly taking over from human drivers. Even though Tesla currently advises Autopilot drivers to keep their hands on the wheel, as software gradually replaces human functions, drivers will inevitably become increasingly ‘hands-off’ and reliant on it.

This highlights the urgent need for a common gold standard of trustworthy software design and testing across the automotive industry. Automotive manufacturers should design and “test-drive” all software just as rigorously as they design and test-drive the hardware that encases it.

Crucially, they must no longer prioritise speed to market over customer safety.