Driverless Cruise car accident highlights dangers of artificial intelligence

It is not unusual for a new technology product to be recalled; promising new tools don’t always work perfectly from the start. But the recall doesn’t usually come after the technology in question has hit a pedestrian and then dragged them 20 feet along the street.

The catastrophic accident that halted operations at Cruise, General Motors’ self-driving car division, is the kind of setback that self-driving car champions have long feared. It has the potential to erode confidence in the technology and spark tough regulatory intervention, but it doesn’t need to set back the cause of robotaxis — as long as Cruise and its competitors act quickly and show they’ve taken the lessons to heart.

In early October, one of the company’s cars ran over a pedestrian who had been thrown into its path after being hit by another vehicle. The Cruise car stopped, but then moved another 20 feet in what the company called a safety maneuver to make sure it did not create a hazard, all the while with the seriously injured pedestrian trapped underneath. California authorities, who suspended the company’s licenses to operate two weeks ago, also claimed that Cruise executives did not initially disclose the car’s second maneuver to regulators, although the company has denied this.

This debacle has highlighted a number of uncomfortable truths about self-driving vehicles, and by extension much of the AI industry. The first is that the race to commercialize new, potentially world-changing technologies creates inevitable tension. On one side is the Silicon Valley culture of deploying new technology fast; on the other, the kind of safety cultures and processes that take years to develop in more mature industries.

In the US, Cruise has been competing with Tesla and Alphabet’s Waymo to develop robotaxi services, and parent General Motors has set an ambitious revenue target of $1 billion by 2025. Cruise has now voluntarily suspended all operations and promised a comprehensive overhaul of its safety operations and management arrangements. That is welcome, but it came only after California regulators banned the company from operating. Cruise and its competitors must show they can exceed public expectations about safety, rather than simply react.

The second uncomfortable truth is that deep learning, the technology behind today’s most advanced artificial intelligence systems, is not yet capable of anticipating accidents like the one at Cruise. It may never be. The incident is a reminder that supervised learning systems are only as good as the data fed into them: no matter how much data there is, it is impossible to train them for everything the world might throw at them.

Cruise can at least use this incident in future training: all of its vehicles will now learn from the experience. The company also estimates that an accident like this one would be likely to occur only once in every 10 million to 100 million miles of driving. Still, there will always be new situations that its systems have not encountered before.

To regain public trust, Cruise and its competitors must prove not only that their cars get into fewer accidents than human drivers, but also that they do not occasionally make the kind of serious mistakes a human could easily avoid. That remains a significant shortcoming of today’s technology.

The third issue raised by the incident concerns regulation. While there has been much discussion about how AI should be regulated, there has been far less about who should actually do the regulating, and about what say ordinary citizens and their elected representatives at various levels of government should have over technology that could profoundly affect their lives.

In Cruise’s case, approval from California state regulators was enough to give its robotaxis free rein on San Francisco’s streets, despite protests from the city’s transportation authorities, the mayor’s office and citizens’ groups that the vehicles had not been fully tested.

Allowing greater oversight at the city level would create a patchwork of regulations that would make it harder for self-driving car companies to expand. The fallout from the Cruise accident suggests, however, that the balance struck in California is insufficient.

All is not lost. Cruise’s response has come straight from the crisis-management playbook, from the outside investigations it has launched into its technology and its handling of the incident, to the voluntary recall of its vehicles. But rather than simply limiting the damage, it must convince the world that this is a real turning point.

richard.waters@ft.com
