The Uber Crash: A Cautionary Tale of Cause and Effect

You probably know the story, but let’s recap anyway: On March 18, 2018, a self-driving Uber car struck a woman crossing the road in Tempe, Arizona. She was taken to the hospital, where she later died from her injuries. I covered the incident in my post Self-Driving Cars Must Be Banned Now!.

Since I wrote that thunderous entry, there has been some progress in Uber’s own investigation of the deadly March 18 crash. On May 7, The Information broke an exclusive story explaining what probably happened.

The Volvo XC90’s sensors detected 49-year-old Elaine Herzberg as she crossed the road, but the software decided it was a false positive. A “false positive” is a false alarm, i.e. the software concluded that the sensors had detected something that could safely be ignored. As a result, the vehicle didn’t take evasive action, and Herzberg was hit by the car.

Wait, what!?

Why would the software do something so outrageously stupid? Well, there’s a lot of crap in the road that the sensors will detect and that the car can safely ignore: garbage and plastic bags, for instance. Human drivers behave in exactly the same way. We detect something in the road and quickly decide whether it can be ignored, and sometimes that conclusion is wrong. Human drivers make the same mistake the Uber software did. Three years ago, a young woman was killed while walking in the middle of one of the main roads in Oslo. The driver continued on because she thought she’d hit a garbage bag.
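To get a feel for how that kind of decision can go wrong, here’s a deliberately simplified sketch in Python. None of this is Uber’s actual code; the object labels, confidence scores, and threshold are made up for illustration. The point is just that a single thresholded classification decides whether the car reacts at all.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A simplified sensor detection (hypothetical, for illustration only)."""
    label: str        # what the classifier thinks it saw, e.g. "pedestrian" or "plastic_bag"
    confidence: float # classifier confidence between 0.0 and 1.0

# Object classes the planner is allowed to ignore (made-up list).
IGNORABLE = {"plastic_bag", "garbage", "leaves"}

# Below this confidence, a detection is written off as a false positive.
FALSE_POSITIVE_THRESHOLD = 0.7

def should_brake(detection: Detection) -> bool:
    """Decide whether the vehicle should take evasive action for this detection."""
    if detection.label in IGNORABLE:
        return False  # harmless debris: drive on
    if detection.confidence < FALSE_POSITIVE_THRESHOLD:
        return False  # treated as a false alarm: this is where it goes wrong
    return True       # a real obstacle: brake or steer away

# A pedestrian detected with low confidence slips through, and the car never reacts.
print(should_brake(Detection(label="pedestrian", confidence=0.4)))  # False -> no evasive action
print(should_brake(Detection(label="pedestrian", confidence=0.9)))  # True  -> brake
```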

Some of Uber’s hardware partners stated that it’s unlikely the hardware they provide somehow caused the accident. Velodyne, which makes the LIDAR mounted on the XC90, said that its equipment would not have been affected by the night-time conditions. Nvidia, which provides GPUs to Uber, went so far as to say that the problem must have been Uber’s own software.

Uber’s autonomous Volvo XC90.

On the Road Too Soon

It looks like the Uber self-driving software is a rookie driver. It’s still learning how to drive a car, but unfortunately it isn’t doing so in a controlled environment or with proper supervision. The Uber car that struck Herzberg had a safety driver on board, who is supposed to keep an eye on the road in case the autonomous software cocks up. In this particular case, the safety driver wasn’t paying attention to the road in the moments before the impact.

To me it sounds like Uber’s self-driving cars have been allowed on public roads too soon, and there are reasons to believe that theory isn’t too far-fetched. In late March, The Guardian published a story detailing “a cozy relationship” between Uber and Arizona governor Doug Ducey, a relationship that might have led to the Uber cars being allowed on public roads even earlier than previously thought.

I think the prospect of autonomous cars is great. The more self-driving cars we get on the roads, the fewer accidents we’ll have. But we can’t let a two-ton Volvo XC90 run loose on the roads if it can’t drive itself properly, just as we don’t let a person without a driver’s license get behind the wheel.



