NTSB investigators looking at Uber self-driving car accident damage

Uber’s self-driving car that struck and killed Elaine Herzberg in Tempe, Arizona, this past March was affected by a software bug at the time of the crash, according to a report by Amir Efrati of The Information.

Efrati spoke with two sources who said the car’s sensors did detect Herzberg, but the software classified the detection as a “false positive” and decided not to react.

According to the report, that behavior was “a result of how the software was tuned. Like other autonomous vehicle systems, Uber’s software has the ability to ignore ‘false positives,’ or objects in its path that wouldn’t actually be a problem for the vehicle, such as a plastic bag floating over a road.”


Uber self-driving car accident in Tempe resulted from software bug

One of the most fundamental challenges self-driving cars face is distinguishing between real-world dangers and things they can safely ignore. The vehicle’s software needs to tell the difference between a person, which the car must not hit, and something like a plastic bag, which it can run over.

Yet software isn’t perfect, and accidents can happen just as they do with human drivers. The difference is that a fully awake, sober human is still better (for now) at telling those obstacles apart, and often has a faster reaction time.

Since the software can’t react the “right way” in every situation, its developers have choices to make. They have to strike a balance between a comfortable ride (one without constant jerking caused by false positives) and a safe ride, in which real obstacles are reliably detected and acted on.
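As a rough sketch of that trade-off (purely illustrative; the detection format, class names, and threshold value below are invented and have nothing to do with Uber’s actual software), the tuning often comes down to which object classes the planner may dismiss and how confident a detection must be before the car reacts:

```python
# Purely illustrative sketch of a perception-to-planning hand-off.
# Nothing here reflects Uber's real software: the Detection format,
# the class names, and the 0.8 threshold are all invented.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "plastic_bag"
    confidence: float  # how sure the classifier is about the label

# Object classes the planner is allowed to treat as harmless.
IGNORABLE = {"plastic_bag", "paper", "leaves"}

# The tuning knob: higher values mean fewer phantom brakes (a smoother
# ride) but a greater chance of dismissing a genuine hazard.
HAZARD_THRESHOLD = 0.8

def should_brake(detections: list[Detection]) -> bool:
    """Return True if any detection warrants braking."""
    for d in detections:
        if d.label in IGNORABLE:
            continue  # dismissed by design, like a bag blowing across the road
        if d.confidence >= HAZARD_THRESHOLD:
            return True
    return False

# A real pedestrian detected with middling confidence is ignored
# at this tuning, while a high-confidence detection triggers a stop.
print(should_brake([Detection("pedestrian", 0.75)]))  # False
print(should_brake([Detection("pedestrian", 0.90)]))  # True
```

Raising the threshold smooths the ride but widens the window in which a genuine hazard gets waved through; lowering it does the reverse.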


If the software is tuned to err too far on the safe side, the vehicle may brake or swerve for nearby objects that pose no immediate danger. Tuning it toward the “comfortable ride” end of the spectrum may cause it to dismiss genuine hazards as false positives, as it apparently did with Herzberg.

“There’s a reason Uber would tune its system to be less cautious about objects around the car,” Efrati wrote. “It is trying to develop a self-driving car that is comfortable to ride in.”

Efrati’s report also highlights another report about Cruise, GM’s autonomous vehicle division. Cruise vehicles are “still repeatedly involved in accidents, or near-accidents where a person has to grab the wheel of the car to avoid a collision,” the report states.

Obviously, there are two sides to that story. Some may say the software in Cruise vehicles doesn’t do what it was designed to do; others may say it’s a little too good at its job. Either way, a slightly uncomfortable ride is clearly better than running down a pedestrian, especially in busy urban environments.

The software isn’t perfect yet

The ultimate goal for self-driving cars is to minimize both false positives and false negatives, but the software just isn’t there yet. Herzberg’s death shows that while the technology has arrived, it isn’t perfect, and accidents can and will happen.
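To make that goal concrete: developers typically score a system offline against labeled data, measuring how often it brakes for nothing (false positives) and how often it misses a real hazard (false negatives). Here is a minimal, purely hypothetical sketch of that kind of scoring; the sample data is made up:

```python
# Hypothetical offline scoring sketch: compare a system's brake
# decisions against ground-truth labels. The sample data is made up.

def error_rates(predicted: list[bool], actual: list[bool]) -> tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate)."""
    fp = sum(p and not a for p, a in zip(predicted, actual))  # braked for nothing
    fn = sum(a and not p for p, a in zip(predicted, actual))  # missed a real hazard
    negatives = sum(not a for a in actual) or 1
    positives = sum(actual) or 1
    return fp / negatives, fn / positives

predicted = [True, False, True, False, False]   # did the software brake?
actual    = [True, False, False, True, False]   # was there a real hazard?
fpr, fnr = error_rates(predicted, actual)
print(f"false positives: {fpr:.0%}, false negatives: {fnr:.0%}")
# -> false positives: 33%, false negatives: 50%
```

At a fixed level of perception quality, pushing either number down tends to push the other up, which is exactly the tuning dilemma described above.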

Last week we saw an accident involving a Waymo self-driving van; the van was determined not to be at fault for that incident.

Uber declined to comment to The Information, citing a non-disclosure agreement tied to the NTSB’s ongoing investigation. We will update this story if we hear more.

Are you for or against the testing of self-driving cars on city streets? Let us know in the comments!
