Uber's Software Failed In Fatal Arizona Crash According to NTSB

DooKey

[H]F Junkie
Joined
Apr 25, 2001
Messages
13,552
A couple of months ago we began reporting on a fatal crash in Arizona involving an Uber self-driving car and a pedestrian. Dashcam video was released, and the area where it happened looked dark. However, the NTSB has released a preliminary report that lays the fault for the accident on the software, not the sensors in the system. They say the sensors detected the pedestrian in sufficient time to make a stop, but the software became confused. The bottom line is Uber screwed the pooch. I guess they knew what really happened, because they settled with the family very quickly. The preliminary report can be found here.

The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.
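To make that concrete, here is a minimal, purely hypothetical sketch (not Uber's actual code; the labels, timings, and thresholds are all made up) of why a classification that keeps flickering can delay a braking decision: if a planner only trusts a predicted travel path once the same label has persisted for a while, every relabel resets that clock.

```python
STABLE_FOR_S = 1.0   # assumed: a label must persist this long before the planner acts on it
BRAKE_TTC_S = 1.3    # assumed: time-to-collision at which emergency braking is needed

# (timestamp_s, label, time_to_collision_s) for one and the same physical object
observations = [
    (0.0, "unknown", 5.6),
    (0.5, "vehicle", 5.1),
    (1.5, "vehicle", 4.1),
    (2.5, "bicycle", 3.1),
    (3.5, "bicycle", 2.1),
    (4.3, "bicycle", 1.3),
]

def first_brake_time(obs):
    """Return the timestamp at which this toy planner first decides to brake."""
    last_label, label_since = None, 0.0
    for t, label, ttc in obs:
        if label != last_label:
            last_label, label_since = label, t   # relabel resets the stability clock
        label_is_stable = (t - label_since) >= STABLE_FOR_S
        if label_is_stable and ttc <= BRAKE_TTC_S:
            return t
    return None

print(first_brake_time(observations))  # 4.3 -- only when the object is already 1.3 s away
```

With these invented numbers the object has been tracked for over four seconds, but the relabels keep the toy planner from committing until the object is 1.3 seconds out, which happens to line up with the timeline quoted from the report.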
 
Ignoring the careless driver falling asleep at the wheel, I guess the big question here is, would this have happened anyway without the autonomous software? The video feed may not really represent what a person actually sees, but given the darkness, the potential glare, and the pedestrian showing up ~1 second before impact, it looks like this would have been an accident either way. The sensors and software will only get better and will prevent things like this from almost ever happening.
 
Ignoring the careless driver falling asleep at the wheel, I guess the big question here is, would this have happened anyway without the autonomous software? The video feed may not really represent what a person actually sees, but given the darkness, the potential glare, and the pedestrian showing up ~1 second before impact, it looks like this would have been an accident either way. The sensors and software will only get better and will prevent things like this from almost ever happening.
She was playing on her phone just before she struck the pedestrian. Still, I don't think it's possible to determine whether the driver could even have seen the pedestrian in those conditions. People also need to be held accountable; kids these days are raised to believe it's never their fault. It's what will hold back self-driving tech for years. People don't see past their own little world. I'm honestly surprised I haven't hit anyone in my 20 years of driving. People just like to randomly walk into the middle of the street. I once almost creamed a girl who was standing on a median strip looking at her phone. If I hadn't been watching and taken some precaution, I would have killed her, because she just stepped off the median into the street without ever taking her eyes off her phone.
 
So it still classified it as an object (alternating between various things) and decided the best path was through that object?

Yeah, I'd say that's a software failure. They have a fault in their logic tree somewhere if the car is deciding the best course of action is to hit an unknown object.

Doesn't surprise me though; isn't this based off of the stolen stuff from Waymo? The problem with stealing IP is that you don't have the collective knowledge that went into its design, so flaws aren't easily found until they become painfully obvious (see: the Russian Concorde crash, the Russian B-29).
 
I still find the most surprising thing that came out about this case was that the driver was female. (yes, i'm a horrible person)
 
The report doesn't cite fault, only the facts it has gathered so far, so don't report that it specified fault. The software was not faulty according to the report. It detected the person and determined how long it would need to brake before hitting her (1 or 2 seconds before hitting her, I forget the exact figure). The problem is that the automatic braking system is disabled by Uber in autonomous mode. Determining fault is more complex than people realize. Ultimately a percentage of the fault will lie with Uber and a percentage will lie with the woman. She had meth and marijuana in her system, crossed a dark street where there wasn't a crosswalk or lighting, and had no side reflectors on the bike.

So people reading the report are drawing different conclusions based solely on their interpretation. It's best to wait for the final report after the investigation is complete and let people who know how to interpret these reports do so.

Here's what the AP said about the report:
https://www.msn.com/en-us/news/us/f...an-did-not-brake/ar-AAxKfls?OCID=ansmsnnews11
 
As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision.

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

lol
 
Yes, let's put thousands of these on the roads right away! It will take care of the rabid bicycle population! WOOT!
 
It still should have braked. Uber disabled the car's own built-in system that would have prevented the impact, or at least limited its speed, if that standard feature of the car had been left enabled (which is just radar, I believe).

They will fix this issue in the software, I guess. If it gets confused about what an object is, its failsafe should be to brake, not to ignore it and just plow into it, as some Tesla cars do (they lack lidar).
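For what it's worth, a "brake by default when confused" policy is trivial to express. Here's a toy illustration (my own assumption about how such a failsafe could look, not any vendor's actual logic), where an uncertain classification close to impact falls back to braking instead of doing nothing:

```python
def response(label: str, confidence: float, ttc_s: float) -> str:
    """Pick an action for an object on a collision course (ttc_s = time to collision in seconds)."""
    if ttc_s > 3.0:
        return "coast"                      # plenty of time, keep tracking
    if label == "unknown" or confidence < 0.5:
        return "brake"                      # uncertain and close: fail safe by braking
    return "brake" if ttc_s < 1.5 else "slow"

print(response("unknown", 0.3, 1.3))  # -> "brake"
```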
 
From the article:

"At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator."

Uh, let me guess: the notification to the operator is a feature to be added in the next version?
 
Good. Uber took shortcuts in utilizing the technology, and didn't want to spend money on the more advanced (read: safer) solutions (NVIDIA) available. Glad that blame is being correctly placed.
 
So it still classified it as an object (alternating between various things) and decided the best path was through that object?

Yeah, I'd say that's a software failure. They have a fault in their logic tree somewhere if the car is deciding the best course of action is to hit an unknown object.

Doesn't surprise me though; isn't this based off of the stolen stuff from Waymo? The problem with stealing IP is that you don't have the collective knowledge that went into its design, so flaws aren't easily found until they become painfully obvious (see: the Russian Concorde crash, the Russian B-29).

Are you talking about the Tu-144? If I recall, that plane project started in 1962 and it actually flew before the Concorde; the reasons it took longer to get into service, and its safety issues, were due to budget restrictions.
 
Ok, to anyone saying "maybe the accident still would have happened...": never mind the dreadful $5 dash cam Uber was using, which makes it look like this person was run over in a black fog. The accident was in a decently lit area at the time. Plus you can see way more than 1 second ahead of you at night, and the victim didn't jump into the road or anything.

Uber designed a car to drive itself and detect emergencies, then decided not to have it apply the brakes AND NOT EVEN WARN THE DRIVER. This is severe negligence.
 
The software was crippled by Uber, since by design it couldn't apply the brakes or notify the driver.
 
"According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator".

Then what's the bloody point of a self-driving vehicle?!
 
"According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator".

Then what's the bloody point of a self-driving vehicle?!
Don't be stupid. It's self driving. Not self stopping. :rolleyes:
 
She was playing on her phone just before she struck the pedestrian ... If I hadn't been watching and taken some precaution, I would have killed her, because she just stepped off the median into the street without ever taking her eyes off her phone.

Dashcam footage of the driver looking down at her lap has prompted a lot of speculation that she was looking at a smartphone. But the driver told the NTSB that she was actually looking down at a touchscreen that was used to monitor the self-driving car software.

"The operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging vents of interest for subsequent review," the report said.

The driver said she had two cell phones in the car, but neither was used until after the crash, when she called 911.

Seems the driver was doing her job, monitoring the diagnostic data. The problem is that the other self-driving cars have two operators; Uber had one. You can't do two things at once.
 
I feel like braking should be the last thing to disable... MAYBE limit it to 80% or something. Crazy.
 
and then as a bicycle with varying expectations of future travel path.
Ok, all things aside, this is fairly cool: the fact that it predicts where objects are going.
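It's a standard trick, though. The simplest version is just constant-velocity extrapolation; here's a minimal sketch (my simplification with made-up coordinates, not how a production stack actually does it):

```python
def predict(p0, p1, dt, horizon_s):
    """Extrapolate an object's position from its last two (x, y) fixes taken dt seconds apart."""
    vx, vy = (p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt
    return (p1[0] + vx * horizon_s, p1[1] + vy * horizon_s)

# A pedestrian walking toward the lane at ~1.4 m/s; where will she be in 2 seconds?
print(predict((0.0, -3.0), (0.0, -2.3), dt=0.5, horizon_s=2.0))  # (0.0, 0.5): in the lane
```

Real systems presumably layer class-specific motion models on top of something like this, which would be why the flip-flopping between "vehicle" and "bicycle" kept changing the expected travel path.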
 
Happy to blame Uber, but seriously, is everyone giving the cyclist a free pass for crossing the road in a random location with a car bearing down on them and taking no evasive action? Self-preservation...? Anyone?
 
It's a bit more complicated than the sensors detected her and therefore the car should have stopped.
The sensors don't provide a boolean that's true if a pedestrian is detected and false if not. They provide data that has to be interpreted. False positives are a very significant problem: you don't want the car slamming on the brakes in an emergency stop for a pedestrian who isn't there. Not only would that make it so no consumers want to buy or ride in your autonomous vehicle, it could cause an accident.
The NTSB determined that in this case the Uber software had a clear enough signal that it should have been able to determine that the pedestrian was there, but it incorrectly treated her as a false positive.
It is a flaw in Uber's system. It did result in the pedestrian's death. I'm only trying to point out that it's not a simple bug that should have been easily caught with basic testing. Uber is at fault, but you have a very limited ability to come up with a wide enough variety of tests to verify such software works in a closed environment. Thus the need for real world testing with a human driver to override the software when it fails to properly identify something, and then the data would be gathered to find the flaw and fix it.
The weak point in Uber's testing strategy is the human driver. Most human beings aren't well suited to maintaining sharp focus and attention when that attention is only needed occasionally. I certainly couldn't do it. They need some technological solution that helps ensure the driver is being attentive. Of course that system too would need to be thoroughly tested and might suffer from some of the same problems. There is no perfect way to completely remove risks.
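To illustrate the trade-off described above with made-up numbers (purely illustrative, nothing to do with Uber's actual classifier): raise the confidence threshold and you stop braking for phantoms, but you start missing real pedestrians.

```python
detections = [
    # (classifier_confidence, is_really_a_pedestrian) -- invented values
    (0.95, True), (0.40, False), (0.70, True), (0.55, False),
    (0.65, True), (0.30, False), (0.85, True), (0.60, False),
]

def outcomes(threshold):
    missed_pedestrians = sum(1 for c, real in detections if real and c < threshold)
    phantom_stops = sum(1 for c, real in detections if not real and c >= threshold)
    return missed_pedestrians, phantom_stops

for th in (0.5, 0.6, 0.7, 0.8):
    print(th, outcomes(th))   # misses go up as phantom stops go down
```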
 
It’s unfortunate someone was killed but it will also be beneficial in the end.

We can speculate and assume what should have taken place, but they can evaluate what did happen and build it better. Whether it's Uber, the sensors, or the driver, it will all make self-driving better in the future.

Ultimately it was the operator of the car who was at fault. I mean, she was testing unproven software. Her checking the diagnostics is no different than texting or falling asleep at the wheel. This is one of those concerning things with self-driving vehicles: where to place the blame? However, if the vehicle has to have a watcher in the car, it's not fully self-driving. Secondly, it's concerning that the person responsible was apparently comfortable enough not to be watching the road.

Either way it's still good data. Good data to help fix the issue in the future, and good data to support having emergency braking maneuvers enabled, given that overseers don't pay attention. Diagnostic data be damned; she should have been watching the road.

Glad Uber is paying up and taking responsibility as they should. But I think the driver bears a lot of the blame.

As to the cyclist: it's no different than kids who run into the road and get hit. They shoulder some of the blame as well.
 
Happy to blame Uber, but seriously, is everyone giving the cyclist a free pass for crossing the road in a random location with a car bearing down on them and taking no evasive action? Self-preservation...? Anyone?
I wouldn't say everyone; in the bazillion-page thread when this happened, there were quite a few of us who really put down the jaywalker.

I mean, say all you want about the Uber car failing because something was switched off, but the headlights weren't switched off, so that homeless gal should have seen a car coming.
 