Video Released of Uber Self-Driving Car Accident

I agree that even a fully alert driver would have hit this poor person, no ifs or buts about it. There was no avoiding her in a two-second window at night, though it might have been a different story in daytime.
I blame the poor lady: not only was she jaywalking in the first place, but the bike had no lights of any kind, not even nighttime reflectors.

Humans don't like killing other humans, typically. I would expect a human driver to swerve right if they had the reflexes. I get the feeling the computer decided the occupant was more important and carried on.
 
A 480p video?
It appears there is no time to react, but the visual quality of that footage is so poor that I'm not swayed one way or the other.
I can't say for certain I could do better, but I can say my vision is far more refined and detailed than the footage in that video.

I'm hoping that's just the dashcam. That said, video cameras do not have the dynamic range of the human eyeball, at least not in the price range of a system intended for mass marketing.
 
The original reports said she came out of nowhere. The latest information is that she was crossing from the opposite side of the road: she not only crossed through the other lane but had almost crossed through the lane the Uber AV was traveling in. The impact was on the front right bumper. Even a minimal amount of driver-assist-level braking would have prevented the accident.


I'm sorry aaronspink, I'm not seeing what my recollection of the capabilities proponents have trumpeted in the past has to do with where you're going with this.

I do agree that it wouldn't have taken much for this to have been avoided. I also do not believe that the video is truly representative of what a human would have seen in this situation if paying attention. I believe it's highly likely that a human could have avoided killing her.

I also believe that if self driving cars can't avoid an accident under these conditions then they are not ready for the road, not even for testing. They should still be doing circuits on a track "practicing".

They need to be better, not sub par.
 
Irrelevant. A human-driven car would have schmucked her all the same.

The autonomous system should have easily detected the pedestrian. That's the issue here.

Disagree, our eyesight is far better than that camera video. As others said before, this is a failure from three sides: 1, the jaywalker, for not checking the road before crossing. 2, the tech, for failing to detect her. 3, that stupid piece of shit behind the wheel, checking her phone instead of scanning the road. Hell, maybe it's Uber's fault for hiring her, since she clearly doesn't know what her responsibilities are.

I've been a longtime driver in a lot of driving conditions, and this shouldn't have happened if she'd paid attention.
It's people like this driver that make me borderline shit my pants every time I go out cycling, as I've been clipped/hit three times by drivers not paying attention to the road.
 
I'm sorry aaronspink, I'm not seeing what my recollection of the capabilities proponents have trumpeted in the past has to do with where you're going with this.

I do agree that it wouldn't have taken much for this to have been avoided. I also do not believe that the video is truly representative of what a human would have seen in this situation if paying attention. I believe it's highly likely that a human could have avoided killing her.

I also believe that if self driving cars can't avoid an accident under these conditions then they are not ready for the road, not even for testing. They should still be doing circuits on a track "practicing".

They need to be better, not sub par.

You can say Uber's system should not have been on the road, sure. Part of the issue is that Uber is widely considered to have about the worst system out there; they should never have been on the road in the first place. This situation is handled by off-the-shelf driver-assist systems you can buy off the lot today. Hell, even the unmodified Volvo can handle this situation.
 
I place the blame squarely on the woman. I've seen dogs look both ways before crossing and stop when they see a car coming.

I've seen them chase a semi truck, bite the wheel, and get dragged underneath the wheels too.

There are at least three elements to every traffic accident: the road, what was struck, and what did the striking. There can be more, like weather or additional vehicles.

Are you certain that the video you watched accurately depicts the lighting conditions of that accident? Those lights are there to light that roadway. In the video they look completely ineffective, but I have seen areas lit like that where it's almost as good as driving in daylight.

I would reserve my judgement until I knew this for certain.
 
And you know this how? Is that conclusion based on the four+ decades you knew I'd been driving? On your knowledge of the job I once had that required a lot of driving around NYC? Or the other job I had that involved a three hour commute five days a week for two years?

Oh wait, you don't know about any of that, do you? You're just talking out your ass, aren't you?


Only an idiot would assume that the video camera that recorded that video can "see" as well as a human being can in those lighting conditions.
It clearly cannot, as the lack of any details in most of the image makes clear.

Tempe is a land-locked low-rise city. There's hardly a square meter of it that isn't well-illuminated by streetlights.
You're never driving down a dark tunnel like that video makes it appear.

All your posts on this topic thus far come across as arrogant and/or being generally annoying and offensive.

You may have an opinion, and I find some of your points relevant, but calm down and post like an adult please or don't post at all.
 
Disagree, our eyesight is far better than that camera video. As others said before, this is a failure from three sides: 1, the jaywalker, for not checking the road before crossing. 2, the tech, for failing to detect her. 3, that stupid piece of shit behind the wheel, checking her phone instead of scanning the road. Hell, maybe it's Uber's fault for hiring her, since she clearly doesn't know what her responsibilities are.

I've been a longtime driver in a lot of driving conditions, and this shouldn't have happened if she'd paid attention.
It's people like this driver that make me borderline shit my pants every time I go out cycling, as I've been clipped/hit three times by drivers not paying attention to the road.

It should be pointed out that the safety drivers aren't there to prevent things like this. They are there to take over when the AV system has a fault or breakdown. It simply isn't viable for the safety driver to prevent issues like this. It's part of the automation valley: either a human is in the loop or is an auxiliary; the in-between simply doesn't work, and critical systems are designed around this valley.

There is basically no heavily automated job where we expect a human to instantly go from being an auxiliary to being the primary in the loop. Any system that wants a human to take over at a critical juncture requires that the human be in the loop before and after that juncture. A perfect example is the critical phases of a flight, like takeoff and landing. Those have a LOT of automation, and we have the technology to do them autonomously, but we keep the human pilots as a critical piece of the loop that everything goes through, for the corner cases where we would want them to take over. Instead of the computer flying the plane, the plane tells the pilot what to do, and the pilot determines whether that is correct and does it. For non-critical phases of flight, where the transition from auxiliary to primary is measured in fives or tens of seconds, we use autopilot to allow the pilot rest/downtime from their hypervigilance.
 
Didn't someone already do the math? Even if the car had locked up the brakes the millisecond it saw the person, it would have stopped 5 yards short. And someone else did the math and concluded that a human driver would have had the reaction time to start braking 0.5 seconds before the hit.

In both cases, the human braking would have reduced the fatality risk by 33%, and the AI's instant braking would have resulted in zero broken bones.
Sorry, but brake reaction time for the average driver is around 2/3 of a second at best; there's no way in hell they could have avoided it at 40 mph.
The real question that comes to mind: did the lidar actually see her? We all need to see that data side by side with the corresponding video, run in sync.
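The arithmetic being argued back and forth here is easy to check. A quick sketch — the 0.7 g braking deceleration and the reaction times are assumed round numbers for illustration, not figures from the investigation:

```python
# Rough stopping-distance estimate. The deceleration and reaction
# times below are illustrative assumptions, not accident-report data.

def stopping_distance_m(speed_mph, reaction_s, decel_g=0.7):
    """Distance covered during the reaction delay plus braking to a stop."""
    v = speed_mph * 0.44704          # mph -> m/s
    a = decel_g * 9.81               # assumed braking deceleration, m/s^2
    return reaction_s * v + v**2 / (2 * a)

# An instantly reacting computer vs. a driver with ~0.7 s reaction time:
print(round(stopping_distance_m(40, 0.0), 1))   # braking distance alone, ~23.3 m
print(round(stopping_distance_m(40, 0.7), 1))   # with reaction delay, ~35.8 m
```

At 40 mph even the reaction delay alone eats roughly 12 metres, which is why the "2/3 of a second" figure matters so much to whether the hit was avoidable.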
 
Under those circumstances, a human driver would have hit her too.

She was impossible to see until the car was up close, and she was crossing a road with high-speed traffic without any sort of crosswalk. This was not a fault in the software. The only way a computer program would have been able to see her was if it was using infrared to see further down the road than a human could. A self-driving car with infrared would have been the only way she could have been avoided.

Bullshit. I've dealt with stupid bicyclists and pedestrians at 40 mph before. Keep in mind the human eye adjusted for dark is going to see better than that camera does. The person was likely not "invisible" to the human eye. The person was likely not invisible to the vehicle either.

Can I understand that a shadowy human figure pushing a bike is hard to discern as a pedestrian? Sure. But I also don't want my automated car smashing into a cow at highway speeds because "eh, it's not a person".
 
I can only assume a few things about the logic in a self driving car.
Should object detection exist for objects outside the path of the vehicle? Should objects smaller than cars be detected and avoided when you're going past a certain speed? At what point do you estimate an object's relative motion and predict you might get into an accident?

I'm just going to assume a few things, which I'm sure someone will tell me are wrong. I would assume that at a high enough speed you would turn off pedestrian detection and only detect cars. Why waste CPU cycles on things that shouldn't be there? I would also assume that maybe you would turn it off in high-speed areas with no pedestrian crossings.

I would also assume that traveling at 40-75 mph is different than traveling at 0-30. The rules that apply to both manual drivers and automated software are fairly self-apparent.

I'm not saying it couldn't be made more effective. I'm just throwing out my ideas on why this happened in the first place. The logic should be changed so that if an object is detected while you're traveling at speed, and its estimated path might intersect with the vehicle, the vehicle drops its speed so there is more reaction time. However, you have to balance that with false alarms.
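That "slow down when a predicted path might intersect ours" idea can be sketched as a toy rule. Everything here — the names, the threshold, the simple time-to-lane test — is invented for illustration and has nothing to do with Uber's actual planner:

```python
# Toy path-intersection check, purely hypothetical. A real AV planner
# fuses many sensors and tracks full trajectories; this just asks:
# "could this object reach our lane within a few seconds?"
from dataclasses import dataclass

@dataclass
class Track:
    x: float    # lateral distance from our lane, metres
    vx: float   # lateral speed toward our lane, m/s (positive = closing)

def should_slow(track: Track, ttc_threshold_s: float = 3.0) -> bool:
    """Slow down if the object could enter our lane within the threshold,
    rather than gating pedestrian detection off entirely at speed."""
    if track.vx <= 0:
        return False                     # moving away or parallel: ignore
    time_to_lane = track.x / track.vx    # seconds until it reaches our lane
    return time_to_lane < ttc_threshold_s

# A pedestrian 6 m out, walking at 1.4 m/s, is ~4.3 s away: no action yet.
print(should_slow(Track(x=6.0, vx=1.4)))   # False
# At 3 m out she is ~2.1 s away: start shedding speed.
print(should_slow(Track(x=3.0, vx=1.4)))   # True
```

The point of a rule shaped like this is that it never switches detection off; it only scales the response, which is one way to balance reaction time against false alarms.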

I think you might be close with one exception I see.

this one:
I would assume that at a high enough speed you would turn off pedestrian detection and only detect cars. Why waste CPU cycles on things that shouldn't be there?

Do you remember all those discussions regarding pedestrian avoidance and a "moral" value system, "should the car place a greater value on the life of the passengers or pedestrians?"

I argued that the car needs to "mimic" a human reaction, just do it better if it can. I argued against "value decisions". I believed that making choices a human would not be expected to make was a mistake, and that trying to make "choices" when a human would just do his best not to hit anything was arrogant and wrong. Just trying to avoid a collision (maybe doing it a little better) was enough, and running through identification and value algorithms was just eating up compute cycles when the car should already be taking action.

Maybe the reason the car didn't do anything to avoid the collision was that it valued the driver's life more than the pedestrian's, and the car worked exactly according to its programming.
 
I think there are a lot of false assumptions going on. I strongly suspect the car's software or a sensor wasn't working properly. The car has many sensors and cameras. Ideally we'd have the raw data from all of them leading up to the crash, so we'd know what the car actually had access to in its decision-making process. Was it a hardware/lack-of-coverage issue? A software/algorithm issue? Was the onboard analytics rebooting or something? This lame video doesn't show any of that.

In my opinion, this crappy-quality video was released to make it look like the pedestrian was at fault and there was nothing the car could have done. That's what is in Uber's best interest: downplay, minimize, protect yourself. As other people have already shown, the lighting in this area is not nearly as bad as this Uber video makes it look. And there's too much focus on the visible spectrum of this particular camera. The visible spectrum is important to people; it may not matter much to some of the car's equipment.
 
I see a lot of arguing about who's to blame. Frankly, after seeing the video I think it's a pretty even 50/50. The car should have been equipped well enough to handle this type of occurrence if they are testing it on live streets with real human lives at stake.
The woman clearly should have chosen not to walk out, but she did, and it cost her her life.
Fully blaming one party just doesn't fit, because the AI in the car is clearly lacking in object tracking at night, which is a HUGE oversight on Uber's part. This tech is supposed to be in place for this very reason, to make things safer, but the video shows that's not the case.
The woman should not have crossed, but when the car hit her she was already halfway past it, struck on its right side, which means at about 25 feet she was already in the lane of traffic. The car clearly wasn't tracking the object, so it's a flaw that needs to be fixed. Not sure why everyone is saying the vehicle should not be at fault.

My main argument is that in order to get this fully tested it needs to be put on a closed course with more of these scenarios before more people get killed for "testing purposes".
 
It should be pointed out that the safety drivers aren't there to prevent things like this. They are there to take over when the AV system has a fault or breakdown. It simply isn't viable for the safety driver to prevent issues like this. It's part of the automation valley: either a human is in the loop or is an auxiliary; the in-between simply doesn't work, and critical systems are designed around this valley.

I frankly don't give a wooden nickel what they're there for. As a human being who knows that tech isn't 100% reliable, and that human judgement isn't either, I'm sure as hell going to try to stop if something is about to happen. This mentality is why we still have fucked-up shit going on in this world: "oh, I'm not here to be proactive, I'll just wait 'til someone or something tells me to respond." What if it was something that would've endangered the BU driver?
 
I think there are a lot of false assumptions going on. I strongly suspect the car's software or a sensor wasn't working properly. The car has many sensors and cameras. Ideally we'd have the raw data from all of them leading up to the crash, so we'd know what the car actually had access to in its decision-making process. Was it a hardware/lack-of-coverage issue? A software/algorithm issue? Was the onboard analytics rebooting or something? This lame video doesn't show any of that.

In my opinion, this crappy-quality video was released to make it look like the pedestrian was at fault and there was nothing the car could have done. That's what is in Uber's best interest: downplay, minimize, protect yourself. As other people have already shown, the lighting in this area is not nearly as bad as this Uber video makes it look. And there's too much focus on the visible spectrum of this particular camera. The visible spectrum is important to people; it may not matter much to some of the car's equipment.

Well since the NTSB has jurisdiction here, hopefully they'll treat major AV accidents like they treat major aviation accidents and basically demand all the data and then fully publish it. And it isn't just the NTSB. That's pretty much standard practice for all major aviation investigation results from the NTSB or the various foreign equivalents. The theory being that everyone in the industry should look at the data and the results so that they can learn from it and update materials, designs, and procedures to prevent it from happening again.
 
I frankly don't give a wooden nickel what they're there for. As a human being who knows that tech isn't 100% reliable, and that human judgement isn't either, I'm sure as hell going to try to stop if something is about to happen. This mentality is why we still have fucked-up shit going on in this world: "oh, I'm not here to be proactive, I'll just wait 'til someone or something tells me to respond." What if it was something that would've endangered the BU driver?

It simply isn't possible, and that has been proven time and again. If you are monitoring an automated system, you cannot react fast enough when that system does something bad. Unless you are actively in the loop, you simply can't switch in fast enough to make a difference. Nor can you maintain hypervigilance for extended periods of time. Once again, this has been proven continuously, with massive amounts of data to back it up, and it is taught as part of training for any leadership position, both civilian and military. It is the reason you rotate guards on a fairly short and regular cadence, and why banks of security camera monitors are generally ineffective. Humans simply don't go from nothing nothing nothing nothing nothing to fully alert.
 
Bullshit. I've dealt with stupid bicyclists and pedestrians at 40 mph before. Keep in mind the human eye adjusted for dark is going to see better than that camera does. The person was likely not "invisible" to the human eye. The person was likely not invisible to the vehicle either.

Can I understand that a shadowy human figure pushing a bike is hard to discern as a pedestrian? Sure. But I also don't want my automated car smashing into a cow at highway speeds because "eh, it's not a person".

One of the issues, though, is that when things are off to the side of the main 'cone' of the beam, they become very hard to spot until they're more in front of you, which in this case was just before impact. I'm not laying blame anywhere at all, as there are so many factors involved, but I've had close calls before with something coming across from the side that was only visible seconds before I would have hit it (a deer in one case).
 
Humans don't like killing other humans, typically. I would expect a human driver to swerve right if they had the reflexes. I get the feeling the computer decided the occupant was more important and carried on.
I'm sorry, but by the time you even saw her it would have been too late, and I do understand what you're saying. But if she had crossed under a streetlight instead of in its shadow, this might have been a different outcome.
 
Knowing how these corporations operate, I would imagine Uber provided the absolute worst-quality video to the police, hoping they would take it at face value. If you look at the video, you can only just see the pedestrian's shoes catching the light when she's about 40' away (2 lines, 2 gaps @ 10' each); that would be woefully inadequate headlight performance and doesn't seem to match up with what other people in the thread have described when seeing these Uber cars.

That said, I don't see how the pedestrian can be absolved of responsibility here, blindly walking out into traffic.
 
It simply isn't possible, and that has been proven time and again. If you are monitoring an automated system, you cannot react fast enough when that system does something bad. Unless you are actively in the loop, you simply can't switch in fast enough to make a difference. Nor can you maintain hypervigilance for extended periods of time. Once again, this has been proven continuously, with massive amounts of data to back it up, and it is taught as part of training for any leadership position, both civilian and military. It is the reason you rotate guards on a fairly short and regular cadence, and why banks of security camera monitors are generally ineffective. Humans simply don't go from nothing nothing nothing nothing nothing to fully alert.

That's part of the issue with having a supervisor on board in these vehicles - the only way really I think to maintain that vigilance is to actually be driving the thing. Anyone's mind will wander if not actively engaged in the process of driving it, in my opinion.
 
It surprises me that, with all the tech out there today like AI, AR, VR, and simulation, Uber and the other companies involved in self-driving cars still insist they need real-world driving with unfinished products, knowing full well the safety ramifications. It used to be that only pure software got away with releasing half-baked products and making everyone essentially beta testers. Otherwise, there damn well better be someone who is criminally liable when this kind of thing happens. Someone died. There should be no "oops, the software wasn't good enough" BS with no one penalized.
 
What kind of a moron casually crosses in front of a car (with headlights on, I'm assuming) that's closing in at 40 mph... literally a deer in headlights.
 
Bullshit. I've dealt with stupid bicyclists and pedestrians at 40 mph before. Keep in mind the human eye adjusted for dark is going to see better than that camera does. The person was likely not "invisible" to the human eye. The person was likely not invisible to the vehicle either.

Can I understand that a shadowy human figure pushing a bike is hard to discern as a pedestrian? Sure. But I also don't want my automated car smashing into a cow at highway speeds because "eh, it's not a person".

Even her outline wasn't visible on the camera until about a second before the impact, a second and a half at most. I'd like to see you react to an obstruction, whatever the shape, when you're going 40 miles per hour and can't see it until a second before you hit it. Humans can't react nearly fast enough to respond to that. Computers would be far better, but the braking distance is too far even if the computer reacted in a tenth of a second in that situation. (Human reaction time is no better than 2 tenths of a second.) There was simply no way either a person or a computer would have been able to avoid her.
 
What kind of a moron casually crosses in front of a car (with headlights on, I'm assuming) that's closing in at 40 mph... literally a deer in headlights.

On a curve, no less. That's the big reason she wasn't visible far enough out. The car didn't line up the headlights on her until it was about 40-60 feet away, and then there was no stopping.
 
There is more than just braking. She was hit on the right side of the bumper; OK, maybe you can't stop in that distance, but you sure could move a couple of feet to the left while slowing down. It was avoidable by a human driver who was alert, not distracted, and recognized what was going on. The Uber car apparently saw nothing wrong.

Also, we notice when a person is not looking at us; instinctively we know when someone is doing something strange, which alerts us quickly.
 
I'm not sure what people are expecting from self-driving cars and the humans who are supposed to be actively monitoring things, ready to step in at a moment's notice. I drove trains for a number of years. I loved it. Very fun, very focused, intense job. I would be mentally drained by the time I traveled 150 miles to the next terminal. The company I worked for introduced software where the train could drive itself. All you had to do was monitor it, honk the horn 20 seconds before each crossing, and take over for the last 5 miles of the trip. I can tell you without a doubt that you lose track of where you are and what is going on, and it's much easier to panic when something happens. You become disconnected from the vehicle you're operating. This is what I consider the effect of a self-driving vehicle with humans behind the wheel.

So for people saying the driver should have been standing at attention for just such a situation, ready to intervene and prevent such circumstances with about 1 second of response time... They really have no concept of a self driving vehicle. I know legally they are "supposed" to be ready for such things. In the real world it just isn't going to happen.
 
Even her outline wasn't visible on the camera until about a second before the impact, a second and a half at most. I'd like to see you react to an obstruction, whatever the shape, when you're going 40 miles per hour and can't see it until a second before you hit it.

So you're saying the speed limit is 40 mph, so even though it can't see shit, hey, that's the limit, man? The exact point is that the vehicle should not be moving at a speed that outpaces its vision system's ability to react. I would think that would be bullet point 1 or 2 on the whiteboard in the initial "let's make a computer drive a car" meeting.

AI cars should make decisions to proactively protect themselves and those around them, like slowing down when conditions deteriorate. Not mimic a crap driver but with hopefully better reaction times.
 
So you're saying the speed limit is 40 mph, so even though it can't see shit, hey, that's the limit, man? The exact point is that the vehicle should not be moving at a speed that outpaces its vision system's ability to react. I would think that would be bullet point 1 or 2 on the whiteboard in the initial "let's make a computer drive a car" meeting.

AI cars should make decisions to proactively protect themselves and those around them, like slowing down when conditions deteriorate. Not mimic a crap driver but with hopefully better reaction times.
What if it was raining, or foggy? This was just nighttime, with headlights lighting up the road. The video from the car is very inaccurate (maybe deliberately so); real people with regular eyesight will see way more than that video shows. The cell phone video is far more accurate to how it would actually look.
 
As an FYI, there is a video on YouTube of someone driving this path at night after the accident, using a cell phone as the camera, and it is all perfectly well lit, and cell phone cameras are generally shitty without a lot of light.



Nah it's much easier to choose to believe the video from a crappy worse-than-cell-phone camera is what represents reality.

Most of the people in this thread are trotting out red herrings about criminal records, jaywalking, distracted driving, but make no mistake, this is a major failure of Uber's AV system. Score one for California's big government regulation that sent Uber to Arizona I guess?
 
Nah it's much easier to choose to believe the video from a crappy worse-than-cell-phone camera is what represents reality.

Most of the people in this thread are trotting out red herrings about criminal records, jaywalking, distracted driving, but make no mistake, this is a major failure of Uber's AV system. Score one for California's big government regulation that sent Uber to Arizona I guess?

I know what you're saying, but isn't this accident a combination of a failure of Uber's autonomous system, distracted driving and jaywalking? It's like a perfect storm.
 
Then you're an idiot who has no experience driving. It's a straight road, no obstructions, car traveling at 40 MPH. If the headlights were working properly, a human driver paying attention would have plenty of time to see the woman entering the road and react appropriately.

I live in Tempe. I know this environment. This accident shouldn't have happened, period.

Uber will fry for this and they deserve to. They are going to get hammered in court (or settle for 7+ figures) and their self-driving tech is going into the garbage can.

There's a good chance this is related to Uber's alleged theft of the self-driving technology they are using.
If they'd developed it themselves, they might actually understand it, but if they just stole it ... not so much.

It's also related to Uber's corporate culture of breaking the law and putting people at risk (like by hiring felons as drivers) in the name of corporate valuation.

I tend to agree it could have been avoided - remember, to the human eye, there would have been quite a bit more seen (you know, assuming you were looking at the road like you're supposed to, instead of being a twit). This camera can't see as well as most people can at night. Though, not to discount the pedestrian as well, who obviously chose the wrong time and place to cross the road. A stupid choice by the driver, and a stupid choice by the pedestrian equals a stupid outcome.
 
Third, in my opinion Uber is going to pay through the nose once this goes to court.

Nah, the pedestrian agreed to a ToS when she used the app many years back that absolves Uber of any liability .. at most, arbitration. If not, touching the car surely signs you up for such an agreement, and she technically touched the car.


I'm joking, of course. But I wouldn't be surprised if it's attempted at some point, given the overreach of ToS agreements.
 
Humans don't like killing other humans, typically. I would expect a human driver to swerve right if they had the reflexes. I get the feeling the computer decided the occupant was more important and carried on.

I didn't see your post earlier when I had the same thought. Maybe the car did exactly what it was programmed to do. No failure in systems or sensors, no problem detecting the pedestrian. Maybe the car just followed its programming, "determined that one pedestrian was not worth the risk of the passenger's life", and plowed her under.
 
One would think, with how bureaucrats love to meddle, there would be a standard black-box/telemetry format defined for situations like this, so a victim can have access to what the Uber T1000 was up to when it ran them over... But apparently all you have to provide is a down-sampled dashcam video.
 
I didn't see your post earlier when I had the same thought. Maybe the car did exactly what it was programmed to do. No failure in systems or sensors, no problem detecting the pedestrian. Maybe the car just followed its programming, "determined that one pedestrian was not worth the risk of the passenger's life", and plowed her under.

The logic is not entirely incorrect. When you are presented with a) maintaining course and definitely killing one person, or b) swerving into oncoming traffic and potentially killing all four people in that vehicle, what do you do? A human driver would probably take a chance at swerving, but a programmer goes with the option with a known outcome.
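That "known outcome vs. gamble" trade-off can be written down as a toy expected-harm comparison. This is purely illustrative; the probabilities, counts, and function are made up and are not anything a real AV vendor publishes:

```python
# Toy expected-harm comparison for the swerve-or-stay dilemma above.
# All numbers here are hypothetical, for illustration only.

def choose_action(p_crash_if_swerve: float,
                  occupants_at_risk: int = 4,
                  pedestrians_in_path: int = 1) -> str:
    """Pick the option with the lower expected number of deaths."""
    harm_stay = pedestrians_in_path                       # certain outcome
    harm_swerve = p_crash_if_swerve * occupants_at_risk   # gamble
    return "swerve" if harm_swerve < harm_stay else "stay"

print(choose_action(0.1))   # 0.1 * 4 = 0.4 expected deaths < 1 -> "swerve"
print(choose_action(0.5))   # 0.5 * 4 = 2.0 expected deaths > 1 -> "stay"
```

Even this toy version shows why the choice hinges entirely on an estimated probability the car cannot really know, which is the human driver's objection to "value decision" programming in the first place.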
 