Tesla Autopilot Nearly Recreates Fatal Crash

You know, some of these self-driving car accidents are pretty scary, but don't they still have a better track record per mile than human drivers?
 
Per passenger mile, mass transit is much safer. And ultimately looking at this chart, it's much safer to be a passenger in anything other than a passenger car or light truck - otherwise it's a pedestrian or another vehicle dying, not the passenger(s) in the measured vehicle. :wideyed:

[Attachment: chart of fatality rates per passenger mile by travel mode]


This data is a little old, but if it's still accurate, it would seem that when it comes to cars and light trucks, the absolute worst thing you can be in is a pickup truck, followed by an SUV.

[Image: Suv-3.png, death rates by vehicle type]
 


Keep your hands on the wheel, eyes on the road, and treat Autopilot the same way you would a passenger in the car who says "Oh shit, watch out!" when they see something you might have missed. It's a safety system that you hopefully wouldn't ever need, but is an awesome tool that might save your skin if you're in an unexpected situation.


Second video @ 50 seconds... If that was me behind him, I would've made it a point to follow that retard and wait until he stopped to punch him in the face or throw my floor jack through his windshield.
 
Per passenger mile, mass transit is much safer. And ultimately looking at this chart, it's much safer to be a passenger in anything other than a passenger car or light truck - otherwise it's a pedestrian or another vehicle dying, not the passenger(s) in the measured vehicle. :wideyed:

[Attachment: chart of fatality rates per passenger mile by travel mode]

But the point being made was we shouldn't accept deaths as part of the system, and mass transit was proposed as part of that answer. Surprise, there's death there too. Guess what, everyone, nothing is perfect. Human drivers kill people in huge numbers every year, but nobody says stop all driving.

People misusing advanced lane assist is an incredibly small percentage of that, and given the number of lane assist systems on the road, I'd lay money that the death rate is lower than that of unassisted drivers.

Ditto with AI "drivers".

If people afraid of losing their autonomy applied the same standards to human driving as they do to the automated systems, we'd all have speed-governed cars that would drive no faster than 20 miles an hour.

BB
 
But the point being made was we shouldn't accept deaths as part of the system, and mass transit was proposed as part of that answer. Surprise, there's death there too. Guess what, everyone, nothing is perfect. Human drivers kill people in huge numbers every year, but nobody says stop all driving.

People misusing advanced lane assist is an incredibly small percentage of that, and given the number of lane assist systems on the road, I'd lay money that the death rate is lower than that of unassisted drivers.

Ditto with AI "drivers".

BB

Gotcha. Yeah, I don't disagree with that. But I don't think the goal is no deaths; I think the goal is significantly lowering the current human accident rate. I don't think we're there yet with automated systems, but I hope we can get there.

However, the "edge" cases in driving loom pretty large.
 
This data is a little old, but if it's still accurate, it would seem that when it comes to cars and light trucks, the absolute worst thing you can be in is a pickup truck, followed by an SUV.

I wonder how much of that is self-selection of the drivers of such vehicles vs the vehicle type itself. Purely anecdotal, but it seems that SUV and pickup drivers often drive faster and more aggressively than even the average sports coupe driver, with a much longer stopping distance and higher center of gravity. Our monkey brain equates being taller than someone with power, which seems to translate to sitting up higher in those types of vehicles.
 
It is of course detected by the sensors; the problem is that the neural net can't classify it properly. This is most likely an issue with the training set and will improve. Given the large number of sensory combinations, issues like this are bound to happen, but hopefully most of them can be resolved over time.
 
It is of course detected by the sensors; the problem is that the neural net can't classify it properly. This is most likely an issue with the training set and will improve. Given the large number of sensory combinations, issues like this are bound to happen, but hopefully most of them can be resolved over time.

So I assume you are willing to strap yourself into a Model S and splatter yourself on a concrete barrier...

You know.... for science.
 
And that is why it is still classified as beta, and why owners have to accept the disclaimer that BlueFireIce posted.

If a product is used in a way not intended, but the product can essentially do things on its own, who is liable for the mistakes that result?

Beta: gaming/OS/IoT/refrigerator/smartphone/flat-screen/thermometer... (you get the point) software doesn't place people's lives in jeopardy if a subsystem isn't ready for mass consumption. Beta is supposed to mean a version nearly ready for release, after alpha, which usually never sees the light of day.

When a system depends on software for its critical functions, maybe beta isn't enough.

Really bad point, but: when the military rolls out a new computerized machine, it only kills the wrong group of people because of a targeting issue. When a car is seen as being able to drive itself, citizens' lives are at risk. Citizens are now on a battlefield as test dummies for *soon to be improved* software.

I am looking forward to autonomous vehicles; I just expect it to be done responsibly, and not with a *let's try this and see if it will kinda work* attitude, publishing numbers that seem to rationalize a problem.

Only X deaths after X million miles. Accidents happen. These products *cause* the accident. There's a difference.

I really feel that the problem stems from how it was merchandised, so the solution is to enforce how the feature is sold. That may not prevent every instance, but it will at least make the manufacturer seem more responsible.
 
You know some of these self driving car accidents are pretty scary, but don't they still have a better track record per mile than human drivers?

Actually, they do, by a quick mental calculation. Some of the pushback may be because we expect our inventions to perform better than we do. Our computerized systems are faster and better-performing than we are at the problems we give them.

Some of the other pushback may be because people as a whole complain about the "other driver" more than they look at their own performance. It's always the other guy, but this time the other guy is a computer: a virtual undocumented immigrant. Let's really check his papers.

I am where the *new guy* is under a microscope until it's proven that he is ready for the job. I like computers placing my browser's content on my LCD flat-screen. I still don't feel that they are ready for the sci-fi version of self-driving cars that some are treating them as, even though manufacturers aren't claiming the same.

Computerized driving may be similar to a 16-year-old first-time driver behind the wheel. Hopefully it will get better than the 16-year-old. You know, distractions...
 
I wonder how much of that is self-selection of the drivers of such vehicles vs the vehicle type itself. Purely anecdotal, but it seems that SUV and pickup drivers often drive faster and more aggressively than even the average sports coupe driver, with a much longer stopping distance and higher center of gravity. Our monkey brain equates being taller than someone with power, which seems to translate to sitting up higher in those types of vehicles.

I'm sure self-selection is part of the equation, as well as the worse vehicle dynamics (handling, stopping distances, rollovers, etc.) of SUVs and trucks. That, and the inferior crumple zones found in rigid truck-frame models as compared to unibody models.

It all adds up. Quantifying how much is due to each is very difficult though.

Personally, I am very paranoid whenever driving a truck, SUV, or large van. Being used to low vehicles, I feel too far up and too unstable, and it scares the shit out of me, so I avoid them at any cost when I can, and drive them like a grandmother when I can't.
 
I wonder how much of that is self-selection of the drivers of such vehicles vs the vehicle type itself. Purely anecdotal, but it seems that SUV and pickup drivers often drive faster and more aggressively than even the average sports coupe driver, with a much longer stopping distance and higher center of gravity. Our monkey brain equates being taller than someone with power, which seems to translate to sitting up higher in those types of vehicles.

This is a large part of it. The other info missing from the graph is the number of miles driven, and the type of miles driven.

This is also reflected in insurance rates.
I remember, many years ago, when I was looking to buy a mid-sized SUV (needed something to haul stuff for the house), insurance on the Toyota was almost three times the insurance on a Ford Explorer.
That was mainly due to the type of drivers.
Lots of young adults bought the Toyota because it was good off-road; they damaged them going off-road or got in accidents.
Meanwhile, a large portion of the Explorers were minivan replacements driven by middle-aged parents who didn't like the image of driving a minivan.

One of the reasons minivans have a low death rate is that the drivers are usually mothers taking their kids to school: generally low-speed driving, where even if you get in an accident, it will be minor.
 
Can't wait to see self-driving in heavy snow, freezing rain, heavy fog, or add nighttime to any of the above for more entertainment.
Yup... it might look just like when humans fuck things up because they don't know how to drive in the snow!

 
He's also doing 59 in the left lane of the highway in light traffic, not passing anyone, for no goddamn reason. I don't understand this whole fascination with Autopilot, tbh. It makes far more sense on long-haul trucking routes than anything else, not for urban techy young adults driving in congested cities.
 
You know, some of these self-driving car accidents are pretty scary, but don't they still have a better track record per mile than human drivers?
No, even Waymo for 2017 reported 63 disengagements for serious issues over 352K miles of testing.

https://www.dmv.ca.gov/portal/wcm/c...5-a72a-97f6f24b23cc/Waymofull.pdf?MOD=AJPERES

For 2016, Waymo reported that without human driver intervention, the cars would have hit something 9 times in 635K miles of testing.

https://spectrum.ieee.org/cars-that...ving/the-2578-problems-with-self-driving-cars

American drivers (often driving in more challenging environments) have an accident rate of 4.2/million miles.

https://vtnews.vt.edu/articles/2016/01/010816-vtti-researchgoogle.html
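
For a rough comparison on the same basis, here's what those figures look like converted to events per million miles (a sketch in Python; note a serious disengagement is not the same thing as an accident, so this overstates the Waymo side):

```python
# Convert the cited figures to events per million miles.
# Caveat: a "disengagement" is a potential incident, not a confirmed accident.
rates = {
    "Waymo 2017 (63 serious disengagements / 352K mi)": 63 / 352_000,
    "Waymo 2016 (9 would-be contacts / 635K mi)": 9 / 635_000,
    "US human drivers (accidents)": 4.2 / 1_000_000,
}
for label, per_mile in rates.items():
    print(f"{label}: {per_mile * 1e6:.1f} per million miles")
# -> roughly 179.0, 14.2, and 4.2 per million miles respectively
```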
 
hmmm, the last report doesn't jibe with your conclusions/assertion.

Seems like they're close at this point.

BB
 
hmmm, the last report doesn't jibe with your conclusions/assertion.

Seems like they're close at this point.

BB
The last report doesn't take into consideration the number of accidents that the human safety drivers prevented. Self-driving cars have also not been significantly tested in severe weather.
 
No, even Waymo for 2017 reported 63 disengagements for serious issues over 352K miles of testing.

https://www.dmv.ca.gov/portal/wcm/c...5-a72a-97f6f24b23cc/Waymofull.pdf?MOD=AJPERES

For 2016, Waymo reported that without human driver intervention, the cars would have hit something 9 times in 635K miles of testing.

https://spectrum.ieee.org/cars-that...ving/the-2578-problems-with-self-driving-cars

American drivers (often driving in more challenging environments) have an accident rate of 4.2/million miles.

https://vtnews.vt.edu/articles/2016/01/010816-vtti-researchgoogle.html
But but but neural nets! /s
WTF is a neural net anyway? Interdependent adaptive algorithms? That has nothing to do with anything neural. It just doesn't. Brains don't work like computers; they just don't. This stuff ain't ready, and it ain't going to be ready. I used to think differently... until the Uber murder machine's catastrophic failure.
 
If a product is used in a way not intended, but the product can essentially do things on its own, who is liable for the mistakes that result?

Beta: gaming/OS/IoT/refrigerator/smartphone/flat-screen/thermometer... (you get the point) software doesn't place people's lives in jeopardy if a subsystem isn't ready for mass consumption. Beta is supposed to mean a version nearly ready for release, after alpha, which usually never sees the light of day.

When a system depends on software for its critical functions, maybe beta isn't enough.

Really bad point, but: when the military rolls out a new computerized machine, it only kills the wrong group of people because of a targeting issue. When a car is seen as being able to drive itself, citizens' lives are at risk. Citizens are now on a battlefield as test dummies for *soon to be improved* software.

I am looking forward to autonomous vehicles; I just expect it to be done responsibly, and not with a *let's try this and see if it will kinda work* attitude, publishing numbers that seem to rationalize a problem.

Only X deaths after X million miles. Accidents happen. These products *cause* the accident. There's a difference.

I really feel that the problem stems from how it was merchandised, so the solution is to enforce how the feature is sold. That may not prevent every instance, but it will at least make the manufacturer seem more responsible.

Extrapolating from the efforts of corporate lobbies, as well as select politicians eager to push tech (as revealed recently by Arizona's governor's rapport with Uber; we do not yet know the extent of the relationship in other states), what should have been vetted rigorously by the appropriate agency, to the degree that major aircraft systems are tested, is instead allowed to operate and test in the field. There's plenty of money and political capital to be made by being lenient toward tech.

As such, it is not wrong to say that your politician is as much to blame for this absence of precaution.
 
And those aren't America. They are likely the size of a small state. Please stop trying to make us into somewhere else. We like it here.

I didn't, but it IS possible, which was my point.
I live in a place without buses or anything.
 
To reduce fatigue and make small adjustments to keep on course, the same thing aircraft autopilot is for. Keep in mind that aircraft autopilot has far less to deal with, since it's in the air, and it does not, despite what people (including you, it seems) think, fly the plane without any input or attention from a pilot; it lets pilots better monitor everything from the aircraft's many systems to the weather. So the name is fitting, and when using the system in a Tesla, it warns you about what the system is for, what it is not for, and that you need to keep control at all times. I believe they still require you to sign an agreement, when getting a car with this function, that you understand it. But, people being people, Tesla also added hand-placement detection, plus audible and visual warnings for when the person has chosen to ignore the paperwork they read and signed and the warning prompt you have to click when you engage the system.

What percentage of drivers are familiar with nothing but the Hollywood version of commercial airplane autopilot? 90%?
 
What percentage of drivers are familiar with nothing but the Hollywood version of commercial airplane autopilot? 90%?
You'd think this would not have to be stated. But I bet the number is closer to 97%.
 
I thought these Tesla cars have radar, just like my almost-nine-year-old Prius.
If it does, then it can keep a safe distance behind a car. Why is it that it has no problem driving into a concrete wall, then?
I remember a video of a Prius slamming the brakes for aluminum foil hanging in front of it. That's how a fella tested this new system after buying the car.

If it doesn't have one, then I really don't understand how the hell Autopilot is better than cruise control with a radar. I think it is far easier to keep steering a car than to keep playing with the pedals.

Either way it sucks, and as others have pointed out, the faintly flashing white lights are nowhere near enough to alert a distracted driver.
My Prius can scare the bejesus out of me (and anyone else in the car) when it beeps and tells me to 'BRAKE ! ! !' when I intend to get around a slowly turning bus in an intersection.
 
What percentage of drivers are familiar with nothing but the Hollywood version of commercial airplane autopilot? 90%?

Given that only 1% of the population are pilots... probably like 99.5%.
 
The 'hold steering wheel' message and accompanying flashing border occur within the first 10 seconds of hands-free driving. After 15 seconds you get a loud warning sound, and at around 30 seconds the warning sound plays continuously until, if you still keep your hands off, the vehicle begins to decelerate to a stop at about 40 seconds.

So basically they give you about 45 seconds of wanking off behind the wheel before it deactivates, and you are unable to reengage it until a couple of hours later.

Jeez, so at a typical highway traveling speed of 80 mph, it will travel almost 120 feet every second before even starting to slow down, if something goes wrong?

That seems kind of bad.
 
I thought these Tesla cars have radar, just like my almost-nine-year-old Prius.
If it does, then it can keep a safe distance behind a car. Why is it that it has no problem driving into a concrete wall, then?
I remember a video of a Prius slamming the brakes for aluminum foil hanging in front of it. That's how a fella tested this new system after buying the car.

If it doesn't have one, then I really don't understand how the hell Autopilot is better than cruise control with a radar. I think it is far easier to keep steering a car than to keep playing with the pedals.

Either way it sucks, and as others have pointed out, the faintly flashing white lights are nowhere near enough to alert a distracted driver.
My Prius can scare the bejesus out of me (and anyone else in the car) when it beeps and tells me to 'BRAKE ! ! !' when I intend to get around a slowly turning bus in an intersection.
https://www.wired.com/story/tesla-autopilot-why-crash-radar/
I don't know how these systems are allowed to even be tested as fully autonomous... No one should consider Tesla's implementation autonomous, that is for sure. (I know they don't sell it as such.)
 
You all do realize this is Skynet shit, right? It's just the beginning...
 
Jeez, so at a typical highway traveling speed of 80 mph, it will travel almost 120 feet every second before even starting to slow down, if something goes wrong?

That seems kind of bad.

That seems better than a human to me. At 80mph you are going 117 feet per second. A typical human will need about three seconds to recognize there is an issue...
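
For reference, the feet-per-second arithmetic (a quick sketch; the one- and three-second reaction windows are illustrative assumptions):

```python
# Distance covered at highway speed during a reaction window.
MPH_TO_FPS = 5280 / 3600  # feet per second for each mph

speed_fps = 80 * MPH_TO_FPS  # ~117.3 ft/s at 80 mph
for reaction_s in (1.0, 3.0):
    print(f"{reaction_s:.0f} s at 80 mph -> {speed_fps * reaction_s:.0f} ft")
# 1 s -> 117 ft, 3 s -> 352 ft
```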
 
What percentage of drivers are familiar with nothing but the Hollywood version of commercial airplane autopilot? 90%?

If people want to be ignorant of a word, that is their own doing, not the fault of the person or company making proper use of it. Naming aside, it still requires them to ignore the paperwork they signed and the warning you have to agree to just to turn it on, and then to ignore the system warnings and use the system improperly. The people who do this and have this happen are seriously going out of their way to get in a wreck.
 
I thought these Tesla cars have radar, just like my almost-nine-year-old Prius.
If it does, then it can keep a safe distance behind a car. Why is it that it has no problem driving into a concrete wall, then?
I remember a video of a Prius slamming the brakes for aluminum foil hanging in front of it. That's how a fella tested this new system after buying the car.

If it doesn't have one, then I really don't understand how the hell Autopilot is better than cruise control with a radar. I think it is far easier to keep steering a car than to keep playing with the pedals.

Either way it sucks, and as others have pointed out, the faintly flashing white lights are nowhere near enough to alert a distracted driver.
My Prius can scare the bejesus out of me (and anyone else in the car) when it beeps and tells me to 'BRAKE ! ! !' when I intend to get around a slowly turning bus in an intersection.

Can't find anything definitive in a quick Google search, but given the prevalence of hits for "concrete-penetrating radar," I would think that radar would be nearly useless at detecting concrete. Aluminum foil is highly reflective, and therefore easy for radar to see. So are massive hunks of metal. The parking sensors that can detect objects like concrete barriers rely on ultrasound, and I doubt they are useful at high speeds.
 
There's plenty of money and political capital to be made by being lenient toward tech

I shouldn't be surprised by the amount of callousness in society, but I still am when I am reminded of it. It makes me get behind the FDA when they take an inordinate amount of time to OK a new drug that's already being used overseas.

One of my ongoing realizations about the ability of "big money" to make things happen is when they place new drugs in the hands of doctors who will prescribe them to a needy public. Then there are all of the warnings that a lot of the new drugs come with.

A 60-second commercial about a new drug comes with 45 seconds of the narrator telling you of all the other things you may fall victim to.

Sometimes followed by another commercial, about lawyers telling you that they will represent you in a class action over last year's go-to drug.
 
I shouldn't be surprised by the amount of callousness in society, but I still am when I am reminded of it. It makes me get behind the FDA when they take an inordinate amount of time to OK a new drug that's already being used overseas.

One of my ongoing realizations about the ability of "big money" to make things happen is when they place new drugs in the hands of doctors who will prescribe them to a needy public. Then there are all of the warnings that a lot of the new drugs come with.

A 60-second commercial about a new drug comes with 45 seconds of the narrator telling you of all the other things you may fall victim to.

Sometimes followed by another commercial, about lawyers telling you that they will represent you in a class action over last year's go-to drug.

Except the FDA et al. are unnecessarily slow. There are life-saving drugs/inventions out there today that aren't used because the government has created so many layers of bureaucracy that it's simply insane. Take aviation as another example. I was told it can take 5-10 years (depending on complexity) to get a product certified for inclusion in a plane. The exception is the "experimental" market, where there is MUCH less oversight. What has been seen in general aviation is that experimental aircraft (these are still production quality; they just aren't certified the same way) are now SAFER and more CAPABLE than most GA planes. Simple, cheap devices like angle of attack (AOA) indicators can be installed in an experimental plane and immediately improve safety. Getting them into a certified aircraft takes years longer at 4-5x the price.

WHY? Unnecessary bureaucracy, that's why.

This is why the FAA is changing the process to a risk-management-based approach. AOA indicators were the first thing to change, and because of the reduced burden on manufacturers, the cost of these devices for installation in certified aircraft has plummeted.
 
The edge face of a barrier is quite small, and the systems have trouble tracking it at a distance. They are far more suited to tracking large objects (cars) moving at approximately the same speed as the radar.
I would probably have drawn that conclusion if it had smashed into a single barrier along the longitudinal axis.

But this video, as well as pictures of the lethal crash, both show barriers dead ahead, both of which are 2-3 feet wide.
I'm pretty sure the system is capable of tracking objects/obstacles this wide if it has no problem tracking a person, and at a sufficient distance that even a late identification would still be enough to slow the vehicle down to a non-lethal speed.
 
I would probably have drawn that conclusion if it had smashed into a single barrier along the longitudinal axis.

But this video, as well as pictures of the lethal crash, both show barriers dead ahead, both of which are 2-3 feet wide.
I'm pretty sure the system is capable of tracking objects/obstacles this wide if it has no problem tracking a person, and at a sufficient distance that even a late identification would still be enough to slow the vehicle down to a non-lethal speed.

Think about this from a data-analysis and filtering perspective. It's a fixed object, just like the road. You will regularly get radar returns from fixed things along the road, whether small rocks, deflections in the pavement, or whatever. You must filter those out so that the system isn't arbitrarily slamming on the brakes as it travels down the highway. How do you tell the difference between a deflection in the pavement, which you will safely pass over, and a wall that's 200 ft out? Your data has to be perfect, and your algorithm has to have enough confidence in the data that it accepts one but not the other, 100% correctly, 100% of the time. In contrast, filtering out all the fixed objects and focusing only on things traveling at approximately the same speed, for adaptive cruise, is easy.
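
A toy illustration of that filtering trade-off (hypothetical detections and threshold, not any vendor's actual pipeline):

```python
# Sketch of the clutter filter described above. Adaptive cruise radars
# commonly discard returns whose speed over the ground is ~0, which rejects
# pavement clutter, and, as a side effect, stationary barriers too.
from dataclasses import dataclass

@dataclass
class Detection:
    range_m: float      # distance to the return, meters
    closing_mps: float  # closing speed from Doppler, m/s

def moving_targets(detections, ego_speed_mps, tol_mps=2.0):
    """Keep only returns whose speed over the ground is meaningfully nonzero."""
    kept = []
    for d in detections:
        ground_speed = ego_speed_mps - d.closing_mps  # ~0 for fixed objects
        if abs(ground_speed) > tol_mps:
            kept.append(d)
    return kept

# At 30 m/s: a lead car doing 25 m/s closes at 5 m/s and survives the filter;
# a concrete wall closes at exactly 30 m/s and is discarded as clutter.
scene = [Detection(80.0, 5.0), Detection(60.0, 30.0)]
print(moving_targets(scene, ego_speed_mps=30.0))
```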

This is one of the reasons I think a lot of this, with the sensors in use, takes the wrong approach. Visual aids such as lane markers are designed for humans; aids for computers should be designed to be reliably read by the computer system 100% of the time. I've always been in favor of doing "test highways" for long-haul routes along interstates, where automation and automation aids such as computer-readable lane markers can be tested and perfected before bringing them into urban areas. Force automated trucks to stay in lanes that are well marked and let them go from yard to yard (most of which already exist just out of town, where trucking and LTL facilities already are), where a human can take over as the truck comes into the urban area. As they travel they will be identifiable by other drivers, can easily be passed, and their impact on existing traffic is low. You could easily gather up millions of miles of data this way. As it grows you add more and more highway routes. As the sensors and tech improve you can add markers to urban areas and light up well-tested routes, just as bus routes exist today. Pretty soon entire cities will be lit up, but only after the tech matures.
 
If the radar these use can't tell the difference between the road (or little rocks on the road that are an inch tall at most) and a several-foot-tall barrier, they need to just give up on Autopilot using radar, or hire some people who have a clue what they're doing with it.
 
I'm not going to pretend to know what hardware they use or what data they have, but determining the difference between ignorable obstructions and road barriers or whatever is not the problem. Determining the difference correctly 100% of the time, with no false negatives, is. And I'm sure in lab settings it does. Also, no one is attempting Autopilot on radar data alone. There should be multiple sensors in play here, and their robustness is paramount in a project like this. Failures lead to fatal accidents, as we've seen.
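
To make the false-negative point concrete, here's a toy threshold experiment on made-up return-strength distributions (purely illustrative numbers, not real sensor data):

```python
# Toy illustration: any single detection threshold trades false alarms
# (phantom braking) against misses (failing to flag a real obstacle).
import random

random.seed(0)
clutter = [random.gauss(1.0, 0.5) for _ in range(10_000)]   # pavement, debris
obstacle = [random.gauss(2.0, 0.5) for _ in range(10_000)]  # real barriers

for threshold in (1.5, 2.0, 2.5):
    false_alarms = sum(c > threshold for c in clutter) / len(clutter)
    misses = sum(o <= threshold for o in obstacle) / len(obstacle)
    print(f"thr={threshold}: false alarms {false_alarms:.1%}, misses {misses:.1%}")
# Lowering the threshold cuts misses but brakes for phantoms, and vice versa.
```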
 
How do you tell the difference between a deflection in the pavement, which you will safely pass over, and a wall that's 200 ft out?

There is testing data showing the Model S can stop in under 125 ft from 60 mph. If whatever sensors are onboard cannot figure out that there's a crash barrier, 55-gallon drums, or a Jersey barrier at that distance, then there's a design failure.
If the sensors had a failure and Autopilot did not disengage, that's still a design failure.

As an interesting note, the UK safety code (tried to look up the US, but got UK results first) expects the average passenger car and average driver to stop in 240 ft, although some studies show this is closer to 315 ft.

Signs and lanes for autonomous vehicles seem interesting and could work quite well, but that would require the local jurisdiction to have them in place. And in the end, this still only helps with mostly static objects. It doesn't help if they're not kept up to date; perhaps some sort of digital sign? But once again, that's additional cost.
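
For context, rough stopping-distance math behind those figures (a sketch; the reaction time and deceleration values are textbook-style assumptions, not measurements):

```python
# Stopping distance = reaction (thinking) distance + braking distance.
G_FPS2 = 32.174  # gravitational acceleration, ft/s^2

def stopping_distance_ft(speed_mph, reaction_s, decel_g):
    v = speed_mph * 5280 / 3600                # speed in ft/s
    thinking = v * reaction_s                  # distance before brakes apply
    braking = v ** 2 / (2 * decel_g * G_FPS2)  # v^2 / (2a)
    return thinking + braking

# Brakes-only at ~1 g, roughly matching the <125 ft Model S test figure:
print(stopping_distance_ft(60, 0.0, 1.0))  # ~120 ft
# Average driver (1.5 s reaction) in an average car (~0.6 g):
print(stopping_distance_ft(60, 1.5, 0.6))  # ~333 ft
```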
 
A Tesla owner in Indiana decided to recreate last month's fatal crash, where the car's Autopilot mode drove straight into a K-rail, according to an article from Electrek. The video below shows a Model S with the latest Autopilot hardware mistakenly treating a right-side lane as a left-side lane and pointing the vehicle squarely at a barrier.

How is it that the radar system of neither of these cars could see a solid, immobile concrete barrier? Fortunately, the Indiana driver in the video managed to stop just in time.

After reading all of these comments, I think these cars haven't been ready for public use. I have read quite a lot of news about self-driving car accidents these past few months. I have also read some articles that discuss this matter, like the one at https://www.lemberglaw.com/self-driving-autonomous-car-accident-injury-lawyers-attorneys/. I personally have no intention of trying one of these cars now.
 