How to Screw Up Autonomous Cars

FrgMstr

Holy crap! Turn any stop sign into a 45 mph speed limit sign, as far as some driverless cars are concerned. Actually, if you read through the article, this sort of thing will certainly have to be dealt with. I know no [H] readers would do this....well, strike that. This Car and Driver blog is worth a read.

UW computer-security researcher Yoshi Kohno described an attack algorithm that uses printed images stuck on road signs. These images confuse the cameras on which most self-driving vehicles rely. In one example, explained in a paper uploaded to the open-access preprint site arXiv last week, small stickers attached to a standard stop sign caused a vision system to misidentify it as a Speed Limit 45 sign.
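For the curious, the core idea behind this class of attacks is gradient-based perturbation: nudge the input pixels in whatever direction most increases the classifier's error. Here's a minimal sketch using the classic FGSM method from the adversarial-ML literature (to be clear, this is not the paper's sticker algorithm; `model` is any PyTorch image classifier you supply):

```python
# Minimal sketch of a gradient-based adversarial attack (FGSM).
# NOT the paper's sticker algorithm, just the underlying idea:
# step each pixel in the direction that increases the classifier's loss.
# `model` is any PyTorch image classifier (hypothetical here).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` (1, C, H, W, values in [0, 1]) perturbed
    so the model is more likely to misclassify it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Move every pixel by +/- epsilon along the sign of the gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The hard part the researchers solved is making a perturbation like this survive the physical world: printed at sign scale, robust across viewing angles and distances, and confined to sticker-shaped regions so it passes as ordinary graffiti.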
 
I'll bet even with these "pranks", you'd have fewer accidents nationwide if all the cars were autonomous.
 
I'll bet even with these "pranks", you'd have fewer accidents nationwide if all the cars were autonomous.

I doubt it. Take a four-way stop and turn all four signs into 45 mph speed limit signs, and you don't leave enough room for the cars to physically stop once they "see" the other cars.

Issues like this are minor compared to what could happen if someone infiltrates your car's OS. Let's face it, we aren't ready for this even if we make them perfect, because the "security" in car operating systems is so far beyond pathetic it doesn't even register on the scale...
 
I doubt it. Take a four-way stop and turn all four signs into 45 mph speed limit signs, and you don't leave enough room for the cars to physically stop once they "see" the other cars.

Issues like this are minor compared to what could happen if someone infiltrates your car's OS. Let's face it, we aren't ready for this even if we make them perfect, because the "security" in car operating systems is so far beyond pathetic it doesn't even register on the scale...

Well, that "trick" will only work on certain models of camera with certain models of AI. So basically each "trick" will only effect some very small percentage of cars on the road. And there's no data that show how the OTHER cars would react (i.e., your car blows through the intersection at 45mph, but the OTHER cars may stop in time and avoid an accident).

Someone COULD hack into your car and take it over. Someone could also cut your brake lines or shoot out your tires. It's possible, it's going to happen to someone one day, but it's not like it's going to happen daily.
 
I doubt it. Take a four-way stop and turn all four signs into 45 mph speed limit signs, and you don't leave enough room for the cars to physically stop once they "see" the other cars.

Issues like this are minor compared to what could happen if someone infiltrates your car's OS. Let's face it, we aren't ready for this even if we make them perfect, because the "security" in car operating systems is so far beyond pathetic it doesn't even register on the scale...


Actually, in comparison I think Bandalo is more right. Crashes have the weirdest odds, especially once you include human error, so to be perfectly honest, Murphy's law dictates it most likely wouldn't be any worse than what can already happen.
 
I doubt it. Take a four-way stop and turn all four signs into 45 mph speed limit signs, and you don't leave enough room for the cars to physically stop once they "see" the other cars.
Once vehicle-to-vehicle/pedestrian/traffic-light communication becomes available, they won't need to "see" anything.
 
Once vehicle-to-vehicle/pedestrian/traffic-light communication becomes available, they won't need to "see" anything.

"See" was not meant as visual. Even with those lights, one could hack that system and you would still have the same issue.
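The spoofing risk is easy to see with a toy example. Assuming a hypothetical unauthenticated broadcast protocol (the JSON format and port here are invented; real V2V stacks such as SAE J2735 over IEEE 1609.2 sign messages precisely to prevent this), anyone could inject a ghost hazard:

```python
# Toy illustration of why unauthenticated vehicle-to-X broadcasts are
# dangerous. The message format and port are invented for this sketch;
# real V2V standards sign messages to block exactly this kind of spoofing.
import json
import socket

fake_hazard = {
    "type": "hazard",
    "lat": 43.0389, "lon": -87.9065,  # made-up coordinates
    "desc": "stopped vehicle ahead",  # a ghost car
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# Any car that trusted unsigned broadcasts would brake for nothing.
sock.sendto(json.dumps(fake_hazard).encode(), ("255.255.255.255", 37020))
```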

Well, that "trick" will only work on certain models of camera with certain models of AI. So basically each "trick" will only effect some very small percentage of cars on the road. And there's no data that show how the OTHER cars would react (i.e., your car blows through the intersection at 45mph, but the OTHER cars may stop in time and avoid an accident).

Someone COULD hack into your car and take it over. Someone could also cut your brake lines or shoot out your tires. It's possible, it's going to happen to someone one day, but it's not like it's going to happen daily.

Actually, in comparison I think Bandalo is more right. Crashes have the weirdest odds, especially once you include human error, so to be perfectly honest, Murphy's law dictates it most likely wouldn't be any worse than what can already happen.

I'm less concerned about this "hack" than I am about someone ACTUALLY hacking into the car. I think saying it won't happen very often seriously underestimates the situation. Computer security will become ever more critical in cars as we add more and more complexity. Hell, I already hack into my own truck just to change preferences and enable features the vendor doesn't directly support. What if that were to screw up an AI driver somehow?

I think as we add more automation into cars they will become more vulnerable, and the dishonest among us will exploit them more... People are already using remote key fobs to steal from people's cars.

Bottom line: hacking of cars will become ever more prevalent, and if you aren't worried about their security...
 
An AI car should obviously never have online connectivity.
 
An AI car should obviously never have online connectivity.

That doesn't solve anything; it's still exploitable if it has any sort of connection (Wi-Fi more so than wired, but wired is still doable).
 
"See" was not meant as visual. Even with those lights, one could hack that system and you would still have the same issue.

I'm less concerned about this "hack" than I am about someone ACTUALLY hacking into the car. I think saying it won't happen very often seriously underestimates the situation. Computer security will become ever more critical in cars as we add more and more complexity. Hell, I already hack into my own truck just to change preferences and enable features the vendor doesn't directly support. What if that were to screw up an AI driver somehow?

I think as we add more automation into cars they will become more vulnerable, and the dishonest among us will exploit them more... People are already using remote key fobs to steal from people's cars.

Bottom line: hacking of cars will become ever more prevalent, and if you aren't worried about their security...

I'm not saying it's not a risk, I'm just saying the overall risk of that type of hacking is very low compared to the everyday risks of things like drunk drivers, teenage drivers, people texting while driving, people putting on makeup while driving, etc., etc.

The key fob thing is a perfect example. They show news stories and segments about that kind of "hacking" all the time, and yet in the "wild" it's EXTREMELY rare. It takes technical skill and know-how to do that sort of thing, and people with those types of skills who are ALSO interested in car theft are hard to come by. Far fewer new cars are stolen now than in the past, since you can't just smash the window and hot-wire them. Now you need a laptop and three different antennas, you have to catch the person getting into or out of the car to capture the signal, and then you can break in. Oh, and all your gear and software only works on VW and Audi SUVs made between 2014 and 2016.
 
As a computer professional with 30+ years of experience, I understand that computers crash, OSes have bugs, and garbage in means garbage out. "We" are masters of the machines, not vice versa. This article only underlines the unforeseen technical issues that must be resolved to make autonomous cars practical. Add in the political, social, and economic consequences, as yet unaddressed, and it is more likely that autonomous cars are decades away.

Is it just me or do autonomous cars stink of SJWs?
 
This can be problematic near UW campuses, where a lot of punk rock bands, concert venues, and nihilists (the irony, all over UW-Milwaukee.... we don't believe in anything, but let's post our beliefs everywhere) put their stickers on local signs, etc...

My all-time favorite near UW-Milwaukee was the stop sign that had "Hammertime" underneath it.
STOP
Hammertime!

lol
 
AI is nowhere close to what it needs to be to replace driving. Even with the level of awesomeness being developed, the human brain is simply wired biologically for these visual tasks. Just wait till the death toll racks up and we have no one to blame but ourselves. You know what else would lower car fatalities? Education and harsher driving laws.
 
Not sure why they would set up a system where the vehicle "reads" the sign....most signs are shaped a certain way. If a stop sign can be confused with a speed limit sign, that is something that definitely needs updating. They need to incorporate sign shape to ensure this type of thing can't happen. What happens when someone shoots a sign with paintballs, eggs it, or paints over it? I see that all the time. The system should be more robust than one where a few minor changes throw the whole thing off.
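A shape cross-check along those lines could be sketched like this (hedged: the `EXPECTED_SIDES` table and label names are hypothetical, and a production pipeline would fuse far more cues than contour geometry, but it shows the idea of rejecting a label whose shape doesn't match):

```python
# Sketch: sanity-check a classifier's label against the sign's geometry.
# Stop signs are octagons; speed-limit signs are rectangles. If the label
# and the detected shape disagree, treat the detection as suspect.
# Assumes OpenCV 4.x; the label names here are hypothetical.
import cv2

EXPECTED_SIDES = {"stop": 8, "speed_limit": 4}

def count_sides(sign_crop):
    """Approximate the polygon around the largest contour in the crop."""
    gray = cv2.cvtColor(sign_crop, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    largest = max(contours, key=cv2.contourArea)
    poly = cv2.approxPolyDP(largest, 0.02 * cv2.arcLength(largest, True), True)
    return len(poly)

def label_is_plausible(label, sign_crop):
    """Reject labels whose expected side count doesn't match what we see."""
    expected = EXPECTED_SIDES.get(label)
    return expected is None or count_sides(sign_crop) == expected
```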
 
Someone could hacksaw off all the STOP signs at a four-way; now that would be scary.

With updated maps and GPS, an autonomous vehicle would know what the speed limit is and where stop signs are located. Of course, in a truly networked world the AI car would know where all the other vehicles are located.
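That map-versus-camera cross-check is simple to express. A hedged sketch (`map_db.sign_at` and the label names are invented for illustration; the point is only that a camera reading which contradicts the surveyed map shouldn't be trusted on its own):

```python
# Sketch: reconcile a camera's sign reading with stored map data.
# `map_db` and its `sign_at` lookup are hypothetical placeholders.
def risk_rank(label):
    """Lower rank = more conservative behavior (stopping beats speeding)."""
    order = ["stop", "yield", "speed_limit_25", "speed_limit_45"]
    return order.index(label) if label in order else len(order)

def reconcile(detected_label, position, map_db):
    """Pick the label the planner should trust at this position."""
    expected = map_db.sign_at(position)    # hypothetical HD-map lookup
    if expected is None:
        return detected_label              # no map data: fall back to camera
    if expected != detected_label:
        # Camera and map disagree (vandalized sign? stale map?): act on
        # the more conservative reading and flag the spot for review.
        return min(expected, detected_label, key=risk_rank)
    return detected_label
```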
 
Is it just me or do autonomous cars stink of SJWs?

At first I was like, "Huh?", but then I saw your point. I for one look forward to maybe not having to drive everywhere. Fewer accidents, fewer idiots to worry about.

But I can see the SJW aspect, as it feels sort of...pussy. Like you aren't man enough to drive your own damn car. But I still like the idea of enjoying my time doing other things when I have to go 75 miles down the highway...
 
For some reason autonomous car threads always seem to turn into some sort of religious argument.
 
At first I was like, "Huh?", but then I saw your point. I for one look forward to maybe not having to drive everywhere. Fewer accidents, fewer idiots to worry about.

But I can see the SJW aspect, as it feels sort of...pussy. Like you aren't man enough to drive your own damn car. But I still like the idea of enjoying my time doing other things when I have to go 75 miles down the highway...

[image: warning.png]
 
Someone COULD hack into your car and take it over. Someone could also cut your brake lines or shoot out your tires. It's possible, it's going to happen to someone one day, but it's not like it's going to happen daily.

If someone found a way to deliver an over-the-air hack, you could have all the cars on a busy interstate smash into each other, perhaps even on all of the interstates in the country.

If you wanted to cut someone's brake lines, you would have to visit each individual car and cut its brakes...

It's just like sexy pictures: if someone took them with a Polaroid camera, you would have to find and steal the physical prints, but figure out a way into iTunes/iCloud and you can steal a whole bunch of them at once.

This is why putting networked electronics and AI on something so dangerous is worrisome to me.
 
If someone found a way to deliver an over-the-air hack, you could have all the cars on a busy interstate smash into each other, perhaps even on all of the interstates in the country.

If you wanted to cut someone's brake lines, you would have to visit each individual car and cut its brakes...

It's just like sexy pictures: if someone took them with a Polaroid camera, you would have to find and steal the physical prints, but figure out a way into iTunes/iCloud and you can steal a whole bunch of them at once.

This is why putting networked electronics and AI on something so dangerous is worrisome to me.

Sure, if every single car were the same year, brand, and model. But they're not and never will be, so at best you could hack a FEW cars. And most "over the air" stuff is more about telling all the cars where the traffic is, where the road construction is, etc.; the "network" doesn't individually control every car and wouldn't be able to cause a huge thousand-car pileup. A big hack might be able to cause traffic jams and slowdowns, but a doomsday accident scenario is unlikely. Not impossible, but very unlikely.
 
I just see these hacks as low-hanging exploits ready to be taken advantage of. These algorithms are shit and shouldn't be considered roadworthy.
I assume a whole lot of interesting hacks will come out soon, because this tech and others like it aren't being properly tested and rigorously vetted with a hacker mentality.
Case in point are those automatic braking systems being deployed on a lot of new cars. They rely mostly on radar?
What if I built a device that simulated an object in front of a car and used it to purposely harm people on the highway? It would be practically undetectable. I'm fairly sure crap like that exists all over the place.

Automakers have been talking a lot about securing the OBD-II ports in the car, because they're rife with security flaws that allow people to take control of the car. There are OBD-II scanners with Wi-Fi and Bluetooth. There's an OBD-II port in the engine compartment. Anyone with bad intent (like an assassin) who gets access to it can take control of a car remotely and kill someone.
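For a sense of how low the barrier to bus access is, here's a hedged, read-only sketch using the python-can library. It assumes a Linux SocketCAN interface named can0 (e.g., a cheap USB adapter plugged into the port) and a car that answers standard OBD-II queries; 0x7DF with service 0x01, PID 0x0D is the standard vehicle-speed request:

```python
# Read-only sketch of talking to a car over OBD-II with python-can.
# Assumes a Linux SocketCAN interface named "can0". 0x7DF is the OBD-II
# functional (broadcast) request ID; service 0x01, PID 0x0D asks for
# vehicle speed. Harmless here, but the same bus access enables writes.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

request = can.Message(
    arbitration_id=0x7DF,                      # OBD-II broadcast request
    data=[0x02, 0x01, 0x0D, 0, 0, 0, 0, 0],    # len=2, service 01, PID 0D
    is_extended_id=False,
)
bus.send(request)

reply = bus.recv(timeout=1.0)
if reply is not None and reply.data[1] == 0x41 and reply.data[2] == 0x0D:
    print(f"Vehicle speed: {reply.data[3]} km/h")
```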

I expect to see lots of lawsuits in the future.
 
Sure, if every single car were the same year, brand, and model. But they're not and never will be, so at best you could hack a FEW cars. And most "over the air" stuff is more about telling all the cars where the traffic is, where the road construction is, etc.; the "network" doesn't individually control every car and wouldn't be able to cause a huge thousand-car pileup. A big hack might be able to cause traffic jams and slowdowns, but a doomsday accident scenario is unlikely. Not impossible, but very unlikely.

If you look at vulnerabilities in "core" libraries, like ImageMagick, libzip, bzip2, etc., they affect a multitude of proprietary software. If you can find a way to get the vulnerable code to execute your malicious payload, you're in. It's hard for me to imagine that Tesla, or any other manufacturer, is going to rewrite the core libraries for every model.

Any network connection could possibly be used to get some code in some library to run. I understand Tesla even does over-the-air updates of their software. There have been cases in the past where folks have stolen the right encryption keys to maliciously install software through an automated update.

You would not have to individually control every car... you would only have to figure out how to get the car to accelerate at all times.

EDIT: I am not trying to imply that Tesla uses the libraries I mentioned. I mean that all Tesla models probably share some of the same code.
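The standard mitigation for the stolen-key scenario is to sign updates and verify them before install. A minimal sketch with the `cryptography` library's Ed25519 primitives (key management deliberately simplified; as noted above, a stolen signing key defeats this check, which is why real updaters add key rotation, rollback protection, and a hardware root of trust):

```python
# Sketch: verify a signed over-the-air update before flashing it.
# Uses the `cryptography` library's Ed25519 support. Key management is
# simplified; a stolen signing key still defeats this check, so real
# systems layer on key rotation and a hardware root of trust.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def update_is_authentic(vendor_pubkey: bytes, firmware: bytes,
                        signature: bytes) -> bool:
    """Return True only if `signature` over `firmware` verifies."""
    pub = Ed25519PublicKey.from_public_bytes(vendor_pubkey)
    try:
        pub.verify(signature, firmware)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False

# The updater would then gate the flash on this check:
#   if update_is_authentic(VENDOR_PUBKEY, blob, sig): flash(blob)
```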
 
If you look at vulnerabilities in "core" libraries, like ImageMagick, libzip, bzip2, etc., they affect a multitude of proprietary software. If you can find a way to get the vulnerable code to execute your malicious payload, you're in. It's hard for me to imagine that Tesla, or any other manufacturer, is going to rewrite the core libraries for every model.

Any network connection could possibly be used to get some code in some library to run. I understand Tesla even does over-the-air updates of their software. There have been cases in the past where folks have stolen the right encryption keys to maliciously install software through an automated update.

You would not have to individually control every car... you would only have to figure out how to get the car to accelerate at all times.

I'm not saying it's impossible, I'm saying it's unlikely. You can hack almost ANYTHING if you have enough free time.

You could probably hack in and get complete control of a particular car if you were knowledgeable and determined. But then you can really only crash ONE car, or maybe crash it a few times. Of course, the driver WILL be in the car, and they've always got the hard-wired "OFF" switch. There's only so much you can do with software.

But you're not going to realistically be able to take over dozens of cars at once and cause some sort of city-wide catastrophe.
 
At first I was like, "Huh?", but then I saw your point. I for one look forward to maybe not having to drive everywhere. Fewer accidents, fewer idiots to worry about.

But I can see the SJW aspect, as it feels sort of...pussy. Like you aren't man enough to drive your own damn car. But I still like the idea of enjoying my time doing other things when I have to go 75 miles down the highway...


It's a severe ethical issue that the SJWs will not touch. Tell me, who decides who dies in an accident? The programmer?

Scenario: four 18-year-old students are in an AI car on a divided highway near a residential area when a 5-year-old child runs out in front of the car. The distance is not sufficient for the car to stop in time, and the only viable diversion path is into the barrier. The AI has two choices:

1. Hit the child, killing it instantly.
2. Divert the car into the barrier, killing all occupants due to the high speed involved.

Why does the programmer get to pick for me if I am in that car?
 
At first I was like, "Huh?", but then I saw your point. I for one look forward to maybe not having to drive everywhere. Fewer accidents, fewer idiots to worry about.

But I can see the SJW aspect, as it feels sort of...pussy. Like you aren't man enough to drive your own damn car. But I still like the idea of enjoying my time doing other things when I have to go 75 miles down the highway...
I am of the opposite opinion. Human beings are capable of responding to novel situations; programmed equipment is not. I see more congestion and more accidents as a result of premature adoption of autonomous cars.

SJWs are like mindless drones advocating the progressive agenda without awareness of the social evolution of mankind. Progressives are behind autonomous cars; therefore SJWs will advocate for their adoption, citing fewer accidents when no data exists supporting this conclusion.

Why would you assume there will be fewer accidents with autonomous cars? Can you cite a complex program that is bug-free and never crashes across multiple platforms? Would you gamble your life on the stability of a computer program? I for one will not...
 
But you're not going to realistically be able to take over dozens of cars at once and cause some sort of city-wide catastrophe.

I think we are going to have to agree to disagree.

I am not sure how you hold this opinion when attacks like WannaCry exist that infected thousands of systems. I am not sure how you believe that a computer in a car is any different from a computer on a desk, but it doesn't seem like I will be able to change your mind.

EDIT: And I don't think you will be able to change my mind either ;)
 
It's a severe ethical issue that the SJWs will not touch. Tell me, who decides who dies in an accident? The programmer?

Scenario: four 18-year-old students are in an AI car on a divided highway near a residential area when a 5-year-old child runs out in front of the car. The distance is not sufficient for the car to stop in time, and the only viable diversion path is into the barrier. The AI has two choices:

1. Hit the child, killing it instantly.
2. Divert the car into the barrier, killing all occupants due to the high speed involved.

Why does the programmer get to pick for me if I am in that car?
That was the main reason Will Smith's character in "I, Robot" didn't trust robotic AI: it chose to save him over a child because of his odds of survival. It's a very valid question of moral and ethical code.
 
I think we are going to have to agree to disagree.

I am not sure how you hold this opinion when attacks like WannaCry exist that infected thousands of systems. I am not sure how you believe that a computer in a car is any different from a computer on a desk, but it doesn't seem like I will be able to change your mind.

EDIT: And I don't think you will be able to change my mind either ;)

Well, there's a very different level of control for systems like autonomous cars, and there's a limit on what a hacker can and can't do in a real-world situation. A hacker can get in and lock up your files. He can't get in and make your monitor burn out by changing the refresh rate. He can't hack your phone and make the battery overcharge and explode. They can hack into your Nest thermostat, but they can't crank the temp enough to set your house on fire. There are a lot of safety features that would keep a hacker from causing the kind of chaos discussed.

Think about commercial airplanes. They fly autonomously like 95% of the time, pure autopilot. How many of them have been hacked and crashed over the years? Redundant systems, and they always have a human backup. Same with autonomous cars...they have redundant electrical systems, and the driver is always there if things get REALLY bad.

Again, I'm not saying it's IMPOSSIBLE. It WILL happen one day. It's incredibly unlikely though, and the risks presented are incredibly small compared to the risks of driving under the conditions you see today.
 
It's a severe ethical issue that the SJWs will not touch. Tell me, who decides who dies in an accident? The programmer?

Scenario: four 18-year-old students are in an AI car on a divided highway near a residential area when a 5-year-old child runs out in front of the car. The distance is not sufficient for the car to stop in time, and the only viable diversion path is into the barrier. The AI has two choices:

1. Hit the child, killing it instantly.
2. Divert the car into the barrier, killing all occupants due to the high speed involved.

Why does the programmer get to pick for me if I am in that car?

One, let's not assume "SJWs" are the ones who sit around making decisions about how car AI is coded. Can you point to some SJWs who have refused to touch this issue, or are we just name-calling?

Do you think you would be able to make a choice that fast in that situation? Or would a typical person barely have time to recognize something was in the road before they either hit the kid or jerked the wheel sharply and hit the barrier? Did the driver really make a considered choice in that situation, or did they simply jerk the wheel and/or slam the brakes?

Do you think an AI would be FAR faster to apply the brakes and maneuver to avoid that situation in the first place? An AI that wouldn't be distracted by talking to the other kids in the car? Even if the car avoids the child and impacts a barrier, it's very likely the car would have slowed considerably before impact, minimizing the damage and possible injury. Certainly FAR better than a human in a similar scenario.
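The reaction-time advantage is easy to put rough numbers on. A back-of-the-envelope sketch (the ~1.5 s human and ~0.2 s machine reaction times and the 7 m/s² braking deceleration are generic textbook figures, not measurements from any real system):

```python
# Back-of-the-envelope stopping distance: reaction distance plus braking
# distance, d = v * t_react + v^2 / (2 * a). Reaction times and the
# deceleration are generic textbook figures, not data from a real AV.
def stopping_distance_m(speed_kmh, reaction_s, decel_ms2=7.0):
    v = speed_kmh / 3.6                      # km/h -> m/s
    return v * reaction_s + v * v / (2 * decel_ms2)

for label, t_react in [("human, ~1.5 s", 1.5), ("AI, ~0.2 s", 0.2)]:
    d = stopping_distance_m(45 * 1.609, t_react)   # from 45 mph
    print(f"{label}: {d:.0f} m to stop")
# Prints roughly 59 m for the human vs 33 m for the AI from 45 mph.
```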
 
My Garmin GPS tells me the speed limits of the roads and how far it is to the next light or stop sign, all without "seeing" the signs.

Signs are made for people, not robots/AI cars. As a person, I can easily tell the sign from the article is supposed to be a stop sign. I can even interpret the sign that is leaning over after it got hit last night.
Where we're going, we don't need any signs.
 
I'll bet even with these "pranks", you'd have fewer accidents nationwide if all the cars were autonomous.
The accidents that do happen will be legendary, though. I would bet everyone is hoping it happens to the other guy first.
 
The accidents that do happen will be legendary, though. I would bet everyone is hoping it happens to the other guy first.

Meh, I'll bet they've got nothing on the massive accidents people have caused. Hell, people have died in car accidents because they got distracted by a fly in the car.
 
I doubt it. Take a four-way stop and turn all four signs into 45 mph speed limit signs, and you don't leave enough room for the cars to physically stop once they "see" the other cars.
Take away a person's ability to drive a car and leave it all autonomous and there won't be a need for stop signs at all. Think about that one.
 
Someone could hacksaw off all the STOP signs at a four-way; now that would be scary.

With updated maps and GPS, an autonomous vehicle would know what the speed limit is and where stop signs are located. Of course, in a truly networked world the AI car would know where all the other vehicles are located.

Yep, this type of attack can also be carried out on normal drivers by removing or replacing the stop sign.

Should it be fixed? Yes. Does it mean it's the end of autonomous cars? No, quite the contrary, it's why we have researchers.
 
I'll bet even with these "pranks", you'd have fewer accidents nationwide if all the cars were autonomous.

To date, the autonomous vehicle programs that report numbers are not doing better than the average human driver. We think the average human driver is shit, but they aren't; they have a remarkably low incident rate. I don't have my exact numbers in front of me, and they would probably require some adjusting, since autonomous vehicles have both gotten better and had progress set back by overaggressive businessmen pushing shitty algorithms out before they were ready. But if you take the number of incidents per mile driven by humans, AVs are at somewhere around high 80% to low 90% of that performance at this point.... provided they only drive on well-marked roads in good repair, in nothing worse than light rain. Which is pretty shitty, really, given that humans are beating them with every DUI incident, every drowsy-driving incident, and all the shitty weather ever included in those numbers.

They aren't ready yet. If it requires dedicated, universal infrastructure changes to support them, they never will be.
 
To date, the autonomous vehicle programs that report numbers are not doing better than the average human driver. We think the average human driver is shit, but they aren't; they have a remarkably low incident rate. I don't have my exact numbers in front of me, and they would probably require some adjusting, since autonomous vehicles have both gotten better and had progress set back by overaggressive businessmen pushing shitty algorithms out before they were ready. But if you take the number of incidents per mile driven by humans, AVs are at somewhere around high 80% to low 90% of that performance at this point.... provided they only drive on well-marked roads in good repair, in nothing worse than light rain. Which is pretty shitty, really, given that humans are beating them with every DUI incident, every drowsy-driving incident, and all the shitty weather ever included in those numbers.

They aren't ready yet. If it requires dedicated, universal infrastructure changes to support them, they never will be.

Do you have some sources? The numbers I've seen in the past showed a much higher rate of success. I mean, Google's program had only a very small number of accidents in all their testing, and those were all caused by humans in other cars.

Also, remember autonomous vehicles WILL have a human backup, at least for now. So if the weather DOES go to shit, a sensor fails, or the conditions get too confusing, the car can always alert the driver to take the wheel. People ARE better with unexpected situations. Machines are great with stop-and-go traffic, long boring commutes, and driving drunken passengers home.
 
An AI car should obviously never have online connectivity.

They will likely all have connectivity for map updates, system upgrades, etc.

The biggest question is how long the car will be supported (i.e., keeps getting updates), and whether they are going to charge for the updates.
Considering the ridiculous prices most companies charge for a simple map update for the on-board GPS, I'm not hopeful.
 
 