Self-Driving Mercedes Will Be Programmed To Sacrifice Pedestrians

Really? Then why are Google cars getting into accidents? Omniscient my ass. Maybe someday they will come closer to what you seem to think they are capable of today, but that reality will also be partly the result of adapting roadways for self-driving cars, not something the cars bring about all on their own.

I'll tell you one thing I know for a fact: if any of these smart ethical engineers and thinkers had ever been in an accident and killed someone, they wouldn't even consider trying to make the machine "choose". They would only focus on making it avoid. And tell me, if these machines are so damned perfect, then why isn't simple avoidance good enough? Seems like there would never be another accident ever again if they are as super as you seem to believe.

Google cars get into accidents because the computer has not been programmed to react appropriately to that situation, not because it doesn't know or isn't aware of what is happening.

Again: if the computer was aware of its surroundings and it had to choose between the lone person in the crosswalk and the group of people on the sidewalk, how will you program that? Or if it had to choose between hitting something dashing out onto the freeway (let's face it, the computer probably won't recognize the difference between an animal and a human in terms of obstructions) and going into a ditch, potentially killing the driver and other people in the car, how should it choose? These are scenarios you have to program for whether you like it or not, because that is the harsh truth of reality.
 
^ Machines are not self-aware... the computational power is never going to be available for that kind of thinking from a machine...
 
The omniscience of a computer is what prevents your scenario from being possible. A properly designed system will know what is ahead by at least 15 seconds, and within a radius of at least 40 feet. Say someone steps in front of the car, and the only "safe" path is to swerve onto the sidewalk, but there are multiple pedestrians on the sidewalk. A human can claim that he wasn't aware of the people on the sidewalk and was simply reacting without thinking, but a computer cannot claim that. Our technology might not be super sophisticated, but it's not at a basic level either.
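For a sense of scale, here's the simple arithmetic behind a 15-second lookahead horizon (the speeds below are my own illustrative picks, not figures from the post):

```python
# How much road a 15-second lookahead horizon covers at a given speed.
# The speeds are illustrative assumptions, not figures from the post.
for mph in (25, 45, 65):
    fps = mph * 5280 / 3600          # mph -> feet per second
    print(f"{mph} mph: ~{fps:.0f} ft/s -> ~{fps * 15:.0f} ft in 15 s")
# 25 mph: ~37 ft/s -> ~550 ft
# 45 mph: ~66 ft/s -> ~990 ft
# 65 mph: ~95 ft/s -> ~1430 ft
```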

This isn't possible as long as some percentage of drivers are still human. I t-boned the driver's side of a small compact whose driver was trying to make a left out of an apartment complex. No human or computer could have avoided that one. It was entirely his fault: he attempted the left about 10 feet in front of my truck while I was moving at about 40 MPH in the rightmost lane. It was over, computer or otherwise.
 
I know right? That's why you buy a Benz in the first place. :ROFLMAO:

That's what I came here to say. In the end this is what it boils down to.

Then again, it would be "immoral" for Daimler to take your money and then let you die, wouldn't it?

This is one of those times when we're actually gonna need a proper law.
 
Um, I think it would be much better if it was programmed to hit the nearest tree. You can't kill a tree. Why should a pedestrian die 'cause your dumbass is riding in a car you have no control over?
Trees have feelings too!
 
Soon computers will be all-knowing and capable of predicting the future!!!

Sorry but this is how I read your posts.

Are you 12? No one should be that naive.
 
Google cars get into accidents because the computer has not been programmed to react appropriately to that situation, not because it doesn't know or isn't aware of what is happening.

Again: if the computer was aware of its surroundings and it had to choose between the lone person in the crosswalk and the group of people on the sidewalk, how will you program that? Or if it had to choose between hitting something dashing out onto the freeway (let's face it, the computer probably won't recognize the difference between an animal and a human in terms of obstructions) and going into a ditch, potentially killing the driver and other people in the car, how should it choose? These are scenarios you have to program for whether you like it or not, because that is the harsh truth of reality.


The truth is that the first many generations of self-driving cars won't know the difference between a pedestrian, an animal, a motorcycle, a bus, or anything else. It will have "things that show up as solid objects on the radar." They could be anything: a wall, a bus, a moose, or a pedestrian. It will just be trying to do its best to avoid whatever obstruction might be in the way.

It also won't know if it is on a bridge, on an overpass, or anything of that nature. Through a combination of camera interpretation of road lines and GPS geolocation, all it will know is whether it is between the lines or not.

We may someday get to the point where sensor systems are capable of identifying human beings and making the kinds of judgments you suggest, but I think this will be well after the first several generations of self-driving cars are on the roads.
 
^ Machines are not self-aware... the computational power is never going to be available for that kind of thinking from a machine...

Sorry but this is how I read your posts.

Are you 12? No one should be that naive.

Computers are perfectly capable of doing physics calculations much faster than a human can. It's not predicting the future so much as plotting out possible scenarios and executing the most favorable one based on sensory inputs in real time. The computing power exists in what can reasonably fit in a car; the limitation is in the coding.
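As a minimal sketch of what "plotting out possible scenarios and executing the most favorable one" could mean in code (every maneuver name, number, and cost weight here is an invented placeholder, not how any production car is programmed):

```python
# Toy scenario planner: roll each candidate maneuver forward with crude
# 1-D kinematics, then execute the cheapest outcome. All values invented.

def rollout(maneuver, car_v, obstacle_dist, dt=0.1, horizon=3.0):
    """Does this maneuver avoid the obstacle within the time horizon?"""
    decel = {"brake_hard": 8.0, "brake_soft": 3.0, "hold": 0.0}[maneuver]
    x, v, t = 0.0, car_v, 0.0
    while t < horizon and v > 0:
        v = max(0.0, v - decel * dt)
        x += v * dt
        t += dt
    return {"collided": x >= obstacle_dist, "decel": decel}

def pick_maneuver(car_v, obstacle_dist):
    """Collisions dominate the cost; gentler braking breaks ties."""
    def cost(m):
        r = rollout(m, car_v, obstacle_dist)
        return 1e6 * r["collided"] + r["decel"]
    return min(["brake_hard", "brake_soft", "hold"], key=cost)

print(pick_maneuver(car_v=20.0, obstacle_dist=30.0))  # -> 'brake_hard'
```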

The truth is that the first many generations of self-driving cars won't know the difference between a pedestrian, an animal, a motorcycle, a bus, or anything else. It will have "things that show up as solid objects on the radar." They could be anything: a wall, a bus, a moose, or a pedestrian. It will just be trying to do its best to avoid whatever obstruction might be in the way.

It also won't know if it is on a bridge, on an overpass, or anything of that nature. Through a combination of camera interpretation of road lines and GPS geolocation, all it will know is whether it is between the lines or not.

We may someday get to the point where sensor systems are capable of identifying human beings and making the kinds of judgments you suggest, but I think this will be well after the first several generations of self-driving cars are on the roads.

Said radar should be capable of detecting cliffs and guardrails as well. It's not like terrain mapping radar doesn't exist.
 
Said radar should be capable of detecting cliffs and guardrails as well. It's not like terrain mapping radar doesn't exist.

The sensor would have to be mounted pretty far up to get a useful range, unless it depended on pre-mapped or network-based data.
 
Computers are perfectly capable of doing physics calculations much faster than a human can. It's not predicting the future so much as plotting out possible scenarios and executing the most favorable one based on sensory inputs in real time. The computing power exists in what can reasonably fit in a car; the limitation is in the coding.



Said radar should be capable of detecting cliffs and guardrails as well. It's not like terrain mapping radar doesn't exist.


Sure, computers can calculate at amazing speed, but as I have pointed out (and a point you seem to have missed), machines cannot be self-aware, nor will they ever be... As a HUMAN driver I can see MUCH further down the road than any self-driving car can, even in heavy traffic, because I can look *through* the car in front of me and see what the car in front of them is up to. Now granted, I cannot see through a larger vehicle, but I can use other means of judging the traffic in front of, to the sides of, and behind me and adjust accordingly. A computer that can handle even basic questions has to be the size of WATSON... which employs a cluster of ninety IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor with four threads per core. In total, the system has 2,880 POWER7 processor threads and 16 terabytes of RAM.

Just something to think about....
 
Sure, computers can calculate at amazing speed, but as I have pointed out (and a point you seem to have missed), machines cannot be self-aware, nor will they ever be... As a HUMAN driver I can see MUCH further down the road than any self-driving car can, even in heavy traffic, because I can look *through* the car in front of me and see what the car in front of them is up to. Now granted, I cannot see through a larger vehicle, but I can use other means of judging the traffic in front of, to the sides of, and behind me and adjust accordingly. A computer that can handle even basic questions has to be the size of WATSON... which employs a cluster of ninety IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor with four threads per core. In total, the system has 2,880 POWER7 processor threads and 16 terabytes of RAM.

Just something to think about....

So what? Why does a self-driving car need to know what is happening 5 minutes down the road? Or even 1 minute? There is only one legitimate reason a self-driving car would need to do that, and that is to get to its destination as quickly as possible by weaving in and out of traffic. The farthest a self-driving car would need to see is the distance it takes to come to a complete stop.
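For reference, that "see as far as you need to stop" distance is basic kinematics. A quick sketch (the friction coefficient and reaction latency are assumed values, chosen only for illustration):

```python
# Stopping distance = reaction distance + braking distance (v^2 / 2*mu*g).
# mu (tire-road friction) and the 0.2 s sensing latency are assumptions.
def stopping_distance_m(speed_kmh, mu=0.7, reaction_s=0.2, g=9.81):
    v = speed_kmh / 3.6                       # km/h -> m/s
    return v * reaction_s + v * v / (2 * mu * g)

for kmh in (50, 100, 130):
    print(f"{kmh} km/h -> ~{stopping_distance_m(kmh):.0f} m")
# 50 km/h -> ~17 m, 100 km/h -> ~62 m, 130 km/h -> ~102 m
```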

A self-driving car will maintain proper following distances on its own. It won't get frustrated when a driver cuts in front of it. If maximizing fuel efficiency is the goal, a self-driving hybrid or electric car can automatically adjust its driving pattern so that only regenerative braking is used and minimal power is lost to friction braking.
 
One day you will figure it out... but I will give you a hint: it's called accident mitigation... If I see someone slow down further up the road, I can react by letting off the gas, or if I see someone pull out (unlike a Tesla, mind you), I can react, apply the brakes, and calculate a way out before I even move my foot... Automated systems can only see what is DIRECTLY in front of them, DIRECTLY to the side (if equipped), and DIRECTLY to the rear (again, if equipped); they can do nothing to anticipate things the way people can...

Again, one day you will figure it out... hopefully very, very soon. For all of those fancy-schmancy electronics the Tesla has, it could not see the semi that pulled out in front of it... if the DRIVER of the car had been paying attention, the accident would NEVER have happened...
 
Sure, computers can calculate at an amazing speed but as I have pointed out (and a point you seemed to have missed) machines cannot be self aware nor will they ever be... as a HUMAN driver I can see MUCH further down the road than any self driving car can even in heavy traffic as I can look *through* the car in front of me and see what the car in front of them is up to. Now granted I cannot see through a larger vehicle but I can use other means of judging traffic in front of, to the sides of, and behind me and adjust accordingly. for computers to handle even basic questions has to be the size of WATSON... which employs a cluster of ninety IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor, with four threads per core. In total, the system has 2,880 POWER7 processor threads and 16 terabytes of RAM.[20]

Just something to think about....

Actually, self-driving cars can see through the car in front by looking through its windows too. They do have normal cameras so they can read things like signs and lights. They simply aren't programmed to deal with that data.

As for your point about processors: you build for the workload. You can talk about its power all day, but do you think a POWER7 is going to be a better processor for graphics than, say, an Nvidia or AMD chip? It simply wasn't designed for autonomous-vehicle work, which is why it'd be a stupid choice over something like an Nvidia Drive PX 2.

I don't need a refrigerator sized IBM mainframe to do the same job as my small desktop computer, when it comes to surfing the web.
 
Computers are perfectly capable of doing physics calculations much faster than a human can. It's not predicting the future as much as plotting out possible scenarios and executing the most favorable one based on sensory inputs in real time. The computing power exists in what can reasonably fit in a car, the limitation is in the coding.

Computers can't do calculations when there's no data. Sensory inputs, maps, and real-time feeds can't predict the future. Unexpected things happen. If you design such an important system without taking this fact into consideration, you're doing it wrong.

The system you imagine runs on magic, plain and simple.
 
Computers can't do calculations when there's no data. Sensory inputs, maps, and real-time feeds can't predict the future. Unexpected things happen. If you design such an important system without taking this fact into consideration, you're doing it wrong.

So you think a human would be able to detect something a computer cannot, let alone react to it? Physics modeling is simple: Object A has a velocity that puts it in the path of the car; plot ways to avoid it and execute the best option.
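A bare-bones version of that check, assuming constant 2-D velocities (the positions, speeds, and safety radius below are invented for the example):

```python
# Closest-point-of-approach test: will object A pass within a safety
# radius of the car? Constant-velocity assumption; all numbers invented.

def time_of_closest_approach(rel_pos, rel_vel):
    """t minimizing |rel_pos + rel_vel * t|, clamped to now if receding."""
    vv = rel_vel[0] ** 2 + rel_vel[1] ** 2
    if vv == 0:
        return 0.0
    return max(0.0, -(rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / vv)

def on_collision_course(car_pos, car_vel, obj_pos, obj_vel, radius=2.0):
    rel_pos = (obj_pos[0] - car_pos[0], obj_pos[1] - car_pos[1])
    rel_vel = (obj_vel[0] - car_vel[0], obj_vel[1] - car_vel[1])
    t = time_of_closest_approach(rel_pos, rel_vel)
    dx = rel_pos[0] + rel_vel[0] * t
    dy = rel_pos[1] + rel_vel[1] * t
    return (dx * dx + dy * dy) ** 0.5 < radius

# Car heading +x at 20 m/s; pedestrian 40 m ahead, crossing at 1.5 m/s.
print(on_collision_course((0, 0), (20, 0), (40, -3), (0, 1.5)))  # True
```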

One day you will figure it out... but I will give you a hint: it's called accident mitigation... If I see someone slow down further up the road, I can react by letting off the gas, or if I see someone pull out (unlike a Tesla, mind you), I can react, apply the brakes, and calculate a way out before I even move my foot... Automated systems can only see what is DIRECTLY in front of them, DIRECTLY to the side (if equipped), and DIRECTLY to the rear (again, if equipped); they can do nothing to anticipate things the way people can...

Again, one day you will figure it out... hopefully very, very soon. For all of those fancy-schmancy electronics the Tesla has, it could not see the semi that pulled out in front of it... if the DRIVER of the car had been paying attention, the accident would NEVER have happened...

One day you will figure out that a human is not always better than a computer. Tesla's accident was due to a programming flaw, which is a human error. Humans make design flaws, and humans also make poor choices when driving. Also, a self-driving car would obviously have sensors in all directions; otherwise, how can it park itself? How can it safely change lanes? You're just blinded, refusing to believe that anything can take over driving.

A properly designed system would not follow the car in front so closely that it cannot avoid a major accident. It would not get frustrated when someone cuts in front of it. It'll slow down, regain that safe distance, and continue on its way, all without making the irrational, emotion-based decisions humans often make. It won't get distracted, and it won't fall asleep. It won't get dementia, and it won't mistake the gas for the brake.

FYI, I do enjoy driving. One of the most fun things I've done was drive along the California coast highway at 100+ mph in my Miata at 1 AM. I like cars, and I prefer having a true manual because it means I have more control over my car. That doesn't mean I am blind to what a self-driving system can do.
 
 
So you think a human would be able to detect something a computer cannot, let alone react to it? Physics modeling is simple: Object A has a velocity that puts it in the path of the car; plot ways to avoid it and execute the best option.



One day you will figure out that a human is not always better than a computer. Tesla's accident was due to a programming flaw, which is a human error. Humans make design flaws, and humans also make poor choices when driving. Also, a self-driving car would obviously have sensors in all directions; otherwise, how can it park itself? How can it safely change lanes? You're just blinded, refusing to believe that anything can take over driving.

A properly designed system would not follow the car in front so closely that it cannot avoid a major accident. It would not get frustrated when someone cuts in front of it. It'll slow down, regain that safe distance, and continue on its way, all without making the irrational, emotion-based decisions humans often make. It won't get distracted, and it won't fall asleep. It won't get dementia, and it won't mistake the gas for the brake.

FYI, I do enjoy driving. One of the most fun things I've done was drive along the California coast highway at 100+ mph in my Miata at 1 AM. I like cars, and I prefer having a true manual because it means I have more control over my car. That doesn't mean I am blind to what a self-driving system can do.

Totally missed it, but hey, it was expected, so...

Tesla's accident was due to Tesla enabling something it had no business enabling... but hey, the NHTSA is pretty much spineless here, along with several others... That was quite a programming bug that allowed the system to totally miss an object the size of a SEMI despite having a RADAR system... WTFudgesickle? REALLY!!!

Glad you enjoy speeding like that at night; it just goes to show that you really should not be driving in the first place...
 
So you think a human would be able to detect something a computer cannot, let alone react to it?

No. Both would suck.

Physics modeling is simple: Object A has a velocity that puts it in the path of the car; plot ways to avoid it and execute the best option.

Again with the magic. I am done with you.
 
Totally missed it, but hey, it was expected, so...

Tesla's accident was due to Tesla enabling something it had no business enabling... but hey, the NHTSA is pretty much spineless here, along with several others... That was quite a programming bug that allowed the system to totally miss an object the size of a SEMI despite having a RADAR system... WTFudgesickle? REALLY!!!

Glad you enjoy speeding like that at night; it just goes to show that you really should not be driving in the first place...

Good to see you're still as ignorant as ever.

It was a Tesla programming bug. The bug was that the camera and the radar systems both had to agree that a crash was imminent. In this case the radar said obstacle but the camera said no obstacle, so the system overrode the radar. Again, the fault isn't with the hardware's capabilities; it's with the human programming. The hardware is ready for self-driving cars, and it has huge potential to be safer and better than human drivers. The slow part is the software.
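In sketch form, the gating difference being described is roughly this (Tesla's actual Autopilot logic is not public, so treat it purely as an illustration):

```python
# Illustrative sensor gating only; the real Autopilot logic is not public.

def should_brake_and_gate(radar_hit, camera_hit):
    """Both sensors must agree -> a camera miss vetoes a real radar hit."""
    return radar_hit and camera_hit

def should_brake_or_gate(radar_hit, camera_hit):
    """Either sensor suffices -> safer, at the cost of false-positive braking."""
    return radar_hit or camera_hit

# The crash scenario: radar said obstacle, camera said clear.
print(should_brake_and_gate(True, False))  # False -> no braking (the bug)
print(should_brake_or_gate(True, False))   # True  -> would have braked
```

The trade-off is real: OR-gating a noisy radar means phantom braking under overpasses and overhead signs, which is presumably why the camera got veto power.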

Additionally, the highway was completely empty, and I only drove that fast on the straight parts with clear sight, slowing down well before the curves and taking them at about 5-10 mph above the speed limit. Not everyone who speeds is doing so unsafely, and the fact that you can't imagine otherwise shows how ignorant you really are about driving.

No. Both would suck.



Again with the magic. I am done with you.

I guess computer-guided missiles capable of hitting fighter jets flying above the speed of sound haven't existed for the past 20+ years then. Good to know. Let me go tell the US Air Force that their missiles are useless and they should go back to guns like in WWII.
 
Good to see you're still as ignorant as ever.

It was a Tesla programming bug. The bug was that the camera and the radar systems both had to agree that a crash was imminent. In this case the radar said obstacle but the camera said no obstacle, so the system overrode the radar. Again, the fault isn't with the hardware's capabilities; it's with the human programming. The hardware is ready for self-driving cars, and it has huge potential to be safer and better than human drivers. The slow part is the software.

Additionally, the highway was completely empty, and I only drove that fast on the straight parts with clear sight, slowing down well before the curves and taking them at about 5-10 mph above the speed limit. Not everyone who speeds is doing so unsafely, and the fact that you can't imagine otherwise shows how ignorant you really are about driving.



I guess computer-guided missiles capable of hitting fighter jets flying above the speed of sound haven't existed for the past 20+ years then. Good to know. Let me go tell the US Air Force that their missiles are useless and they should go back to guns like in WWII.


That's not a bug, that's a programming error; the default behavior should have been for the radar to override when it detected a large mass.

And good job with the name-calling...

As for radar-guided missiles, they can typically be defeated with a hard break right or left by a plane that can turn inside the missile's turning radius, causing it to lose lock, or you can use chaff... talk about limited knowledge... or if you have something like an SR-71, you just outfly it... by being at 100K ft and flying near Mach 4.

Now, on to your idiot driving:

Limited sight distance, someone turning off a side road, a deer, etc., and you would have been toast... talk about not being aware of the very real possibilities...

You just cannot seem to grasp that technology is not infallible even when used by trained professionals (ask those Airbus pilots how they died on that Air France flight because they did not understand what the word STALL meant in their situation).

BTW, in close-quarters combat missiles are useless, as they need a minimum distance to arm... and that's why all really good fighters come with these things called GUNS/CANNONS.
 
Marketing. If I were selling self-driving cars to skittish early adopters I'd make the same pitch: Our car is programmed to protect YOU.
 
So all of a sudden the car has no impact protection, seatbelts, airbags, or crumple zones, such that it has to kill people instead of taking a crash at legal speeds?
 
I actually kind of applaud Mercedes for making this decision. Trying to make a car think like a human is:
  1. Extremely difficult, and lacking a golden example to model after.
  2. Stupid, because humans make really bad, illogical decisions everyday.
Their solution is self-organizing and predictable, like a train on tracks. Think of how simple it is for fault to be assigned in train-related deaths: either the safety mechanisms failed, a victim made a bad decision, or an operator didn't operate correctly. It seems to me that if Mercedes' attitude is adopted widely, the dangers of driving become far less complex.
 
That little app is completely unrealistic. There's no way to know if someone is a criminal/elderly/female/etc. while traveling at a speed from which you can't physically stop.
Maybe you misunderstood the purpose of that exercise and the moral dilemmas that designers deal with.
As stated it is "A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars."
Moral Machine
 
That's not a bug, that's a programming error; the default behavior should have been for the radar to override when it detected a large mass.

And good job with the name-calling...

As for radar-guided missiles, they can typically be defeated with a hard break right or left by a plane that can turn inside the missile's turning radius, causing it to lose lock, or you can use chaff... talk about limited knowledge... or if you have something like an SR-71, you just outfly it... by being at 100K ft and flying near Mach 4.

Now, on to your idiot driving:

Limited sight distance, someone turning off a side road, a deer, etc., and you would have been toast... talk about not being aware of the very real possibilities...

You just cannot seem to grasp that technology is not infallible even when used by trained professionals (ask those Airbus pilots how they died on that Air France flight because they did not understand what the word STALL meant in their situation).

BTW, in close-quarters combat missiles are useless, as they need a minimum distance to arm... and that's why all really good fighters come with these things called GUNS/CANNONS.

My insufficiently caffeinated response to the above: they carry about 4 seconds' worth of ammo, so I'd expect the cannons are a last resort. Most fighters will not have them, to preserve weight, as dogfighting starts at the horizon and more fuel is more important than more dakka. The minimum distance to arm is to protect the launch vehicle only; they could fire hot if they wanted to take the risk. Missiles also don't need to hit the target to take it out; the blast radius can be sufficient, so coding them to detonate during a hard break is possible. Not endangering the person who launched them would be a good reason not to, though. They also use lots of different kinds of target differentiation; I don't know why everybody is harping on RADAR. Missiles are also the reason self-driving tech isn't likely to be allowed: self-driving bombs. Civilian FLIR is still limited to 240 LoR because of military restrictions for similar reasons.
 
Most of us here know something about software and computers. Most of us here are aware that all software has bugs and is to some extent flawed. Knowing this and still believing that self-driving cars will quickly replace human-controlled cars represents (imo) a suspension of reality on the part of many here.

There will be dramatic failures and a massive resistance by the American People to self-driving cars (think: glass-holes). The car culture will live on for many decades...
 
That's not a bug, that's a programming error; the default behavior should have been for the radar to override when it detected a large mass.

And good job with the name-calling...

As for radar-guided missiles, they can typically be defeated with a hard break right or left by a plane that can turn inside the missile's turning radius, causing it to lose lock, or you can use chaff... talk about limited knowledge... or if you have something like an SR-71, you just outfly it... by being at 100K ft and flying near Mach 4.

Now, on to your idiot driving:

Limited sight distance, someone turning off a side road, a deer, etc., and you would have been toast... talk about not being aware of the very real possibilities...

You just cannot seem to grasp that technology is not infallible even when used by trained professionals (ask those Airbus pilots how they died on that Air France flight because they did not understand what the word STALL meant in their situation).

BTW, in close-quarters combat missiles are useless, as they need a minimum distance to arm... and that's why all really good fighters come with these things called GUNS/CANNONS.

Wow, I can't believe you're so thin-skinned as to take "ignorant" as name-calling. Also, way to miss the point completely in your desire to prove me wrong. Point is, radar tracking and following has existed for years; it is not new technology.

Technology told those pilots what was happening, but technology wasn't allowed to intervene to correct the situation. That is a human failure and an invalid comparison.

A deer is unavoidable whether you're doing 65 mph or 100 mph. Cars on the side of the road? High beams, reflectors, and hazards exist for a reason. Someone turning from a side road? They should be properly watching for cars. Pulling out in front with too little room means an accident regardless of speed.

Most of us here know something about software and computers. Most of us here are aware that all software has bugs and is to some extent flawed. Knowing this and still believing that self-driving cars will quickly replace human-controlled cars represents (imo) a suspension of reality on the part of many here.

There will be dramatic failures and a massive resistance by the American People to self-driving cars (think: glass-holes). The car culture will live on for many decades...

Never claimed that the system was ready today. But it will be ready, and when it is, it will be better than most, if not all human drivers at safe driving.
 
Never claimed that the system was ready today. But it will be ready, and when it is, it will be better than most, if not all human drivers at safe driving.

I agree. But I hope I'm dead before that happens. I love cars and driving. I hate the idea of pay-per-use or subscription "Ubering" a robot car and giving up car ownership. I don't mind vehicle maintenance and even enjoy some repair work. Finally, I find possibly dying in a car less terrifying if it's by my own mistake rather than something I just sit back and watch happen with no control.
 
Google cars get into accidents because the computer has not been programmed to react appropriately to that situation, not because it doesn't know or isn't aware of what is happening.

Again: if the computer was aware of its surroundings and it had to choose between the lone person in the crosswalk and the group of people on the sidewalk, how will you program that? Or if it had to choose between hitting something dashing out onto the freeway (let's face it, the computer probably won't recognize the difference between an animal and a human in terms of obstructions) and going into a ditch, potentially killing the driver and other people in the car, how should it choose? These are scenarios you have to program for whether you like it or not, because that is the harsh truth of reality.

There's that arrogance showing again.

Listen to what you are saying: "had to choose". Why must the machine make a choice at all? Why can't it just follow its avoidance algorithms, and if it still hits something, then damn, it really was just an accident, and you know what? Accidents do happen.

I do not doubt that some day these cars will do a better job than humans at accident avoidance. I don't even question that, it's not the point.

The point is that just doing a much better job of avoiding accidents and saving many, many lives isn't good enough for some people. Some people want to believe that choosing who is going to get hit, who is likely to die, is something they can do.

Let's take it from another angle: two different autonomously driven cars are presented with a bad situation and both must react according to their programming. One "decides" it can't avoid hitting something, and since it is programmed to minimize loss of life, it "decides" to hit a vehicle that has 1 passenger instead of another vehicle that has 4 passengers. The other car simply tries to avoid hitting anything, fails, and hits a vehicle with 2 passengers.

In the first collision, the single passenger in the vehicle is killed. In the second collision one of the two passengers dies.

Now in the second collision, it is determined that it was an accident. No one was negligent, shit happens.

But in the first collision, although no fault was found for what caused the situation, how do you answer these questions?

"Did the autonomous system cause the collision that killed Mr. X?" And there is only one answer, "Yes, it correctly followed it's programing and the manufacturer determined that minimizing loss of life was a priority, the vehicle determined it would be better to hit the car that only had a single occupant".

"So the vehicle decided to hit Mr.X's vehicle on purpose?" .... "Yes"

Now I am hoping that you see the problem here. I can tell you, as a person who has actually hit two children and killed one of them, this entire premise is folly.

In both cases above you have a similar outcome, and an observer might say there was no actual difference in how the two vehicles reacted and behaved. Same situation, same results, but at its most basic level there is a difference, and it isn't academic.
 
That's what I came here to say. In the end this is what it boils down to.

Then again, it would be "immoral" for Daimler to take your money and then let you die, wouldn't it?

This is one of those times when we're actually gonna need a proper law.
I'm guessing they'll bet that you want an autonomous car more than they want to eat lawsuits from pedestrians, call your bluff, and have you sign a EULA or similar that minimizes what action you can take against them.
 
I'm guessing they'll bet that you want an autonomous car more than they want to eat lawsuits from pedestrians, call your bluff, and have you sign a EULA or similar that minimizes what action you can take against them.

And that is going to be another reason why simple avoidance is all they should try to do on this issue. In a courtroom it's going to be a much easier sell to a jury that their car did everything correctly and just couldn't avoid the impact, versus trying to defend a decision-based algorithm designed to weigh loss-of-life values.
 
There's that arrogance showing again.

Listen to what you are saying: "had to choose". Why must the machine make a choice at all? Why can't it just follow its avoidance algorithms, and if it still hits something, then damn, it really was just an accident, and you know what? Accidents do happen.

I do not doubt that some day these cars will do a better job than humans at accident avoidance. I don't even question that, it's not the point.

The point is that just doing a much better job of avoiding accidents and saving many, many lives isn't good enough for some people. Some people want to believe that choosing who is going to get hit, who is likely to die, is something they can do.

Let's take it from another angle: two different autonomously driven cars are presented with a bad situation and both must react according to their programming. One "decides" it can't avoid hitting something, and since it is programmed to minimize loss of life, it "decides" to hit a vehicle that has 1 passenger instead of another vehicle that has 4 passengers. The other car simply tries to avoid hitting anything, fails, and hits a vehicle with 2 passengers.

In the first collision, the single passenger in the vehicle is killed. In the second collision one of the two passengers dies.

Now in the second collision, it is determined that it was an accident. No one was negligent, shit happens.

But in the first collision, although no fault was found for what caused the situation, how do you answer these questions?

"Did the autonomous system cause the collision that killed Mr. X?" And there is only one answer, "Yes, it correctly followed it's programing and the manufacturer determined that minimizing loss of life was a priority, the vehicle determined it would be better to hit the car that only had a single occupant".

"So the vehicle decided to hit Mr.X's vehicle on purpose?" .... "Yes"

Now I am hoping that you see the problem here. I can tell you, as a person who has actually hit two children and killed one of them, this entire premise is folly.

In both cases above you have a similar outcome, and an observer might say there was no actual difference in how the two vehicles reacted and behaved. Same situation, same results, but at its most basic level there is a difference, and it isn't academic.

Plank of Carneades. Crappy situations happen.

Imagine your scenario where two are killed instead of one. The question then becomes whether the self-driving car was aware that the consequences of its actions would lead to more casualties. It has to be, because if all you program is simple avoidance, what happens when it tries to avoid a deer by swerving into the opposing lane and runs into a car? Wouldn't the better course of action be to run into the deer? There is no way a self-driving car can be programmed without considering scenarios several steps ahead. And then it will be asked why the car chose the path that led to more damage. Either way, it's not as simple as you try to make it.
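To make that concrete: even "simple avoidance" ends up ranking outcomes the moment more than one path exists. A toy sketch of the deer example, with weights invented purely for illustration:

```python
# Even "simple avoidance" implies a ranking once several paths exist.
# The outcome costs are invented for illustration, not proposed values.

OUTCOME_COST = {
    "clean_miss": 0,
    "hit_deer": 10,           # property damage, possible injury
    "hit_oncoming_car": 100,  # likely human casualties
}

def choose(paths):
    """paths: list of (maneuver, predicted_outcome); pick the cheapest."""
    return min(paths, key=lambda p: OUTCOME_COST[p[1]])

options = [("swerve_left", "hit_oncoming_car"), ("stay_in_lane", "hit_deer")]
print(choose(options))  # ('stay_in_lane', 'hit_deer')
```

Whether you call that weighting "ethics" or just "avoidance" is exactly the disagreement in this thread.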
 
I believe in fiduciary driving duty by my car on my behalf. Protect me above all others. Period.

Would be hilarious to ask a salesman if the car he was selling me would be my fiduciary in an accident.
 
I say good - I sure as hell wouldn't buy a car that's pre-programmed to kill me.
 
Plank of Carneades. Crappy situations happen.

Imagine your scenario where two are killed instead of one. The question then becomes whether the self-driving car was aware that the consequences of its actions would lead to more casualties. It has to be, because if all you program is simple avoidance, what happens when it tries to avoid a deer by swerving into the opposing lane and runs into a car? Wouldn't the better course of action be to run into the deer? There is no way a self-driving car can be programmed without considering scenarios several steps ahead. And then it will be asked why the car chose the path that led to more damage. Either way, it's not as simple as you try to make it.

I told you what would happen; it's called an accident.

And the car is still supposed to be aware of such things. Look, don't overthink it. Put yourself in a programmer's shoes: it's on your shoulders how this is going to play out, and you decide that the car is going to run a decision-making algorithm designed to minimize loss of life. Sounds like a smart move, but when the names of the dead are in the papers, you are the one who has to face the fact that your actions chose their deaths. Try to salve your conscience with your justification of the greater good or least harm; you will still have to live with it, knowing you programmed the decision in advance to make a choice.

On the other hand, you can program a highly effective collision avoidance system, and when you see those names in the paper you'll be able to look at the stats of how many you have saved, knowing that at least you didn't fall for this lunacy of trying to make choices.

I am telling you from firsthand experience: you kill a kid, even if it was a complete accident and no one other than the parents would hold you at fault, you will still hold yourself responsible and you will pay for it your entire life. It's hard enough for me to live with it even knowing I didn't do anything wrong and there was nothing I could do to stop it.

I would rather eat a bullet than program that code.
 