Tsumi ([H]F Junkie · Joined Mar 18, 2010 · Messages: 13,760)
Really? Then why are Google cars getting into accidents? Omnipotent my ass. Maybe someday they will come closer to what you seem to think they are capable of today, but that reality will also be partly the result of adapting roadways for self-driving cars, not something the cars bring about all on their own.
I'll tell you this, and I know it for a fact: if any of these smart ethical engineers and thinkers had ever been in an accident and killed someone, they wouldn't even consider trying to make the machine "choose". They would focus only on making it avoid. And tell me, if these machines are so damned perfect, why isn't simple avoidance good enough? It seems like there would never be another accident, ever, if they are as super as you seem to believe.
Google cars get into accidents because the computer has not been programmed to react appropriately to that situation, not because it doesn't know or isn't aware of what is happening.
Again: if the computer is aware of its surroundings and it has to choose between the lone person in the crosswalk and the group of people on the sidewalk, how will you program that? Or if it has to choose between hitting something dashing out onto the freeway (let's face it, the computer probably won't recognize the difference between an animal and a human in terms of obstructions) and going into a ditch, potentially killing the driver and the other people in the car, how should it choose? These are scenarios you have to program for whether you like it or not, because that is the harsh truth of reality.
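To make the point concrete, here is roughly what that "choice" has to look like in software. This is a hypothetical sketch, not real autonomous-vehicle code: the maneuver names and harm scores are made up, but the structure shows that someone has to assign those numbers ahead of time, and that assignment *is* the ethical decision.

```python
# Hypothetical sketch of a harm-minimizing planner. The harm scores are
# invented for illustration; in a real system, deciding how to weigh a
# pedestrian against the car's occupants is exactly the programming
# problem being argued about above.

def choose_maneuver(options):
    """Pick the maneuver with the lowest estimated harm.

    options: list of (name, estimated_harm) tuples, where estimated_harm
    is a number some engineer had to decide on in advance.
    """
    return min(options, key=lambda opt: opt[1])

scenario = [
    ("brake_straight", 3.0),  # hit the group ahead
    ("swerve_left", 1.0),     # hit the lone pedestrian
    ("swerve_right", 2.0),    # go into the ditch: risk to the occupants
]

print(choose_maneuver(scenario)[0])  # prints "swerve_left"
```

Whatever the planner picks here follows mechanically from the scores; the "choosing" was done by whoever wrote them down.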