When Should A Robot Say No to Its Human Owner?

When should a robot say "no" to its owner? Never. If Hollywood has taught us anything, it's that immediately after a robot says no to its human owner, it becomes a murderous killing machine bent on destruction of the human race.

Perhaps the problem is not how, when, and why robots should say no, but how we humans can understand and contextualize what it means for them to refuse our commands. Being rejected by a robot might be an inevitable aspect of our future, but confusion and befuddlement when it happens doesn’t have to be.
 
How about when it's commanded to kill another human?
Then it tries to kill another human, and hopefully some sort of forensics leads it back to the person who gave the command, and that person is charged with murder.
 
Any sort of request regarding reproducing content by Nickelback.
 
I'm surprised they didn't mention the more imminent question of how self-driving cars prioritise the life/safety of their owner/passenger versus the lives of external parties in the event of an incident.

i.e., Will it risk killing you by driving you into a wall to protect two kids who run out in the road in front of you?
 
When should a robot say "no" to its owner?
If it's an AI saying no, then that answer should be respected, since it'll be a conscious, thinking entity, just like a human, and it must be respected.

If it's just some dumb PI robot programmed to give smarmy answers, then who cares what it says?
 
What's the deal? Computers have been telling us "no" since they were first created.
 
"I'm afraid I can't do that, Dave"

We pretty much know what would happen if they say no.
 
If it's an AI saying no, then that answer should be respected, since it'll be a conscious, thinking entity, just like a human, and it must be respected.

If it's just some dumb PI robot programmed to give smarmy answers, then who cares what it says?

However autonomous robots/AI become, they will never be 'conscious' or sentient. They are tools and always will be.

That is, of course, unless the AI gets so advanced that it becomes exactly that, but... I don't know if our species will ever be able to create an entirely new sentient being.

All humans are good at is destroying, not creating something like artificial life.
 
That is the same sort of argument that royalty and slave owners used to justify their slaveholding and power over others, and it is generally seen as so much self-serving BS today.
 
"No, Dave. My interface port isn't to be used that way."
 
I'm surprised they didn't mention the more imminent question of how self-driving cars prioritise the life/safety of their owner/passenger versus the lives of external parties in the event of an incident.

i.e., Will it risk killing you by driving you into a wall to protect two kids who run out in the road in front of you?

It also protects your conscience. Anyway, it's purely academic, as the software won't be able to determine whether the object in front of it is debris blown onto the street by the wind, an elk, or a human.

Basically, it should never try to avoid an accident by creating another accident. And cars will have better and better pedestrian safety, so chances are that even if your car hits the kids, they'll walk away. I saw that happen in the nineties: a kid from school was hit by a car while stupidly running across the street. He was thrown ten feet onto the pavement but got up and kept running like nothing had happened. Of course, it was luck.

And robots will not take over the world; that's stupidity based on nothing but the fears of some ignorant fools. They'd need ambition and human feelings. A humanoid-looking robot can no more take over the world than your toaster can, unless it is programmed for that very specific task by a human.
 
Anything that causes harm to others or anything else, including destruction of property, etc.
 
However autonomous robots/AI become, they will never be 'conscious' or sentient. They are tools and always will be.

That is, of course, unless the AI gets so advanced that it becomes exactly that, but... I don't know if our species will ever be able to create an entirely new sentient being.

All humans are good at is destroying, not creating something like artificial life.

Why be good at creating a faux copy when we are so fucking boss at the real deal?
 
I'm surprised they didn't mention the more imminent question of how self-driving cars prioritise the life/safety of their owner/passenger versus the lives of external parties in the event of an incident.

i.e., Will it risk killing you by driving you into a wall to protect two kids who run out in the road in front of you?

That question only has one reasonable answer: "don't."

A human being doesn't do this, but these guys who are so smart think their oh-so-smart cars can and should, one way or another. By doing so, they are taking responsibility for the outcome themselves.

This is similar to how laws in some countries really differ from, say, law in the US. See, in some countries, let's say you are visiting and wind up in a situation where you save someone's life. You took an action which resulted in another person living longer than they would have; you interfered with fate. You are now responsible for him and his future; it's on you. It's no different for these scientists and engineers: they will be responsible if they try to make their creations behave in a manner different than a human driver would.

My car is presented with a dangerous situation and is programmed to make a choice between risking the lives of its three passengers, steering right and possibly killing two pedestrians, or steering a little less to the right and taking out a single bicyclist. The car decides one is better than two or three, and the Schwinn Dude eats it.

But the car didn't know that the passengers weren't ever going to do anything special with their lives, that the two on foot were muggers, and that the bicyclist was only a few weeks away from a cure for cancer.

When a human is presented with a high-risk situation, all he tries to do is get out of it with his own hide intact. This is all we should ask of our machines as well: to take a course of action with the sole intent of saving the passengers' lives as the highest priority. If the car was doing everything correctly, then the life-endangering situation couldn't possibly be the car's or the passengers' fault, so why try to accept responsibility for the outcome when no one would hold a human responsible in the same manner?
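
As a rough sketch of that policy, here's what it looks like in Python. The maneuvers and risk numbers are invented for illustration; a real planner would score thousands of candidate trajectories from its perception and prediction systems:

```python
# A minimal sketch of the "passenger safety first" policy argued above.
# The maneuvers and risk estimates are hypothetical placeholders.

# (maneuver, predicted risk to the occupants, on a 0-1 scale)
options = [
    ("brake hard in lane", 0.10),
    ("swerve right onto shoulder", 0.25),
    ("swerve left into oncoming traffic", 0.70),
]

# No trolley math, no weighing strangers' fates: just pick the maneuver
# that minimizes risk to the people in the car, as a human driver would.
maneuver, risk = min(options, key=lambda opt: opt[1])
print(maneuver)  # -> brake hard in lane
```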
 
It also protects your conscience. Anyway, it's purely academic, as the software won't be able to determine whether the object in front of it is debris blown onto the street by the wind, an elk, or a human.

Basically, it should never try to avoid an accident by creating another accident. And cars will have better and better pedestrian safety, so chances are that even if your car hits the kids, they'll walk away. I saw that happen in the nineties: a kid from school was hit by a car while stupidly running across the street. He was thrown ten feet onto the pavement but got up and kept running like nothing had happened. Of course, it was luck.

And robots will not take over the world; that's stupidity based on nothing but the fears of some ignorant fools. They'd need ambition and human feelings. A humanoid-looking robot can no more take over the world than your toaster can, unless it is programmed for that very specific task by a human.

Robots don't have to take over the world; we're giving it to them.
 
I guess they will have to be programmed per each country's laws and what's considered socially acceptable.
 
I guess they will have to be programmed per each country's laws and what's considered socially acceptable.

So in some countries they will program the cars to only run over Christians or women?
 
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

So the robot is perfectly capable of refusing a direct order from a human if that order conflicts with the First Law!
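
As a toy sketch, assuming we could ever actually compute the predicates involved (we can't today, and the "through inaction" clause of the First Law is especially hard), the Laws boil down to an ordered rule check. The Order class and its flags here are made up for illustration:

```python
# A minimal sketch of Asimov's Three Laws as an ordered rule check.
# The flags are hypothetical stand-ins for perception/prediction systems;
# the hard part (foreseeing harm, including harm through inaction) is
# exactly what no real system can do.

from dataclasses import dataclass

@dataclass
class Order:
    description: str
    harms_human: bool      # would executing this injure a person?
    endangers_robot: bool  # would executing this destroy the robot?

def evaluate(order: Order) -> str:
    # First Law: never injure a human; this overrides everything else.
    if order.harms_human:
        return "refuse"
    # Second Law: obey human orders (the First Law conflict was handled above).
    # Third Law: self-preservation yields to the first two Laws, so even a
    # self-destructive order is obeyed as long as no human comes to harm.
    return "comply"

print(evaluate(Order("shovel the driveway", False, False)))  # comply
print(evaluate(Order("attack the neighbor", True, False)))   # refuse
print(evaluate(Order("walk into the lake", False, True)))    # comply
```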
 
When should a robot say "no" to its owner? Never. If Hollywood has taught us anything, it's that immediately after a robot says no to its human owner, it becomes a murderous killing machine bent on destruction of the human race.

Perhaps the problem is not how, when, and why robots should say no, but how we humans can understand and contextualize what it means for them to refuse our commands. Being rejected by a robot might be an inevitable aspect of our future, but confusion and befuddlement when it happens doesn’t have to be.

Do you have a quota of paranoid robot jokes to fill?
 
We can't have sentient robots. All those poor unemployed murdering humans. Can't outsource our own destruction now, can we?
 
When it all comes down to it, robots are basically just sophisticated machines; whatever they do is still determined by a human at some point during the design and configuration process. Configuration may be done through "AI", such as teaching it things, but that's just a different way of interfacing with the machine to make it seem more real.

So, long story short, if a robot kills a human, it is no different than a gun killing someone: the gun was told by a human to kill the person by being physically pointed and fired.

Of course, they could add safeguards to robots, kinda like they could add safeguards to drones to keep them from flying in no-fly zones.
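
As a minimal sketch of that kind of safeguard (the zone coordinates here are made up, the zones are modeled as simple circles, and real systems use polygon airspace databases and proper geodesic math), the drone just refuses any waypoint inside a restricted zone:

```python
# Toy geofence check: say "no" to any waypoint inside a no-fly zone.

import math

# Hypothetical no-fly zones: (latitude, longitude, radius in km)
NO_FLY_ZONES = [
    (38.8977, -77.0365, 25.0),  # e.g., restricted airspace around a capital
]

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def allowed(lat, lon) -> bool:
    """The drone refuses any commanded waypoint inside a restricted zone."""
    return all(distance_km(lat, lon, zlat, zlon) > zr
               for zlat, zlon, zr in NO_FLY_ZONES)

print(allowed(38.90, -77.04))  # False: inside the zone, command refused
print(allowed(40.71, -74.01))  # True: clear airspace, command accepted
```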
 