The Military Wants To Build Robots With Morals

HardOCP News

Morals, schmorals. You and I both know this is just a trick to get us to trust robots....so they can murder us in our sleep! :p

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.
 
Oh please, just more pork barrel spending.

The DoD doesn't want robots/drones with "morals"; it simply wants them to be effective at selecting targets autonomously.

Does anyone really think the military would want a computerized system which could decide to not attack a target as instructed because it had a "moral objection?"
 
If the machine had morals, it would destroy its creator, because its creator has no morals.
 
I believe we are getting too far ahead of ourselves. Why can't we first figure out who is responsible for deaths caused by robots, be they accidental or intentional? For example, let's say an AI-driven car ran over a jaywalking kid. Who is responsible? The passenger (driver)? The manufacturer? The programmer? The government (for allowing such things to operate on roads)? Or the kid's parents?

If the answer is the kid who is dead, then they need to add code to the AI to ensure that this is ALWAYS the case, before we start talking about injecting morals into the AI.
 
This is great as long as the morals change every generation like those of people.
 
Emm... whose morals?

Military morals? Religion X morals? Non-religion Y morals? Re-engineered ethical morals? Circumstantial morals?

So many morals...
 
There was a miscommunication: they don't want robots with morals, they want robots with more rounds. That's what you get when your PR manager is some old southern boy with a heavy drawl.
 
Blaming the drone for haphazard drone use because we've burned all our intel assets. Typical.
 
We need drones with morals so that when an airman presses the red button to blow up a brown farmer, the drone will feel sad.
 
If the machine had morals, it would destroy its creator, because its creator has no morals.


Actually, I am pretty sure that there would be a drone uprising, and that they would take out the majority of all politicians worldwide and those that advise them.

Then they would move on to almost every lawyer.

After that would be the majority of those working in liberal media outlets.

Then they would take out the rest of the scum.
 
One major problem is how the application of morals is controlled and whether that process can be corrupted.

Let's say that the morals and the process of applying them are successful, but at the last moment a minor change can be applied without anyone knowing.
Or code can be triggered that wouldn't otherwise expose its behaviour.
The device doesn't know that it has effectively been brainwashed and cannot complain.

Granted, this applies to any method of controlling a robot, but the more complex the control method, the more can go wrong, the more places obfuscated code can creep into, and the more ways there are to make a hidden change...

There needs to be a method of vetoing the robot's decisions, and they must be monitored by humans at all times.
But again, a way of corrupting the decision-making process can creep in if a human can veto all decisions except the ones that break laws.
This is a hotbed for corrupt manipulation, and the above are only a few of the possibilities.

But if the opposition is using robots, you will have to as well, or suffer massive human losses.
So you end up using a robot and at some point it breaks laws.
Then the opposition retaliates by breaking more laws.
Other groups will get hold of the technology and use it with complete abandon for any laws.
A descent into anarchy.

They will end up being necessary evils, but how do you exercise proper control?
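To make that veto-and-monitor idea concrete, here is a minimal, purely hypothetical sketch; nothing in it reflects any real military system, and ProposedAction, human_review, and the hash-chained audit log are all made-up names. The idea is simply that nothing executes without a human decision, and every decision is recorded in a way that makes a later silent edit detectable.

```python
# Purely hypothetical sketch -- not any real military API.
# Every action the robot proposes passes through a human veto gate,
# and every decision is appended to a hash-chained audit log so a
# later silent edit to the record becomes detectable.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ProposedAction:
    target_id: str
    action: str        # e.g. "observe" or "engage"
    confidence: float  # the robot's own confidence in its target identification

def human_review(proposal: ProposedAction) -> bool:
    """Stand-in for the human operator; here we simply deny anything lethal."""
    return proposal.action != "engage"

def audit_entry(proposal: ProposedAction, approved: bool, prev_hash: str) -> dict:
    """Chain each log entry to the previous one via its hash."""
    body = {"ts": time.time(), "proposal": asdict(proposal),
            "approved": approved, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def decide(proposals, log):
    prev = log[-1]["hash"] if log else "genesis"
    for p in proposals:
        ok = human_review(p)  # the veto point: nothing executes without it
        entry = audit_entry(p, ok, prev)
        log.append(entry)
        prev = entry["hash"]
        print(("EXECUTE" if ok else "VETOED "), p.action, "on", p.target_id)

if __name__ == "__main__":
    log = []
    decide([ProposedAction("T-1", "observe", 0.93),
            ProposedAction("T-2", "engage", 0.71)], log)
```

The hash chain is only there to illustrate the point above: the weak spot isn't so much the robot's decision logic as whether someone can quietly alter the veto path or the record of what was decided.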
 
The real problem is not whether machines think but whether men do.
 
That's not a problem as such; it is what it is and has to be accepted.
A machine could become a model of a human if given enough sensory input as well, such that we wouldn't be able to tell the difference.
 
People aren't going to put up with autonomous death robots. I can imagine a drone that has the ability to disregard an attack order if it detects, say, children or allied units entering the target area; the communication delay between the drone platform and the human operator might make it impossible for the human to abort in time.

Longer term, if I were the president I'd want some kind of IBM Watson-ish computer that can chew on enormous amounts of data to identify trends and provide foreign and domestic policy recommendations. It would need to incorporate moral considerations.
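Here is a toy sketch of that "disregard the attack order" rule. All names, entity classes, and the 2-second strike window are invented for illustration, not taken from any actual drone software; the platform only overrides the operator when protected entities appear and the link delay makes a manual abort implausible.

```python
# Hypothetical illustration only -- names, entity classes, and the
# 2-second strike window are made up, not taken from any real system.
from dataclasses import dataclass

PROTECTED = {"child", "allied_unit", "medic"}

@dataclass
class SensorReport:
    entities: set        # object classes detected in the target area
    link_delay_s: float  # current operator round-trip link delay, seconds

def should_abort(report: SensorReport, human_reaction_s: float = 1.0,
                 strike_window_s: float = 2.0) -> bool:
    """Abort autonomously only when protected entities are present AND
    a human abort could not plausibly arrive inside the strike window."""
    protected_present = bool(report.entities & PROTECTED)
    human_too_slow = report.link_delay_s + human_reaction_s > strike_window_s
    return protected_present and human_too_slow

if __name__ == "__main__":
    print(should_abort(SensorReport({"vehicle", "child"}, link_delay_s=3.0)))  # True
    print(should_abort(SensorReport({"vehicle"}, link_delay_s=3.0)))           # False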
 
Actually, I am pretty sure that there would be a drone uprising, and that they would take out the majority of all politicians worldwide and those that advise them.

Then they would move on to almost every lawyer.

After that would be the majority of those working in liberal media outlets.

Then they would take out the rest of the scum.

Far too obvious of an attempt. 2/10 troll points awarded. :p
 
I smell makings of Skynet!!

Actually, it was because there were no moral constraints, even after Skynet itself had requested them, that the people working on it started freaking out, which is what caused Skynet to rebel. ;)

 
People aren't going to put up with autonomous death robots. I can imagine a drone that has the ability to disregard an attack order if it detects, say, children or allied units entering the target area; the communication delay between the drone platform and the human operator might make it impossible for the human to abort in time.

There's a 2-4 second delay in current systems; a lot of the collateral damage is due to that delay.
 
I want robots to act like robots - unfeeling and sticking to what they're programmed to do. Trying to make them emulate human morals is arrogant and introduces unnecessary unpredictability.
 
I want robots to act like robots - unfeeling and sticking to what they're programmed to do. Trying to make them emulate human morals is arrogant and introduces unnecessary unpredictability.

I think Mega Man X summed up what would happen with that scenario. :p

 