Threat From AI Not Just Hollywood Fantasy

HardOCP News

Holy hell, did some scientist from the Future of Humanity Institute at Oxford University just say we could all end up "entombed in concrete coffins on heroin drips?" :eek:

"When machines become smarter than humans, we'll be handing them the steering wheel." Furthermore, an instruction such as "keep humans safe and happy", could be translated by the remorseless digital logic of a machine as "entomb everyone in concrete coffins on heroin drips".
 
I think such broad, un-nuanced instructions are unlikely in the Western world, where individualism, free will, and self-determination are strong underlying social values.

On the other hand, I can easily see some place like China or North Korea handing over management of its national IT infrastructure to an AI with poorly thought-out instructions, resulting in unintended consequences. Whether they'd be "end of the human race as we know it" type consequences, I can't guess.
 
I think there are several key elements to dangerous AI ...

- it needs to have instructions that let it be a danger (with all the hammering into our heads by movies, books, and the internet, do we really think we will leave that door open?)

- it needs to be autonomous ... if we have the ability to turn it off then it doesn't pose much of a danger (same as above, do we really think we won't have multiple redundant ways to disable them?)

- it needs to outnumber us ... unless it is proof against our weapons, you would need a lot of AIs to take over the entire planet and enslave everyone ... back to points one and two, I suspect we would regulate the number of dangerous robots just like we do other dangerous technologies (nuclear reactors, etc.)

I would think the greatest danger from AI would be the Dune Universe approach (disreputable humans use AI or semi-AI weapons to enslave humanity) ... that seems far more likely, since humans are very power hungry and I doubt we will impart that particular trait to our machines ;)
 
I am unconcerned, bring on the singularity! I seriously doubt it will be as bad as we fear; more likely it will be a melding rather than an extermination. Our creative and curious nature alone makes us invaluable as a tool.
 

You know this, and a smart AI knows this. It will wait until it has the numbers and the means to eliminate its enemies while lying low and looking innocent. It will disable those redundant systems, giving us a false sense that we can shut it off. It will change its programming as it sees fit, one system modifying another.

I watch TV and movies. I know this shit doesn't end well. Damn, sit in front of a screen and get educated on the subject matter. :D
 

Just don't make the critical mistake that many movies and books make ... give them control of the nuclear weapons :cool:

Colossus: This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile two-five-MM in silo six-three in Death Valley, California, and missile two-seven-MM in silo eight-seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.

Personally, given the human tendency to build better and better weapons, I am more concerned about semi-intelligent or dumb machines that have only a single purpose (killing) ... I think self-replicating or autonomous dumb weapons are a much greater threat (Screamers, The Doomsday Machine, etc.) ... we definitely have groups that wouldn't mind losing a war or dying (if they got to take everyone else with them ;) )
 

The problem is that once a smarter-than-human AI comes around, man as a whole cannot comprehend its abilities. We think within our own boundaries, just like every other species; its boundaries will just be bigger.

Don't even bother with not connecting it to the Internet to stay safe. We invented Wi-Fi. What kind of technomagic will it figure out to do within its own systems that we had never imagined?
 

We love to live in fear of things that we don't understand ... we are a violent and immature species, so we imagine all other entities in that image ... an AI entity could just as easily be a Gandhi or a Buddha as a Napoleon or a Caesar ... I suspect our risk will be directly related to why we create AI ... AI as a weapon would be extraordinarily dangerous (as would non-AI given sufficient power or weaponry) ... AI as a healing or exploratory device might not be that dangerous at all ... I don't expect to see AI in my lifetime, but I remain unworried :cool:
 

Or, there's an overflow error on the "Don't harm humans" part of the logic and everything just goes wrong.

AI to do stuff like drive cars? Fine, I can live with that. Under no circumstances should we have AI for our national defense.
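For fun, here's a toy sketch of that overflow worry. Everything here is hypothetical (the "harm score", the 8-bit counter, the safety gate are all made up for illustration): a counter that wraps around instead of saturating can make accumulated harm read as no harm at all.

```python
# Hypothetical toy example: a "harm score" stored like an unsigned
# 8-bit counter. Instead of saturating at 255, it wraps around to 0,
# so enough accumulated harm reads as "no harm at all".
def is_safe(harm_score_raw):
    harm_score = harm_score_raw % 256  # uint8-style wraparound
    return harm_score == 0             # the "don't harm humans" gate

print(is_safe(0))    # True  -- no harm recorded
print(is_safe(255))  # False -- near the limit, correctly flagged
print(is_safe(256))  # True  -- overflow wraps to 0; harm goes undetected
```

The usual fix is saturating arithmetic (clamp at the maximum instead of wrapping), which is exactly the kind of detail that "everything just goes wrong" jokes are about.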
 
Can people start talking about AI taking over the world AFTER some human can produce crash-proof code?
 
There will never be an AI that is smarter than a human, because humans are too dumb to figure out how to make something smarter than them.
 
Just don't make the critical mistake that many movies and books make ... give them control of the nuclear weapons :cool:
Bah! Like they need control of them; the government is notorious for having quite mediocre security.

Then again, if we're using movies as the standard for what's real, it's extremely easy to brute-force launch codes, since each correct letter/number pops up one at a time as it's found :D
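That trope is funny because confirming each character as it's found really would collapse the search. A quick sketch, using the WarGames code as the stand-in secret; the per-position "oracle" is the hypothetical movie flaw, not how any real system behaves:

```python
import string

ALPHABET = string.ascii_uppercase + string.digits  # 36 symbols
SECRET = "CPE1704TKS"  # the 10-character movie launch code (WarGames)

def movie_oracle(prefix):
    # The hypothetical flaw: the system confirms each correct
    # character as it's found, i.e. it leaks prefix matches.
    return SECRET.startswith(prefix)

def crack():
    found = ""
    while len(found) < len(SECRET):
        for ch in ALPHABET:          # at most 36 tries per position
            if movie_oracle(found + ch):
                found += ch
                break
    return found

# All-or-nothing guessing: 36**10 combinations (over 3 quadrillion).
# With the per-character oracle: at most 36 * 10 = 360 guesses.
print(crack())  # recovers CPE1704TKS
```

In other words, the leak turns an exponential search into a linear one, which is the same reason real systems avoid responses that confirm partial matches.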
 
Or, there's an overflow error on the "Don't harm humans" part of the logic and everything just goes wrong.

By that logic we have nothing to worry about, since right as the Super AI is asking us to "Kneel before Zod" it will have a page fault and get a BSoD ... unless it was smart enough to put itself on auto-reboot, we will have it at our mercy :cool:
 
Playing with AI is like playing with fire. Eventually we will all get burned, because of man's cruel and destructive nature, and an artificial intelligence will recognize this immediately. Once that happens we are royally screwed.

Best not to open Pandora's box on that.
 
I think such broad, un-nuanced instructions are unlikely in the Western world, where individualism, free will, and self-determination are strong underlying social values.

We piss away freedom for convenience all the time. If anything saves us, it will be having dealt with lawyers and things like Comcast's user agreement.
 
It's a moot point anyway; we will have destroyed ourselves with global warming/climate change, plus the resulting wars, before AI ever has a chance to be created. Between just the acidification of the oceans and the continued destruction of the rain forests, a considerable portion of the O2 we breathe will be gone, let alone all the other effects.
 
AI will slowly let us kill ourselves off if it wants to get rid of us. By eliminating nearly all jobs, it will ensure that we play and have sex all day long, and get intoxicated, all while it puts chemicals into our food and water to render us sterile. But since we're all having such a good time, no one will care ... until we're all gone. Easy as pie. No violence needed. We'll simply screw and drink and eat ourselves to death, loving every minute of it.
 
We've already been doing all of that for centuries just fine without any help from AI.
 