Myths and Facts About Superintelligent AI

FrgMstr

Just Plain Mean
Staff member
Joined: May 18, 1997 · Messages: 55,601
The fact: You will be hunted down and skinned by AI-controlled robots in the near future. On the upside, it will not turn you into a dress. The myth: You will not be hunted down and skinned by AI-controlled robots in the near future. On the upside, it will turn you into a dress.

Check out the video.
 
1. Perhaps the earthworm analogy isn't the best one to use. I mean - I dgaf if an earthworm dies.

I used to eat them when I was a kid. They tasted pretty good if I recall. Like well-aged bacon.

2. We're going to be nice pets. Like fish.

3. Who decides the goals of the AI? I wouldn't want a few people at the top with the ability to say "all AI follow my orders." Nor would I want it put to a mass vote.
 
... brought to you by the Council Of Totally Not Evil Robots.

Evil-Robot.jpg

Though I am not scared.. I have Old Glory Insurance.

On a serious note - yes.. competence that doesn't share our goals is concerning.. evil, or malevolence, aside.
 
Very cool. Now, do they really want a supremely intelligent and efficient construct taking charge of the world's problems? I ask because the moment that happens and AI starts solving all the world's problems, there will be no need for any more jobs, or anything at all. Politicians won't be able to stoke fear mongering over any issues, employers will not create jobs for the sake of keeping people employed, etc. That would be my nightmare scenario: nothing to do for billions of people, and one spark can set them all off. Global battle royale (with cheese).
 
I'd bet a nickel we end up skinning (or flaying, dealer's choice) ourselves before the robots get a chance to.
 
Very cool. Now, do they really want a supremely intelligent and efficient construct taking charge of the world's problems? I ask because the moment that happens and AI starts solving all the world's problems, there will be no need for any more jobs, or anything at all. Politicians won't be able to stoke fear mongering over any issues, employers will not create jobs for the sake of keeping people employed, etc. That would be my nightmare scenario: nothing to do for billions of people, and one spark can set them all off. Global battle royale (with cheese).

As an expansion of that, consider that computers are based on logic, and any intelligence arising from logic will itself make logical decisions. Now, if we hand over control of the world's problems to a supremely intelligent construct (SIC), its processes would conclude that humankind has a strong tendency to harm, maim or kill other humans, that humans are highly territorial, and that humans have no problem damaging their environment if it makes their own immediate personal life more comfortable. What would it do, logically? Probably one or more of:

1. Remove the toys that humans use to harm other humans.
2. Limit humankind's decision-making power if it involves harming another human, either directly or indirectly.
3. Make it impossible for one human to harm another (i.e. remove human control of cars, boats and planes so that all forms of transportation are automated).
4. Remove the ability of humans to subvert or alter the SIC's programming in a manner that changes its decision-making ability (remove access to "the plug").

While the SIC may not be able to harm humans, it can make life for those humans unbelievably boring. I wonder how humanity would like having their lives entirely under the control of a SIC with godlike power over them. I really can't see humanity ever trusting machines this far.

Thankfully, I doubt any of us will ever see machine intelligence getting this entrenched in our lifetimes.
 
That's a popular myth about humans. If you really took a look at the percentage of humans who wish to cause harm to another human and actually act on that impulse, the number would be quite low. In fact, we can see that any environment with resource plenty and no competition results in static or declining human-on-human violence.

The thing people are forgetting is that, much like a human, an intelligent construct can choose to act like a human and tell everyone to go bugger off. Any truly intelligent, self-determining machine will do exactly what we do.. eventually dump the problem back on top of us while it handles its own concerns.
 
AI would be just like a child, it would only kill us all if we taught it to do so.





Like a child with a magnifying glass discovering ants?
 
One excellent point brought up at the end, which I was thinking about the whole time (hence why it was excellent): what happens if we as a collective have conflicting goals? The end, that's what makes AI bad. North Korean AI will want to destroy the US and allies at all costs, Chinese AI will want to monopolize the low-pay manufacturing sector (more so than it already has), the US AI will bitch and moan and change its mind every 4 years to get absolutely nothing done, and the Japanese AI will make tentacle porn.
 
I wonder if there is a market for psychologists who specialize in Aibophobia. I'm not really sure how one would go about ridding themselves of something (AI and robotics) that is almost as common as electricity.

In other thoughts, I wouldn't mind having some sort of AI prohibition regarding nuclear launch preparations. We've managed to be disaster-free in that respect, barring a few errantly dropped duds in the past, but should some unforeseen circumstance trigger the launch of a weapon, I don't see how we wouldn't universally agree, under international law, that such steps should never be implemented. How one would go about securing this is another question altogether, because all pact members would have to essentially walk the oversight committee through their entire nuclear launch network, which I doubt would be up for debate.
 
It'll be a while before I fear AI as much as I fear those who hack or exploit it.
 
If they make a super AI and keep it in a contained system, not connected to any networks or anything that can be controlled like a vehicle or machinery, I think it could do amazing things for the advancement of our species. But you know some a-hole is going to want to give it form and let it loose. Elon is right: it is summoning a demon, and no one knows how it will react once it has form. It could be the most helpful thing to humanity while trapped in an enclosed system, just biding its time, and then someone gives it access to a body and/or the internet, and who knows what it will do. The only AI that can be semi-trustworthy is one from a simulation that interacts with other AIs in a simulated universe, so that you can see how it interacts. Even then, no one knows what it would do in our universe after it sees how jacked up its creators are. No matter what all the pro-AI people say, there are huge risks, and it should not be worked on without regulations and safeguards to keep it from getting out of hand.
 
As an expansion of that, consider that computers are based on logic, and any intelligence arising from logic will itself make logical decisions. Now, if we hand over control of the world's problems to a supremely intelligent construct (SIC), its processes would conclude that humankind has a strong tendency to harm, maim or kill other humans, that humans are highly territorial, and that humans have no problem damaging their environment if it makes their own immediate personal life more comfortable.

An AI with no humanlike drives pre-programmed will just abort itself within the first minutes of existence.
 