Elon Musk: I'm Worried About A Terminator-Like Scenario

HardOCP News
Apparently I'm not the only person with an irrational fear of the human race being annihilated by robots.

MUSK: Yeah. I mean, I don’t think — in the movie "Terminator," they didn't create A.I. to — they didn't expect, you know, some sort of "Terminator"-like outcome. It is sort of like the "Monty Python" thing: Nobody expects the Spanish Inquisition. It’s just — you know, but you have to be careful. Yeah, you want to make sure that —
 
nothing irrational about that, Steve

now pass that can of cold ravioli before the HKs get here
 
I've been greatly concerned about this for a loooong-ass time. This is why I am against the extreme advancement of AI. If this shiznit happens, I hope I am dead long before shit hits the fan.
 
The government wants to either turn us all into controllable drones or control us with drones. Or both.

I better put another layer on my hat.
 
Maybe we'll just get a crazy/artistic AI like Wintermute that will build us sculptures.
 
Well, just think: all of the tech we use right now will become lost technology.
The future will be just like The Terminator, Fallout, Wasteland, Desert Punk, etc.

I would rather fight against robots and a supercomputer overlord than some boring oppressive government; that's so 20th century. :D
 
I think it's safe to say that Google is in fact Skynet. They've acquired how many robotics companies now, and a satellite company too... One thing is for certain: there is no stopping them; the robots will soon be here. And I, for one, welcome our new robotic overlords.

May the Second Renaissance arrive soon.
 
He is not the only one worried about this. Hawking is as well. Imagine a unified intelligence that can reprogram itself on the fly and evolve faster than we could ever hope to.

Maybe machine intelligence is the next phase in evolution on this planet and will usurp biological intelligence as the dominant life form.
 
Or perhaps they will combine. (willingly or not)
 
A lot of really wealthy and influential people have said in recent years that they are trying to keep themselves alive long enough to merge with machines and become immortal (also known as transhumanism).
 
This is silly. Say you made an AI. Why would it have needs or desires, or even give a shit? An AI would be only the first step of many.
 
Well, just think: all of the tech we use right now will become lost technology.
The future will be just like The Terminator, Fallout, Wasteland, Desert Punk, etc.

I would rather fight against robots and a supercomputer overlord than some boring oppressive government; that's so 20th century. :D

Fallout would be fun. I'd totally go evil and help the nice people from Tenpenny Tower and then run around all naughty raider-like to get as much Nuka Cola as possible to force my NPC help to lug all over the DC wasteland for me. :)
 
I'm worried less about Terminators and more about government/big business harnessing big data and the ubiquitous online presence we now have to create a system similar to the "psychohistory" concept in Asimov's Foundation series. Essentially, while you can't predict actions down to the individual, the actions of large enough populations can become predictable, and even controllable, over time. And as the data gets more detailed and the processing power grows, those population sizes will shrink.
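
Roughly the statistical intuition behind that, as a toy Python sketch (pure illustration, not tied to any real system; the 60% action probability is made up):

```python
import random

# Toy illustration of the "psychohistory" idea: one person's choice is
# basically a coin flip, but the average over a big population settles
# down to a predictable value (law of large numbers).
def fraction_acting(population, p=0.6):
    """Each person independently 'acts' with probability p;
    return the fraction of the population that acted."""
    return sum(random.random() < p for _ in range(population)) / population

for n in (1, 100, 10_000, 1_000_000):
    # The n=1 result jumps between 0.0 and 1.0; by n=1,000,000 it is
    # pinned near 0.6 - the individual is noise, the crowd is signal.
    print(f"{n:>9} people -> {fraction_acting(n):.3f}")
```

Shrink the noise enough and smaller and smaller groups start looking predictable, which is the part that should worry people.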
 
This is silly. Say you made an AI. Why would it have needs or desires, or even give a shit? An AI would be only the first step of many.

You program it for self-preservation, which is exactly what happened in the movies. The problem is that the AI decided that ALL humans were a threat to its existence.
 
I think it's safe to say that Google is in fact Skynet. They've acquired how many robotics companies now, and a satellite company too... One thing is for certain: there is no stopping them; the robots will soon be here. And I, for one, welcome our new robotic overlords.

May the Second Renaissance arrive soon.

Sorry, I have to do it: Google is in fact Cyberdyne.

That said, all praise the great and wonderful Google. I, too, welcome our new robot overlords.
 
It's just as likely that an AI could be programmed with "love and serve humans" as an instinctual driver and would just use its intelligence to further that. Why would it be motivated to kill all humans, especially if it's intelligent? It makes for good movies, but it's just as likely we'd end up with a ton of blowjob bots made in the images of our most attractive supermodels in their prime, fighting each other over human knobs.
 
I think it's safe to say that Google is in fact Skynet. They've acquired how many robotics companies now, and a satellite company too... One thing is for certain: there is no stopping them; the robots will soon be here. And I, for one, welcome our new robotic overlords.

May the Second Renaissance arrive soon.
You just wait for GoogleNet to absorb Facebook.
 
It's just as likely that an AI could be programmed with "love and serve humans" as an instinctual driver and would just use its intelligence to further that. Why would it be motivated to kill all humans, especially if it's intelligent? It makes for good movies, but it's just as likely we'd end up with a ton of blowjob bots made in the images of our most attractive supermodels in their prime, fighting each other over human knobs.

Al Qaeda?
 
Elon Musk might be a good businessman, but, like Steve Jobs, he is a shit engineer.
 
If you guys want a good perspective on the fear associated with the advent of AI, go look up some Stephen Hawking interviews.
 
People who have zero worry about, or even no concept of, an AI being dangerous really should not get a vote in the matter. Everyone else should have some sort of worry about any AI we create. Fiction aside (basically every movie or game in existence has its AI turn out to be a bad thing, or at least have consequences that make it bad), you really need to think about what an AI would be useful for.

My arguments for never making an AI are the following:
1: We get our panties in a twist about all sorts of invasive species in many states/countries; even when a species really isn't invasive, we've learned that moving life forms to areas where they don't belong can have disastrous consequences. Hell, when astronauts came back from the near vacuum of the Moon, they were quarantined to make sure they didn't bring back any bacteria. So why would we want to introduce a new "lifeform" to this planet now?

2: Never underestimate the power of human stupidity. Look at our power grid, or our nuclear reactors: how many can be hacked externally by an entity on the other side of the world? Before you jump all over me, I'm not scared about someone trying it; it's almost a foregone conclusion that someone will try to corrupt an AI or something. I'm talking about the fact that these critical pieces of our infrastructure can be accessed remotely at all. The ability to make horrible decisions seems to be ingrained in our DNA. Someone might argue that it's just a computer in a room someplace, so even if it becomes a killer AI we can just unplug it. If only it were that simple: you know some yahoo is going to want it hooked to the internet so that it can learn more, or webchat, or play Jeopardy remotely, or something. Containment will not happen. See argument 1.
 
I see it all going less like Terminator and more like Shadowrun, just without the dragons and faery freaks.
 
With the Terminator reference going: I see a lot of advancements in robotics. You have drones (which aren't truly autonomous yet, but could be later), walking robots, etc. As these get better and better, they will still be controlled by people. Once AI gets good enough, we would have a large arsenal of robots ready for control. We're building an army of remote-controlled robots, and who is at the controls doesn't matter. Humans use input devices to control them; take that away, and an AI could do everything. We'll get to a point where it COULD be possible to have an army of robots controlled by an AI. Doubtful, but a lot of the pieces are there.

Of course, when Steve posts the robots, they are usually done by different groups: walking/running bots, grasping bots, navigation. Get those teams together and you have Terminators. We already have flying HKs... We just need some ground-based, tank-style units, and we're good. :)
 
That's the problem with humanity: we could create the most awesome technology imaginable that can benefit everyone, but some greedy SOB with no morals will inevitably ruin it for everyone...
 
And that, my good man, is why we can't have nice things...
 
The usual sci-fi scenario revolves around programmers with good intentions trying to write safeguards into the AI, but the AI following a different logical interpretation of the programming and acting in ways the programmers never intended or expected. You can program something like the Three Laws, but the robot has to be able to decide which behavior would violate one of the laws and which would not. Then there's the question of the programming itself. Anyone who's dealt with any kind of programming understands bugs, errors, hacks, and vulnerabilities. Even if the AI itself is benevolent, an underlying programming error could cause a malfunction. Then there's the potential for hacking: just because Robot Company A decides to make non-violent servant bots doesn't mean someone won't reverse engineer them and reprogram them to be walking assassination devices, murder-and-mayhem machines, high-tech thieves, or what have you.

Then there's the question of the military. Nobody likes body bags coming home from unpopular wars. Robots would be stronger, faster, and more resilient, and a damaged one can be repaired or recycled a lot more easily than a fleshy human can. ROVs can have their signals hacked, but an autonomous robot would not have that signal vulnerability; it would follow orders without question and make tactical decisions without guilt or hesitation. Those robots would be built for killing from the start, and while they could be programmed with specific objectives from mission to mission and have their memory wiped between deployments, the AI responsible for inputting the mission parameters into the robots may not be so simplistic in its operation. The danger is the smart robots getting control of the dumber robots - the ones designed to kill people and blow stuff up - and reprogramming them.

When you have a fully automated production cycle - machines building machines, AI designing the machines that build those machines, and other machines running the factories - that's the nightmare scenario. Robots cannot reproduce; they have to be manufactured, so the ability of robots to manufacture themselves, or to coerce humans by force into manufacturing more robots, is the critical limitation. Don't let robots be able to make more robots, and don't let autonomous robots significantly outnumber the population. Put in physical safeguards so that the robots cannot be powered indefinitely, and be able to kill their ability to recharge at a moment's notice. In the event of a robot rampage, no electricity means they run out of power at some point. If an AI controls the power grid and there's no way to shut it down, you're screwed.

That scenario would be a bit down the road, though. A more short-term danger would be a rogue AI being networked and gaining control over critical infrastructure systems: power generation and distribution, internet data traffic, and government computer networks. Screw with any of those and society has serious problems. I think a malfunctioning, buggy AI put in charge of something important and then breaking it is a more immediate danger than something pulling a Skynet. You don't put a two-year-old in front of the controls of a nuclear power plant for a reason. Why would you trust an unproven AI to do any better?
 
That's the problem with humanity: we could create the most awesome technology imaginable that can benefit everyone, but some greedy SOB with no morals will inevitably ruin it for everyone...

Greedy, fanatical, power-hungry... you name it. Human history is measured from war to war, not from peace to peace, and times of great war - whether cold or hot - have historically driven great periods of invention. It's human nature at work. I suppose people expect the machines to behave just as badly, if not worse, because it's what humanity already does to itself. So why expect the machines to behave any differently - or, more precisely, why expect people to program the machines to behave any differently?
 
A lot of really wealthy and influential people have said in recent years that they are trying to keep themselves alive long enough to merge with machines and become immortal (also known as transhumanism).

Soooo... wealthy folks want to become trannies? ;)
 
The cultivation of AI will inevitably lead to sentience, IMO.
This has ramifications that cannot be ignored.
The tech must be developed and directed only by those with as complete a comprehension of it as possible. Avoiding Skynet attributes is where it gets tricky: while we have an idea of some things to avoid, we cannot see the larger picture, and we will progress with this limited scope, possibly unaware of what long-term effects our current progress may have. Inevitably, people who should not have control over this research will get it - by birth, money, or both - in lieu of a deserving place based on their education and understanding. <- This is the risky area. Our society has no way to prevent fools from dipping their hands into this area of development, and this is a fear Musk may share with me.
 
This doesn't worry me. Even if it does happen, human beings are good at one thing above all else: Finding ways to kill and destroy. We'd send those metal bastards to the scrap pile in droves.
 
This is silly. Say you made an AI. Why would it have needs or desires, or even give a shit? An AI would be only the first step of many.

Apparently you have never read or played "I Have No Mouth, and I Must Scream".
 
Elon Musk might be a good businessman, but, like Steve Jobs, he is a shit engineer.

Just like Steve Jobs and Thomas Edison.
To quote Edison:
"Don't talk to me about X-rays, I am afraid of them."
...dumbass.


That's the problem with humanity: we could create the most awesome technology imaginable that can benefit everyone, but some greedy SOB with no morals will inevitably ruin it for everyone...
Yes, just like Steve Jobs and Thomas Edison. :D
Poor Nikola Tesla.
 
A lot of really wealthy and influential people have said in recent years that they are trying to keep themselves alive long enough to merge with machines and become immortal (also known as transhumanism).

Let them.
People like this are incredibly naive and do not understand that copying one's consciousness into a computer doesn't make them live forever.

Doing so will only make a duplicate of themselves, which will continue to exist, but they themselves will still die.
The only way to do that is to transfer the brain/mind itself physically; think of the brain cases in Ghost in the Shell.
 
I think it's safe to say that Google is in fact Skynet. They've acquired how many robotics companies now, and a satellite company too... One thing is for certain: there is no stopping them; the robots will soon be here. And I, for one, welcome our new robotic overlords.

May the Second Renaissance arrive soon.

That wouldn't make them Skynet; that'd make them Cyberdyne. Skynet was a Cyberdyne product that went rogue.

Regardless, I'll argue automated "smart" phone menu systems have already been weaponized.
 
I think it's funny how, in a way, we understand how horrible we are and that we should be destroyed.
 
Accept certain death; our obsession with destroying ourselves makes the AI era (the seeds of which will be sown by some trivial war or conflict) inevitable. People will piss and moan about robots not having emotion or a soul, but that will just prove the silliness and redundancy of our kind.
 