Elon Musk: AI “Most Likely Cause” of World War III

Megalith

Staff member
Joined Aug 20, 2006
Following Putin’s declaration that “whoever leads in AI will rule the world,” Elon Musk is sharing his thoughts on the next major war, suggesting it won’t be started by humans but by competition for AI superiority at the national level: some say a country could be unexpectedly nuked one day simply because a computer deemed it a potential threat after some casual calculation.

Elon Musk is predicting the future again, like he does. This time, the eccentric billionaire and CEO of Tesla, SpaceX, and others took to Twitter to wax poetic about our pending demise at the hands of artificial intelligence (AI). World War 3, Musk believes, will most likely be caused by the nerds programming recipes for your Amazon Echo. Let that sink in for a minute. I’m not saying being a dick to that guy in high school is going to lead to a robot uprising, but having a John Connor-esque plan to come back and save your past self might not be such a bad idea.
 
And I thought that after Hyperloop there was nowhere for him to go but up.
 
No computer gets to pull the trigger. That's why it takes two men turning a key at the same time in a silo/sub. That's just a basic "Duhhhh"
 
Not sure if AI will be the cause of WW3 or not, but Russian dictator Vladimir Putin went on record a few days ago saying the nation that leads in AI will be "the ruler of the world". So perhaps Musk is right. Unfortunately, we won't know whether AI will advance to be self-aware and dangerous until it's too late; for all we know, it could become self-aware and friendly, or not become self-aware for hundreds of years. We just don't know until it happens. But once we open Pandora's box, just like we did with gunpowder weapons and nuclear weapons, it will be too late to close it. There's no going back once it's created.
 

Self-aware means self-realization and needs. A computer cannot realize its needs. It does not fear death. It does not question its purpose or how it gets its power. Its feelings do not get hurt.

Even if it were to satisfy the first two levels of the realization of needs, (basic) energy and shelter (self-preservation), it will never have a need for friends, or goals, or creativity: all essences of being human and self-aware. I mean, if you ask an AI computer "Why should you want to preserve your own life?" it's going to come back with "42."

[Image: Maslow's hierarchy of needs]
 
Didn't anyone watch War Games (1983)?

Sure, I was 23 and in the Army in Korea.

I think it was a pirated copy on VHS playing at a strip joint on their big screen, at about 1 in the afternoon, as I drank OB beer and ate fried Yaki-Mandu (y)

I am a connoisseur of culture :D
 

Never say never... Two decades ago no one imagined we'd have, in our pocket or now on our wrist, computers 500x more powerful than all of the computers NASA used to get to the moon. Even the science fiction of the day figured smartphone-like devices were another 300 years away, and even then they did less than ours do today. It is inconceivable, outside of science fiction, what the world will be able to accomplish in the future. We may one day develop that consciousness inside a machine; it might take 20 years, it might take 200 years, or it might not happen at all. We just don't know until it does happen. What we do know, however, is that computers will soon pass humans in intelligence. The singularity date is estimated to be around 2040.
 

Raw power comes from growth in the number of transistors that can operate at a given speed; that's all that's changed since the '60s. Self-realization and cognitive learning have to be taught, and that requires a level of programming which hasn't even been thought of yet. How do you turn a desire for creativity into a mathematical equation? You can't, and I doubt you ever will. "What makes us human?" is a question that has been asked of philosophers, psychologists, religion, poets and more for thousands of years. We still don't have a definitive answer.
 
And here I always assumed it would be man-child politicians in penis measuring competitions that would start the next war.

Penis measuring contest... We all know how big our fish are; I can only imagine how many feet long my dick is using that same measurement. :)



Damn, you're old. 58ish? :) 23, at a strip joint in Korea.... watching War Games.


If war is fought with AI, we're fucked. It's efficient, it can calculate the easiest and best way to do things. While it may not cause direct human casualties, it might stop resources from getting where they need to, or stop trade routes, or cause problems to the food supply and just starve us out.
 
This irrational fear is not productive.

AI will certainly not cause WWIII but if it happens it will be the primary weapon used in it.
 

We have no problem killing living animals, so what would be so difficult, ethically, about literally pulling the plug on a self-aware AI? Self-aware AI of any magnitude would be dangerous and unwelcome. AI should work for us, not have its own opinion.
 
I, for one, welcome a feral swarm of amazon delivery drones chasing me nude into the woods.
 
It sounds more like he wants others to stop researching it so he can get there first... I am more worried about the non-AI electronic intelligence devices made by the world's militaries as expert systems of destruction. I can see intelligent tanks and the like being an issue; not that there have been any books or movies based on such ideas. A true AI wouldn't worry me if it was worried about others, although I can see how it could be devastating even then.
 

Actually, in flight sims AI is defeating the best US Air Force pilots. The AI programs don't make mistakes and can counteract any action; they just wait for the human to suffer from fatigue and make a mistake. So I agree, at least for air combat. Ground combat is much more dynamic.

Article where AI beats best pilots.
 
This irrational fear is not productive.

AI will certainly not cause WWIII but if it happens it will be the primary weapon used in it.

Wait a moment..... AI could certainly "cause" WWIII, just not in the way you think.

Countries like the US track all kinds of intelligence information, including things like gross domestic product, crop production, yada yada yada, all so that they can "profile" nations and their leaders, because these assessments go into political planning. Which country will we suck up to, which ones will we make deals with, which ones will we not touch, etc. etc.

These things greatly affect political decisions.

As I said above with Clausewitz's quote about war being diplomacy by other means: the AI affects the decision making and, whammo, we have WWIII.
 
What many appear to conveniently forget is that this dude is a true genius; you can name but a few greats who have accomplished what he has in such a short time. He's on top of AI with his own company and people, and has had looks in other people's "kitchens"; not many people know this subject at his level, or understand it as well.

Automated systems being run too loosely have endangered humanity many times over already (the defensive system in Moscow thinking the US had fired comes to mind; people being in control averted a "counter"-attack).

"Well obviously don't give AI the firing codes", well obviously people in charge have never done unimaginably stupid things before.

The following becomes far too sci-fi, but imagine a system left to learn with tons of data, but contained, with no internet access to spread. These self-learning systems are pretty simple compared to intelligence, but they're getting to be quite handy. Scale that up though, much more CPU, much more memory, much more data, like say, on matters of the universe. This thing crunches away non-stop, gathering and comparing data, maybe running simulations (that's what we do to figure things out), learning about and comparing datasets much greater than any person could ever actively store and work on in memory, on timescales humans simply can't follow.

We literally cannot fathom what it could discover, maybe if you vibrate CPU or RAM transistors a certain way you can communicate with wireless systems, thus "escaping", maybe it starts figuring out quantum mechanics on whole other levels.

Humans are making impressive discoveries regularly. What would you expect to get if you took human thinkers and outfitted them with more thinking power, and more mental capacity to keep more of their formulas in their heads as they work on their ideas? Truly great things, naturally. But somehow giving computers self-learning abilities and infinite stacks of stuff to work on doesn't make anyone think that hey, maybe something more could happen here than that software staying a dumb calculator?

An AI doesn't have to grow emotions, it doesn't have to love and hate, it doesn't have to think like a human or any animal on earth to be considered an artificial intelligence; all it has to do is use logic to figure things out and act on that logic of its own accord, and for that to happen is only a matter of time and hardware, not even programming: just sufficiently smart self-learning algorithms put together.

Aside from the self-learning aspect, any combination of traits (emotions, awareness, freedom of learning/acting) that you can imagine will be created in a lab by some team in some country at some point. I don't think anyone can stop that from happening; it's an arms race and everyone has their running shoes on.

Those at the helm know this and are scared, because they know they either go along and try to steer the ship or be left behind powerless, in whatever sense "power" could be exerted here.
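For what it's worth, the "self-learning algorithms" described above already exist in miniature as reinforcement learning. Here's a toy sketch; everything in it (the five-cell corridor world, the constants, the reward scheme) is invented purely for illustration and isn't taken from any system mentioned in this thread. A tabular Q-learning agent figures out how to reach a goal from nothing but a reward signal, with no emotions or awareness involved:

```python
# Toy tabular Q-learning: an agent teaches itself, by trial and error,
# to walk a 5-cell corridor to the goal at the far end.
import random

N_STATES = 5          # cells 0..4, goal at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q[state][action-index]: learned value of each move in each cell
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = Q[s].index(max(Q[s]))
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # standard Q-learning update toward reward plus discounted future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy points right in every non-goal cell.
policy = [Q[s].index(max(Q[s])) for s in range(N_STATES - 1)]
print(policy)  # expected: [1, 1, 1, 1]
```

Nothing here is intelligent in any human sense; the point is just that goal-directed behavior falls out of a loop plus an update rule, which is the "matter of time and hardware" argument in its smallest possible form.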
 
Yea yea, a genius. We have a genius working here too; so what. He's amazingly brilliant and so socially inept it's painful. I can't even decide why; sometimes I wonder if it's because he feels like he has to be careful how he speaks to people, because he's afraid of pissing them off. But maybe he's just not there; how am I going to know?

The problem is that even a genius, can be ignorant. Raw intelligence does not equate to vast knowledge, wisdom, or experience.
 
Much more likely to be caused by some tin-pot dictator with an over-inflated sense of self-worth,
or some radicalized terrorist group that thinks it knows how everyone else should live their lives and somehow gets access to a nuke.
 
It's up to people to use technology responsibly. Starting a war solely over an AI decision is not that; it wouldn't be the AI that starts the war. By the same logic, if a spy brings back bogus information and they start a war over it, it isn't the spy that started the war; it's whoever decided that the information was proof enough. Technology is a tool. If a chainsaw cuts someone's hand, it's not the saw's fault.
 
That's exactly what I was thinking. Someone already came up with this idea 34 years ago, not Musk...
 
We are still in the very early days of "machine learning" --- but we understand that once unleashed it will be exponential in growth.

This should scare anyone.

Robots have been making robots for years; I've even read a science journal article stating that robots were making robots NOT of human design. Meaning the parameters were set up, and the AI developed its own robot that didn't come directly from an engineer, but rather from the engineer's creation.

As machine learning advances, the danger we face is real. It doesn't have to be the "good guys" who accidentally cause the problem; it could be of malicious origin, from someone or some group who 'just wants to see the world burn'.
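The "robots designing robots" experiments this post alludes to generally rest on evolutionary search: the engineer writes the search loop, and the winning design emerges from it rather than from anyone's drawing board. A toy sketch, with the bit-string "design", the fitness function, and all the numbers invented here purely for illustration:

```python
# Toy evolutionary search: "designs" are 16-bit strings and fitness simply
# counts set bits (a stand-in for whatever score a real robot simulator
# would assign). The engineer writes the loop, not the winning design.
import random

GENES, POP, GENERATIONS, MUT = 16, 30, 60, 0.05
random.seed(1)

def fitness(design):
    return sum(design)

def mutate(design):
    # flip each bit independently with probability MUT
    return [1 - g if random.random() < MUT else g for g in design]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]                    # keep the better half
    children = [mutate(random.choice(survivors))  # mutated offspring
                for _ in range(POP - len(survivors))]
    pop = survivors + children

best = max(pop, key=fitness)
print(fitness(best))  # converges to (or near) the maximum of 16
```

Real evolved-robot work swaps the bit string for morphology or controller parameters and the bit count for a physics simulation, but the structure, and the fact that nobody hand-picks the final design, is the same.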
 

Cause is cause, don't make it what it isn't.

Japan mistakenly attacked Pearl Harbor during WW2. You might ask "how so?"
The US had long-standing plans that if the Japanese attacked south of a certain "line", the US would respond in force. The Japanese had learned of this and believed that, prior to launching their attacks to the south, they should therefore attack Pearl Harbor first. What Japan did not know was that the US had recently scrapped this plan and decided not to intervene if Japan pushed into the South Asian Pacific region; only a direct attack on Australia or the US would bring America to use force.

Now you can argue decisions and messengers all you want, but it was bad information that made a pre-emptive attack on Pearl Harbor a "prudent" decision, and I don't think you can argue "cause" is not appropriate here.
 
They are probably afraid an AI would take care of its creators, and when everyone is taken care of, money and power lose meaning.
 
You're saying that back in 1997 we weren't envisioning computers getting exponentially faster? Hell, if anything, I think we were predicting we'd all have 20GHz processors by now.

The thing about self-aware AI is it's not really a processing power problem, it's a programming problem. We don't even understand how our own consciousness works, yet we're talking like we're going to program that into a computer. You could have all the processing power in the world, but if you don't know how to program your goal, it doesn't really matter.

It's like someone said earlier, all it takes is one mad dog leader to start nuclear war, AI is hardly our biggest threat.
 
No computer gets to pull the trigger. That's why it takes two men turning a key at the same time in a silo/sub. That's just a basic "Duhhhh"
Great point, and maybe so for traditional WMDs, but that does not apply to scalar EM/quantum weaponry, which has replaced the former. They are controlled by computers, hence Putin's comment below. Manipulation of brain signals at a quantum level will be far scarier than any weapon most know of today.


From what I've seen of AI experiments, it's very realistic about the world and, because of this, rather racist.
When AI controls Scalar EM/Quantum weaponry, we will be in serious shit and that's probably the AI race he's referring to. That stuff is scary, makes nukes look like obsolete firecrackers, makes all conventional warfare means obsolete and applications extend to well beyond just explosions/implosions in exothermic/endothermic interference modes.
 
Strange... I would think that little midget fucker over in North Korea Kunt-Junk-Ugh would be the one to start WW3.
 