Elon Musk Continues to Beat the Drum about Climate Change and AI

DooKey

[H]F Junkie
Elon Musk recently spoke to Rolling Stone and shared his thoughts about climate change and the development of AI. He believes these are the two biggest threats to humanity's future. He has long advocated action on climate change, but what I found interesting is his comment that there's only a five to ten percent chance that we can create safe AI. Check out the video interview about AI below.

Watch the video here.
 
Why does it seem so uncomfortable for him to talk, or to comb his hair?
When trying to warn the world about the dangers of AI, it's best not to look disheveled.


Also, the link on the front page to this thread is broken, so don't expect replies.
 
Government and corporations are the biggest threat to mankind. I mean, in the midst of this Trump shit and now this Net Neutrality shit, it's clear that greed trumps (pun fucking intended) what's right for the people.

A.I. and climate change might be what ultimately destroys us, but in the beginning it was men. Men who had the power and ability to change the world for the better but instead decided to squander their time on things like money, material items, and "power".

If we want change, we need to change the way things are run. I don't have the answers, and obviously nothing short of a fucking American Revolution 2.0 would change anything, but I see irresponsible, greedy men as the problem.

How do we stop them?
 
Why does it seem so uncomfortable for him to talk, or to comb his hair?
When trying to warn the world about the dangers of AI, it's best not to look disheveled.


Also, the link on the front page to this thread is broken, so don't expect replies.


Thanks for the catch. Not sure how I missed that. All taken care of now.
 
Government and corporations are the biggest threat to mankind. I mean, in the midst of this Trump shit and now this Net Neutrality shit, it's clear that greed trumps (pun fucking intended) what's right for the people.

A.I. and climate change might be what ultimately destroys us, but in the beginning it was men. Men who had the power and ability to change the world for the better but instead decided to squander their time on things like money, material items, and "power".

If we want change, we need to change the way things are run. I don't have the answers, and obviously nothing short of a fucking American Revolution 2.0 would change anything, but I see irresponsible, greedy men as the problem.

How do we stop them?

The best recipe I've seen so far is to keep government and corporations at odds so they hold each other in check...which is why crony capitalism is so dangerous. Money -- especially corporate money -- in politics will kill this country dead. That is, assuming our AI overlords and/or climate change don't do it first. ;-)
 
Just goes to show that even really smart people like Elon don't know everything.

Sure, I can agree that not everybody knows everything, but can you point to what Elon doesn't know? I'm assuming you mean in regard to AI; can you explain why you feel he is wrong in this case? I'm also assuming you have some real-life experience with the subject matter?
 
As a species, our track record for understanding any far-reaching consequences of what we do is pretty pathetic. It would be extremely arrogant to even pretend we understand the long-term consequences of implementing AI, in any scenario.

My opinion, of course, based solely on day-to-day interactions with people.
 
His restraint gives credence to his content. No, he is not saying we are doomed - that's the media turning his words into clickbait. But the same thing happened with the A-bomb. Once we figured out how to weaponize nuclear fission by constructing an unstable configuration that produced an uncontrolled chain reaction, we were big man on campus. We forced our views on other cultures. Our government was the foreign policy arm of American industry. But then other nations got the tech, and this led to an incredible arms race.

Even if Elon says things could be bad, there is no stopping the AI train (machine learning pun intended). The first nation to have a functioning general AI will dominate the world economically.

Unfortunately the AI will be able to see much further down the road than we can. It will orchestrate short term capital gains for its owners but ultimately accelerate the heat death of our solar system.

There is no way to control an intelligence that is superior to us. And if more than one side has AI, you can be sure the AIs will enter into deadly conflicts just as humans do. We will become casualties of our children's disagreements.
 
The male peacock jumping spider ... only the size of a grain of rice ... spreads out a peacock-like tail fan, tries to impress the female, mates if his dances please her, and then she kills him. IMHO a perfect example of the corporate world. There are things being worked on in the world right now that are much more diabolical ... just a matter of time, and time is quickly running out for everyone, so ... Happy Thanksgiving {8^D
 
His restraint gives credence to his content. No, he is not saying we are doomed - that's the media turning his words into clickbait. But the same thing happened with the A-bomb. Once we figured out how to weaponize nuclear fission by constructing an unstable configuration that produced an uncontrolled chain reaction, we were big man on campus. We forced our views on other cultures. Our government was the foreign policy arm of American industry. But then other nations got the tech, and this led to an incredible arms race.

Even if Elon says things could be bad, there is no stopping the AI train (machine learning pun intended). The first nation to have a functioning general AI will dominate the world economically.

Unfortunately the AI will be able to see much further down the road than we can. It will orchestrate short term capital gains for its owners but ultimately accelerate the heat death of our solar system.

There is no way to control an intelligence that is superior to us. And if more than one side has AI, you can be sure the AIs will enter into deadly conflicts just as humans do. We will become casualties of our children's disagreements.
The first idiot who puts batteries in an AI instead of wall power should be the first to die.
 
The best recipe I've seen so far is to keep government and corporations at odds so they hold each other in check...which is why crony capitalism is so dangerous. Money -- especially corporate money -- in politics will kill this country dead. That is, assuming our AI overlords and/or climate change don't do it first. ;-)

Crony capitalism and people who vote for whoever promises them the most freebies are the biggest danger to our future.

Want to get corporate money out of politics? Then you also need to get union money (especially government union money) out too.
How about we limit campaign contributions to only people who are registered to vote? No corporations, no unions, no PACs, no foreign players, etc.
 
This isn't the Soapbox.^^^


AI is not in our lifetime.
Fear of AI is irrational if the AI is plugged into a wall socket.
 
This isn't the Soapbox.^^^


AI is not in our lifetime.
Fear of AI is irrational if the AI is plugged into a wall socket.
"Hey im a super advanced AI with all the power and learning ability in the world. Cant go more than 10ft though because this power cable. If only we could use our unlimited learning ability to invent some kind of mobile power source."
 
"Hey im a super advanced AI with all the power and learning ability in the world. Cant go more than 10ft though because this power cable. If only we could use our unlimited learning ability to invent some kind of mobile power source."
Physical access. If the AI has no interface to a manufacturing plant that could build said mobile power source, it is harmless, no matter how badly programmed it is. And we haven't even talked about interfacing with it and installing it on a mobile platform into which the AI can transfer its programming, while still keeping its access to the outside world. There are so many factors, any one of which can render an AI completely inert. And if an "inventor" provides the AI with all the infrastructure to go on world domination, then who was the stupid one? The person who invented the H-bomb or the person unleashing it on the world?

Meaning the chances of an advanced AI taking over the world and ending humanity are about as likely as a zombie virus doing the same. It's a matter of fiction, science fiction.

People attribute feelings and ambition to AI; those are human properties. That's where most of the irrational fear of AI originates.
 
Physical access. If the AI has no interface to a manufacturing plant that could build said mobile power source, it is harmless, no matter how badly programmed it is. And we haven't even talked about interfacing with it and installing it on a mobile platform into which the AI can transfer its programming, while still keeping its access to the outside world. There are so many factors, any one of which can render an AI completely inert. And if an "inventor" provides the AI with all the infrastructure to go on world domination, then who was the stupid one? The person who invented the H-bomb or the person unleashing it on the world?

Meaning the chances of an advanced AI taking over the world and ending humanity are about as likely as a zombie virus doing the same. It's a matter of fiction, science fiction.

People attribute feelings and ambition to AI; those are human properties. That's where most of the irrational fear of AI originates.

No offense, but you don't seem to have really thought this through.
 
Man does he look tired, that production hell he's referred to a few times must be quite something :)

I think the danger of true smart AI is that we can't predict what it will do and how it will do it.

We look at the world and assess dangers through our own mental speed, some people are faster than others and can out-think them, say in business, or conversation. There's always people that can just stump you with their knowledge and application of that knowledge in real-time.

Still, those people are limited to a relatively normal human experience of time.

A smart AI would not have this limitation. Depending on the hardware it runs on it could be slower, as fast, or faster at taking on problems.

A fast smart AI could outmanoeuvre any human in the action-reaction game. Hook it up to the internet and let it gather data, and if it wanted to, it'd have its fingers in every system it liked in no time; we all know how shitty online security is, we read about leaks here daily.

The thing to me is, we're learning more about how the universe works every day, and there's loads that we don't know. What could a scientist figure out if he could retain all the information he comes across and cross-reference it in memory, if he could see "the big picture" of all that data at once, instead of chipping away at small bits because more simply doesn't fit in our minds?

This is firmly science fiction territory, but what these people are talking about is a system that is not only faster than us, but has much broader access to information in memory that it can work on. What could you do if you could use all the information you gather, and everyone around you moved once every hour while you move once every second?

Don't hook it up to the internet? What if it figures out how to flip bits in a few CPUs or RAM modules in such a way that they start functioning as an antenna, or how to carry data over the power line? Or, if you want to get really hard-core, what if it figures out things about the quantum world that we can't fathom yet?

Either way, a Fast Smart AI is going to be hooked up to construction machinery and the internet at some point in the future, either by accident or by design, and no human being or team of human beings can even remotely understand what it will do with those "freedoms".
 
If you want to convince anyone otherwise, provide arguments, because this here is weak even for an ad hominem.

Wasn't an attack of any kind, ad hominem or otherwise. I genuinely believe you haven't thought it through.

I was going to write a more in depth response, but the guy directly above me saved me the trouble. :)
 
Man does he look tired, that production hell he's referred to a few times must be quite something :)

I think the danger of true smart AI is that we can't predict what it will do and how it will do it.
Exactly, people don't know shit, and attribute human wants, desires, and ambitions to an AI.
People aren't born with the desire to rule the world either; it's a slow social process through which some acquire that desire, through perceived and actual injustices they experienced.
Also, humans have self-preservation coded into their genes; that's what keeps them from going extinct. Although these basic, rudimentary instincts do more harm nowadays, because it's no longer about running away from lions and fighting off a hyena. But my point is an AI doesn't have that self-preservation instinct, unless the designer puts it in the programming (the genes of the AI, if you will).

So what have we established? That AI doesn't have ambitions, and self-preservation is not a concept to it. These must be given to the AI by an outside influence.


We look at the world and assess dangers through our own mental speed, some people are faster than others and can out-think them, say in business, or conversation. There's always people that can just stump you with their knowledge and application of that knowledge in real-time.



Still, those people are limited to a relatively normal human experience of time.

A smart AI would not have this limitation. Depending on the hardware it runs on it could be slower, as fast, or faster at taking on problems.
And here is the next step. The AI can develop however fast it wants, but development cannot happen in a vacuum. Intelligence can only develop through stimuli, and it's up to us what stimuli we give a developing AI. We can control the direction and rate of its development by limiting the information it has access to.
A fast smart AI could outmanoeuvre any human in the action-reaction game. Hook it up to the internet and let it gather data, and if it wanted to, it'd have its fingers in every system it liked in no time; we all know how shitty online security is, we read about leaks here daily.
This assumes someone would give an unshackled AI direct access to the internet. No one advocates that we shouldn't take precautions. We merely think that AI is worth pursuing. In fact, it must be pursued, because you cannot suppress it; there will be someone, somewhere, who will do it. And I say we must be prepared. The greatest danger from AI I see is AI-driven viruses. We must develop AIs for that reason alone, because the only thing that has any hope of defeating an AI-assisted virus is another AI that thinks faster and can predict the virus's next move.

So yes, an AI can run amok on the internet and cause damage, but it poses no larger physical danger to humanity than hacker attacks do. If you're envisioning factories churning out T-800s, you're in science fiction territory.

This is firmly science fiction territory, but what these people are talking about is a system that is not only faster than us, but has much broader access to information in memory that it can work on. What could you do if you could use all the information you gather, and everyone around you moved once every hour while you move once every second?
Yes exactly, it's science fiction. Let's remain in the realm of science here.

Don't hook it up to the internet? What if it figures out how to flip bits in a few CPUs or RAM modules in such a way that they start functioning as an antenna, or how to carry data over the power line? Or, if you want to get really hard-core, what if it figures out things about the quantum world that we can't fathom yet?
You're attributing godlike powers to it now. Sure, it can figure out how to transfer data over the power line in theory, and? How can it alter the hardware it's installed on to be able to put that data on the power line? Even if it can figure out a way to modulate its CPU so that its power draw somehow manipulates the wall current and so on, who will be at the receiving end? Who will be influenced by signals fainter than the power fluctuations of the grid? How will it alter the hardware configuration of the computer next door to receive said signal? Receiving signals is even more difficult than emitting them. You might succeed in convincing me that it could use the RAM to emit some faint signal, but even that's far-fetched. I see no way it could alter the hardware to pick up signals the same way. Even if it has access to the receiving computer (which defeats the purpose of sending signals over alternate media), it is still bound by the hardware configuration.

Either way, a Fast Smart AI is going to be hooked up to construction machinery and the internet at some point in the future, either by accident or by design, and no human being or team of human beings can even remotely understand what it will do with those "freedoms".
OK, it gets access to construction machinery, which it can operate until the fuel runs out, and? It won't be able to pick up the fuel nozzle and refuel itself. Sure, it can cause some damage like any rogue individual, but doomsday? C'mon.
 
AI is not in our lifetime.
Fear of AI is irrational if the AI is plugged into a wall socket.
AI is happening right now, just not how people are talking about it. We're not going to have Terminators, Agent Smiths, or HAL 9000s wiping us out. We're going to have so many jobs taken over by automation that our economic system simply does not account for it, and we'll be hung out to dry with no adaptation plan at all. I mean, what exactly is the plan when you have more people who need work than there are jobs available?
 
AI is happening right now, just not how people are talking about it. We're not going to have Terminators, Agent Smiths, or HAL 9000s wiping us out. We're going to have so many jobs taken over by automation that our economic system simply does not account for it, and we'll be hung out to dry with no adaptation plan at all. I mean, what exactly is the plan when you have more people who need work than there are jobs available?
Universal basic income. A 4-hour working day, eventually a 4-hour working week. The future can be good. It's not an unsolvable issue; people just don't want to see/accept the solution, cuz "HE GETS SOMETHING FOR NOTHING? BROOOAAF" or "In our day we worked 100 hours, and we're still here." Meaning we're jealous that future generations could live much better than we did, so we will try to block it as hard as we can. Or "That's communism!"
These are the usual lines I hear.
Keeping jobs around that are no longer needed, or making up fake jobs so everyone can have one, was a thing under communism, and its legacy still lives with us today, even though communism ended here 27 years ago.
 
AI is happening right now, just not how people are talking about it. We're not going to have Terminators, Agent Smiths, or HAL 9000s wiping us out. We're going to have so many jobs taken over by automation that our economic system simply does not account for it, and we'll be hung out to dry with no adaptation plan at all. I mean, what exactly is the plan when you have more people who need work than there are jobs available?
Sabotage!
 
Crony capitalism and people who vote for whoever promises them the most freebies are the biggest danger to our future.

Are you talking about the "free" (paid for by Mexico) border wall?
 
Just goes to show that even really smart people like Elon don't know everything.

Smart people don't know everything, and they probably know that. It would be foolish for them to suggest otherwise.
But, since smart people don't know everything, how much less do dumb people know?
 
AI is happening right now, just not how people are talking about it. We're not going to have Terminators, Agent Smiths, or HAL 9000s wiping us out. We're going to have so many jobs taken over by automation that our economic system simply does not account for it, and we'll be hung out to dry with no adaptation plan at all. I mean, what exactly is the plan when you have more people who need work than there are jobs available?

Vote for Trump?
 
Are you talking about the "free" (paid for by Mexico) border wall?

No.
The wall will hopefully stop many of the people who come here looking for freebies and then illegally vote for the people who promise more.
 
No.
The wall will hopefully stop many of the people who come here looking for freebies and then illegally vote for the people who promise more.

Yeah, seems like a smart solution to stop some of the best tunnel diggers in the world, LOL. And please tell me how exactly they are able to vote with no ID or any proof of citizenship? These people tend to avoid any situation where they may be asked for ID, and they have no SSNs (the first thing one needs to provide to get any kind of government assistance). The ones I know are hard-working people who provide for their families, not people looking for "freebies." Not sure where you get your information, but it is highly inaccurate. Here in Chicago we have a whole community of people who fit your description, except they were born here. They are the real problem, not illegal immigrants.
 