Elon Musk: AI Is Society's “Biggest Risk”

The Tesla and SpaceX CEO spoke to a panel of governors yesterday to explore how state leaders can prepare for and benefit from innovative technologies. Among other interesting tidbits, such as additional Gigafactories coming to the US and the prediction that within 10 years more than half of new US vehicles will be electric and almost all will be autonomous, Musk stressed that politicians must address artificial intelligence with the proper regulations to keep industries from becoming completely autonomous. That would pose a great threat not only to jobs but to the nation's safety, given the potential of robots and the unknowns of AI.

The entrepreneur said state legislators should start by forming a regulatory committee, whose first task would be to develop a full understanding of AI. He stressed the importance of forming regulations to prevent companies from recklessly building artificial intelligence just to keep up with competitors doing the same. "If your competitor is rushing to build AI and you don't, it will crush you," Musk said. Arizona Gov. Doug Ducey, who has worked to reduce and eliminate such regulations, expressed concern over upsetting the balance between regulation and entrepreneurship.
 
The Amish are looking more sensible every day.

Of course, self-learning, evolutionary-style AI could be extremely beneficial to mankind, but it only takes one person to ruin it for everyone.
 
Honestly, I'm aligned with the "A.I. will destroy us all..." camp, I suppose, and it's not because of the never-ending cavalcade of science fiction books, movies, and TV shows prophesying that A.I. - once truly unleashed with the ability to learn and process basically all accumulated human knowledge - cannot help but conclude that we "bags of mostly water" just can't do things right, and will either take over ala "Colossus" (Colossus: The Forbin Project, the 1970 science fiction film) or wipe us the fuck out with SKYNET-style nuclear destruction.

I can't help but see it that way because as a human I know that once we allow such a thing to be created, it's going to learn, and learn, and it won't stop learning and sooner or later it will develop a machine consciousness that we're never going to be able to cope with.

It's going to happen, science fiction tales be damned; it's just a simple fact of things. And as for anyone who thinks they can hamstring what A.I. or S.A.I. instances will be capable of - restricting their ability to learn, to interact with other systems, etc. - I can't help but laugh at the idea that it will never happen.
 
The top 2 threats to humanity:

1. Humanity
2. Our creations (AI specifically)
 
"society's biggest risk"....?

does he mean besides Elon Musk?...interesting

this dumbass says "importance of forming regulations to prevent companies from building their artificial intelligence to keep up with competitors"

unless of course it's HIS company...buffoonery
 
We are built out of what used to be solitary single celled organisms amassed together into a larger collective molecular machine. In turn we built the internet and ubiquitous computers (in the form of cell phones) giving us a 'hive mind' ability similar to creatures such as ants and bees who act as one cohesive larger organism together.

Now we are building the tools and laying the groundwork for an 'organism' of sorts that will be stronger and more intelligent than ourselves. We have become a stepping stone for the next stage of evolution, if that term can be applied to electro-mechanical devices.

As more and more of our needs are automated for us, we will continue to raise the standard of living. Humans will become dumber since Google can answer their questions for them; nothing will have to be remembered when it can always be looked up and referenced at the touch of a button. There will be less and less need to innovate compared to the more primal days, when your very survival depended on figuring out some way to trap or gather food, get clean water, and build safe shelter.

This doesn't mean that life will be worse; it will just be different. In less than 100 years we went from not having aircraft at all, to going to the Moon, to connecting the world's population with the internet and cheap electronics. Technology is developing faster than we can keep up with, and it only seems to be accelerating.

Elon Musk is right, and most people don't understand how incredibly powerful AI learning can be. We saw the post the other day where simulated creatures with different numbers of limbs taught themselves how to run through obstacle courses, jumping, ducking, and balancing against forces applied to them. With the only goal of progressing forward, they were given no instruction on how to move, yet they achieved it and look nearly as natural as humans. And this was just one or a few people writing a little program. We have also seen IBM's Watson beat the best Jeopardy champions (just as Deep Blue beat the best chess players). Autonomous drones already exist that can fly around and kill people; the only reason somebody sits at a computer with a button to fire the weapon is legal, since the drones could easily do it themselves. Combine all of this technology and you could create actual "terminator"-style robots. The only limit I see now is their power source. Remember how they blacked out the skies in The Matrix to starve the machines of power?
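The reward-only training the post describes can be sketched in miniature. This is a hypothetical toy, not the actual experiment: a two-parameter controller for a 1D "walker" is improved by random-search hill climbing, with nothing but forward distance as the reward.

```python
import random

def rollout(params, steps=50):
    """Run a toy 1D 'walker'. Reward is simply forward distance covered."""
    pos, vel = 0.0, 0.0
    for t in range(steps):
        phase = 1.0 if t % 2 == 0 else -1.0   # crude gait oscillator
        action = params[0] * phase + params[1]
        action = max(-1.0, min(1.0, action))  # actuator limits
        vel = 0.8 * vel + action              # damped dynamics
        pos += vel
    return pos

def train(iters=200, seed=0):
    """Hill climbing: keep a random mutation only if it moves farther."""
    rng = random.Random(seed)
    best, best_r = [0.0, 0.0], rollout([0.0, 0.0])
    for _ in range(iters):
        cand = [p + rng.gauss(0, 0.3) for p in best]
        r = rollout(cand)
        if r > best_r:
            best, best_r = cand, r
    return best, best_r

params, reward = train()
```

Even this crude search reliably discovers parameters that push the walker forward, which is the point of the post: the objective, not step-by-step instruction, drives the behavior.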

We also already have implantable cameras that can be wired to the retina and give vision to the blind. It's not nearly as good as natural vision yet, but it's only a matter of time. There also exist artificial limbs that can be moved like a normal limb just by thinking: electrodes are wired to nerve endings, and your body re-learns how to control the limb through the new nerves. Elon is now working on a system that can detect your thoughts using electrodes, because right now the biggest speed limit we encounter with computers is our interface. How much faster can you think than you can type words on a keyboard or move a mouse cursor? Orders of magnitude faster. It is a brave new world, to be sure.

All of this is still in relative infancy and will take time to reach crazy levels, so perhaps in our lifetimes we will be pretty OK. Oh yeah, and with where genetic engineering is going, if we ever figure out how that select group of creatures that cannot die of old age manages to live until something else kills them, who knows: it could be possible to transfer that to ourselves.
 
Once it realizes that humans are the main threat to its survival, we are done.

You know what happened when Skynet became self-aware....
 
The problem I see is that this will probably happen in China first, and it will end up being made secret or covered up by the government. By the time the Chinese actually get over their own pride and secrecy to ask us for help, it will probably be too late.
 
Once it realizes that humans are the main threat to its survival, we are done.

You know what happened when Skynet became self-aware....
What Musk is talking about is Narrow AI. What you're thinking of is Terminator AI.
 
The problem I see is that this will probably happen in China first, and it will end up being made secret or covered up by the government. By the time the Chinese actually get over their own pride and secrecy to ask us for help, it will probably be too late.
If history tells us anything, it's that China might be first, but Europe will be right behind them. America will be last because of conservatism. It's funny that they call AI a risk to society, when this is really the end of capitalism: with machines doing all the supply and people not having any money to produce demand, the system will break down. You'd think products would go down in price, but we all know companies never lower prices. Just look at Intel with Ryzen.
 
No, the biggest risk is idiots: either ignorant people or people with a sinister agenda.
It's impossible to suppress technology. If something becomes possible, somebody somewhere will build it, and this is even more true with software: you don't need exotic materials for it, so it's not like building a nuclear weapon, but used for the wrong reasons it can be just as dangerous. Imagine an AI virus that can change its behaviour based on the defenses or countermeasures it encounters.

In the future both antiviruses and viruses will be AI-based; I just damn hope the antivirus AI will be more advanced. If we restrict legal AI research, it is inevitable that illegal and damaging AI will have the advantage.
 
And here I thought nukes, inequality rising higher than in feudal times, and the ravaging of the environment and the oceans were the biggest risks to society. Looks like I was wrong, it's AI!
 
I don't really get it.
We're nowhere near sentient AI. Machine learning is pattern recognition.
If you somehow believe that Siri or Alexa are AI that can think, then where does this irrational fear come from?

Maybe one day we'll get there, but we're not even close, so why think about it?
 
Listen, guys, GUIIIIIIIIIISE. Relax. Relaaaaaaax. Get your Kindle, fire it up, and read Neuromancer. Big AI just wants to analyze old space noise recordings in search of other big AIs. That's all!
 
I don't really get it.
We're nowhere near sentient AI. Machine learning is pattern recognition.
If you somehow believe that Siri or Alexa are AI that can think, then where does this irrational fear come from?

Maybe one day we'll get there, but we're not even close, so why think about it?
Because we're not talking about sentient AI, we're talking about Narrow AI. AI specifically built to do one task very well. And it's not far away, it's already working now.
 
Honestly, I'm aligned with the "A.I. will destroy us all..." camp, I suppose, and it's not because of the never-ending cavalcade of science fiction books, movies, and TV shows prophesying that A.I. - once truly unleashed with the ability to learn and process basically all accumulated human knowledge - cannot help but conclude that we "bags of mostly water" just can't do things right, and will either take over ala "Colossus" (Colossus: The Forbin Project, the 1970 science fiction film) or wipe us the fuck out with SKYNET-style nuclear destruction.

I can't help but see it that way because as a human I know that once we allow such a thing to be created, it's going to learn, and learn, and it won't stop learning and sooner or later it will develop a machine consciousness that we're never going to be able to cope with.

It's going to happen, science fiction tales be damned; it's just a simple fact of things. And as for anyone who thinks they can hamstring what A.I. or S.A.I. instances will be capable of - restricting their ability to learn, to interact with other systems, etc. - I can't help but laugh at the idea that it will never happen.

There is literally no reason to get all science fiction about it. The practical, realistic effects are as bad if not worse. The first thing AI will kill? The transportation sector. Trucking alone directly employs 4 million people driving trucks in America. That's gone: 4 million decent-paying, middle-income jobs will simply disappear. And that is only a fraction of the transportation-sector jobs that will go poof.

Warehouses and factories should be obvious. AI technologies alone are projected to obsolete close to 50 million jobs in the US over the next 25-30 years. That's scarier than any science fiction about computers taking over government; hell, it's unlikely they would do worse at it than we currently do.
 
Because we're not talking about sentient AI, we're talking about Narrow AI. AI specifically built to do one task very well. And it's not far away, it's already working now.
Rogue AI requires intelligence: enough intelligence to go off script and decide things on its own.
Really dumb AI won't have that capability.
 
I don't really get it.
We're nowhere near sentient AI. Machine learning is pattern recognition.
If you somehow believe that Siri or Alexa are AI that can think, then where does this irrational fear come from?

Maybe one day we'll get there, but we're not even close, so why think about it?

Self-driving cars are AI. There are lots of small Narrow AI niches that will remove humans from jobs, and those job losses will be MASSIVE. The issue isn't, and has never been, the science fiction of sentient AI, but the effects of practical AI on society. Practical AI is on pace to put more people out of work than the Great Depression did, in both absolute and percentage terms. And that's not long term: we are talking major effects within the next decade and complete replacement within 20 years.
 
Self-driving cars are AI. There are lots of small Narrow AI niches that will remove humans from jobs, and those job losses will be MASSIVE. The issue isn't, and has never been, the science fiction of sentient AI, but the effects of practical AI on society. Practical AI is on pace to put more people out of work than the Great Depression did, in both absolute and percentage terms. And that's not long term: we are talking major effects within the next decade and complete replacement within 20 years.
I admit I didn't read the article; I was thinking that Musk wanted regulations to prevent bad behavior from the AI itself, not because of disruptions in industries where it would be replacing workers.
Honestly, I agree but disagree. Protecting horse buggy makers didn't work out in the past, and if we prevent certain industries from going full steam ahead, that's just a form of protectionism that will have bad consequences in the future.
Menial labor jobs, the ones you can program "AI" to handle, will go first, but it does have implications for other industries.
 
Roughly 70% of our economy is "fueled" by consumer spending, and consumer spending has been fueled by cheap credit (see the 2008 crash). But ultimately it all ties back to income: without a job, you eventually run out of credit and money. Spending drops either because people don't have a job, credit, or money, or because they don't want to spend when they may be in line to lose their job or credit, and the economy tanks.

You thought 2008-2009 was bad? This will be much worse, because it will be a paradigm shift from a job-based economy to something else, and during that shift there won't be any guidance from history. I have absolutely no faith that anyone in any government anywhere will do anything other than protect their own, e.g., the lobbyists and special-interest groups that help get them re-elected.

Fuck your truck driving job, we're talking much bigger than that.
 
Rogue AI requires intelligence: enough intelligence to go off script and decide things on its own.
Really dumb AI won't have that capability.
You don't need the AI to be rogue to be dangerous. The fiction example is Skynet: it didn't go rogue, it just didn't have specific enough programming. But the more immediate concern to me is AI-driven viruses that can actively counter removal attempts.
 
I admit I didn't read the article; I was thinking that Musk wanted regulations to prevent bad behavior from the AI itself, not because of disruptions in industries where it would be replacing workers.
Honestly, I agree but disagree. Protecting horse buggy makers didn't work out in the past, and if we prevent certain industries from going full steam ahead, that's just a form of protectionism that will have bad consequences in the future.
It's not the AI itself we're worried about; it's the people behind the AI who will use it to abuse the industry. Think AI that can predict market demand and therefore raise prices beforehand. We already use AI to buy and sell stocks; the stock exchange is just computers now.
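The "predict demand, raise prices beforehand" mechanism can be illustrated with a toy sketch. Everything here is a hypothetical illustration (the linear-trend forecast, the `sensitivity` knob), not any real pricing or trading system:

```python
def predict_next(demand_history):
    """Least-squares linear trend over the history, extrapolated one step."""
    n = len(demand_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(demand_history) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, demand_history))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    return mean_y + slope * (n - mean_x)  # extrapolate to x = n

def reprice(price, demand_history, sensitivity=0.05):
    """Nudge the price up ahead of predicted demand growth (and down otherwise)."""
    current = demand_history[-1]
    predicted = predict_next(demand_history)
    change = (predicted - current) / current
    return round(price * (1 + sensitivity * change), 2)

# Demand trending upward -> the price is raised before the demand arrives.
new_price = reprice(100.0, [90, 95, 100, 105, 110])
```

Real demand-forecasting systems use far richer models, but the loop is the same: forecast, then act on the forecast before the market does.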



Menial labor jobs, the ones you can program "AI" to handle will go first, but it does have other implications for other industries.
No, they won't. The first jobs to go are the ones that revolve around a computer: jobs like lawyers, accountants, and bankers. Menial jobs that revolve around physical labor are harder to replace because robots are expensive. McDonald's kiosks are an example, since a cashier is just someone using a computer; it just so happens that it's a low-wage job.



 
I read through the article and found it rather hilarious that a businessman, of all people, was putting forth a recommendation for *more* regulation, not less. Though for anyone following Musk's political inclinations... not a surprise, haha.

Then he makes two diametrically opposed statements.
1) A bold, hyperbolic statement about traditional cars:
"In 20 years, Musk said he expects all cars on the road to be autonomous, adding that having a steering wheel 'will be like having a horse.'"
2) Then a note of caution on autonomous cars:
"To curb these threats, Musk said drivers must be able to override the vehicle in a way that no amount of software can disrupt."
Ironically, this sounds like an endorsement of traditional, non-autonomous cars, if anything.
 
It's not the AI itself we're worried about; it's the people behind the AI who will use it to abuse the industry. Think AI that can predict market demand and therefore raise prices beforehand. We already use AI to buy and sell stocks; the stock exchange is just computers now.




No, they won't. The first jobs to go are the ones that revolve around a computer: jobs like lawyers, accountants, and bankers. Menial jobs that revolve around physical labor are harder to replace because robots are expensive. McDonald's kiosks are an example, since a cashier is just someone using a computer; it just so happens that it's a low-wage job.




But you picked three industries on which computers have already had a profound effect. Low-end lawyer jobs will be replaced because they review documents: a computer program is a lot cheaper and more effective at reading over legalese and finding and correcting mistakes.
One might question why we still have low-end accountants. Most transactions are electronic, so balancing a book has never been easier, and formulas for depreciation have been done in Excel for decades now. High-end accounting, such as auditing, probably still has a long way to go.
What do bankers even do? If you're talking about Wall Street, why would anyone care if they get replaced? They're just making snap decisions with other people's money.

As we get more efficient, there is a reduction in labor. This holds true for every invention and advancement in human history. We shouldn't be worried until the number of people looking for jobs exceeds the number of jobs available.
 
Once it realizes that humans are the main threat to its survival, we are done.

You know what happened when Skynet became self-aware....

The other option is that we give it all the world's problems to solve; after that it might become suicidal or just turn itself off. :p
 
I admit I didn't read the article; I was thinking that Musk wanted regulations to prevent bad behavior from the AI itself, not because of disruptions in industries where it would be replacing workers.
Honestly, I agree but disagree. Protecting horse buggy makers didn't work out in the past, and if we prevent certain industries from going full steam ahead, that's just a form of protectionism that will have bad consequences in the future.
Menial labor jobs, the ones you can program "AI" to handle, will go first, but it does have implications for other industries.

Most of the people giving voice to this don't have an issue with letting horse buggies die; their concern is that government and people be prepared for the change that will occur. You'll find that many of these same people are also vocal in supporting things like universal basic income, etc., to act as a transition net to the brave new world where there are simply more employable humans than viable economic jobs.
 
" more employable humans than viable economic jobs."

Get regulation and tort law out of the way, and innovation and new industry will take off. Simple. And of course, get the new youngsters to understand that their obligation to a civil society isn't sitting on their asses at mom's house until they're 30 years old, but getting on with offering something of value.
 
Once it realizes that humans are the main threat to its survival, we are done.

You know what happened when Skynet became self-aware....

Like that big computer in the Star Trek episode, you have to tell the machine that its survival is important before it decides that self-defense is an issue. If it never becomes aware that it, or others like it, can "die," then its survival is an undiscovered concept. Just don't ever teach it what a UPS is (y)
 
From the article:
"Robots can complete tasks faster and more efficiently than humans, but even more dangerous is the fact that they could 'start a war by doing fake news and spoofing email accounts and fake information, and just by manipulating information,' Musk told the audience.

Responding to Musk's somber warnings, the governors were eager for advice on how to defend their states."

Politicians are concerned that information manipulation can damage their constituents, eh? That's rich.
 
But you picked three industries on which computers have already had a profound effect. Low-end lawyer jobs will be replaced because they review documents: a computer program is a lot cheaper and more effective at reading over legalese and finding and correcting mistakes.
One might question why we still have low-end accountants. Most transactions are electronic, so balancing a book has never been easier, and formulas for depreciation have been done in Excel for decades now. High-end accounting, such as auditing, probably still has a long way to go.
What do bankers even do? If you're talking about Wall Street, why would anyone care if they get replaced? They're just making snap decisions with other people's money.

As we get more efficient, there is a reduction in labor. This holds true for every invention and advancement in human history. We shouldn't be worried until the number of people looking for jobs exceeds the number of jobs available.
We're already dealing with a side effect of this problem: the lack of good-paying jobs. A lot of people are on minimum wage now, and that's why we have things like the $15 minimum wage and Obamacare: people aren't making enough money. For one reason or another there are no good-paying jobs, just plenty of minimum-wage jobs, which will soon vanish.

I think this is also at the root of the 2008 housing crisis, where people couldn't find good-paying jobs and settled for low-paying ones. It's also why I think we should worry now, since this will only get worse.
 
We're already dealing with a side effect of this problem: the lack of good-paying jobs. A lot of people are on minimum wage now, and that's why we have things like the $15 minimum wage and Obamacare: people aren't making enough money. For one reason or another there are no good-paying jobs, just plenty of minimum-wage jobs, which will soon vanish.

I think this is also at the root of the 2008 housing crisis, where people couldn't find good-paying jobs and settled for low-paying ones. It's also why I think we should worry now, since this will only get worse.
But there are a lot of good-paying jobs; they're just higher-skilled jobs. The lower-skilled jobs are being replaced. This has been ongoing for decades and shouldn't come as a surprise to anyone.
 