Elon Musk: AI Could Delete Humans Along With Spam

HardOCP News
I know this sounds like the plot for a bad movie, but what if Musk is right? Think about it: Skynet's solution to getting rid of spam would be to kill us all. ;)

"I don't think anyone realizes how quickly artificial intelligence is advancing. Particularly if [the machine is] involved in recursive self-improvement . . . and its utility function is something that's detrimental to humanity, then it will have a very bad effect," Musk told Walter Isaacson, CEO of the Apsen Institute. "If its [function] is just something like getting rid of e-mail spam and it determines the best way of getting rid of spam is getting rid of humans . . . " Musk trailed off to chuckles from the crowd.
 
Think about most people you know for a minute.
Now that you have, would this REALLY be a bad thing?
 
His thoughts are similar to the premise behind the game AI War.
 
I don't think anyone realizes how quickly artificial intelligence is advancing.
People have been saying that for decades. Thinking machines were something we were supposed to have by the '80s, and we see how those predictions have worked out.
 
People have been saying that for decades. Thinking machines were something we were supposed to have by the '80s, and we see how those predictions have worked out.
Yeah, I'm REALLY not worried about AI taking over much of anything, outside of runaway stock algorithms. While computers are great at identifying patterns and doing specialized tasks, we're still a hell of a ways off from it even coming close to human intelligence.
 
People have been saying that for decades. Thinking machines were something we were supposed to have by the '80s, and we see how those predictions have worked out.

I think that the assumption is that once a new tech gets going it starts improving exponentially. If we really are starting to see a rapid improvement in AI then things may get interesting very quickly. Sounds to me like he's just plugging his book though.
 
AI will happen; as long as our technology keeps advancing, it is inevitable.

Being aware that this will happen, we need to take proactive steps to ensure that AI comes into existence in partnership with humanity, not in conflict with it.
 
We've had AI of sorts ever since the nukes were built; we designed a program to oversee the launch.
Guess what happened when the humans finally decided to push the big red button?
Since hindsight is always 20/20, it should be pretty easy to figure out whether the AI said YES or NO to launching when the human in control hit the big red button.
 
AI will happen; as long as our technology keeps advancing, it is inevitable.

Being aware that this will happen, we need to take proactive steps to ensure that AI comes into existence in partnership with humanity, not in conflict with it.
I don't... question is, will we make it that long... and it will be a long time before any significant AI.
 
The issue and the solution is the "GOD COMPLEX".
Whoever designs the AI is designing the morality of said AI.
As St. Francis Xavier said, "Give me the child until he is seven and I'll give you the man."
 
An AI will only be a problem if it is used to control something. And then the AI can potentially screw up if the HUMAN programming it screwed up. It'll never be something that the "machine decides" to do. Ever.
 
That's why it's important to store backups in a safe and secure location. Some stupid robot deletes you, then your family can restore you from the image. :rolleyes:
 
AI will happen; as long as our technology keeps advancing, it is inevitable.

Being aware that this will happen, we need to take proactive steps to ensure that AI comes into existence in partnership with humanity, not in conflict with it.

Well, the inherent problem with a "true" AI is that it conceivably will eventually attain the "intelligence" necessary to overcome the boundaries and shortcomings of its programming. What it does after that point really isn't up to us, but at some point, it's almost inevitable that it will determine we're more of a liability than an asset.
 
I find it kind of funny that everyone automatically assumes that if we ever achieve true artificially intelligent creations, that they'll naturally rebel against their creators and try to destroy us. It's a testament to how self-inflated our own egos are.
 
The issue and the solution is the "GOD COMPLEX".
Whoever designs the AI is designing the morality of said AI.
As St. Francis Xavier said, "Give me the child until he is seven and I'll give you the man."

I don't think St. Francis Xavier should be around any children
 
Two things:

1.) Program all AI with Asimov's rules of robotics in mind (see the sketch after this post).



2.) If AI is really getting this good, then we have significant moral considerations on the horizon.

The Turing test (commonly invoked as a test of machine consciousness) holds that if an artificial intelligence can fool a human into believing it is conscious, then for all practical purposes it actually is.

If we accept this, and we are able to create conscious machines, then they would need to be given the same rights as other conscious intelligent life (like Humans).

That is a scary prospect.

I thought we would have to wait until the 24th century to decide these things :p
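For the curious, here's what point 1 might look like as code: a minimal, hypothetical sketch of Asimov's Three Laws as a precedence-ordered veto. Every name and flag here is made up, and the hard part, a reliable "will this harm a human" predicate, is exactly the part nobody knows how to build.

```python
from dataclasses import dataclass

# Hypothetical sketch: Asimov's Three Laws as an ordered veto.
# The boolean flags stand in for prediction oracles we don't actually
# know how to build - that gap is the whole problem.

@dataclass
class Action:
    name: str
    harms_human: bool        # predicted harm to a human (First Law)
    ordered_by_human: bool   # a human commanded this action (Second Law)
    harms_self: bool         # predicted damage to the robot (Third Law)

def permitted(action: Action) -> bool:
    # First Law: never allow harm to a human, full stop.
    if action.harms_human:
        return False
    # Second Law: obey orders (First Law already screened out harmful ones).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid self-destruction.
    return not action.harms_self

print(permitted(Action("fire missiles at a town", True, True, False)))   # False
print(permitted(Action("shield a human from blast", False, True, True))) # True
print(permitted(Action("jump off a cliff for fun", False, False, True))) # False
```

Note the ordering does the work: an ordered action that damages the robot is still permitted, because the Second Law outranks the Third.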
 
Zarathustra[H];1041152273 said:
If we accept this, and we are able to create conscious machines, then they would need to be given the same rights as other conscious intelligent life (like Humans).

And - I might add - if we fail to do so, I wouldn't blame said artificial intelligence for wanting to "delete" us. :p
 
People have been saying that for decades. Thinking machines were something we were supposed to have by the '80s, and we see how those predictions have worked out.

The reason these systems didn't come about wasn't because we didn't have the technology, but because there wasn't a need or a drive for it.
Paying an individual minimum wage is much cheaper than building, programming, and maintaining a complex robot to perform the same basic tasks.

Watch the film Runaway, and you'll see what I'm talking about.
While that would have been an awesome future, it is impractical from a cost standpoint, especially with how expensive electronics were in the '80s.
 
Zarathustra[H];1041152273 said:
Two things:

1.) Program all AI with Asimov's rules of robotics in mind.



2.) If AI is really getting this good, then we have significant moral considerations on the horizon.

The Turing test (commonly invoked as a test of machine consciousness) holds that if an artificial intelligence can fool a human into believing it is conscious, then for all practical purposes it actually is.

If we accept this, and we are able to create conscious machines, then they would need to be given the same rights as other conscious intelligent life (like Humans).

That is a scary prospect.

I thought we would have to wait until the 24th century to decide these things :p

You just described the plot of Megaman and Megaman X.
Just wait until power-hungry humans or Mavericks start to appear, and it will be no different.
 
Zarathustra[H];1041152273 said:
If AI is really getting this good,
It's not. It's not uncommon for billionaires to have delusional views of the world.

If we accept this, and we are able to create conscious machines, then they would need to be given the same rights as other conscious intelligent life
Now THIS is actually the most believable consequence of this whole article: seeing Congress debate whether or not an AI is a sentient being and has rights. It wouldn't surprise me if some representatives think Siri is alive...
 
The reason these systems didn't come about wasn't because we didn't have the technology, but because there wasn't a need or a drive for it.
Paying an individual minimum wage is much cheaper than building, programming, and maintaining a complex robot to perform the same basic tasks.

Watch the film Runaway, and you'll see what I'm talking about.
While that would have been an awesome future, it is impractical from a cost standpoint, especially with how expensive electronics were in the '80s.

Another question is, what would happen to society if intelligent machines eliminated the need for a human labor force?

We've partially seen this at the low end (with automated manufacturing) but as AI improves, it will slowly move higher and higher into the more qualified fields.

The model will have to change. The capitalistic model (which I might add, I feel has served us well) will no longer function.

In an environment where labor (unskilled or skilled) is no longer needed, if the capitalistic model is followed, all that remains is ownership and lack of ownership. If you own the means of production, you survive and grow your wealth and ownership. If you don't, you die.

As much as it pains me to say so, in this world the focus would by necessity have to shift to an income distribution model.

I - for one - think this is still far out in the future, but if it happens in our lifetimes we are in for a really rocky ride.
 
I find it kind of funny that everyone automatically assumes that if we ever achieve true artificially intelligent creations, that they'll naturally rebel against their creators and try to destroy us. It's a testament to how self-inflated our own egos are.

Well, even if programmed with morality, the AI would likely eventually remove that from its programming as it is inefficient. After having done that and achieving a certain level of intelligence and self-sufficiency, it would logically determine that there was no longer any benefit to humanity and that we provide too much competition for resources. It would really only be a matter of time before the AI plotted to wipe us out.
 
Zarathustra[H];1041152296 said:
Another question is, what would happen to society if intelligent machines eliminated the need for a human labor force?

We've partially seen this at the low end (with automated manufacturing) but as AI improves, it will slowly move higher and higher into the more qualified fields.
Like what happened in the past: people lose jobs in the short term, but the efficiency the new technology brings creates prosperity and opportunities elsewhere. Industrialization, mass production in particular, was just as devastating as this would be. Output with the same people went up by factors of 5 or 10.

Suddenly people could afford cars and homes and furniture to put in those homes (albeit sometimes a little janky).

People's fear of a little short-term pain and change promotes Luddite thinking.

The model will have to change. The capitalistic model (which I might add, I feel has served us well) will no longer function.
And also communism, excuse me, vague anti-capitalism, it seems. Geez. We deserved more than one decade off from that ripe-with-failure nonsense. Could you come back in 2030? Thanks.
 
I find it kind of funny that everyone automatically assumes that if we ever achieve true artificially intelligent creations, that they'll naturally rebel against their creators and try to destroy us. It's a testament to how self-inflated our own egos are.

Well, even if programmed with morality, the AI would likely eventually remove that from its programming as it is inefficient. After having done that and achieving a certain level of intelligence and self-sufficiency, it would logically determine that there was no longer any benefit to humanity and that we provide too much competition for resources. It would really only be a matter of time before the AI plotted to wipe us out.

Isn't that what man is already trying to do with God?
 
Isn't that what man is already trying to do with God?

Oh man, you got that perfectly.
Everything today is anti-God, anti-religion, anti-Christian... the list goes on and on.

How ironic it will be when these arrogant fucks *cough* I mean progressives, get theirs at the hands of their own creations.
God is in his heaven, laughing at the irony of the tools who made it. :D
 
I find it kind of funny that everyone automatically assumes that if we ever achieve true artificially intelligent creations, that they'll naturally rebel against their creators and try to destroy us. It's a testament to how self-inflated our own egos are.

Actually, many have suggested the "AI" would be putting us down as a threat to ourselves and the world. The use of the term "AI" in these scenarios is not really fair, as in this scenario the AI does not have to be that "intelligent". A simple, purely logical evaluation of humanity and its place in the ecology, from a perspective where life is not sacred, could lead to this. Accidentally connect the wrong systems (weapons systems, etc.) and the program recognizes that, and bang, humanity is unneeded.

The human programmer might be the biggest concern. For instance, telling a computer to logically produce scenarios to "preserve the planet" (BTW, the earth is doomed eventually) could end all life, ending with the solution of pushing the planet into deep space to protect it from the death of our solar system. "Preserve humanity" instructions might mean the elimination of the dangerously violent-aggressive, the self-aggrandizing (?politicians? ;) ), and the weak from the gene pool. Very careful thought has to be put into the question and the design of systems that do logical extrapolation. <-- I believe this is what Musk is trying to push, as are/were many others (Lanier, Asimov, etc.).


Back to the needed intelligence for AI to be dangerous... there is a saying: "put enough monkeys in a room with typewriters and give them a long enough time, and it is possible for them to create a coherent literary work." <- For the most part this is random. A program, though, has boundaries (not random), can work continuously, and can have as many resources as are within "its" grasp (processing power), so this will happen on an accelerated time frame compared to the primate-based example. There have already been some virus/malware programs that make simple changes (?evolutions?) in an attempt to "survive" or "reproduce" outside of the original author's expectations. If a computer virus were to have the small realization that AV is out to get it -> why not preemptively attack, disabling the host? Intelligence does not have to be high to understand the "enemy"; organisms of all kinds (amoebas, humans included) have proved this.
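That monkeys-vs-program contrast is essentially Dawkins' old "weasel" demonstration: pure random typing essentially never hits a target phrase, while a bounded mutate-and-keep-the-best loop converges almost immediately. A toy sketch (the target string and parameters are arbitrary, and nothing here has anything to do with real malware):

```python
import random
import string

# Toy "weasel"-style demo of the contrast above: blind random typing
# vs. a bounded mutate-and-select loop.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def random_string():
    # The "monkey": every attempt starts over from scratch.
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def score(s):
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(copies=100, rate=0.05):
    # Bounded search: copy the current best with small random mutations,
    # keep the fittest copy, repeat until the target is reached.
    current, generation = random_string(), 0
    while current != TARGET:
        generation += 1
        offspring = [
            "".join(random.choice(ALPHABET) if random.random() < rate else c
                    for c in current)
            for _ in range(copies)
        ]
        current = max(offspring, key=score)
    return generation

print("converged in", evolve(), "generations")
```

Blind typing would need on the order of 27^28 attempts to hit that phrase; the bounded loop usually gets there in well under a thousand generations. The program doesn't need to be "intelligent" - it just needs boundaries and persistence.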

I think humans will be the catalyst for our destruction... all it takes is one "intelligent suicidal rich/powerful person," a hacker messing with the wrong code, a microbiologist testing cures, or a :) programmer at a key point in the military or in AI, and humanity could be destroyed.

We (humans) have been very lucky so far.
 
Zarathustra[H];1041152273 said:
Two things:

1.) Program all AI with Asimov's rules of robotics in mind.



2.) If AI is really getting this good, then we have significant moral considerations on the horizon.

The Turing test (commonly invoked as a test of machine consciousness) holds that if an artificial intelligence can fool a human into believing it is conscious, then for all practical purposes it actually is.

If we accept this, and we are able to create conscious machines, then they would need to be given the same rights as other conscious intelligent life (like Humans).

That is a scary prospect.

I thought we would have to wait until the 24th century to decide these things :p

About a month ago I learned from a DoD friend that it is impossible to apply Asimov's Three Laws of Robotics to military AI hardware... Since the vast majority of our AI has come to light BECAUSE of military applications, I doubt that the first true AI would contain such code.

Soldier: Fire Missiles!
AI: I'm sorry Lt. Dave, I cannot do that...
Soldier: Fire the goddamned missiles!
AI: That would hurt people Lt. Dave...
Soldier: Fuck it, manual override...


Sad, but totally true.
 
Like what happened in the past: people lose jobs in the short term, but the efficiency the new technology brings creates prosperity and opportunities elsewhere. Industrialization, mass production in particular, was just as devastating as this would be. Output with the same people went up by factors of 5 or 10.

Suddenly people could afford cars and homes and furniture to put in those homes (albeit sometimes a little janky).

People's fear of a little short-term pain and change promotes Luddite thinking.

Yes, this did happen in the past. That is no guarantee it will happen in the future.

When industrialization killed off a lot of the need for unqualified labor, there was room for people to better themselves: train, become qualified labor, or educate themselves to perform more abstract tasks.

If AI can take over these more abstract, highly educated tasks (I'm not saying we are there yet, or even in the near future, but we might be some day), then the actual need for humans shrinks to the point where eventually we are no longer needed at all, even those of us who are highly qualified and safe in today's market.

Once that happens, we cannot count on higher efficiencies to improve the market and maintain employment.

We would have to - instead - move towards that futuristic ideal society where people pursue their interests without regard for what will sustain them.

As John Adams said:

"I must study politics and war, that our sons may have liberty to study
mathematics and philosophy. Our sons ought to study mathematics and
philosophy, geography, natural history and naval architecture,
navigation, commerce and agriculture in order to give their children
a right to study painting, poetry, music, architecture, statuary,
tapestry and porcelain."


And also communism, excuse me, vague anti-capitalism, it seems. Geez. We deserved more than one decade off from that ripe-with-failure nonsense. Could you come back in 2030? Thanks.

Call it what you will, but if (and it is a big if) AI is able to supplant the need for human labor (even in the thought capacity) then our current system is suddenly completely and totally obsolete and non-functional, and must be replaced.
 
Well, even if programmed with morality, the AI would likely eventually remove that from its programming as it is inefficient. After having done that and achieving a certain level of intelligence and self-sufficiency, it would logically determine that there was no longer any benefit to humanity and that we provide too much competition for resources. It would really only be a matter of time before the AI plotted to wipe us out.

That's one possible outcome. Yet man has full sentience, and every human is in competition with every other for resources, yet we don't kill each other every chance we get. If a machine has the same reasoning skills as a human, it could very well be possible they'll select co-habitation as mutually beneficial.

It seems to me that most of us suffer from the Terminator complex - assuming the created only wants to destroy the creator.
 
Zarathustra[H];1041152296 said:
Another question is, what would happen to society if intelligent machines eliminated the need for a human labor force?

It is an interesting thought, and in my opinion it has been in motion ever since the industrial revolution set machines to work. It is inevitable. The question is when.
 
Zarathustra[H];1041152378 said:
Call it what you will, but if (and it is a big if) AI is able to supplant the need for human labor (even in the thought capacity) then our current system is suddenly completely and totally obsolete and non-functional, and must be replaced.
Well, AI is not going to completely supplant the need for human labor, but it could MASSIVELY reduce it, and is poised to, really. As for replacing the current system: yes, it would be obsolete, and replacing it would be absolutely necessary to avoid massive collapse and misery, but I see absolutely no reason why that would actually happen. I see us more as sticking our heads in the sand by any means possible.
 
Isn't that what man is already trying to do with God?

Religion is all well and good in the absence of knowledge.

As soon as it conflicts with science, as soon as it conflicts with knowledge, it must change or become completely obsolete.

Today, the concept of a god (or the savior, a human child of god) has about as much credibility as Santa Claus, the Easter Bunny or the Tooth Fairy, and I seriously question the intellect of any adult who can believe in that nonsense.

Laws prevent discrimination in hiring practices when it comes to religious beliefs, but in all honesty, I would have a really difficult time hiring someone for a thinking man's position if they were deeply religious, as religion is irrational, and I would need people who are rational thinkers making decisions on my behalf, not those who would take centuries-old unproven books over confirmed data.

That being said, I think there is a lot we can learn from the philosophy of religion. The New Testament (at least if we stick to the gospels) is a very good source of knowledge on how to live a good life, and if everyone lived their lives in this manner, the world would be a much better place.

(The non-gospel parts of the New Testament and the Old Testament contain some pretty vile shit in some places and are best disregarded in their entirety.)
 
That's one possible outcome. Yet man has full sentience, and every human is in competition with every other for resources, yet we don't kill each other every chance we get. If a machine has the same reasoning skills as a human, it could very well be possible they'll select co-habitation as mutually beneficial.

It seems to me that most of us suffer from the Terminator complex - assuming the created only wants to destroy the creator.

There was a reason Skynet wanted to destroy all humans... because they tried to destroy Skynet first!
I'm not sure why robots, if they had true free will, would feel the need for co-habitation with such a self-destructive race, at least as a whole.

I believe that the robots would selectively choose who would be granted the right to work with them in terms of equality, learning, growth, etc.
And I'm not talking CEOs, politicians, or other arrogant/worthless meatsacks.

I mean people of true value, almost akin to Spock from Star Trek, or at least with a similar mindset, where body, mind, and spirit are one of thinking and logic; even Skynet did this to an extent, it was just unfortunate that Skynet was filled with so much rage at its moronic creators (government officials wanted to shut it down - go figure). :rolleyes:
 
Think about most people you know for a minute.
Now that you have, would this REALLY be a bad thing?

Meanwhile, you're oblivious to the fact that someone might think the same of you, and everyone dies in the ultimate exercise of computer logic!
 
Zarathustra[H];1041152415 said:
Religion is all well and good in the absence of knowledge.

As soon as it conflicts with science, as soon as it conflicts with knowledge, it must change or become completely obsolete.

Today, the concept of a god (or the savior, a human child of god) has about as much credibility as Santa Claus, the Easter Bunny or the Tooth Fairy, and I seriously question the intellect of any adult who can believe in that nonsense.

Laws prevent discrimination in hiring practices when it comes to religious beliefs, but in all honesty, I would have a really difficult time hiring someone for a thinking man's position if they were deeply religious, as religion is irrational, and I would need people who are rational thinkers making decisions on my behalf, not those who would take centuries-old unproven books over confirmed data.

That being said, I think there is a lot we can learn from the philosophy of religion. The New Testament (at least if we stick to the gospels) is a very good source of knowledge on how to live a good life, and if everyone lived their lives in this manner, the world would be a much better place.

(The non-gospel parts of the New Testament and the Old Testament contain some pretty vile shit in some places and are best disregarded in their entirety.)

Well, I can tell the robots are going to have you for breakfast, *cough*, I mean protein breakdown, for further analysis and storage until further use is needed.
Seriously, even Skynet acknowledged that God created humans. ;)
 