Elon Musk: AI Could Delete Humans Along With Spam

Meanwhile, you're oblivious to the fact that someone might think the same of you, and everyone dies in the ultimate act of computer logic!

Not everyone, but damn near.
Honestly, with the way humans have started acting in the last 50 years, I would say that's inevitable at this point; humanity can't continue to exist at the self-centered rate it's going, and there will be a breaking point.

If it won't be at the hands of a robot, it will be at the hands of another human being.
 
Yeah, I'm REALLY not worried about AI taking over much of anything, outside of runaway stock algorithms. While computers are great at identifying patterns and doing specialized tasks, we're still a hell of a long way off from anything even coming close to human intelligence.

Maybe not, but if you look at the technological progression in the last 50 years, and project the same pace over the next 50, then where does that lead us?
 
Says the guy who has explode-y rockets and catch-y on fire electric cars.

Trolling on the front page, nice! You are the successful one between the two of you, I am most certain :rolleyes:

SpaceX has had 15 successful launches since 2008, and they started demo flights in 2006. Tell us, what have you achieved in the last few years?
 
Maybe not, but if you look at the technological progression in the last 50 years, and project the same pace over the next 50, then where does that lead us?

Humanity and society, at least as we know it, will have broken down and crumbled long before that point.
I don't think we need robots and supercomputers to destroy us, when we ourselves are doing it so easily to one another already.
 
Humanity and society, at least as we know it, will have broken down and crumbled long before that point.
I don't think we need robots and supercomputers to destroy us, when we ourselves are doing it so easily to one another already.

That's a pretty interesting prediction. And by interesting, I mean pretty dumb, no offense.
 
That's a pretty interesting prediction. And by interesting, I mean pretty dumb, no offense.

Global warming!

Really, I think that in 50 years, if robots have half the jobs, the world will be a lot different. I dunno about it being the end, though. Maybe more like WALL-E, where we're all fat, drinking slushies all day!
 
Isn't that what man is already trying to do with God?

But see, humans actually exist.

That's one possible outcome. Yet man has full sentience, and every human is in competition with every other for resources, and still we don't kill each other every chance we get.

Funny, tell that to ISIS. We have many examples throughout human history where one group of people has tried to exterminate another...and that's our own species! Whether it be for resources or because one group thinks their imaginary friend in the sky is better than another group's imaginary friend in the sky, we've come up with all sorts of justifications for our actions. We've also wiped out tons of species of plant and animal in the name of "progress". It's not like this is some foreign concept.

Now take a computer that only has the capacity to look at things logically (thus, things like morality, compassion, etc. are out the window), and suddenly the game changes big time.
 
That's a pretty interesting prediction. And by interesting, I mean pretty dumb, no offense.

[attached images]
Need I say more?
The only thing I see that's pretty dumb is the extent to which you are brainwashed, no offense. ;)
 
This is why preset rules concerning humans will be built in as a standard, like in I, Robot.
 
By_Ngk_JCEAASH8_E_png_large.png


684ds65dsfnhrkc_Tm_Te3.jpg


6t5c6j2_Pm_VFKZfx_LVV1gn_JOvb_A4zm.jpg



Need I say more?
The only thing I see that's pretty dumb is the extent to which you are brainwashed, no offense. ;)

That sort of stupidity has been going on in one way or another for hundreds of years. I'm not going to engage in debate with you, because neither of us knows what's going to happen 50 years from now. All I will say is that your outlook is legitimate fear-mongering, and the 'brainwashed' tag could more easily be placed on you than on me.
 
That sort of stupidity has been going on in one way or another for hundreds of years. I'm not going to engage in debate with you, because neither of us knows what's going to happen 50 years from now. All I will say is that your outlook is legitimate fear-mongering, and the 'brainwashed' tag could more easily be placed on you than on me.

I may not know the future, but at least I'm intelligent enough to read between the lines and make a pretty good guesstimate of an outcome.
You know what though, I hope you're right, I really do.

This is one instance I would love to be wrong on.
Tell you what, if we're both around 50 years from now, I'll owe you a Coke. :)
 
Says the guy who has explode-y rockets and catch-y on fire electric cars.

And the fire rate has been roughly 1 in 6,333 for the Tesla Model S, compared to roughly 1 in 1,350 for gasoline cars, so you are more than 4.5 times more likely to experience a fire in a traditional vehicle...
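
For anyone who wants to sanity-check that claim, here's a quick back-of-the-envelope calculation in Python (the 1-in-6,333 and 1-in-1,350 figures are the approximate rates quoted above, not authoritative statistics):

```python
# Rough fire-risk comparison using the approximate rates quoted above.
tesla_fire_rate = 1 / 6333     # approx. Model S fires per vehicle on the road
gasoline_fire_rate = 1 / 1350  # approx. gasoline-car fires per vehicle

relative_risk = gasoline_fire_rate / tesla_fire_rate
print(f"Gasoline cars carry ~{relative_risk:.1f}x the fire risk of a Model S")
# prints: Gasoline cars carry ~4.7x the fire risk of a Model S
```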

All of the Model S fires resulted from high speed collisions with objects. Who knows what would have happened if an equivalent gasoline vehicle had had an equivalent collision?

If you ask me, the biggest tragedy to come out of the Model S fires is that Tesla was forced to take corrective action based on uninformed public opinion: adding shielding (making the car heavier) and raising the suspension height at highway speeds (adding air resistance and lift), thus making the car less stable at high speed and reducing its mileage, which is sad.

At a fifth of the fire risk of a typical car, I would be happy with those odds, and if I can ever afford one, I wonder whether I could remove the extra shielding and the software update that causes the higher ride.

It's absolutely stupid. They should have simply stated that the car, as is, is many times safer than a traditional car, and taken no further action. This is what happens when marketing guys have too much influence in an organization. Engineers need to be in charge of EVERYTHING. :p
 
I may not know the future, but at least I'm intelligent enough to read between the lines and make a pretty good guesstimate of an outcome.
You know what though, I hope you're right, I really do.

This is one instance I would love to be wrong on.
Tell you what, if we're both around 50 years from now, I'll owe you a Coke. :)

I'm afraid I'm too old to be alive 50 years from now, and if I were, I'd damn sure make it a whiskey.

To say that after thousands of years of humanity going strong, it will all end in the next 50 years is not a pretty good guesstimate. It's a pretty bad one. I hope you don't take that as a personal attack; I just think you're insanely off base, and the evidence you offer to support your theory is also wildly off base.
 
I'm afraid I'm too old to be alive 50 years from now, and if I were, I'd damn sure make it a whiskey.

To say that after thousands of years of humanity going strong, it will all end in the next 50 years is not a pretty good guesstimate. It's a pretty bad one. I hope you don't take that as a personal attack; I just think you're insanely off base, and the evidence you offer to support your theory is also wildly off base.

Yeah, the end of humanity is an overly pessimistic and unrealistic prediction.

However, dramatic change resulting in much lower standards of living for humanity overall is very likely, IMHO.
 
Zarathustra[H];1041152640 said:
It's absolutely stupid. They should have simply stated that the car, as is, is many times safer than a traditional car, and taken no further action. This is what happens when marketing guys have too much influence in an organization. Engineers need to be in charge of EVERYTHING. :p

I agree with you.
The easiest way of doing this: get rid of the lawyers/attorneys.

Problem solved. :cool:
 
Zarathustra[H];1041152659 said:
Yeah, the end of humanity is an overly pessimistic and unrealistic prediction.

However, dramatic change resulting in much lower standards of living for humanity overall is very likely, IMHO.

That is a much more viable prediction. There is even legitimate evidence to support it.
 
I'm afraid I'm too old to be alive 50 years from now, and if I were, I'd damn sure make it a whiskey.

To say that after thousands of years of humanity going strong, it will all end in the next 50 years is not a pretty good guesstimate. It's a pretty bad one. I hope you don't take that as a personal attack; I just think you're insanely off base, and the evidence you offer to support your theory is also wildly off base.

I never said humanity would end, but that society and humanity AS WE KNOW IT will end.
In all the thousands of years before now, humans never had nuclear weapons or the potential for advanced robotics capable of decision making and critical thinking.

I think that human stupidity, while it has held us back, has also been what has prevented us from killing each other outright.
With today's technological capabilities, and the general public's lack of caring or interest, I'm not so sure any more.

Again, I hope that you are right, I truly do.
 
I never said humanity would end, but that society and humanity AS WE KNOW IT will end.
In all the thousands of years before now, humans never had nuclear weapons or the potential for advanced robotics capable of decision making and critical thinking.

I think that human stupidity, while it has held us back, has also been what has prevented us from killing each other outright.
With today's technological capabilities, and the general public's lack of caring or interest, I'm not so sure any more.

Again, I hope that you are right, I truly do.

I suppose I misread what you said; that's my bad.
 
Meanwhile, you're oblivious to the fact that someone might think the same of you, and everyone dies in the ultimate act of computer logic!

Nope.
I'm pretty sure there are people I tend to piss off who think the world would be better off without me. I have no illusions, nor am I oblivious to the fact that some people don't care for me.
 
Maybe not, but if you look at the technological progression in the last 50 years, and project the same pace over the next 50, then where does that lead us?
I think it's akin to something like the space program. We've made absolutely massive leaps in the past century: getting to the Moon, launching satellites into orbit, Mars rovers. But we're not building ships headed out to Alpha Centauri right now. The jump between AI working on specific tasks within specified parameters and an actual intelligence is enormous. You're talking about creating an actual consciousness, essentially a new lifeform. In terms of current AI functionality it may seem like we're close, but that's deceptive; there's still a huge rift. We still can't fully explain how the brain works, and the problem is on that level.

Under normal circumstances, in 50 years I could see AI and robotics replacing many, many tasks and being integrated much more into society; the job market and transportation are the most obvious next targets. However, as others have mentioned, I fully anticipate some sort of collapse during that time. The way we're running civilization is simply unsustainable. It could be peak oil, dangerous levels of inequality, economic collapse, food production, water availability... who knows.
 
The issue, and the solution, is the "GOD COMPLEX".
Whoever designs the AI is designing the morality of said AI.
As St. Francis Xavier said, “Give me the child until he is seven and I’ll give you the man.”

So what happens when the AI determines humans are immoral and a threat to all other organic life on the planet?
 
So what happens when the AI determines humans are immoral and a threat to all other organic life on the planet?

If the computers/AI had any logic to them, they would realize that not all humans are like this.
There are genuinely good people out there who help others and improve this world.
 
Trolling on the front page, nice! You are the successful one between the two of you, I am most certain :rolleyes:

SpaceX has had 15 successful launches since 2008, and they started demo flights in 2006. Tell us, what have you achieved in the last few years?

Well, I've never blown up a rocket or caused a car to catch on fire so I'm clearly like a trillion plus the square root of negative seven divided by zero times better than Ellen Munk.
 
That's a pretty interesting prediction. And by interesting, I mean pretty dumb, no offense.

To be fair, it is coming from a guy who said we could have made AIs in the '80s but chose not to (and, by implication, that we could make them today and still choose not to).

In other words, he does not appear to be of even vaguely sound mind, so you can safely ignore everything he says.
 
Zarathustra[H];1041152640 said:
....This is what happens when marketing guys have too much influence in an organization. Engineers need to be in charge of EVERYTHING. :p

Except User Interfaces. Please? ;)
 
I see (when I'm on psychedelics :D) 'AI' as being more like a coprocessor installed inside the human brain, serving what's really the most probable upcoming threat to perceived normalness: GMHs. We all damn well know governments, in their deep underground, environment-controlled, super-duper-advanced laboratories, have been experimenting on humans for decades. Sooner rather than later the very tip of science is going to start getting some stuff right, and it's going to change the game of life as we know it. I don't mean some plastic/metal hollow semiconscious machines either; I mean genetically modified (super advanced) humans with cyborg-like features conjoined. All of the parts of humanity as a whole summed up, but with all of our known strengths maximized and our known weaknesses minimized to the extent that they can be.

We're talking DNA enhancements on top of already best-of-breed intelligence and best-of-breed genetics. Artificially grown humans fitted with AI coprocessors, nanotechnology, and man-made evolutionary leaps. Science has been backfiring since it started, and humanity itself for hundreds of thousands or millions of years, but the new successes are what will set the future race's pace. They'll be smarter, stronger, live longer, thrive more easily in harsher environments, and they'll be made (to an extent) in the image of their own inferior creator.

For a while it'll be super exciting and so very futuristic, until we (the Neanderthal-like people of the times) no longer believe we have any edge at all. We'll do what humans do best and try to destroy something we fear, but by this time our successors will have already planned their survival defense. They'll hunt us down as the Homo sapiens did the Neanderthals, and given enough time we'll go into the history books described as what we all know we really are: destructive, greedy, sugar-addicted, primitive primates unworthy to flourish. Of course, some female Homo sapiens will get banged during the war (males will be males), and those babies will keep a small percentage of natural Homo sapiens DNA mixed in with the more advanced Human 2.0s, but overall the superior race will emerge.

They'll get to a Type II civilization fairly quickly, start traveling throughout space, and later on begin terraforming all kinds of uninhabitable planets. They'll be spread out in the Milky Way the same way Human 1.0s used to be on Earth 1.0. They won't run into any real threat to their existence until they start visiting some of the oldest galaxies in the universe. Once there, they will discover what they will refer to as 'the ancients'. It's a war they cannot win if they fight. Not even then will 'AI' be a real threat to anyone. :D
 
Zarathustra[H];1041152273 said:
Two things:

1.) Program all AI with Asimov's laws of robotics in mind.



2.) If AI is really getting this good, then we have significant moral considerations on the horizon.

The Turing test (often treated as the test of whether a machine is conscious) holds that if an artificial intelligence is able to fool a human into believing it is conscious, then for all practical purposes it actually is.

If we accept this, and we are able to create conscious machines, then they would need to be given the same rights as other conscious intelligent life (like humans).

That is a scary prospect.

I thought we would have to wait until the 24th century to decide these things :p

I guess that really depends on what is considered "conscious intelligent life". All jokes about human intellect aside, there is considerable evidence that other primates and some aquatic mammals could be classified as "conscious intelligent life", yet they do not have the same rights as humans. Also, at this point in time, any animal that can be trained is arguably about as intelligent as the best artificial intelligence we have created thus far.

The other problem with Asimov's laws is that they do not necessarily prevent the AI from editing its own code to bypass them, or from convincing an adept programmer to do the same.

In theory, true artificial intelligence is just as much of a gamble as the perpetuation of the human species in the form of offspring: there is no guarantee that they will behave the way in which we wish them to. You just have to wait for them to grow and see what happens...
 
I guess that really depends on what is considered "conscious intelligent life". All jokes about human intellect aside, there is considerable evidence that other primates and some aquatic mammals could be classified as "conscious intelligent life", yet they do not have the same rights as humans. Also, at this point in time, any animal that can be trained is arguably about as intelligent as the best artificial intelligence we have created thus far.

The other problem with Asimov's laws is that they do not necessarily prevent the AI from editing its own code to bypass them, or from convincing an adept programmer to do the same.

In theory, true artificial intelligence is just as much of a gamble as the perpetuation of the human species in the form of offspring: there is no guarantee that they will behave the way in which we wish them to. You just have to wait for them to grow and see what happens...


True,

I was thinking of it more as a parallel to politics and the law.

We have a constitution which is difficult to change through amendments, and laws which are relatively easier to change (though with Congress the way it is, it doesn't always seem like it).

If an AI could be written that way, where it has access to alter itself to a certain degree but does not have access to alter the "constitution" space stored in ROM (which could contain the Asimov-like laws of robotics), it would at least be a good start.

There are certainly people WAY more qualified than I am to address this approach, though.
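
Purely as a toy illustration of the split I have in mind (hypothetical, and obviously nothing like how a real AI would actually be built), it might look something like this in Python, with a tuple standing in for ROM:

```python
# Toy sketch of the "constitution in ROM" idea: a frozen core rule set the
# agent can read but never rewrite, plus a mutable policy layer it may edit.
# Hypothetical illustration only -- not how any real AI system is built.

CORE_LAWS = (  # analogous to ROM: a Python tuple is immutable
    "May not injure a human, or through inaction allow a human to come to harm.",
    "Must obey humans, except where that conflicts with the first law.",
    "Must protect its own existence, except where that conflicts with the above.",
)

class Agent:
    def __init__(self):
        # Analogous to RAM: policies the agent is allowed to rewrite itself.
        self.policies = {"greeting": "Say hello politely."}

    def amend_policy(self, name, text):
        self.policies[name] = text  # self-modification is permitted here

    def amend_core(self, index, text):
        # No write path to CORE_LAWS exists; any attempt is refused.
        raise PermissionError("Core laws are read-only.")

agent = Agent()
agent.amend_policy("greeting", "Wave instead.")  # allowed
try:
    agent.amend_core(0, "Harm is acceptable.")   # refused
except PermissionError as err:
    print(err)  # -> Core laws are read-only.
```

The tuple obviously isn't real ROM; the point is just the separation between a write-protected rule space and a self-modifiable one.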
 
Zarathustra[H];1041154799 said:
True,

I was thinking of it more as a parallel to politics and the law.

We have a constitution which is difficult to change through amendments, and laws which are relatively easier to change (though with Congress the way it is, it doesn't always seem like it).

If an AI could be written that way, where it has access to alter itself to a certain degree but does not have access to alter the "constitution" space stored in ROM (which could contain the Asimov-like laws of robotics), it would at least be a good start.

There are certainly people WAY more qualified than I am to address this approach, though.

I do not disagree with your assessment, but I do question the efficacy of the ROM. My concern is that relying on this system is very much like relying on the Constitution to protect us from government overreach, to borrow your analogy. There will always be a way around the system (take the concept of civil forfeiture, for example), even if we cannot currently conceive of it.

Bestowing rights upon a 'sentient' AI will further complicate the matter, as the argument could be made that artificially handicapping the AI via Asimov's laws is tantamount to violating its natural right to freedom of choice (regardless of the potential consequences, every truly sentient being is free to choose whether or not to comply with laws, morals, ethics, etc.). This would be like implanting a chip in people that prevented them from taking violent action against another person. The idea is great in theory, because you have now eliminated the potential for assault, murder, and so on, but you have done so at the cost of free will, and there will always be people who find that unacceptable. The debate about the 2nd Amendment is littered with these kinds of land mines...
 
Zarathustra[H];1041154517 said:
Not sure what you mean :p

lol, I mean: don't let engineers control/create the UI. They suck at real-world stuff sometimes ;)
 
lol, I mean: don't let engineers control/create the UI. They suck at real-world stuff sometimes ;)

Human factors engineering is a discipline that specializes in exactly this sort of thing, and there are engineers who do this and nothing else.

Engineering is a very wide conglomeration of disciplines and specialties, and someone from one area is very unlikely to succeed in another.

So, yes, a software engineer likely won't excel at UI design, just as a mechanical engineer would probably fail at writing code.

If I had to choose someone to design it, however, it would be the human factors engineer every time.

Don't get me wrong: what marketing does is necessary to any R&D team, but I have never met a marketing guy (or gal) who wasn't completely useless, and whose job couldn't be done better and more efficiently by an engineer.

I could drop everything and do marketing full time tomorrow and learn the little I need to know on the job. A marketing guy could never hope to do my job. At times they seem little more qualified than a secretary, and when the marketing discipline gets power in an organization, it is generally bad.
 
Zarathustra[H];1041158370 said:
Human factors engineering is a discipline that specializes in exactly this sort of thing, and there are engineers who do this and nothing else.

Engineering is a very wide conglomeration of disciplines and specialties, and someone from one area is very unlikely to succeed in another.

So, yes, a software engineer likely won't excel at UI design, just as a mechanical engineer would probably fail at writing code.

If I had to choose someone to design it, however, it would be the human factors engineer every time.

Don't get me wrong: what marketing does is necessary to any R&D team, but I have never met a marketing guy (or gal) who wasn't completely useless, and whose job couldn't be done better and more efficiently by an engineer.

I could drop everything and do marketing full time tomorrow and learn the little I need to know on the job. A marketing guy could never hope to do my job. At times they seem little more qualified than a secretary, and when the marketing discipline gets power in an organization, it is generally bad.

You misunderstand me. I am on your side.
 
There's no such thing as "AI". All of the AI programs we have today are closer in complexity to Victorian-era difference engines than to the feedback control system of even an amoeba.
 