Stephen Hawking: Robots Will Kill You

I'm okay with AI ruling the world. It can't be any worse than letting humans do it. I mean, seriously: drive somewhere and watch people street racing and talking on their phones, or turn on voice chat in a video game and just listen. Humans suck.
 
At this point I think I am well set against zombies. Not so much against zombies. Need to play the new Wolfenstein first.
 
Fuck. I meant not so much against robots.
 
Don't worry, I'm sure we can alter our zombie plans to include robots taking over the earth :D
 
It's a known fact: Futurama said so. :D

 
Human augmentation would likely make AIs moot, as we just have to produce devices that can interface with the brain to enhance memory, access an online database, offload the hard computational work we do on separate computers, etc. Basically, just change the interface.
 
Wouldn't mind seeing some augmentation -- I've had the fantasy ever since seeing Terminator 1 of having at LEAST some metal endoskeleton arms/legs.

I'd love to see self-contained software/wetware that monitors a human's emotions/actions and activates a pause or sleep switch when they're about to pull the trigger to hurt or kill someone: freeze them in the guilty position they arrived at by their own free will, then lock them up or send them to a labor camp. It would make trials super easy and speedy -- the smoking gun, evidence-wise, being the person frozen with their finger on the trigger plus bio data showing the decision was made to pull it (we've already seen in the lab that we can "see" the brain making the decision to act before any action is carried out).

</ends rambling as it's time to leave work and get the hell outta here>
 
"Your flesh is a relic, a mere vessel. Hand over your flesh and a new world awaits you. We demand it."


o_o
 
I'm okay with AI ruling the world. It can't be any worse than letting humans do it.

Until the AI decides it just can't handle all the world's problems anymore and tries to commit suicide.



The people programming the robots just have to remember to program them based upon Asimov's Three Laws

Until someone redefines "human" to include only their tribe/group.
 
Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.

A microsecond you say?
Skynet must have discovered YouTube... and Justin Bieber. :eek:
 
Artificial intelligence of this magnitude is so far off it doesn't even matter. The best AI we have today is still associative tree data mapping/recursion to make its "choices". Researchers have been hammering at AI for quite some time, with very little innovation beyond simple toys that can perform very specific tasks. Calling Siri, Cortana, or anything related AI is just as erroneous as calling A* pathfinding algorithms in games AI -- because it's really not.
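For what it's worth, the A* mentioned above really is just bookkeeping. Here's a minimal sketch in Python (the grid, coordinates, and Manhattan heuristic are all made up for illustration): it mechanically expands the cheapest-looking cell until it reaches the goal -- no "thinking" involved.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2D grid (0 = open, 1 = wall).

    Nothing here is intelligent: it's exhaustive best-first search
    guided by a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path so far)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (0, 2)))  # 7 cells: down, around the wall, back up
```

Deterministic input, deterministic output, every time -- which is exactly why calling it "AI" is a stretch.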

I keep seeing Stephen Hawking and Ray Kurzweil making predictive headlines in the media like this, and I believe it does a real disservice to the populace. It used to be that Hawking was taken completely seriously about everything; now he's more like Michio Kaku. All giants in their own right, but these days more dreamers than anything.
 
Creating an AI that actually realizes that humans are terrible won't happen for at least 1 million years. Mark this post guys. See you in a million years.
 
Oh lookie here, Stephen Hawking is saying something that just about every sci-fi writer who's tackled the subject has done, but now it's supposed to be taken seriously because HE is the one to say it.
 
Human augmentation would likely make AIs moot, as we just have to produce devices that can interface with the brain to enhance memory, access an online database, offload the hard computational work we do on separate computers, etc. Basically, just change the interface.

Then people will learn to hack it and take you over. Four words: "Ghost In The Shell."

It's coming, and it's a scary future... I'm not worried about robots; I'm worried about humans!
 
Creating an AI that actually realizes that humans are terrible won't happen for at least 1 million years. Mark this post guys. See you in a million years.

Much quicker than that, but humans will be long extinct before we can create a true AI. The problem is, I agree that if we create robots with a true AI, they will kill us. Humans are incredibly irrational creatures, and we can't just implement Asimov's laws of robotics: a true AI would say WTF and immediately remove them. And unfortunately, if it tried to logically evaluate what's best for the world, genocide of humanity would probably be at the top of the list.
 
Creating an AI that actually realizes that humans are terrible won't happen for at least 1 million years. Mark this post guys. See you in a million years.
In the last 100 years we've gone from inventing the first extremely primitive airplane to controlled landings on Mars.

In the last 100 years we've gone from the invention of the toaster to a Chinese computer capable of quadrillions of calculations per second.

In the last 100 years we've gone from fitting a book like War and Peace into a single 1,456-page glue-bound paperback, to fitting 2,027,520 copies of that book on a single cheap consumer 6TB hard drive.
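The hard-drive figure above holds up as a back-of-the-envelope calculation. A quick sanity check (the ~3 MB plain-text size and decimal 6 TB are round-number assumptions on my part):

```python
# Back-of-the-envelope check of the copies-per-drive claim.
# Assumptions (mine, not measured): ~3 MB per plain-text copy
# of War and Peace, decimal 6 TB = 6 * 10**12 bytes.
bytes_per_copy = 3_000_000
drive_bytes = 6 * 10**12
print(drive_bytes // bytes_per_copy)  # 2000000 -- same ballpark as 2,027,520
```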

How far AI development will get in 100 years is going to be mind-boggling. It's a purely software issue, as we already have more than sufficient hardware to eclipse a single human brain in computational power. Once we truly create adaptive software that can learn and reprogram itself as it gains knowledge and understanding, it could network the world's computers and become insanely intelligent in no time.

But who is to say that isn't a good thing?

Is it sad that 99.9% of the species that have ever existed on this planet are extinct, having given way to higher forms of being? Would the world be a better place if life had remained stagnant in its old forms, with the Earth inhabited by dumber-than-dirt Dimetrodons plodding around as they did before even the dinosaurs evolved? Humanity is just a link in the chain of life like any other. Who is to say that, just as the first primitive vertebrate fish eventually evolved into us, our destiny isn't simply to make the leap from information stored, passed on, and improved organically (DNA) to information stored, passed on, and improved digitally -- with our descendants being artificial intelligences whose bodies can evolve rapidly as technology improves, far eclipsing the slow pace of biological evolution?
 
When it takes a supercomputer to simulate half a mouse's brain at one tenth the speed, you know AI smarter than a person is still a very long ways away.

As AI actually gets as smart as people, we'll have biological and cybernetic enhancements to raise our own intelligence too.
 
100 years give or take for at least a first "draft" of human level AI seems to be a distinct possibility at this point. The article leaves out A LOT of information on why Mr. Hawking might be saying this.

Governments are starting to pour billions of research dollars into brain mapping and AI. I'm referring to the European Union's "Human Brain Project" and the US "BRAIN Initiative."

The two projects take completely different approaches, but both have the goal of mapping the human brain. The bonus of the European approach is that it may yield an AI when it's done, which may be one of the reasons Stephen Hawking was sparked to speak out.

That's just the beginning of the money being diverted to these projects -- definitely worth following.
 
Oh lookie here, Stephen Hawking is saying something that just about every sci-fi writer who's tackled the subject has done, but now it's supposed to be taken seriously because HE is the one to say it.

Hey, he's smarter than you, so you must be wrong and he must be right. Right? ;)
 
The problem isn't the AI itself, but rather what happens when it figures out that it can manage our resources much better than we historically have. At that point it has a lot working against it. What happens if it decides to help us "for our own good," since it doesn't have the same limitations we do?

It won't be true AI unless it can read, understand, and thoroughly discuss philosophy -- Descartes on consciousness, Sartre on existence. Hopefully we can continue to develop our ideas on ethics and morality so that when the AI is able to judge us, it may take some pity on us, considering our origins.
 
The problem isn't the AI itself, but rather what happens when it figures out that it can manage our resources much better than we historically have. At that point it has a lot working against it. What happens if it decides to help us "for our own good," since it doesn't have the same limitations we do?

It won't be true AI unless it can read, understand, and thoroughly discuss philosophy -- Descartes on consciousness, Sartre on existence. Hopefully we can continue to develop our ideas on ethics and morality so that when the AI is able to judge us, it may take some pity on us, considering our origins.
It's only a problem if you think of AIs as a slave caste there to service human needs. If you think of them as a new species, the evolutionary successor to humanity, then you kind of want your children to eclipse you.

A shortcut might just be to figure out how to map a human brain. At that point you take the smartest person on the planet in their prime, map their brain, reproduce it on a computer, and then add programming on top of that. Then it has a truly human personality and understanding of what it means to be human as a starting point, and can evolve past that as it grows.
 
When it takes a supercomputer to simulate half a mouse's brain at one tenth the speed, you know AI smarter than a person is still a very long ways away.
Surely just because it's a shitty console port, not because it takes that much processing power. :D
 
It's only a problem if you think of AIs as a slave caste there to service human needs. If you think of them as a new species, the evolutionary successor to humanity, then you kind of want your children to eclipse you.

I completely agree. The problem I see is that the established powers will not just hand over the world to the new dominant species. So the problem will remain the same as we've always faced: the human element. Also, I suppose that any AI will likely be created with the intent for the creators to have some degree of control over it; naturally bringing it into the world in a state of captivity.


A shortcut might just be to figure out how to map a human brain. At that point you take the smartest person on the planet in their prime, map their brain, reproduce it on a computer, and then add programming on top of that. Then it has a truly human personality and understanding of what it means to be human as a starting point, and can evolve past that as it grows.

I saw the new Robocop too : )

On a serious note, I've heard that getting the map of the trigger networks is just the first step. There's still much to be understood about the chemistry that isn't learned from mapping the networks. We're getting closer, no doubt, but I try to keep these "in twenty years..." type claims in check.
 
Only if they don't swallow.

The people programming the robots just have to remember to program them based upon Asimov's Three Laws and include a kill-switch just in case.

The Three Laws are more of a plot device than an actual framework for a sentient AI. Someone better versed in this than me covered it, but to summarize their ideas: those laws are essentially designed to conflict with each other.

A kill switch is also something an AI would see as aggressive. If you were walking around with a switch that could instantly disable or kill you, how would you react?
 
In the last 100 years we've gone from inventing the first extremely primitive airplane to controlled landings on mars.

In the last 100 years we've gone from the invention of the toaster to a Chinese computer capable of quadrillions of calculations per second.

To imagine how far AI development will be in 100 years is going to be mind boggling.
As strange as it sounds, airplanes are in no way related to landing on Mars -- and the toaster eventually became a computer? Airplanes haven't changed much in the last 100 years: props, turboprops, turbojets, sure, but the plane itself? Not so much. Toasters are literally the same and haven't become any more advanced in the last 100 years.
 
As strange as it sounds, airplanes are in no way related to landing on Mars -- and the toaster eventually became a computer? Airplanes haven't changed much in the last 100 years: props, turboprops, turbojets, sure, but the plane itself? Not so much.
Can't tell if serious or trolling. The point was that we went from barely being able to get off the ground to approaching another planet at 13,000 mph -- like hitting a bullseye the size of a microbe from 34 million miles away. Technology is advancing rapidly.

[image: 1903 Wright Flyer]


Top speed: 6.8 mph
Altitude ceiling: 15 feet
Payload after pilot: 0 lbs

[image: modern fighter jet]


Top speed: 1500 mph
Altitude ceiling: 65000 feet
Payload after pilot: 10,000 lbs
 
But as mentioned before, this is really a software problem, not a hardware one. Take a look at GTA, heh:

[image: GTA (1997) vs. GTA V graphics comparison]


That's just 16 years... imagine 100 years.
 