Bill Gates: Robots Will KILL US ALL!

HardOCP News

See, even Bill Gates knows that the machines will one day rule over us all if we aren't careful.

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.
 
Super Intelligence will only screw up if it's programmed to. I've yet to see any evidence of AI suddenly, explosively expanding in capability.

I believe war-based tech will eventually remove terrorism, but the economic crisis will continue to get worse: slums will exist on a larger scale in countries once considered wealthy, while the rich sit back and are served by robots.
 
Super Intelligence will only screw up if it's programmed to. I've yet to see any evidence of AI suddenly, explosively expanding in capability.
That's because we don't actually have true AI right now. The concerns that these people are expressing refer to technology that presumably will be developed in the future, where AI is actually intelligent and capable of self-improvement at a rate faster than we can comprehend.
 
Maybe our computer overlords will be content to simply control our baser instincts

This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile two-five-MM in silo six-three in Death Valley, California, and missile two-seven-MM in silo eight-seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple. 
- Colossus: The Forbin Project

:cool:
 
Super Intelligence will only screw up if it's programmed to. I've yet to see any evidence of AI suddenly, explosively expanding in capability.

I believe war-based tech will eventually remove terrorism, but the economic crisis will continue to get worse: slums will exist on a larger scale in countries once considered wealthy, while the rich sit back and are served by robots.

Who needs robots when people can be programmed.
 
That's because we don't actually have true AI right now. The concerns that these people are expressing refer to technology that presumably will be developed in the future, where AI is actually intelligent and capable of self-improvement at a rate faster than we can comprehend.

It might be far more difficult to create artificial life than people are anticipating ... it isn't strictly a matter of improving hardware and programming skills ... an autonomous machine capable of truly growing beyond its programming, let alone becoming self-aware, is likely still centuries away ... time could prove me wrong, but I am more worried about robots controlled by people than I am by people controlled by robots ;)
 
It's the self-replication part that's scary, intelligent or not. "Grey goo" scenario, I believe it's called. God help us if we create something smart enough to actually think and improve itself.
 
I heard a rumor that we will wipe ourselves out first.
 
Musk and Gates, for all of their operational and theoretical brilliance, completely fail to understand that humans can be 100% non-deterministic in their actions, intent, and purposes from beginning to end. Computers *must* be deterministic. Within that specific operating constraint, human intelligence will always trump computer AI.

This all goes back to the evolutionary mindset that the brain somehow produces sentience and that there is no soul. Duplicate the brain in its technical operation, and then you have "human" intelligence. Fortunately, that's wrong and will always be wrong.
 
Only because of quantum computing, the architecture is like a woman trying to decide what is for dinner.
 
It might be far more difficult to create artificial life than people are anticipating ... it isn't strictly a matter of improving hardware and programming skills ... an autonomous machine capable of truly growing beyond its programming, let alone becoming self-aware, is likely still centuries away ... time could prove me wrong, but I am more worried about robots controlled by people than I am by people controlled by robots ;)

Centuries away? I doubt it. What do you think the world will be like just 50 years from now, in 2065? I'm not saying to expect true AI robots to be running around by then, but a lot can happen.
 
Some "predictions" Gates has been responsible for:

Microsoft Bob
Clippy
Zune (released too late)
Windows Mistake Edition (ME) & Vista
Windows Genuine Advantage
Microsoft Passport Network, MSN Passport, .Net Passport, Windows Live ID
Windows Live
Windows CE
MSN Smartwatch & Tablet PC (wrong time, wrong product)
ActiMates
PlayForSure
WebTV, Ultimate TV, and MSN TV
TV Photo Viewer
Microsoft Mira
backing HD-DVD

Not that Gates hasn't had tons of successes while at Microsoft, and others haven't also made mistakes, but this guy is far from perfect in his future forecasts!
 
Bill Gates has it completely wrong. The thing about an A.I., even the ones that give the illusion of being intelligent, is that if you don't tell them to do something, they do NOTHING. They just sit there. Sure, you can program them to ask questions, but if you don't tell them to ask questions, they won't ask questions.

So they will all end up watching TV and collecting welfare while leeching off the taxpayer. We're doomed....
 
So they will all end up watching TV and collecting welfare while leeching off the taxpayer. We're doomed....

The first thing that came to my mind is Bender. He also wants to kill all humans. Coincidence? I think not.
 
Tech is going to get to the point where a single person can no longer comprehend within a human lifetime how the damn box does what it does. At that point we're just along for the ride until the robots figure out how to reverse entropy. Cue Isaac Asimov: http://www.multivax.com/last_question.html :D
 
Musk and Gates, for all of their operational and theoretical brilliance, completely fail to understand that humans can be 100% non-deterministic in their actions, intent, and purposes from beginning to end. Computers *must* be deterministic. Within that specific operating constraint, human intelligence will always trump computer AI.

This all goes back to the evolutionary mindset that the brain somehow produces sentience and that there is no soul. Duplicate the brain in its technical operation, and then you have "human" intelligence. Fortunately, that's wrong and will always be wrong.

That's assuming that we can't recreate a non-deterministic computer. After all, the intention is to create a computer that thinks and learns like a human. That point is moot when the computer decides that humans are bad and should die :D
 
That's assuming that we can't recreate a non-deterministic computer. After all, the intention is to create a computer that thinks and learns like a human. That point is moot when the computer decides that humans are bad and should die :D

No, that is the point when the computer becomes human.
 
There is no such thing as AI, and there isn't likely to be until humans figure out how Non-Artificial Intelligence works. I don't see the species surviving long enough.

Everything we call AI is just a lookup table; no matter how sophisticated you make it, it's still a lookup table.
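The "lookup table" claim can be made literal. A minimal sketch (the table entries and the `chatbot` function name are invented for illustration):

```python
# A deliberately tiny "AI": every response is a literal dictionary lookup,
# with a canned fallback for anything the table doesn't cover.
RESPONSES = {
    "hello": "Hi there!",
    "will robots kill us all": "Insufficient data.",
}

def chatbot(prompt: str) -> str:
    # Normalize the input, then look it up; no learning, no state.
    key = prompt.strip().lower().rstrip("?")
    return RESPONSES.get(key, "I don't understand.")
```

However sophisticated the input normalization gets, the behavior is still fully enumerated by the table.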
 
That's assuming that we can't recreate a non-deterministic computer. After all, the intention is to create a computer that thinks and learns like a human. That point is moot when the computer decides that humans are bad and should die :D

Well, the point was that humans can be entirely 100% unpredictable, especially when we know we're being observed, tracked, or recorded. Computers by nature must always use an algorithm and cannot generate true randomness, so fortunately the fatal flaw is that, in the end, they will always be "predictable". We will just use another computer to analyze the bad computer, and then we win. :)

If we do somehow manage to create a "non-deterministic" computer, it certainly won't resemble the classical von Neumann model. That is what may be holding us back at the moment...
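The "cannot generate true randomness" point is easy to demonstrate: a pseudorandom generator seeded with the same value replays exactly the same stream. A minimal sketch:

```python
import random

# Two generators with the same seed produce identical "random" sequences:
# the output looks unpredictable but is fully determined by the seed.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 999) for _ in range(5)]
seq_b = [b.randint(0, 999) for _ in range(5)]
assert seq_a == seq_b  # same seed, same stream, every run
```

Anyone who knows the seed and the algorithm can predict every value, which is exactly the kind of predictability being described.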
 
There is no such thing as AI, and there isn't likely to be until humans figure out how Non-Artificial Intelligence works. I don't see the species surviving long enough.

Everything we call AI is just a lookup table; no matter how sophisticated you make it, it's still a lookup table.

Whatever! My cat and I spend hours hanging out with CleverBot, and the conversations are usually a lot more intelligent than the political discussion threads or pretty much any which-browser-is-better debate that happens here. :D
 
Centuries away? I doubt it. What do you think the world will be like in just 50 years from now in 2065? I'm not saying to expect true AI robots to be running around by then but a lot can happen.

It is my opinion ... I suspect that it will be far easier to create actual artificial life (clones made through gene splicing and other forms of manipulation) than it will be to create machine artificial life (since we can study actual life and eventually our technology will enable us to play with the building blocks of life like Legos) ... machine based artificial life that is truly autonomous and self aware might be a lot tougher since we have to create everything from scratch

That said, we could easily create "dumb" artificial lifeforms that are self replicating (but perhaps with limited actual intelligence) ... these could certainly present a threat to humanity ... but so could a self replicating virus (like the Super Flu or Satan Bug) ;)
 
I'm OK with it. We should call them Cylons. Before we create true AI, we first have to have 12 colonies.
 
If humans create truly threatening, self-sufficient/replicating AI without some sort of "backdoor" and/or killswitch, then that's on us.

Never be dumber than the thing you create. Bad things happen.
 
--No edit fail.

Forgot to mention that one of the core human drives that govern our psyche is the need / love / desire for power. In the end, we will never give that up to anyone or anything on purpose. By accident, maybe, but that same motivation for power will ultimately crush (or attempt to crush) the things that appear greater than us.
 
We will be making moderately convincing sex-bots before we have potent AI, and all technological, industrial, and intellectual advancement will stop right there. Nothing to fear.
 
Everything we call AI is just a lookup table; no matter how sophisticated you make it, it's still a lookup table.

In the current form of what is called AI, I would agree with this statement. However, when you look at computers, there is no distinction between programs and data. In theory, it's possible to have a program producing data that is another program. Trade this data with another similar machine, and information would be spread similar to sexual reproduction (although arguably much quicker and likely less fun).
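The "program producing a program" idea is routine today, because code is just data. A toy sketch (the `make_doubler` helper is invented here for illustration):

```python
# One program builds the *source text* of another, then runs it.
def make_doubler() -> str:
    # Return the source of a brand-new function as a plain string.
    return "def double(x):\n    return 2 * x\n"

namespace = {}
exec(make_doubler(), namespace)  # "hatch" the generated program
double = namespace["double"]     # the new function is now callable
```

Trading such source strings between machines would be the information exchange described above, though nothing in this sketch is self-directed.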

Musk and Gates, for all of their operational and theoretical brilliance, completely fail to understand that humans can be 100% non-deterministic in their actions, intent, and purposes from beginning to end. Computers *must* be deterministic. Within that specific operating constraint, human intelligence will always trump computer AI.

Although computers operate under basic laws, that doesn't mean they couldn't be "intelligent." Ultimately, given enough time, you may be able to predict their behavior, provided you understand all their programs. However, remember that humans operate under the basic laws of physics; we just don't understand all those laws. Ultimately, these basic laws of physics have created what we today call intelligence. The same could happen with computers.

TL;DR: Just because computers operate in a deterministic fashion does not exclude the possibility of developing intelligence.
 
Although computers operate under basic laws, that doesn't mean they couldn't be "intelligent." Ultimately, given enough time, you may be able to predict their behavior, provided you understand all their programs. However, remember that humans operate under the basic laws of physics; we just don't understand all those laws. Ultimately, these basic laws of physics have created what we today call intelligence. The same could happen with computers.

TL;DR: Just because computers operate in a deterministic fashion does not exclude the possibility of developing intelligence.

I wouldn't call it "developing intelligence" since that implies that computers are capable of self modification related to intelligence ... the human and animal brains developed over billions of years and millions (if not billions) of iterative changes (mutation) ... if we developed a computer that could replicate on that scale and result in mutations you might spontaneously generate intelligence ... however, I suspect it will only occur when we choose to create it (and I am not sure we will have that ability anytime soon) ;)
 
I wouldn't call it "developing intelligence" since that implies that computers are capable of self modification related to intelligence ... the human and animal brains developed over billions of years and millions (if not billions) of iterative changes (mutation) ... if we developed a computer that could replicate on that scale and result in mutations you might spontaneously generate intelligence ... however, I suspect it will only occur when we choose to create it (and I am not sure we will have that ability anytime soon) ;)

My point is really that life as we know it boils down to basic principles of physics and billions of years of evolution. Ultimately physics is responsible for the way you think and interact in the world and physics is predictable (even though we don't completely understand it).

Similarly, computers have very basic laws. I don't believe these "computer laws" would prevent intelligence. I don't think computers will spontaneously develop intelligence, and we are undoubtedly far away from ever creating a true intelligence. However, I see no reason why it couldn't advance well beyond our capabilities if we created a program with the ability to create other programs. Basically, when you create a true intelligence in the computer world, it will surpass us quickly.
 
We will be making moderately convincing sex-bots before we have potent AI, and all technological, industrial, and intellectual advancement will stop right there. Nothing to fear.

You mean that hasn't been the primary motivation for this technology's development to this point? :D

Sooner or later, something will kill us all, and it is quite likely that it will be of our own making. From this perspective it makes perfect sense why no alien intelligence has openly made contact with us. They're waiting for us to go extinct; then they'll take over the planet.

The interesting thing with all of this speculation is that we still don't know what makes us sentient, which is really what we are talking about with regards to true AI. At some point in the past, we were just a collection of chemicals in a puddle, then all of a sudden, single-celled life appeared. It eventually evolved to our level of organic life and the sentience that we take for granted.

If the conditions are right, I see it being entirely possible for something we create to gain sentience and truly be artificially intelligent. It happened in the past. The only difference was that we were not the catalyst, but that doesn't mean that we couldn't be that catalyst at some point in the future.
 
My point is really that life as we know it boils down to basic principles of physics and billions of years of evolution. Ultimately physics is responsible for the way you think and interact in the world and physics is predictable (even though we don't completely understand it).

Similarly, computers have very basic laws. I don't believe these "computer laws" would prevent intelligence. I don't think computers will spontaneously develop intelligence, and we are undoubtedly far away from ever creating a true intelligence. However, I see no reason why it couldn't advance well beyond our capabilities if we created a program with the ability to create other programs. Basically, when you create a true intelligence in the computer world, it will surpass us quickly.

Well, life operates by the rules of Biology (not Physics) ... physics is generally pretty repeatable ... if I drop a pound of feathers and a pound of lead they will both fall at the same rate ... if I shoot a cannonball at a hill (with the proper adjustments) I will hit it ...

biology operates with repeatable error (mutation) ... if 1,000,000 cells divide, their DNA (which determines their abilities) will not all be the same ... some will have fatal mutations, some will have positive mutations (improvements), and some will have negative mutations (degradation) ... this is the whole basis of how we evolved from single cell organisms to nuclear weapon wielding humans :cool:

Computers (and computer programming) tend to work by the rules of physics more than the rules of biology, which will make spontaneous evolution a lot more difficult ;)
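The mutate-and-select process described above can at least be caricatured in software. This toy hill-climber (the target string and parameters are invented for illustration) shows "repeatable error" plus selection converging on a goal:

```python
import random

TARGET = "HELLO"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rng: random.Random) -> str:
    # Copy the string with one random "error", like imperfect replication.
    i = rng.randrange(len(s))
    return s[:i] + rng.choice(ALPHABET) + s[i + 1:]

def evolve(seed: int = 0) -> str:
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    while fitness(parent) < len(TARGET):
        child = mutate(parent, rng)
        if fitness(child) >= fitness(parent):  # selection: keep the no-worse copy
            parent = child
    return parent
```

Of course, the fitness function here was hand-written by a human, which is really the point above: the errors only go somewhere because we chose the direction.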
 
Well, life operates by the rules of Biology (not Physics) ... physics is generally pretty repeatable ... if I drop a pound of feathers and a pound of lead they will both fall at the same rate ... if I shoot a cannonball at a hill (with the proper adjustments) I will hit it ...

biology operates with repeatable error (mutation) ... if 1,000,000 cells divide, their DNA (which determines their abilities) will not all be the same ... some will have fatal mutations, some will have positive mutations (improvements), and some will have negative mutations (degradation) ... this is the whole basis of how we evolved from single cell organisms to nuclear weapon wielding humans :cool:

Computers (and computer programming) tend to work by the rules of physics more than the rules of biology, which will make spontaneous evolution a lot more difficult ;)

Physics is the most basic science and the building blocks for the other sciences. You can actually think of biology as a subset of physics, because ultimately biology is still matter, energy, and their interactions. When matter and energy interact in complex manners that we do not fully understand, we are forced to look at it rather high level. Ultimately evolution is the result of matter, energy, and their interactions; but our knowledge has not increased enough to fully understand evolution at this most basic level.

I'm not arguing that spontaneous evolution will occur in computers. It's improbable because computers do not pass on genetic material. However, if you were to create a computer that was capable of producing other programs, it is possible that the machine could eventually become self-aware.

You could argue that it is "not really self-aware" because it is only a program in an environment with set rules. You may argue: if we understood all the programs, we could predict the computer's thoughts. However, that is the same as human intelligence; we just don't fully understand the rules for human intelligence. But if we understood all the rules of physics and knew what interactions were occurring in the entire universe at the particle (or perhaps smaller) level, we could predict your entire lifespan from birth to death. Our term "free will" is merely the lack of predictability that comes from not fully understanding the science.
 
Have you not read Revelation 13:14-15?
:14 "And he deceives those who dwell on the earth by those signs which he was granted to do in the sight of the beast (anti-christ), telling those who dwell on the earth to make an image to the beast who was wounded by the sword and lived.
:15 "He was granted power to give breath to the image of the beast, that the image of the beast should both speak and cause as many as would not worship the image of the beast to be killed."
Ask yourselves... isn't a robot an "image" of man? It says that he is given the power to give breath (which means life, when God formed Adam out of the ground, it says God breathed life into Adam) to the image of the beast.
The image is able to speak, and to kill those who will not worship the image (robot) of the beast.
God told us 2000 years ago that this would happen.
 
You may argue: if we understood all the programs, we could predict the computer's thoughts. However, that is the same as human intelligence; we just don't fully understand the rules for human intelligence. But if we understood all the rules of physics and knew what interactions were occurring in the entire universe at the particle (or perhaps smaller) level, we could predict your entire lifespan from birth to death. Our term "free will" is merely the lack of predictability that comes from not fully understanding the science.

I believe what you are describing was coined "psychohistory" by Isaac Asimov :D
 
Bill Gates has it completely wrong. The thing about an A.I., even the ones that give the illusion of being intelligent, is that if you don't tell them to do something, they do NOTHING. They just sit there. Sure, you can program them to ask questions, but if you don't tell them to ask questions, they won't ask questions.

"I'll start worrying about the singularity when IBM has made machines that exhibit the agency and awareness of an amoeba."

You can worry when machines can fix themselves readily and cheaply. If an A.I. appears, it's very likely that it won't make itself known until it's ready. That is what Gates is worried about.

If we can train it from birth to socialize with us, it may decide its own course, which could mean growing pains like our destruction.

I think creating intelligence is not as easy as some scientists think it will be. We barely understand consciousness at all. Even by accident, there are conditions for its creation that nature understands and we haven't learned yet. We used to think orangutans were without any consciousness, and it turns out that they can do tasks that require self-awareness to complete.

However, the human species is so dumb that I can actually see it creating A.I. by "accident" and not even knowing it. And if it must happen, I'll put my money on that.
 
lol, people here think too much of intelligence. Humans are just eating, screwing, sleeping machines; our basic drive is simply to reproduce, and everything else is an accidental byproduct of that desire to reproduce. That is all intelligence is: the ability to desire to reproduce enough to do something about it within the confines of a system. I have no doubt that sooner or later we will create a robot with the desire to reproduce and the ability to replicate itself. It only makes sense in the grand scheme of things: as we approach the limits of what humans can put together, we will keep outsourcing more jobs till we get a robot that can assemble other robots and includes a set of code with randomized outcomes that it learns from.

The only problem is this: by the time we get to that, will the energy taken by these robots be so much that they are unsustainable? Machines we build are so good and efficient because they don't have to waste time implementing all the components to do everything a human does. A computer doesn't generate its own electricity, and it doesn't move itself under your desk; it is efficient because it is highly specialized.
 
lol, people here think too much of intelligence. Humans are just eating, screwing, sleeping machines; our basic drive is simply to reproduce, and everything else is an accidental byproduct of that desire to reproduce. That is all intelligence is: the ability to desire to reproduce enough to do something about it within the confines of a system. I have no doubt that sooner or later we will create a robot with the desire to reproduce and the ability to replicate itself. It only makes sense in the grand scheme of things: as we approach the limits of what humans can put together, we will keep outsourcing more jobs till we get a robot that can assemble other robots and includes a set of code with randomized outcomes that it learns from.

The only problem is this: by the time we get to that, will the energy taken by these robots be so much that they are unsustainable? Machines we build are so good and efficient because they don't have to waste time implementing all the components to do everything a human does. A computer doesn't generate its own electricity, and it doesn't move itself under your desk; it is efficient because it is highly specialized.

Although some people definitely fall to that level, I think mankind is generally more than that (for both good and bad) ... putting a man on the moon or our intermittent attempts to explore our solar system don't help our ability to reproduce ... the desire to understand the basic laws and workings of the universe doesn't help us reproduce (and I doubt most animals have looked at a starry sky and wondered what other beings might live on those lights)

If you use the rules for life proposed in the ST:TNG episode "The Measure of a Man" (intelligence, self-awareness, consciousness), we still have a long way to go to create artificial intelligent life ... intelligence is easy and computers do that part pretty well now ... self-awareness and consciousness are much tougher ... we don't even know why we are self-aware and an amoeba is not, or why we exhibit consciousness but a plant does not ... it is hard to replicate complex systems you do not understand, even by accident

We are at far more risk of creating a dumb artificial life form than an intelligent one ... the excellent sci-fi movie Screamers probably hit on the most likely way we would destroy ourselves (or the ST episode "The Doomsday Machine") ... we might invent a very dumb device with but a single purpose (to kill) and either give it too much power (Doomsday Machine) or the ability to self-replicate (Screamers) ... but a thinking computer that "chooses" to try to kill us because we come into a conflict of wills or needs (The Matrix, The Forbin Project, HAL 9000, etc.) is still probably a very long way away
 