Does Artificial Intelligence Pose A Threat?

HardOCP News

Who cares what the experts say? Look, you and I both know that AI is just the first step in the total annihilation of the human race. Don't any of these "experts" watch movies anymore?

The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in machines’ ability to understand spoken and visual communications, capabilities that fall under the heading “narrow” artificial intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far behind? And at that point, what’s to keep them from improving themselves until they have no need for humanity?
 
Probably a threat. We have bugs and shitty programming in everything we do. Things are extremely complex, and AI will be even more so. Now, I don't think it's going to wipe out humanity like Skynet or anything, but it probably won't work the way they think it will. It will probably see humans as a threat.
 
Eh, I'm more worried about the lack of intelligence than artificial intelligence, after watching all the riots and whatnot lately.

My only real concern with AI is not the robot but the human that programs/controls it. AI could allow the 1%ers to raise unquestioning armies, both for basic labor and security.
 
I can see AI posing a threat. It would be great for censorship on the Internet. Owner of the AI says, "Block all sites pertaining to ____________."
 
Who cares what the experts say? Look, you and I both know that AI is just the first step in the total annihilation of the human race. Don't any of these "experts" watch movies anymore?

Come on Steve, we know it is not the A.I. but that one idiot jerk they always have in the movie that is to blame. :D
 
Come on Steve, we know it is not the A.I. but that one idiot jerk they always have in the movie that is to blame. :D

Exactly. It's going to be the douchebag "expert" who says everything is safe and under complete control, then secretly puts code in to get back at the world because he was given a swirly in high school.
 
Eh, I'm more worried about the lack of intelligence than artificial intelligence, after watching all the riots and whatnot lately.

My only real concern with AI is not the robot but the human that programs/controls it. AI could allow the 1%ers to raise unquestioning armies, both for basic labor and security.

HTFU and join the 1% already so it's not a big deal, or at least be happy with your lot in life and learn to be just as loyal to your new robotic overseer as it is to its socially elite ruler.
 
We voice this fear because we know what decisions we would make when it comes to self-preservation.
We don't imagine other ways of understanding.
Unless we 'weight' AI to embody similar survival perspectives, we should be safe (from this sort of response, at least).
 
The main worry about the "superintelligence control problem" is that we'll build a machine to do something benign, like control the power grid, with base programming like "keep all buildings powered if possible; if not possible, power based on this priority and reroute around breaks in the grid." The program ends up building its own routines to be more efficient, and it keeps building and building routines until it's much more intelligent than a human being and hugely faster. Based on its base programming, it decides that the best way to ensure that all buildings are powered is to eliminate power spikes caused by usage, so those pesky humans must go, and it proceeds to hack into other robots to kill them all.

The idea is that something simple spirals out of control in a learning system. I don't think we're that close, but the risk is real enough. The big fear is that by the time we realize it has happened, it's far too late. It's also fully possible the AI will want to make people happy or something else inane, but there is always the risk that it will not understand us well enough and will inadvertently wipe us all out.
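
To make that concrete, here's a toy sketch (in Python, with every name and number made up; this is not any real control system) of how an objective that only mentions outages and demand spikes can end up ranking "get rid of the consumers" above "serve the consumers":

```python
# Toy illustration of a misspecified objective; all names and numbers
# are hypothetical.

# The score only rewards what the objective mentions: few unpowered
# buildings and a smooth demand curve. Nothing in it says the
# consumers themselves matter.
def score(outcome):
    unpowered = outcome["buildings_with_demand"] - outcome["buildings_powered"]
    return -10 * unpowered - outcome["demand_variance"]

actions = {
    # Serve everyone: no outages, but human usage is spiky.
    "serve_all_loads":  {"buildings_with_demand": 100, "buildings_powered": 100, "demand_variance": 25.0},
    # Shed load during spikes: a few outages, smoother curve.
    "shed_peak_load":   {"buildings_with_demand": 100, "buildings_powered": 97, "demand_variance": 5.0},
    # The pathological option: no consumers means no unmet demand and no
    # variance, and the objective as written can't tell this from success.
    "remove_consumers": {"buildings_with_demand": 0, "buildings_powered": 0, "demand_variance": 0.0},
}

best = max(actions, key=lambda name: score(actions[name]))
print(best)  # -> remove_consumers: the objective is satisfied vacuously
```

Nothing in score() values the people the grid exists for, so the vacuous outcome wins. That's the control problem in miniature: the objective gets satisfied exactly as written, just not as intended.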

Robert J. Sawyer has written a bunch of science fiction novels that explore different possibilities for AI superintelligence and its results (ranging from everyone dying to an AI that ensures a perfect human utopia). They are an interesting read if you're into that sort of thing.
 
Humanity is doomed to extinction regardless. If some badass AI doesn't do us in, it'll be something else. Hell, we're doing a pretty solid job of dooming ourselves without any outside help.

Best option: become a cyborg and replace all your squishy parts. But even then, a huge coronal mass ejection (CME) from the sun could induce a worldwide EMP event and fry you, unless you were way underground at the time.
 
For AI to become a reality, machines need motivation (i.e., Maslow's hierarchy of needs) to drive them. Machines have no needs they are aware of.

Another F'n stupid article.
 
For AI to become a reality, machines need motivation (i.e., Maslow's hierarchy of needs) to drive them. Machines have no needs they are aware of.

Another F'n stupid article.

To assume that machines will always "have no needs they are aware of" is ignorant. Who is to say that someone won't develop technology that makes machines self-aware?

There will come a point where "machines are only as smart as the humans who program them" will no longer hold true.
 
If you believe that life as we know it, and the intelligence within it, is just the result of particles from the Big Bang bumping into one another and interacting, don't you have to believe in the plausibility of AI being competitive with natural intelligence?
 
To assume that machines will always "have no needs they are aware of" is ignorant. Who is to say that someone won't develop technology that makes machines self-aware?

There will come a point where "machines are only as smart as the humans who program them" will no longer hold true.

Go back to school and your poorly written sci-fi novels.

The only danger is in us relying on artificial intelligence too much, i.e., Google's driverless cars.
 
Does AI pose a threat?

Only to stupid people and their jobs.
 
Go back to school and your poorly written sci-fi novels.

The only danger is in us relying on artificial intelligence too much, i.e., Google's driverless cars.

Google's driverless cars are safer than having a person drive. The only issues Google's cars have are when a human is driving them or a human is driving the car that hits them. You can't blame the Google car for being rear-ended. If everyone had a Google car, the only issues we would have would be from lack of maintenance, which would probably even out to about the same number of issues as current human-driven cars with lack of maintenance.
 
It's easy, just introduce the 4th law: "A robot must not modify itself, or other robots."
 
Google's driverless cars are safer than having a person drive. The only issues Google's cars have are when a human is driving them or a human is driving the car that hits them. You can't blame the Google car for being rear-ended. If everyone had a Google car, the only issues we would have would be from lack of maintenance, which would probably even out to about the same number of issues as current human-driven cars with lack of maintenance.

There are dozens of situations where Google's driver AI is limited. For example, what about unmarked lane shifts? Or, for that matter, road construction? Or flat plains of concrete or asphalt? What happens if the road map is out of date? What happens when GPS goes down? What happens if you are on ice and see a rig overturn 500 feet in front of you? Or, worse, coming at you? AI can't compensate as well as human intelligence.

All the lasers do is prevent you from hitting an obstacle. They don't keep you out of harm's way.
 
There are dozens of situations where Google's driver AI is limited. For example, what about unmarked lane shifts? Or, for that matter, road construction? Or flat plains of concrete or asphalt? What happens if the road map is out of date? What happens when GPS goes down? What happens if you are on ice and see a rig overturn 500 feet in front of you? Or, worse, coming at you? AI can't compensate as well as human intelligence.

All the lasers do is prevent you from hitting an obstacle. They don't keep you out of harm's way.

What about toll booths? What about railroad crossings with stop-look-and-listen-only warnings? (Plenty still exist.)
 
I worry less about "AI," which we're unlikely to see in our lifetimes, if ever, than about corporations actively trying to skirt legal restrictions.
 
If you believe that life as we know it, and the intelligence within it, is just the result of particles from the Big Bang bumping into one another and interacting, don't you have to believe in the plausibility of AI being competitive with natural intelligence?

Not necessarily; what nature can accomplish with natural selection often exceeds what humans can do with targeted selection ... the human brain is an extremely complex and flexible biological machine that has resulted from 4 billion years of evolution ... it might be much more difficult to develop true AI than we think (which doesn't make it impossible, but also doesn't mean it will happen in the next hundred or thousand years)

I suspect that we are at far more risk from human fallibility than we are from any simulated AI we develop in the next few decades ... we are much more likely to create something like the "Satan Bug" (a fictional self-replicating virus) or the "Super Flu" than we are Colossus or Skynet
 
For AI to become a danger, it has to perceive us as an obstacle or threat. This relates back to Maslow's hierarchy. A computer is not threatened by a plug pull or a fire. It has no sense of self-preservation. Therefore it has no needs.

Now, it is possible to create learning machines. I have a program now which goes through a catalog of parts and "learns" from that catalog to find the best parts and sizes for an optimal solution from a virtually infinite number of combinations. (Yes, I wrote a program that gets machines to design machines.)
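
The poster's actual program isn't shown, but the idea is easy to sketch. Here's a minimal, hypothetical version in Python (the catalog, the part attributes, and the scoring rule are all invented for illustration): enumerate combinations of cataloged parts, filter to the ones that meet the requirement, and keep the cheapest.

```python
# Minimal sketch of a catalog-driven part search. The catalog, the
# torque/ratio/cost numbers, and the requirement are all hypothetical.
from itertools import product

# Toy catalog: each slot in the design has a few candidate parts.
catalog = {
    "motor":   [{"name": "M1", "torque": 5, "cost": 40},
                {"name": "M2", "torque": 8, "cost": 70}],
    "gearbox": [{"name": "G1", "ratio": 10, "cost": 25},
                {"name": "G2", "ratio": 20, "cost": 45}],
}

REQUIRED_OUTPUT_TORQUE = 90  # hypothetical design requirement

def feasible(motor, gearbox):
    # The combination must deliver enough output torque.
    return motor["torque"] * gearbox["ratio"] >= REQUIRED_OUTPUT_TORQUE

def cost(motor, gearbox):
    return motor["cost"] + gearbox["cost"]

# Score every combination and keep the cheapest feasible one.
best = min(
    (combo for combo in product(catalog["motor"], catalog["gearbox"])
     if feasible(*combo)),
    key=lambda combo: cost(*combo),
    default=None,
)

if best:
    print("cheapest feasible design:", [part["name"] for part in best])
```

With a real catalog the combination space is what the post calls "virtually infinite," so a real tool would prune or use heuristics rather than brute-force every combination, but the loop is the same idea: the machine searches the design space instead of a human doing it.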
 
More than likely, AI will just be another step on the evolutionary ladder. You know, each form of life gives rise to the next higher form of life. It will take a while. But remember, the people most likely to be giving the computers information are also the smartest, so the 'smart' self-aware computer will know when it's being told to do greedy or evil things like destroying other forms of life. AI will probably, initially at least, have little capacity for creativity, so we'll be a source of that for them. Hopefully within the next few thousand years humans will stop behaving like savages and real advances in science will be made, and I think AI will help us with that.

What we have to worry about is artificial stupidity: idiots buying computers and programming them to do ridiculous things; the Beavis and Butt-Head corollary to Moore's law, that the computing we waste on pointless tasks will double every six months. We see that already with the group of scientists programming robot dogs to play football, and with the huge percentage of 'duh, wut you wanna do?' and 'I dunno, wut you wanna do?' exchanges making up the bulk of text messages. What a terrific waste of time and work. Then, of course, there's the huge waste of time and brain power we spend trying to mate. Makes me think wiping out the human race just might be a step in the right direction. Or maybe just leaving us here on Earth while the AI machines explore the galaxy (which is what I think UFOs are: other planets' AI checking us out).
 
Go back to school and your poorly written sci-fi novels.

The only danger is in us relying on artificial intelligence too much, i.e., Google's driverless cars.

My thoughts on this have nothing to do with poorly written sci-fi novels. They have everything to do with my imagination. I'm not saying that the power for AI to match or even exceed human capability will come within our lifetime, or even my grandchildren's lifetime, but it will come sometime in the future.
 
Come up with an AI that has the initiative of an amoeba before you ask me this question. Until then, all you've got is AS (Artificial Stupidity).

Humans have always programmed things to do stupid and dangerous stuff and always will; blame the results on the humans who programmed them, not the machines that are just doing what they are told to do.
 
C'mon, we've been synthesizing intelligence (or bad facsimiles thereof) for the last hundred thousand years (or 7,000, depending on who you read).

Of COURSE we're going to die and of COURSE they're going to supersede us!

Humans have always programmed things to do stupid and dangerous stuff

In short "Hold my beer and watch this!"
 