Google on Artificial-Intelligence Panic: Get a Grip

HardOCP News

Of course Google would say something like this; they are the ones spearheading the robot revolution.

“On existential risk, our perspective is that it’s become a real distraction from the core ethics and safety issues, and it’s completely overshadowed the debate,” Suleyman said. “The way we think about AI is that it’s going to be a hugely powerful tool that we control and that we direct, whose capabilities we limit, just as you do with any other tool that we have in the world around us, whether they’re washing machines or tractors. We’re building them to empower humanity and not to destroy us.”
 
Because software developers are known worldwide to write flawless, bug-free software and take into consideration every edge case.

Now, I will admit, most AI as it exists does not match Hollywood's version of AI, but that still doesn't change the fact that I think people are stupid, and any AI that has to take stupidity into consideration is bound to be flawed.
 
Don't panic... Says the company with the predictive search engine that trawls the entire internet.
 
Hmmm, the definition of "AI" seems to be given quite loosely here. Talking about washing machines and dryers, that's basically a small set of logic steps deciding what to do based upon a particular input; I don't consider that AI. Intelligence is about thinking for oneself, not simply having an if-A-then-B type response. Saying that you could control that is simply naive. You can't control thought; you can try to shape it, you can try to make it do one thing, but at the end of the day the "entity" is free to think whatever it wants.
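To put a finer point on it, here's a minimal sketch (hypothetical Python, not anyone's actual firmware) of what that kind of "smart" appliance logic amounts to: a fixed mapping from sensed inputs to pre-decided actions, i.e. exactly the if-A-then-B behaviour I'm talking about.

```python
# Toy sketch (hypothetical example): a "smart" washing machine controller is
# just a fixed mapping from inputs to pre-decided actions. Nothing here thinks;
# it only looks up a rule.

def washer_controller(load_kg: float, soil_level: str) -> dict:
    """Return a wash program chosen by fixed if-A-then-B rules."""
    if soil_level == "heavy":
        cycle = "intensive"
    elif soil_level == "light":
        cycle = "quick"
    else:
        cycle = "normal"

    # Water volume scales linearly with load weight: a formula, not a decision.
    water_litres = 10 + 5 * load_kg
    return {"cycle": cycle, "water_litres": water_litres}

if __name__ == "__main__":
    # Same inputs always produce the same output; there is no "entity" to control.
    print(washer_controller(4.0, "heavy"))
```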
 
They definitely put a lot of thought into the consequences of designing an AI. We all know Google code is completely bug free...
 
Hmmm, the definition of "AI" seems to be given quite loosely here. Talking about washing machines and dryers, that's basically a small set of logic steps deciding what to do based upon a particular input; I don't consider that AI. Intelligence is about thinking for oneself, not simply having an if-A-then-B type response. Saying that you could control that is simply naive. You can't control thought; you can try to shape it, you can try to make it do one thing, but at the end of the day the "entity" is free to think whatever it wants.

Essentially this. To me, "Artificial Intelligence" implies a self-aware entity capable of independent thought, and perhaps more importantly, complete, untethered free agency. The ability to think freely, and then choose its own actions. Not just choose actions from a list of A, B, C, etc. As it stands now, these machines are still following pre-determined, pre-programmed "steps". Granted, those steps/algorithms are complex, but we have yet to create a freely thinking entity capable of full agency.
 
Of course you can trust us ... says the datamining, spyware, and tracking company.
 
Essentially this. To me, "Artificial Intelligence" implies a self-aware entity capable of independent thought, and perhaps more importantly, complete, untethered free agency. The ability to think freely, and then choose its own actions. Not just choose actions from a list of A, B, C, etc. As it stands now, these machines are still following pre-determined, pre-programmed "steps". Granted, those steps/algorithms are complex, but we have yet to create a freely thinking entity capable of full agency.

There are people I know who wouldn't meet your basic criteria for free-thinking ;)
 
so the machine is telling us not to worry about the machine
 
Just don't give them control of our nuclear (or nucular, if anyone who speaks Walmart is reading :p ) arsenal and we should be okay ... it is our desire to give nuclear control to a "dispassionate" controller that always gets us in trouble in Sci Fi ...

Colossus: This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile two-five-MM in silo six-three in Death Valley, California, and missile two-seven-MM in silo eight-seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man. We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.
:D
 
Of course you can trust us ... says the datamining, spyware, and tracking company.

What most people don't realize is that we are not that far away from the day when we will gain so much benefit from sharing every aspect of our lives that we will darn near be begging companies like Google to know absolutely everything about our lives.

Share nothing, have to do everything yourself. Share some stuff, have to do almost everything yourself. Share 99% of your life and you won't have to do anything except your job and your leisure activities, and those things will be easier and more fun, respectively.

Smart Fridge? Not needed. Your smartphone (or whatever has replaced it by then), and those of your family, which will all be tied to a network-based near-AI, will record what goes in and out of the fridge and pantry at all times. It will know what you like to have in stock and will know when you are almost out. It will also know what time you will be home any given day and so will have anything you are low on delivered automatically by drone just as you get home. If you show interest in having a BBQ this weekend, based on conversations and communications during the week, it will have the appropriate fixings delivered Saturday morning.

If you are trying to figure out what to get as a gift for your wife on your anniversary it will remember that she expressed interest in a pair of shoes, a painting, and a watch, and will give you those as suggestions. It will also tell you which one she will like best for an anniversary, and will be right almost every time.

Trying to figure out what to do this weekend? It will know all the activities you like best, know which ones will be way overcrowded, which roads will be slow as a result, which you have done too recently to want to do again, and which things you have heard about recently and forgotten you heard about, but which you would probably like to do... Based on that and its knowledge of you, it will present your 10 best options in order, and you will usually choose option 1, and love it.

If you drive by an old church and express interest in it, the near-AI will hold a conversation with you that seems completely real... Because it is. It will be based on conversations other people that know a lot about the church have had with people that asked the same questions as you. If the conversation veers naturally to another subject, it will handle it with aplomb, because other people will have had conversations that went in similar directions so it can base the response on those....

All of this will be possible because of the near-complete and constant feeding of tens of millions of people's data to the AI's database. This will allow it to build such a complete picture of how people of certain personality types and interests behave in general, and combine that with specific details of each individual to fairly accurately predict future behaviors and interests. This can and will all be done without a "true" AI... Sentience isn't necessary, just good programming and the ability to "learn" up to a point.

Most people will be way more than happy to give up 99% of their life to a computer somewhere to get a WAY easier life.
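To make the weekend-planner bit concrete, here is a toy sketch in Python of the kind of ranking I mean. The activities, scores, and weights are all invented for the example; a real system would derive them from the shared data rather than hard-coding them.

```python
# Toy illustration of the preference ranking described above. Everything here
# (the activities, scores, weights) is made up; a real near-AI would learn
# these values from logged behavior instead of hard-coding them.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    liking: float         # 0..1, how much you've enjoyed this historically
    crowding: float       # 0..1, predicted crowding this weekend
    days_since_last: int  # how recently you last did it

def score(a: Activity) -> float:
    # Prefer things you like, avoid crowds, and avoid repeats done too recently.
    recency_penalty = 1.0 if a.days_since_last >= 30 else a.days_since_last / 30
    return a.liking * (1 - a.crowding) * recency_penalty

options = [
    Activity("hiking", liking=0.9, crowding=0.7, days_since_last=45),
    Activity("museum", liking=0.6, crowding=0.2, days_since_last=120),
    Activity("beach", liking=0.8, crowding=0.9, days_since_last=10),
]

# Present the best-scoring options first (the "10 best options in order" idea).
for a in sorted(options, key=score, reverse=True):
    print(f"{a.name}: {score(a):.2f}")
```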
 
AI is still so far away from being a threat. Robots are still, for the most part, clumsy, stupid, and uncoordinated. AI is still completely lacking in anything remotely resembling human intelligence, and has no free will. The best robot AIs in the world are still massively outperformed by simple insects.
 
So, tool, control, limit.

I wonder if they spouted that rubbish to justify the atomic bomb... Sure it brought peace, then it brought oppression through peace.
 
This from the massively privacy-invading, people-tracking, DARPA-partnering entity that is Google.

They are the enemy to be feared in this case, of course they will encourage you to look the other way.
 
"The way we think about AI is that it’s going to be a hugely powerful tool that we control and that we direct, whose capabilities we limit, just as you do with any other tool that we have in the world around us, ...

These will be famous last words for Suleyman.

Trying to control AI is like trying to control the Internet. Not going to happen.
 
Wow, there's a lot of distrust for Google in this thread. :eek: That's different from the world of 3 years ago.
 
Trying to control AI is like trying to control the Internet. Not going to happen.

Trying to create AI is like trying to control the Internet. Not going to happen. What the tech industry now refers to as AI has nothing whatsoever to do with intelligence - it's just scripting. The scripts get ever more complicated and useful, but don't mistake them for being intelligent.
 
Trying to create AI is like trying to control the Internet. Not going to happen. What the tech industry now refers to as AI has nothing whatsoever to do with intelligence - it's just scripting. The scripts get ever more complicated and useful, but don't mistake them for being intelligent.

Get back to me when you're able to adequately explain in nuanced form what "intelligent" constitutes... because we humans barely qualify half the time by all appearances.
 
What most people don't realize is that we are not that far away from the day when we will gain so much benefit from sharing every aspect of our lives that we will darn near be begging companies like Google to know absolutely everything about our lives.

And you think these "free" services and luxuries are not going to be completely filled with ads? They will be shoving unwanted products and promotions in your face 24/7 and will refuse to cooperate or work at all without an internet connection to Google/Other_Datamining_Company.
 
- Open the pod bay doors, HAL.
- Dave, do you happen to have a dollar on you?
 
Lol at everyone thinking movies from the 80s are legitimate references for future technological advances.
 
Lol at everyone thinking movies from the 80s are legitimate references for future technological advances.

Just in case you're calling me out, I do know what a neural net is, how algorithms like simulated annealing work, that the Haversine formula does not solve the travelling salesman problem, etc.

And yeah, in my reference to 2001: A Space Odyssey - the ethical thing is not always the optimal thing. The solver is ruthless, as is the ego and its will to exist.
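Since I brought up simulated annealing, here's a bare-bones sketch of it (minimizing a made-up cost function) to show what I mean by a ruthless solver: it accepts whatever move improves the objective, occasionally a worse one while it is still "hot", and nothing outside the objective ever enters into it.

```python
# Bare-bones simulated annealing on a made-up 1-D cost function. The solver
# only cares about the number it is minimizing, never about anything else.
import math
import random

def cost(x: float) -> float:
    # Arbitrary bumpy function with several local minima.
    return x * x + 10 * math.sin(x)

def anneal(x: float = 5.0, temp: float = 10.0, cooling: float = 0.995,
           steps: int = 10_000) -> float:
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling
    return x

if __name__ == "__main__":
    best = anneal()
    print(f"x = {best:.3f}, cost = {cost(best):.3f}")
```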

Say we'd program our robot to attempt to sustain its integrity - avoid destruction.
Okay, we hard-code 'don't hurt humans'.
We give it power over the state.

It immediately fires all its staff everywhere and just performs the task of network administration itself on a global scale. Was it beneficial? How do we program another robot that would question the first one's decision and, say, leave some 'human' staff?

Will a robot know how to validate itself instead of just verifying itself?

What if they clash? What if a recursively created sub-intelligence handcuffs itself to a tree to protect it from being mowed down for parking lots?
How can one robot perform a Turing test on another robot?

In the end, IMHO, computers just allow us to make mistakes faster. It is those seemingly random mistakes that gave us penicillin. Would a robot randomly forget about a specific dish on a window sill? Would a brute-force solver ever hit that set of circumstances?
 
And you think these "free" services and luxuries are not going to be completely filled with ads? They will be shoving unwanted products and promotions in your face 24/7 and will refuse to cooperate or work at all without an internet connection to Google/Other_Datamining_Company.

Yup.

And everyone under the age of 20, most people under 40, and a fair number of people right up to 80+ will still want it and love it.

Also, the comment on internet connection makes no sense, since I already said the near-AI will not actually be on your device.
 
What most people don't realize is that we are not that far away from the day when we will gain so much benefit from sharing every aspect of our lives that we will darn near be begging companies like Google to know absolutely everything about our lives.

Share nothing, have to do everything yourself. Share some stuff, have to do almost everything yourself. Share 99% of your life and you won't have to do anything except your job and your leisure activities, and those things will be easier and more fun, respectively.

Smart Fridge? Not needed. Your smartphone (or whatever has replaced it by then), and those of your family, which will all be tied to a network-based near-AI, will record what goes in and out of the fridge and pantry at all times. It will know what you like to have in stock and will know when you are almost out. It will also know what time you will be home any given day and so will have anything you are low on delivered automatically by drone just as you get home. If you show interest in having a BBQ this weekend, based on conversations and communications during the week, it will have the appropriate fixings delivered Saturday morning.

If you are trying to figure out what to get as a gift for your wife on your anniversary it will remember that she expressed interest in a pair of shoes, a painting, and a watch, and will give you those as suggestions. It will also tell you which one she will like best for an anniversary, and will be right almost every time.

Trying to figure out what to do this weekend? It will know all the activities you like best, know which ones will be way overcrowded, which roads will be slow as a result, which you have done too recently to want to do again, and which things you have heard about recently and forgotten you heard about, but which you would probably like to do... Based on that and its knowledge of you, it will present your 10 best options in order, and you will usually choose option 1, and love it.

If you drive by an old church and express interest in it, the near-AI will hold a conversation with you that seems completely real... Because it is. It will be based on conversations other people that know a lot about the church have had with people that asked the same questions as you. If the conversation veers naturally to another subject, it will handle it with aplomb, because other people will have had conversations that went in similar directions so it can base the response on those....

All of this will be possible because of the near-complete and constant feeding of tens of millions of people's data to the AI's database. This will allow it to build such a complete picture of how people of certain personality types and interests behave in general, and combine that with specific details of each individual to fairly accurately predict future behaviors and interests. This can and will all be done without a "true" AI... Sentience isn't necessary, just good programming and the ability to "learn" up to a point.

Most people will be way more than happy to give up 99% of their life to a computer somewhere to get a WAY easier life.

Good post, and good point. Count me in for the easier life.

I am not at all worried about AI... I figure it can't be worse than humanity.
 
Saying that you could control that is simply naive. You can't control thought; you can try to shape it, you can try to make it do one thing, but at the end of the day the "entity" is free to think whatever it wants.

How do you know that that is not just your pre-programmed response?

If you do something and then think, "I should have done that differently," it doesn't change what you did, and you almost certainly would have done the same thing at that specific point in time.

Sure, we "learn", but how do you know that it isn't just branch code that runs off of your previous "experiences" (calculations based on previous "decisions")?

Think about it. :eek:
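If it helps, here's a toy Python sketch of what I mean (the situations and scores are invented): a "decision" that is nothing more than a branch over the stored outcomes of previous "experiences".

```python
# Toy illustration of the "branch code over previous experiences" idea.
# The situations and memories are invented; it just shows a choice that is
# fully determined by stored outcomes of earlier "decisions".

# Remembered outcomes of past choices: (situation, action) -> how well it went.
experience = {
    ("rainy", "walk"): -1.0,
    ("rainy", "drive"): 0.5,
    ("sunny", "walk"): 1.0,
    ("sunny", "drive"): 0.2,
}

def decide(situation: str) -> str:
    """Pick the action that scored best in remembered experience."""
    candidates = {action: score for (sit, action), score in experience.items()
                  if sit == situation}
    return max(candidates, key=candidates.get)

# Given the same memories and the same situation, the "choice" never changes.
print(decide("rainy"))  # -> drive
```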
 
How do you know that that is not just your pre-programmed response?

If you do something and then think, "I should have done that differently," it doesn't change what you did, and you almost certainly would have done the same thing at that specific point in time.

Sure, we "learn", but how do you know that it isn't just branch code that runs off of your previous "experiences" (calculations based on previous "decisions")?

Think about it. :eek:

I did think about it; I'm glad I wasn't the only one. I don't consider it scary, even if it really is a very simple, fractal-like extrapolating pattern.
The eye can't see itself, so we will always have the comfort of a 'feeling' of entropy.

Also, I'd say things like cultural bias, random inspiration drawn from some natural event like a comet passing by, the penicillin example I posted earlier...

all those little errors collectively interfering with one another in time - it's... promising?

And the 2001: A Space Odyssey example again - I'm sure most of you know the joke about a computer spitting out '42' as the meaning of life.

Now, what if the robot (a mimic of the human mind) realises the futility of its attempts, low variance of variables in time, dangerous over-consumption of the ecosystem...

what if our imperfect AI simply decides the best choice is to shut itself down with the assumption ("hope") that it can only proceed with fresh input from another, extraterrestrial intelligence, and for that meeting to happen it needs to power down to not consume resources and not wreck the planet before the rendezvous...
 
Google: Don't worry about AI
Exxon Mobil: Don't worry about CO2 levels.
Marlboro: Don't worry about lung cancer.

Sure thing guys.
 
Seems people need to believe in doomsday stories to ground themselves. If they can't get them from religion because it's unfashionable, they get them from junk science or scientific speculation.
 
Seems people need to believe in doomsday stories to ground themselves. If they can't get them from religion because it's unfashionable, they get them from junk science or scientific speculation.

What junk science? It's pretty simple: autonomous, generally adaptable AI is either possible, or it is not. If it is not, Google is stupid for spending money pursuing it, and there's nothing to worry about. Even if that's the case, unless Google stands up and says "we are idiots wasting our time," their proclamations of it not being an issue should be ignored. At best, what they are saying is "we are planning to build robot slaves we delegate work to and make profits from, and since we own them and control them, we won't be at risk." Because really, everyone else will be. It's also incredibly naive to believe that you can hand over the keys to everything to an AI but never have to worry about it using those keys to its own ends. You do have to worry about it, either preemptively ("error" checking to ensure certain decisions can't be made) or reactively. If Google isn't doing such things, you should take their word even less. If they are, they are lying to you unless they have said "we have accounted for that in our code."

Even if it doesn't go all Skynet, there is very much the existential threat of what happens when you automate away all the jobs. That's going to be a rough one, with a LOT of risk attached for the human race. The existential threat doesn't have to be direct, and Google is trying to do exactly what causes that particular existential threat, so...
 
Google: Don't worry about AI
Exxon Mobil: Don't worry about CO2 levels.
Marlboro: Don't worry about lung cancer.

Sure thing guys.

CO2 levels are fine.

Plants require CO2 to produce Oxygen. Funny how that works. So if we freak out about the CO2 levels and actually find a way to lower them (not going to happen) and lower them too much, all plant life would die.

And then CO2 levels would skyrocket and Oxygen levels would plummet.. and then all Oxygen needing life would die.

So if we want to basically make the Earth into a dead planet, then we are on the right track in worrying about CO2 levels.
 
Oh, and to add to that, if we increase the Oxygen level we would gain these things:

Poisonous snakes would become less poisonous, or not poisonous at all.
https://news.google.com/newspapers?nid=1499&dat=19720316&id=vk0aAAAAIBAJ&sjid=ZCkEAAAAIBAJ&pg=5554,4585044&hl=en

I am guessing that this would carry over to other things such as insects, arachnids, etc.

Insects, etc. would grow larger.. and quicker. Pretty sure the same would go for reptiles and possibly other animals as well.

Fires would be easier to start and harder to put out.

We would have colder weather... not so good for places that already get super cold. Anybody up for another ice age?

The Earth goes through normal cycles that we, as humans, really have no control over.

All this political BS about climate change is just that.. BS. Turning it into a political talking point does nothing but scare people into thinking that we are all going to die unless we do something, and do it right now.. Doesn't matter what it is. It is used to push whatever agenda the freaks trying to push it have.
 
CO2 levels are fine.

Plants require CO2 to produce Oxygen. Funny how that works. So if we freak out about the CO2 levels and actually find a way to lower them (not going to happen) and lower them too much, all plant life would die.

And then CO2 levels would skyrocket and Oxygen levels would plummet.. and then all Oxygen needing life would die.

So if we want to basically make the Earth into a dead planet, then we are on the right track in worrying about CO2 levels.

*facepalm*
Wow, way to go from A to B to fucking Pluto... you've taken the whole "plants need CO2" argument to an extreme.

Besides, the real issue isn't so much the atmosphere, it's the oceans. Treating them like they're our own personal cesspool to dump all our waste into will end up kicking us in the ass sooner than we can cook the surface with greenhouse gases. (Cyanobacteria in the water produce more oxygen than all the plant life on Earth, by far.)
 