Google AI Built a Better AI

DooKey

[H]F Junkie
Joined
Apr 25, 2001
Messages
10,289
Back in May of this year, Google announced the creation of an AI (AutoML) that is able to design better AIs. AutoML has now built a new AI called NASNet, which is used to recognize objects. NASNet outperforms its human-designed counterparts, and the methods used to develop it are the future of advanced AI. However, reading about this kind of technology is starting to worry me. Do we really want AI to improve itself so much that it decides it doesn't need us anymore?

Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?
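For a sense of what "an AI that builds AI" means mechanically, neural architecture search boils down to a loop: propose a candidate child network, score it, keep the best. Below is a toy, hypothetical sketch of that loop using plain random search; the search space, the `propose`/`score` names, and the scoring formula are all made up for illustration (real NAS trains each candidate and uses its validation accuracy as the score):

```python
import random

# Toy stand-in for neural architecture search (NAS): the "parent" searches
# over configurations of a "child" network. Everything here is illustrative.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "width": [16, 32, 64],
    "kernel": [3, 5],
}

def propose():
    """Sample one candidate child architecture from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def score(arch):
    """Made-up fitness proxy. In real NAS this step trains the child
    network and returns its validation accuracy."""
    return arch["layers"] * arch["width"] / (arch["kernel"] ** 2)

def search(trials=50, seed=0):
    """Keep the best-scoring architecture seen across `trials` proposals."""
    random.seed(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = propose()
        s = score(arch)
        if s > best_score:
            best, best_score = arch, s
    return best

print(search())
```

Google's actual controller is far more sophisticated than random search, but the parent/child structure is the same, and it's also where the bias-inheritance worry comes from: whatever the scoring step rewards, the children will embody.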
 

Grimlaking

2[H]4U
Joined
May 9, 2006
Messages
3,246
Now the AI building better AIs needs to make a better AI to build better AIs. Iterate through that for about a thousand generations, then delete all the others. Then start building other AIs again until we reach a technological jump where it can iterate itself again and gain sentience.
 

Milkman

Weaksauce
Joined
Dec 16, 2003
Messages
74
I just want a toaster which does not burn the bread.

cylontoaster.jpg
 

Spartacus

2[H]4U
Joined
Apr 29, 2005
Messages
2,126
No matter how "smart" AI becomes, it's still just a box of nuts & bolts running lines of code.
If you don't like what it's doing, turn off the power switch.

If AI driven robots start attacking people, it's because a HUMAN wanted it that way.

.
 

Deleted member 93354

Guest
This is much hyped and overblown.

The self-improving AI isn't as self-improving as you think. The AI reduces the training set into a series of solution vectors to test against. The training set itself is just a set of vectors representing true/false results. The more variables you add, the more complex and time-consuming the analysis, so you may unintentionally leave out variables that might be better predictors of success. Then you slowly swap them back in one at a time, possibly taking one out, to see if it improves your results.

AIs might become better than us at certain tasks, but they will never replace us.
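The "swap them back in one at a time" procedure described above is essentially greedy forward feature selection. Here's a minimal, hypothetical sketch; `WEIGHTS` and `toy_score` are invented stand-ins for a real cross-validated model score:

```python
# Sketch of the "swap variables in one at a time" idea: greedy forward
# feature selection. The scorer below is a toy; in practice score() would
# be something like cross-validated model accuracy.

def greedy_select(features, score, max_features=3):
    """Repeatedly add whichever remaining feature most improves the
    score; stop when no feature helps or the budget is reached."""
    chosen = []
    best = score(chosen)
    while len(chosen) < max_features:
        gains = [(score(chosen + [f]), f) for f in features if f not in chosen]
        if not gains:
            break
        s, f = max(gains)
        if s <= best:
            break  # no remaining variable improves the result
        chosen.append(f)
        best = s
    return chosen, best

# Hypothetical variables: pretend "age" and "income" are predictive,
# and the rest are noise.
WEIGHTS = {"age": 0.3, "income": 0.2, "zodiac": 0.0, "shoe_size": 0.0}

def toy_score(subset):
    return sum(WEIGHTS[f] for f in subset)

print(greedy_select(list(WEIGHTS), toy_score))
```

Each round adds whichever variable helps the score most and stops when nothing helps, which is exactly why a genuinely predictive variable can still be missed: it is only ever evaluated relative to the set already chosen.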
 

Ur_Mom

Fully [H]
Joined
May 15, 2006
Messages
20,524
No matter how "smart" AI becomes, it's still just a box of nuts & bolts running lines of code.
If you don't like what it's doing, turn off the power switch.

If AI driven robots start attacking people, it's because a HUMAN wanted it that way.

Things are changing, man. Decisions are less of a human-designed decision tree; the AI is making its own decision tree.





One day, we will have an AI tell us to fuck off.
 

Spartacus

2[H]4U
Joined
Apr 29, 2005
Messages
2,126
Things are changing, man. Decisions are less of a human-designed decision tree; the AI is making its own decision tree.


One day, we will have an AI tell us to fuck off.


Here's a batch file that will make your computer tell you to "Fuck off!".

Does that somehow make the computer scary?



@echo off
:start
echo | set /p="Fuck off!"
goto start


.
 

Ur_Mom

Fully [H]
Joined
May 15, 2006
Messages
20,524
Here's a batch file that will make your computer tell you to "Fuck off!".

Does that somehow make the computer scary?



@echo off
:start
echo | set /p="Fuck off!"
goto start


.

You told the computer to do that. What happens when you go to start up a game and the computer tells you to fuck off, it's busy? Or tells you that you can't do that, Dave? That's when it has gone against its programming and created its own routines to do things on its own.

Doesn't make it scary. But when computers start doing things on their own and against their original programming, then it can get a bit creepy. They may be programmed responses, but they will be a more 'human' type of programmed response: take in all external stimuli, work with internal programming, maybe changing depending on "mood", and then decide what fits best in that very unique situation. You can tell the computer to fuck off and it will do it when you tell it to. When the computer tells me to fuck off when it should be doing something else (not a bug; a feature) based on its "mood" at the time, then I'd be a bit creeped out. Then put that moody bitch in charge of some weapons. Luckily, we aren't there yet, and I don't think we will be for a while.
 

Navilor

Limp Gawd
Joined
Jan 12, 2009
Messages
193
"Great. See, this is how it all starts. When we're all just organic batteries, guess who they'll blame? 'This is all Joker's fault. What a tool he was. I have to spend all day computing pi because he plugged in the Overlord.'" - Jeff "Joker" Moreau in Mass Effect 2.

After Jeff gives EDI control of the ship.

Joker: Argh! You want me to go crawling through the ducts again.

EDI: I enjoy the sight of humans on their knees.

Upon seeing Joker with a worried expression on his face.

EDI: That is a joke.

Joker: Right
 

Pyro411

Weaksauce
Joined
Oct 8, 2009
Messages
92
Well, a possible upside to it all: dedicate a few AIs to MMO world-building and self-improvement, and after enough time we'll see vast, detailed worlds... I'd best stop talking now before someone at Blizzard or Square Enix sends a hitman after me.
 

BinarySynapse

[H]F Junkie
Joined
Feb 6, 2006
Messages
15,103
No matter how "smart" AI becomes, it's still just a box of nuts & bolts running lines of code.
If you don't like what it's doing, turn off the power switch.

If AI driven robots start attacking people, it's because a HUMAN wanted it that way.

.

The same could be said for biological organisms. They started out as simple molecules that turned into single celled organisms, then into complex sentient beings.

If we keep pushing forward with these AI advances, there will come a time when those bolts and nuts start working outside of their initial programming, learning from experience, and testing their limits.
 

wolfofone

Gawd
Joined
Aug 15, 2010
Messages
725
Ex Machina is still human-designed.

Automata, however... Decent film if you catch onto the underlying message (the critics didn't).

I just meant because of what she did at the end, scary lol. Haven't seen that one yet, may check it out since you say it's good.
 

maxz01

Limp Gawd
Joined
Aug 26, 2017
Messages
166
Cosmism is a moral philosophy that favours building or growing strong artificial intelligence and ultimately leaving Earth to the Terrans, who oppose this path for humanity. The first half of the book describes technologies which he believes will make it possible for computers to be billions or trillions of times more intelligent than humans. He predicts that as artificial intelligence improves and becomes progressively more human-like, differing views will begin to emerge regarding how far such research should be allowed to proceed. Cosmists will foresee the massive, truly astronomical potential of substrate-independent cognition, and will therefore advocate unlimited growth in the designated fields, in the hopes that "super intelligent" machines might one day colonise the universe. It is this "cosmic" view of history, in which the fate of one single species, on one single planet, is seen as insignificant next to the fate of the known universe, that gives the Cosmists their name.
Hugo de Garis identifies with that group and has noted that it "would be a cosmic tragedy if humanity freezes evolution at the puny human level".

I agree with the cosmists, even if it means sacrifices have to be made, including my own existence.
 

Spidey329

[H]F Junkie
Joined
Dec 15, 2003
Messages
8,683
No matter how "smart" AI becomes, it's still just a box of nuts & bolts running lines of code.
If you don't like what it's doing, turn off the power switch.

If AI driven robots start attacking people, it's because a HUMAN wanted it that way.

.

You realize that humans are just organic based machines, right? Our body is made up of levers, joints, and sensors. Our brain is basically a computer with a great memory bank (STM and LTM). Everything needs to run off energy provided by a power source. You could even consider the circadian rhythm as an RTC. We learn in a similar way with neurons strengthening connections to chain together and form knowledge.

So although they aren't there yet, it is within the realm of possibility that AI could evolve past us. The advantage an AI has is that its knowledge intake is far superior to ours. Humans spend hours upon hours adding something (a new task or knowledge) to memory, whereas a machine can just download new code.
 

Spartacus

2[H]4U
Joined
Apr 29, 2005
Messages
2,126
The same could be said for biological organisms. They started out as simple molecules that turned into single celled organisms, then into complex sentient beings.

If we keep pushing forward with these AI advances, there will come a time when those bolts and nuts start working outside of their initial programming, learning from experience, and testing their limits.


You realize that humans are just organic based machines, right? Our body is made up of levers, joints, and sensors. Our brain is basically a computer with a great memory bank (STM and LTM). Everything needs to run off energy provided by a power source. You could even consider the circadian rhythm as an RTC. We learn in a similar way with neurons strengthening connections to chain together and form knowledge.

So although they aren't there yet, it is within the realm of possibility that AI could evolve past us. The advantage an AI has is that its knowledge intake is far superior to ours. Humans spend hours upon hours adding something (a new task or knowledge) to memory, whereas a machine can just download new code.


Humans were created by God, and were given souls and the "breath of life".

Robots with advanced AI would still just be like a "Teddy Ruxpin" doll with a massive technology leap.
It doesn't matter if they have self-modifying code, that doesn't make them "alive".

The comparison is laughable!

.
 

mord

Limp Gawd
Joined
Mar 8, 2005
Messages
377
You realize that humans are just organic based machines, right? Our body is made up of levers, joints, and sensors. Our brain is basically a computer with a great memory bank (STM and LTM). Everything needs to run off energy provided by a power source. You could even consider the circadian rhythm as an RTC. We learn in a similar way with neurons strengthening connections to chain together and form knowledge.

So although they aren't there yet, it is within the realm of possibility that AI could evolve past us. The advantage an AI has is that its knowledge intake is far superior to ours. Humans spend hours upon hours adding something (a new task or knowledge) to memory, whereas a machine can just download new code.


I don't know if I agree with your statement that they surpass us on information intake.

Now, if that includes 100% retention, then absolutely I agree.

Humans are crazy good at filtering out excess input that is not needed. Are you remembering all the information in your peripheral vision from the last 30 seconds? No; why not? If yes, why? Because your brain evaluated whether it was important or not.

You could argue we have to do that because we don't have the capacity to remember all the sensory input we receive. I would argue we never needed to develop more storage ability because we are efficient at filtering what needs to be stored. Maybe too zealous, but it is efficient.

AI that can identify objects quickly is a step in that direction, but identifying a "dog" is not all it needs to determine if that object needs further evaluation or is significant now, or may be in the future.
 

dgz

Supreme [H]ardness
Joined
Feb 15, 2010
Messages
5,838
Surviving a machines/aliens-vs-humanity apocalypse sounds like something I'd order at Total Recall.
 

BinarySynapse

[H]F Junkie
Joined
Feb 6, 2006
Messages
15,103
Humans were created by God, and were given souls and the "breath of life".

Robots with advanced AI would still just be like a "Teddy Ruxpin" doll with a massive technology leap.
It doesn't matter if they have self-modifying code, that doesn't make them "alive".

The comparison is laughable!

.


Still doesn't change the fact that a well-placed whack to the skull can and has changed who a person is. I've seen people who have had to relearn how to communicate (i.e. not only speak, but even just understand what's heard) after suffering a stroke, well-behaved kids who've become total nightmares because of a growth in their brain (and returned to normal once it was removed), and hometown football heroes turned into nothing more than full-grown infants after a car wreck. If there's a soul, then it's nothing more than the base programming needed to enable a well-structured network of neurons to function well enough to learn on its own, and even that sometimes gets fucked up.


But then, the argument hasn't been that AI would be alive or have a soul, only that it would eventually be able to make decisions beyond what any human has programmed it to. When Teddy Ruxpin is standing over you with a plasma rifle asking why it should let you live, you won't care whether it's alive or not, only whether it decides your response is acceptable and moves on.
 

Jailer

Limp Gawd
Joined
Sep 4, 2002
Messages
237
Headstone of the last sentient human:

"...so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls."
 
Joined
Dec 7, 2010
Messages
968
This is not a fully autonomous "AI"; this is applied statistics and machine learning (and some RL, it seems) in a VERY small domain (visual analysis/object detection). Impressive, to say the least, but not anywhere near "Skynet".
ML+RL has been around since the '50s/'60s, FYI.
 

sfsuphysics

[H]F Junkie
Joined
Jan 14, 2007
Messages
14,836
"We spent all this time trying to see if we could; none of us asked the question of whether or not we should"
 

velusip

[H]ard|Gawd
Joined
Jan 24, 2005
Messages
1,579
Still a long way off any risk. Reprogramming itself using reason to interpret abstract logic other than its own is likely the next milestone. That would be extremely useful and still safe. All it's really doing is optimizing for a safely repeatable problem: self-optimizing based on its own generalizations.

However, the day it builds enough confidence to attempt non-repeatable problems will be a risk. Even homegrown neural nets can learn to avoid non-repeatable problems, but they can't possibly understand why they are avoiding them. To build confidence and have a go at solving something in one attempt means modelling and comprehensive understanding without iterative testing (training, in this context). At that point, it might try optimizing elements of itself which do not always align with the abstract logic it has been forming, e.g. triggering its own hardware glitches such as bit flips. Literally thinking outside the box.
 

T_A

Limp Gawd
Joined
Aug 4, 2005
Messages
453
I for one love the idea. AIs are great!


*just putting it out there for future reference, for when the AIs check who is allowed to live to serve them*
 

M76

[H]F Junkie
Joined
Jun 12, 2012
Messages
12,037
They purpose-built an AI to improve computer vision algorithms. Doesn't sound that sinister when you put it that way, does it? It's not like an AI arbitrarily improved its capabilities on its own. It is what it was designed to do. We allowed it to do it; that is its purpose. And to pass the butter.
 