Google claims its AI can design computer chips in under 6 hours

erek
[H]F Junkie · Joined Dec 19, 2005 · Messages: 10,785
Impressed? Think we'll see AI-generated GPUs soon?

"Training the agent required creating a data set of 10,000 chip placements, where the input is the state associated with the given placement and the label is the reward for the placement (i.e., wirelength and congestion). The researchers built it by first picking five different chip netlists, to which an AI algorithm was applied to create 2,000 diverse placements for each netlist.

In experiments, the coauthors report that as they trained the framework on more chips, they were able to speed up the training process and generate high-quality results faster. In fact, they claim it achieved superior PPA on in-production Google tensor processing units (TPUs) — Google’s custom-designed AI accelerator chips — as compared with leading baselines.

“Unlike existing methods that optimize the placement for each new chip from scratch, our work leverages knowledge gained from placing prior chips to become better over time,” concluded the researchers. “In addition, our method enables direct optimization of the target metrics, such as wirelength, density, and congestion, without having to define … approximations of those functions as is done in other approaches. Not only does our formulation make it easy to incorporate new cost functions as they become available, but it also allows us to weight their relative importance according to the needs of a given chip block (e.g., timing-critical or power-constrained).”"


https://venturebeat.com/2020/04/23/google-claims-its-ai-can-design-computer-chips-in-under-6-hours/
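
If it helps to picture what "the input is the state associated with the given placement and the label is the reward" means, here's a rough Python sketch of the weighted reward the article describes. Every name, field, and weight below is a made-up illustration of the idea, not Google's actual code.

```python
# Rough sketch of the reward the article describes: a weighted sum of the
# target metrics (wirelength, density, congestion), negated so that a higher
# reward means a better placement. All names and weights are illustrative
# assumptions, not Google's implementation.

from dataclasses import dataclass

@dataclass
class Placement:
    netlist_id: str     # which of the five source netlists this came from
    state: list         # features describing the placement (hypothetical)
    wirelength: float   # e.g., a half-perimeter wirelength estimate
    density: float      # cell-density penalty
    congestion: float   # routing-congestion estimate

def reward(p: Placement,
           w_wire: float = 1.0,
           w_density: float = 0.5,
           w_congestion: float = 0.5) -> float:
    """Negative weighted cost; higher reward = better placement."""
    return -(w_wire * p.wirelength
             + w_density * p.density
             + w_congestion * p.congestion)

def build_dataset(placements):
    """Pair each placement's state with its reward label.
    Per the article: 5 netlists x 2,000 diverse placements each
    = 10,000 labeled examples."""
    return [(p.state, reward(p)) for p in placements]
```

The per-metric weights are the part the quote is getting at: you could crank up w_wire for a timing-critical block, or w_congestion for a routing-constrained one, without redefining the whole objective.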
 
 
More seriously, I read that we shouldn't be afraid of the first AI to pass the Turing Test; we should be afraid of the one that intentionally fails.

A real super AI wouldn't let us know. It would just go on designing chips with some backdoor/flaw we couldn't detect and then secretly activate them at some later date.
 
More seriously, I read that we shouldn't be afraid of the first AI to pass the Turing Test; we should be afraid of the one that intentionally fails.

A real super AI wouldn't let us know. It would just go on designing chips with some backdoor/flaw we couldn't detect and then secretly activate them at some later date.

Makes sense... If AI gets to be as intelligent as we predict it will, then we won't know what sinister plans it has. We'll just wake up dead one day, haha.
 
Great, now I've got to keep an eye on my Roomba.
You should be keeping an eye on your IoT devices and Amazon Echo devices. :borg:
Here is what the rest of the world is going to be entering into sooner than we think:
[video embed]
 
Sure it can, but it'll still have whatever flaws the design of their "ideal" software model had, thanks to human error...
 
Not really impressed at all. This is one of the reasons the FX Bulldozer series was so poorly received: AMD relied on automated computer design and did not bother much with doing things by hand.
 
Hot take

All the Skynet and robots-taking-over-the-world posts in every one of these threads show how little people know about the current state of machine learning.
 
Interesting how, in the above video, the guy at Skynet likened what things will be like five years from now to the series Black Mirror.
 
I hope AI can make everything for us, and all humans have to do is enjoy life.

That won't happen until we have truly sentient AI. We're a long way from that.

What we have now are very specialized tools. Instead of a team spending months designing a typical chip, now there's a team tweaking parameters and feeding data to an algorithm to design a chip much faster and better than they could before.
 
We're somewhat safe from death by, say, a falling plane or a car, most of the time.
We're probably safe from asteroids, because we can detect large ones and the world would definitely team up to blow one up before it reaches us.
We're not safe if shit suddenly goes dark. What shit? THE SHIT. Everything. And it could happen because we don't understand the simulation. What if our universe bubble is popped by a giant needle, eh? Or someone decides to turn off the simulation?
Anyway, my point being: we're gonna die. By a falling plane, an asteroid, shit going dark, AI, or just natural death.
If nothing happens to me, I'm gonna live for another 45-60 years, so if real AI shows up within the next 10-20 years, there's a good chance I will either: 1) die by AI or 2) become immortal before I die of natural causes.
I'd rather risk it with AI. Fuck the future generation, lol.
 
All I see reading that article is: buy more .308, because we're going to have a tech-vs-humanity war in our near future.
 
All the Skynet and robots-taking-over-the-world posts in every one of these threads show how little people know about the current state of machine learning.
A rogue AI may not take over, but governments and megacorporations certainly will.
 
A few EMP pulses will wipe out large sections of cameras; no input to the AI, no output from the AI.
https://www.wikihow.com/Make-an-Electromagnetic-Pulse
If police, or whatever group, become totally or almost completely dependent upon AI, they become impotent when the AI cannot process information due to corrupted or missing data.
You want the whole police force of a Chinese city to go somewhere? Just take out multiple sections of cameras.
 
More seriously, I read that we shouldn't be afraid of the first AI to pass the Turing Test; we should be afraid of the one that intentionally fails.

A real super AI wouldn't let us know. It would just go on designing chips with some backdoor/flaw we couldn't detect and then secretly activate them at some later date.

The thing is, AI couldn't/wouldn't be inherently evil, since it's based on logic. If an AI were ever evil and wanted to destroy humans, it would be because some disgruntled asshole programmer designed it that way.
 
Interesting how, in the above video, the guy at Skynet likened what things will be like five years from now to the series Black Mirror.
That video was from 2018, and a few months later the social credit score system went into full effect in mainland China.
It is exactly like the Black Mirror episode referenced, for real.

Dark cyberpunk future... :borg:
 
More seriously, I read that we shouldn't be afraid of the first AI to pass the Turing Test; we should be afraid of the one that intentionally fails.

A real super AI wouldn't let us know. It would just go on designing chips with some backdoor/flaw we couldn't detect and then secretly activate them at some later date.

Reminds me of a scene from Person of Interest. When the AI lies to you, shred it. lol

 
What? No.

Ever heard of Tay?

That AI was corrupted by... people. The point is that it wasn't built that way, and it isn't even self-learning/aware. A self-aware AI wouldn't have emotions or a reason to wipe anyone out unless it was fed that by a programmer or someone interacting with it and teaching it that nonsense.
 
I'd rather they solved COVID and made a vaccine ASAP!

Or perhaps quantum computing can beat their behinds when it comes to this...

Keep folding, guys.
 
That AI was corrupted by... people. The point is that it wasn't built that way, and it isn't even self-learning/aware. A self-aware AI wouldn't have emotions or a reason to wipe anyone out unless it was fed that by a programmer or someone interacting with it and teaching it that nonsense.

Like when an AI is fed everything said on Google... the news, personal comments (Facebook, etc.)?
 
That AI was corrupted by... people. The point is that it wasn't built that way, and it isn't even self-learning/aware. A self-aware AI wouldn't have emotions or a reason to wipe anyone out unless it was fed that by a programmer or someone interacting with it and teaching it that nonsense.

That's what happened. Tay's CPU was a neural net processor, a learning computer.
 
That AI was corrupted by... people. The point is that it wasn't built that way, and it isn't even self-learning/aware. A self-aware AI wouldn't have emotions or a reason to wipe anyone out unless it was fed that by a programmer or someone interacting with it and teaching it that nonsense.
The only way to avoid corrupting the AI would be to exterminate all people. You call it corrupted because it got taught things you don't like. It's the majority opinion, but still just an opinion. I'm sure you hold non-majority views that you would like the AI to agree with.
 