Intel Releases 10 TFLOPS Chip

rgMekanic

[H]ard|News
In a newsroom post today, Intel announced the Intel Stratix 10 FPGA. Intel calls this special chip the fastest of its kind in the world, boasting 10 trillion floating-point calculations per second (10 TFLOPS). Field-programmable gate arrays differ from standard CPUs in that they can be re-programmed on the fly to perform specialized computing tasks.

That is some incredible power for sure; a handful of those could give the HardOCP DC team a boost. I look forward to snagging one on eBay in a few years ;)

The Stratix 10 contains about 30 billion transistors – more than triple the number of transistors in the chips that run today’s fastest laptops and desktops – and can process the data equivalent to 420 Blu-ray Discs in just one second.
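For anyone curious what "420 Blu-ray Discs in just one second" works out to, here's a quick back-of-envelope sketch in Python. The 25 GB single-layer disc capacity is my assumption, not a figure from Intel's post:

```python
# Rough check of the "420 Blu-ray Discs per second" claim.
DISC_GB = 25            # assumed single-layer Blu-ray capacity in GB
DISCS_PER_SECOND = 420  # figure from Intel's newsroom post

throughput_tb_per_s = DISCS_PER_SECOND * DISC_GB / 1000
print(f"Equivalent data rate: ~{throughput_tb_per_s:.1f} TB/s")  # ~10.5 TB/s
```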
 
These are extremely specialized cores that have to be programmed per application, similar to the Tensor cores some of us are familiar with. It doesn't just crush numbers; it crushes one single programmed job extremely well. While it can't run a video game or any other task that requires variety, if programmed for it, it could help predict something like the likelihood of a tornado under certain weather conditions, or run a quintillion simulations and tell us when the next meteor strike is likely to destroy the earth. Really the applications are endless, but they have to be handled on a case-by-case basis. Don't ever think of this type of technology as something for consumer use.
 
oldmanbal said:
These are extremely specialized cores that have to be programmed per application... Don't ever think of this type of technology as something for consumer use.
Not quite. Unlike ASICs, which are etched and baked for specific operations, FPGAs can be programmed for ANY combination of specific operations -- in some cases on the fly. e.g. with enough programmable gates, you could flash an existing x86_64 architecture onto it. However, there's really no point in programming one for such a generic task since: 1. the hardware costs so much, and 2. the time it would take to design an optimized circuit for a particular runtime would be ridiculous. :)

I'm more interested to see what advancements have been made in memory-centric computation since that is the limiting factor for FPGAs.
 
oldmanbal said:
Don't ever think of this type of technology as something for consumer use.
Why not? Someday, we will need computing power that could reprogram itself on the fly... Imagine Crysis 17, where the machine learns from your actions, reprograms the CPU on the fly, then takes over the Roomba with a kitchen knife & sneaks up on you...
 
I said "If a ASIC for crypto can't be reprogrammed because it doesn't have the resources in the pipeline for the new algorithm, the FPGA will do it (even if it isn't as fast, it will be much faster than a GPU."
 
Why not? Someday, we will need computing power that could reprogram itself on the fly... Imagine Crysis 17, where the machine learns from your actions, reprograms the CPU on the fly, then takes over the Roomba with a kitchen knife & sneaks up on you...
C'mon now, we all know that we'll just end up using it for AI generated pr0n.
If you think crypto-mining is sucking down impressive amounts of power now, wait till we get pr0n generating AI in the home ;)
 
Well, I know for crypto (such as bitcoin) CPUs were first, then came GPUs once the software was coded to take advantage of them. FPGAs were the next step up, far more potent since they were programmed specifically for the code on a case-by-case basis, and ASICs are a further step up that chain, absolutely purpose-built for that one task.

A CPU is meant to do many things decently well, but it is loaded down by having to handle a bunch of everything: not the most effective, not the quickest, not the most power efficient.

A GPU technically is an ASIC, but one meant for graphics loads and (generally) not worried about other things. Performance is quite good, around 1/2 the raw energy used for the work produced (or better).

An FPGA is focused on specific libraries of information, so in essence (to my understanding) it is like a set-top box (console): programmed for the task at hand and not much else, meant to be "good" rather than "great" (more power, but that much more raw grunt).

ASICs are purpose-built for that one job and nothing more (designed to be as efficient/effective as possible for the work, so they tend to hit absolutely brutal performance levels).

====================================
(bitcoin hashing example)
CPU
AMD 1055T -- 23 MH/s @ 100 W
i7 980X -- 19.2 MH/s @ 150 W

GPU
Radeon 7970 -- 800 MH/s @ 350 W
GTX 580 -- 156 MH/s @ 244 W
(AMD GPUs are "superior hashing", at least going by readily accessible performance numbers, probably because their design is that much more parallel; VLIW4/VLIW5/GCN very much are exactly that, and the more raw data they can chew through, the happier they are.
Nvidia anything really does not compare (maybe Tensor cores or whatever may be different?); they seem to be more of a one-trick pony, able to race a race or crunch numbers but not both, whereas Radeons are able to do both more or less equally.
At least for bitcoin and most other hashing (code cracking), NV was about 1/4 as fast, if that, for around the same power used, if not more power for less performance.)

FPGA -- 25 GH/s @ 1200 W

ASIC -- 600+ GH/s @ 600+ W (of course there are many that use far less or far more, so performance is all over the place)

At least when I was hashing/mining, the FPGAs at the time were quite a bit faster than any GPU being used, for not much more power, but they were also quite pricey.

ASICs, on the other hand, blew the snot out of both FPGAs and GPUs/CPUs handily in regard to outright performance, and the wattage taken to achieve that was very nice; cost-to-performance really is in a league of its own as well.
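Putting those quoted numbers on a per-watt basis makes the jumps clearer. A minimal sketch using only the figures above; they're rough recollections, so treat the output as order-of-magnitude only:

```python
# MH/s per watt from the hash rates and wattages quoted above.
devices = {
    "AMD 1055T (CPU)":   (23,      100),   # MH/s, watts
    "i7 980X (CPU)":     (19.2,    150),
    "Radeon 7970 (GPU)": (800,     350),
    "GTX 580 (GPU)":     (156,     244),
    "FPGA":              (25_000,  1200),  # 25 GH/s
    "ASIC":              (600_000, 600),   # 600+ GH/s
}

for name, (mh_s, watts) in devices.items():
    print(f"{name:18} {mh_s / watts:8.2f} MH/s per watt")
```

The CPUs land around 0.1-0.2 MH/s per watt, the GPUs around 0.6-2.3, the FPGA around 20, and the ASIC around 1000, which is exactly the step-change described above.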

====================================
10 TFLOPS is not that impressive (many GPUs are right around that or higher), though that clock speed is sick... wonder how many watts they will chew in the process, and if they will be using proper solder or some more of that thin-wafer thermal paste they seem to like to use lately LMAO
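For the GPU comparison, peak FP32 throughput is roughly shader count × boost clock × 2 (one fused multiply-add per shader per cycle). A quick sketch with ballpark specs for two current cards; the shader counts and clocks are my own ballpark figures, not from the article:

```python
# Peak FP32 TFLOPS ~= shaders * boost clock (GHz) * 2 FLOPs per FMA / 1000
def peak_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * boost_ghz * 2 / 1000

print(f"GTX 1080 Ti: ~{peak_tflops(3584, 1.58):.1f} TFLOPS")  # ~11.3
print(f"Titan V:     ~{peak_tflops(5120, 1.46):.1f} TFLOPS")  # ~14.9
```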
 
These probably would be better than non-updateable ASIC chips and GPUs... in regards to crypto mining... that is, if they ever get that far.
 
If any of this can trickle down to their forthcoming GPU tech, then maybe there's a chance they can be a competitor after all.

Really not wanting to root for blue after Meltdown/Spectre, but at this point I'd take almost anyone on the field for GPUs just to stick it to NV.
 
If any of this can trickle down to their forthcoming GPU tech, then maybe there's a chance they can be a competitor after all.

Really not wanting to root for blue after Meltdown/Spectre, but at this point I'd take almost anyone on the field for GPUs just to stick it to NV.

It's for professional programmer / simulation use. Not for playing games :(
 