Tachyum Prodigy Native AI Supports TensorFlow and PyTorch

erek

[H]F Junkie
Joined: Dec 19, 2005
Messages: 10,897
NVIDIA needs to hustle; we all knew that custom dedicated ASICs were coming, same as with the Bitcoin miners. Exciting times!

"Prodigy is truly a universal processor. In addition to native Prodigy code, it also runs legacy x86, ARM and RISC-V binaries. And, with a single, highly efficient processor architecture, Prodigy delivers industry-leading performance across data center, AI, and HPC workloads. Prodigy, the company's flagship Universal Processor, will enter volume production in 2021. In April, the Prodigy chip successfully proved its viability with a complete chip layout exceeding speed targets. In August, the processor is able to correctly execute short programs, with results automatically verified against the software model, while exceeding the target clock speeds. The next step is to get a manufactured wholly functional FPGA prototype of the chip later this year, which is the last milestone before tape-out.

Prodigy outperforms the fastest Xeon processors at 10x lower power on data center workloads, as well as outperforming NVIDIA's fastest GPU on HPC, AI training and inference. A mere 125 HPC Prodigy racks can deliver 32 tensor EXAFLOPS. Prodigy's 3X lower cost per MIPS and 10X lower core power translate to a 4X lower data center Total Cost of Ownership (TCO), enabling billions of dollars of savings for hyperscalers such as Google, Facebook, Amazon, Alibaba, and others. Since Prodigy is the world's only processor that can switch between data center, AI and HPC workloads, unused servers can be used as a CAPEX-free AI or HPC cloud, because the servers have already been amortized."


https://www.techpowerup.com/271434/tachyum-prodigy-native-ai-supports-tensorflow-and-pytorch
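
For scale, that rack claim works out like this (the servers-per-rack figure below is just my guess, not from the press release):

```python
# Quick sanity check on "125 HPC Prodigy racks can deliver 32 tensor EXAFLOPS".
exaflops_total = 32                     # claimed tensor EXAFLOPS (presumably low-precision tensor math)
racks = 125

per_rack_pflops = exaflops_total * 1000 / racks    # 1 EXAFLOPS = 1000 PFLOPS
print(f"per rack: {per_rack_pflops:.0f} PFLOPS")   # ~256 PFLOPS per rack

servers_per_rack = 32                   # assumption for illustration only
print(f"per server: {per_rack_pflops / servers_per_rack:.0f} PFLOPS")  # ~8 PFLOPS per server
```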
 
I don't believe it. We already have companies at the edge of each respective technology, and no one is going to come in with a chip that's a combination of heavy-instruction-set and slim-instruction-set archs and beat everyone. At what process node? Or do they have a better process node than the rest of the semiconductor industry?

I believe for this to be viable you would need a very heavy instruction set processor coupled with a combination of underlying architectures.
 
I don't believe it. We already have companies at the edge of each respective technology, and no one is going to come in with a chip that's a combination of heavy-instruction-set and slim-instruction-set archs and beat everyone. At what process node? Or do they have a better process node than the rest of the semiconductor industry?

I believe for this to be viable you would need a very heavy instruction set processor coupled with a combination of underlying architectures.
c'mon

GPUs aren't completely optimal for the long-term acceleration of ML workloads; they're a compromise, or a set of compromises.
 
c'mon

GPUs aren't completely optimal for the long-term acceleration of ML workloads; they're a compromise, or a set of compromises.

And an arch resembling a CPU is even worse than a GPU. I don't deny you can make a decent ASIC for just about any workload; I deny a magic chip that has the advantages of everything.
 
The article opened with a bunch of gibberish about AI and integrated tensors; I wasn't familiar enough to immediately disbelieve it. But when they got to the CPU part, I knew BS when I read it. Vaporware claims don't have to make sense, only fool investors.

Crusoe's thing was dynamic recompilation of whatever to native, with hardware to help gather optimizing clues from the actual run. To some extent even MAME was able to do this. It's not likely to magically transform a CPU into a GPU or the reverse. No FPGA or group of them can compete with purpose-wired logic. If FPGAs were competitive at AI, coins, CPU, or GPU work, we'd still be using them.
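
For anyone who never ran into Crusoe: the idea was roughly this kind of loop, just done in firmware with hardware profiling help. A toy sketch of block-at-a-time dynamic recompilation (nothing to do with Transmeta's or Tachyum's actual software, purely illustrative):

```python
# Toy dynamic binary translation: translate guest basic blocks on first use,
# cache the translations, and count executions so hot blocks could later be
# re-optimized with the gathered profile data.

translation_cache = {}   # guest PC -> [native_fn, exec_count]

def translate_block(guest_code, pc):
    """Fake 'translation': wrap the guest block starting at pc in a host-callable
    function. A real translator would emit native machine code here."""
    block = guest_code[pc]                     # list of (op, arg) guest instructions
    def native_fn(state):
        for op, arg in block:
            if op == "add":
                state["acc"] += arg
            elif op == "jmp":
                return arg                     # next guest PC
        return pc + 1                          # fall through
    return native_fn

def run(guest_code, state, pc=0, steps=100):
    for _ in range(steps):
        if pc not in guest_code:
            break
        if pc not in translation_cache:
            translation_cache[pc] = [translate_block(guest_code, pc), 0]
        fn, count = translation_cache[pc]
        translation_cache[pc][1] = count + 1   # profile counter for re-optimization
        pc = fn(state)
    return state

# tiny guest program: block 0 adds 5 and jumps to block 1, which adds 1
program = {0: [("add", 5), ("jmp", 1)], 1: [("add", 1)]}
print(run(program, {"acc": 0}))                # {'acc': 6}
```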
 
The article opened with a bunch of gibberish about AI and integrated tensors; I wasn't familiar enough to immediately disbelieve it. But when they got to the CPU part, I knew BS when I read it. Vaporware claims don't have to make sense, only fool investors.

Crusoe's thing was dynamic recompilation of whatever to native, with hardware to help gather optimizing clues from the actual run. To some extent even MAME was able to do this. It's not likely to magically transform a CPU into a GPU or the reverse. No FPGA or group of them can compete with purpose-wired logic. If FPGAs were competitive at AI, coins, CPU, or GPU work, we'd still be using them.
Gibberish? :( :( C'mon :(
Is it really that transparent and obvious? I'll take a wait-and-see approach, but it seems reasonable to me. We saw a similar path with Bitcoin mining, and now using GPUs seems pretty sub-optimal for crypto mining.
 
Gibberish? :( :( C'mon :(
Is it really that transparent and obvious? I'll take a wait-and-see approach, but it seems reasonable to me. We saw a similar path with Bitcoin mining, and now using GPUs seems pretty sub-optimal for crypto mining.


Bitcoin mining is not a difficult workload, hence how easily it could be moved to an FPGA and subsequently an ASIC. The problem is that very few actual workloads are that fixed and resource-unintensive. The reason we eventually hit a point of ASICs for Ether and other more complex cryptos is that you could build a memory-heavy ASIC that competed on price to performance (only in large batches); GPUs are good at that task since they already compete heavily on that landscape by their nature. Cache-based algos remain on CPUs most of the time, as it's not efficient to print an ASIC with a bunch of cache on the chip (Intel Phis do pretty well at those algos too, due to 16 GB of MCDRAM on the chip and a bunch of threads).
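
To show how little state that inner loop actually needs: the whole Bitcoin proof-of-work is double SHA-256 over an 80-byte header while sweeping a nonce, which is why it maps so cleanly onto fixed-function silicon. A minimal illustration with dummy header bytes (not a real block):

```python
import hashlib, struct

def double_sha256(data: bytes) -> bytes:
    # Bitcoin's proof-of-work hash: SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# 80-byte header = version + prev-block hash + merkle root + time + bits + nonce.
# Dummy values here; a miner only varies the nonce (and parts of the merkle root).
header_fixed = (struct.pack("<L", 2) + b"\x00" * 32 + b"\x11" * 32
                + struct.pack("<LL", 1600000000, 0x1D00FFFF))

def mine(target_prefix=b"\x00\x00", max_nonce=2_000_000):
    for nonce in range(max_nonce):
        h = double_sha256(header_fixed + struct.pack("<L", nonce))[::-1]
        if h.startswith(target_prefix):        # toy difficulty check
            return nonce, h.hex()
    return None

print(mine())
```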

With enough R&D you can make an ASIC do almost any workload; however, ASICs are normally a bit behind the leading edge for fabbing, so don't expect them to beat CPUs or GPUs at their own game.

It would be interesting to see a chip with a cluster of FPGA and heavy logic blocks, as well as clever software running on it, but it would be a difficult device to create, and it would also be behind traditional fabs when they print it. (It could still pose advantages for non-traditional AI workloads.)
 
I don't believe it. We already have companies at the edge of each respective technology, and no one is going to come in with a chip that's a combination of heavy-instruction-set and slim-instruction-set archs and beat everyone. At what process node? Or do they have a better process node than the rest of the semiconductor industry?

I believe for this to be viable you would need a very heavy instruction set processor coupled with a combination of underlying architectures.

Tachyum has a new clean architecture that combines RISC, VLIW and vector.
 
Tachyum has a new clean architecture that combines RISC, VLIW and vector.

So a heavier RISC core (which is common whenever someone actually tries to develop a RISC core to serve any task). There are a few companies modifying RISC cores now, and many products out there today using them. However, I don't have much faith in a company that has yet to build the chip on an FPGA (I would think that would be an early step, no?), let alone deal with anything related to actually fabbing the chip.
 
So a heavier RISC core (which is common whenever someone actually tries to develop a RISC core to serve any task). There are a few companies modifying RISC cores now, and many products out there today using them. However, I don't have much faith in a company that has yet to build the chip on an FPGA (I would think that would be an early step, no?), let alone deal with anything related to actually fabbing the chip.

It is not a RISC core. The architecture includes VLIW and CISC elements.
 
It is not a RISC core. The architecture includes VLIW and CISC elements.

Isn't that the case with most RISC implementations? I don't believe there are too many "RISC" CPUs that are truly RISC processors.
 
Isn't that the case with most RISC implementations? I don't believe there are too many "RISC" CPUs that are truly RISC processors.

RISC implementations use a RISC ISA. The Prodigy ISA is closer to EPIC than to RISC.
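
The practical difference: in an EPIC/VLIW-style ISA the compiler marks which operations are independent and packs them into bundles the core issues together, instead of the hardware rediscovering that parallelism at runtime. A toy sketch of the bundling idea (nothing to do with Prodigy's actual encoding, which Tachyum hasn't published):

```python
# Toy EPIC-style bundler: greedily pack independent ops into fixed-width issue
# bundles. Each op is (dest, mnemonic, srcs); two ops conflict if one reads or
# writes a register the other writes.

ops = [
    ("r1", "add", ("r2", "r3")),
    ("r4", "mul", ("r5", "r6")),   # independent of the add -> same bundle
    ("r7", "add", ("r1", "r4")),   # depends on both -> starts a new bundle
    ("r8", "ld",  ("r9",)),        # independent of the second add -> joins it
]

def conflicts(a, b):
    a_dst, _, a_src = a
    b_dst, _, b_src = b
    return a_dst == b_dst or a_dst in b_src or b_dst in a_src

def bundle(ops, width=3):
    bundles, current = [], []
    for op in ops:
        if len(current) < width and not any(conflicts(op, o) for o in current):
            current.append(op)
        else:
            bundles.append(current)
            current = [op]
    if current:
        bundles.append(current)
    return bundles

for i, b in enumerate(bundle(ops)):
    print(f"bundle {i}: {[mnemonic for _, mnemonic, _ in b]}")
# bundle 0: ['add', 'mul']
# bundle 1: ['add', 'ld']
```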
 