paradoxical
Limp Gawd
- Joined
- May 5, 2013
- Messages
- 290
Well I mean HDLs are not crazy rare. Any time a company wants to design any logic, they are going to use an HDL (Verilog or VHDL, plus SystemVerilog or SystemC for verification, combined with Python scripts, etc.). Intel uses them to design their processors, AMD uses them, Nvidia, Qualcomm, Apple, ARM, TI, Marvell, Broadcom, Asmedia, NXP, etc etc etc. It's obviously not as common as general software development languages, but I'm just saying every large company that does any sort of logic design is using them and has entire engineering teams that use them.
I'll take it one step further. People are often familiar with high-end, high-performance FPGAs, but they forget that there are very small FPGAs that do very basic functionality. When you pick up your phone and simply USE it, there is a good chance your phone has something like this:
http://www.latticesemi.com/iCE40
This is a very small, basic FPGA that might handle basic power sequencing logic, I2C logic, SPI logic, interrupt control, and basic logical signal routing.
I don't know of any GPU that has onboard programmable logic the way an FPGA does. GPUs are usually pretty specialized for their function; using programmable logic would take a lot more silicon area/power than they have to spare.
In general I always hesitate to accept the idea of "rise of the FPGA in common computing".
All of the below in the context of FPGA vs purpose design ASIC.
The advantage of the FPGA as a general accelerator would be that you have a single block of silicon that you can reconfigure for any application. For a particular application it may be faster than a CPU. However, for any particular application the same logic burned into an ASIC will always be smaller and faster/more efficient (smaller is pretty much guaranteed; faster vs. more efficient is where you'd be making a balance).
So then the question becomes:
How many different applications do you need for your accelerator to handle before an FPGA would make more sense than an ASIC?
1 application? ASIC wins
2 applications? You could probably fit two separate ASIC accelerators into the same silicon area as a single FPGA for the same tasks, and the ASIC design would still be smaller and faster/more efficient
3 applications?
4 applications?
5?
I don't know, at what point does the FPGA win for general computing acceleration? FPGA logic is not exactly dense, and it's not fast.
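To put a rough number on that list: here's a back-of-the-envelope sketch in Python. The 20x area penalty is purely an illustrative assumption on my part (published estimates for LUT-based fabric vs. standard-cell ASIC logic are somewhere in the tens-of-x range, and real numbers vary a lot by process and design), so treat the output as a thought experiment, not a measurement.

```python
# Illustrative FPGA-vs-ASIC area comparison. FPGA_AREA_PENALTY is an
# ASSUMED ballpark, not a measured figure.
FPGA_AREA_PENALTY = 20  # mm^2 of FPGA fabric per mm^2 of equivalent ASIC logic


def asic_blocks_in_fpga_footprint(asic_block_mm2: float, fpga_mm2: float) -> int:
    """How many separate fixed-function ASIC accelerators would fit in the
    silicon area consumed by one FPGA implementation."""
    return int(fpga_mm2 // asic_block_mm2)


# Hypothetical accelerator: 2 mm^2 as an ASIC, so ~40 mm^2 as FPGA fabric.
asic_mm2 = 2.0
fpga_mm2 = asic_mm2 * FPGA_AREA_PENALTY
print(asic_blocks_in_fpga_footprint(asic_mm2, fpga_mm2))  # -> 20
```

Under that (assumed) penalty, the FPGA only starts winning on area once you genuinely need more distinct workloads than you could fit as dedicated ASIC blocks in the same footprint, which is the point of the 1/2/3/4/5-applications question above.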
There are other contexts where the FPGA wins NOT because it's better performing, but because the economics/time make more sense.
Do you need 1-100 units of your design, and it runs OK on an FPGA? Then you are probably not going to design an ASIC; it costs enormous amounts of $$ to spin up just a few ASIC chips. Go with the FPGA.
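The economics here reduce to a simple break-even calculation. All the dollar figures below are made-up assumptions just to show the shape of the math; real ASIC NRE (masks, tooling, verification) and unit costs vary enormously by node and volume.

```python
# Break-even volume: at how many units does paying ASIC NRE beat
# buying an FPGA per unit? All costs are HYPOTHETICAL examples.


def breakeven_units(asic_nre: float, asic_unit: float, fpga_unit: float) -> float:
    """Units at which total ASIC cost (NRE + per-unit) equals total FPGA cost.
    Below this volume the FPGA is cheaper; above it the ASIC wins."""
    if fpga_unit <= asic_unit:
        raise ValueError("FPGA per-unit cost must exceed ASIC per-unit cost")
    return asic_nre / (fpga_unit - asic_unit)


# Assumed example: $2M NRE, $5/unit ASIC vs. $50/unit FPGA.
print(breakeven_units(2_000_000, 5.0, 50.0))  # ~44k units before the ASIC pays off
```

At 1-100 units you're nowhere near amortizing the NRE, which is exactly why the FPGA wins that scenario regardless of raw performance.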
Are you an engineer testing/debugging logic that will ultimately target an ASIC? Have fun waiting a few months every time you want to test a single change. Get an FPGA for testing/debugging.
Nvidia has entire FPGA racks that do nothing but run the logic of their GPUs (basically all the logic of a single 700mm^2 GPU chip spread across several racks of FPGAs at highly reduced speeds.... again... FPGAs are not dense compared to ASICs...)
https://www.cadence.com/en_US/home/...otium-s1-fpga-based-prototyping-platform.html
https://blogs.nvidia.com/blog/2011/05/16/sneak-peak-inside-nvidia-emulation-lab/
The other advantage an FPGA has is longevity as things develop in the future. Our products ship in quantities of 10k+, but we expect to be continually updating the core DSP functionality 4, 5, 6 years down the road. An ASIC would lock us into a very specific way of doing things, whereas an FPGA would not, and an FPGA is already more than fast enough to handle it. Therefore, we choose the FPGA even though our quantities are high, because of the flexibility. As best practices and new methods emerge, we can push them out to our customers basically in real time.
This makes sense to me as to why Apple would have chosen an FPGA for something as flexible as a video transcoding accelerator vs. an ASIC.