FPGA Folding

extide

2[H]4U
Joined
Dec 19, 2008
Messages
3,494
So, does anyone think this will ever happen? Based on this article on AnandTech, it appears that FPGAs can run OpenCL, and apparently they offer significantly better performance/watt than GPUs (roughly 10x better is quoted in that article). Unlike Bitcoin mining, there is no financial incentive to develop this stuff, so I would imagine it would have to be a grassroots effort in collaboration with Stanford... As far as I can tell, it seems like it is TECHNICALLY possible though.


Thoughts....


[Image: bittwareHR_678x452.jpg]
 
If you can get the OS to recognise it as an OpenCL device then core 17 will run on it.
 
From what I have read in various forums including at WCG, it would require the project to code specifically for them since they are highly specialized devices. Then you would have to have a user base willing to purchase them that makes all that coding and testing worth it. There is a large interest from former Bitcoiners to recycle their gear by using them at DC projects. However, there doesn't seem to be much interest from the major DC projects to even try.
 
If you can get the OS to recognise it as an OpenCL device then core 17 will run on it.

Excuse my ignorance on the issue but wouldn't the application also need to "find" this device on the list of compatible GPUs/devices?
Maybe Stanford could be convinced to add/accept it if the returns were promising enough.
 
Excuse my ignorance on the issue but wouldn't the application also need to "find" this device on the list of compatible GPUs/devices?
Maybe Stanford could be convinced to add/accept it if the returns were promising enough.

Once the OS sees it as an OpenCL device, FAH will simply be able to use the device's OpenCL index number.
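For anyone wondering what "index number" means here: OpenCL enumeration returns platforms, each with a list of devices, and clients typically address a device by its position in that flattened list. A minimal sketch in Python with a mocked-up device list (real enumeration goes through clGetPlatformIDs/clGetDeviceIDs; the platform and device names below are made up for illustration):

```python
# Sketch of selecting an OpenCL device by a flat index, the way a
# "-gpu-index"-style client option would. The platform/device names
# here are invented; a real client queries them from the OpenCL runtime.

def pick_device(platforms, index):
    """Flatten every platform's device list and return the device
    at the given flat index."""
    flat = [dev for _platform, devices in platforms for dev in devices]
    if not 0 <= index < len(flat):
        raise IndexError(f"no OpenCL device at index {index}")
    return flat[index]

# Hypothetical enumeration result: two platforms, three devices total.
# An FPGA board exposed through an OpenCL platform shows up like any other device.
mock_platforms = [
    ("NVIDIA CUDA", ["GeForce GTX 680"]),
    ("Altera SDK for OpenCL", ["Stratix V board", "Stratix V board #2"]),
]

print(pick_device(mock_platforms, 1))
```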
 
True, but once found on that list, the fact that it leverages OpenCL should make it usable.

Me want. I like exotic computing!
 
Also, if the OpenCL implementation is not 100% to specification, Core 17 will crash.
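That's the kind of thing worth guarding against up front. A sketch of a pre-flight check a core could do before launching work, parsing the "OpenCL &lt;major&gt;.&lt;minor&gt; ..." format that CL_DEVICE_VERSION strings use (the required-extension values and device strings below are illustrative, not what Core 17 actually checks):

```python
# Pre-flight check: refuse a device whose reported OpenCL version or
# extension list doesn't cover what the core needs, instead of crashing
# mid-run. Requirements shown are made up for illustration.

def device_ok(version_string, extensions, min_version=(1, 1), required_exts=()):
    try:
        # e.g. "OpenCL 1.2 FPGA SDK" -> (1, 2)
        major, minor = version_string.split()[1].split(".")[:2]
        version = (int(major), int(minor))
    except (IndexError, ValueError):
        return False  # malformed version string: treat as non-conformant
    if version < min_version:
        return False
    return all(ext in extensions.split() for ext in required_exts)

print(device_ok("OpenCL 1.2 FPGA SDK",
                "cl_khr_fp64 cl_khr_byte_addressable_store",
                required_exts=("cl_khr_fp64",)))
```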
 
Some serious Verilog would probably need to be written to get the FPGA running OpenCL, unless there is already some "IP" loaded as part of the board design that can already run it.
Writing an FPGA-compatible processor emulator, or buying an existing piece of IP, would be a major hurdle.
 
Altera's new tools write the Verilog for you.
Paste in your OpenCL functions, with some modifications here and there, and at the end Altera produces the Verilog needed to run the functions on their FPGA. Basically, it automatically creates a custom chip architecture for you, with advanced arithmetic units you'd never get in a GPU, and optimizes the pipeline lengths and number of cores specifically to run your code as fast as possible.

It still sounds hard, and it won't make sense for F@H because FPGAs are so expensive, F@H targets home users, and the project doesn't have enough developers. It could be pretty amazing for others doing computational research, though: with a few nice expensive FPGAs they could create a "supercomputer"-type system that is 100% designed around running only the calculations they need, as fast as possible. "I wonder what happens when we use this other formula"... the scientist compiles a billion transistors into a custom floating-point unit that runs thousands of copies of the new function in parallel. No need to optimize the code for the hardware; the hardware optimizes itself to the code.

http://www.altera.com/products/software/opencl/opencl-index.html
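To make it concrete, the input to a flow like that is just an ordinary data-parallel function: one "work item" per piece of the problem. Here's a toy stand-in sketched in Python rather than OpenCL C (the real kernel would carry __kernel/__global qualifiers, and this Lennard-Jones-style pair energy is only an illustrative computation, not F@H's actual kernel):

```python
# Illustrative stand-in for the sort of function you'd hand to an
# OpenCL-to-hardware compiler: each work item computes one pairwise
# interaction, and the tool would pipeline the arithmetic into logic.
# This is a toy 12-6 Lennard-Jones energy with sigma = epsilon = 1.

def pair_energy(xi, yi, zi, xj, yj, zj, cutoff=1.0):
    """Toy energy for one particle pair; zero beyond the cutoff."""
    r2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
    if r2 > cutoff ** 2 or r2 == 0.0:
        return 0.0
    inv6 = (1.0 / r2) ** 3           # (1/r)^6
    return 4.0 * (inv6 * inv6 - inv6)

# One work item per pair; a GPU or an FPGA pipeline would run these in parallel.
coords = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.0, 0.9, 0.0)]
total = sum(pair_energy(*coords[i], *coords[j])
            for i in range(len(coords)) for j in range(i + 1, len(coords)))
print(round(total, 3))
```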
 
If you can get the OS to recognise it as an OpenCL device then core 17 will run on it.
This device doesn't provide an OpenCL API the way a GPU does. One feeds OpenCL *source* code to its compiler, and that source code is subsequently compiled into logic.

One could, however, try to evaluate the effort of porting OpenMM to it, which, NB, could pose some interesting challenges (minimizing traffic between the board and the host machine, for instance).
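The traffic point matters more than it might seem: a back-of-the-envelope model shows why you'd want to keep many simulation steps on the board per host round trip. All the timing numbers here are made up for illustration:

```python
# Toy cost model (invented numbers): wall time to run N steps when the
# host reads results back every `batch` steps over the bus. Batching
# amortizes the fixed per-transfer latency across many steps.

def wall_time(steps, batch, step_time_us=5.0, transfer_us=50.0):
    transfers = -(-steps // batch)  # ceiling division
    return steps * step_time_us + transfers * transfer_us

print(wall_time(10_000, 1))    # read back after every step
print(wall_time(10_000, 100))  # read back every 100 steps
```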
 
Yeah, it seems like you would need the Core 17 source code, at least the OpenCL part. Then you would need to put it into Altera's tool, which would build the Verilog and whatnot for an FPGA. If Stanford did that part for us and then released a binary that you could flash onto an FPGA, we could probably make it work. So yeah, we would definitely need some help from Stanford, and then you would probably need a pretty specific FPGA. I wonder how much that dev board costs (the one in the pic in my OP).

Oh well, it probably won't ever happen, but it sure would be cool if it did. :/
 
Seems the FPGA chip alone is around $7,000 if you want to solder it on yourself. I think the dev board could be cheaper than that, though, since the goal is to get people trying out the tools.
 