martinmsj
[H]ard|Gawd
Joined: Mar 3, 2005 | Messages: 1,581
I don't think it's as cut and dried as most of you make it out to be when it comes to NVidia pushing CUDA.
NVidia has it easy selling CUDA to developers. The code is practically the same C code these developers already write. I'm a C# programmer (though the first language I learned was C++, then Java, then C for an operating systems course), and I got some routines up and running with CUDA.
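To make that concrete, here's a minimal sketch (a toy SAXPY example of my own, not from any real project) of what "practically the same C code" means. The kernel body is just the body of the C loop; the __global__ qualifier, the thread-index line, and the <<<...>>> launch are the only new syntax:

[CODE]
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

/* The kernel body is the CPU loop body; __global__ and the
   thread-index computation are the only CUDA-specific parts. */
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *x = (float *)malloc(bytes), *y = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Explicit device buffers and copies, same malloc/memcpy mindset. */
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);

    /* The launch is one line of new syntax on top of a C function call. */
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", y[0]); /* expect 4.000000 */
    cudaFree(dx); cudaFree(dy); free(x); free(y);
    return 0;
}
[/CODE]

Compile it with nvcc and it runs. If you already write C, that's roughly the whole on-ramp.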
DirectCompute is just too limited when you look at the range of NVidia GPUs that can run CUDA kernels and the operating systems they run on; DirectCompute is tied to Windows.
OpenCL is what I would love to support, but the code is ugly. It's quite a pain just to look at, and so far I've only heard what a pain it is to implement. For me, the learning curve is steep (I've done simple polygon shapes, texturing, animation, and effects using OpenGL and C++ when I had free time in college).
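For contrast, here's a hedged sketch of the host-side plumbing OpenCL requires before a single kernel launch (error checking omitted, kernel source string elided; the "saxpy" name and the argument layout are placeholders of mine). Every call is standard OpenCL 1.x API, but you spell out every step yourself:

[CODE]
#include <CL/cl.h>
#include <stddef.h>

/* Sketch only: the setup OpenCL demands before one kernel launch. */
void run_saxpy(const char *src, size_t n) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* The kernel ships as a source string and is compiled at run time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", NULL);

    /* Buffers and argument binding are all explicit API calls. */
    cl_mem y = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), NULL, NULL);
    float a = 2.0f;
    clSetKernelArg(k, 0, sizeof(float), &a);
    clSetKernelArg(k, 1, sizeof(cl_mem), &y);

    size_t global = n;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(q);
}
[/CODE]

Compare that with the CUDA sketch above: the platform discovery, context, queue, and runtime-compilation steps simply don't appear there, and that gap is most of what I mean by the difference in learning curve.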
When you consider what CUDA looks like in your C code, the learning curve, the time investment, the operating systems it works on, and the range of GPUs you can run CUDA code on (not to mention the free money NVidia hands out for using it), it really doesn't seem like an "NVidia is evil and they pay money to evil devs" situation.