There is no need to offer something to someone if they are free to take it. CUDA is free.
No, CUDA is free to use, not free to port to a new platform. That is a *huge* difference.
I explained why it was wrong, and you said I have no point, which is rude, since I do have a point.
Sorry, I must have truly missed where you explained why what I said was wrong. Can you please elaborate on why my statement was wrong? Preferably with less emotion.
I think you all need to step back here and understand one thing. CUDA is not a failure.
(I’ll try not to repeat myself.) If you read my previous post on why CUDA is an easy sell, then you’re halfway there. GPGPU computing is nothing new; however, the fundamental difference is that before, programmers, students, and researchers were forced to shoe-horn general computations into the graphics pipeline. That made what we could do on the GPU (as far as general computation goes) even more limited than now (besides making you want to stab yourself trying to code for it).
CUDA is not a failure.
C and CUDA are practically indistinguishable. Every student in mathematics, science, engineering, and computer science has to learn C. CUDA works with MATLAB, which is huge in the fields I just mentioned. MATLAB assured CUDA's wide acceptance in the college/university/research world. This is where companies hire their researchers from and where developers scout their new talent.
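To make the "practically C" point concrete, here is a minimal sketch (illustrative names, not from any particular project; requires nvcc and a CUDA-capable GPU to build and run): the same SAXPY routine written once as plain C and once as a CUDA kernel. The only non-C parts are the `__global__` qualifier, the built-in thread indices, and the `<<<blocks, threads>>>` launch syntax.

```cuda
#include <stdio.h>

// Plain C: scale-and-add over an array (SAXPY), one loop iteration per element.
void saxpy_cpu(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// CUDA: the same body, but each GPU thread handles one element.
__global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *x, *y;
    // Unified memory keeps host/device bookkeeping out of the example.
    cudaMallocManaged(&x, bytes);
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // <<<blocks, threads>>> is the kernel-launch syntax CUDA adds to C.
    saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // 2.0 * 1.0 + 2.0 = 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Anyone who can read the C function can read the kernel, which is exactly why the language has such a low barrier to entry for students coming from C.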
There is a lot more to it than just “NVidia knocking on developers’ doors and saying, ‘here’s CUDA and money, let’s make a toast. Gentlemen, to Evil!’”
In the case of mathematics, science, and engineering using MATLAB and CUDA, if people are using it, then you are absolutely correct that it isn't a failure. What I originally said doesn't dispute that at all, since Nvidia isn't paying those people to use CUDA.
I think you missed my point: what is being discussed here is not OTHER uses for CUDA, it's the gaming uses for it. If you want to discuss OTHER uses for it, head over to the physics processing forum or the distributed computing forum =)
When he states that CUDA is a failure, I'm 100% sure he's talking about the PhysX/CUDA implementation in games being adopted because of NV "supporting" the devs.
My statement isn't restricted to any particular category, but CUDA can certainly be a success in the engineering fields while being a failure in the gaming fields.
He/she said: If Nvidia needs to pay developers to use CUDA, then CUDA is a failure, pure and simple.
Ok...so suppose Nvidia did pay the devs to use CUDA for JC2, or those devs would not have used it. Then we have CUDA being a failure. Even if nearly every other game in the universe uses CUDA without Nvidia paying them...it is a failure. That is pure logic, and it clearly makes no sense from an if/then point of view. I explained that before...but I have no point. Very frustrating.
No, now you are twisting my statement to apply to a single studio/game. I said "developers". If Nvidia has to pay to get some developers to use CUDA while other devs decide to pick it up, then CUDA isn't a failure.
So far, though, only PhysX titles and JC2 use CUDA in games (PhysX itself is implemented on CUDA). So far, GPU PhysX has been primarily (only?) used in TWIMTBP games, as has JC2. So far, CUDA in gaming is limited to games developed with Nvidia's "support".
My statement applies to the market *in general*. If nobody/very few people in a given area (gaming, for example) use the technology because they like the technology (and not because they were paid to), then that technology is a failure.
By no means is CUDA the programming language a failure just because Nvidia has to pay devs to use it. I would certainly like to move past that point.....
Are you sure you know logic? You can't just say "it certainly isn't that, let's move on".
But is CUDA a failure in games? Maybe...but that was never the intended purpose, was it? The purpose was and always has been to use the GPU for tasks other than graphics. If CUDA gets into games as well, then that is just an added bonus.
Games aren't just graphics. PhysX is a prime example of Nvidia's desire to use CUDA in games.
If OpenCL and other languages become more popular...that is great. But this whole argument could be done and over with if ATI would support CUDA. Then there would be a single good language that the devs of JC2 could have used to make both camps happy.
...
However, ATI would then be forced to follow Nvidia's lead. Look at Intel and AMD and SSE support. The reason AMD is always behind Intel in supporting the newest SSE versions is that Intel sits on each one until it has a product on the market. You'd be a fool to think Nvidia wouldn't do something similar, if not the exact same thing.
Not to mention Nvidia would be free to require things that its cards do better than ATI's or tweak the system to favor its architecture. That would ultimately hurt consumers as ATI would be forced to follow Nvidia's lead.