DirectCompute vs. CUDA vs. OpenCL

I don't think it's as cut and dried as most of you make it out to be when it comes to NVidia pushing CUDA.

NVidia has it easy selling CUDA to developers. The code is practically the same C code these developers already write. I'm a C# programmer (though the first language I learned was C++, then Java, then C for an operating systems course) and I got some routines up and running with CUDA.
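To give you an idea, here is a trivial sketch (not one of my actual routines, just the classic SAXPY example) of how close a CUDA kernel is to plain C:

Code:
// Trivial sketch -- y[i] = a*x[i] + y[i] over n elements.
// Each thread handles one element; the body is plain C.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // which element this thread owns
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Launched from ordinary C/C++ host code, e.g.:
//   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

Aside from the __global__ keyword and the launch syntax, that's C.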

DirectCompute is just too limited when you compare it against the range of NVidia GPUs that can run CUDA and the operating systems they can run it on.

OpenCL is what I would love to support, but the code is ugly. It's a pain just to look at, and so far I've only heard how much of a pain it is to implement. For me, the learning curve is steep (I've done simple polygon shapes, texturing, animations, and effects using OpenGL and C++ when I had free time in college).

When you consider what CUDA looks like in your C code, the learning curve, the time investment, the operating systems it works on, and the range of GPUs you can run CUDA code on (not to mention free money from NVidia for using it), it really doesn't seem like a "NVidia is evil and they pay money to evil devs" situation.
 
I don't think it's as cut and dried as most of you make it out to be when it comes to NVidia pushing CUDA.

NVidia has it easy selling CUDA to developers. The code is practically the same C code these developers already write. I'm a C# programmer (though the first language I learned was C++, then Java, then C for an operating systems course) and I got some routines up and running with CUDA.

DirectCompute is just too limited when you compare it against the range of NVidia GPUs that can run CUDA.

OpenCL is what I would love to support, but the code is ugly. It's a pain just to look at, and so far I've only heard how much of a pain it is to implement. For me, the learning curve is steep (I've done simple polygon shapes, texturing, animations, and effects using OpenGL and C++ when I had free time in college).

When you consider what CUDA looks like in your C code, the learning curve, the time investment, the operating systems it works on, and the range of GPUs you can run CUDA code on (not to mention free money from NVidia for using it), it really doesn't seem like a "NVidia is evil and they pay money to evil devs" situation.
This is what I meant when I said I had a feeling there was something else going on. If this is the case, it's hard to understand why [H] has so much hate for green and the JC devs in today's review...
 
DirectCompute is just too limited when you compare it against the range of NVidia GPUs that can run CUDA.

This statement is inaccurate; DirectCompute runs on all DX11 / DX10 hardware.

DirectCompute is part of the Microsoft DirectX collection of APIs and was initially released with the DirectX 11 API but runs on both DirectX 10 and DirectX 11 graphics processing units

don't blame me, easy wiki quote :p

I think the biggest factor right now is ease of development: CUDA has been around for a while, NV has put out some good SDKs, etc., so the time to implement these features is minimal.

However, with MS being behind DirectCompute and the fact that it comes bundled with DirectX, support for it will be huge soon enough, as developers of both games and applications can take advantage of a common API that will be supported and around for a while.
 
Yes, there are DirectCompute 4 and DirectCompute 5: 4 is the DX10 version and runs on DX10 hardware, or you can use 5, which runs on DX11 hardware. So pretty much any video card from the last few years can run DirectCompute.
 
Yes, there are DirectCompute 4 and DirectCompute 5: 4 is the DX10 version and runs on DX10 hardware, or you can use 5, which runs on DX11 hardware. So pretty much any video card from the last few years can run DirectCompute.

What I'm wondering is how this will translate into a game. I assume a lot of the base will carry forward from version 4 to, say, version 10; maybe most effects can be done with that, or maybe each iteration will bring more performance to the table (like DX, for the most part).

btw here you go :p NV using DirectCompute to push CUDA
http://www.nvidia.com/object/cuda_directcompute.html
 
This statement is inaccurate; DirectCompute runs on all DX11 / DX10 hardware.

Snip



Yes, there are DirectCompute 4 and DirectCompute 5: 4 is the DX10 version and runs on DX10 hardware, or you can use 5, which runs on DX11 hardware. So pretty much any video card from the last few years can run DirectCompute.

Key word there is video card. I guess I should have mentioned operating system, in particular Windows XP, which is where all the computational mathematics runs (MatLab people; a friend is a math master's student going into a doctorate), and the people who helped me with certain scenarios also have their workstations running XP. It's like engineering back in the college days, where everyone had a TI-89, and the guy who got the HP that could do 100 things more in full color was stuck because everyone else had a TI-89 and could only help him if he had one too. On top of that, there are clients I need to target the XP platform for. (Yes I know, 7 is awesome, and if I had it my way the very mention of the name XP or Windows 2000 would get you feathered, humiliated, and fired on the spot.)

I have yet to work at or with anyone whose main dev platform is not Windows XP. So my statement should have included operating system.

And let's not confuse internet citations with practice. Our AMD fan could not get DirectCompute to function properly using a 4850 and DirectX 10. I helped him troubleshoot. The best we could come up with was that it is switched off somewhere and we couldn't change it; it only functions properly when a DX11 card is installed, and that is where we left it. (Back to the TI-89 example.)

Edit: On a side note, the AMD fan was pretty bummed, since he wanted to bring life back into his 4850 (he was running two 4850s in CFX, replaced with a 5850). Maybe someday :)
 
Leaving out? Because ATI cards can't play the game? Or wait, they can't see the ending scene? No. It's none of those things. IF you have an Nvidia card, you get some extra graphical features.

At no cost to the developer they had the option to ADD features for a subset of users, and you are saying they shouldn't have done it because it's unfair to people who bought ATI's graphics cards. How is it fair to Nvidia's user base to not add the features when it is free for the developer?

Agree 100%. The main effect of all this is that someone buying an Nvidia card probably pays more than they otherwise would since the money to subsidize games has to come from somewhere. Nothing is stopping ATI from subsidizing games and charging a bit more for its cards. I think this is sour grapes from ATI backers who want to have their cake and eat it too...they want a dirt cheap gfx card but they also want the benefits that go along with a more expensive card.
 
If this is true (and I say that because I've never read it, not because I doubt you :D ), then it's ATi's own fault that their customers are missing out on these effects, not Nvidia's, and [H]'s frustration is completely misplaced.

You could say that; it's a little misguided, however.

If a company builds (or buys) a proprietary technology and tries to make it mainstream, it makes sense both for the competing technology (and, in the long run, the consumer) if non-proprietary standards win instead.

It should be obvious why: as soon as one standard corners the market, it puts one company in power over another. They can then charge whatever they like and make life hard for the competition, which just hurts competition, drives down overall productivity, and removes choice.

Open standards are a godsend; closed standards like PhysX are nasty. No gamer should buy into a closed standard like PhysX that is owned by one of the main competitors.
 
Open standards are a godsend; closed standards like PhysX are nasty. No gamer should buy into a closed standard like PhysX that is owned by one of the main competitors.

Really? So what games do you play that benefit from an open GPGPU API?
 
So then you are saying this about GPGPU APIs in general and not specifically about CUDA? There are a lot of GPGPU APIs out there, and NONE of them are in great use thus far.

STOP saying CUDA when you can only mean ANY GPGPU API. You are impossible to talk to because you miss the point by miles and miles.

You haven't really made a point.

I meant CUDA, which is why I said CUDA. If you'll notice, my statement was an if/then. I didn't say CUDA *is* a failure, I said *IF* Nvidia has to pay developers to use it, *THEN* it's a failure. If the tech can't stand on its own merits, it's a failure.

I have not talked about whether or not OpenCL and DirectCompute are failures, nor whether or not GPGPU APIs in general are failures.

Yes it does!!!!

I am talking about GPGPU programming. There are many APIs, not just CUDA, and it was not the first. So far CUDA is the most successful. End of story, fact.

If NO API is successful because developers are not adopting any of them, then is the failure that of any one specific API? Or is it something else? It must be that something else, and you are saying that is a failure, and then making the illogical jump to say CUDA is therefore a failure. Those are your words. Do not try to dice it into something else....:rolleyes:

No, now you are trying to force my point to be broader than what I said. Those also aren't my words at all. Here, I'll quote what I said since you seem to have grossly misread it:

If Nvidia needs to pay developers to use CUDA, then CUDA is a failure, pure and simple.
 
You haven't really made a point.

I meant CUDA, which is why I said CUDA. If you'll notice, my statement was an if/then. I didn't say CUDA *is* a failure, I said *IF* Nvidia has to pay developers to use it, *THEN* it's a failure. If the tech can't stand on its own merits, it's a failure.

I have not talked about whether or not OpenCL and DirectCompute are failures, nor whether or not GPGPU APIs in general are failures.



No, now you are trying to force my point to be broader than what I said. Those also aren't my words at all. Here, I'll quote what I said since you seem to have grossly misread it:

Fine... you want to play Mr. Logic? We can.

Give me your specific definition of the word failure. Then prove your unsupported claim (which, for you non-math folks, means the if/then, which I will refer to as an implication from here on).

Just to be sure you understand what is needed:
To prove an implication, you can either assume the antecedent is true and show that it leads to the conclusion, which will need to rest on your definition of failure. Alternatively, you can demonstrate the contrapositive, which means you assume CUDA is a success (or whatever the negation of failure is for you) and show that this implies Nvidia never paid (and never will pay) devs.
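In symbols, just restating the above, with P = "Nvidia has to pay devs to use CUDA" and Q = "CUDA is a failure":

$P \Rightarrow Q \;\equiv\; \lnot Q \Rightarrow \lnot P$

So you either assume P and derive Q, or assume $\lnot Q$ and derive $\lnot P$.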

You may need qualifiers depending on the specific meaning of the terms you want to use.
I am fully versed in predicate calculus and formal Zermelo–Fraenkel set theory, so feel free to get as technical as you like.

The burden of proof is on you, the person making the claim, so... go ahead and demonstrate your assertion.
 
Fine... you want to play Mr. Logic? We can.

Give me your specific definition of the word failure. Then prove your unsupported claim (which, for you non-math folks, means the if/then, which I will refer to as an implication from here on).

Just to be sure you understand what is needed:
To prove an implication, you can either assume the antecedent is true and show that it leads to the conclusion, which will need to rest on your definition of failure. Alternatively, you can demonstrate the contrapositive, which means you assume CUDA is a success (or whatever the negation of failure is for you) and show that this implies Nvidia never paid (and never will pay) devs.

You may need qualifiers depending on the specific meaning of the terms you want to use.
I am fully versed in predicate calculus and formal Zermelo–Fraenkel set theory, so feel free to get as technical as you like.

The burden of proof is on you, the person making the claim, so... go ahead and demonstrate your assertion.

I also know predicate calculus. I've taken formal logic courses. Hell, still have my logic textbook.

But no, I'm not going to bust out deductive logic and formalize what I said. Why? Because what you seem to continually fail to grasp is that all I meant is what I explicitly stated. If you would like to explain why Nvidia needing to pay developers to use its technology is not a failure of the technology, I would be happy to listen. But if you are going to continue to try and dance around the point I made and attempt (fruitlessly, I might add) to pick apart the single sentence statement, then I really just don't give a shit. It is not my job to teach you reading comprehension.

And what I mean by failure is the lack of success (which is also how dictionaries define it, crazily enough). As in, it was not successful in penetrating the market. It was not successful in being used. It was not successful in getting developer attention.

EDIT: And since you seem to get personally offended when people don't like CUDA (you should really chill and not get so emotional about something that doesn't give a shit about you, by the way), let me elaborate by stating that CUDA can be good while still being a failure. Plenty of interesting and good technologies have been failures, be it due to licensing issues, lack of supported hardware, failure to really distinguish themselves from the competition, etc. Likewise, plenty of successful tech absolutely sucks (*cough, Flash, cough*).
 
I also know predicate calculus. I've taken formal logic courses. Hell, still have my logic textbook.

I was assuming you know predicate calculus; I was letting you know that I expect you to be accurate and correct by telling you that I am up to par.

In any case, you made the claim, not me. I explained why it was wrong, and you said I have no point, which is rude since I do have a point. OK... fine... I will not repeat myself. It is your claim, so you have the burden of proof. You have NOT proved it.

http://en.wikipedia.org/wiki/Philosophic_burden_of_proof#Holder_of_the_burden

You have not demonstrated your claim by asking me to justify why I think it is wrong. If everything I say does not demonstrate the falseness of your claim (which I think I have already done), then you are not right. You still have to establish your claim. It does not stand on its own... end of story!!!

I am getting emotional because I feel like I am wasting my time on someone who seems intelligent, but for some reason is being retarded with respect to understanding why he/she (I do not presume to know your gender) is wrong.

If you ask me to prove something...or to explain why I think you are wrong...or anything other than follow logic which dictates that you support your claim...then I am done. I cannot have a rational conversation with someone who is using fallacious reasoning and refuses to correct it.
 
I think you all need to step back here and understand one thing. CUDA is not a failure.

(I'll try not to repeat myself.) If you read my previous post on why CUDA is an easy sell, then you're halfway there. GPGPU computing is certainly nothing new; however, the fundamental difference is that before, programmers, students, and researchers were forced to shoehorn general computations into the graphics pipeline. That made what we could do on the GPU (as far as general computations go) even more limited than it is now (besides making you want to stab yourself trying to code for it).

CUDA is not a failure.

C and CUDA are practically indistinguishable. Just about every student in mathematics, science, engineering, and computer science has to learn C. CUDA works with MatLab, which is huge in the fields I just mentioned; MatLab support assured CUDA's wide acceptance in the college/university/research world. That is where companies hire their researchers and where developers scout their new talent.
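And the host side is the same story. Here's a rough sketch of the boilerplate (assuming the saxpy kernel from my earlier post; the array names are just placeholders):

Code:
#include <cuda_runtime.h>

// Rough sketch of the host side: ordinary C plus a handful of cuda* calls.
// Assumes the saxpy kernel from my earlier post; h_x and h_y are plain
// arrays of n floats in normal system memory.
void run_saxpy(int n, float a, const float *h_x, float *h_y)
{
    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, n * sizeof(float));                     // allocate GPU memory
    cudaMalloc((void **)&d_y, n * sizeof(float));
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);  // copy inputs over
    cudaMemcpy(d_y, h_y, n * sizeof(float), cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);                  // launch looks like a C call

    cudaMemcpy(h_y, d_y, n * sizeof(float), cudaMemcpyDeviceToHost);  // copy result back
    cudaFree(d_x);
    cudaFree(d_y);
}

Everything outside those cuda* calls is just C.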

There is a lot more to this than the "NVidia knocking on developers' doors and saying, 'here's CUDA and some money, let's make a toast. Gentlemen, to evil!'" picture people make it out to be.
 
In terms of gaming, CUDA is a failure; I think that's what he is saying. That is obviously what we are discussing here; we're not discussing people using CUDA for math apps.
 
In terms of gaming, CUDA is a failure; I think that's what he is saying. That is obviously what we are discussing here; we're not discussing people using CUDA for math apps.

Tunnel vision doesn't invalidate my point, nor the fact that you completely missed the boat. CUDA is a big deal in computer science and computer engineering. It goes far beyond "math apps," though those math apps include the effects in Just Cause 2 that started this whole conversation thread. Where do you think your favorite game dev scouts for new talent?

Bah, done with this thread.
 
Tunnel vision doesn't invalidate my point, nor the fact that you completely missed the boat. CUDA is a big deal in computer science and computer engineering. It goes far beyond "math apps," though those math apps include the effects in Just Cause 2 that started this whole conversation thread. Where do you think your favorite game dev scouts for new talent?

I think you missed my point. What is being discussed here is not OTHER uses for CUDA, it's the gaming uses for it. If you want to discuss OTHER uses for it, head over to the physics processing forum or the distributed computing forum =)

When he states that CUDA is a failure, I'm 100% sure he's talking about the PhysX/CUDA implementation in games only being there because NV is "supporting" the devs.
 
In terms of gaming, CUDA is a failure; I think that's what he is saying. That is obviously what we are discussing here; we're not discussing people using CUDA for math apps.

He/she said: If Nvidia needs to pay developers to use CUDA, then CUDA is a failure, pure and simple.

OK... so suppose Nvidia did pay the devs to use CUDA for JC2, and those devs would not have used it otherwise. Then we have CUDA being a failure. Even if nearly every other game in the universe uses CUDA without Nvidia paying them... it is a failure. That is pure logic, and so clearly it makes no sense from an if/then point of view. I explained that before... but apparently I have no point. Very frustrating.

But nearly every possible interpretation makes it absurd. Also, we need to stop calling it an API... because it is a language, not an API. EDIT: this, I suppose, could be true or false depending on what we are calling a language and what we are calling an API. Upon reflection, I think I should have stuck with API...

By no means is CUDA the programming language a failure if Nvidia has to pay devs to use it. I would certainly like to move past that point.....


But is CUDA a failure in games? Maybe... but that was never the intended purpose, was it? The purpose was, and always has been, using the GPU for tasks other than graphics. If CUDA gets into games as well, then that is just an added bonus.

If OpenCL and other languages become more popular... that is great. But this whole argument could be done and over with if ATI would support CUDA. Then there would be a single good language that the devs of JC2 could have used to make both camps happy.

In fact... that would make a whole lot of people happy, and it would only hurt ATI's pride a little. I think it would help the bottom line, so even AMD shareholders should want this. After all, if AMD does have faster GPUs, then all of a sudden there is a whole crap-ton of CUDA code out there that could now run on ATI cards, and I would happily buy an ATI card to run CUDA.

Except that it would attack Nvidia directly in an area where Nvidia has spent a lot of time and money. That is the only reasonable case for AMD being dicks about CUDA.
 
He/she said: If Nvidia needs to pay developers to use CUDA, then CUDA is a failure, pure and simple.

OK... so suppose Nvidia did pay the devs to use CUDA for JC2, and those devs would not have used it otherwise. Then we have CUDA being a failure. Even if nearly every other game in the universe uses CUDA without Nvidia paying them... it is a failure. That is pure logic, and so clearly it makes no sense from an if/then point of view. I explained that before... but apparently I have no point. Very frustrating.

But nearly every possible interpretation makes it absurd. Also, we need to stop calling it an API... because it is a language, not an API.

By no means is CUDA the programming language a failure if Nvidia has to pay devs to use it. I would certainly like to move past that point.....


But is CUDA a failure in games? Maybe... but that was never the intended purpose, was it? The purpose was, and always has been, using the GPU for tasks other than graphics. If CUDA gets into games as well, then that is just an added bonus.

If OpenCL and other languages become more popular... that is great. But this whole argument could be done and over with if ATI would support CUDA. Then there would be a single good language that the devs of JC2 could have used to make both camps happy.

In fact... that would make a whole lot of people happy, and it would only hurt ATI's pride a little. I think it would help the bottom line, so even AMD shareholders should want this. After all, if AMD does have faster GPUs, then all of a sudden there is a whole crap-ton of CUDA code out there that could now run on ATI cards, and I would happily buy an ATI card to run CUDA.

Except that it would attack Nvidia directly in an area where Nvidia has spent a lot of time and money. That is the only reasonable case for AMD being dicks about CUDA.

You guys really like to nitpick, don't you? If you feel like taking his comment out of context, go ahead; I'm not going to sit here and play that game with you so you can justify your point of view.

You do know what PhysX is, right? PhysX is NV's "gaming" use for CUDA.
CUDA/PhysX in gaming is not about the graphics side of things; CUDA/PhysX in gaming is the physics simulation that makes games more realistic and more interactive.

And to your last comment: we have no clue who is being a dick about what, and I doubt either company is going to come out and start pointing fingers.
 
You guys really like to nitpick, don't you? If you feel like taking his comment out of context, go ahead; I'm not going to sit here and play that game with you so you can justify your point of view.

Fair enough... but his (do you know him?) comment is wrong whether I nitpick or not.

You do know what PhysX is, right? PhysX is NV's "gaming" use for CUDA.
CUDA/PhysX in gaming is not about the graphics side of things; CUDA/PhysX in gaming is the physics simulation that makes games more realistic and more interactive.

And to your last comment: we have no clue who is being a dick about what, and I doubt either company is going to come out and start pointing fingers.

OK... but I did not think PhysX was just CUDA code. I have never seen PhysX code, so I have no idea. But JC2 did not use PhysX as far as I know. What we are talking about is using CUDA... if PhysX uses it, fine.

Maybe there is a general misunderstanding. I am all for open standards that everyone can use equally, so I would say I do not like PhysX for that reason. It has been my understanding that CUDA is very different from PhysX in this respect, but I could be wrong.
 
Windows
Direct 3D
Direct Compute
.net
Java
OSX
Cocoa
Carbon
Flash
CUDA

Oh my, all closed source! They are all evil!!! Let's all move to the dying platform of Linux (as a user OS, not server-side) / OpenGL / OpenCL! Oh wait, there aren't any games there...
 
Fair enough... but his (do you know him?) comment is wrong whether I nitpick or not.



OK... but I did not think PhysX was just CUDA code. I have never seen PhysX code, so I have no idea. But JC2 did not use PhysX as far as I know. What we are talking about is using CUDA... if PhysX uses it, fine.

Maybe there is a general misunderstanding. I am all for open standards that everyone can use equally, so I would say I do not like PhysX for that reason. It has been my understanding that CUDA is very different from PhysX in this respect, but I could be wrong.

PhysX is CUDA

"In February 2008, Nvidia bought Ageia and the PhysX engine and has begun integrating it into its CUDA framework"- WIKIPEDIA
 
PhysX is CUDA

"In February 2008, Nvidia bought Ageia and the PhysX engine and has begun integrating it into its CUDA framework"- WIKIPEDIA

Is PhysX code a subset of CUDA? I didn't think so... the wiki does not specifically say. I don't want to DL a PhysX SDK just to check... "integrating PhysX into CUDA" could mean a lot of things.

Sometime I will look at it and its license. I still think PhysX != CUDA, both in terms of code and licensing.
 
Fair enough... but his (do you know him?) comment is wrong whether I nitpick or not.



OK... but I did not think PhysX was just CUDA code. I have never seen PhysX code, so I have no idea. But JC2 did not use PhysX as far as I know. What we are talking about is using CUDA... if PhysX uses it, fine.

Maybe there is a general misunderstanding. I am all for open standards that everyone can use equally, so I would say I do not like PhysX for that reason. It has been my understanding that CUDA is very different from PhysX in this respect, but I could be wrong.

PhysX utilizes the "CUDA" architecture (hardware side) instead of using a PPU. The SDK/API itself is not the same, though, but PhysX is done through CUDA. (Not 100% sure on this, but I think what currently happens is that PhysX has its own API, while both run on "CUDA cores" on the GPU.)

http://www.eurogamer.net/articles/nvidia-cuda-and-physx-article
 
I feel like I'm repeating myself.

CUDA, outside of gaming, is a huge success. There's a reason nVidia has the entire professional world locked tight within its grasp. Nobody is really arguing against that. I believe this topic started with, and has continued to be about, the implementation of PhysX in games, not the idea of GPGPU in general.

However, in gaming, the types of effects it produces could easily be done with DirectCompute if the developers wanted to give the same experience to both parties. It's either the developers being absolutely lazy, or nVidia paying them off. There have been many examples of interactive water in games...

http://www.youtube.com/watch?v=_QDf1WcKUeI&feature=related
http://www.youtube.com/watch?v=2nywdxgJbHQ
http://www.youtube.com/watch?v=JAfjN2e_ZmQ&feature=related

And check out this comparison video for JC2. Doesn't look all too impressive...
http://www.youtube.com/watch?v=A29uSkIo04s

And I'll state again that if it weren't for eye candy like DoF effects that practically do nothing and a pre-constructed water simulation that could easily have been done another way, this game would be a TWIMTBP title that played better on lower-priced hardware from ATi.

Meaning, if they didn't have those effects and didn't add them at the last minute... TWIMTBP would be pretty useless in this case. I'm totally confident, given how nVidia has acted this last year, that the developers of JC2 were paid off in order to put pretty elementary effects on nVidia-specific hardware.

And this is wrong, given a variety of examples. I've bought nVidia in the past, and had a GTS 250 before grabbing a 5850. But I find it really hard to see how this isn't blatantly anti-competitive.
 
Windows
Direct 3D
Direct Compute
.net
Java
OSX
Cocoa
Carbon
Flash
CUDA

The difference being that most of the products you listed weren't made by companies with the intent that they only run on hardware the company owns and controls. The exception is probably OSX, which seems more like an example of closed proprietary fail than anything else. In fact, many of the technologies listed would never have become successful if they did not work on a wide range of hardware. Can you imagine if something like Java had come out and only worked on a certain brand of processor? We wouldn't still be talking about it today.
 
Is PhysX code a subset of CUDA? I didn't think so... the wiki does not specifically say. I don't want to DL a PhysX SDK just to check... "integrating PhysX into CUDA" could mean a lot of things.

Sometime I will look at it and its license. I still think PhysX != CUDA, both in terms of code and licensing.

PhysX is a bunch of libraries that were ported to CUDA after NVIDIA acquired Ageia.

I doubt that CUDA will be of any significance to the mainstream user. Commercial programs often include support for OpenGL for GPGPU, and games avoid closed APIs like GPU PhysX unless there is special compensation, either in the form of money or the valuable co-marketing deals that follow it.

Unlike Microsoft DirectX, which is a standard accepted by different hardware vendors who can support it on equal terms, CUDA and PhysX are APIs where Nvidia has its hands deep in the cookie jar. I doubt we'll ever see wide support for them, especially now that more and more alternatives that both Nvidia and ATI support are popping up on the market.
 
And this is wrong, given a variety of examples. I've bought nVidia in the past, and had a GTS 250 before grabbing a 5850. But I find it really hard to see how this isn't blatantly anti-competitive.

The "blatantly anti-competitive" part is where they've made the decision for you that you can't use your GTS250 as a PhysX card because you are running an ATI card also; for no good technical reason.
 
In fact... that would make a whole lot of people happy, and it would only hurt ATI's pride a little. I think it would help the bottom line, so even AMD shareholders should want this. After all, if AMD does have faster GPUs, then all of a sudden there is a whole crap-ton of CUDA code out there that could now run on ATI cards, and I would happily buy an ATI card to run CUDA.

Except that it would attack Nvidia directly in an area where Nvidia has spent a lot of time and money. That is the only reasonable case for AMD being dicks about CUDA.

I feel like you are assuming a lot here. Can you back up how AMD is the one being dicks about it? Can you prove that Nvidia freely offered to let AMD use CUDA on their cards with no hidden agenda? Can you show that, even if Nvidia did offer it freely (a big if), Nvidia couldn't program their CUDA functions to run better on Nvidia cards than on AMD cards, thus forcing ATI into a disadvantage? (OK, in all fairness you can't show this, but I put it in as a consideration.)

I really don't know enough about the reasoning behind these decisions, but you seem to, so I'd like to see where you got your information, as that could change my view of the situation. Right now I view it as Nvidia having sole control over these features, and if AMD signs up to run CUDA, they are very much at the mercy of Nvidia and how they implement those features. It really makes no sense to me that AMD would give that kind of control to Nvidia.
 
I'm pretty sure NV offered PhysX to ATI/AMD, not CUDA.

There was never any need to offer something to someone who is free to take it anyway. CUDA is free.

That is why I am getting confused here. Unless Nvidia changed the license, this should still be the case. But if PhysX is a subset of CUDA, then offering CUDA for free is the same as offering PhysX for free. I can't see a reason for that... so I feel like I have no idea what is going on at this point.

I guess I need to DL an SDK and do some reading....
 
There was never any need to offer something to someone who is free to take it anyway. CUDA is free.

That is why I am getting confused here. Unless Nvidia changed the license, this should still be the case. But if PhysX is a subset of CUDA, then offering CUDA for free is the same as offering PhysX for free. I can't see a reason for that... so I feel like I have no idea what is going on at this point.

I guess I need to DL an SDK and do some reading....

Writing applications that use CUDA and having CUDA run on your own hardware are two very different things (in licensing, too). While CUDA is free for developing applications (and it should be; NV wants people to use their hardware), I don't know if NV has given anyone (AMD included) a license to run CUDA on their hardware.

Anyway, in that respect PhysX is free :p Make more sense now ^^?
http://developer.nvidia.com/object/physx_downloads.html
 
The "blatantly anti-competitive" part is where they've made the decision for you that you can't use your GTS250 as a PhysX card because you are running an ATI card also; for no good technical reason.

When there is an issue with PhysX or the 3D rendering while both are enabled in the system, who is going to support it? Is ATI going to say "remove the NV card," or will NV tell you to call ATI?
 
I feel like you are assuming a lot here. Can you back up how AMD is the one being dicks about it? Can you prove that Nvidia freely offered to let AMD use CUDA on their cards with no hidden agenda? Can you show that, even if Nvidia did offer it freely (a big if), Nvidia couldn't program their CUDA functions to run better on Nvidia cards than on AMD cards, thus forcing ATI into a disadvantage? (OK, in all fairness you can't show this, but I put it in as a consideration.)

I really don't know enough about the reasoning behind these decisions, but you seem to, so I'd like to see where you got your information, as that could change my view of the situation. Right now I view it as Nvidia having sole control over these features, and if AMD signs up to run CUDA, they are very much at the mercy of Nvidia and how they implement those features. It really makes no sense to me that AMD would give that kind of control to Nvidia.

I can prove that CUDA is free. Take a look at the license... there is no restriction on implementing CUDA. If ATI wanted to modify it in any way, that is something different, so in that respect CUDA is not free. But it is open for public use according to the license.

As far as the programming is concerned... it is a compiled language. C code runs differently depending on which compiler generates the assembly code, as well as on the assembler and linker. But there is nothing to keep ATI from making killer optimizations so that CUDA would run as fast as or faster than it does with nvcc, which is Nvidia's compiler.

In other words, it would be up to ATI to make it run better... Nvidia would have no control over that aspect of it. Ordering of function calls, etc. does not matter too much. There might be specific types that CUDA uses which Nvidia supports natively; that could pose some problems for ATI, but it could be worked around for sure.

Right now I view it as Nvidia having sole control over these features, and if AMD signs up to run CUDA, they are very much at the mercy of Nvidia and how they implement those features. It really makes no sense to me that AMD would give that kind of control to Nvidia.

That is correct as I see it, but I do not agree with your conclusion. AMD has no control over how C++ is specified, but it can still use it. Same for Intel and AMD... what Intel and AMD do have control over is their respective compilers and optimizations. Same thing here.
 
When there is an issue with PhysX or the 3D rendering while both are enabled in the system, who is going to support it? Is ATI going to say "remove the NV card," or will NV tell you to call ATI?

There's a difference between supporting it and outright removing it. Nobody running ATi+nVidia is going to assume that they won't have problems.

I hate to say this, but you're grasping at straws. nVidia loves to offload its extra features onto a PPU; there's no reason it wouldn't work.
 
Writing applications that use CUDA and having CUDA run on your own hardware are two very different things (in licensing, too). While CUDA is free for developing applications (and it should be; NV wants people to use their hardware), I don't know if NV has given anyone (AMD included) a license to run CUDA on their hardware.

Anyway, in that respect PhysX is free :p Make more sense now ^^?
http://developer.nvidia.com/object/physx_downloads.html

They have never restricted it in the first place. It is free for Intel, AMD, or anyone else to implement. An intrepid programmer could make a CUDA translator that generates OpenCL code, then run that through the OpenCL compiler and bam... CUDA is running on ATI cards. Nothing about that is a breach of licensing as far as I know.

EDIT: So...not really making more sense.
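To illustrate what I mean by a translator, here is a hand-waved sketch of the kind of mechanical rewriting it would do for a simple kernel (the rough OpenCL C equivalents are in the comments; I'm not claiming every CUDA feature maps this cleanly):

Code:
// Sketch of what a CUDA-to-OpenCL source translator would rewrite for a
// simple kernel. Comments show the rough OpenCL C equivalents.
__global__ void scale(float *data, float k, int n)   // -> __kernel void scale(__global float *data, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;    // -> int i = get_global_id(0);
    if (i < n)
        data[i] *= k;
}
// Host side: scale<<<grid, block>>>(data, k, n);     // -> clSetKernelArg(...) + clEnqueueNDRangeKernel(...)
// The hard parts would be things like textures, streams, and vendor-specific intrinsics.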
 
Writing applications that use CUDA and having CUDA run on your own hardware are two very different things (in licensing, too). While CUDA is free for developing applications (and it should be; NV wants people to use their hardware), I don't know if NV has given anyone (AMD included) a license to run CUDA on their hardware.

Anyway, in that respect PhysX is free :p Make more sense now ^^?
http://developer.nvidia.com/object/physx_downloads.html

PhysX isn't free. It's ad-supported, and in addition to terms like having to apply in order to be allowed to use it commercially, there are many advertisement terms you need to follow:

6. Attribution Requirements and Trademark License. You must provide attribution
to NVIDIA, PhysX® by NVIDIA, and the NVIDIA PhysX SDK.

A: You will include a reference to the PhysX SDK and NVIDIA in any press releases
for such Game that relate to NVIDIA, or in-game physics, and will identify
NVIDIA as the provider of the "Physics Engine" (or such other term or phrase as
indicated by NVIDIA from time to time).
B: For Games and Demos that incorporate the PhysX SDK or portions thereof, the
NVIDIA and PhysX by NVIDIA logos must appear:
a. on the back cover of the instruction manual or similar placement in an
electronic file for the purpose of acknowledgement/copyright/trademark
notice;
b. on external packaging;
c. during opening marquee or credits with inclusion of “PhysX by NVIDIA”;
d. must appear on title marketing feature list with a specific call-out of
PhysX Technology
e. on the credit screen; and
f. in the “About” or “Info” box menu items (or equivalent) of all Physics
 
They have never restricted it in the first place. It is free for Intel, AMD, or anyone else to implement. An intrepid programmer could make a CUDA translator that generates OpenCL code, then run that through the OpenCL compiler and bam... CUDA is running on ATI cards. Nothing about that is a breach of licensing as far as I know.

That's an incredibly optimistic view of the situation, given that nVidia is using PhysX as a marketing technique against ATi at this moment.
 