DirectCompute vs. CUDA vs. OpenCL

nVidia isn't helping developers out of the goodness of its heart; to them it's just an advertisement for their video cards. Once development gets treated like a giant advertising campaign up for grabs, who really wins?

Publishers. ;)
 
Ok, so let's count ATI out of that; what about Intel and Microsoft? Intel owns Havok, the most popular physics engine out there. They are in a very good position to push technology, especially whenever Havok decides to support OpenCL. And Microsoft? DirectCompute is part of DirectX. They're in the best position to do something. But no, instead all of their energy is focused on the 360.

So, Microsoft/Intel should develop the API then pay developers to use it while locking the competition until they pay their own licensing fees? Reverse capitalism?
 
To try and say CUDA and OpenCL are the same right now is a bit of a lie. It's not just games that mostly use CUDA, it's everything because CUDA has a whole set of libraries, tools and a stable interface.

Both OpenCL and DirectCompute are barely out of beta, and don't have the libraries, tools or the stable interface required for easy development. I'm sure that will change, but right now, to have a go at a dev for using CUDA shows a lack of understanding of what's involved.

Instead I say congratulate them for at least adding some GPU compute extras; this experience will serve them well in the future, and perhaps when OpenCL/DirectCompute work better they will pick one of them instead.
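
For anyone who hasn't touched GPU compute, here's a rough, purely illustrative CUDA sketch (my own toy numbers, nothing from JC2 or any shipping engine) of what a kernel plus the host-side ceremony looks like. The interface itself is small and stable; the value is in everything around it (tools, libraries, debugging), which is exactly what the younger APIs still lack.

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Trivial particle-advection kernel: one thread per particle, one Euler step.
__global__ void advect(float* x, const float* vx, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] += vx[i] * dt;
}

int main()
{
    const int n = 1 << 20;                       // ~1M particles (made up)
    std::vector<float> x(n, 0.0f), vx(n, 1.0f);

    float *d_x, *d_vx;
    cudaMalloc(&d_x,  n * sizeof(float));
    cudaMalloc(&d_vx, n * sizeof(float));
    cudaMemcpy(d_x,  x.data(),  n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_vx, vx.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    advect<<<(n + 255) / 256, 256>>>(d_x, d_vx, 0.016f, n);   // one 60 Hz step

    cudaMemcpy(x.data(), d_x, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("particle 0 moved to %f\n", x[0]);

    cudaFree(d_x);
    cudaFree(d_vx);
    return 0;
}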
 
So, Microsoft/Intel should develop the API then pay developers to use it while locking the competition until they pay their own licensing fees? Reverse capitalism?

I think you missed the point entirely. I'm not talking about closed standards. Both DirectCompute and OpenCL are open standards. The APIs for them (DX and a possible future version of Havok specifically) are not. You think Nvidia doesn't charge developers for the CUDA API? I didn't say anything about locking the competition into anything, I want the open standards to be pushed more.
 
If Nvidia needs to pay developers to use CUDA, then CUDA is a failure, pure and simple.

So, let me get this right.....

If a technology's worth is not immediately apparent to consumers (in this case developers), then it is a failure? Is that not what you are saying? I guess the only other option I see with your statement is that you are saying this only with respect to CUDA. But let us explore that option....

Do you know of some other GPGPU technology that is successful and being implemented in games? Is there really any large competition for this tech? Are there a lot of games that are using the GPU to add things like physics and other game enhancements that do not use CUDA? If there are games that use something other than CUDA, then it is CUDA that is failing and I would agree with you completely. If not (and I do not know of other games that use something other than CUDA), then you are essentially saying that the tech is a failure since developers won't use this tech on their own.

So unless you have some counter examples of other GPU compute APIs that are in common use, we are back at the above definition being what you must have meant. That is, unless there is some other angle I am missing (feel free to fill me in).

But just so you know. That is a stupid and worthless definition of "failure" since a lot of what many people consider to be very successful today was initially a "failure" by that myopic definition. The personal computer was, for example, a failure by that definition.
 
So, let me get this right.....

If a technology's worth is not immediately apparent to consumers, then it is a failure? Is that not what you are saying?

I don't think he mentioned consumers. I believe he was talking about developers.
 
I think you missed the point entirely. I'm not talking about closed standards. Both DirectCompute and OpenCL are open standards. The APIs for them (DX and a possible future version of Havok specifically) are not. You think Nvidia doesn't charge developers for the CUDA API? I didn't say anything about locking the competition into anything, I want the open standards to be pushed more.


Derangle, as far as I know, using CUDA is free. And I do not just mean for developers; ATI or Intel or anyone or any company that wants to use CUDA is free to do so.

At one time Nvidia wanted to charge ATI a penny per GPU so ATI could use PhysX. After being rejected, Nvidia was willing to go into a contract with ATI to let it be completely free. But Nvidia has never asked for money for CUDA from anyone. There is NOTHING in the license for CUDA which prevents its use.

As a matter of fact, if someone felt so inclined, they could write a CUDA translator using the ATI SDK and in fact compile CUDA code to run on ATI. There is NO law preventing that from happening, and as far as I know Nvidia even seemed, at one time, to be encouraging it, short of doing it themselves.

Note: that would not mean that CUDA code which is compiled to run on Nvidia would run on ATI (so that would not fix JC2 for ATI GPUs). That means the CUDA code itself (the program) could easily be compiled to run on ATI hardware. It would also mean that JC2 could easily be recompiled to have the CUDA effects run on ATI cards as well.
 
Derangle, as far as I know, using CUDA is free. And I do not just mean for developers; ATI or Intel or anyone or any company that wants to use CUDA is free to do so.

At one time Nvidia wanted to charge ATI a penny per GPU so ATI could use PhysX. After being rejected, Nvidia was willing to go into a contract with ATI to let it be completely free. But Nvidia has never asked for money for CUDA from anyone. There is NOTHING in the license for CUDA which prevents its use.

As a matter of fact, if someone felt so inclined, they could write a CUDA translator using the ATI SDK and in fact compile CUDA code to run on ATI. There is NO law preventing that from happening, and as far as I know Nvidia even seemed, at one time, to be encouraging it, short of doing it themselves.

Note: that would not mean that CUDA code which is compiled to run on Nvidia would run on ATI (so that would not fix JC2 for ATI GPUs). That means the CUDA code itself (the program) could easily be compiled to run on ATI hardware. It would also mean that JC2 could easily be recompiled to have the CUDA effects run on ATI cards as well.

If this is true, my entire viewpoint will change.

However... I've never heard of anything like that.

Also, why exactly would nVidia disable a secondary PhysX card in ATi+nVidia systems if anything you said was true?
 
I edited my post to point out the fact that developers are consumers since I figured that point would be missed.

I guess you're right, the word "failure" is a bit strong. I think the general point, however, is that if you have software that is better than the competition's, you shouldn't have to pay people to use it.

There are flaws in this reasoning that could and have been pointed out, but the idea behind it is valid.
 
To try and say CUDA and OpenCL are the same right now is a bit of a lie. It's not just games that mostly use CUDA, it's everything because CUDA has a whole set of libraries, tools and a stable interface.

Both OpenCL and DirectCompute are barely out of beta, and don't have the libraries, tools or the stable interface required for easy development. I'm sure that will change, but right now, to have a go at a dev for using CUDA shows a lack of understanding of what's involved.

Instead I say congratulate them for at least adding some GPU compute extras; this experience will serve them well in the future, and perhaps when OpenCL/DirectCompute work better they will pick one of them instead.

I agree with this.

If there are competing APIs that are in common use and not too difficult to implement over CUDA, and the devs go with CUDA specifically because Nvidia was out trying to stifle competition, then I would agree with Brent Justice and the closing paragraphs in the review. But that is not the case...
 
If this is true, my entire viewpoint will change.

However... I've never heard of anything like that.

Also, why exactly would nVidia disable a secondary PhysX card in ATi+nVidia systems if anything you said was true?

Well all my information is secondary, but I have looked the information up in the past and found very good links.

Here is a place to start. As far as CUDA is concerned the license is easy to find.

http://www.tgdaily.com/hardware-features/38283-nvidia-supports-physx-effort-on-ati-radeon

more

http://www.+++++.com/news/14254-physx-gpu-acceleration-on-radeon-update.html

Can't seem to post the link...lol...I guess HOCP does not like N G O H Q....but put that in the +++++ and you're golden.
 
I agree with this.

If there are competing APIs that are in common use and not too difficult to implement over CUDA, and the devs go with CUDA specifically because Nvidia was out trying to stifle competition, then I would agree with Brent Justice and the closing paragraphs in the review. But that is not the case...

But how can you be sure that DirectCompute wouldn't have been able to give the same water effects? It doesn't seem like anything revolutionary, especially considering how great their water was even without CUDA. I'm not going to mention the DoF extra, considering that it's very minor and easily implementable.

I'm not going to mention anything about Crysis' water; I'm just going to put out there again that if there weren't these extra goodies then the game would run better on cheaper ATi hardware. The game would be "meant to be played" on a series of cards that flat out lose to the competition on this title.
 
Well all my information is secondary, but I have looked the information up in the past and found very good links.

Here is a place to start. As far as CUDA is concerned the license is easy to find.

http://www.tgdaily.com/hardware-features/38283-nvidia-supports-physx-effort-on-ati-radeon

more

http://www.+++++.com/news/14254-physx-gpu-acceleration-on-radeon-update.html

It's fantastic that nVidia is offering to license it, but at the same time I have never read anything about what ATi was actually asked to pay for PhysX. Pricing has been mysteriously absent in all of the articles I've read. It certainly makes ATi look bad... but still. Very few hard conclusions can be drawn.
 
But how can you be sure that DirectCompute wouldn't have been able to give the same water effects? It doesn't seem like anything revolutionary, especially considering how great their water was even without CUDA. I'm not going to mention the DoF extra, considering that it's very minor and easily implementable.

I'm not going to mention anything about Crysis' water; I'm just going to put out there again that if there weren't these extra goodies then the game would run better on cheaper ATi hardware. The game would be "meant to be played" on a series of cards that flat out lose to the competition on this title.

I am certain that most other GPGPU APIs can do this equally well, if not better in some cases. CUDA is just an API, but a good one so far that is well supported. But that is really beside the point.
 
I am certain that most other GPGPU APIs can do this equally well, if not better in some cases. CUDA is just an API, but a good one so far that is well supported. But that is really beside the point.

I would offer the opinion that it is then the developer's fault for leaving out a section of their buyers in favor of nVidia's marketing gain.

But that is me owning a 5 series.
 
It's fantastic that nVidia is offering to license it, but at the same time I have never read anything about what ATi was actually asked to pay for PhysX. Pricing has been mysteriously absent in all of the articles I've read. It certainly makes ATi look bad... but still. Very few hard conclusions can be drawn.

The penny-per-GPU figure is out there; I do not feel like digging.

But this is interesting:

http://tech.icrontic.com/news/amd-c...g-physx-support-when-ati-hardware-is-present/

Hello JC,

I’ll explain why this function was disabled.

PhysX is an open software standard. Any company can freely develop hardware or software that supports it. NVIDIA supports GPU accelerated PhysX on NVIDIA GPUs while using NVIDIA GPUs for graphics. NVIDIA performs extensive Engineering, Development, and QA work that makes PhysX a great experience for customers. For a variety of reasons–some development expense, some quality assurance and some business reasons–NVIDIA will not support GPU accelerated PhysX with NVIDIA GPUs while GPU rendering is happening on non-NVIDIA GPUs. I’m sorry for any inconvenience caused but I hope you can understand.

Best Regards,
Troy
NVIDIA Customer Care
 
http://www.bit-tech.net/bits/interviews/2010/01/06/interview-amd-on-game-development-and-dx11/1

Even though I don't think PhysX - a proprietary standard - is the right way to go, despite Nvidia touting it as an "open standard" and how it would be "more than happy to license it to AMD", but [Nvidia] won't. It's just not true! You know the way it is, it's simply something [Nvidia] would not do and they can publically say that as often as it likes and know that it won't, because we've actually had quiet conversations with them and they've made it abundantly clear that we can go whistle.

1,1,2,3,5....

Are you claiming that richard huddy is lying?
 
This isn't the early 2000s anymore. The gaming market has changed, significantly. We don't have developers that are always looking to push boundaries and take risks on new technology anymore.

Except reality disagrees with you. There are a number of DX11 games out, for example. JC2 itself is DX10 or higher only, no DX9 support. Developers are absolutely willing to try and use new tech, and they do.


So, let me get this right.....

If a technology's worth is not immediately apparent to consumers (in this case developers), then it is a failure? Is that not what you are saying? I guess the only other option I see with your statement is that you are saying this only with respect to CUDA. But let us explore that option....

Who said anything about "immediately"? I never gave any sort of a time frame, and CUDA was released in Feb 2007.

After 3 years what has CUDA been used for? Not much. There are some things that have popped up over the years, but nothing that has really stuck around.

The iPhone SDK was released a year after CUDA, in Feb 2008. What has the iPhone SDK been used for? Well, a shit ton. Android 1.0 came out in Oct 2008. What has Android been used for? Well, a shit ton. DX11 came out Oct 2009. What has DX11 been used for? Take a look: http://en.wikipedia.org/wiki/List_of_games_with_DirectX_11_support There are currently 6 released DX11 games, 8 unreleased DX11 games, and 9 game engines with DX11 support. And that is after 6 months with only ATI even having any hardware that supports it.

Do you know of some other GPGPU technology that is successful and being implemented in games? Is there really any large competition for this tech? Are there a lot of games that are using the GPU to add things like physics and other game enhancements that do not use CUDA? If there are games that use something other than CUDA, then it is CUDA that is failing and I would agree with you completely.

Nope, I don't. Doesn't change what I said, either.

If not (and I do not know of other games that use something other than CUDA), then you are essentially saying that the tech is a failure since developers won't use this tech on their own.

No. It could mean a number of things. There are other physics libraries with OpenCL support that just haven't been picked up by any games yet, so it could mean things are still in development. DirectCompute in particular is much newer than CUDA. It could mean that the current implementations just aren't attractive to developers.

But just so you know. That is a stupid and worthless definition of "failure" since a lot of what many people consider to be very successful today was initially a "failure" by that myopic definition. The personal computer was, for example, a failure by that definition.

No it wasn't. But you also can't compare consumer products with software libraries. They are *very* different in adoption. Software libraries that don't make a strong impression early on tend to fade away into obscurity. Their window for adoption is much smaller than a consumer product since the software industry moves much quicker.

But an API that is a failure today can absolutely become a success tomorrow, although that should occur through new versions of the API that have addressed developer complaints, and not by the API owner paying people to use it. Likewise, successful APIs can become failures through stagnation. OpenGL, for example, used to be a success. Not so much anymore. DirectX used to be a failure, not so much anymore.
 
The penny-per-GPU figure is out there; I do not feel like digging.

But this is interesting:

http://tech.icrontic.com/news/amd-c...g-physx-support-when-ati-hardware-is-present/

I've read that before, and find it highly interesting. What I don't understand is why support was taken down in the first place. It's evident through driver mods and hardware hacks that it works 100%; it's simply nVidia not allowing it to happen.

It's odd, because PPUs like the 9600 GSO were gaining momentum and could have become an entirely new market for nVidia to capitalize on. By taking a very firm stand on this, they are sending a very clear message.

The timing was very convenient, too. With the 5870 coming out, you saw people with the kinds of cards nVidia targeted as PPUs looking to upgrade. Had they not closed support on it, a lot more people would have been swayed to buy the 5870 and frankenstein PhysX onto it rather than doing what nVidia wanted and buying a larger nVidia GPU while still using the other card.

It begs the question: how badly does nVidia really want PhysX on the Radeon series? If Radeon was developed with PhysX in mind, and it truly was free, they wouldn't make a penny. Yet they're OK with screwing over people who actually purchased an nVidia card with the intention of getting nVidia extras?

I just don't really buy it...
 
DirectCompute and OpenCL might be able to do these things. Honestly, we don't know for sure because no one has, but let's assume for argument's sake that they BOTH can, and with equal effort to CUDA. As a developer, especially in today's environment of multi-million-dollar gaming budgets, would you rather go with the standard that is open, used by no one and pushed to you/advocated by no one, or would you rather use CUDA, where you have a tangible, singular corporation who advocates for their product, a product that has been used many times successfully, and by a company that will pay you to use it?

All this, and ATi COULD be getting a slice of the pie if they so choose. Instead, they choose to deny their customers the experience of their competitor's products because they don't have a viable competing technology, their version was late to the game and is unproven, and for which they refuse to advocate.

It's a no brainer, and anyone who blames it on Nvidia is blind. ATi dropped the ball, Nvidia got to it first, and is being much more aggressive. Guess who wins? The 60% of gamers who have an nvidia card.

Finally, and I have no basis in fact for this statement, but I have a feeling that there is a practical reason why these visual effects in games (Batman, Just Cause, etc.) are done via CUDA, and not DirectCompute or ATi's equivalent. I have a feeling that those other two don't compete in some way, whether it be programming, features, efficiency across card brands, scaling, whatever...
 
DirectCompute and OpenCL might be able to do these things. Honestly, we don't know for sure because no one has, but let's assume for argument's sake that they BOTH can, and with equal effort to CUDA. As a developer, especially in today's environment of multi-million-dollar gaming budgets, would you rather go with the standard that is open, used by no one and pushed to you/advocated by no one, or would you rather use CUDA, where you have a tangible, singular corporation who advocates for their product, a product that has been used many times successfully, and by a company that will pay you to use it?

All this, and ATi COULD be getting a slice of the pie if they so choose. Instead, they choose to deny their customers the experience of their competitor's products because they don't have a viable competing technology, their version was late to the game and is unproven, and for which they refuse to advocate.

It's a no brainer, and anyone who blames it on Nvidia is blind. ATi dropped the ball, Nvidia got to it first, and is being much more aggressive. Guess who wins? The 60% of gamers who have an nvidia card.

Finally, and I have no basis in fact for this statement, but I have a feeling that there is a practical reason why these visual effects in games (Batman, Just Cause, etc.) are done via CUDA, and not DirectCompute or ATi's equivalent. I have a feeling that those other two don't compete in some way, whether it be programming, features, efficiency across card brands, scaling, whatever...

I'm pretty sure ATI stream was available before CUDA :p

And no, 60% of gamers do not have NV cards that support CUDA; 60% of gamers have NV cards, period, and a lot of the ones that do, let's say with the 8800GT being the median, couldn't even push these games with PhysX enabled.
 
DirectCompute and OpenCL might be able to do these things. Honestly, we don't know for sure because no one has,

http://www.youtube.com/watch?v=K1I4kts5mqc

but let's assume for argument's sake that they BOTH can, and with equal effort to CUDA. As a developer, especially in today's environment of multi-million-dollar gaming budgets, would you rather go with the standard that is open, used by no one and pushed to you/advocated by no one, or would you rather use CUDA, where you have a tangible, singular corporation who advocates for their product, a product that has been used many times successfully, and by a company that will pay you to use it?

If someone paid you 100 grand to be in a foot fetish issue of playgirl, would you do it?
 
I'm pretty sure ATI stream was available before CUDA :p

And no, 60% of gamers do not have NV cards that support CUDA; 60% of gamers have NV cards, period, and a lot of the ones that do, let's say with the 8800GT being the median, couldn't even push these games with PhysX enabled.

Stream may have been available first (wasn't it with that ATi-branded transcoder that shipped with the drivers?) but it was never implemented in games, so I suppose my statement SHOULD read "CUDA was implemented in games first."

And I would say that the % of nVidia cards capable of these effects is proportional to the % of ATi cards that are capable of these effects, so I'm willing to bet the 60% would hold.
 
Who said anything about "immediately"? I never gave any sort of a time frame, and CUDA was released in Feb 2007.

After 3 years what has CUDA been used for? Not much. There are some things that have popped up over the years, but nothing that has really stuck around.

The iPhone SDK was released a year after CUDA, in Feb 2008. What has the iPhone SDK been used for? Well, a shit ton. Android 1.0 came out in Oct 2008. What has Android been used for? Well, a shit ton. DX11 came out Oct 2009. What has DX11 been used for? Take a look: http://en.wikipedia.org/wiki/List_of_games_with_DirectX_11_support There are currently 6 released DX11 games, 8 unreleased DX11 games, and 9 game engines with DX11 support. And that is after 6 months with only ATI even having any hardware that supports it.

So then you are saying this about GPGPU APIs in general and not specifically about CUDA? There are a lot of examples of GPGPU APIs out there, NONE of them are in great use thus far.

STOP saying CUDA when you can only mean ANY GPGPU API. You are impossible to talk to because you miss the point by miles and miles.

Nope, I don't. Doesn't change what I said, either.

Yes it does!!!!

I am talking about GPGPU programming. There are many APIs, not just CUDA, and it was not the first. So far CUDA is the most successful. End of story, fact.

If NO API is successful because developers are not adopting it, then is the failure that of any one specific API? Or is it something else? It must be that something else, and you are saying that is a failure, and then making the illogical jump to say CUDA is therefore a failure. Those are your words. Do not try to dice it into something else....:rolleyes:
 
If NO API is successful because developers are not adopting it, then is the failure that of any one specific API? Or is it something else? It must be that something else, and you are saying that is a failure, and then making the illogical jump to say CUDA is therefore a failure. Those are your words. Do not try to dice it into something else....:rolleyes:

I don't think his argument is completely sound, but at the same time I don't think anyone doubts that CUDA is fantastic for GPGPU processes. There's a reason nVidia dominates that field.

I think the bigger question is whether or not it's necessary to provide the kind of graphical enhancements they have in games.
 
I don't see why it couldn't have been implemented. Water physics are clearly right there.


But would you have normally? My point is that money and pushing isn't exactly a measure of what's best for the industry.

Exactly, there has to be a reason why it was not used, and that's not water physics; that's a water simulation using DirectCompute, which can be very different, as there is no dynamic body in the water like a boat.

And though you are right, when the alternative to a product that is actively supported, advocated for by its owner and actually USED by the industry is OCL and DC, which are used by no one, advertised and pushed by no one and have much less support than CUDA, the choice really isn't that hard as a dev. Add a nice paycheck into the deal and it's a no-brainer.

What's bad for the industry are competing standards, open or closed, that no one can agree upon, and the resulting fracture in availability and implementation HURTS the consumer. Example: DirectX.
 
I work on a team that has traditionally been heavily reliant on clusters for computational math. We started using CUDA 2 years ago and I think their support community is outstanding. Also, when we go to conferences, everyone uses CUDA and the short courses are for CUDA. At work, we're sticking with nVidia for these reasons.

However, at home, this puts me in a bind. If I want to work from home, I need an nVidia card to run my code. I want to buy a 5850 because of the bang/buck ratio but I'd have to figure out how to use both cards under Linux. It would be great if they came up with something everyone can agree upon and, in the end, make everyone a winner.
 
Exactly, there has to be a reason why it was not used, and that's not water physics; that's a water simulation using DirectCompute, which can be very different, as there is no dynamic body in the water like a boat.

And though you are right, when the alternative to a product that is actively supported, advocated for by its owner and actually USED by the industry is OCL and DC, which are used by no one, advertised and pushed by no one and have much less support than CUDA, the choice really isn't that hard as a dev. Add a nice paycheck into the deal and it's a no-brainer.

What's bad for the industry are competing standards, open or closed, that no one can agree upon, and the resulting fracture in availability and implementation HURTS the consumer. Example: DirectX.

http://www.youtube.com/watch?v=z_7DgpJK-eI

Can't find exactly what both of us are looking for, but I don't think it would be incredibly hard to implement.
 
That's purty.
Anyway, don't get me wrong, I would love to see DC take over. It's supposed to be a very capable product, but if no one uses it in-game, we'll never know. To get there, MS has to go out there and convince people to use DC instead of CUDA in a game. THAT's the hurdle that has to be overcome. ATi has had years with Stream out there, and they have just sat on it, so at this point I'd call it dead.

But since DC is so new, free and built into every Win7 system, I'd say it has a fair shot as adoption goes up...
 
Exactly, there has to be a reason why it was not used, and that's not water physics; that's a water simulation using DirectCompute, which can be very different, as there is no dynamic body in the water like a boat.

No water physics were used in JC2 either, it's all simulation.
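
To make the simulation-vs-physics distinction concrete, here's a toy sketch of my own (in CUDA since that's what JC2 used, but the same idea maps to any compute API): a height-field wave update just nudges a grid of heights based on their neighbours each frame. Nothing in it knows about boats or rigid bodies, which is what separates a pretty water surface from actual water physics. Grid size, wave speed and damping are all invented for illustration.

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Toy height-field "water": each interior cell's next height comes from its
// neighbours via a damped wave equation. No rigid bodies anywhere.
__global__ void waveStep(const float* hPrev, const float* hCurr, float* hNext,
                         int w, int h, float c2, float damping)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= w - 1 || y >= h - 1)
        return;                                    // keep the border fixed

    int i = y * w + x;
    float lap = hCurr[i - 1] + hCurr[i + 1] + hCurr[i - w] + hCurr[i + w]
              - 4.0f * hCurr[i];

    // Verlet-style update, then a little damping so ripples fade out.
    hNext[i] = (2.0f * hCurr[i] - hPrev[i] + c2 * lap) * damping;
}

int main()
{
    const int w = 512, h = 512;
    const size_t bytes = w * h * sizeof(float);
    std::vector<float> host(w * h, 0.0f);
    host[(h / 2) * w + w / 2] = 1.0f;              // poke the surface once

    float *dPrev, *dCurr, *dNext;
    cudaMalloc(&dPrev, bytes);
    cudaMalloc(&dCurr, bytes);
    cudaMalloc(&dNext, bytes);
    cudaMemcpy(dPrev, host.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dCurr, host.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemset(dNext, 0, bytes);

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    for (int step = 0; step < 3; ++step) {         // ping-pong a few frames
        waveStep<<<grid, block>>>(dPrev, dCurr, dNext, w, h, 0.25f, 0.995f);
        float* tmp = dPrev; dPrev = dCurr; dCurr = dNext; dNext = tmp;
    }

    cudaMemcpy(host.data(), dCurr, bytes, cudaMemcpyDeviceToHost);
    printf("centre height after 3 steps: %f\n", host[(h / 2) * w + w / 2]);

    cudaFree(dPrev); cudaFree(dCurr); cudaFree(dNext);
    return 0;
}

In a game the resulting height grid would just be fed to the renderer as a displacement or normal source; the compute side stays about this simple.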
 
... then why are the effects Nvidia only?

If you want the best gameplay experience and value for your dollar, the GeForce GTX 470 is definitely the right answer. It supports the fancy Bokeh filter and CUDA-based GPU Water Simulation option, and it performs well at high resolutions to boot. It is clear that visually, the experience is better on current GeForce GTX 480 and GeForce GTX 470 GPUs in this particular game due to the use of NVIDIA only supported features.


EDIT: I think we're splitting hairs here.
"CUDA-based water simulation" if you prefer, then
 
... then why are the effects Nvidia only?

Because CUDA/PhysX were used to perform the calculations that create the simulations?

To properly create a fluid body, you'd need millions if not billions of particles; I don't think any GPU can handle that today.
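
Quick back-of-envelope to put numbers on that (my figures, purely illustrative):

#include <cstdio>

int main()
{
    // Naive particle-fluid state at the "billions of particles" scale:
    // position + velocity + one scalar (density) per particle.
    const long long particles = 1000000000LL;           // one billion
    const long long bytesEach = 3 * 4 + 3 * 4 + 4;       // 28 bytes
    double gb = (double)particles * bytesEach / 1e9;
    printf("%lld particles x %lld B = %.0f GB of state\n",
           particles, bytesEach, gb);
    // ~28 GB before any neighbour lists -- far beyond the ~1 GB cards of
    // 2010, which is why games fake water with height fields, screen-space
    // tricks and a limited particle budget instead.
    return 0;
}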
 
LOL, it was more of a rhetorical question. Obviously SOME GPGPU API would have to be used. CUDA was chosen for all of the reasons I mentioned earlier...
 
I would offer the opinion that it is then the developer's fault for leaving out a section of their buyers in favor of nVidia's marketing gain.

But that is me owning a 5 series.

Leaving out? Because ATI cards can't play the game? Or wait, they can't see the ending cutscene? No. It's none of those things. IF you have an Nvidia card you get some extra graphical features.

At no cost to the developer they had the option to ADD features for a subset of users, and you are saying they shouldn't have done it because it's unfair to people who bought ATI's graphics cards. How is it fair to Nvidia's user base to not add the features when it is free for the developer?
 