NVIDIA works with devs to add CUDA ocean sim and graphics effects to Just Cause 2

YES YOU CAN....ATI is free to support this if they want to. :rolleyes:

No you can't. Nvidia owns it, nvidia controls it, nvidia determines who gets to implement it. ATI can't just make a CUDA driver without licensing it from Nvidia.

I am my own source. ATI was engaged with the devs, but they never even visited the studio; they only started emailing once they saw public interest in the demo. They weren't committed from the start at all.

In other words I can just completely ignore you, got it.

No, you are locked into using Microsoft's operating system.

Are you going somewhere with that statement?
 
So can anyone confirm whether or not these features are absent if you're running ATI hardware?
 
No you can't. Nvidia owns it, nvidia controls it, nvidia determines who gets to implement it. ATI can't just make a CUDA driver without licensing it from Nvidia.

Yes they can!!!!!

Are you being willfully ignorant or something?

In case you are too lazy....

http://www.google.com/search?source=ig&hl=en&rlz=&=&q=cuda+on+ati&aq=f&aqi=g2g-m2&oq=

In particular

http://www.maximumpc.com/article/news/cuda_running_a_radeon

http://www.extremetech.com/article2/0,2845,2324555,00.asp

http://www.techradar.com/news/compu...d-to-support-nvidia-s-cuda-technology--612041

It is AMD/ATI that do not want CUDA to run on their cards.
 
In other words I can just completely ignore you, got it.

It's funny that people can be told the absolute bottom line truth and completely choose to ignore it. There's nothing more I can do but give you that. It's your choice whether you will accept it or let cognitive dissonance get in the way.
 
It's not a straw man. What exactly would you say I am leaving out here? This is exactly what has been said here. Are you suggesting that there is some third alternative where NVIDIA uses DirectCompute for the benefit of ATI? Do you really think that's good from a business perspective? They want to enable and reward their loyal customer base. I don't see anything wrong with that.

Not to mention they wouldn't have been able to make this work with the gimped version of compute shader for older hardware (CS4.0), meaning if they'd used DirectCompute they would've limited the install base that could use this to AMD and NV DX11 cards. There are tens of millions more CUDA-capable GPUs out there than there are DX11 cards, so it makes sense. Plus it's cool that it's being enabled back to G80 via CUDA; it's not locking anyone into the latest generation to get the new stuff.
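For what it's worth, the CS4.x point is real: DirectCompute on DX10-class hardware is an optional, reduced profile that an application has to probe for at runtime. A minimal sketch of that check using the D3D11 API (illustrative only; device creation simplified and most error handling omitted):

```cpp
#include <d3d11.h>
#include <cstdio>

// Create a D3D11 device and report which compute shader profile is available.
int main() {
    ID3D11Device* device = nullptr;
    D3D_FEATURE_LEVEL level;
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        nullptr, 0,                       // let the runtime pick the highest feature level
        D3D11_SDK_VERSION, &device, &level, nullptr);
    if (FAILED(hr)) { std::printf("no D3D11 device\n"); return 1; }

    if (level >= D3D_FEATURE_LEVEL_11_0) {
        std::printf("CS5.0 available (full DirectCompute)\n");
    } else {
        // On feature level 10.x, CS4.x support is optional and must be queried.
        D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
        device->CheckFeatureSupport(D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
                                    &opts, sizeof(opts));
        std::printf(opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x
                        ? "CS4.x available (reduced DirectCompute)\n"
                        : "no compute shader support on this hardware\n");
    }
    device->Release();
    return 0;
}
```

Note that this path needs the D3D11 runtime to exist at all (Vista/Win7), which is part of why the CUDA route reaches further back to XP and G80-era cards.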

Yes, let's change points since your last straw man fell down. You stated that anyone who doesn't think what nV is doing with CUDA is a good thing just wants crappy console ports. A fallacy on so many levels it's painful, from insinuating we can't get good PC games if nV doesn't add proprietary code to games, to stating that without this attempt at vendor lock-in we have no choice but to receive marginal console ports.

Don't try to paint CUDA/PhysX/etc... as some kind of rewards program; that is just funny stuff right there.
 

Do you understand the concept of "licensing"? ATI would have to pay nvidia to support CUDA.

It's funny that people can be told the absolute bottom line truth and completely choose to ignore it. There's nothing more I can do but give you that. It's your choice whether you will accept it or let cognitive dissonance get in the way.

No, you've only stated what you *claim* the truth is - and your credibility is minimal at best. Since you can't source your claims, as far as I'm concerned you're just making shit up, *especially* since there is evidence directly contrary to what you are claiming.
 
Actually no, ATI would not have to pay to implement CUDA; they have no reason not to. ATI would have to pay for PhysX, though. Nvidia has said it's reasonable, a couple of cents per card, but I don't know if that is a lot or not.
 
Do you understand the concept of "licensing"? ATI would have to pay nvidia to support CUDA.

Yes I do. Nvidia is NOT charging anything for this, RTFA. Do you not get that?

I have given you plenty of sources. You just give me...well nothing. You basically have been condescending towards me and others all while not supporting anything you say.
 
No, you've only stated what you *claim* the truth is - and your credibility is minimal at best. Since you can't source your claims, as far as I'm concerned you're just making shit up, *especially* since there is evidence directly contrary to what you are claiming.

Pot meet kettle....
 
Wow, 1 game that uses CUDA....and everyone is freaking out.

Either way, CUDA is not going to stick.

Remember when the FX5900 came out and they didn't want to go by the MS DX standard....that sure got them far, didn't it?

The water that the OP links looks just like Crysis to me....

So if Crysis can do it on any video card...why do we need CUDA?
 
Wow, 1 game that uses CUDA....and everyone is freaking out.

Either way, CUDA is not going to stick.

Remember when the FX5900 came out and they didn't want to go by the MS DX standard....that sure got them far, didn't it?

The water that the OP links looks just like Crysis to me....

So if Crysis can do it on any video card...why do we need CUDA?

It's nVidia's strategy to attract customers and make sure they buy nVidia GPUs with a higher price tag.
 
Games should use OpenCL, and ATI should get their driver team sorted out and bundle OpenCL as part of the driver install like Nvidia has for quite some time now, because ATI is holding back OpenCL support (do not point me to the ATI OpenCL SDK website, as you need to make an account that no normal user is going to do).

If I use Nvidia I get CUDA, OpenCL, PhysX (XP to Win7 supported) and MS DirectCompute (Vista/Win7 only, so not very useful for everyone, as GPU programming works with fewer issues under XP than it does under Vista or Win7).

With ATI you get DirectCompute only (Vista/Win7 only) and something called CAL that's not very well optimized and that basically no one uses apart from F@H, plus lots of CPU use (unless settings are changed).
 
Yes I do. Nvidia is NOT charging anything for this, RTFA. Do you not get that?

I have given you plenty of sources. You just give me...well nothing. You basically have been condescending towards me and others all while not supporting anything you say.

Actually you didn't read your own article.

Right smack in the middle of the ExtremeTech article it states that AMD/ATI would have to license PhysX in order to hardware accelerate the CUDA code. CUDA itself is worthless to AMD/ATI without PhysX.
 
Reeks too much of a Rambus patent trap anyway. Even if nVidia licensed Physx to ATI to get it well established in games, they could then cancel their license agreement and screw ATI, or jack up the licensing fee once every game uses it.
 
Every single one of the CUDA capable GPUs can run DirectCompute *AND* OpenCL. Currently DirectCompute and OpenCL can run on MORE cards than CUDA, as they both can run on every CUDA card in addition to a significant chunk of ATI cards as well. DirectCompute may be in DX11, but it can run on DX10 cards.

OpenCL would also have the advantage of seamlessly falling back to the CPU.
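To illustrate that last point: the OpenCL host API lets an application ask each platform for a GPU device and, failing that, take a CPU device instead, then run the same kernels either way. A minimal sketch (illustrative only; it assumes some installed platform exposes a CPU device, and trims error handling):

```cpp
#include <CL/cl.h>
#include <cstdio>

// Look for a device of the wanted type on any installed OpenCL platform.
static cl_device_id pick_device(cl_device_type wanted) {
    cl_uint nplat = 0;
    clGetPlatformIDs(0, nullptr, &nplat);
    cl_platform_id plats[16];
    if (nplat > 16) nplat = 16;
    clGetPlatformIDs(nplat, plats, nullptr);
    for (cl_uint i = 0; i < nplat; ++i) {
        cl_device_id dev;
        if (clGetDeviceIDs(plats[i], wanted, 1, &dev, nullptr) == CL_SUCCESS)
            return dev;
    }
    return nullptr;
}

int main() {
    cl_device_id dev = pick_device(CL_DEVICE_TYPE_GPU);   // prefer a GPU...
    if (!dev) dev = pick_device(CL_DEVICE_TYPE_CPU);      // ...fall back to the CPU
    if (!dev) { std::printf("no OpenCL device found\n"); return 1; }

    char name[256] = {0};
    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof(name), name, nullptr);
    std::printf("running kernels on: %s\n", name);
    // Context creation, program build and kernel launches are identical
    // whichever device type was selected.
    return 0;
}
```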

Nvidia implemented it in the best way to help Nvidia. Not the best way to help gamers.



I *am* addressing the main point. Since I've already stated basically the same damn thing 2 times and you just ignore it, I'm not going to repeat it a third time.

From what I have read, you have addressed nothing and have done nothing but complain. As someone already pointed out, the feature through DirectCompute/OCL on anything other than DX11 hardware is gimped and more than likely does not perform up to par. CUDA, even on lowly 8800GTs, runs far faster and better than DC/OCL does on even a 4870.
 
They could do it by delivering these effects via DX11 instead of CUDA, thus widening the customer experience of the game even more. Actually, I put this squarely on the game developer to do, it isn't AMD's or NVIDIA's responsibility to make sure the game has the best experience on both video cards. What I see happening here is the developer being ok with the game delivering two different experiences based on what brand video card you are using. I'm not sure I like that, personally.

The problem you have, Brent, though, is this: How many DX11 cards are out there? How many are DC/OCL capable (my understanding is anything DX10 or newer)? Which is faster and has more support, DC/OCL or CUDA? Which has the larger install base? And lastly, which enables these features to reach a larger number of people?

Answers: 2-3M at most right now. All. CUDA. CUDA, and again CUDA.
 
... My word...

You can use CUDA and PhysX on an ATI card with the specific driver hacks.

Who would want to, though?

CUDA and PhysX will be overtaken by something better soon anyway.

Plus if you have an ATI card and you want PhysX you can buy a cheapo Nvidia card from eBay for next to nowt and use that.
 
Actually you didn't read your own article.

Right smack in the middle of the ExtremeTech article it states that AMD/ATI would have to license PhysX in order to hardware accelerate the CUDA code. CUDA itself is worthless to AMD/ATI without PhysX.

Really? Can you point it out to me? I think you mean:

But what about PhysX? Nvidia claims they would be happy for ATI to adopt PhysX support on Radeons. To do so would require ATI to build a CUDA driver, with the benefit that of course other CUDA apps would run on Radeons as well.

So ATI would have to support CUDA to run PhysX, not the other way around. And CUDA is not worthless without PhysX :rolleyes:
 
Just because they have the architecture to make it possible does not mean they would be licensing-free.

I think that's incredibly obvious.

Either way, nVidia has shown in the recent past that it has no intention of working with ATi for a common good. By having developers close off AA to ATi cards and disabling PhysX when an ATi card is present, they are already showing the kind of predatory business tactics the red team would expect if they were to go into a mutual agreement.

Disregarding that, there is no telling what nVidia would demand for PhysX licensing on future release cards. Because ATi customers would expect PhysX, nVidia could charge whatever they please for their software and leave ATi cards without the price/performance selling point that they commonly fall back on. Leaving the pricing of your cards up to your only competitor is oligopoly suicide. It just doesn't make sense.
 
Really? Can you point it out to me? I think you mean:



So ATI would have to support CUDA to run PhysX, not the other way around. And CUDA is not worthless without PhysX :rolleyes:

It was in the same damn paragraph as the quote you posted for crying out loud:

ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable—it would work out to less than pennies per GPU shipped.


From what I have read, you have addressed nothing and have done nothing but complain. As someone already pointed out, the feature through DirectCompute/OCL on anything other than DX11 hardware is gimped and more than likely does not perform up to par. CUDA, even on lowly 8800GTs, runs far faster and better than DC/OCL does on even a 4870.

Now compare CUDA on an 8800GT to DC/OCL on an 8800GT.
 
Really? Can you point it out to me? I think you mean:



So ATI would have to support CUDA to run PhysX, not the other way around. And CUDA is not worthless without PhysX :rolleyes:

You make me laugh. How about you actually quote the ENTIRE statement instead of just the piece that benefits you. Here I'll do you a favor by highlighting the piece you keep leaving out.

ExtremeTech said:
But what about PhysX? Nvidia claims they would be happy for ATI to adopt PhysX support on Radeons. To do so would require ATI to build a CUDA driver, with the benefit that of course other CUDA apps would run on Radeons as well. ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable

EDIT: Dammit kllernohj! You beat me to it. :mad: ;)
 
You make me laugh. How about you actually quote the ENTIRE statement instead of just the piece that benefits you. Here I'll do you a favor by highlighting the piece you keep leaving out.

Wow...just wow...so are you saying there is a contradiction in the article?

Though it has been submitted to no outside standards body, it is in fact completely free to download the specs and write CUDA apps, and even completely free to write a CUDA driver to allow your company's hardware (CPU, GPU, whatever) to run apps written in the CUDA environment.

If you do not understand this, then I am done. Nothing more can be said.

The part you quoted says that ATI would have to pay for PhysX. PhysX is not CUDA.

Running CUDA is free. Why would the author contradict himself?
 
Running CUDA is free. Why would the author contradict himself?

Because it's pro-nVidia.

Though it has been submitted to no outside standards body, it is in fact completely free to download the specs and write CUDA apps, and even completely free to write a CUDA driver to allow your company's hardware (CPU, GPU, whatever) to run apps written in the CUDA environment.

it is in fact completely free to download the specs and write CUDA apps

Says nothing about hardware, states that writing programs is free.

and even completely free to write a CUDA driver to allow your company's hardware (CPU, GPU, whatever) to run apps written in the CUDA environment.

Driver != Hardware.

The article implies that it is free to support CUDA through software, but does not mention the cost of implementing CUDA on hardware or the licensing fees associated with it. It's carefully worded to state absolutely nothing. Once you have quotes stating that ATi is welcome to implement hardware CUDA, you'll have an argument. Right now all you have is nVidia telling ATi that they can write their own drivers.
 
The article implies that it is free to support CUDA through software, but does not mention the cost of implementing CUDA on hardware or the licensing fees associated with it.
What are the license fees? Is this public knowledge? Are we certain there are license fees?
 
What are the license fees? Is this public knowledge? Are we certain there are license fees?

I don't think nVidia would spend time and money on GPU processing and give away the results for free. 11235 was directly stating that they are welcome to use it, which I believe is doubtful.
 
I wouldn't say fail PhysX...but damn, at least make it so the effects couldn't be done via CPU/ATi, instead of claiming that only CUDA/PhysX can do soft shadowing or depth of field...I mean, the water is in no way better than Crysis with CUDA...all it does is add swells and the like, which, again, is something that Crysis already does ON THE CPU with no PhysX required.

Soft shadows and depth of field? How is this CUDA? Didn't know that soft shadows required physics processing...

Again, Nvidia just ass kissing to get what they want only for their customers...I am really trying not to hate Nvidia, but damn, they're complete dicks with this shit...

This is how the meeting went:

"Duhrr...lets add some stuff for this game and claim only we can do it...duhrr (drools)..."
"Bahhh...what can we do?//...add stuff like soft shadows and depth of field??/"
"Dahh...but that means ATi can do it too..."
"But uhh...not if we claim its CUDA and limit only to our hardware thru the drivers and software!"
"Brwiant!"

Meeting done!

You're fucking taking it out of context entirely... nowhere during that video did they say this is only possible with Nvidia hardware, BUT it is Nvidia, so it's also doubling as advertising for them, just like when games say "Meant to be played with Nvidia".

All they said is that it's possible... and I'm pretty sure that if it was ATI doing this you wouldn't have shit to say. You're just trying to flip this into something it isn't.
 
I don't think nVidia would spend time and money on GPU processing and give away the results for free. 11235 was directly stating that they are welcome to use it, which I believe is doubtful.

What I said is true. The information is out there, and I have posted sources supporting my claims. Not a SINGLE counter example to anything I have said has been shown. Just a lot of fluff and misunderstanding.

A LOT of misunderstanding...

I am done with this thread...please feel free to spread misinformation. :rolleyes:
 
You're fucking taking it out of context entirely... nowhere during that video did they say this is only possible with Nvidia hardware, BUT it is Nvidia, so it's also doubling as advertising for them, just like when games say "Meant to be played with Nvidia".

All they said is that it's possible... and I'm pretty sure that if it was ATI doing this you wouldn't have shit to say. You're just trying to flip this into something it isn't.

The fault lies within the developers who are putting things into the game that only benefit a certain amount of their consumers. Considering the demo has TWIMTBP slapped all over it, I wouldn't be surprised at all if the development team was getting a little extra slice of the pie to make Fermi and nVidia look a little better after the numerous fiascoes they've had.

People are upset because they aren't getting the same game as nVidia users are. And rightly so.

What I said is true. The information is out there, and I have posted sources supporting my claims. Not a SINGLE counter example to anything I have said has been shown. Just a lot of fluff and misunderstanding.

A LOT of misunderstanding...

I am done with this thread...please feel free to spread misinformation. :rolleyes:

Oh look, you completely avoided my response. What a surprise. There is no misunderstanding, just you posting that nVidia would give away their proprietary software so that both companies can hold hands and sing songs. I think we both know that is false, and you are just bending the facts to make it seem acceptable that the developers of this game are using gimmicks to implement features that could easily be done on the cpu.
 
I don't think nVidia would spend time and money on GPU processing and give away the results for free. 11235 was directly stating that they are welcome to use it, which I believe is doubtful.
A fair stance, but it'd do you well to substantiate it with something rather than try and discount it with conjecture about phantom licensing fees.
 
A fair stance, but it'd do you well to substantiate it with something rather than try and discount it with conjecture about phantom licensing fees.

I don't have to prove my point; CUDA will never be licensed or implemented by ATi. The only thing I "have" to do to "win" the argument is disprove or imply doubt that nVidia would love to give ATi CUDA.

The entire argument is pointless, and has little to do with the topic.
 
I don't have to prove my point; CUDA will never be licensed or implemented by ATi. The only thing I "have" to do to "win" the argument is disprove or imply doubt that nVidia would love to give ATi CUDA.

The entire argument is pointless, and has little to do with the topic.

Someone in this thread has severe reading comprehension issues.

CUDA = free to use

PhysX = must license to use

It doesn't get any simpler than that, but some people in this thread can't seem to GRASP even the SIMPLEST of concepts when it comes to this stuff. But oh well.
 
People are upset because they aren't getting the same game as nVidia users are. And rightly so.

Reminds me of kids on the playground. Everybody is happy with one scoop of ice cream until they see the other guy with two. No different to what's happening here.
 
If CUDA is free and better than OpenCL ATI should use it, end of story. PhysX needs to die quickly and painfully though.
 
Now compare CUDA on an 8800GT to DC/OCL on an 8800GT.

Sure thing.

http://forum.beyond3d.com/showthread.php?t=56105

These are OCL results.

On stock HD5870 it takes:
2.27s for 16 queen
0.85s for 16 queen -local

Just for fun on HD5870@1130/1248:
2.07s for 16 queen
0.67s for 16 queen -local
66.0s for 18 queen -local

This is on the new one:
On stock HD5870 it takes:
3.40s for 16 queen
0.77s for 16 queen -local

Just for fun on HD5870@1130/1248:
2.80s for 16 queen
0.59s for 16 queen -local
30.50s for 18 queen -local


Here are some preliminary results (with a GeForce 8800GT):

New -local 17: 4.41s
Old -local 17: 5.06s

New -local 18: 32.8s
Old -local 18: 38.8s

New -local 19: 270s
Old -local 19: 330s

Running on GTX 285:

New -local 17: 2.41s
Old -local 17: 2.44s

New -local 18: 15.1s
Old -local 18: 18.5s

New -local 19: 123s
Old -local 19: 155s

Running on GeForce GTX 285:

16 (gpu): 1.09s
-local 16 (gpu): 0.5s

17 (gpu): 4.01s
-local 17 (gpu): 2.55s

18 (gpu): 28.3s
-local 18 (gpu): 19.2s

GTX285 16 Queens
Cuda = .686s
OCL = .643s

Thanks to noquater for bringing something to my attention I overlooked; I've added the newer results for the 5870, as they show mixed results. Still, though, you have to question why an 8800GT can keep up with a 5870 when the latter has so much more computational power than the former.
 
That is one particular program, hardly conclusive. Indeed I could post the following link to a thread which shows the exact opposite.

http://www.ngohq.com/graphic-cards/16920-directcompute-and-opencl-benchmark.html

I will concede that CUDA is superior to OpenCL though.

Cool, shame it's only benching the compute shaders and not actually doing real computations. The n-queen bench being run in the link I posted is running actual computations on the CS, not benching how fast the CS can process a pixel or whatever. It's no secret ATI's CS has been faster in synthetic benches, but when put to use it has come up slower, which the n-queen bench proves.

In case anyone is wondering, the n-queen bench takes x number of queens on a chess board and computes placements for the queens such that none of them can take out another. It is also being developed by the guys behind F@H, so it's not like they are playing favorites here.
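For anyone curious what the bench is actually computing: counting N-queens placements is a classic backtracking search. A minimal single-threaded sketch of the idea (illustrative only, not the benchmark's actual code; GPU versions typically parallelize the search by splitting it across the first few rows):

```cpp
#include <cstdio>

// Count placements of n queens on an n x n board so that no queen can take
// another. Bitmasks track which columns and diagonals are attacked per row.
static long long solve(int n, unsigned cols, unsigned ldiag, unsigned rdiag) {
    const unsigned full = (1u << n) - 1;
    if (cols == full) return 1;                        // every row filled: one solution
    long long count = 0;
    unsigned free_cells = full & ~(cols | ldiag | rdiag);
    while (free_cells) {
        unsigned bit = free_cells & (0u - free_cells); // lowest free square in this row
        free_cells -= bit;
        count += solve(n, cols | bit,
                       (ldiag | bit) << 1,             // diagonal attacks shift one column per row
                       (rdiag | bit) >> 1);
    }
    return count;
}

int main() {
    int n = 16;                                        // the "16 queen" case from the results above
    std::printf("%d-queens solutions: %lld\n", n, solve(n, 0, 0, 0));
    return 0;
}
```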
 