DirectCompute vs. CUDA vs. OpenCL

There was never any need to offer something to someone if they were free to take it. CUDA is free.

No, CUDA is free to use, not free to port to a new platform. That is a *huge* difference.

I explained why it was wrong, and you said I have no point, which is rude since I do have a point.

Sorry, I must have truly missed where you explain why what I said was wrong. Can you please elaborate on why my statement was wrong? Preferably with less emotion. :)

I think you all need to step back here and understand one thing. CUDA is not a failure.

(I'll try not to repeat myself.) If you read my previous post on why CUDA is an easy sell, then you're halfway there. GPGPU computing is nothing new; however, the fundamental difference is that before, programmers, students, and researchers were forced to shoehorn general computations into the graphics pipeline. That made what we could do on the GPU (as far as general computation goes) even more limited than it is now (besides making you want to stab yourself trying to code for it).

CUDA is not a failure.

C and CUDA are practically indistinguishable. Every student in mathematics, science, engineering, and computer science has to learn C. CUDA works with MATLAB, which is huge in the fields I just mentioned. MATLAB assured CUDA's wide acceptance in the college/university/research world. This is where companies hire their researchers from and where developers scout their new talent.

There is a lot more to this than just "Nvidia knocking on developers' doors and saying, 'here's CUDA and money, let's make a toast. Gentlemen, to Evil!'", whatever people make it out to be.

In the case of mathematics, science, and engineering using MATLAB and CUDA, if people are using it then you are absolutely correct that it isn't a failure. What I originally said doesn't dispute that at all, since Nvidia isn't paying for those people to use CUDA.

I think you missed my point. What is being discussed here is not OTHER uses for CUDA, it's the gaming uses for it. If you want to discuss OTHER uses for it, head over to the physics processing forum or the distributed computing forum =)

When he states that CUDA is a failure, I'm 100% sure he's talking about the PhysX/CUDA implementation in games being implemented due to NV "supporting" the devs.

My statement isn't restricted to any particular category, but CUDA can certainly be a success in the engineering fields while being a failure in the gaming fields.

He/she said: If Nvidia needs to pay developers to use CUDA, then CUDA is a failure, pure and simple.

Ok...so suppose Nvidia did pay the devs to use CUDA for JC2, or those devs would not have used it. Then we have CUDA being a failure. Even if nearly every other game in the universe uses CUDA without Nvidia paying them...it is a failure. That is the pure logic of it, and clearly it makes no sense from an if/then point of view. I explained that before...but I have no point. Very frustrating.

No, now you are twisting my statement to apply to a single studio/game. I said "developers". If Nvidia has to pay to get some developers to use CUDA while other devs decide to pick it up, then CUDA isn't a failure.

So far, though, only PhysX and JC2 use CUDA in games (PhysX uses CUDA). So far, GPU PhysX has been primarily (only?) used in TWIMTBP games, as is JC2. So far, CUDA in gaming is limited to games developed with Nvidia's "support".

My statement applies to the market *in general*. If nobody/very few people in a given area (gaming, for example) use the technology because they like the technology (and not because they were paid to), then that technology is a failure.

By no means is CUDA the programming language a failure if Nvidia has to pay devs to use it. I would certainly like to move past that point.....

Are you sure you know logic? You can't just say "it certainly isn't that, let's move on".

But is CUDA a failure in games? Maybe...but that was never the intended purpose, was it? The purpose was and always has been using the GPU for tasks other than graphics. If CUDA gets into games as well, then that is just an added bonus.

Games aren't just graphics. PhysX is a prime example of Nvidia's desire to use CUDA in games.

If OpenCL and other languages become more popular...that is great. But this whole argument could be done and over with if ATI would support CUDA. Then there would be a single good language that the devs of JC2 could have used to make both camps happy.

...

However, ATI would then be forced to follow Nvidia's lead. Look at Intel and AMD and SSE support. The reason AMD is always behind Intel in supporting the newest SSE versions is because Intel sits on it until it has a product on the market. You'd be a fool to think Nvidia wouldn't do something similar, if not the exact same thing.

Not to mention Nvidia would be free to require things that its cards do better than ATI's or tweak the system to favor its architecture. That would ultimately hurt consumers as ATI would be forced to follow Nvidia's lead.
 
I can prove that CUDA is free. Take a look at the license.....there is no restriction on implementation of CUDA. If ATI wanted to modify it in any way...that is something different, so in that respect CUDA is not free. But it is open for public use according to the license.

As far as the programming is concerned....it is a compiled language. C code runs differently depending on which compiler generates the assembly code, as well as the assembler and linker. But there is nothing to keep ATI from making killer optimizations so CUDA would run as fast as or faster than with nvcc, which is Nvidia's compiler.

In other words, it would be up to ATI to make it run better...Nvidia would have no control over that aspect of it. Ordering of function calls, etc....does not matter too much. There might be specific types that CUDA uses that Nvidia supports natively. That could pose some potential problems for ATI, but it could be worked around for sure.
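To make that concrete, here is a rough sketch of plain CUDA source (illustrative only, not from any real project). Nothing in it is tied to Nvidia silicon as such, which is why another vendor's compiler could, in principle, consume the same source:

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    // One thread per element; compute this thread's global index.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // y = a*x + y
}

// Host side: launch enough 256-thread blocks to cover n elements.
// saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

The <<<...>>> launch syntax is the main Nvidia-ism here, and it is exactly the kind of thing a vendor's compiler front end handles.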



That is correct as I see it, but I do not agree with your conclusion. AMD has no control over how C++ is specified, but can still use it. Same for Intel and AMD....what Intel and AMD do have control over is their respective compilers and optimizations. Same thing here.

Maybe I just don't understand the technology well enough, but wouldn't the game (keep in mind I'm looking at this from a gaming perspective) act as the compiler, and couldn't Nvidia go to the developer, much as it does now to implement CUDA features, and have the developer write the code in a way that would put ATI at a disadvantage? There are plenty of games out there that run better on one card type than the other right now.

So what is stopping ATI cards right now from working with CUDA? Why can't I just install some CUDA software and choose to run CUDA if it's entirely free?
 
Maybe I just don't understand the technology well enough, but wouldn't the game (keep in mind I'm looking at this from a gaming perspective) act as the compiler, and couldn't Nvidia go to the developer, much as it does now to implement CUDA features, and have the developer write the code in a way that would put ATI at a disadvantage? There are plenty of games out there that run better on one card type than the other right now.

So what is stopping ATI cards right now from working with CUDA? Why can't I just install some CUDA software and choose to run CUDA if it's entirely free?

CUDA requires specific hardware in order to run. ATi would have to design their next video card release with the intention of running CUDA, and nVidia has always been absent with specifics on how that would play out. They know that it would never happen, so they spin the issue to look like ATi is the bad guy and that everything would be roses and puppies if AMD agreed to work with them.
 
No, CUDA is free to use, not free to port to a new platform. That is a *huge* difference.

As far as I know....and I do understand this difference, it is free to port to a new platform.

Sorry, I must have truly missed where you explain why what I said was wrong. Can you please elaborate on why my statement was wrong? Preferably with less emotion. :)

sure...

No, now you are twisting my statement to apply to a single studio/game. I said "developers". If Nvidia has to pay to get some developers to use CUDA while other devs decide to pick it up, then CUDA isn't a failure.

That is your statement!!! You are saying your statement is false.... I am not twisting your statement at all. If Nvidia pays devs (suppose this part is true by supposing it paid a dev), then it is a failure. Nothing in your statement prevents all the other game devs from loving and using it all the time, so suppose this is true as well. That is what your statement says, specifically!!!!

You are trying to twist your damn words because you do not like being wrong...tough.

If you think you are right and can support your statement, then prove it. You have not done so... you cannot because it is wrong. Depending on your definition of success, I have PROVEN it is wrong.

My statement applies to the market *in general*. If nobody/very few people in a given area (gaming, for example) use the technology because they like the technology (and not because they were paid to), then that technology is a failure.

Fine...your statement is false in this case too, and for the same reason.

Are you sure you know logic? You can't just say "it certainly isn't that, let's move on".

I am saying that I have proven it, and even you agree it is false.

Logic 101

If P, then Q == If not Q, then not P

To prove the first form, you suppose P is true and arrive at Q. Equivalently, suppose not Q is true and arrive at not P.

If you agree that Nvidia having to pay even a single dev means it "pays devs" to use it (that is predicate calc), then let us suppose that is true. Then you must show that CUDA is a failure as a result. Even if every other dev uses CUDA for games and everything else...it still must be a failure. You can't do it...
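Written out symbolically (my own notation, just to pin down the equivalence being used):

\[
(P \rightarrow Q) \;\equiv\; (\lnot Q \rightarrow \lnot P)
\]

Here P = "Nvidia has to pay devs to use CUDA" and Q = "CUDA is a failure".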


On to a different subject from your incorrect statement until you prove it.

However, ATI would then be forced to follow Nvidia's lead. Look at Intel and AMD and SSE support. The reason AMD is always behind Intel in supporting the newest SSE versions is because Intel sits on it until it has a product on the market. You'd be a fool to think Nvidia wouldn't do something similar, if not the exact same thing.

That is fair....

Not to mention Nvidia would be free to require things that its cards do better than ATI's or tweak the system to favor its architecture. That would ultimately hurt consumers as ATI would be forced to follow Nvidia's lead.

ATI can make a compiler to deal with most issues, unless there are specific function calls to specialized hardware in Nvidia GPUs that are normally very computationally expensive to implement. But even that can be minimized via compilers to a large degree.
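To give a hypothetical example of the kind of call I mean (the kernel is made up, but __sinf() is a real CUDA intrinsic served by Nvidia's special function units):

// __sinf() is the fast hardware intrinsic; sinf() is the slower, more
// accurate software routine. A non-Nvidia compiler would have to
// emulate the intrinsic, possibly at real cost.
__global__ void fastSine(float *out, const float *in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __sinf(in[i]);
}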
 
CUDA requires specific hardware in order to run. ATi would have to design their next video card release with the intention of running CUDA, and nVidia has always been absent with specifics on how that would play out. They know that it would never happen, so they spin the issue to look like ATi is the bad guy and that everything would be roses and puppies if AMD agreed to work with them.

No it does not....

An Intel 8088 processor can run CUDA. Any machine can, but the purpose of CUDA is to get the performance of GPUs. The question is whether ATI could be at a disadvantage without specific hardware....

Possibly, yes. But it could in theory be minimized by the compiler.
 
Maybe I just don't understand the technology well enough, but wouldn't the game (keep in mind I'm looking at this from a gaming perspective) act as the compiler, and couldn't Nvidia go to the developer, much as it does now to implement CUDA features, and have the developer write the code in a way that would put ATI at a disadvantage? There are plenty of games out there that run better on one card type than the other right now.

So what is stopping ATI cards right now from working with CUDA? Why can't I just install some CUDA software and choose to run CUDA if it's entirely free?

CUDA is the language...not the binary that runs. Both Linux and Windows will run C code and Java.

But you cannot run the same compiled code on both machines. You will have to compile the code with the respective compiler first.

So for CUDA code that runs on Nvidia hardware, you would first have to have an ATI CUDA compiler. Which is certainly possible.

EDIT: I am not disputing that this could give Nvidia an advantage w.r.t CUDA code. But I am saying that ATI could minimize it if they so choose.
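For what the compile step looks like in practice (the ATI tool below is purely hypothetical; it does not exist):

# Nvidia's toolchain: nvcc lowers CUDA C to PTX, Nvidia's virtual ISA,
# and the driver finishes the compile for the actual GPU.
nvcc -ptx saxpy.cu -o saxpy.ptx

# A hypothetical ATI equivalent would lower the same source to ATI's
# own intermediate language instead:
# aticc saxpy.cu -o saxpy.il    (made-up name, for illustration only)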
 
PhysX isn't free. It's ad-supported, and in addition to terms like needing to apply in order to be allowed to use it commercially, there are many advertisement terms you need to follow:

6. Attribution Requirements and Trademark License. You must provide attribution
to NVIDIA, PhysX® by NVIDIA, and the NVIDIA PhysX SDK.

A: You will include a reference to the PhysX SDK and NVIDIA in any press releases
for such Game that relate to NVIDIA, or in-game physics, and will identify
NVIDIA as the provider of the "Physics Engine" (or such other term or phrase as
indicated by NVIDIA from time to time).
B: For Games and Demos that incorporate the PhysX SDK or portions thereof, the
NVIDIA and PhysX by NVIDIA logos must appear:
a. on the back cover of the instruction manual or similar placement in an
electronic file for the purpose of acknowledgement/copyright/trademark
notice;
b. on external packaging;
c. during opening marquee or credits with inclusion of “PhysX by NVIDIA”;
d. must appear on title marketing feature list with a specific call-out of
PhysX Technology
e. on the credit screen; and
f. in the “About” or “Info” box menu items (or equivalent) of all Physics

That sounds free to me; all you need to do is reference NV in your credits and show their logo for using their technology. That is fairly standard.

How to access the Binary PhysX SDK

The NVIDIA binary PhysX SDK is 100% free for both commercial and non-commercial use and is available for immediate download by registered PhysX developers. To become a registered PhysX Developer please complete the registration form (steps provided below) on NVIDIA's PhysX Developers Website.

The SDK contains headers, libs, dlls, samples, and documentation and the directions for obtaining access to the binary PhysX SDKs for all supported platforms are provided below:
 
No it does not....

An Intel 8088 processor can run CUDA. Any machine can, but the purpose of CUDA is to get the performance of GPUs. The question is whether ATI could be at a disadvantage without specific hardware....

Possibly, yes. But it could in theory be minimized by the compiler.

CUDA already limits the amount of CPU that can be used when running CPU-enabled PhysX. What's stopping them from doing the exact same thing, or worse, with ATi?

http://www.xbitlabs.com/news/multim...ling_Multi_Core_CPU_Support_in_PhysX_API.html

http://img94.imageshack.us/img94/830/batmanphysxsoftcpu.png

http://img690.imageshack.us/img690/9610/batmanphysxcputhreading.jpg

I'm getting tired of this argument. You know exactly why ATi does not want to work with nVidia to get CUDA on the Radeon series. The way they're using PhysX is nothing more than jabs at ATi using things that can easily be constructed on open platforms. If developers weren't getting paid off substantially, they would be adopting such practices so they can reach out to the ~30-40% of gamers that use ATi video cards.

You can spin that all you want, you can look at it from an nVidia shareholder viewpoint until the end of time, but the actions nVidia takes speak volumes more than what their PR team can dig up.
 
When there is an issue with PhysX or the 3D rendering when both are enabled in the system, who is going to support it? Is ATI going to say remove the NV card, or will NV tell you to call ATI?

This is not an excuse and stuff like this happens all the time. What if the sound drivers and the video drivers aren't getting along? Which company do you blame? What if the chipset and the videocard don't get along?

Point being that most PCs are made up of major components from several companies and the potential for conflicts is nothing new nor has anything really changed in that regard.

And neither company has to "support" anything. That doesn't mean they have to disable it :rolleyes:
 
CUDA is the language...not the binary that runs. Both Linux and Windows will run C code and Java.

But you cannot run the same compiled code on both machines. You will have to compile the code with the respective compiler first.

So for CUDA code that runs on Nvidia hardware, you would first have to have an ATI CUDA compiler. Which is certainly possible.

EDIT: I am not disputing that this could give Nvidia an advantage w.r.t CUDA code. But I am saying that ATI could minimize it if they so choose.

CUDA is not just the language, it's the entire architecture. It's not so easy to get it running well on ATI cards; NV and ATI have different architectures, and I'm pretty sure NV designs their hardware with CUDA in mind ;P There are architectural differences that give NV an advantage while running CUDA apps/games.
 
CUDA is the language...not the binary that runs. Both Linux and Windows will run C code and Java.

But you cannot run the same compiled code on both machines. You will have to compile the code with the respective compiler first.

So for CUDA code that runs on Nvidia hardware, you would first have to have an ATI CUDA compiler. Which is certainly possible.

EDIT: I am not disputing that this could give Nvidia an advantage w.r.t CUDA code. But I am saying that ATI could minimize it if they so choose.

Fair enough. Let me ask you an honest question though. Even though you appear to support CUDA and its implementation, I want to ask you this:

As a logical person with a good head on your shoulders... If you were representing ATI/AMD and faced with the decision to put your trust in a rival company's technology or push an open-source alternative, what would you do?

I can see pros and cons for both.
 
I'm getting tired of this argument. You know exactly why ATi does not want to work with nVidia to get CUDA on the Radeon series. The way they're using PhysX is nothing more than jabs at ATi using things that can easily be constructed on open platforms. If developers weren't getting paid off substantially, they would be adopting such practices so they can reach out to the ~30-40% of gamers that use ATi video cards.

You can spin that all you want, you can look at it from an nVidia shareholder viewpoint until the end of time, but the actions nVidia takes speak volumes more than what their PR team can dig up.

I agree with nearly everything you have said. But think about this....

Nvidia is really the only one who is actually pushing this tech (meaning GPGPU). I want this to succeed. I want the CPU/GPU divide to dwindle to nothing and see a three-way matchup between AMD, Intel, and Nvidia.

For that to happen, we need to move forward with this tech...and that is only going to happen if gamers get on board, since games are such a huge market.

So when Nvidia adds features to a game that would not be there otherwise, I see this as a success w.r.t. what I want. If OpenCL or DirectCompute is going to take off....it is going to need some $$$ backing it up. For that to happen, a company needs to back it....

Who is going to back something that equally benefits everyone? Not ATI, not Intel...and not Nvidia...

So we need a tech that is getting $$$....right now that only choice is CUDA.

And so CUDA is benefiting gamers everywhere by advancing the state of the art in video game tech. ATI does not support their stuff well enough for it to take off.....

If GPGPU tech were in sufficient demand, an open standard could exist on its own without money from Nvidia or anyone else. But that does not seem to be the case....so going forward, this seems like the best option.
 
That sounds free to me; all you need to do is reference NV in your credits and show their logo for using their technology. That is fairly standard.

It's not free as long as you have to accept advertisements in your game and have to advertise for them every time you speak of the game. They are not asking you to put some copyright in the game or mention them in the credit list, but to actively advertise for Nvidia and PhysX. You have to give a product (advertisement) in return for a product (PhysX). A game developer could probably sell that ad space to Starbucks or McDonald's, so it's value changing hands.
 
Fair enough. Let me ask you an honest question though. Even though you appear to support CUDA and its implementation, I want to ask you this:

As a logical person with a good head on your shoulders... If you were representing ATI/AMD and faced with the decision to put your trust in a rival company's technology or push an open-source alternative, what would you do?

I can see pros and cons for both.

I would support it at this time. The reason is that it advances the industry largely at the expense of Nvidia. As more people start using GPGPU technology, the demand for open alternatives like OpenCL will increase. Thus, in time, CUDA would die, much like Glide died with 3dfx.

But until that time, I am ok with Nvidia pushing this tech.

ATI still gets the ability to increase possible market share, but clearly Nvidia would take most of the CUDA market. ATI could change that in the future...but for now I see how it would add sales and help broaden the GPU market.
 
If GPGPU tech were in sufficient demand, an open standard could exist on its own without money from Nvidia or anyone else. But that does not seem to be the case....so going forward, this seems like the best option.

Open-standard GPGPU exists, and without $$$ to make people use it. OpenGL GPGPU in browsers and OpenCL in Apple's OS are prime examples of this. OpenGL and OpenCL stand on their own feet and don't require $$$ to back them up.
 
CUDA is not just the language, it's the entire architecture, It's not so easy to get it running well on ATI cards, NV and ATI have different architectures, and I'm pretty sure NV designs their hardware with CUDA in mind ;P there are architectural differences that give NV an advantage while running CUDA apps/games.

I think what you mean to say is that NV designs CUDA with Nvidia's GPU architecture in mind. ;) Of course that is true...

And I agree...but still. Compilers and computer science tricks exist, and they can help deal with most of that. The most expensive difference I see would come from different types (like int, double, long, float in C); CUDA has types that are built from the ground up to run on Nvidia hardware. ATI would have to deal with that, and in some cases it could be computationally expensive. Still....it could be minimized to some degree.
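For the sort of built-in types I mean, here is an illustrative sketch (float4 and make_float4 are real CUDA built-ins; the kernel itself is made up):

// float4 maps to a single 128-bit vectorized load/store on Nvidia
// hardware. Another vendor would have to map it onto whatever its own
// memory hardware does efficiently, which may or may not line up.
__global__ void scale4(float4 *v, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float4 t = v[i];   // one vectorized load
        v[i] = make_float4(t.x * s, t.y * s, t.z * s, t.w * s);
    }
}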
 
I think what you mean to say is that NV designs CUDA with Nvidia's GPU architecture in mind. ;) Of course that is true...

And I agree...but still. Compilers and computer science tricks exist, and they can help deal with most of that. The most expensive difference I see would come from different types (like int, double, long, float in C); CUDA has types that are built from the ground up to run on Nvidia hardware. ATI would have to deal with that, and in some cases it could be computationally expensive. Still....it could be minimized to some degree.

Wouldn't any wrapper enabling CUDA on ATI require it to do double duty due to conversion?
 
Open-standard GPGPU exists, and without $$$ to make people use it. OpenGL GPGPU in browsers and OpenCL in Apple's OS are prime examples of this. OpenGL and OpenCL stand on their own feet and don't require $$$ to back them up.

Nvidia is supporting OpenCL and so is AMD. MS is supporting DirectCompute....

I do not believe that no money is coming in from Nvidia and AMD for OpenCL, but I do not have proof of this. And CUDA is the most successful of all GPGPU platforms....and it is not because they need to give $$$ to devs to use it.

The $$$ to devs is for adding its use in other areas like games. I am ok with that so long as it pushes the tech....

If games start to come out with OpenCL features....I would be very happy indeed. I do not see that happening, but I also do not know the future.
 
Wouldn't any wrapper enabling CUDA on ATI require it to do double duty due to conversion?

Nope....

Once it is translated to whatever ATI runs....it is native ATI code. I suppose there could be an emulator, which would require double duty of sorts, but that is not needed.
 
I would support it at this time. The reason is that it advances the industry largely at the expense of Nvidia. As more people start using GPGPU technology, the demand for open alternatives like OpenCL will increase. Thus, in time, CUDA would die, much like Glide died with 3dfx.

But until that time, I am ok with Nvidia pushing this tech.

ATI still gets the ability to increase possible market share, but clearly Nvidia would take most of the CUDA market. ATI could change that in the future...but for now I see how it would add sales and help broaden the GPU market.

I can respect those benefits and agree with the reasons you stated. However, I just feel that if ATI put its backing behind this tech and it turned out badly for them, they would be in a hole against their primary rival that they may never be able to get out of. This is just a gut feeling based on Nvidia's actions lately, such as making it so PhysX would not work with an ATI card set as the primary GPU for no better reason than to screw over ATI card owners. I don't expect you to argue this point as it is really based off my opinion and my perception of Nvidia's actions recently.
 
I would support it at this time. The reason is that it advances the industry largely at the expense of Nvidia. As more people start using GPGPU technology, the demand for open alternatives like OpenCL will increase. Thus, in time, CUDA would die, much like Glide died with 3dfx.

But until that time, I am ok with Nvidia pushing this tech.

ATI still gets the ability to increase possible market share, but clearly Nvidia would take most of the CUDA market. ATI could change that in the future...but for now I see how it would add sales and help broaden the GPU market.

So let's say ATi develops for CUDA. Let's say it spends $20 more per GPU to have some CUDA cores and R&D is non-existent. Let's say it's a utopia, and nVidia helps them to implement CUDA cores on existing 5-series product refreshes, and let's say it costs ATi absolutely nothing to support CUDA, even though nVidia has spent millions if not billions making and marketing the tech as something that is exclusive to nVidia.

What is preventing nVidia from simply pulling the plug after it is all done, or crippling the hardware that AMD is using like they've done countless times with software implementations? You KNOW that they will do this; it just puts them in a better position to artificially make competitive hardware look worse. That's what PhysX has been from the beginning of nVidia's acquisition, not a revolution or even the start of one.

AMD would be at the absolute mercy of nVidia in every way conceivable. It will never happen, and for good, good reason.

I encourage you also to check out [H]ard's Dark Void testing, where the GTX 295 was unable to produce playable frame rates using the "high" setting for PhysX. Let me make this clear... the game looks like ASS without the PhysX added in, and it's no Crysis when you've got it cranked.

I also feel like you pick out all of my arguments that don't support what you're saying. You've been quoting 1-2 sentences, and then going off as if that was my entire point...
 
As far as I know....and I do understand this difference, it is free to port to a new platform.

Care to back that up with some actual proof/evidence?

That is your statement!!! You are saying your statement is false....

NO IT ISN'T!

You seem to miss the plural part of "developers" in my statement. It refers to developers as a general category of people using CUDA, *NOT* a specific shop. Why aren't you getting that?

I am not twisting your statement at all. If Nvidia pays devs (suppose this part is true by supposing it paid a dev), then it is a failure. Nothing in your statement prevents all the other game devs from loving and using it all the time, so suppose this is true as well. That is what your statement says, specifically!!!!

No it isn't!!!!

Fuck, I even clarified that point for you and you still aren't getting it.

You are trying to twist your damn words because you do not like being wrong...tough.

I haven't changed what I said at all. I've attempted to clarify the generalities of the statement for you, but you don't seem to get that either.

If you think you are right and can support your statement, then prove it. You have not done so... you cannot because it is wrong. Depending on your definition of success, I have PROVEN it is wrong.

You haven't proven shit.

And again, so far all uses of CUDA in games have been in TWIMTBP games. Games with Nvidia's hands in their development.

Fine...your statement is false in this case too, and for the same reason.

The reason being you said so? You really haven't done anything to actually counter the point and explain why Nvidia needing to pay developers to use CUDA doesn't mean that CUDA is a failure.

You instead want to try and pick apart the technicalities of the statement, convert it into a logical expression, and then attempt to find a false situation that goes against the spirit of the statement. So far you have done nothing to counter the idea behind the statement.

I am saying that I have proven it, and even you agree it is false.

And on that you are simply wrong. I still stand completely behind what I initially said.

Logic 101

If P, then Q == If not Q, then not P

To prove the first form, you suppose P is true and arrive at Q. Equivalently, suppose not Q is true and arrive at not P.

If you agree that Nvidia having to pay even a single dev means it "pays devs" to use it (that is predicate calc), then let us suppose that is true. Then you must show that CUDA is a failure as a result. Even if every other dev uses CUDA for games and everything else...it still must be a failure. You can't do it...

A single is not a plural, so you're wrong there. And again, you are trying to force my statement to be wrong on a technicality that results from you misunderstanding the statement, and not from the actual assertion itself.

Again, the statement applies to the overall market of a given field. Developers is a general category. For example, the JC2 devs are a subset of the "developers" in my statement. The "developers" I was referring to are *all* game developers as a group.

On to a different subject from your incorrect statement until you prove it.

*sigh*

If you fail to grasp the concept of what I said after this response, I think I'm done trying.

ATI can make a compiler to deal with most issues, unless there are specific function calls to specialized hardware in Nvidia GPUs that are normally very computationally expensive to implement. But even that can be minimized via compilers to a large degree.

Not so much. Nvidia can (and probably did) make a new version of CUDA that exploits things like Fermi's cache architecture. A compiler can't just compile that difference away. CUDA is more than just a language, hell CUDA isn't a language at all.
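A sketch of the kind of architecture-dependent code in question (illustrative; the configurable shared-memory/L1 split is the sort of Fermi feature meant here, and the kernel assumes 256-thread blocks):

// CUDA code tuned to on-chip shared memory, the kind of hardware
// structure a rival's compiler can't simply "optimize away" if the
// silicon underneath has no equivalent.
__global__ void blockSum(const float *in, float *out)
{
    __shared__ float buf[256];   // on-chip SRAM, one buffer per block
    int tid = threadIdx.x;
    buf[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();             // barrier across the block

    // Tree reduction inside the block, entirely in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            buf[tid] += buf[tid + stride];
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = buf[0];
}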
 
Not so much. Nvidia can (and probably did) make a new version of CUDA that exploits things like Fermi's cache architecture. A compiler can't just compile that difference away. CUDA is more than just a language, hell CUDA isn't a language at all.

Exactly. How would AMD compete with patented CUDA core technology? It would cost billions to implement.
 
Care to back that up with some actual proof/evidence?

Already done...

NO IT ISN'T!

You seem to miss the plural part of "developers" in my statement. It refers to developers as a general category of people using CUDA, *NOT* a specific shop. Why aren't you getting that?
No it isn't!!!!

Fuck, I even clarified that point for you and you still aren't getting it.

Yes it is...now who is getting emotional because they are wrong????

I haven't changed what I said at all. I've attempted to clarify the generalities of the statement for you, but you don't seem to get that either.

You fail to understand what you claim you do. If I say "there are blue cars", that means there is one or more blue cars. All it takes is one blue car to make it true. If there are fifty or a billion, it does not matter...just the one does it.

Your statement is that "if Nvidia has to pay devs", which is true if and only if there is one or more devs. That is what predicate calc is all about. Go check your book...

If you meant to say that "If Nvidia has to pay all devs,..." that is completely different and not comparable. If you do not get that....too bad. I get what you said....you do NOT get to tell me I have bad reading comprehension when I am going off of what you said, and not what you meant. Your statement is exactly what I said it is, and that is exactly why it is wrong. You see that, but you fail to understand logic. (Not trying to brag or belittle anyone or toot my own horn, but I am nearly finished with a master's in pure math and I have a double major undergrad in math/physics with a minor in comp sci. That means I know predicate calculus and set theory par excellence and do not feel I need to justify this point further.)

Your original statement is wrong. You are clearly getting emotionally flustered because you do not like being wrong...too bad.

Look up the quantifiers: "for all", an upside-down A, and "there exists", the backward E. Under "exists", which is the existential quantifier, if you say there are some, you mean one or more. That covers any plural...as you used. Your failure to understand predicate calc while thinking you know it is the cause of this argument. If you meant "for all", you need to say that, or your statement is ambiguous and vague and leads to misunderstanding...
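In symbols, the two readings at issue (the predicate names are mine, purely for illustration):

\[
\big(\exists d\,\mathrm{Paid}(d)\big) \rightarrow \mathrm{Failure}
\qquad \text{vs.} \qquad
\big(\forall d\,\mathrm{Paid}(d)\big) \rightarrow \mathrm{Failure}
\]

where d ranges over game developers and Paid(d) means "Nvidia has to pay d to use CUDA". You wrote the first; if you meant the second, you needed to say so.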



You haven't proven shit.

Ok....:p


And again, so far all uses of CUDA in games have been in TWIMTBP games. Games with Nvidia's hands in their development.

Besides JC2, where has CUDA been used in games?

The reason being you said so? You really haven't done anything to actually counter the point and explain why Nvidia needing to pay developers to use CUDA doesn't mean that CUDA is a failure.

Yes I have...read the thread from the start...I have repeatedly discussed why I am ok with Nvidia doing this.

You instead want to try and pick apart the technicalities of the statement, convert it into a logical expression, and then attempt to find a false situation that goes against the spirit of the statement. So far you have done nothing to counter the idea behind the statement.

I have...you were the one who tried to tell me how your if/then does not mean this or that, which is not in the spirit of the discussion. My first response to your statement was in the spirit of the discussion. The derailment started with your response to my initial response to your response to one of my earlier posts. :p
 
I am talking about CUDA only....

I still do not think PhysX == CUDA. I do not care about PhysX except that it uses a GPU to do some other neat stuff.

How can you support CUDA being integrated into games when there's only one title that uses it? CUDA and PhysX go hand in hand.
 
How can you support CUDA being integrated into games when there's only one title that uses it? CUDA and PhysX go hand in hand.

I support gpgpu, in games or otherwise. I see CUDA as the heart of that.

I would rather see CUDA be moved completely to open source with an independent committee to further its development. But I most of all want the market of gpgpu to expand. Games are a big market so....that is why I am ok with it....
 
It didn't start out that way when Ageia created it; it does now, obviously, since NV owns it. I'm just saying, CUDA and PhysX are two totally separate things.

I should have rephrased it as "since the adoption by nvidia", I suppose. My point is simply that they are more or less the same thing when it comes to implementation in games.

I support gpgpu, in games or otherwise. I see CUDA as the heart of that.

I would rather see CUDA be moved completely to open source with an independent committee to further its development. But I most of all want the market of gpgpu to expand. Games are a big market so....that is why I am ok with it....

I don't see why you wouldn't support AMD's decision to go to OpenGL then! I realize that you want it in the long run, but AMD adopting nVidia's proprietary stuff doesn't really get us too much further down the road.
 
I support gpgpu, in games or otherwise. I see CUDA as the heart of that.

I would rather see CUDA be moved completely to open source with an independent committee to further its development. But I most of all want the market of gpgpu to expand. Games are a big market so....that is why I am ok with it....

That's what OpenGL and OpenCL do under Khronos.
 
I should have rephrased it as "since the adoption by nvidia", I suppose. My point is simply that they are more or less the same thing when it comes to implementation in games.



I don't see why you wouldn't support AMD's decision to go to OpenGL then! I realize that you want it in the long run, but AMD adopting nVidia's proprietary stuff doesn't really get us too much further down the road.

OpenGL is graphics. OpenCL is gpgpu.

And I support it, but only if it gets used. That means AMD will have to do more than sit around and see if devs will support it. They will have to go and get devs to use the technology.

Would AMD spend its money to help OpenCL when Nvidia has the same support?
 
I'm hopeful that Apple helps put a little more light on OpenCL-based development as it's built right into the framework in Snow Leopard. Apart from that, it seems as if there's little momentum.
 
I'm hopeful that Apple helps put a little more light on OpenCL-based development as it's built right into the framework in Snow Leopard. Apart from that, it seems as if there's little momentum.
Sure, it might take some flight on OSX, but with pretty much all of the bigger players (Adobe?) not relying on OSX only, it might be limited to Final Cut and other Apple products.


I think what you mean to say is that NV designs CUDA with Nvidia's GPU architecture in mind. ;) Of course that is true...

And I agree...but still. Compilers and computer science tricks exist, and they can help deal with most of that. The most expensive difference I see would come from different types (like int, double, long, float in C); CUDA has types that are built from the ground up to run on Nvidia hardware. ATI would have to deal with that, and in some cases it could be computationally expensive. Still....it could be minimized to some degree.

It goes both ways, since NV develops both CUDA and the hardware it runs on =) Look at the big picture, not the small one. But yeah: make software for the hardware, then make hardware to run the software better =p
 
OpenGL is graphics. OpenCL is gpgpu.

And I support it, but only if it gets used. That means AMD will have to do more than sit around and see if devs will support it. They will have to go and get devs to use the technology.

Would AMD spend its money to help OpenCL when Nvidia has the same support?

OpenGL is used for GPGPU as well. Take Adobe's CS4 or Mozilla Firefox as examples where this is used.

As for OpenCL, AMD and Nvidia have both vowed to support it. They are both part of the Khronos Group. Several major programs have also signaled that they are working on support for it (a quick Google shows PowerVR, SiSoftware Sandra, MPlayer, Bullet physics, and more), and as mentioned, OSX already has native support for it.

The broader the support for OpenCL and OpenGL, the more redundant CUDA will be, IMO.
 
I am talking about CUDA only....

I still do not think PhysX == CUDA. I do not care about PhysX except that it uses a GPU to do some other neat stuff.

Regardless of what you think, GPU PhysX == CUDA. So besides JC2, here is a list of games using CUDA:

http://www.nzone.com/object/nzone_physxgames_home.html

Yes it is...now who is getting emotional because they are wrong????

I'm not getting emotional, I'm getting frustrated. You are frustrating.

You fail to understand what you claim you do. If I say "there are blue cars", that means there is one or more blue cars. All it takes is one blue car to make it true. If there are fifty or a billion, it does not matter...just the one does it.

Actually, if you say there are blue cars there must be *more than one*. One car is not cars plural.

Your statement is that "if Nvidia has to pay devs", which is true if and only if there is one or more devs. That is what predicate calc is all about. Go check your book...

Except I made my statement in English. English does not directly map to predicate calc.

If you meant to say that "If Nvidia has to pay all devs,..." that is completely different and not comparable.

Not only did I mean that, that is what I said and what I have clarified 3 times now. Developers is a group of people. Again, my statement was to the group as a whole.

If you do not get that....too bad. I get what you said....you do NOT get to tell me I have bad reading comprehension when I am going off of what you said, and not what you meant. Your statement is exactly what I said it is, and that is exactly why it is wrong. You see that, but you fail to understand logic. (Not trying to brag or belittle anyone or toot my own horn, but I am nearly finished with a master's in pure math and I have a double major undergrad in math/physics with a minor in comp sci. That means I know predicate calculus and set theory par excellence and do not feel I need to justify this point further.)

So your problem is that you are approaching English as a math or logic statement, which English is not.

Your original statement is wrong. You are clearly getting emotionally flustered because you do not like being wrong...too bad.

Except you have done nothing to prove that I am wrong. You say you only have to show that one dev uses CUDA that wasn't paid, but you haven't even done that (sticking to the games category here).

Look up the quantifiers: "for all", an upside-down A, and "there exists", the backward E. Under "exists", which is the existential quantifier, if you say there are some, you mean one or more. That covers any plural...as you used. Your failure to understand predicate calc while thinking you know it is the cause of this argument. If you meant "for all", you need to say that, or your statement is ambiguous and vague and leads to misunderstanding...

Those apply to predicate calc, not to English. You are mixing your languages.

I have...you were the one who tried to tell me how your if/then does not mean this or that, which is not in the spirit of the discussion. My first response to your statement was in the spirit of the discussion. The derailment started with your response to my initial response to your response to one of my earlier posts. :p

My response about the if/then was because you went off on some tangent about something I never said.

So far you haven't even disagreed with my idea, you've just nitpicked the statement itself.
 
Despite all the handwaving going on about whether or not CUDA is free to develop with or implement, I don't see anything in the documentation presented about hardware implementations of CUDA and the licensing thereof. The SDK is a Software Development Kit, which is designed to work with the nVidia driver model to convert CUDA code into the appropriate instructions to run on the hardware of the nVidia graphics unit. Not to belabor the obvious, but ATI's hardware handles instructions differently than nVidia's hardware does. Do you honestly think there is no fee associated with using a CUDA interpreter on hardware other than nVidia's GPUs?

To the person speculating as to whether ATI is answering nVidia's playbook with a counter move: ATI is. It's called adopting OpenCL. However, the point here is that ATI (in the GPU space) is using a vendor-neutral language to do the same thing. A win for ATI here is also a win for nVidia, because nVidia is a member of the same group that promotes OpenCL.
 