DirectCompute vs. CUDA vs. OpenCL

Wow, I was not expecting the ATI cards to perform that much better than nVidia's with everything else equal. Pisses me off that the green team gets exclusive eye candy though! They need to quit that shit, and soon!

But I am sure NVIDIA owners are happy that they get exclusive features. Hell, they paid for it when they bought their GPUs.

At least NVIDIA puts $ in product where their marketing is.

As an nVIDIA owner, I am happy that there are products that leverage my gpu besides gaming, and also features in games that I can leverage. Mind you, there are things missing in NVIDIA gpus.

As an AMD owner, I am happy that there is reasonable performance and power consumption. Mind you, there are things missing from AMD gpus.

As a consumer, I am happy that I have choice.

Linux might be great and open, but where are the games on it? Should NVIDIA spend money to leverage Linux? Sorry, I "choose" a closed OS (Windows 7) because there are many good games on it.
 
I'm not getting emotional, I'm getting frustrated. You are frustrating.

You are frustrating me. ;)

Let me make myself clear. In English (according to the common usage) if you say "some", that means one or more. If I say there are blue cars, that DOES NOT require there to be more than one. It DOES NOT mean strictly more than one and it does not mean all. You are mincing words because your statement is wrong as it stands, and you cannot stand being wrong, clearly.

You have never studied any kind of math or computer science at a theoretical level if you fail to understand this.


Actually, if you say there are blue cars there must be *more than one*. One car is not cars plural.

Please, please read this...
http://en.wikipedia.org/wiki/Existential_quantification

Read that article...and then tell me you are right...go ahead...try....you cannot unless you completely do not understand English. You made an existential quantification. Or are you saying you made a universal quantification? That would be even more absurd than anything you have said to this point. It is one or the other....period. Pick it and stick with it.
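
To spell out the two readings (my own notation, purely for illustration):

"There are blue cars" = ∃x (Car(x) ∧ Blue(x)) - true as soon as ONE blue car exists.
The universal reading = ∀x (UsedCUDA(x) → PaidByNV(x)) - false as soon as ONE unpaid dev exists.

One or the other. There is no third option.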


Except I made my statement in English. English does not directly map to predicate calc.

It has a very common usage that does map directly into predicate calc. Especially when you drop in if/then statements. At that point the map is isomorphic...
I have two book cases full of books on advanced math that all use English to make statements in predicate calc and set theory.

Not only did I mean that, that is what I said and what I clarified 3 times now. Developers is a group of people. Again, my statement was to the group as a whole.

That needs to be stated as such. And that is even more bull...but true for very stupid reasons.

If you mean all game devs, then your statement is vacuously true and I will never bother discussing anything with you again because such a stupid statement does not deserve the electronic ink used to process it.


Except you have done nothing to prove that I am wrong. You say you only have to show that one dev uses CUDA that wasn't paid, but you haven't even done that (sticking to the games category here).

If I did that I would not prove your statement is false, but prove it to be true. And I can give you an example; a buddy of mine is making a chess game using CUDA. No money from NV.

EDIT: if you think I am wrong...you had better write a letter addressed to the American Mathematical Society to explain how all the math books need to be corrected to deal with your version of English.
 
You have never studied any kind of math or computer science at a theoretical level if you fail to understand this.

I am sorry but you are not correct.

He could have studied some kind of math or computer science at a theoretical level and still not understand it.

For example, he could have been not too smart and failed the courses in question.

Sorry sir, your logic is wrong. :p
 
I am sorry but you are not correct.

He could have studied some kind of math or computer science at a theoretical level and still not understand it.

For example, he could have been not too smart and failed the courses in question.

Sorry sir, your logic is wrong. :p

Fair enough....lol:p
 
You are frustrating me. ;)

Let me make myself clear. In English (according to the common usage) if you say "some", that means one or more. If I say there are blue cars, that DOES NOT require there to be more than one. It DOES NOT mean strictly more than one and it does not mean all. You are mincing words because your statement is wrong as it stands, and you cannot stand being wrong, clearly.

Correct, "some" means at least one. But I didn't use the word "some" and neither did you, so I'm not sure how this is relevant?

You have never studied any kind of math or computer science at a theoretical level if you fail to understand this.

Because theoretical math and science are prerequisites to learning English?

But yes, I have studied math and computer science at theoretical levels.

Please, please read this...
http://en.wikipedia.org/wiki/Existential_quantification

Read that article...and then tell me you are right...go ahead...try....you cannot unless you completely do not understand English. You made an existential quantification. Or are you saying you made a universal quantification? That would be even more absurd than anything you have said to this point. It is one or the other....period. Pick it and stick with it.

Again, you are confusing your languages. Existential quantification is a predicate logic rule, *NOT* an English rule.

In English, a singular is never a plural. They are mutually exclusive. One car can never be cars. If you say "look at those cars", there *must* be more than one car (or the person making the statement was incorrect).

It has a very common usage that does map directly into predicate calc. Especially when you drop in if/then statements. At that point the map is isomorphic...
I have two book cases full of books on advanced math that all use English to make statements in predicate calc and set theory.

You must first convert English idioms into predicate calc ones, which you didn't do.

That needs to be stated as such. And that is even more bull...but true for very stupid reasons.

If you mean all game devs, then your statement is vacuously true and I will never bother discussing anything with you again because such a stupid statement does not deserve the electronic ink used to process it.

Exactly, it didn't need this massive debate because it was such a simple and basic point. Now you are finally getting it, although for the record it doesn't need to be *all* game devs, just the vast majority.

EDIT: if you think I am wrong...you had better write a letter addressed to the American Mathematical Society to explain how all the math books need to be corrected to deal with your version of English.

No, I just need to continually remind you that English doesn't directly map to math and predicate logic statements, as English has things like idioms.
 
Exactly, it didn't need this massive debate because it was such a simple and basic point. Now you are finally getting it, although for the record it doesn't need to be *all* game devs, just the vast majority.

First of all:
All math, physics, economics, theory of comp,...,etc..., books I have use English the way I use it because any other interpretation is essentially stupid. I was giving you the benefit of the doubt. I can list a ton of authors. There is no mapping between English and logic given in any of these books because it is obvious how to interpret them.

Second of all:
If you are going to make vague statements, or vacuous statements, why make them at all?

Example of equally stupid statement:
If I am God, then kllrnohj's statement, which this long discussion is about, is false.

That is a true statement, but only because I am not God. Which is the same reason your statement is true. But it is also completely pointless, just like your statement....:rolleyes:
(to be clear, your statement is true b/c Nvidia has not paid all game devs to use CUDA who have used CUDA, and for NO other reason.)

And for the record..you do need to specify your domain.
 
Who is going to back something that equally benefits everyone? Not ATI, not Intel...and not Nvidia...

What sports team would play on a neutral field where nobody has an advantage? Playing on your own field slanted in your favor (CUDA) isn't likely to attract a lot of others. If it's a fair playing field, then the winner wins by playing better (having a better product). If everyone wants to get together and truly determine who's the best, everyone usually will only agree to a neutral arena. Most people don't look at one team playing on a lopsided field where nobody else will bother to challenge them and accept that team as being the best.

OpenCL ensures that one company can't arbitrarily change the rules of the game in their favor. Nvidia could do with CUDA what others have suggested - design their own cards to work very well with it while holding back details from ATI and stuff like that. It really is an unfair game for ATI in that case. ATI would end up simply copying Nvidia's designs and we would lose innovation in the market simply because ATI is forced to mimic Nvidia to get good performance on Nvidia's CUDA.

Right now is the best time to switch from a proprietary system like CUDA to something like OpenCL, before things get too involved. The longer the proprietary system is used, the more people will use it, and the harder it will be to switch. ATI supporting CUDA rather than OpenCL would only accelerate this. Right now it sucks for the consumer, basically having to choose between CUDA which has decent real-world support, or OpenCL which should make the whole situation much better for the consumer in the end - simply buy the card that gives you the most for your money and not worry about which card allows for special features in the game (which may or may not actually be possible to run on a different card), since OpenCL can run on anything. While OpenCL will work on either brand, buying Nvidia for PhysX is an implicit vote for closed, proprietary systems.



http://www.extremetech.com/article2/0,2845,2324555,00.asp is an article from almost two years ago.
Nvidia "owns" and controls the future of CUDA, so it's not open in the "open source" definition, but it's certainly free. Nvidia tells us it would be thrilled for ATI to develop a CUDA driver for their GPUs.

But what about PhysX? Nvidia claims they would be happy for ATI to adopt PhysX support on Radeons. To do so would require ATI to build a CUDA driver, with the benefit that of course other CUDA apps would run on Radeons as well. ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable—it would work out to less than pennies per GPU shipped.
I can't verify all that's true, but I see no reason it wouldn't be. According to this, CUDA is free as in beer for ATI, but PhysX would need to be licensed at a (small) cost.
Keosheyan says, "We chose Havok for a couple of reasons. One, we feel Havok's technology is superior. Two, they have demonstrated that they'll be very open and collaborative with us, working together with us to provide great solutions. It really is a case of a company acting very independently from their parent company. Three, today on PCs physics almost always runs on the CPU, and we need to make sure that's an optimal solution first." Nvidia, he says, has not shown that they would be an open and truly collaborative partner when it comes to PhysX. The same goes for CUDA, for that matter.

Though he admits and agrees that they haven't called up Nvidia on the phone to talk about supporting PhysX and CUDA, he says there are lots of opportunities for the companies to interact in this industry and Nvidia hasn't exactly been very welcoming.

That matches up with what a lot of people here seem to be thinking. While Nvidia PR can say that they'd love to be friends with ATI and just have everyone get along and license things for pennies, their recent actions indicate that they'd be more likely to be shady in dealing with ATI on this. Holding back CUDA details from ATI while they work on their own stuff, specifically designing things to not work well with ATI's hardware, etc.

To sum up, Keosheyan assures us that he's very much aware that the GP-GPU market is moving fast, and he thinks that's great. AMD/ATI is moving fast, too. He knows that gamers want GPU physics and GP-GPU apps, but "we're devoted to doing it the right way, not just the fast way."

While Nvidia may be trying to paint ATI as hating consumers right now, I really think that they're doing the right thing. It'll take a while to build that neutral arena next to Nvidia's existing field, but in the end consumers will get to see two teams battle it out based on who's better, not one team screwing over the other with an unfair advantage.


I'm not a programmer, but based on everything I've seen, CUDA and OpenCL seem to be similar code. http://developer.download.nvidia.com/OpenCL/NVIDIA_OpenCL_JumpStart_Guide.pdf for example. Since ATI and Nvidia both have OpenCL drivers now, I don't see a whole lot of reason for devs to limit themselves to CUDA now (which is Nvidia-only) when they can use OpenCL and run it on Nvidia or ATI cards, or even CPUs. Assuming similar effort to program them and similar performance of the final product, it just seems like better business to use the one that inherently works on anything rather than tying yourself to one specific brand (which seems to be having some issues lately).
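
For example, here is what that similarity looks like for a trivial vector-add kernel (my own sketch, not copied from the guide; even a non-programmer can see the resemblance). The CUDA version compiles as a .cu file with nvcc, and the OpenCL equivalent is shown in the comment:

// CUDA version of the kernel:
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // which element this thread handles
    if (i < n)
        c[i] = a[i] + b[i];
}

// OpenCL version of the same kernel (kept as a .cl source string and
// compiled at runtime by whichever driver is present - Nvidia, ATI, or a CPU):
//
// __kernel void vecAdd(__global const float *a, __global const float *b,
//                      __global float *c, int n)
// {
//     int i = get_global_id(0);
//     if (i < n)
//         c[i] = a[i] + b[i];
// }

The host-side setup differs more (OpenCL builds its kernels at runtime and works through command queues), but the kernel code itself is nearly line-for-line the same.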



Windows
Direct 3D
Direct Compute
.net
Java
OSX
Cocoa
Carbon
Flash
CUDA

Oh my, all closed source! They are all evil!!! Let's all move to the dying platform of Linux (as a user OS, not server side)/OpenGL/OpenCL! Oh wait, there aren't any games there...

This is a bit chicken & egg, just like the CUDA stuff. Devs write games for Windows because that's what consumers use. People use Windows because that's what devs write games for. If every game you wanted to play were available on Linux and ran equally well, would you still use Windows? Note that some Windows games running on Linux/WINE have actually been faster than running them from Windows, due to Linux being more efficient. If games were written in OpenGL/OpenCL, they'd be much easier to port to Linux and Mac. If there were more games available, more people would use them, meaning more people would develop for them, etc.

While the current situation may be acceptable, I think that more open systems would lead to more competition, leading to great things rather than just "acceptable".



FYI, in every distributed computing app but F@H, ATI seems to beat Nvidia pretty soundly. In dnetc, the HD5770 was about 3x the GTX285. The HD5870 was about 6x (which matches up well to being basically double the HD5770). The GTX285 matched up with other posted numbers, and was about 3x the 9800GT. In one OpenCL benchmark, the HD5850 is about 20% faster than a pair of GTX295's (4 GPUs). The GTX480 is about 20% faster than the HD5850, and the HD5870 is about 10% faster than the GTX480. However, in Anand's tests, the GTX480 beat up the ATI cards. Even the GTX285 beat the HD5870. "Due to the significant underlying differences of AMD and NVIDIA’s shaders, even with a common API like OpenCL the nature of the algorithm still plays a big part in the performance of the resulting code, so that may be what we’re seeing here." I think you'll find that ATI and Nvidia will each have their strong and weak points, simply because there are differences between the cards. However, in a lot of the apps with both ATI and Nvidia options, ATI seems to do better.
 
...snip...

Nice post...I agree with most of what you are saying.

But I am not so sure a natural progression will lead to more apps for gpgpu. The software is going towards parallelism no matter what....F#, more OpenMP, etc...

The more I think about it....I get the feeling you might be right. It might be best to let things happen on their own. Let the chips fall where they will....but I can totally understand why NV is pushing this tech. They cannot compete with CPUs and if the future is more CPU than GPU...Nvidia is dead. But if there are a lot of good gpu apps, then NV might survive....


 
First of all:
All math, physics, economics, theory of comp,...,etc..., books I have use English the way I use it because any other interpretation is essentially stupid. I was giving you the benefit of the doubt. I can list a ton of authors. There is no mapping between English and logic given in any of these books because it is obvious how to interpret them.

And they are intended to be interpreted that way because they are technical books.

Out here in the real world you have to deal with idioms.

Second of all:
If you are going to make vague statements, or vacuous statements, why make them at all?

Because it wasn't vague or vacuous?

Example of equally stupid statement:
If I am God, then kllrnohj's statement, which this long discussion is about, is false.

No, that is just stupid. What I said isn't stupid, it's pointing out something that ideally would be common sense.

That is a true statement, but only because I am not God. Which is the same reason your statement is true. But it is also completely pointless, just like your statement....:rolleyes:
(to be clear, your statement is true b/c Nvidia has not paid all game devs to use CUDA who have used CUDA, and for NO other reason.)

All *published* games using CUDA have been paid by Nvidia to do so at this point. So far, CUDA is a failure in the game world.

And for the record..you do need to specify your domain.

If I was doing a formal definition, correct, but I wasn't. And I did specify my domain (3 times no less), you just decided to ignore it repeatedly and define random words (like "some").
 
And they are intended to be interpreted that way because they are technical books.

Out here in the real world you have to deal with idioms.

If you say so...

I have never had such an absurd time making sense of someone's statement and then been told it is because I failed at reading comprehension because I missed the idioms....that is just nuts!!!! I have always been precise so as to avoid confusion and feel that clarity is the responsibility of the author, not the reader.

But whatever...

Correct me if this is wrong.

If at time t NV has had to pay for all published games up to time t to use CUDA, then CUDA is a failure in the gaming world for that time only.

And do you mean to say this is good logic only with respect to APIs and nothing else? (which is debatable to the extreme...)

Now...if that is correct, explain what is and what is not a failure, and I want to make sure your definition agrees with what we should think of as failure/success. Do not give me a circular definition, i.e. saying it is not a success. EDIT: BTW a definition does NOT have any restrictions on its use. I should be able to apply your definition to cars, people, birds, life,...whatever...

At that point we can proceed, but the burden of proof is on you to demonstrate your assertion. Which, even in this much restricted form, I see as false unless you now restrict the definition of failure to something outlandish.
 
All *published* games using CUDA have been paid by Nvidia to do so at this point. So far, CUDA is a failure in the game world.
Your definition of failure is out of whack with reality.

Those games made money, so they were not a failure to the publisher or developer. They were enjoyed by ATI and Nvidia consumers alike, so they were not a failure to the gamer. They sold some more graphics cards for Nvidia, so they were not a failure to Nvidia. Is it just that you are such a butthurt ATI fan that because ATI lost a few video card sales you have to come onto an internet forum and bitch about how unfair it is?
 
Your definition of failure is out of whack with reality.

Those games made money, so they were not a failure to the publisher or developer. They were enjoyed by ATI and Nvidia consumers alike, so they were not a failure to the gamer. They sold some more graphics cards for Nvidia, so they were not a failure to Nvidia. Is it just that you are such a butthurt ATI fan that because ATI lost a few video card sales you have to come onto an internet forum and bitch about how unfair it is?

Where did he say the games failed? He said CUDA is a failure because NV paid the game devs to use CUDA (as in they wouldn't have used it otherwise).
 
I think kllrnohj's point is easily understood.

However, like Charlie and his "unmanufacturable", it can also for some odd reason be easily misunderstood. Reasons for such misunderstandings are pretty clear.
 
Where did he say the games failed? He said CUDA is a failure because NV paid the game devs to use CUDA (as in they wouldn't have used it otherwise).

So if Nvidia spends 1 million dollars on putting CUDA into games and that causes them to sell more than 1 million dollars worth of video cards, then is it still a failure? No sane person would say that it was. So then I can assume someone has evidence that CUDA isn't paying for itself with video card sales?
 
So if Nvidia spends 1 million dollars on putting CUDA into games and that causes them to sell more than 1 million dollars worth of video cards, then is it still a failure? No sane person would say that it was. So then I can assume someone has evidence that CUDA isn't paying for itself with video card sales?

You're assuming that the $1 million of video card sales is related to CUDA being in those games, and it's probably not, since the very niche market of people who use it are mostly folders and not gamers :p

BUT nonetheless, if it works out well enough to cause them to sell more cards, sure. But I doubt it will in the end; unless both ATI and NV adopt the same "standard" it won't be widespread in games enough to influence the game devs, and you'll only end up seeing small uses like cooler water (the water already looked good btw, the biggest diff is how the water looks from under the water IMO)
 
When I judge a company's products I say they are failures if they did not accomplish the goals the company establishes for that product. In some cases that is making money. In some cases it is to hurt the competition. In other cases it is to help expand a market....I can go on..and on...

In this case CUDA exists to help expand the gpu market. When it gets used, even if NV has to pay for it, it accomplishes that goal. That, to me, makes this use a success. That is a fair comparison that is open to use in other areas, so we can judge if such a statement of success agrees with our connotation (what we would normally think) of the word success.

I disagree wholeheartedly with the negative connotation derived from the use of the word failure in this case. What is being described as failure is nothing of the sort that I know of. I am a rational and, I like to think, intelligent person. I am not being unreasonable with this.

We can say something is neither successful, nor a failure but somewhere in between...we can say a lot of things. But in the end, our statements should be clear and meaningful enough that they can be fully evaluated to true or false, at least in theory.

Without knowing precisely what is meant by failure in this case, I have no means of knowing if that statement is true or false. I feel like it should be false, but can do nothing more with this highly contentious statement unless the author of the statement fully clarifies what is meant.

It is generally accepted that the person who makes a statement must prove that statement. This has not been done. I have gone on the offensive and said I think this is false, and so I should have to prove that, and attempted to do so, but that was when I thought I understood what this statement was saying. I think it is a false statement with any reasonable definition of the word failure, but it is up to the author to allow me to prove or disprove the statement. Right now NOBODY except the author is capable of knowing if the statement is true or false.
 
You're assuming that the $1 million of video card sales is related to CUDA being in those games, and it's probably not, since the very niche market of people who use it are mostly folders and not gamers :p

BUT nonetheless, if it works out well enough to cause them to sell more cards, sure. But I doubt it will in the end; unless both ATI and NV adopt the same "standard" it won't be widespread in games enough to influence the game devs, and you'll only end up seeing small uses like cooler water (the water already looked good btw, the biggest diff is how the water looks from under the water IMO)

It would be so hard to figure out if CUDA was selling cards for them. You could look at market share and card sales before and after CUDA starts showing up in games and try to draw a conclusion that way but that would be a stretch when you consider all the variables.

Personally I would not buy a game or card because of CUDA features, at least as I see them currently. If for some reason CUDA shows it can do something that hasn't been done without CUDA and I need hardware to use it then the story might change. If that's the case I'll be pissed about being forced into going with one hardware vendor.
 
You're assuming that the $1 million of video card sales is related to CUDA being in those games, and it's probably not, since the very niche market of people who use it are mostly folders and not gamers :p

BUT nonetheless, if it works out well enough to cause them to sell more cards, sure. But I doubt it will in the end; unless both ATI and NV adopt the same "standard" it won't be widespread in games enough to influence the game devs, and you'll only end up seeing small uses like cooler water (the water already looked good btw, the biggest diff is how the water looks from under the water IMO)

I doubt it is the sole reason anyone buys a video card. However, when video cards are in a state like the 4870/4890 vs 260/275 was, when price and performance were very similar, it could easily be the reason to push someone "over the edge" one way or another.

I do know that CUDA was developed for the GPGPU market, so the only real costs are the CUDA programmers they are putting on loan to developers. If a good game sells 100M copies, and it causes 1% of its buyers to upgrade video cards, and 1% of those upgrades are "convinced" to buy an Nvidia card instead of an ATI card because of it, that's 10,000 video cards. At an average profit of $50 a unit, that's still $500K because of CUDA. I don't think those numbers are outlandish by any stretch. 1% of 1%? That's pretty small.
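
(Spelling that arithmetic out with the same hypothetical numbers: 100,000,000 copies x 1% upgrading x 1% swayed = 10,000 cards, and 10,000 cards x $50 = $500,000.)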
 
It would be so hard to figure out if CUDA was selling cards for them. You could look at market share and card sales before and after CUDA starts showing up in games and try to draw a conclusion that way but that would be a stretch when you consider all the variables.

Personally I would not buy a game or card because of CUDA features, at least as I see them currently. If for some reason CUDA shows it can do something that hasn't been done without CUDA and I need hardware to use it then the story might change. If that's the case I'll be pissed about being forced into going with one hardware vendor.

I know for a fact CUDA is selling some cards. I know of several places with tens of thousands of dollars worth of NV Tesla cards. All to run CUDA apps.

Are more gamers going to buy NV due to CUDA? I dunno....but I know a lot of gamers here are also into folding, and NV cards are good for that, and that is CUDA. So that might translate into more card sales.

Is this game going to sell more CUDA cards? That is certainly a hard one to call. I think there is some backlash from the devs using CUDA....but there are also people who think that backlash is an overreaction and see this as a nice benefit for themselves.

And we may not be able to calculate if this is an overall success or failure according to your standards in practice, but at least we have a means to do so in theory.
 
Where did he say the games failed? He said CUDA is a failure because NV paid the game devs to use CUDA (as in they wouldn't have used it otherwise).
But what of other commercial software using CUDA? What of free software utilizing CUDA?
 
Difference being that most of the products you listed weren't made by companies with the intent for them to run only on hardware that they own and control. The exception is probably OSX, which seems more like an example of closed proprietary fail than anything else. In fact, many of the technologies listed would never have become successful if they did not work on a wide range of hardware. Can you imagine if something like Java had come out and only worked on a certain brand of processor? We wouldn't still be talking about it today.

And more to the point, all those products listed come from markets that can afford harsh competition. In the graphics card world, there is Nvidia and AMD and that's really all there is. If Nvidia push out standards they fully control, get everyone to buy into them, then charge, say, AMD completely unreasonable fees later on...then that's a massive advantage: they either bleed AMD dry of cash or competitors are locked out of a technology which has become mainstream.

That's bad, very very bad. The list used as an example covers massive markets where this is less of a problem; in a 1v1 market such as medium-high end discrete video cards, there's no room for this kind of monopolizing...in fact AMD aren't stupid...look at PhysX: PhysX can work on AMD GPUs, Nvidia didn't lock them out, AMD chose not to enter that market.

My current generation and previous generation of video cards were both AMD and I agree with their decision to lock me out of PhysX. I think we need an open standard that neither Nvidia nor AMD owns, or can buy.
 
I know for a fact CUDA is selling some cards. I know of several places with tens of thousands of dollars worth of NV Tesla cards. All to run CUDA apps.

Are more gamers are going to by NV due to CUDA? I dunno....but I know a lot of gamers here are also into folding, and so NV cards are good for that and that is CUDA. So that might translate into more card sales.

Is this game going to sell more CUDA cards? That is certainly a hard one to call. I think there is some backlash from the devs using CUDA....but there are also people who think that backlash is an overreaction and see this as a nice benefit for themselves.

And we may not be able to calculate if this is an overall success or failure according to your standards in practice, but at least we have a means to do so in theory.

Ya, I'm only interested in CUDA as far as gaming is concerned. I'm sure CUDA is very successful for other uses. I think this is the first game CUDA has been used in (at least in this way).

I think what bothers me about the way CUDA was used in JC2 is that those same effects are fully available without CUDA and have been included in other games for all hardware users. If CUDA was bringing something to the table that couldn't be done without it, then I would feel that CUDA was benefiting gamers; at the moment it's doing nothing more than segregating us.
 
But what of other commercial software using CUDA? What of free software utilizing CUDA?

Is Nvidia paying for those developers to use CUDA? If not, then it is a success in those respective fields.

Look, you guys are blowing the statement way out of proportion. It's simple.

For an API to become successful it must have developer adoption. If your API fails to attract developers on its own, then it is not a success.

And success/failure for an API is its adoption rate. As in, is it used and by how many people? Java and .NET are successful. DirectX is successful. WPF is successful. Swing (Java's GUI API) is not successful. Qt for Java is not successful. PHP-GTK is not successful.

An API can be financially successful (eg, moving more cards) while still being a failure. The reason being that the former can be short term, whereas the latter is more long term. If after 3 years an API has not broken into one of its target markets, that suggests that it simply isn't attractive to those developers and is probably never going to take off.

So far in the world of published games (indie games included, just things that are actually sold and not hobby projects), CUDA is a failure. Outside of a few games that used it due to Nvidia's money and pressure, nobody else is using it. *THIS CAN CHANGE* (although I hope it doesn't)
 
For an API to become successful it must have developer adoption. If your API fails to attract developers on its own, then it is not a success.

Those are two different ideas....

I will assume you mean to say that an API is a success if and only if developers adopt it without being paid to do so.

This does not agree with my connotation of reality.

The reason is..and I explained this before....

NO api has taken off in this area. Your idea of successful, as I explained in detail before, is worthless because CUDA is the most successful api in this area, period. It is the BEST!!! In this respect!!!

The most successful thing is not the failure. The connotation does NOT fit with reality. The failure is on the part of the tech being adopted by the devs. It has NOTHING to do with cuda specifically.

I would agree that if there were competing popular APIs the devs could all choose from, and they only picked CUDA due to NV's money, then it would be a failure. NOTHING of the sort is true.

Your statement is inaccurate and misleading...

If you tell me this is not a valid point...or that they are not comparable or some other crap, I am going to have a conniption...
 
Look, you guys are blowing the statement way out of proportion. It's simple.

For an API to become successful it must have developer adoption. If your API fails to attract developers on its own, then it is not a success.


This is your problem: They aren't selling the API, they sell hardware. If I pay you $10 to use my software and then you buy $1,000 of hardware from me, obviously my API is a failure. :rolleyes:
 
This is your problem: They aren't selling the API, they sell hardware. If I pay you $10 to use my software and then you buy $1,000 of hardware from me, obviously my API is a failure. :rolleyes:

Sort of. Nvidia is trying to use CUDA to move hardware, correct, but for that to happen CUDA itself must be a success. If CUDA fails to take off, then nobody will care about that feature. Also, your fictional amounts are way out of whack. You also aren't taking into account the money spent in R&D developing CUDA (both hardware and software). Nvidia needs CUDA to be successful over the long haul, not just the immediate future. For that to happen, CUDA must take off on its own. Nvidia is trying to stimulate that by paying devs and hoping others jump on, but that hasn't been happening. I would be extremely surprised if CUDA has turned a profit for Nvidia at this point.

Just look around here when people ask for card recommendations, pretty much every time people shrug off PhysX and CUDA because they aren't being used. For most people, those features don't mean much at all. Again, sticking with the context of gaming.

For example, look at ATI's TruForm. ATI was hoping to use that feature to move more hardware. Back when it was released, there were a couple of tech demos and one or two games that used the feature. But it wasn't picked up by developers, and was discarded as a relic of history. I promise you ATI spent more on TruForm than they ever got back in increased hardware sales. TruForm was not only the best hardware tessellation but the only hardware tessellation you could get for years, but games didn't use it and thus consumers didn't care. Now we have DX11 picking up where ATI left off after 8 years, and support in games for the standard has already surpassed TruForm's adoption rate.

Those are two different ideas....

I will assume you mean to say that an API is a success if and only if developers adopt it without being paid to do so.

This does not agree with my connotation of reality.

The reason is..and I explained this before....

NO api has taken off in this area. Your idea of successful, as I explained in detail before, is worthless because CUDA is the most successful api in this area, period. It is the BEST!!! In this respect!!!

The most successful thing is not the failure. The connotation does NOT fit with reality. The failure is on the part of the tech being adopted by the devs. It has NOTHING to do with cuda specifically.

I would agree that if there were competing popular APIs the devs could all choose from, and they only picked CUDA due to NV's money, then it would be a failure. NOTHING of the sort is true.

Your statement is inaccurate and misleading...

If you tell me this is not a valid point...or that they are not comparable or some other crap, I am going to have a conniption...

The most successful thing can absolutely be a failure (for an example see above about ATI's TruForm). So far, GPGPU in games is a failure. CUDA has a handful of games that Nvidia paid the devs to get CUDA features in, OpenCL has none, and DirectCompute has none. So far, all 3 are failures.

The only difference being that CUDA is the oldest of the 3 by far.
 
The most successful thing can absolutely be a failure (for an example see above about ATI's TruForm). So far, GPGPU in games is a failure. CUDA has a handful of games that Nvidia paid the devs to get CUDA features in, OpenCL has none, and DirectCompute has none. So far, all 3 are failures.

The only difference being that CUDA is the oldest of the 3 by far.

EDIT: ....all your api examples bear no common ground with CUDA. TruForm is not comparable because there were competing apis that fairly won out. This is NOT the case with gpgpu apis and the field was actively using those apis at the time TruForm came out.

I am done with this argument. My point stands valid...

But you go ahead kllrnohj....if it protects your ego....:rolleyes:
 
Sort of. Nvidia is trying to use CUDA to move hardware, correct, but for that to happen CUDA itself must be a success. If CUDA fails to take off, then nobody will care about that feature. Also, your fictional amounts are way out of whack. You also aren't taking into account the money spent in R&D developing CUDA (both hardware and software). Nvidia needs CUDA to be successful over the long haul, not just the immediate future. For that to happen, CUDA must take off on its own. Nvidia is trying to stimulate that by paying devs and hoping others jump on, but that hasn't been happening. I would be extremely surprised if CUDA has turned a profit for Nvidia at this point.
Are you that ignorant? Do you have any idea the profit margins on a Tesla card? Here's a Tesla. They add some ECC memory, and other than that it is a standard 480, yet instead of selling for $500 it sells for $2,500. They also sell it in server mount units such as this for $13,000, with roughly 4x 480s in them. CUDA's development has been paid for by Teslas, not GeForces.


Just look around here when people ask for card recommendations, pretty much every time people shrug off PhysX and CUDA because they aren't being used. For most people, those features don't mean much at all. Again, sticking with the context of gaming.
At least 80% of people buying video cards don't come to these forums for advice first. And there are posts on here all the time about "Should I buy Nvidia for PhysX?" which makes it clear Nvidia's marketing is working. With most of the people not coming here (or forums like these) to ask this question, the obvious assumption is that some of those people are going out and buying an Nvidia card FOR PhysX.

For example, look at ATI's TruForm. ATI was hoping to use that feature to move more hardware. Back when it was released, there were a couple of tech demos and one or two games that used the feature. But it wasn't picked up by developers, and was discarded as a relic of history. I promise you ATI spent more on TruForm than they ever got back in increased hardware sales. TruForm was not only the best hardware tessellation but the only hardware tessellation you could get for years, but games didn't use it and thus consumers didn't care. Now we have DX11 picking up where ATI left off after 8 years, and support in games for the standard has already surpassed TruForm's adoption rate.
Because of Tesla, the argument is apples and oranges.

The most successful thing can absolutely be a failure (for an example see above about ATI's TruForm). So far, GPGPU in games is a failure. CUDA has a handful of games that Nvidia paid the devs to get CUDA features in, OpenCL has none, and DirectCompute has none. So far, all 3 are failures.

The only difference being that CUDA is the oldest of the 3 by far.
Yep, it's sold some extra geforce cards, it is making insane amounts of money and profit volume for their Tesla line, and is opening up an entire new market for Nvidia. Complete and total failure. :rolleyes:
 
Are you that ignorant? Do you have any idea the profit margins on a Tesla card? Here's a Tesla. They add some ECC memory, and other than that it is a standard 480, yet instead of selling for $500 it sells for $2,500. They also sell it in server mount units such as this for $13,000, with roughly 4x 480s in them. CUDA's development has been paid for by Teslas, not GeForces.


At least 80% of people buying video cards don't come to these forums for advice first. And there are posts on here all the time about "Should I buy Nvidia for PhysX?" which makes it clear Nvidia's marketing is working. With most of the people not coming here (or forums like these) to ask this question, the obvious assumption is that some of those people are going out and buying an Nvidia card FOR PhysX.

Because of Tesla, the argument is apples and oranges.

Yep, it's sold some extra geforce cards, it is making insane amounts of money and profit volume for their Tesla line, and is opening up an entire new market for Nvidia. Complete and total failure. :rolleyes:

Wow, I understand you want to be angry on the internet, but you got blinded with rage before you could read his posts, or you just don't understand the words he's typed up.

He said over and over WITH REGARD TO GAMES. Tesla profits and CUDA being the most successful API for GPGPU apps have nothing to do with its adoption as a gaming API and whether there's any organic motivation for game devs to use it.
 
Are you that ignorant? Do you have any idea the profit margins on a Tesla card? Here's a Tesla. They add some ECC memory, and other than that it is a standard 480, yet instead of selling for $500 it sells for $2,500. They also sell it in server mount units such as this for $13,000, with roughly 4x 480s in them. CUDA's development has been paid for by Teslas, not GeForces.

And what are the sales numbers on Tesla? How much did CUDA cost Nvidia to create? How much has Nvidia spent advertising CUDA and PhysX? How much has Nvidia invested into TWIMTBP games? How much time has Nvidia spent working with developers to get CUDA and GPU PhysX into games?

Yes, Tesla is a high margin product, but it is also very low volume.

At least 80% of people buying video cards don't come to these forums for advice first. And there are posts on here all the time about "Should I buy Nvidia for PhysX?" which makes it clear Nvidia's marketing is working. With most of the people not coming here (or forums like these) to ask this question, the obvious assumption is that some of those people are going out and buying an Nvidia card FOR PhysX.

That would be Nvidia's advertising being successful in the short term which is completely independent of the success/failure of CUDA/PhysX. Again, you are looking only at the short term. If GPU PhysX fails to take off, then all the advertising in the world won't make it relevant again.

And of those that do go and buy an Nvidia card because of PhysX, has that number exceeded the amount Nvidia has spent on advertising and in the TWIMTBP program? I highly doubt it.

Because of Tesla, the argument is apples and oranges.

No it isn't, because Tesla has no bearing on gaming. Stick to the context that has been repeated about a hundred times in this thread (hint: it's gaming).

Yep, it's sold some extra geforce cards, it is making insane amounts of money and profit volume for their Tesla line, and is opening up an entire new market for Nvidia. Complete and total failure. :rolleyes:

See above about you not knowing jack shit about the actual numbers involved and you continuing to ignore the context (again, it's gaming ffs).

Is Nvidia pushing PhysX/CUDA in the gaming market? Yes.
Has PhysX/CUDA taken off in the gaming market? NO

Yup, resounding success there :rolleyes:

EDIT: ....all your api examples bear no common ground with CUDA. TruForm is not comparable because there were competing apis that fairly won out. This is NOT the case with gpgpu apis and the field was actively using those apis at the time TruForm came out.

What competing APIs? Are you really that dense? THERE WAS NO COMPETITION TO TRUFORM FOR 8 YEARS, and yet it was a failure.

I am done with this argument. My point stands valid...

Your shell of a point lies shredded in a mess in a corner, but whatever. You still haven't actually made a point as to why you think Nvidia needing to pay game devs to use CUDA isn't a failure of CUDA.
 
What competing APIs? Are you really that dense? THERE WAS NO COMPETITION TO TRUFORM FOR 8 YEARS, and yet it was a failure.

You are saying...let me get this straight...that using gpus for tessellation (this is what TruForm is, right?) is a fundamentally different usage for gpus in games than anything that has existed before? It has NOTHING to do with graphics, and so then tessellation is not using the gpu to make nice graphics? :eek:...wow...

You know what? You are right buddy...feel better now? I am the worlds biggest moron. I have been humbled by your superior and impeccable argument. You have demonstrated I am dense, have problems with reading comprehension and fail to understand logic and that I am emotionally attached to a company that does not give a shit about me. All your terrific words....all proof that your statement is correct...and your perfect analogies that I fail to understand...geeze....

Everything I am...you have shown that I am not....:cool:

I love cognitive dissonance....

But of course you can say that I can be smart and stupid...and you never said any of that...and I misunderstood you...and blah blah blah...

Lets play kllrnohj:

Hey everybody!!! CUDA is a failure because NV has to pay devs to use it.....yeah...aren't I so frickin smart now!!!!!!!!!!!!!!!

Proof??? You want PROOF????????????

You tell me what is wrong with my argument...that is my frickin PROOF!!!

You dense morons who fail at understanding my clearly stated statement...you who fail at reading comprehension...and are too dense to see that my analogies are perfect...and none of your analogies apply....how dare you question my position!!!!
 
And what are the sales numbers on Tesla? How much did CUDA cost Nvidia to create? How much has Nvidia spent advertising CUDA and PhysX? How much has Nvidia invested into TWIMTBP games? How much time has Nvidia spent working with developers to get CUDA and GPU PhysX into games?
Tesla profits are rapidly approaching the profit from GeForce cards. Go have a look at some of the SEC filings and educate yourself.

Yes, Tesla is a high margin product, but it is also very low volume.
And a single Tesla card pulls in roughly 30 times the profit of a GeForce. That means if they sell only 3 Tesla 380s for every 100 480s, they make roughly the same profit from both cards.
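
(Checking that with the post's own figures: 3 Teslas x 30 units of profit = 90, versus 100 GeForces x 1 = 100. Same ballpark, which is all the claim needs.)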



That would be Nvidia's advertising being successful in the short term which is completely independent of the success/failure of CUDA/PhysX. Again, you are looking only at the short term. If GPU PhysX fails to take off, then all the advertising in the world won't make it relevant again.

Right... So right now they are making money off of it, more games continue to use it in development with more titles on the horizon, and it's a failure? I suppose you think the telegraph was a failure since we stopped using that as well huh?

And of those that do go and buy an Nvidia card because of PhysX, has that number exceeded the amount Nvidia has spent on advertising and in the TWIMTBP program? I highly doubt it.
STOP THE PRESSES! Well if you highly doubt it that changes everything!



No it isn't, because Tesla has no bearing on gaming. Stick to the context that has been repeated about a hundred times in this thread (hint: it's gaming).
OK, you're right, let's focus on gaming. They spent nothing on developing CUDA for gaming. Zero, zilch, nada. They spent nothing on developing the hardware for gaming. Zero, zilch, nada. They spent a little money on putting it into a couple of games (amazing! they thought it was a good enough idea to make a capital investment in! This must mean it's horrid!). That little bit of money putting it into a few games is the only investment they have to make back. Sell a few thousand more cards and it's done.


See above about you not knowing jack shit about the actual numbers involved and you continuing to ignore the context (again, it's gaming ffs).
:rolleyes:

Is Nvidia pushing PhysX/CUDA in the gaming market? Yes.
Has PhysX/CUDA taken off in the gaming market? NO

Yup, resounding success there :rolleyes:
You stick to this notion that something must be widely accepted to be a success. Success is measured in dollars and cents. It doesn't matter how widely adopted YouTube or Twitter is until they make money. Wide acceptance is NOT success.
 
How much nvidia stock do you guys own again?

I mean seriously, all I see kllrnohj saying is that as far as gaming is concerned CUDA has not been a success, and I think most would agree.

Get over it.
 
How much nvidia stock do you guys own again?

I mean seriously, all I see kllrnohj saying is that as far as gaming is concerned CUDA has not been a success, and I think most would agree.

Get over it.


I have said CUDA is not a success...if that is what he said, this would be done.

He said it is a failure...which is very different. It is not even important to me in any way except that it is wrong. I have tried to argue that point and kllrnohj has gone through the book on informal logical fallacies to defend it.

Straw man:
You still haven't actually made a point as to why you think Nvidia needing to pay game devs to use CUDA isn't a failure of CUDA.

Ad hominem on multiple occasions.

False dilemma

poisoning the well

burden of proof

and several others as well....

The gpgpu tech has failed in the gaming industry thus far. Thus by his definition all the apis are failures, if they pay devs to use the api. He also goes on to say that they are then all failures...which is very different from his implication.

He then uses TruForm as if that proves anything. Examples do not prove anything, even if it would work as an example. And it does not....TruForm is graphics....

There are NO gpgpu apis of any sort out there.....if CUDA were a new twist on using gpgpu tech, then the TruForm example would apply. It is not a new twist....it is new, period.

We should have a poll. If a tech industry is a failure in general, does that imply that all products in the industry are failures? Not that it would prove anything....but I would like to see what most people think on this point.

Tell me...what do you think? Do you feel that if a new technology has failed to grab and take hold, then all products in that industry are therefore failures? Agree or disagree and why...please, thx...
 
Are you that ignorant? Do you have any idea the profit margins on a Tesla card? Here's a Tesla. They add some ECC memory, and other than that it is a standard 480, yet instead of selling for $500 it sells for $2,500. They also sell it in server mount units such as this for $13,000, with roughly 4x 480s in them. CUDA's development has been paid for by Teslas, not GeForces.


At least 80% of people buying video cards don't come to these forums for advice first. And there are posts on here all the time about "Should I buy Nvidia for PhysX?" which makes it clear Nvidia's marketing is working. With most of the people not coming here (or forums like these) to ask this question, the obvious assumption is that some of those people are going out and buying an Nvidia card FOR PhysX.

Because of Tesla, the argument is apples and oranges.

Yep, it's sold some extra geforce cards, it is making insane amounts of money and profit volume for their Tesla line, and is opening up an entire new market for Nvidia. Complete and total failure. :rolleyes:

You think those 80% of people who don't come here to buy video cards even know what PhysX is? I'd say 95% of the people who use computers have no clue.

And the cost of buying AGEIA and then porting it over to the CUDA architecture: do you think all of this is covered by the few thousand people buying $100-$500 video cards for "physx"?

Anyway 2 more things
#1) what game that has physx support and actually provides something of a better gaming experience has sold 100M copies?
#2) why are you guys still arguing about CUDA being a failure? If nothing else we should be discussing why CUDA is better than or worse than DirectCompute or OpenCL.
 
You are saying...let me get this straight...that using gpus for tessellation (this is what TruForm is, right?) is a fundamentally different usage for gpus in games than anything that has existed before? It has NOTHING to do with graphics, and so then tessellation is not using the gpu to make nice graphics? :eek:...wow...

TruForm is hardware tessellation. For the 8 years before DX11 came out, it was the *only* way to do hardware tessellation. TruForm was a failure. For the 8 years before DX11, tessellation was a failure.

It is no different than CUDA today. I'm really not sure what you are trying to say.

Lets play kllrnohj:

Hey everybody!!! CUDA is a failure because NV has to pay devs to use it.....yeah...aren't I so frickin smart now!!!!!!!!!!!!!!!

Proof??? You want PROOF????????????

You tell me what is wrong with my argument...that is my frickin PROOF!!!

You dense morons who fail at understanding my clearly stated statement...you who fail at reading comprehension...and are too dense to see that my analogies are perfect...and none of your analogies apply....how dare you question my position!!!!

:rolleyes:

I've repeatedly invited and ASKED you to provide your opinion. You've repeatedly ignored the request. You haven't questioned my position, you've nitpicked a statement to hell and back.

Right... So right now they are making money off of it, more games continue to use it in development with more titles on the horizon, and it's a failure? I suppose you think the telegraph was a failure since we stopped using that as well huh?

What games on the horizon are using CUDA? What games on the horizon are using GPU PhysX that aren't paid by Nvidia?

How much has Nvidia spent on advertising? How much have they spent on TWIMTBP? Because I promise you it isn't the trivial amount you so desperately want it to be.

So you really don't have a clue if they are making money off of GPU PhysX and CUDA in games, now do you?

And no, the telegraph isn't a failure under the conditions of my statement because *gasp* it took hold and had adoption without companies paying people to use it! In fact, people wanted it so bad they PAID for it!

OK, you're right, let's focus on gaming. They spent nothing on developing CUDA for gaming. Zero, zilch, nada.

Only if you assume that Nvidia set out to develop CUDA for everything BUT games, which just isn't true.

EDIT: And as someone pointed out, you conveniently ignored PhysX. Nvidia purchased Ageia. Then there is the cost of maintaining PhysX not to mention the cost to port it to CUDA. But I guess in your world all that was free, right?

They spent nothing on developing the hardware for gaming. Zero, zilch, nada.

Not at all true. As you've already pointed out, Tesla and GeForce use the same GPUs. That means all the extra transistors for HPC exist in GeForce cards. That isn't trivial, as the issues surrounding Fermi's manufacturing demonstrate.

They also have to make the context switch between DirectX/OpenGL to CUDA as fast as possible so that it can be used in games at all.
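
To be concrete about what that switch involves, here is roughly what mixing rendering and CUDA looks like through CUDA's OpenGL interop API (a minimal sketch; the buffer `vbo` and the kernel are placeholders, and error checking is omitted):

#include <GL/gl.h>
#include <cuda_gl_interop.h>

// Sketch: let a CUDA kernel write into a buffer that OpenGL renders from.
// Assumes a live GL context and an existing GL buffer object `vbo`.
void cuda_pass(GLuint vbo, int n)
{
    cudaGraphicsResource_t res;

    // Register the GL buffer with CUDA (a real game would do this once at load time).
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsRegisterFlagsNone);

    // Per frame: hand the buffer over to CUDA...
    cudaGraphicsMapResources(1, &res, 0);
    float *dptr = NULL;
    size_t bytes = 0;
    cudaGraphicsResourceGetMappedPointer((void **)&dptr, &bytes, res);

    // ...run the compute kernel on it (myKernel is a hypothetical stand-in
    // for whatever physics/simulation kernel the game uses)...
    // myKernel<<<(n + 255) / 256, 256>>>(dptr, n);

    // ...then give it back so OpenGL can render from it again.
    cudaGraphicsUnmapResources(1, &res, 0);

    cudaGraphicsUnregisterResource(res);
}

That map/unmap handoff happens every frame, so it is exactly the cost that has to be driven down for in-game use.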

They spent a little money on putting it into a couple of games (amazing! they thought it was a good enough idea to make a capital investment in! This must mean it's horrid!). That little bit of money putting it into a few games is the only investment they have to make back. Sell a few thousand more cards and it's done.

Define "little money".

Also, I've already talked about Nvidia trying to stimulate adoption. That doesn't change the fact that so far in games the tech has not been attractive enough on its own. All the sarcasm in the world doesn't help your point.

You stick to this notion that something must be widely accepted to be a success. Success is measured in dollars and cents. It doesn't matter how widely adopted YouTube or Twitter is until they make money. Wide acceptance is NOT success.

And without wide acceptance Nvidia isn't going to get much (if any) of a return on their investment. The two are tied together, that seems to be the part you aren't quite getting.

I have said CUDA is not a success...if that is what he said, this would be done.

He said it is a failure...which is very different.

http://www.google.com/search?hl=en&...&oi=glossary_definition&ct=title&ved=0CBIQkAE

Definitions of failure on the Web:

* an event that does not accomplish its intended purpose; "the surprise party was a complete failure"
* lack of success; "he felt that his entire life had been a failure"; "that year there was a crop failure"

So basically you're arguing that failure is harsher than not being a success? Really?

It is not even important to me in any way except that it is wrong. I have tried to argue that point and kllrnohj has gone through the book on informal logical fallacies to defend it.

Straw man:


Ad hominem on multiple occasions.

False dilemma

poisoning the well

burden of proof

and several others as well....

Care to point out where? Because I really haven't done any of that.

The gpgpu tech has failed in the gaming industry thus far. Thus by his definition all the apis are failures, if they pay devs to use the api. He also goes on to say that they are then all failures...which is very different from his implication.

:rolleyes:

So close, yet so far...

He then uses TruForm as if that proves anything. Examples do not prove anything, even if it would work as an example. And it does not....TruForm is graphics....

It only doesn't work because it invalidates your claim that being the most successful thing in a field means that it cannot be a failure. Which is laughably false.

There are NO gpgpu apis of any sort out there.....if CUDA were a new twist on using gpgpu tech, then the TruForm example would apply. It is not a new twist....it is new, period.

CUDA was far from the first. GPGPU has been possible and done since the 9700 Pro. Using C to code for GPUs was done since the 9700 Pro.

And TruForm was the first hardware tessellation, so if anything its LESS of a twist on existing tech than CUDA is.

We should have a poll. If a tech industry is a failure in general, does that imply that all products in the industry are failures? Not that it would prove anything....but I would like to see what most people think on this point.

Go for it.

Tell me...what do you think? Do you feel that if a new technology has failed to grab and take hold, then all products in that industry are therefore failures? Agree or disagree and why...please, thx...

For the record, I have not stated anything like that.

That said, I would agree. If a product in that industry was a success, then the technology didn't fail to grab and take hold. The only way for a technology to fail is if all its products fail.
 