AMD: CUDA Is Doomed

HardOCP News

AMD’s VP of channel sales had some interesting things to say in this interview with VR-Zone. In particular, this quote stood out:

I think CUDA is doomed. Our industry doesn’t like proprietary standards. PhysX is an utter failure because it’s proprietary. Nobody wants it. You don’t want it, I don’t want it, gamers don’t want it. Analysts don’t want it. In the early days of our industry, you could get away with it and it worked. We’ve all had enough of it. They’re unhealthy.
 
You can tell I like AMD based on my CPU choice for my rig, but really, CUDA is doomed because the industry doesn't like proprietary standards? Thank god AMD doesn't support DirectX and OpenGL... wait, they do. I haven't seen CUDA failing in any way. I hope AMD does well and lasts in the marketplace, but that quote about nobody liking Nvidia's proprietary standards ignores how much of the gaming industry revolves around them now. What is DirectX if not a proprietary standard from Microsoft? What is OpenGL if not a standard of some sort? AMD needs to keep innovating. I think they are on the right track with the multi-core FX chips, but I would really, really like to see them improve their graphics drivers, even the ones running on their integrated chips; for the longest time I used to see the Catalyst suite crash at startup on my best friend's rig, which is why I didn't go that route when it came to my video card.
 
I find it sad that AMD's VP of channel sales has such a fundamental misunderstanding of why technology will succeed or fail.

PhysX died because PhysX was a graphics-only addition to games. No game developer was ever going to limit the market for their game to only people who had PhysX-enabled hardware. CUDA is a much different beast. The people developing for CUDA are also the ones specifying the hardware that runs it. CUDA/Tesla hardware is much cheaper and easier to use and implement than a custom FPGA, which is the only real alternative. CUDA is and will be successful regardless of being a proprietary standard.
 
He is not saying CUDA is crap; he is saying the proprietary model is on its way out for this kind of software development.
The Bill Gates way of thinking (pay for it or you are a thief) is on its way out as far as developers are concerned, because there are alternatives now that are just as good or better.
 
PhysX and CUDA should be combined into an open standard and ported to all games that use either. That is what should happen.
 
Funny, I was reading an announcement the other day about how PhysX was going to be in a bunch of new console games on the Xbox One and PS4.

Hmm, who makes the processor and graphics card in those again? AMD?
 
Nvidia can laugh wholeheartedly at AMD's continued failure to have any kind of GPGPU success.

CUDA isn't doomed. That's a really stupid thing to say. It has never been widely used outside of niche segments, particularly workstations and HPC applications. The problem with simplistic fanboy thoughts like "OMG cudah is teh proprietary" or "nvidia haets opencl" is that those are not concerns at all in those applications. CUDA is updated right as new architectures come out, and there is no lag in support for new features, unlike what happens with OpenCL. IOW, all available horsepower and new features are available via CUDA on day one.

AMD certainly wants to grab market share from Nvidia where CUDA rules by a very wide margin, and so we get these kinds of silly AMD statements every year. The problem is that Nvidia runs OpenCL just fine too, and AMD's GPGPU SDK just plain sucks, like it always has. Then there's the fear that AMD could once again drop its current strategy for some kind of HSA-based GPGPU in the future (when big GPUs get married to small or big CPU cores on desktops and workstations).
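The day-one point is the concrete part of that argument: CUDA exposes each new architecture through a compute capability version that programs can query the moment the hardware ships. A minimal sketch using the standard CUDA runtime API (the printout format is just for illustration):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // The compute capability version (major.minor) is how CUDA exposes
        // a new architecture's features; code can branch on it as soon as
        // a new chip ships.
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```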
 
replace "like" with "unlike" above in statement about OpenCL
 
PhysX is a failure because it is proprietary. But, as I recall, NVIDIA offered to license it to AMD. I really wish they had, as that would have increased the PhysX footprint, and more games would take advantage of it. Havok is great, but I'd like to see other people come in, push the physics models, and see what they can come up with.

CUDA, though, isn't dead, dying, or doomed. The quote is more a PR statement to trash-talk the competition than anything.
 
PhysX is a failure because it is proprietary. But, as I recall, NVIDIA offered to license it to AMD

Nvidia offered to let AMD use PhysX for free to make it an "open standard," but they wanted to know how AMD's cards worked, and Nvidia would write the drivers for AMD; they wouldn't let AMD have access to how PhysX works, etc.

AMD said no because, in their own words, "We cannot verify that Nvidia would write proper drivers for AMD GPUs, because they might make PhysX work worse on AMD GPUs by default."

And then YEARS later it got leaked that Nvidia -WAS- gimping the shit out of CPU PhysX and purposely bottlenecking it; the guy who reverse-engineered it said your typical dual-core could run PhysX better than a GTX 480 could.

After that huge, and I mean HUGE, amount of bad press about PhysX being purposely gimped on CPUs, Nvidia rushed PhysX 3.0 out the door. It's still gimped, mind you, but at least it runs PhysX in more modern code and multi-threads it.

And this would be a cool thing... if Nvidia weren't such fucking assholes, -FORCING- PhysX 2.x on developers, which lets Nvidia keep PhysX running properly only on Nvidia GPUs. Seriously, look up the fallout over everyone finding out PhysX 2.x was being used in Batman: Arkham City. Or hell, Planetside 2 uses a SPECIAL version of PhysX 3.0 that is GPU-only; the CPU does nothing but some basic vehicle debris.

I understand why Nvidia is doing this, as they are trying to sell a product, but that still doesn't make what they are doing any less fucking sleazy.
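The single-threaded-versus-multithreaded difference described above is easy to picture. Here is a toy sketch in plain C++, not actual PhysX code, just an illustration of the same particle step run once on a single thread (the 2.x-style CPU path, as characterized above) and once split across all cores (the 3.x-style path):

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Integrate one slice of the particle array (a toy stand-in for a physics step).
void step(std::vector<Particle>& p, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        p[i].vy -= 9.81f * dt;   // gravity
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

int main() {
    std::vector<Particle> particles(1 << 20, Particle{0, 100, 0, 1, 0, 0});
    const float dt = 1.0f / 60.0f;

    // 2.x-style CPU path (as described above): one thread does all the work.
    step(particles, 0, particles.size(), dt);

    // 3.x-style path: the same work split across every available core.
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    std::size_t chunk = particles.size() / n;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == n) ? particles.size() : begin + chunk;
        pool.emplace_back(step, std::ref(particles), begin, end, dt);
    }
    for (auto& th : pool) th.join();

    std::printf("y[0] after two steps: %f\n", particles[0].y);
    return 0;
}
```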
 
He's not wrong; proprietary stuff is falling fast. DirectX is a terrible example of something proprietary to bring up. It's proprietary to Windows, yes, but anyone can develop drivers and support it on the platform.
 
I'm not seeing where he's getting the idea that no one wants PhysX. Look at the Unity engine. It uses PhysX and is insanely popular across every platform.
 
He's kinda stating the obvious. The only reason DirectX is successful is that it takes a monopoly that eliminates choice for the market to begrudgingly accept the leash, and Nvidia is no monopoly.
 
So many haters on here directed at someone who is just stating the obvious. Nvidia needs to open up the standard or let it die kicking and screaming. This isn't about AMD or NV fanboys.
 
I don't know too much about how CUDA is doing. But I do agree that PhysX is a failure; almost EVERY big-name game (and even smaller titles) on the market uses Intel's Havok system for physics now. The list of games that use it is tremendously large. The developers obviously know what they want to develop with, and it's not PhysX.
 


Again with this butthurt about PhysX. People who claim PhysX is dead and proprietary sure do like to keep digging up the body to bash it.

Fact is, Nvidia provided a physics solution that works and that IMHO adds to the gameplay, and developers can use it. I love the effects in Borderlands and the Batman games, hence why I chose an Nvidia card to enjoy them. Bitch at AMD for not coming up with a practical solution to calculate physics and helping developers implement it.
 
I don't know too much about how CUDA is doing. But I do agree that PhysX is a failure; almost EVERY big-name game (and even smaller titles) on the market uses Intel's Havok system for physics now. The list of games that use it is tremendously large. The developers obviously know what they want to develop with, and it's not PhysX.

If it's using the Unreal Engine, it's using PhysX. The upcoming Batman game will have it, and so will The Witcher 3.
 
I think AMD is doomed. Our industry doesn’t like proprietary standards. Eyefinity is an utter failure because it’s crap. Nobody wants it. You don’t want it, I don’t want it, gamers don’t want it. Analysts don’t want it. In the early days of our industry, you could get away with it and it worked. We’ve all had enough of it. Go away AMD. Nobody likes you.
Sounds like a whiney little beach!
 
Not classy; it makes the comments seem like pure desperation, gasping for the last bits of oxygen as the ship sinks further into the murky sea. Another morbid death.

Eh, AMD will survive, but they need to put the kibosh on personnel making unfounded comments about their competition. Otherwise they will hurt themselves even more.

If they're going to bitch about their competitors' products, they had better be able to present a superior product.
 
I won't go as far as saying anything is doomed, but why are people supporting proprietary standards anyway? This seems like the same Apple-versus-Android argument.
 
I think AMD is doomed. Our industry doesn’t like proprietary standards. Eyefinity is an utter failure because it’s crap. Nobody wants it. You don’t want it, I don’t want it, gamers don’t want it. Analysts don’t want it. In the early days of our industry, you could get away with it and it worked. We’ve all had enough of it. Go away AMD. Nobody likes you.
Sounds like a whiney little beach!

Huh?

I love how, when shit like this makes the news, people try to make comparisons that just don't make any sense. What does Eyefinity have to do with CUDA? This isn't even close to a good comparison.

It's also obvious gamers want Eyefinity, as there are a ton of users who have it.

Dumb.

Plus, CUDA and PhysX wouldn't be so bad if they worked properly. I can't even play Borderlands 2 with PhysX on my CPU because Nvidia gimped it so much... and don't say they haven't. Don't tell me my 4.5GHz 2500K can't handle some debris flying around.

I agree with the guy. It's time proprietary stuff like this dies off, as all it does is create discrimination in the gaming world. Oh, congratulations on buying the "right" card last year! You now get to enjoy PhysX, which Nvidia paid us to use!

Either way, PhysX is dead. With GPU compute possible on all the next-gen consoles (even the Wii U), we are going to see more general, OPEN SOURCE FOR ALL TO USE physics instead of this proprietary PhysX.
 
PhysX is a pile of poorly implemented shit that should die so we can get Havok, or something else actually useful that does more than bloated, blinged-out visual effects, into games. But people definitely do want it: easily brainwashed people spent big on extra cards just to run PhysX faster, and those people really want it.
 
Huh?

Plus, CUDA and PhysX wouldn't be so bad if they worked properly. I can't even play Borderlands 2 with PhysX on my CPU because Nvidia gimped it so much... and don't say they haven't. Don't tell me my 4.5GHz 2500K can't handle some debris flying around.


Yeah, I recall reading some tech news from a while back about testing that proved software-driven PhysX was hobbled so it could never perform as well as it does on an Nvidia card, no matter how fast a CPU you had.
Old news, I'm afraid.
 
You can tell I like AMD based on my CPU choice for my rig, but really, CUDA is doomed because the industry doesn't like proprietary standards? Thank god AMD doesn't support DirectX and OpenGL... wait, they do.
OpenGL is an OPEN standard. You can drop the OpenGL part.
I haven't seen CUDA failing in any way. I hope AMD does well and lasts in the marketplace, but that quote about nobody liking Nvidia's proprietary standards ignores how much of the gaming industry revolves around them now. What is DirectX if not a proprietary standard from Microsoft?
A proprietary standard that Microsoft lets AMD, Nvidia, and Intel all use. Who's going to use CUDA but Nvidia?

Also, what's wrong with OpenCL? With AMD inside all of the next-generation game consoles, Nvidia will have to do a lot of work to convince people to use CUDA over OpenCL.
 
Plus, CUDA and PhysX wouldn't be so bad if they worked properly. I can't even play Borderlands 2 with PhysX on my CPU because Nvidia gimped it so much... and don't say they haven't. Don't tell me my 4.5GHz 2500K can't handle some debris flying around.

Allow me to cure your ignorance. Your overclocked CPU is capable of roughly 0.1 teraflop/s; a Tesla K20X (roughly a 780) is capable of 4.0 teraflop/s. That's 40 times the performance.

CPUs and GPUs are built for different tasks, which is why you aren't running your graphics off your CPU. So no, your CPU is not built to handle massively parallel problems, and it performs poorly at them. It has nothing to do with Nvidia artificially limiting your CPU and everything to do with your CPU not being good at the task.

Since everyone loves car analogies, it's kind of like entering a dragster in a Formula One race: "Don't tell me my dragster isn't fast and can't do well in a Formula One race."
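Rough numbers back the analogy up: a quad-core desktop CPU of that era at 4.5GHz peaks somewhere around 0.15 to 0.3 TFLOP/s depending on precision, the same ballpark as the 0.1 figure above, while the GPU gets its throughput by running thousands of threads at once. Here is a toy CUDA sketch (illustrative sizes and names, not taken from any real game) of the one-thread-per-particle shape of work that debris simulation maps onto:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread per debris particle: the "massively parallel"
// shape of work the post is talking about.
__global__ void integrate(float* y, float* vy, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        vy[i] -= 9.81f * dt;  // gravity
        y[i]  += vy[i] * dt;
    }
}

int main() {
    const int n = 1 << 20;          // ~1M particles
    const float dt = 1.0f / 60.0f;  // one 60 Hz frame
    float *y, *vy;
    cudaMalloc(&y, n * sizeof(float));
    cudaMalloc(&vy, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));
    cudaMemset(vy, 0, n * sizeof(float));

    // Launch enough 256-thread blocks to cover every particle.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    integrate<<<blocks, threads>>>(y, vy, n, dt);
    cudaDeviceSynchronize();

    float y0;
    cudaMemcpy(&y0, y, sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("y[0] after one step: %f\n", y0);
    cudaFree(y);
    cudaFree(vy);
    return 0;
}
```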
 
I did not know that. Thanks. :)



 
And this would be a cool thing... if Nvidia weren't such fucking assholes, -FORCING- PhysX 2.x on developers, which lets Nvidia keep PhysX running properly only on Nvidia GPUs. Seriously, look up the fallout over everyone finding out PhysX 2.x was being used in Batman: Arkham City. Or hell, Planetside 2 uses a SPECIAL version of PhysX 3.0 that is GPU-only; the CPU does nothing but some basic vehicle debris.

This reminds me of Creative's EAX standard. Of course, only their sound cards get EAX 5.0, while every other sound card is stuck on EAX 2.0 or less. Remember when Doom 3 was released without EAX support, and Creative threatened to sue John Carmack over something he made but they claimed to have made instead?

To this day I don't see any game advertise the use of EAX technology, though Nvidia's "The Way It's Meant to Be Played" nonsense exists in nearly every game. I can assure you that PhysX will go the way of EAX eventually.
 
PR stunts about how something else will die, trying to redirect people's attention, won't work on keen readers. I want to hear AMD say that their APU beats Intel's CPU in gaming because of its innovative design. I want to hear AMD say that their GPU beats Nvidia's GPU in gaming because of its innovative design. I want AMD saying that AMD's GPU plus AMD's APU blows an Nvidia GPU plus an Intel CPU out of the water in gaming because of their innovative designs. Unfortunately, AMD is instead saying other vendors' innovations will die, or are doomed. This complaint that others have proprietary stuff AMD doesn't has been getting old for years. Nvidia's GPUs do everything AMD's GPUs can do, better most of the time, plus they have things AMD's cannot do.

Seriously, over 1000 USD for a video card is just wrong. AMD first needs to find the ball and get it back into their court. A 600-dollar AMD video card that beats the Titan at everything? STFU and take my money! No? Then STFU and make it happen!

Pointless rant:

It is true that CUDA and PhysX are proprietary to Nvidia, but that does not mean they only work on Nvidia's video cards. Technically speaking, code can be written using CUDA and PhysX with or without an Nvidia GPU. To programmers, CUDA and PhysX are just APIs. There are two (or more) sets of code behind those APIs that are not visible to programmers. When an Nvidia GPU is present (and some other conditions are met), the code path that requires a GPU is executed; otherwise, the path that does not require a GPU is executed. In other words, CUDA and PhysX code runs with or without an Nvidia GPU. Compare that to DirectCompute, which is far worse than PhysX and CUDA because it only runs on Microsoft platforms, yet AMD fans claim it is more open and less proprietary than PhysX.
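A simplified sketch of the dispatch pattern that paragraph describes: one API on the outside, with the backend chosen at runtime by whether a CUDA-capable device is actually present. The simulate_on_gpu and simulate_on_cpu functions are hypothetical stand-ins, not real PhysX or CUDA internals; only cudaGetDeviceCount is the actual runtime API:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical backends standing in for the two hidden code paths
// described above; real middleware hides these behind one API.
void simulate_on_gpu() { std::printf("GPU path\n"); }
void simulate_on_cpu() { std::printf("CPU fallback path\n"); }

// The one call the programmer sees; the library picks the backend
// based on what hardware is actually present.
void simulate() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err == cudaSuccess && count > 0) {
        simulate_on_gpu();
    } else {
        simulate_on_cpu();
    }
}

int main() {
    simulate();
    return 0;
}
```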

It isn't easy to make CUDA and PhysX code run on AMD's GPUs because architectures differ between vendors. Suppose PhysX ran on AMD's GPUs but killed performance because of the architecture; people would say Nvidia is sabotaging AMD. Suppose PhysX broke on AMD's GPUs after an update; people would again say Nvidia is sabotaging AMD. The only way to ensure PhysX won't break on AMD's GPUs is for PhysX development to keep all of AMD's GPUs in mind and test every possible configuration with them. This is why a fee needs to be in place, and even then no one can say for sure that PhysX would run as well on AMD's GPUs as on Nvidia's. It isn't as if AMD would redesign their GPUs just to run PhysX, or Nvidia would redesign PhysX just so it runs well on AMD, so it is a logical decision for AMD not to support it (by paying its development fee).

If AMD really wanted to support non-proprietary stuff, they would actively support OpenCL and OpenGL. Ask Ubuntu users and they will tell you that Nvidia is better and AMD is catching up. The truth is, both vendors have better OpenCL/OpenGL drivers than the ones people can download; those drivers are exclusive to their professional-tier hardware. I don't blame them, as computers are not only for games, and there aren't really that many OpenCL/OpenGL games, let alone good ones. (What I am saying is, money doesn't grow on trees.)

When it comes to open source, what it really means is that no one is responsible for upgrades and bug fixes. It is good when you want to program your GPU for a specific task and need to hack through walls to fully utilize the hardware. It has absolutely nothing to do with gaming, and no open-source driver runs games better than the vendor's binary blob. That tells you the true meaning of open source: it is more flexible, but not better. It means little to either gamers or game developers.
 
Allow me to cure your ignorance. Your overclocked CPU is capable of roughly 0.1 teraflop/s; a Tesla K20X (roughly a 780) is capable of 4.0 teraflop/s. That's 40 times the performance.

CPUs and GPUs are built for different tasks, which is why you aren't running your graphics off your CPU. So no, your CPU is not built to handle massively parallel problems, and it performs poorly at them. It has nothing to do with Nvidia artificially limiting your CPU and everything to do with your CPU not being good at the task.

Since everyone loves car analogies, it's kind of like entering a dragster in a Formula One race: "Don't tell me my dragster isn't fast and can't do well in a Formula One race."

Have you played BL2? I know the performance of my CPU. I'm not an idiot, nor fucking ignorant. My CPU should be able to handle some shit flying around on the screen. Have you never seen what Havok can do on a CPU? Obviously not. Fluid, cloth, dynamic particles: everything Nvidia says needs PhysX and a GPU to handle has been shown and proven for years to be perfectly workable on the CPU.

How about instead of trying to "cure my ignorance," you cure your own first before speaking out of your ass.

/schooling
 