AMD: CUDA Is Doomed

This reminds me of the EAX standard that Creative makes. Of course, only their sound cards get EAX 5.0, while every other sound card is stuck on EAX 2.0 or less. Remember when Doom 3 was released without EAX support, and Creative threatened to sue John Carmack over something he made but claimed they had made instead?

To this day I don't see any game advertise the use of EAX technology, yet Nvidia's "The Way It's Meant to be Played" nonsense shows up in nearly every game. I can assure you that PhysX will go the way of EAX eventually.

GPU-accelerated PhysX might, but PhysX as a whole will not. As a physics engine it did well, just like Havok did.
 
Have you played BL2? I know the performance of my CPU. I'm not an idiot, nor fucking ignorant. My CPU should be able to handle some shit flying around on the screen... have you never seen what Havok can do on a CPU? Obviously not. Fluid, cloth, dynamic particles: everything Nvidia says needs PhysX and a GPU to handle has been shown and proven for years to be perfectly capable on the CPU.

How about instead of trying to "cure my ignorance" you cure yourself first before speaking out of your ass.

/schooling

So why did Havok run its physics demo on the PS4 using the power of the GPU, if the CPU alone can handle calculations that extreme?
 
PhysX can't die soon enough. How can people NOT tell it's a fucking scam? It was a lie even before Nvidia bought Ageia.
 
I don't know too much about how CUDA is doing. But I do agree, PhysX is a failure; almost EVERY big-name game (and even smaller titles) on the market uses Intel's Havok system for physics now. The list of games that use it is tremendously large. The developers obviously know what they want to use and develop with, and it's not PhysX.

Not really. For example, CD Projekt Red's REDengine 2 used Havok, but for REDengine 3 they dropped Havok in favor of PhysX.
 
So why did Havok run its physics demo on the PS4 using the power of the GPU, if the CPU alone can handle calculations that extreme?

Because the CPU in next-gen consoles isn't as powerful as an overclocked i7 (or a stock one, for that matter), and the GPU supports GPU compute, so the CPU isn't necessary. That, and the GPU has plenty of power to do physics, which it was originally designed to do. Plus, PhysX just isn't needed anymore thanks to GPU compute, and all next-gen consoles are AMD-only... so PhysX won't even be used unless the developer feels like running it on the CPU alone.

Either way, it still doesn't change the fact that a lot of the PhysX effects are more than possible on the CPU.

How about you actually take some time to research the Havok, BeamNG, Euphoria, DMM, and Bullet engines instead of doing what you're doing now, which is failing. All of these physics engines run on the CPU and are capable of the same things PhysX does on a GPU. Sure, they might not be as "technical," but I've yet to see anything PhysX-related that couldn't be done on the CPU. Cloth on the CPU may consist of 1,000 polygons and be less "accurate" than PhysX cloth using 5,000, but the difference is minor, if noticeable at all. Not to mention that PhysX tanks performance on my cards.
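
To make that last point concrete, here's a toy sketch of my own (not code from any of these engines, just the standard Verlet scheme) of the per-node work a CPU cloth sim does each frame. The cost is linear in node count, which is why trading 5,000 nodes for 1,000 is such a cheap way to keep it on the CPU:

```c
#include <stddef.h>

typedef struct { float x, y, z; } vec3;

/* One Verlet integration step over every cloth node:
   x_next = 2*x - x_prev + a*dt^2.
   Work is O(n) in the node count, and the spring-constraint pass
   (not shown) is likewise linear in the number of springs. */
void verlet_step(vec3 *pos, vec3 *prev, const vec3 *accel,
                 size_t n, float dt)
{
    for (size_t i = 0; i < n; i++) {
        vec3 cur = pos[i];
        pos[i].x = 2.0f * cur.x - prev[i].x + accel[i].x * dt * dt;
        pos[i].y = 2.0f * cur.y - prev[i].y + accel[i].y * dt * dt;
        pos[i].z = 2.0f * cur.z - prev[i].z + accel[i].z * dt * dt;
        prev[i] = cur; /* current position becomes last frame's */
    }
}
```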

Please do some research before responding next time.

Thanks.

/continued schooling.
 
Wow. I've reached my dumb limit and it's not even noon.

If this AMD clown has any influence at all at his company, I'm not sure it's NVIDIA that's doomed.
 
Because the CPU in next-gen consoles isn't as powerful as an overclocked i7 (or a stock one, for that matter), and the GPU supports GPU compute, so the CPU isn't necessary. That, and the GPU has plenty of power to do physics, which it was originally designed to do. Plus, PhysX just isn't needed anymore thanks to GPU compute, and all next-gen consoles are AMD-only... so PhysX won't even be used unless the developer feels like running it on the CPU alone.

Either way, it still doesn't change the fact that a lot of the PhysX effects are more than possible on the CPU.

How about you actually take some time to research the Havok, BeamNG, Euphoria, DMM, and Bullet engines instead of doing what you're doing now, which is failing. All of these physics engines run on the CPU and are capable of the same things PhysX does on a GPU. Sure, they might not be as "technical," but I've yet to see anything PhysX-related that couldn't be done on the CPU. Cloth on the CPU may consist of 1,000 polygons and be less "accurate" than PhysX cloth using 5,000, but the difference is minor, if noticeable at all. Not to mention that PhysX tanks performance on my cards.

Please do some research before responding next time.

Thanks.

/continued schooling.

If Bullet can simulate calculations that complex without the help of a GPU, then why did Bullet work on GPU-accelerated physics through OpenCL in the first place?
 
Bitch at AMD for not coming up with a practical solution to calculate physics and help developers implement it.

AMD was willing to use PhysX, but Nvidia wanted far, far, far too much.

They wanted to know exactly how AMD's cards worked, they wanted full control over PhysX, and Nvidia themselves would write the driver; AMD would not be allowed access to how PhysX works.

I mean, fuck, Intel lets developers know inside and out how Havok works, because they want it to be an open standard; that's why you see it in everything. Hell, Havok runs 10x as efficiently as PhysX does. Did you know all the physics in Super Mario Galaxy 1 and 2 were done with Havok? And all the cloth effects, etc. A FUCKING WII CAN RUN HAVOK FULLY!

Yeah, unless Nvidia is willing to bite the bullet and open-source it, PhysX is dead.

Hell, the guys who made Ghostbusters for the 360/PS3/PC ended up writing their own physics engine that was almost as efficient as Havok, and they did it in-house by themselves in less than four months, and it's got AMAZING physics.
 
If Bullet can simulate calculations that complex without the help of a GPU, then why did Bullet work on GPU-accelerated physics through OpenCL in the first place?

How about you respond to the rest of my post instead of picking one small example? Also, OpenCL is not a very good example, considering ANY GPU can use it (at least newer ones, because I'm sure with your logic you'd shoot back that a GeForce 2 MX couldn't).

So good job pointing out that a physics engine can run on AMD and Nvidia. Awesome counter argument.

/you've been held back a grade. Try again.
 
What a fucking scumbag. I like PhysX and I want PhysX. I also like CUDA because programs like 3DS Max and Sony Vegas take advantage of it quite nicely.

Screw him and screw his shitty drivers. The fucking happiest day for me in recent years was when I moved from the 5870 to my GTX 780 back in May, after having that card for some 3.5 years. Screw their drivers, never again.
 
What a fucking scumbag. I like PhysX and I want PhysX. I also like CUDA because programs like 3DS Max and Sony Vegas take advantage of it quite nicely.

Screw him and screw his shitty drivers. The fucking happiest day for me in recent years was when I moved from the 5870 to my GTX 780 back in May, after having that card for some 3.5 years. Screw their drivers, never again.

Congratulations on missing the author's point while throwing a bunch of swear words around.
 
Have you played BL2? I know the performance of my CPU. I'm not an idiot, nor fucking ignorant. My CPU should be able to handle some shit flying around on the screen... have you never seen what Havok can do on a CPU? Obviously not. Fluid, cloth, dynamic particles: everything Nvidia says needs PhysX and a GPU to handle has been shown and proven for years to be perfectly capable on the CPU.

How about instead of trying to "cure my ignorance" you cure yourself first before speaking out of your ass.

/schooling

Clearly you are...
 
This is just more chest-thumping from AMD to promote OpenCL and hopefully get the industry to start using it more, since it runs well on their APUs and GPUs.

OpenCL is great, and I hope it gets used more often than it does. However, CUDA is still very powerful and well supported by Nvidia. It works well enough that companies buy up enough Tesla systems to short the supply of GK110s destined for consumer-level GPUs (e.g., the Titan).

If anything, CUDA has a better chance of outlasting PhysX.
 
Look, my thing is this.

Does Physx work? Yes. Does it do some cool things? Of course.

The big question, though, is whether it's worth it. Is PhysX worth all this dividing of features? In my opinion, no.

I mean, let's face it: Nvidia wants to use PhysX NOT because of its technical superiority but because it's THEIRS. It helps them sell cards. That is all. Why do you think Nvidia so often has to pay developers to use it in the first place?

OpenCL, and thus GPU computing, is LOGICALLY the better choice. It's an open, royalty-free standard so it doesn't cost anything for people to use, it's universal so any GPU can use it, and in the end it's capable of all the same effects PhysX offers.
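
To show what "universal" means in practice, here's a minimal sketch of my own (assuming the OpenCL headers and a runtime are installed; link with -lOpenCL). The exact same code enumerates whatever GPUs are present, whether the runtime underneath is AMD's, Nvidia's, or Intel's:

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof name, name, NULL);
        printf("Platform: %s\n", name);

        /* Ask each vendor's runtime for its GPU devices. */
        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           8, devs, &ndev) != CL_SUCCESS)
            continue; /* platform has no GPUs (e.g. CPU-only) */
        for (cl_uint d = 0; d < ndev; d++) {
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                            sizeof name, name, NULL);
            printf("  GPU: %s\n", name);
        }
    }
    return 0;
}
```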
 
In this day and age, the GPU manufacturers have made their own problem. They've made everything so powerful that the CPU has very little processing to do now, as what used to be CPU-intensive is thrown at the GPU instead. So the CPU has so much spare power that it can run Havok without breaking a sweat and doesn't need a dedicated PhysX unit. The same sort of thing goes for CUDA: while it's nice for that extra bit of processing power, CPUs will always get faster, and that little extra advantage diminishes quickly.
 
Care to elaborate?

Nope. You seem more interested in fighting than having a conversation. You clearly have a zeal about this matter and will not see reason. You make claims that others have directly disputed with technical data, and you believe in what you "feel should work" when you clearly have no technical understanding of what's going on. I'm more than willing to explain something to someone who wants to understand, but it's clear you'd much rather fight. Sorry, I've got better things to do with my life than argue with someone on the internet. Have a nice day.
 
This reminds me of the EAX standard that Creative makes. Of course, only their sound cards get EAX 5.0, while every other sound card is stuck on EAX 2.0 or less.
Asus Xonar sound cards support EAX 5.0 effects if you turn on "GX Mode" in the driver control panel.
 
Nope. You seem more interested in fighting than having a conversation. You clearly have a zeal about this matter and will not see reason. You make claims that others have directly disputed with technical data, and you believe in what you "feel should work" when you clearly have no technical understanding of what's going on. I'm more than willing to explain something to someone who wants to understand, but it's clear you'd much rather fight. Sorry, I've got better things to do with my life than argue with someone on the internet. Have a nice day.

Uh huh... I don't understand how wanting an open platform that everyone can use is "not seeing reason." I apologize that I don't believe in company-specific features when other products can do the same things; it's just a company stifling innovation and yelling "MINE!"

I believe in sharing and being fair. Using a CPU or OpenCL for physics is fair; that way everyone gets to enjoy the benefits, instead of having something like Nvidia PhysX, which gets used more often than not because Nvidia PAYS devs to use it. The Witcher 3 is the perfect example of this: instead of Red coding physics for OpenCL (for next-gen and all GPUs), Nvidia pays Red to use PhysX so they can scream "EXCLUSIVE!"

Now, if you are an Nvidia fanboy, you are biased. If you are not, then I don't see how you can disagree with me. I am not a fanboy, hence my views are unbiased and fair. Also, before you go off about how I own a 7970, just know that my previous cards were a 580 and a 480 before that. I choose based on performance-to-price, thus I have a 7970 now.

I think the main problem with OpenCL and GPU compute is that Nvidia, until the 780 and Titan, completely sucked at it. Hell, even back in the day when PhysX was made to run on ATI, the ATI GPU was FASTER (ATI 3870). This is why PhysX still exists: NOT because it's better, but because Nvidia can keep it locked down.

Why do you think Nvidia wanted to write the PhysX drivers for AMD cards themselves? So they could limit performance, because they knew AMD would be better at it. This isn't opinion, it's fact. Until the 780 and Titan, AMD ruled GPU physics power-wise.

I do not wish to fight, but at the same time, people who don't seem to really know what they're talking about want to push their views with nothing to back them up. That's irritating.
 
All I see is a salesman spouting the usual bullshit that all salesmen spout. Why anybody would EVER listen to anything a salesman says is beyond me.
 
Allow me to cure your ignorance. Your OC'd CPU is capable of roughly 0.1 teraflops; a Tesla K20X (roughly a 780) is capable of 4.0 teraflops. That's 40 times the performance.

CPUs and GPUs are built for different tasks, which is the reason you aren't running your graphics off your CPU. So no, your CPU is not built to handle massively parallel problems, and it performs poorly at them. It has nothing to do with Nvidia artificially limiting your CPU and everything to do with your CPU not being good at the task.

Since everyone loves car analogies, it's kind of like entering a dragster into a Formula One race: "Don't tell me my dragster isn't fast and can't do well in a Formula One race."
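
For anyone who wants to check the arithmetic, here's a back-of-the-envelope sketch; the i7 figures are my own assumption for a hypothetical 2013-era quad-core, while the K20X numbers match Nvidia's published spec:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical quad-core i7 at 4.0 GHz: 8-wide AVX with separate
       multiply and add ports = 16 single-precision FLOPs/cycle/core.
       Sustained throughput usually lands well below this peak, which
       is where a ~0.1 TFLOP/s real-world figure comes from. */
    double cpu_peak = 4 * 4.0e9 * 16;     /* ~0.26 TFLOP/s */

    /* Tesla K20X (published spec): 2688 CUDA cores at 732 MHz, each
       retiring a fused multiply-add (2 FLOPs) per cycle. */
    double gpu_peak = 2688.0 * 732e6 * 2; /* ~3.93 TFLOP/s */

    printf("CPU peak ~%.2f TFLOP/s, GPU peak ~%.2f TFLOP/s (%.0fx)\n",
           cpu_peak / 1e12, gpu_peak / 1e12, gpu_peak / cpu_peak);
    return 0;
}
```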

No, I'd say he is right; there has been testing to prove your statement wrong. Just comparing PhysX performance between multiple Nvidia cards should be enough to cause concern over performance ceilings.
 
Everybody arguing needs to head over to the Steam hardware survey and take a peek at how many people don't give a damn about what this guy is saying. AMD is supposedly so amazing, yet they're perfectly content playing second fiddle to Intel and Nvidia, and the people who go to bat for them while completely ignoring the numbers are beyond me. When they start posting great profits consistently, write drivers that work, and actually want to be number one, then people can argue the validity of this dude's statement. Until then, you grand-wizard-of-code know-it-alls are wasting your time.
 
PhysX is an "utter failure"...really?

Last time I checked it was in a LOT of games. Maybe not GPU PhysX, but still.
 
PhysX is an "utter failure"...really?

Last time I checked it was in a LOT of games. Maybe not GPU PhysX, but still.

Has it sold Nvidia GPUs? I'm sure that was part of their initial motivation for it.
I'm not sure what it costs to license PhysX for a game.
 
Has it sold Nvidia GPUs? I'm sure that was part of their initial motivation for it.
I'm not sure what it costs to license PhysX for a game.

Unity, Unreal, Gamebryo, and Torque all use it in their engines, as do other solutions like Ogre, Panda, and Autodesk's tools. So what do the developers pay for it? Nothing, really, depending on the package. What do the companies that provide the engines, like Bethesda, Epic, etc., pay for it? I have no idea.

I'm not sure what the backlash is against it. Half the people posting can't really give a solid answer as to why "this" one sucks and "that" one is better. I mean, hardware acceleration for PhysX is kind of pointless, because CPU power these days will still have room to spare for other tasks after physics is taken care of; even mobile CPUs. But the argument of PhysX vs. Havok is pretty pointless too. Havok is a company that provides different levels of physics solutions, so it's like saying Unreal Engine is better than Unity because it provides real-time shadows and static batching at the free tier; then someone will come in and say Unity is better because it compiles to all major console, mobile, and desktop platforms.

For decisions like physics, I think in the end most game developers know what they're getting into, and if they chose a certain physics engine for their game and the game works and is fun, what the hell is the complaint about?
 
For decisions like physics, I think in the end most game developers know what they're getting into, and if they chose a certain physics engine for their game and the game works and is fun, what the hell is the complaint about?

Plenty, when one can't use the physics engine in question.
 
AMD was willing to use PhysX, but Nvidia wanted far, far, far too much.
That is a 100% fictitious claim. I don't think Nvidia ever offered to license it to AMD; at least, I cannot remember or even find any such offer in a search.
 
The only reason DirectX works is because OpenGL is not a real replacement. DirectX is a package deal that includes D3D and tons of other tools to help with development. Plus, I hate OpenGL for its driver implementation.
 
PhysX is an "utter failure"...really?

Last time I checked it was in a LOT of games. Maybe not GPU PhysX, but still.

All next-gen consoles support it too. It's not a failure at all; its potential will only get bigger.

I like CUDA. Some of my programs require it too :/
 
Hmmm, AMD saying it's doomed... yet it's licensed on both the PS4 and XB1... the hell is wrong here?
 
Hmmm, AMD saying it's doomed... yet it's licensed on both the PS4 and XB1... the hell is wrong here?

It's being pushed on the PS4 and XB1 by Nvidia. I don't know if any games are using it, but Nvidia is actively pushing it.

Even more hilarious is that they're pushing PhysX 3.x on both systems, when they won't give PhysX 3.x the time of fucking day on PC.
 