Nvidia Physx strikes again: Cryostasis tech demo

What potential? I never said hardware-accelerated physics is bad. What I mean is that PhysX won't be the main physics engine in the future, at least not until more games support it.

I'm just trying to find out what you're arguing against... The technology or the business side.
The business side I can understand; there's no way of knowing what the world of physics will look like in 5 years, so everyone will have their own theory of how it could unfold. I can even understand that an AMD/ATi supporter such as yourself would rather see PhysX fail than succeed.

But some people actually seem to be in denial about the quality of the PhysX API, or about the massive increase in processing power that the nVidia GPUs deliver, no matter how much proof they see in all kinds of games and tech demos.
I don't understand that.

I can't recall people railing against the 3dfx Voodoo when it came out, arguing that CPUs would be faster anyway and specialized hardware would never work.
On the contrary, gamers and enthusiasts absolutely LOVED it, and many got one, even at the time when only a handful of games supported them.
What is different now? Is nVidia just the company that people love to hate?
 
So when was this statement made? Probably when it was still Ageia with their PPU card? Or when Havok FX was just a rumour? I'm quite sure that they have a completely different view on the situation today.

Yeah it was. Maybe things have changed but we'll just have to wait and see.
 
Yeah it was. Maybe things have changed but we'll just have to wait and see.

I can understand why they wouldn't want to use PhysX back when you required a PPU.
I can also understand why they wouldn't want to use Havok FX. That solution was doomed from the start.

But today the story is way different. The only reason why they wouldn't want to use PhysX now is because they already developed their own CryPhysics system, and might want to hang on to that. I certainly don't expect them to modify CryEngine 2.0 to replace CryPhysics with PhysX.
However, with CryEngine 3.0 they might move over to PhysX.
 
I'm just trying to find out what you're arguing against... The technology or the business side.
The business side I can understand; there's no way of knowing what the world of physics will look like in 5 years, so everyone will have their own theory of how it could unfold. I can even understand that an AMD/ATi supporter such as yourself would rather see PhysX fail than succeed.

But some people actually seem to be in denial about the quality of the PhysX API, or about the massive increase in processing power that the nVidia GPUs deliver, no matter how much proof they see in all kinds of games and tech demos.
I don't understand that.

I can't recall people railing against the 3dfx Voodoo when it came out, arguing that CPUs would be faster anyway and specialized hardware would never work.
On the contrary, gamers and enthusiasts absolutely LOVED it, and many got one, even at the time when only a handful of games supported them.
What is different now? Is nVidia just the company that people love to hate?
Kyle must be an AMD supporter as well, right? :rolleyes:
 
DirectSound has already been killed off in Vista and games now use OpenAL, which is also supported on current consoles.
DirectSound is currently taking a back seat while developers transition to Microsoft's native XAudio2 API, X3DAudio libraries and XACT toolset. OpenAL's future, unfortunately, stands to be fairly bleak: it's in dire need of a complete specification overhaul, and that's not something NVIDIA or Creative seem to have any interest in at the moment.

I am so getting banned for this, but seriously, I find it hard to listen to your warped view of the tech world.
Both EA and John Carmack are taking OS X somewhat seriously these days, actually. As a gaming platform, it's absolutely dwarfed by Windows PCs, but that's not to say that's what the not-too-distant future holds for Apple (we'll see).

Well, yeah, that's obviously what's happening with most of the anti-PhysX posters. They're rallying against something the competition has, and that's understandable.
That's ridiculous. I'm arguing against proprietary, vendor-specific solutions, not rallying against NVIDIA nor am I blindly allegiant to AMD (I don't even own any AMD hardware!). If this was a situation of AMD having bought PhysX, I'd have the exact same stance that I do now.

The bottom line is that, as good consumers and as fans of the PC platform, we should find such solutions unacceptable. Just because there's no realistic alternative at this time doesn't mean we should blindly follow NVIDIA into a very uncertain and potentially quite turbulent future.

Hardware-accelerated physics may be the technology the PC needs to maintain its title as the technologically superior gaming platform, but this doesn't even remotely resemble an ideal way to introduce this technology into the mass market. I think Valve, Crytek and others have made the right decision to not pursue a hardware physics future as defined by a single company.
 
I don't think so. If you look at the Steam Hardware Survey, for example, you'll see that there are more people with a GeForce 8 or higher card than there are people with a CPU with more than two cores.
So as for what is 'out there', it's not CPU cores, it's the nVidia GPUs.

Yes, but compare how many people have two cards to how many have two cores; I think more people will have more cores, and that will always be the case.

Well, you touched on two things there. In terms of gameplay physics, there probably is enough room to grow on CPUs. It's up to developers to be more creative with their gameplay physics integration, the way Valve did with HL2. However, the vast majority of physics effects are too compute-intensive for even the fastest CPUs, especially those involving particle systems or cloth simulation.
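To give a rough idea of where that cost comes from, here's a toy mass-spring cloth step. This is a Python sketch of the general technique (Verlet integration plus constraint relaxation), not any engine's actual code; the numbers are made up for illustration.

```python
import math

def cloth_step(points, prev_points, springs, rest_len, dt=1/60, iters=10):
    """One Verlet step for a mass-spring cloth.

    Cost grows with particles * springs * solver iterations per frame,
    which is why large cloths swamp a CPU but map well to the
    thousands of parallel threads on a GPU.
    """
    # Verlet integration: extrapolate from the last two frames, add gravity on z.
    new = [(2 * x - px, 2 * y - py, 2 * z - pz - 9.81 * dt * dt)
           for (x, y, z), (px, py, pz) in zip(points, prev_points)]
    # Constraint relaxation: nudge each spring back toward its rest length.
    for _ in range(iters):
        for i, j in springs:
            (x1, y1, z1), (x2, y2, z2) = new[i], new[j]
            dx, dy, dz = x2 - x1, y2 - y1, z2 - z1
            d = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-9
            f = 0.5 * (d - rest_len) / d
            new[i] = (x1 + dx * f, y1 + dy * f, z1 + dz * f)
            new[j] = (x2 - dx * f, y2 - dy * f, z2 - dz * f)
    return new
```

A real cloth has thousands of particles and tens of thousands of springs, and that inner loop runs every frame, which is where a CPU runs out of steam.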

The other point with regard to "other graphics" is what I was referring to earlier. Why is it that "other graphics" must be maxed out before "physics graphics" gets a turn? Is one more important than the other? Would better HDR have a greater impact on your experience than realistic water? What about a game like Bioshock where water was everywhere? It's not a black and white decision, there are games where physics will have a greater impact than the "other graphics" effects.

I agree on the graphics front. They can use GPU physics for "effects" and that is great. I would buy whichever card made games look the best, and I hope they do that. I want games looking the best they can. But since it won't be available on all cards, it will stay just that: an effect.

To me, the true benefit of physics is not clothes that sway in the wind. Not flowing hair. It's the gameplay physics. And until it's universal hardware I don't see developers shutting the door on non-Nvidia cards where it affects gameplay and thus, it's not a viable solution.
 
I think Valve, Crytek and others have made the right decision to not pursue a hardware physics future as defined by a single company.

Nothing is stopping developers from coding their own libraries in OpenCL that will run on all hardware. Havok provided a solution for CPU physics but that doesn't mean all developers had to use it.
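To make that concrete: the core of such a library is an embarrassingly parallel per-body update. Here's that update written as plain Python for illustration; an OpenCL kernel would run the body of the loop once per work-item, one particle each. This is a sketch of the general idea, not any actual library's API.

```python
def integrate(positions, velocities, dt, gravity=-9.81):
    """Semi-implicit Euler step for a batch of particles.

    Each particle is independent, which is exactly the data-parallel
    shape that OpenCL (or CUDA) kernels are built for.
    """
    out_pos, out_vel = [], []
    for (px, py, pz), (vx, vy, vz) in zip(positions, velocities):
        vz2 = vz + gravity * dt                      # update velocity first
        out_vel.append((vx, vy, vz2))
        out_pos.append((px + vx * dt, py + vy * dt, pz + vz2 * dt))
    return out_pos, out_vel

# One particle, moving along x, falling under gravity:
pos, vel = integrate([(0.0, 0.0, 10.0)], [(1.0, 0.0, 0.0)], dt=0.1)
```

Because no particle reads another particle's output, the same code ports to any vendor's GPU without changes to the algorithm, which is the point of a vendor-neutral library.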
 
Demo posted for download - http://www.bigdownload.com/games/cryostasis-the-sleep-of-reason/pc/cryostasis-physx-tech-demo/

Can someone check it out and let us know if it's worth all the fuss?

Never mind, didn't realize it was on nzone two days ago :eek: How come there's so little feedback on this? Are people not checking it out? I haven't had a chance to, since I'm down in Mexico for a few days.

I already checked it out the day it was posted in the forums. We can't interact with the world, so it's just like a video running on the game engine. Physics are in full effect and the water effects are superb. There's a part where they look a bit odd, since each water drop seems to "pile up" in a way similar to the T-1000 in Terminator 2, when it "glues" itself back together. Still, we see the water's path change with the objects that are placed in or removed from its way, which is something I'd never seen in real time without it being scripted. At the end, we get benchmark information. With my system, I get an average of 25 fps with everything set to max @ 1280x1024 and DX10.
 
Hardware-accelerated physics may be the technology the PC needs to maintain its title as the technologically superior gaming platform, but this doesn't even remotely resemble an ideal way to introduce this technology into the mass market. I think Valve, Crytek and others have made the right decision to not pursue a hardware physics future as defined by a single company.

And they have done so, because:

1) They didn't have any viable solution at the time they developed their games
2) They still use the same engine from a few years ago, which means they would need to rewrite part of their code to introduce a new physics API, which wouldn't make any sense.

Crytek is 1) and Valve is 2) obviously.

Also, almost all technology starts this way. What makes it or breaks it is the adoption rate. Given the good relations NVIDIA has with many game companies, this has a good chance to succeed, and we already saw the first step, with companies such as 2K, EA and THQ, which have many well-known franchises, jumping on the PhysX bandwagon. It's all a matter of time now.
 
To be honest, that statement doesn't make a lot of sense to me (not the first time with Mr. Yerli either).
Crysis is one of the most scalable games on the market today, as far as physics go. They have 4 different levels of physics.
That kind of scalability is EXACTLY what you'd want for accelerated physics... just like it's what you want to handle single-, dual- and quad-core CPUs.

I think his point was that they don't want to add a hardware-assisted "level 5" physics setting that offers significantly different game play. Sure, you could animate even more particles and rigid bodies and soft bodies and cloth and bla bla bla, but when it starts to affect game play they quite rightly don't want to end up having to design 2 separate games, for people with and without physics acceleration.
 
Yes, but compare that to who has two cards vs. two cores and I think more people will have more cores and that will always be the case.

I don't see your point?
You don't need two videocards to use hardware-accelerated physics.
You do however need more than two cores to get more physics, because current games already swamp a dual-core processor with their lightweight physics.

Which is why I compare quad-core CPUs to 8-series GPUs.
 
Yes, but compare how many people have two cards to how many have two cores; I think more people will have more cores, and that will always be the case.

You don't need two graphics cards to enjoy these effects. One is sufficient, and obviously I'm not talking about a low-end one, but a mid-range card such as the 9800 GTX is more than enough at this point. As more and more complex effects appear, it won't be enough anymore, but by then a GTX 280 will be mid- or low-range, since that's pretty much how tech evolves.

Chameleoki said:
I agree on the graphics front. They can use GPU physics for "effects" and that is great. I would buy whichever card made games look the best, and I hope they do that. I want games looking the best they can. But since it won't be available on all cards, it will stay just that: an effect.

To me, the true benefit of physics is not clothes that sway in the wind. Not flowing hair. It's the gameplay physics. And until it's universal hardware I don't see developers shutting the door on non-Nvidia cards where it affects gameplay and thus, it's not a viable solution.

The way PhysX works, you'll never be "shutting the door on non-NVIDIA cards", since it runs on a CPU and/or PPU as well. You would just need to drop down the quality of the physics effects, just like you do with graphics options.

If that logic of yours applied to graphics too, game developers wouldn't provide so many graphical options, so that they wouldn't be "shutting the door on non-SM3 cards", or "non-DX10 cards"...you get the point.
 
What sort of CPU usage are people getting with GPU PhysX disabled? One thing that hasn't been properly fleshed out is whether Nvidia's CPU PhysX is properly optimized for multi-core CPUs.
 
I think his point was that they don't want to add a hardware-assisted "level 5" physics setting that offers significantly different game play. Sure, you could animate even more particles and rigid bodies and soft bodies and cloth and bla bla bla, but when it starts to affect game play they quite rightly don't want to end up having to design 2 separate games, for people with and without physics acceleration.

Yeah, but I still don't see his point. His own game is proof that you can create a game with scalable physics without significantly different gameplay.
Basically the exact same arguments for or against accelerated physics in Crysis also apply to dual-core/quad-core CPUs. So there's no way they can defend their choice of quad-core support while arguing against hardware-accelerated physics support.
 
The way PhysX works, you'll never be "shutting the door on non-NVIDIA cards", since it runs on a CPU and/or PPU as well. You would just need to drop down the quality of the physics effects, just like you do with graphics options.

If that logic of yours applied to graphics too, game developers wouldn't provide so many graphical options, so that they wouldn't be "shutting the door on non-SM3 cards", or "non-DX10 cards"...you get the point.

Again, I agree as far as graphical (or minor gameplay) physics goes. You can have the choice of CPU or if you have a supporting GPU, get better framerates by using that. That is awesome and I can see that happening.

But not great gameplay physics. If it would affect the game, you can't just have GPU-only physics or the game WILL shut the door on non-supported cards. "Turning down" physics in that scenario would not be an option in a multiplayer game because you would get totally different experiences.

For example: throw a grenade with a physics GPU and it calculates each fragment, its velocity, penetration through materials, etc. The other person without a physics GPU throws a grenade and the system has to "dumb down" the physics making it more of a dice-roll if it hit the other person. That isn't fair. That's my point. I WANT these types of physics in games but until there's a standard, we'll just get minor effects (which I'll still take!).
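Roughly, the split I mean looks like this. A hypothetical Python sketch, with made-up function names and numbers, just to show how the two code paths give genuinely different outcomes:

```python
import math
import random

def fragment_hits_full(frag_dir, frag_speed, target_pos, target_radius,
                       dt=0.01, steps=200):
    """Full simulation: trace one fragment's ballistic path and test
    whether it ever passes within target_radius of the target."""
    x = y = z = 0.0
    vx, vy, vz = (frag_speed * c for c in frag_dir)
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        z += vz * dt
        vz -= 9.81 * dt  # gravity
        if math.dist((x, y, z), target_pos) < target_radius:
            return True
    return False

def fragment_hits_diceroll(distance, rng):
    """Fallback with no per-fragment physics: a range-based dice roll."""
    return rng.random() < max(0.0, 1.0 - distance / 10.0)
```

With the full path, whether a fragment hits depends on geometry and cover; with the fallback, it's pure chance scaled by distance. Mixing the two in one multiplayer match is exactly the fairness problem.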
 
[Valve] still use the same engine from a few years ago, which means they would need to rewrite part of their code to introduce a new physics API, which wouldn't make any sense.
Valve had to rewrite large sections of renderer code to implement HDR, so Valve's no stranger to code rewrites. They could have rewritten the physics framework on multiple occasions now and haven't, and Valve's made no commitment to implementing hardware physics at any point (Gabe's even gone so far as to call PhysX "inadequate").

Crytek has also made no mention of whether or not PhysX is going to make it into future engines, but Cevat's hinted at the possibility that CryEngine 3 will not feature hardware physics.

Given the good relations NVIDIA has with many game companies, this has a good chance to succeed, and we already saw the first step, with companies such as 2K, EA and THQ, which have many well-known franchises, jumping on the PhysX bandwagon.
How many developers use PhysX for hardware-accelerated physics is irrelevant to me. It could be 1% or 99% of games -- that doesn't really matter much to me. The issue is that it's proprietary and vendor-locked.
 
Again, I agree as far as graphical (or minor gameplay) physics goes. You can have the choice of CPU or if you have a supporting GPU, get better framerates by using that. That is awesome and I can see that happening.

But not great gameplay physics. If it would affect the game, you can't just have GPU-only physics or the game WILL shut the door on non-supported cards. "Turning down" physics in that scenario would not be an option in a multiplayer game because you would get totally different experiences.

For example: throw a grenade with a physics GPU and it calculates each fragment, its velocity, penetration through materials, etc. The other person without a physics GPU throws a grenade and the system has to "dumb down" the physics making it more of a dice-roll if it hit the other person. That isn't fair. That's my point. I WANT these types of physics in games but until there's a standard, we'll just get minor effects (which I'll still take!).

That's not a good example, because physics calculations are usually toned down (or turned off completely) in multiplayer games anyway. You can't do much, physics-wise, in a multiplayer environment, because increasing the number of variables for the existing "lesser" physics already starts stressing CPUs.

As far as single player goes, even though you may not SEE the most realistic effects with multiple particles, you can still have enough particles to simulate the final effect, even if it's not as realistic.

And that's basically what happens now, in games like Crysis for example. You can interact with vegetation, and when you do, passing through plants not only makes them move in response to you, it also makes a sound that can draw the attention of any KPA or Alien nearby. By toning the physics down, you don't see the plant move, but you HEAR the same sound, which conveys the same basic effect, just less realistically. The gameplay mechanic still works, because you know the sound may alert enemy forces, so you avoid interacting with plants when enemies are near; you just don't see the proper effect.
 
Valve had to rewrite large sections of renderer code to implement HDR, so Valve's no stranger to code rewrites. They could have rewritten the physics framework on multiple occasions now and haven't, and Valve's made no commitment to implementing hardware physics at any point (Gabe's even gone so far as to call PhysX "inadequate").

You think that implementing HDR is anywhere near as complex as changing the physics API? Think again.

phide said:
Crytek has also made no mention of whether or not PhysX is going to make it into future engines, but Cevat's hinted at the possibility that CryEngine 3 will not feature hardware physics.

I know nothing about Crytek's intentions, but I find it funny that you, of all people (given your past comments about Cevat), put any faith in his "hints".

phide said:
How many developers use PhysX for hardware-accelerated physics is irrelevant to me. It could be 1% or 99% of games -- that doesn't really matter much to me. The issue is that it's proprietary and vendor-locked.

The only other physics API that exists (Havok) is just as proprietary and vendor-locked, plus it's limited to CPUs only, so I'm not seeing your point against PhysX here...
 
Because the success of hardware physics is at least partially dependent on heightening gameplay interaction, which hardware-accelerated graphics wasn't (and still isn't, really). If we assume that the mid-range and high-end discrete graphics market is a 70/30 split between NVIDIA and AMD (for the sake of the argument), and if that split maintains for the next three years, when can developers begin to rely on hardware-accelerated physics for essential interactions? When does hardware acceleration become mandatory so developers can have a fixed target for physics interactions? When does hardware physics become less of a visual effect and more of a staple for innovative gameplay?


You don't believe that developers want to reach the widest target audience possible? Believe me: given the choice between two solutions, one proprietary and tied to vendor-specific hardware, the other open to a wide range of hardware, which would a developer choose?

They may not have a choice at this point, and they find it acceptable because it's the only solution currently available, but they certainly do care about spending resources on features that only NVIDIA owners can experience.


You have empirical evidence that, at one time, things were a certain way, and as time progressed, they're now entirely different? How exactly does that make any kind of case for PhysX?

I don't subscribe to the "suck it up and buy into the bullshit" mentality anymore. As a consumer, I have every choice, and I've chosen not to support a mere annoyance of a blip on what will be the whole of the physics processing radar that is PhysX. The future -- and it is absolutely a non-vendor-locked future -- is so much brighter.

I tend to agree with you. It just wouldn't make sense for a company to produce a game that's only compatible with 70% of the market (for argument's sake) when other good physics solutions are available that could be compatible with 100% of the market (a hypothetical situation).

Plus I doubt that game developers would want to be at the whim of NVIDIA.... just doesn't make good business sense.
 
Valve had to rewrite large sections of renderer code to implement HDR, so Valve's no stranger to code rewrites.

HDR is no big deal. You just change the render target to floating point, update your shaders, add an extra tone-mapping post-processing step (or replace your current processing with an HDR version) and you're done.
Most of the work is in fine-tuning the shaders and content, not in changes to render code.
You can't compare it to physics at all... physics integrates into the core of your engine, because it's directly linked to every object in your world. Moving from Havok to PhysX would mean changing all your data structures, and the APIs also work in different ways, so you have to rethink a lot of your game logic.
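For reference, that tone-mapping step boils down to a simple per-pixel curve. Here's the classic Reinhard operator as a Python sketch; real engines implement this in a pixel shader, and the choice of curve here is my example, not anything a specific engine documents:

```python
def reinhard_tonemap(hdr_pixels, exposure=1.0):
    """Compress unbounded HDR colour values into [0, 1) for display.

    Classic Reinhard curve: c / (1 + c). Bright values saturate
    smoothly instead of clipping, which is the whole point of HDR.
    """
    return [tuple(c * exposure / (1.0 + c * exposure) for c in (r, g, b))
            for r, g, b in hdr_pixels]

# A pixel at luminance 1.0 maps to 0.5; a very bright pixel approaches 1.0.
ldr = reinhard_tonemap([(1.0, 1.0, 1.0), (3.0, 0.0, 9.0)])
```

Note how self-contained this is: it touches only the final output stage, whereas a physics swap reaches into every object in the world.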

(Gabe's even gone so far as to call PhysX "inadequate").

Gabe also said that DX10 was useless. I don't put too much value in what he says, because he's quite transparent... He always argues against everything that his outdated Source engine doesn't support.
 
I tend to agree with you. It just wouldn't make sense for a company to produce a game that's only compatible with 70% of the market (for argument's sake) when other good physics solutions are available that could be compatible with 100% of the market (a hypothetical situation).

Why do people insist on this notion?
First of all, the other "good" physics solution out there is Havok, and it's limited to CPUs only. PhysX runs on CPUs too, but it can also run on CUDA-ready GPUs and PPUs. How is that producing a game compatible with only "70% of the market"?

pillagenburn said:
Plus I doubt that game developers would want to be at the whim of NVIDIA.... just doesn't make good business sense.

So what do they do? Not use physics at all? Because using Havok puts them at the whim of Intel...
 
You think that implementing HDR is anywhere near as complex as changing the physics API? Think again.
I never said that ;)

I know nothing about Crytek's intentions, but I find it funny that you, of all people (given your past comments about Cevat), put any faith in his "hints".
Oh, I have no faith in anything Cevat says, believe me (the guy's probably a pathological liar, actually), but that's all I can go on at this point.

The only other physics API that exists (Havok) is just as proprietary and vendor-locked, plus it's limited to CPUs only, so I'm not seeing your point against PhysX here...
How is Havok vendor-locked? It operates across any x86/x64 CPU: AMD, Intel, VIA, etc.

HDR is no big deal. You just change the render target to floating point, update your shaders, and you're done.
That's fine and dandy if you want to omit half of the HDR "equation" :)

Gabe also said that DX10 was useless. I don't put too much value in what he says, because he's quite transparent... He always argues against everything that his outdated Source engine doesn't support.
He said the PS3 was inadequate at the same time he said PhysX was. That was when Valve had already released a PS3 version of the Orange Box (outsourced to EA). So, uh...
 
How is Havok vendor-locked? It operates across any x86/x64 CPU: AMD, Intel, VIA, etc.

In case you didn't know, x86 is owned and controlled by Intel as well.

That's fine and dandy if you want to omit half of the HDR "equation" :)

Really, and what is the "HDR equation" according to you? Can't wait to hear that.

He said the PS3 was inadequate at the same time he said PhysX was. That was when Valve had already released a PS3 version of the Orange Box (outsourced to EA). So, uh...

So the PS3 version didn't perform very well? I dunno, I fail to see the relevance anyway.
 
In case you didn't know, x86 is owned and controlled by Intel as well.

Yep, some people don't realize that a lot of these "standards" were brought about through monopoly, not consensus. OpenCL is an example of genuine consensus, though; time will tell if Nvidia will ever give AMD free access by porting PhysX to it. They have little incentive to do so, since there's no competition. Maybe some independent party - Valve or Epic - will eventually develop an OpenCL physics library that renders PhysX a moot point, but that day isn't here yet.

I'd really like to know what's going on with AMD and Havok, because they claim they're working with Havok, but Intel has no incentive to push GPGPU; they would love to keep everything on x86, accelerated by their latest 4- and 8-core CPUs.
 
In case you didn't know, x86 is owned and controlled by Intel as well.
...and Intel and VIA licensed x86-64 from AMD. What's your point here, exactly? We're talking about hardware lock-in vis-a-vis PhysX and NVIDIA. I don't exactly see how the Havok API being locked to x86/x64 (on PC platforms, anyway) is in any way similar to the PhysX API being locked to PhysX/NVIDIA hardware.

Really, and what is the "HDR equation" according to you? Can't wait to hear that.
We're going to play games now, are we?

So the PS3 version didn't perform very well? I dunno, I fail to see the relevance anyway.
Yeah, the PS3 version was fairly shoddy, actually (but due to the lackluster port job, not due to the PS3's "deficiencies"). As for the relevance? Well, if we're talking about things Valve downplays that they don't support (SM4.0, for instance), then why would Gabe downplay a platform his company produces products for in the same breath as he downplays a physics API they seem to have no interest in implementing? That doesn't quite add up to me.

In any case, we're getting way sidetracked here. I'm not even sure where this discussion should be leading at this point.
 
I can't recall people railing against the 3dfx Voodoo when it came out, arguing that CPUs would be faster anyway and specialized hardware would never work.
On the contrary, gamers and enthusiasts absolutely LOVED it, and many got one, even at the time when only a handful of games supported them.
What is different now? Is nVidia just the company that people love to hate?

People are afraid of change... at other times they just like to flock to a popular POV without really understanding what it's about.

Like saying how physics will only be used for eyecandy, while in the next thread they'll be drooling over screenshots of the next FPS. I think it's best to not try to understand, but just to make and/or support these new technologies :)
 
That's not a good example, because physics calculations are usually toned down (or turned off completely) in multiplayer games anyway. You can't do much, physics-wise, in a multiplayer environment, because increasing the number of variables for the existing "lesser" physics already starts stressing CPUs.

I think it'd be pretty cool to have this enabled in multiplayer games, where the server had dual GPUs calculating all the physics!
 
...and Intel and VIA licensed x86-64 from AMD. What's your point here, exactly? We're talking about hardware lock-in vis-a-vis PhysX and NVIDIA. I don't exactly see how the Havok API being locked to x86/x64 (on PC platforms, anyway) is in any way similar to the PhysX API being locked to PhysX/NVIDIA hardware.

Firstly, Intel didn't license x86-64 from AMD... Intel and AMD have a cross-license: AMD is allowed to use Intel's x86, but in return AMD has to give Intel a free license to any extensions to the x86 instruction set.
x86-64 isn't an instruction set, see... it's an extension to x86. Without x86, you can't have x86-64... and x86 is owned by Intel.

Secondly, just like AMD and VIA licensed x86, AMD can license PhysX from nVidia.
All you naysayers: instead of lambasting nVidia for pushing new technology, you should be lambasting AMD for not licensing and adopting the technology.

We're going to play games now, are we?

Well, you made claims that HDR required a lot of changes to the rendering engine. Apparently you cannot back up your own claims when they are challenged.

Yeah, the PS3 version was fairly shoddy, actually (but due to the lackluster port job, not due to the PS3's "deficiencies").

So you basically agree that Gabe makes nonsense claims as long as it suits his agenda.
You already answered your own question there: he blames a poor port job on the hardware.
 
People are afraid of change... at other times they just like to flock to a popular POV without really understanding what it's about.

Ironically enough, I myself was rather late into the 3D-accelerated arena... then again, I was a software-rendering specialist. But I rehabilitated and do OGL and D3D as well now :)
 
Secondly, just like AMD and VIA licensed x86, AMD can license PhysX from nVidia.
How do we know NVIDIA intends to allow licensing PhysX to hardware manufacturers (e.g. AMD)?

There was some momentum on bringing PhysX to AMD hardware as recently as July by an independent developer, with the supposed assistance of NVIDIA's developer network, but nothing's ever materialized from it. Hell, nobody's even really talked about it since July.

Well, you made claims that HDR required a lot of changes to the rendering engine. Apparently you cannot back up your own claims when they are challenged.
That was actually the claim made by Valve developers during the pre-Lost Coast publicity. That wasn't specifically my claim.

So you basically agree that Gabe makes nonsense claims as long as it suits his agenda. You already answered your own question there: he blames a poor port job on the hardware.
No, that isn't quite what Gabe said. Gabe never blamed EA for poorly porting the Orange Box, he merely stated that the PS3 was generally "inadequate".
 
How do we know NVIDIA intends to allow licensing PhysX to hardware manufacturers (e.g. AMD)?

Because they said it in public, for example here:
http://www.custompc.co.uk/news/602205/nvidia-offers-physx-support-to-amd--ati.html

nVidia said:
'We are committed to an open PhysX platform that encourages innovation and participation,' and added that Nvidia would be 'open to talking with any GPU vendor about support for their architecture.'

There was some momentum on bringing PhysX to AMD hardware as recently as July by an independent developer, with the supposed assistance of NVIDIA's developer network, but nothing's ever materialized from it. Hell, nobody's even really talked about it since July.

That's because that particular thing was a hoax.

That was actually the claim made by Valve developers during the pre-Lost Coast publicity. That wasn't specifically my claim.

Well, then they are contradicting what other developers are saying.
However, you were specific about some kind of equation, which you now don't want to delve deeper into...?

No, that isn't quite what Gabe said. Gabe never blamed EA for poorly porting the Orange Box, he merely stated that the PS3 was generally "inadequate".

Of course he doesn't blame EA; he blames the "inadequate PS3" in its place. By saying the hardware is inadequate, he's implying that the software (his responsibility) isn't.
 
Game looks meh and those numbers suck; if you can't get serious frame rates with a single GTX 2xx then it's a joke. I was dragging my heels about selling my 9800 GTX+ because of PhysX; now I'm glad I didn't wait.

Kyle posted the most realistic view of the physics situation in the thread, Havok is it for a good long while.
 
I don't see your point?
You don't need two videocards to use hardware-accelerated physics.
You do however need more than two cores to get more physics, because current games already swamp a dual-core processor with their lightweight physics.

Which is why I compare quad-core CPUs to 8-series GPUs.

If you can only get 35 fps with a GTX-series card, you do need two video cards. If that is the best they can do, GPU-based physics is in trouble. So far it has been a gimmick included in lame games.
 