SixtyWattMan
Play CS 1.6 at 60FPS then 100FPS, feels completely different.
That's only because the GoldSrc engine (i.e. Quake) doesn't have a fixed tic rate for input polling (and movement simulation and so on). The "feel" changes as the frame rate changes because they aren't in direct synchronization.
If you were to play a more modern game with a fixed tic rate, you'd find that the difference in "feel" between running at a locked 60 Hz and a locked 120 Hz is non-existent. With a 60 Hz display, the experience itself would be identical.
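Here's a minimal sketch of the idea (not GoldSrc or Source code; the helpers gameIsRunning/pollInput/simulate/render are hypothetical placeholders): input and simulation advance in fixed-size tics while rendering runs as fast as it can, so the "feel" stops depending on the frame rate.

#include <chrono>

// Hypothetical placeholders for the engine's own functions.
bool gameIsRunning();
void pollInput();
void simulate(double dt);
void render(double alpha);

void runLoop() {
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 64.0;   // fixed tic rate, e.g. 64 Hz
    double accumulator = 0.0;
    auto previous = clock::now();

    while (gameIsRunning()) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Input and movement advance in fixed steps, independent of FPS.
        while (accumulator >= dt) {
            pollInput();
            simulate(dt);
            accumulator -= dt;
        }

        // Render as often as the hardware allows, interpolating between tics.
        render(accumulator / dt);
    }
}

At 60 FPS or 120 FPS the inner loop still runs the same number of tics per second; only how often you see the result changes.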
Not all modern games use fixed tic rates, but many do. If they don't, then by definition they aren't very modern at all.
EDIT: For whatever it's worth, I do actually believe that the Source engine does have a fixed tic for client-server updates. I don't think the "snipe-off" scenario is a valid one.
While this may be true, what people really don't seem to be able to understand is that a game running at 60 frames per second will feel very different from a game running at, say, 120 frames per second, regardless of your display refresh rate.
If you can't notice this, then you probably don't play games very seriously. Now, there's nothing wrong with that, but don't go preaching that frames over 27, 30, 60, whatever, don't matter. It's just not true. I don't care about scientific facts you dig up about the refresh rate of the human eye, reaction times, etc. You're just not seeing the whole picture, and you don't have the experience to say otherwise.
This is especially important in multiplayer games. Player A with a 60 Hz monitor getting 60 fps is just not going to be as competitive as an equally skilled Player B with a 60 Hz monitor getting 100 fps. Proxy has the right of it, though input lag is not how I would describe it.
Since inevitably someone will mention it, note that I am *NOT* saying that getting a higher refresh rate display won't also affect how a game plays or "feels".
Input lag is more affected by using a USB mouse over a PS/2 mouse, or an LCD over a CRT:
http://www.hardware.fr/html/articles/lire.php3?article=632&page=1
And ignore science all you want...facts don't go away for that reason.
You're just not seeing the whole picture, and you don't have the experience to say otherwise.
What the fuck does any of this have to do with Physx?
bla bla bla bla
I just don't play enough games, nor do I have the personal experience with this topic to make any meaningful posts in this thread. Instead I shall resort to asinine forum tactics because you upset me. Have at thee brigand!
If that's the case, then I think you should be quite intensely concerned with some of the things you've stated in this thread.
I'm not particularly concerned with the reasoning. I'm more concerned with people preaching fallacy as fact.
I haven't noticed any. There are changes that do occur at varying server tic rates, but none I've noticed with respect to movement or input polling (or client prediction).
As a long-time competitive CS: Source player, and a general fan of competitive play in FPS games, I can tell you for a fact that the difference in the Source engine is *quite* pronounced.
...What was that?
I'm done with this thread. You are wrong, but it's obvious from this point that you're just going to continue braying at anyone who disagrees with you.
kett said: You're just not seeing the whole picture, and you don't have the experience to say otherwise.
Input lag is more affected by using a USB mouse over a PS/2 mouse, or an LCD over a CRT:
http://www.hardware.fr/html/articles/lire.php3?article=632&page=1
And ignore science all you want...facts don't go away for that reason.
PrincessFrosty, please research this. No, CPUs are not as fast for this as a GPU, just like CUDA kicks the crap out of any CPU when folding, for example.
Proxy, I understand your point and it is your choice and preference. I feel otherwise but then again, if we were all alike there would not be an options menu.
You assume perfect scaling across the cores. That just doesn't happen. As others have said, the GPU is much faster than CPUs at physics processing.
To get back onto the subject of PhysX...
I watched the Batman AA comparison video a couple days ago, and I think this is genuinely the first time I've seen hardware PhysX play a real role in positively contributing to the overall atmosphere of a game. There are apparently a bunch of instances in which things are implemented pretty much just "because they can be", but for the most part, the effects fit within the atmosphere of the game itself and reinforce the idea that the world is physical and believable and that actions on objects have predictable and appropriate responses. Even the cloth simulation, which everyone seems to think is totally misplaced and inappropriate (?), adds a little extra dimension to the world.
Suffice to say that I'd be playing Batman with hardware physics on if I could afford the frame rate hit.
And to think all of this started because I stated that I'd rather have higher amounts of AA/AF, instead of ONE game that can use PhysX, and nVidia shamelessly pushing PhysX.
I never said they were better. What I'm saying is that I'm getting an almost playable frame rate in Batman using the CPU for PhysX while the CPU is barely being used at all, which suggests that if they made better use of the CPU, a playable frame rate would be possible.
I'm not saying the scaling is perfect; it wouldn't have to be to provide a playable fps.
Are you seriously telling me that to obtain an FPS that HardOCP considers playable, about 35-45 fps (let's say 40 fps average, i.e. an additional ~15 fps), that can't be achieved with the 70% of the CPU that's sitting idle?
I think that's very intellectually dishonest.
We've got some good evidence that physics scales very well across multicore CPUs; Valve did testing on this way back when quad cores were quite new and saw very close to linear scaling.
Source here - http://techreport.com/articles.x/11237/2
HardOCP's look at Ghostbusters' multicore scaling showed up to 80% and above, balanced across 4 cores (a toy sketch of how that kind of scaling works follows the link below).
http://www.youtube.com/watch?v=H9boF-JZKcU
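To make the scaling claim concrete, here's a toy sketch (my own illustration, not Valve's or Terminal Reality's code) of why this kind of work parallelizes so well: if the bodies can be partitioned into independent slices, each core integrates its slice with no locking, so throughput grows almost linearly with core count. Real engines partition by islands of interacting objects rather than naive array slices, but the principle is the same.

#include <algorithm>
#include <thread>
#include <vector>

struct RigidBody { float pos[3]; float vel[3]; };

// Integrate one contiguous slice of bodies; slices don't touch each other,
// so no synchronization is needed inside the loop.
void integrateRange(std::vector<RigidBody>& bodies, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i)
        for (int axis = 0; axis < 3; ++axis)
            bodies[i].pos[axis] += bodies[i].vel[axis] * dt;
}

void stepPhysics(std::vector<RigidBody>& bodies, float dt, unsigned workers) {
    std::vector<std::thread> pool;
    const size_t chunk = (bodies.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const size_t begin = w * chunk;
        const size_t end   = std::min(bodies.size(), begin + chunk);
        if (begin < end)
            pool.emplace_back(integrateRange, std::ref(bodies), begin, end, dt);
    }
    for (auto& t : pool) t.join();
    // Collision detection and constraint solving would follow; those phases
    // are what keep real-world scaling at "near" linear rather than perfect.
}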
Let me ask you something: do you have any reason to believe it will scale across the cores in such a highly non-linear way that the additional 70% of CPU time couldn't cover the extra ~15 fps needed to reach a playable frame rate?
Show me some kind of article that demonstrates highly non-linear scaling on the CPU or SOMETHING to make me understand your point of view.
I don't think you have any idea what you are talking about here. CPUs get beaten pretty badly by GPUs in regard to physics processing. What you're talking about is little more than a vague notion that the extra CPU power would somehow be "just as good" or that it would provide similar performance to GPU physics implementations. Or at the very least, you seem to suggest that you could get playable frame rates doing the same things using a multicore CPU for physics processing. What exactly do you base this on?
Scaling is never perfect in any x86 or x64/EM64T implementation. (Though this is getting better all the time.)
I don't know exactly how demanding Batman AA's PhysX effects are and if they could be done even half as well on a CPU. Frankly, based on what you've said, you don't either. Based on what I know concerning the differences in raw computational power between CPUs and GPUs, I seriously doubt the same level of PhysX effects we see in Batman AA can be done with multi-core CPUs vs. GPUs. Now, that's with PhysX; I'm not sure if another physics processing engine could do it better. So far Ghostbusters is one title we can reference, but you can't really compare them directly given that these two games aren't even apples vs. oranges. More like eggplants vs. carrots. Again, GPUs are much more powerful than CPUs in this area. There is no comparison, so what a Geforce GTX 295 may not break a sweat doing might yield unplayable results using a CPU to perform the same physics effects.
So disagreeing with your opinion is intellectually dishonest?
Way back when, huh? I remember that article. What was being done physics-wise in the Source 2 engine doesn't translate to what they can do now, three years later, using a game like Batman Arkham Asylum. Batman Arkham Asylum makes heavy use of physics-based effects. Much heavier usage of them than any game I've seen to date. You aren't just talking about the tech demos running in a given engine from three years ago either. We are talking about a new game, with tons of visual effects that probably weren't being utilized three years ago in the Source 2 engine, and again with the game's engine, AI, larger levels, and sound effects all running at the same time. You are talking about apples vs. olives.
That's Ghostbusters. Different physics engine, different game engine, different everything. Again, you can't compare the two directly. YES, physics effects can be done on the CPU. We've known this ever since Alan Wake was first announced. There really isn't a debate here: GPUs are better than CPUs at physics processing. Period. Now you might be arguing that CPU physics may be "good enough" and that we'd be able to get playable frame rates in these games using that method. To that I say, screw that. I don't want a "playable" method for physics processing, or a "good enough" implementation of physics processing to satisfy those who don't seem to want the envelope pushed for visual quality. I want the best method.
It isn't that simple. Do you know how many GFLOPS it takes to get the same PhysX effects found in Batman Arkham Asylum and maintain playable performance? I doubt that you do. (I don't know either.) All I know is that I'm dedicating a Geforce GTX 280 OC to PhysX support in Batman Arkham Asylum and two more Geforce GTX 280 OC cards to graphics processing. I get about 60 FPS at 2560x1600. I seriously doubt my Core i7 920 @ 4.0GHz can do the same amount of effects just as well and still provide the same frame rates I'm getting today.
You need to look at the differences between GPUs and CPUs in terms of physics processing / computing power. GPUs aren't ideal for every type of processing out there but in areas like physics and other scientific calculations the CPU doesn't hold a candle to the GPU.
A typical CPU is measured in GFLOPS. Video cards are now breaking the TFLOP barrier all the time. The difference is staggering.
http://www.tomshardware.com/forum/262886-28-core-gflops-benchmark
http://www.gpureview.com/GeForce-GTX-295-card-603.html
Granted the comparison isn't always 1:1, nor is it an apples to apples comparison as their individual architectures lend themselves to doing certain tasks better than the other. However, it is widely known that for physics processing, GPUs outclass CPUs, badly.
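As a rough back-of-the-envelope (ballpark peak numbers, not measured physics throughput): a quad-core CPU of this era peaks somewhere around 50-100 GFLOPS single precision (roughly 4 cores x ~3 GHz x 8 FLOPs/cycle with SSE), while the Geforce GTX 295 is rated at roughly 1.8 TFLOPS. That's over an order of magnitude of raw throughput before you even consider how well the workload maps onto each architecture.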
Some more articles on physics processing:
http://www.computerpoweruser.com/Ed...=articles/archive/c0610/28c10/28c10.asp&guid=
Comparisons between CPU and GPU PhysX in Cryostasis:
http://www.techarp.com/showarticle.aspx?artno=644&pgno=5
http://www.hardforum.com/showthread.php?t=1376657
This should showcase the difference between GPU and CPU physics capabilities. Yeah they can implement some physics effects via multi-core CPUs, but it won't hold a candle to what the GPU can do.
All I said was that scaling wouldn't be 100% perfect. It never will be. Sure, you might be able to get close to it, but I doubt that given developers' track records with it so far. It took forever to get developers to utilize two cores, much less four or more. The last thing you actually want to do is max out every core at 100% or get close to it. That leaves no resources for anything else but the game. While some people consider this to be no big deal, many people have background tasks going on which may or may not include AV, torrent apps, VMs or monitoring applications. Beyond that, you make a big assumption that utilizing 70% of the idle processing power across multiple cores would somehow equal, or be "good enough" to get, similar frame rates to what a dedicated GPU already provides. I don't think that's going to be the case.
I think GPU physics processing is the way to go. However, I do agree that being tied to an NVIDIA card for PhysX sucks ass when NVIDIA is trying to disable their own cards' PhysX capabilities if you use an AMD card for rendering. I'd like to see a physics implementation like PhysX that isn't tied to one brand of card or another. Though there isn't a technical restriction on running physics processing on AMD cards, it hasn't been done or pushed the way PhysX on NVIDIA cards has been pushed. So hopefully game developers will start to change this.
Comparing chickens to motorbikes eh?
Easy rigid-body physics that disappear after 10 seconds are nowhere near the same league as interactive smoke.
If it were so easy, why haven't any Havok API games shown this?
(Hint: because CPUs simply don't have the SIMD power to do so; this is another example: http://golubev.com/about_cpu_and_gpu_2_en.htm )
I don't think you have any idea what you are talking about here. CPUs get beaten pretty badly by GPUs in regard to physics processing.
What you're talking about is little more than a vague notion that the extra CPU power would somehow be "just as good" or that it would provide similar performance to GPU physics implementations.
Or at the very least, you seem to suggest that you could get playable frame rates doing the same things using a multicore CPU for physics processing. What exactly do you base this on?
Scaling is never perfect in any x86 or x64/EM64T implementation. (Though this is getting better all the time.)
I don't know exactly how demanding Batman AA's PhysX effects are and if they could be done even half as well on a CPU. Frankly, based on what you've said, you don't either. Based on what I know concerning the differences in raw computational power between CPUs and GPUs, I seriously doubt the same level of PhysX effects we see in Batman AA can be done with multi-core CPUs vs. GPUs.
Now, that's with PhysX; I'm not sure if another physics processing engine could do it better. So far Ghostbusters is one title we can reference, but you can't really compare them directly given that these two games aren't even apples vs. oranges. More like eggplants vs. carrots. Again, GPUs are much more powerful than CPUs in this area. There is no comparison, so what a Geforce GTX 295 may not break a sweat doing might yield unplayable results using a CPU to perform the same physics effects.
I wasn't really comparing these games directly, which again I've said in my previous post. The reason I referenced Ghostbusters is that it's proof that physics engines can be written to scale across multicore CPUs and make use of large-scale physics.
Let me stress that I'm not saying it's as good as or better than a GPU doing it. What I'm saying is that we can make physics engines on the CPU, we can scale them to use almost all of the CPU's resources, and that PhysX doesn't scale like that; in fact it has very bad CPU usage.
So disagreeing with your opinion is intellectually dishonest?
I think you know that if we made use of the other 70% of the CPU we'd likely get playable frame rates in Batman. You've failed to produce any kind of evidence to show that above the current 30% CPU usage PhysX would stop scaling (worse than 4x slower, as calculated above). I think that's intellectually dishonest; if this were anything else, I suspect you'd be inclined to at least question why the resource usage is so small.
It's not as if this is highly unlikely: Nvidia doesn't sell CPUs, so it massively benefits them that you can only do the effects on their cards.
Way back when, huh? I remember that article. What was being done physics-wise in the Source 2 engine doesn't translate to what they can do now, three years later, using a game like Batman Arkham Asylum. Batman Arkham Asylum makes heavy use of physics-based effects. Much heavier usage of them than any game I've seen to date.
I'm not talking about which effects are specifically being used; I'm talking about the scaling that is possible on multicore CPUs.
You aren't just talking about the tech demos running in a given engine from three years ago either. We are talking about a new game, with tons of visual effects that probably weren't being utilized three years ago in the Source 2 engine, and again with the game's engine, AI, larger levels, and sound effects all running at the same time. You are talking about apples vs. olives.
OK, so do you have any evidence, or any reason at all, to think that newer physics effects don't scale as well as older ones?
That's Ghostbusters. Different physics engine, different game engine, different everything. Again you can't compare the two directly.
I wasn't. I've said this over and over: it wasn't a direct comparison, it was used as a proof of concept, to show that it COULD be done.
YES physics effects can be done on the CPU. We've known this ever since Alan Wake was first announced. There really isn't a debate here, GPUs are better than CPUs at physics processing. Period.
Yes, and I agree, but this is not what I'm saying. Again, I've already said this before, so I'm really just repeating myself.
Now you might be arguing that CPU physics may be "good enough" and that we'd be able to get playable frame rates in these games using that method.
Yes! After misrepresenting my position for god knows how many paragraphs, you finally get it.
To that I say, screw that. I don't want a "playable" method for physics processing, or a "good enough" implementation of physics processing to satisfy those who don't seem to want the envelope pushed for visual quality. I want the best method.
This is asinine and quite frankly incredibly selfish; the world doesn't revolve around just what you want. You don't have to give up GPU processing to improve PhysX on the CPU. You can do both well in one package: you can have your super-duper whizbang GPU solution, and those who cannot use GPU PhysX, like ATI users, get fallback effects.
Your argument would make sense if everyone could do GPU-based physics equally, but that's not the case.
It isn't that simple. Do you know how many GFLOPS it takes to get the same PhysX effects found in Batman Arkham Asylum and maintain playable performance? I doubt that you do. (I don't know either.)
No idea; neither of us knows. My argument doesn't rely on knowing the exact GFLOP usage. It relies on knowing the current frame rate and the current CPU usage and extrapolating from there.
All I know is that I'm dedicating a Geforce GTX 280 OC to PhysX support in Batman Arkham Asylum and two more Geforce GTX 280 OC cards to graphics processing. I get about 60 FPS at 2560x1600. I seriously doubt my Core i7 920 @ 4.0GHz can do the same amount of effects just as well and still provide the same frame rates I'm getting today.
Again, you say you don't know the exact usage in Batman yet doubt your CPU could handle it. What are you basing this on? In fact, the i7 architecture is better than the Q9xxx range and you've got a faster clock, so I'm betting you'd get at least 30 fps, if not a little more, in the start area of Batman. You could actually try it and provide some data; it would only take 5 minutes.
You need to look at the differences between GPUs and CPUs in terms of physics processing / computing power. GPUs aren't ideal for every type of processing out there but in areas like physics and other scientific calculations the CPU doesn't hold a candle to the GPU.
I never said CPUs were better. I've acknowledged several times that GPUs are faster at physics calculations, but not everyone can run GPU physics, so until we all can, it makes sense to investigate CPU-based physics.
A typical CPU is measured in GFLOPS. Video cards are now breaking the TFLOP barrier all the time. The difference is staggering.
http://www.tomshardware.com/forum/262886-28-core-gflops-benchmark
http://www.gpureview.com/GeForce-GTX-295-card-603.html
I am aware of this, but I cannot run Batman's PhysX on my GPU, so this is a moot point.
My options in Batman currently are:
1) No PhysX
2) Switch to Nvidia
Why can't there be a third?
3) Run PhysX on the CPU, with Nvidia making it use more than 30% of the CPU, and get at least some of the PhysX effects
Some more articles on physics processing:
http://www.computerpoweruser.com/Ed...=articles/archive/c0610/28c10/28c10.asp&guid=
Comparisons between CPU and GPU PhysX in Cryostasis:
http://www.techarp.com/showarticle.aspx?artno=644&pgno=5
http://www.hardforum.com/showthread.php?t=1376657
This should showcase the difference between GPU and CPU physics capabilities. Yeah they can implement some physics effects via multi-core CPUs, but it won't hold a candle to what the GPU can do.
Yes, I understand GPUs are better at physics than CPUs, but doing physics on the CPU is better than not having it at all. For people without the option of GPU physics, why not do what we can on the CPU? In Batman it's obvious that a lot of the effects could be done there.
All I said was that scaling wouldn't be 100% perfect. It never will be. Sure, you might be able to get close to it, but I doubt that given developers' track records with it so far. It took forever to get developers to utilize two cores, much less four or more. The last thing you actually want to do is max out every core at 100% or get close to it. That leaves no resources for anything else but the game. While some people consider this to be no big deal, many people have background tasks going on which may or may not include AV, torrent apps, VMs or monitoring applications.
Right, so make it an option: physics level 1 with 30% usage, physics level 2 with 60% usage, physics level 3 with 90% usage. We have options for various speeds of GPU already, so we know it's possible, and we have options for almost every other graphics setting to let us balance frame rate vs. image quality. This is just a weak argument.
Beyond that, you make a big assumption that utilizing 70% of the idle processing power across multiple cores would somehow equal, or be "good enough" to get, similar frame rates to what a dedicated GPU already provides. I don't think that's going to be the case.
I never said "same frame rates" as a GPU, I said playable frame rates.
You don't think that's going to be the case, so you must have some reason for thinking that. 30% usage provides 25 FPS, but from the other 70% we can't muster another 10-15 FPS? Do you have any specific reason to think that past a certain point we'd suddenly see a massive degradation in physics scaling? There must be some reason to think this: post an article or an interview with a developer, and let me know why you believe it so strongly.
No one has posted any reason to think that PhysX can't scale almost linearly on the CPU, as I've demonstrated has been done in the past.
I think GPU physics processing is the way to go.
And for the record, I happen to think it's a good move for the industry as it gives us more options. What we need is a GPU-agnostic solution; while Nvidia is in control of PhysX we have a highly biased situation which, in the long run, will just bite gamers in the ass, because at some point they're going to turn around and screw everyone over. There's already evidence of this, as you've mentioned, because they've already stopped support for multi-vendor solutions.
However, I do agree that being tied to an NVIDIA card for PhysX sucks ass when NVIDIA is trying to disable their own cards' PhysX capabilities if you use an AMD card for rendering. I'd like to see a physics implementation like PhysX that isn't tied to one brand of card or another.
I couldn't agree more. I'd love to run Batman's physics on my GPU and play it in all its glory, but while Nvidia is in charge that's always going to be a problem. I don't blame ATI for their stance on support, because they can't afford to let Nvidia corner the market like that, so for the time being there is no one-size-fits-all solution.
Though there isn't a technical restriction on running physics processing on AMD cards, it hasn't been done or pushed the way PhysX on NVIDIA cards has been pushed. So hopefully game developers will start to change this.
I'm not sure this is down to game developers; I think AMD needs to help integrate support and they're being fairly stubborn about it. But while I'd like to play Batman with GPU physics, if it means Nvidia being the sole party in a position of power over GPU physics, that's VERY BAD; they've already proven they're capable of screwing over AMD users.
Had Ageia remained independent and not been bought by Nvidia, I'm sure AMD would be rushing to implement PhysX support, we'd all be able to use GPU PhysX, and the whole CPU thing would be moot.
The problem I have with PhysX is that Nvidia appears to have totally half-assed the CPU implementation. I've messed around with the PhysX library, and even with a room full of rigid boxes and 8 threads, CPU usage never increased above ~15% as the FPS gradually dropped. Not only that, but only 1 of the physics threads was really doing work, 1 was doing a little, and 6 threads were idle.
Now, I'm not saying CPUs are better at physics than GPUs, far from it. What I am suggesting is that Nvidia hasn't bothered to fix the CPU scaling that is supposed to be there, thus making the gap between CPU physics and GPU physics artificially larger.
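For anyone who wants to reproduce that kind of test, here's the rough shape of it. This sketch uses the later PhysX 3.x-style API names (PxDefaultCpuDispatcherCreate and friends) because they're the most readable; the 2.8-era SDK this thread is about spells things differently, but the experiment is the same: ask for N worker threads, drop a pile of dynamic boxes, step the scene, and watch in a profiler how many of those threads actually do work. The box count and dimensions are arbitrary illustration values.

#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator     gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main() {
    PxFoundation* foundation = PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics*    physics    = PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(8);     // request 8 worker threads
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    PxScene*    scene    = physics->createScene(sceneDesc);
    PxMaterial* material = physics->createMaterial(0.5f, 0.5f, 0.1f);

    // "A room full of rigid boxes": a few thousand dynamic actors.
    for (int i = 0; i < 4000; ++i) {
        PxTransform pose(PxVec3(float(i % 20), 1.0f + 0.9f * float(i / 400), float((i / 20) % 20)));
        PxRigidDynamic* box = PxCreateDynamic(*physics, pose, PxBoxGeometry(0.4f, 0.4f, 0.4f), *material, 1.0f);
        scene->addActor(*box);
    }

    // Step for ten simulated seconds; per-thread CPU time is what reveals
    // whether the requested workers are actually being used.
    for (int frame = 0; frame < 600; ++frame) {
        scene->simulate(1.0f / 60.0f);
        scene->fetchResults(true);
    }

    scene->release();
    physics->release();
    foundation->release();
    return 0;
}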
Havok has shown things like that; you just like to ignore them: http://www.youtube.com/watch?v=daZoXzBGea0
Also, the PhysX SDK has sample programs of collide-able particles and guess what? They can run on the CPU, too. So even the PhysX library is capable of doing interactive smoke on the CPU.
No, you're simply not reading what I've typed. I said SPECIFICALLY that I don't think CPUs are better. What I'm saying is that in the case of Batman there is a good chance that, if Nvidia made better use of the CPU, people without Nvidia cards could enjoy a similar level of PhysX effects without having to switch their entire GPU solution to Nvidia.
In general, no; specifically for what is used in Batman, yes. I've provided links to articles from game developers who have built engines that allow almost linear scaling of physics processing on multicore CPUs. I've also provided real-world numbers for Batman's specific use of physics which clearly show that it only uses a tiny fraction of the CPU for PhysX; it's less than 30%, and the frame rate is almost playable already.
Based on evidence (which I've posted) from developers showing near-linear scaling of physics on multicore CPUs, with benchmarks for proof.
If 30% CPU usage can provide 25 FPS in the game engine, then even if the scaling isn't perfect (which we both agree on), the other 70% of CPU time could be used to provide a further FPS boost. Maybe it's highly non-linear and can only produce another 10 fps, but that would still be enough to make the game playable for people with decent CPUs and without Nvidia graphics cards.
Again, I've said several times it's not perfect. The links I posted show near-perfect scaling, but I'm perfectly aware that things don't scale perfectly; they don't need to.
30% for 25 fps is about 0.83 FPS per 1% of CPU time.
Even if that dropped to something like 0.2 FPS per percentage point of CPU time, which is a massive drop (over 4 times less efficient), you'd still get enough additional FPS for it to be playable. And this makes several assumptions which actually weigh against my argument; for example, the 30% usage is PhysX plus the rest of the game code, so the actual percentage being used for PhysX is smaller.
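Spelling that out with the same rough numbers quoted above (observed whole-game CPU usage, not an isolated PhysX measurement):
25 fps / 30% of CPU ≈ 0.83 fps per 1% of CPU time
assumed worst case: 0.2 fps per 1%
idle headroom: 70% x 0.2 ≈ 14 fps extra
25 + 14 ≈ 39 fps, inside the ~35-45 fps range HardOCP calls playable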
How does this make any sense?
You admit you don't know how demanding Batman's PhysX effects are (I suspect quite low), but then you go on to say that you don't think a CPU could match the GPU in Batman. How can you possibly know that without knowing how demanding the game is? It just doesn't make sense.
All I said was that scaling wouldn't be 100% perfect. It never will be. Sure, you might be able to get close to it, but I doubt that given developers' track records with it so far. It took forever to get developers to utilize two cores, much less four or more. The last thing you actually want to do is max out every core at 100% or get close to it. That leaves no resources for anything else but the game. While some people consider this to be no big deal, many people have background tasks going on which may or may not include AV, torrent apps, VMs or monitoring applications.
Right, so make it an option: physics level 1 with 30% usage, physics level 2 with 60% usage, physics level 3 with 90% usage. We have options for various speeds of GPU already, so we know it's possible, and we have options for almost every other graphics setting to let us balance frame rate vs. image quality. This is just a weak argument.
I have no problem with that, but again, I don't think your CPU usage calculations are correct. Your idea has merit though. And actually, PhysX in Batman Arkham Asylum is done this way already: there are adjustable levels of PhysX processing, and it shows what GPU is recommended for your configuration and settings. Mine recommends a Geforce GTX 295 and a Geforce GTX 260 for PhysX at 2560x1600 with everything but AA maxed.
I never said "same frame rates" as a GPU, I said playable frame rates.
That's a matter of opinion. I asked you to back that up with data, and what you've provided I've refuted. I do not believe the same level of effects rendered on a CPU would be playable. Certainly some of them might be, but it wouldn't be the same.
You don't think that's going to be the case, so you must have some reason for thinking that. 30% usage provides 25 FPS, but from the other 70% we can't muster another 10-15 FPS? Do you have any specific reason to think that past a certain point we'd suddenly see a massive degradation in physics scaling? There must be some reason to think this: post an article or an interview with a developer, and let me know why you believe it so strongly.
No one has posted any reason to think that PhysX can't scale almost linearly on the CPU, as I've demonstrated has been done in the past.
CPU scaling doesn't work like that. I don't think 30 FPS = 30% CPU usage; this is hugely variable across systems and game engines. The performance offered at 30% CPU usage by two different systems, one at 4.0GHz and one at 3.0GHz, varies, and it also varies by CPU architecture.
And for the record, I happen to think it's a good move for the industry as it gives us more options. What we need is a GPU-agnostic solution; while Nvidia is in control of PhysX we have a highly biased situation which, in the long run, will just bite gamers in the ass, because at some point they're going to turn around and screw everyone over. There's already evidence of this, as you've mentioned, because they've already stopped support for multi-vendor solutions.
This I can agree with and I've already said as much in the post you just responded to.
I couldn't agree more. I'd love to run Batman's physics on my GPU and play it in all its glory, but while Nvidia is in charge that's always going to be a problem. I don't blame ATI for their stance on support, because they can't afford to let Nvidia corner the market like that, so for the time being there is no one-size-fits-all solution.
I'm not sure this is down to game developers; I think AMD needs to help integrate support and they're being fairly stubborn about it. But while I'd like to play Batman with GPU physics, if it means Nvidia being the sole party in a position of power over GPU physics, that's VERY BAD; they've already proven they're capable of screwing over AMD users.
Had Ageia remained independent and not been bought by Nvidia, I'm sure AMD would be rushing to implement PhysX support, we'd all be able to use GPU PhysX, and the whole CPU thing would be moot.
Agreed.
I'm sorry, but rustling papers, cloth banners and dust effects should not cause a 50% fps drop, especially since they are supported by a dedicated $100+ piece of hardware. All that has been done on the CPU for years without a significant drop. Plus, the difference between CPU and GPU PhysX so far has not been that significant in terms of better IQ. In the games where it does look a little better with PhysX, it's painfully obvious those things are simply purposely lacking from software PhysX and would have been totally doable without the hardware.
More advanced physics effects - lower fps.
Hopefully game developers will start taking advantage of OpenCL and DirectCompute, and start using physics engines that can be accelerated on both companies' GPUs.
I'm sorry, but rustling papers, cloth banners and dust effects should not cause a 50% fps drop, especially since they are supported by a dedicated $100+ piece of hardware. All that has been done on the CPU for years without a significant drop. Plus, the difference between CPU and GPU PhysX so far has not been that significant in terms of better IQ. In the games where it does look a little better with PhysX, it's painfully obvious those things are simply purposely lacking from software PhysX and would have been totally doable without the hardware.
PhysX is probably the most under performing/unoptimized API to ever exist.
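For what it's worth, the classic CPU technique behind in-game banners and capes is cheap enough to sketch in a few lines. This is a generic Verlet cloth toy (my own illustration, not code from any of the games mentioned): integrate each particle from its previous position, then run a few relaxation passes over the springs to keep them near their rest lengths.

#include <cmath>
#include <vector>

struct Particle { float x, y, z, px, py, pz; bool pinned; };
struct Spring   { int a, b; float rest; };

void stepCloth(std::vector<Particle>& p, const std::vector<Spring>& springs, float dt) {
    // Verlet integration with gravity pulling on the y axis.
    for (auto& q : p) {
        if (q.pinned) continue;
        float nx = 2*q.x - q.px;
        float ny = 2*q.y - q.py - 9.81f * dt * dt;
        float nz = 2*q.z - q.pz;
        q.px = q.x; q.py = q.y; q.pz = q.z;
        q.x = nx;  q.y = ny;  q.z = nz;
    }
    // A few relaxation passes pull each spring back toward its rest length.
    for (int pass = 0; pass < 4; ++pass) {
        for (const auto& s : springs) {
            Particle &a = p[s.a], &b = p[s.b];
            float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
            float len = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (len < 1e-6f) continue;
            float k = 0.5f * (len - s.rest) / len;
            if (!a.pinned) { a.x += k*dx; a.y += k*dy; a.z += k*dz; }
            if (!b.pinned) { b.x -= k*dx; b.y -= k*dy; b.z -= k*dz; }
        }
    }
}

A banner is just a grid of particles with the top row pinned; even a modest CPU of this era steps thousands of particles like this well inside a frame budget.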
Only 1 game worth playing for me = Batman
Tradeoff worth it for me? = nope
/post
I honestly don't think that we're going to see a single mainstream implementation of a physics API any time soon (unless someone like Microsoft does step in and finally helps to make a decision on it).
Well, if PhysX is a better implementation than other physics processing engines (I suspect that it might be), then I would hope for it to dominate the market just because it's better. I'm not a programmer or a game developer so I can't really speak to that point. However, if it is better, and it does gain more market share, I would hope that AMD would embrace the technology and license it for themselves. I know that it must suck to have to get something like that from your competitor, but NVIDIA did offer the technology to ATI previously. As it is, AMD already has an x86 license agreement with Intel. A PhysX license from NVIDIA would be no different.
Likewise, if Bullet, Havok or any other technology proves to be better, or at least gain more market share, then I hope NVIDIA would embrace that as well. According to them they can do other forms of PhysX processing on their GPUs as it is.
Unfortunately what may happen is that NVIDIA and AMD will each have their own solutions, and game developers will have to pick one or the other, or worse yet, they'll have to try and put in support for both physics engines into their games. This would mirror the OpenGL vs. Direct X days where a game developer would choose one or the other, and owners of one brand of card would suffer to a degree because the developer chose a technology that their card's counterpart handled better. Direct X ran better on ATI hardware while OpenGL ran better on NVIDIA cards etc. If that happens, the truth is we all lose. That could add development time for game titles and it also penalizes one camp or the other when their favorite game was optimized for the cards they didn't choose.