Dark Void PhysX Gameplay Performance and IQ @ [H]

Dark Void PhysX Gameplay Performance and IQ - Dark Void was recently released with some neat PhysX effects. We’ll explore gameplay performance and show you what PhysX support will do for you in this game with plenty of screenshots and performance comparisons.

If you are buying a new video card and you are concerned about Dark Void, the GeForce GTX 295 will give you the best experience if money isn’t a concern. If it is an issue, the GeForce GTX 275 is your next best bet. If your budget is more meager still, the GeForce GTS 250 continues to be a fantastic value. If you don’t care about PhysX at all, take your pick of any of AMD’s ATI Radeon HD 5800 or HD 5700 series of video cards. They all perform brilliantly and faster than the GeForce series (when PhysX is off) in Dark Void.
 
Why not test DX11 cards against a card that's DX10 only? On fps, ATI/AMD pwns. Making games based only on NVIDIA tech blows and takes away from us gamers. NVIDIA sucks, runs too hot, and is lacking until its DX11 parts come out. Don't buy older, late-to-the-game video cards, and stop being biased and leaning toward one company. Notice the NVIDIA advertisement here too, and no ATI or AMD anywhere. Who's paying whom... maybe this website is made for NVIDIA, LOL? Think about it.
 
Regarding PhysX and the very off-putting performance hit it carries for what little it adds, I'd love to read your thoughts on the Havok physics demonstrated in Bad Company 2. This looks great, but "all looks, no soul" is a saying I've used since the CD-ROM revolution came to PCs, when I dreaded that games would focus on looks over making an actual game.

Does AA work if you rename the executable to something like ut3.exe? I'm 99% sure this problem lies not with AMD lacking a driver, but with scummy dealings in the code, and it's not the first time that has happened with a UE3 game carrying that certain company's name on it.
 
Why not test DX11 cards against a card that's DX10 only? On fps, ATI/AMD pwns. Making games based only on NVIDIA tech blows and takes away from us gamers. NVIDIA sucks, runs too hot, and is lacking until its DX11 parts come out. Don't buy older, late-to-the-game video cards, and stop being biased and leaning toward one company. Notice the NVIDIA advertisement here too, and no ATI or AMD anywhere. Who's paying whom... maybe this website is made for NVIDIA, LOL? Think about it.


We test the cards and games currently available on the market. Your comments are childish at best.

As for the ads, we appreciate NVIDIA's advertising support; that keeps these pages FREE for you to read and complain about. AMD's campaign ended last month and we expect another soon. So as for your suggestion that we are bought and paid for... well, I guess you will just have to deal with that; we have more than proven ourselves in this business.

Your comments are noted, I will ask that you not post in this thread again.

Everyone else, PLEASE keep this ON TOPIC. Thanks.
 
Regarding PhysX and the very off-putting performance hit it carries for what little it adds, I'd love to read your thoughts on the Havok physics demonstrated in Bad Company 2.

We are going to wait for the full version of the game. It is too much work to put into an article only to see things change when the game goes gold. Articles like this are very expensive to produce.
 
Out of curiosity, wouldn't it be possible to manually force AA on through the Catalyst Control Center, similar to how that was done in Batman: AA to force full-screen AA? It's not as efficient, and is kind of a brute-force AA as I understand it; however, with 140+ average fps, I think brute-force AA couldn't hurt?

Did the CPU PhysX option once again use only one core of the Core i7 920's four cores, or was it again limited to a single thread?

I'm surprised that, with DirectCompute and averages running 80 fps above the 60 fps mark, this developer couldn't find some way to make less ugly smoke effects on non-PhysX hardware. I point to developer laziness in this case. I mean, this game uses Unreal Engine 3; isn't that the engine used for Gears of War, which has much better smoke effects and came out years ago on hardware nowhere near as powerful as what's used in this review?

Edit: Never mind the question about performance with PhysX turned on; I found it listed in the apples-to-apples section. Lol at the 2 fps minimum. That must have been rough to test. I am a bit curious about how many cores were utilized, though :)
 
Well, NVIDIA is screwing with everyone as usual... PhysX is killing gaming; making anything proprietary is not good for the market. Well, that's how it works, I guess... as long as they can make money in the short run. This needs to be open, or run on the CPU, to give everybody an equal chance... that was funny :rolleyes:
 
PhysX is good for gaming and more important than the performance-stealing bitch that is FSAA ever will be or ever was. This should shut up all the PhysX naysayers once and for all. You can see the difference big time. It's right in your face!

nuff said...
 
Out of curiosity, wouldn't it be possible to manually force AA on through the Catalyst Control Center, similar to how that was done in Batman: AA to force full-screen AA? It's not as efficient, and is kind of a brute-force AA as I understand it; however, with 140+ average fps, I think brute-force AA couldn't hurt?

I tried enabling AA from the CCC, and it did not work. There is no in-game AA control for Dark Void. Enabling AA in the NVCP worked fine, and that is how we were able to use it.

Did the CPU PhysX option once again use only one core of the Core i7 920's four cores, or was it again limited to a single thread?

CPU utilization of PhysX is a sticky point. In most cases, we have found it not substantial enough to affect CPU utilization graphs over time. (A sketch of how such a sample can be taken follows this post.)

Edit: Never mind the question about performance with PhysX turned on; I found it listed in the apples-to-apples section. Lol at the 2 fps minimum. That must have been rough to test. I am a bit curious about how many cores were utilized, though :)

I did my best to get through the full 10 minutes, but at some point you just can't do anything but die.
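For anyone curious how a "CPU utilization over time" graph like the one referenced above can be sampled on Windows, the minimal sketch below uses the Win32 GetSystemTimes call. This is only one plausible way to take that kind of measurement, not [H]'s actual logging tool, and it reports whole-system utilization; a per-core breakdown (relevant to the one-core-versus-one-thread question) would need performance counters instead.

```cpp
#include <windows.h>
#include <cstdint>
#include <cstdio>

// Convert a FILETIME (100 ns units) to a 64-bit tick count.
static uint64_t ToTicks(const FILETIME& ft) {
    return (static_cast<uint64_t>(ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
}

int main() {
    FILETIME idlePrev, kernPrev, userPrev;
    GetSystemTimes(&idlePrev, &kernPrev, &userPrev);

    for (;;) {
        Sleep(1000);  // one sample per second, like a utilization-over-time graph

        FILETIME idleNow, kernNow, userNow;
        GetSystemTimes(&idleNow, &kernNow, &userNow);

        // Kernel time reported by GetSystemTimes includes idle time.
        uint64_t idle  = ToTicks(idleNow) - ToTicks(idlePrev);
        uint64_t total = (ToTicks(kernNow) - ToTicks(kernPrev)) +
                         (ToTicks(userNow) - ToTicks(userPrev));

        printf("CPU utilization: %.1f%%\n",
               total ? 100.0 * (total - idle) / total : 0.0);

        idlePrev = idleNow; kernPrev = kernNow; userPrev = userNow;
    }
}
```

If PhysX were doing meaningful work on the CPU, a trace like this would show a visible bump when the effects kick in; the point above is that it does not.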
 
I was wondering if you guys could add a price column to each card's stats at the beginning of the article? Most people buy a card based on price/performance numbers, and in articles like these it's impossible to guess what that might be without going out and looking at the price on Newegg or something. You guys mention price at some points but never give any hard numbers. I think all of the video card reviews would be greatly improved with some price information at the beginning of the article.
 
I was wondering if you guys could add a price column to each card's stats at the beginning of the article? Most people buy a card based on price/performance numbers, and in articles like these it's impossible to guess what that might be without going out and looking at the price on Newegg or something. You guys mention price at some points but never give any hard numbers. I think all of the video card reviews would be greatly improved with some price information at the beginning of the article.

Prices change on an almost weekly basis. If we supplied that information and wanted to ensure it was accurate, we would have to update every article at least once a week.
 
PhysX is good for gaming and more important than the performance-stealing bitch that is FSAA ever will be or ever was. This should shut up all the PhysX naysayers once and for all. You can see the difference big time. It's right in your face!

nuff said...

You would get like 5fps if you enabled PhysX with your setup.

You're such a horrible troll. I hope you're not trying.
 
Prices change on an almost weekly basis. If we supplied that information and wanted to ensure it was accurate, we would have to update every article at least once a week.

http://hardocp.com/article/2010/01/25/asus_eah5750_formula_video_card_review/10

In the conclusion of some video card reviews you include price information. I don't see why you couldn't do the same thing for all tested cards at the beginning of the review, with a note saying prices are as of the time of review and subject to change, or something to that effect. Just use whatever method you use for prices in the conclusion of some reviews as a standard, and then give the data for all cards reviewed. I figure most people read reviews shortly after they are posted, which would give most people reading the article an accurate idea of performance/price.
 
I haven't ever seen anything really compelling out of PhysX, and probably never will unless it's a game developed by NVIDIA. Even games sponsored by NVIDIA don't want to alienate half the user base by making the game so physics-bound. One place you will probably never see a PhysX game go is Red Faction: Guerrilla. Never have I been so surprised by physics in a game as in that game.
 
I haven't ever seen anything really compelling out of PhysX, and probably never will unless it's a game developed by NVIDIA. Even games sponsored by NVIDIA don't want to alienate half the user base by making the game so physics-bound. One place you will probably never see a PhysX game go is Red Faction: Guerrilla. Never have I been so surprised by physics in a game as in that game.

yep.. I totally agree..

RFG is the first game that really surprised me with what physics can do.
 
http://hardocp.com/article/2010/01/25/asus_eah5750_formula_video_card_review/10

In the conclusion of some video card reviews you include price information. I don't see why you couldn't do the same thing for all tested cards at the beginning of the review, with a note saying prices are as of the time of review and subject to change, or something to that effect. Just use whatever method you use for prices in the conclusion of some reviews as a standard, and then give the data for all cards reviewed. I figure most people read reviews shortly after they are posted, which would give most people reading the article an accurate idea of performance/price.

My mistake. That is an evaluation of a specific product. When we are looking at 8 or 9 different video cards in a specific game, the situation is less cut and dried. In the past, we have had prices change so much over a weekend that every price we quoted in an article was wrong. It has forced graphics and text to be redone.

I have actually talked to Brent about adding a price column to the Test Setup chart in these large gameplay articles, and he said no for the exact reasons I gave above. I agree with him. It is an idea that is good in theory, but the data would become invalid within days of the article being posted.
 
But does Dark Void support Eyefinity?

That's the first question in my mind before making a purchase now that I have a 5850. No way am I going to go spend $50 on a game with no Eyefinity support
 
But does Dark Void support Eyefinity?

That's the first question in my mind before making a purchase now that I have a 5850. No way am I going to go spend $50 on a game with no Eyefinity support

That, I do not know. If it does not, you will likely have to wait for Catalyst 10.2.
 
Most games should; there is no "support" needed, only that the game can handle the higher resolutions, which most games can, since the game only sees one large display. I will check Dark Void in Eyefinity tomorrow and see if it can detect the higher resolutions.
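To illustrate why no special "Eyefinity support" is usually needed: with the displays grouped, Windows reports the spanned surface as just another display mode, so any game that enumerates modes the ordinary way will list it. The sketch below is a generic Win32 example of that enumeration, not code from Dark Void, and the 5760x1200 figure in the comment is only a typical three-panel example.

```cpp
#include <windows.h>
#include <cstdio>

// List every display mode Windows reports for the primary display device.
// With Eyefinity active, the spanned resolution (e.g. 5760x1200 across three
// 1920x1200 panels) shows up here like any other mode, which is why most games
// need no special "Eyefinity support" beyond accepting the larger resolution.
int main() {
    DEVMODE mode = {};
    mode.dmSize = sizeof(mode);
    for (DWORD i = 0; EnumDisplaySettings(nullptr, i, &mode); ++i) {
        printf("%lu x %lu @ %lu Hz\n",
               mode.dmPelsWidth, mode.dmPelsHeight, mode.dmDisplayFrequency);
    }
    return 0;
}
```

The usual failure cases are games that clamp the aspect ratio or hard-code a resolution list, not games that "lack support" in any deeper sense.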
 
http://hardocp.com/article/2010/01/25/asus_eah5750_formula_video_card_review/10

In the conclusion of some video card reviews you include price information. I don't see why you couldn't do the same thing for all tested cards at the beginning of the review, with a note saying prices are as of the time of review and subject to change, or something to that effect. Just use whatever method you use for prices in the conclusion of some reviews as a standard, and then give the data for all cards reviewed. I figure most people read reviews shortly after they are posted, which would give most people reading the article an accurate idea of performance/price.

Thanks for your suggestion, this is not going to happen. It is simply too problematic for us from a publishing standpoint. We typically massage the conclusion and value statements the DAY of publication because of constant price changes.
 
You would get like 5fps if you enabled PhysX with your setup.

You're such a horrible troll. I hope you're not trying.

Doubt that, dude. Granted, I wouldn't be breaking any frame rate records, but I highly doubt it would be that low...

And I'm a good troll, ask Kyle =)
 
Can hardware PhysX die already?

+1 to this sentiment, or at least fix the CPU PhysX component so it uses enough CPU to actually show up on the CPU utilization chart and across multiple cores. This would be good for nVidia users too, in a way, as I might want to use my nVidia card to render 4xAA and use my CPU to do the physics rather than my GPU, since it seems that with the GTS260, PhysX on High resulted in a 50% performance drop, which is very significant imo.
 
Wow, Capcom is getting a lot of credit for such a sucky game! You guys are reviewing such a game on your reputed website, even though you yourselves stated that
"Dark Void received a lukewarm welcome from game critics, with most criticism falling on the lack of depth in its story and its lackluster flight mechanic. Its highest professional review score was 7/10, given by Game Informer. The PC version of Dark Void has earned a Metacritic rating of 57."

Save it for Mass Effect 2 (which has already been released) and BioShock 2!

Can hardware PhysX die already?

Roger that, honestly.
 
Great review. Has anyone tried the modified drivers that enable PhysX on AMD GPUs with this game?
 
+1 to this sentiment, or at least fix the CPU PhysX component so it uses enough CPU to actually show up on the CPU utilization chart and across multiple cores. This would be good for nVidia users too, in a way, as I might want to use my nVidia card to render 4xAA and use my CPU to do the physics rather than my GPU, since it seems that with the GTS260, PhysX on High resulted in a 50% performance drop, which is very significant imo.

Totally agree. The issue, though, is that doing so would be counterproductive to what NV wants. They want you to believe that the CPU can't be used at all to offset some of the physics calculations. That way you constantly buy their new cards.

Once again the issue here isn't PhysX itself. There's no denying that hardware physics adds great eye candy to the game. The issue is the way NV is handling it because it's theirs. Havok using OpenCL is nice since anybody can use it, but again it's owned by a major player which is BAD for the consumer. Intel could immediately decide to start screwing stuff up especially if they ever get Larrabee running.

A third-party physics API that uses OpenCL, like Bullet, and can be used on ANY vendor's hardware should become the de facto standard.
 
The fact that there is no CPU utilization change when CPU PhysX is enabled indicates to me that this is completely a sales ploy. These effects are relatively simplistic; with two cores' worth of i7 CPU basically sitting idle, there should be no problem enabling these same effects on the CPU. Of course, there'd be no reason to buy an NVIDIA card if they did that, so here we are.

I can't believe H isn't calling NVIDIA out on this - it seems like any other time there's a real (or perceived) shortchanging of the customer, H goes off (and often overboard), but here we get a sentence or two about how NVIDIA may be up to something fishy, and then it gets glossed over and their cards are shown as giving the best gameplay. First, H should be lambasting NVIDIA for not letting us use a second NVIDIA card as a PhysX-only card, and then for pulling a virtual bait and switch with the 'CPU' PhysX. What a crock of shit; I've about had it with NVIDIA, even though my 8800GT is one of the best bang-for-the-buck purchases I've ever put in a PC. They're almost as bad as Apple, just less overpriced.
 
IIRC, wasn't there an [H] sponsored thread/rant about NVIDIA's practices regarding PhysX and the like? So what gives? PhysX is a proprietary POS, that's nothing new. I appreciate the time and effort you guys put into these articles, but if you're going to invest it, put it into a game worthy of it. There's a ton of great games being released this first quarter (ME2, Bioshock 2, Global Agenda, Aliens vs. Predator, Metro 2033, Battlefield BC2, etc.)
 
Totally agree. The issue, though, is that doing so would be counterproductive to what NV wants. They want you to believe that the CPU can't be used at all to offset some of the physics calculations. That way you constantly buy their new cards.

Once again the issue here isn't PhysX itself. There's no denying that hardware physics adds great eye candy to the game. The issue is the way NV is handling it because it's theirs. Havok using OpenCL is nice since anybody can use it, but again it's owned by a major player which is BAD for the consumer. Intel could immediately decide to start screwing stuff up especially if they ever get Larrabee running.

A third-party physics API that uses OpenCL, like Bullet, and can be used on ANY vendor's hardware should become the de facto standard.

The issue is more that most game devs can't be bothered to write multi-core CPU PhysX acceleration. They just write the simple console version and leave it at that. This isn't just a PhysX thing; it's the same with the rest of the game, which only uses a fraction of a modern PC's power.

If NVIDIA or ATI are using their devs to add stuff to the PC version that the game devs wouldn't bother doing otherwise, then they are only doing it to make money. This means they have to justify the work as profitable, which means it has to sell more NVIDIA or ATI hardware. In NVIDIA's case, it means they really don't try very hard at all to make the CPU path for hardware PhysX work well. Considering they make no money from sales of quad-core CPUs, this isn't surprising.

Tbh, I doubt we will ever see much in the way of multi-core CPU physics. It'll all stay the same until the next gen of consoles arrives. They are bound to support some variant of physics in hardware, and Microsoft will standardize whatever the Xbox 720 uses for use on Windows. When that happens, all the PC console ports will suddenly require hardware physics acceleration, and it will be both vendor-agnostic and game-changing.
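As a rough illustration of what a multi-core CPU path for effects physics could look like, the sketch below splits a debris-particle update across however many hardware threads are available. It is a generic example, not how PhysX (or any shipping engine) is actually implemented; it just shows why posters expect this kind of independent, per-particle work to scale past a single thread.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Advance one contiguous slice of particles by dt. Each particle is
// independent, so slices can be updated on separate cores with no locking.
void UpdateSlice(std::vector<Particle>& p, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        p[i].vy -= 9.81f * dt;             // gravity
        p[i].x  += p[i].vx * dt;
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
        if (p[i].y < 0.0f) {               // crude ground bounce
            p[i].y  = 0.0f;
            p[i].vy = -p[i].vy * 0.4f;
        }
    }
}

// Split the particle set across the available hardware threads each frame.
void UpdateAllParticles(std::vector<Particle>& particles, float dt) {
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (particles.size() + cores - 1) / cores;

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * chunk;
        std::size_t end   = std::min(particles.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(UpdateSlice, std::ref(particles), begin, end, dt);
    }
    for (auto& w : workers) w.join();
}
```

Whether a CPU could reach GPU-scale particle counts this way is a separate question, but at minimum the work would show up on more than one core.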
 
+1 to this sentiment, or at least fix the CPU PhysX component so it uses enough CPU to actually show up on the CPU utilization chart and across multiple cores. This would be good for nVidia users too, in a way, as I might want to use my nVidia card to render 4xAA and use my CPU to do the physics rather than my GPU, since it seems that with the GTS260, PhysX on High resulted in a 50% performance drop, which is very significant imo.

Who cares if the GPU is doing it and not the CPU? I understand you guys with quad cores, and soon six cores, are always looking for a reason to justify those extra cores, but this isn't the way to do it LOL
 
IIRC, wasn't there an [H] sponsored thread/rant about NVIDIA's practices regarding PhysX and the like? So what gives? PhysX is a proprietary POS, that's nothing new. I appreciate the time and effort you guys put into these articles, but if you're going to invest it, put it into a game worthy of it. There's a ton of great games being released this first quarter (ME2, Bioshock 2, Global Agenda, Aliens vs. Predator, Metro 2033, Battlefield BC2, etc.)

I believe the reason for the DV 'review' was not the merit of its gameplay but its use of PhysX to add a lot of graphical flair. PhysX releases like that are few and far between.
 
Who cares if the GPU is doing it and not the CPU? I understand you guys with quad cores, and soon six cores, are always looking for a reason to justify those extra cores, but this isn't the way to do it LOL

Who cares? Everybody with spare CPU cycles and either a non-top-of-the-line NVIDIA card or an AMD card.
 
Kyle, what is the CPU utilization on an AMD GPU with PhysX on? Is it anywhere near 100%? Or is it lower than when PhysX is disabled?
 
You do realize that even using multiple cores at 100%, they cannot handle PhysX like this, don't you?
Don't you think there is a reason CUDA performs so much better than a CPU for certain applications?
I am sure that with optimization they could do a bit better on the CPU, but first, I do not think they care about that, and it still won't match GPU performance, so why bother?
A freakin' used 8800GT is dirt cheap, so just get one and stop complaining.
Thanks for the review!
 
You do realize that even using multiple cores at 100%, they cannot handle PhysX like this, don't you?

Yes, I do realize that. I just think NVIDIA artificially limits the performance of the CPU doing PhysX. That is why I would like to see what it looks like when PhysX is being run on the CPU. Does the CPU use go up? Down? Stay the same?
 
Nice work, [H]. Did you try, say, a GTX 285 + GTS 250 to see how having one card do nothing but the PhysX compares with a GTX 295?
I know about the GTX 275 Co-op thing EVGA had out; it would be nice to see that tested as well.
 
There is so much back and forth here about concepts it seems people clearly don't understand.

1) HardOCP and their relationship with NVIDIA. If you had been paying attention, most of the news lately from [H] has been negative toward NVIDIA, while they constantly post almost every press release from the AMD/ATI camp for their cards, even to the point of regurgitating press releases from ATI about certifications for particular CAD products in such a niche market it is not even funny. Not to mention that NVIDIA already had these certifications on their last two sets of driver releases, and they support it on more NVIDIA products than AMD does on theirs. (Sorry, slightly off topic.)

2) PhysX. PhysX was developed well before NVIDIA started using it, and you needed to buy a dedicated card to use it; it wasn't available on either NVIDIA or ATI cards. NVIDIA bought the company that produced it and at least gave people a chance to experience it with more mainstream cards instead of having to buy a separate PhysX card. They also helped game developers put more PhysX in their games to enhance them. In addition to that, NVIDIA started developing CUDA and using CUDA to help propel physics processing as well. These technologies are NOT just limited to NVIDIA hardware. AMD/ATI has always had the ability to license the technology and use it on their cards, but they don't. So if you are upset about it only working on NVIDIA hardware, yell at ATI for not licensing it.

3) Standards. Open standards are not always the best thing; they can greatly discourage advancement in technology as much as they help standardize the industry. We might have had physics in a much wider capacity had DirectX not come to life; we can never know for sure. Love it or hate it, competition is what drives progress, and companies that develop these technologies need to profit from them somehow to continue research. Also, DX11 does bring some physics capabilities to all cards that support that standard, and gaming companies will eventually start producing more physics effects in DX11. But it takes time; DX11 just came out, and these games usually take years and years to produce. Also note that the only reason AMD/ATI are pushing for open standards and are using Havok now is because they don't have the money or resources to come up with their own solution, so they have to use someone else's.

4) CPU vs GPU computation. CPUs and GPUs are completely different kinds of hardware, and they handle calculations and processing in different ways. Just because physics 'can' be done on a CPU doesn't mean it can be done effectively. GPUs can run enormous numbers of simple calculations in parallel, while CPUs handle fewer, more complex instruction streams at a time. The reason CUDA development was important is that you can now do a lot of processing on your GPU more effectively than it could be done on the CPU. Just look at all the applications already made for CUDA that work much, much faster on GPUs than on CPUs.

5) Multi-core programming. Many companies are working hard to optimize programs for multi-core use, but it is not easy, and making that part of the code more complicated slows down the development process, which costs the company more money. Also, as I mentioned above, some processing just flat-out works better on GPUs than CPUs, so it makes no sense to try to optimize that code for the CPU. Just look at how long it is taking Intel to come out with a viable Larrabee product.
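On point 5, the part that genuinely is hard is interaction between bodies, rather than the independent per-particle update sketched earlier in the thread. The snippet below is a generic, hypothetical example (not taken from PhysX, Havok, or Bullet) of one common workaround, per-thread scratch buffers reduced afterward, which shows where the extra memory, the reduction pass, and the scheduling complexity come from.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

struct Contact { std::size_t a, b; float magnitude; };

// Two threads resolving different contacts can still touch the same body,
// so naively writing into one shared impulse array is a data race. One common
// fix: each worker accumulates into its own scratch buffer, and the buffers
// are summed afterward. Correct, but it costs memory, an extra reduction
// pass, and careful scheduling -- part of why multi-core physics is hard.
std::vector<float> AccumulateImpulses(const std::vector<Contact>& contacts,
                                      std::size_t bodyCount,
                                      unsigned workerCount) {
    std::vector<std::vector<float>> scratch(
        workerCount, std::vector<float>(bodyCount, 0.0f));
    std::vector<std::thread> workers;

    for (unsigned w = 0; w < workerCount; ++w) {
        workers.emplace_back([&, w] {
            // Interleaved share of the contacts, private scratch buffer.
            for (std::size_t i = w; i < contacts.size(); i += workerCount) {
                scratch[w][contacts[i].a] += contacts[i].magnitude;
                scratch[w][contacts[i].b] -= contacts[i].magnitude;
            }
        });
    }
    for (auto& t : workers) t.join();

    // Reduce the per-thread buffers into the final per-body impulses.
    std::vector<float> impulses(bodyCount, 0.0f);
    for (const auto& buf : scratch)
        for (std::size_t b = 0; b < bodyCount; ++b)
            impulses[b] += buf[b];
    return impulses;
}
```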
 
You do realize that even using multiple cores at 100%, they cannot handle PhysX like this, don't you?
Don't you think there is a reason CUDA performs so much better than a CPU for certain applications?
I am sure that with optimization they could do a bit better on the CPU, but first, I do not think they care about that, and it still won't match GPU performance, so why bother?
A freakin' used 8800GT is dirt cheap, so just get one and stop complaining.
Thanks for the review!

Why bother? This game is merely OK, only a bit of eye candy is lost, and it joins what is still, after all this time, a tiny list of AAA GPU-accelerated PhysX titles.
 