FarCry 2 DX9 vs. DX10 Performance @ [H]

As delighted as I am to see the results, I'm equally disappointed that after two years, we only have O-N-E game to draw upon.

How is that? I've been gaming on Vista for years, and most DX9 titles are just as good on Vista as on XP. Then there's Crysis/Warhead which really make Vista/DX10 worthwhile.
Sure, if you want top framerates, go DX9 and set your detail low... But I found the games much nicer with DX10 and very high detail.
 
DX10 has micro stutter. DX9 still feels smoother. DX10 = continuous fail

EDIT - I agree with the poster above; I've got a GTX 260. Something about DX10... DX9 is still smoother even if it's not faster.

Same for me, system in sig. Cat 8.11 drivers

The micro stutter sucked in DX10; it went away a little when I capped my frames to 40, but not totally. Switch to DX9 mode and all is well, so although this article is well written, my own experience has shown DX9 to be better. Maybe it will get better when the 8.12 drivers and a new patch for the game come out.
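For what it's worth, a frame cap doesn't make anything faster; it just evens out frame delivery, which is plausibly why it took the edge off the stutter. A rough sketch of what a 40 FPS cap does (illustrative only, not FC2's actual limiter):

```python
# Sketch of a frame-rate cap: each frame gets a fixed time budget, and
# whatever the renderer doesn't use is slept off, so frames arrive at an
# even pace instead of in bursts. A 40 FPS cap means a 25 ms budget.

def frame_budget_ms(fps_cap):
    return 1000.0 / fps_cap

def sleep_remainder_ms(render_time_ms, fps_cap):
    # Time left to wait out this frame; zero if rendering overran the budget.
    return max(0.0, frame_budget_ms(fps_cap) - render_time_ms)
```

A frame rendered in 10 ms would be held back 15 ms, but frames slower than 25 ms can't be sped up, which would explain why the cap reduced the stutter without removing it.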
 
The game offers a better experience under XP than Vista. The article is well written, but this time, unfortunately, they are wrong.
Frame rates are better under Vista, but in Vista you get a short stutter every 10 seconds when moving, which is kind of distracting; XP is much smoother, even if average frame rates are lower.
Interestingly, no reviewer I know of has found this, although in forums it's a well-known problem. Possibly a memory-streaming thing, which (still) works better under XP.
It's easy to see on a dual-boot system like mine, but you have to test the game on both OSes; otherwise you might get the impression that Vista is smooth, because you just don't know better.
 
As delighted as I am to see the results, I'm equally disappointed that after two years, we only have O-N-E game to draw upon.

I agree completely. I got a free copy (actually 3) with some hardware purchases, so I decided to fire it up. I am enjoying the game (1.5 hours on it), and it looks fantastic; I really can't say enough about how it looks. But I also think about everyone rushing out to buy 8800s because they supported DX10, and here we are, two generations of cards later, finally getting a game that utilizes it.

Just for the record, I have been playing the game on an XP box (C2D @ 3GHz, GTX 260, etc.) and on a Vista 64 box (i7 @ 3GHz, 8800 GTS 640, 3GB RAM, etc.), and the DX10 version looks better to me, but doesn't seem to be any less smooth than the DX9 version.
 
Great article. Never really thought DX10 would make such a difference. Wish you had tested the 280GTX vs a 4870X2!

Honestly, I think one of the reasons DX10 performance has been lower than DX9 in the past is that it simply hasn't been a shift to DX10, but a shift to DX10 + extra eye candy. The extra eye candy ate up any performance advantage on those games as a result.
 
Honestly, I think one of the reasons DX10 performance has been lower than DX9 in the past is that it simply hasn't been a shift to DX10, but a shift to DX10 + extra eye candy. The extra eye candy ate up any performance advantage on those games as a result.

Except that many DX10 titles barely look better, if at all.
 
Running a single 4870, the two biggest differences I've found for smoothness are DX10 and "ultra" quality on shadows.

FPS seemed very similar between DX9 and DX10, but on my comp DX10 feels choppier, without question. 8.11 drivers.
 
I run a plain old single 7200 RPM 500GB drive for my C: and my games are installed there. No issues. Turn off indexing.
 
Is it just me, or does it look like DX10 has HDR turned off in the first two screenshots? I'm sure they were run with the same settings, since the article said they were exactly the same... just something I noticed. :)
 
The jitter is still there, even with Superfetch, indexing, and themes switched off. When you get the jitter, there is hard-drive activity, and this hard-drive activity happens only while the game is running. Possibly with a 10,000 RPM hard drive with fast access times, this becomes less noticeable.

I'm no expert or anything, but I'm pretty sure your hard drive shouldn't be accessed outside of loading areas. Even so, it shouldn't have anything to do with your performance in a game. It sounds like something is pretty messed up in your configuration, like your available texture memory is set extremely low. I dunno. Something's really messed up, though.
 
True, I noticed this myself... but is HDR working in DX10? It's greyed out; I wonder if that has something to do with it. Can't say the lighting looks any worse, though.
 
I don't know about you guys, but I don't see how lower CPU usage is better for anyone.

I'm tired of seeing my quad-core used at only around 25% while I play a game. Wouldn't games run a lot better and be able to process more things if they actually used close to 100% of my CPU? A few years ago games would max out your single-core CPU, and that meant they were efficient. Now they can barely use a quarter of our available power, and that's a good thing?

I'd love to see programmers be more efficient and take advantage of the power of today's CPUs (even lower-end ones are dual-core)... which are really under-used in everything apart from 3D rendering. Nothing else comes close to using 100% of my CPU apart from CPU benchmarks.
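Worth noting where that 25% figure comes from: Task Manager averages across all cores, so one fully saturated core out of four can never show more than 25% total. A trivial sketch of the arithmetic:

```python
# A single busy thread on an n-core machine shows up in Task Manager as
# 100/n percent total CPU, no matter how hard that one core is working.

def taskmanager_total_percent(saturated_cores, total_cores):
    return 100.0 * saturated_cores / total_cores
```

So a largely single-threaded game pegs a quad-core at 25% and a single-core machine at 100%, which is exactly the difference between now and the single-core days.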
 
I don't know about you guys, but I don't see how lower CPU usage is better for anyone.

I'm tired of seeing my quad-core used at only around 25% while I play a game. Wouldn't games run a lot better and be able to process more things if they actually used close to 100% of my CPU? A few years ago games would max out your single-core CPU, and that meant they were efficient. Now they can barely use a quarter of our available power, and that's a good thing?

I'd love to see programmers be more efficient and take advantage of the power of today's CPUs (even lower-end ones are dual-core)... which are really under-used in everything apart from 3D rendering. Nothing else comes close to using 100% of my CPU apart from CPU benchmarks.
I have seen single-core CPUs stay at nearly 100% even in Real Arcade games, so I am sure that doesn't necessarily mean that games were/are efficient.
 
I don't know about you guys, but I don't see how lower CPU usage is better for anyone.

I'm tired of seeing my quad-core used at only around 25% while I play a game. Wouldn't games run a lot better and be able to process more things if they actually used close to 100% of my CPU? A few years ago games would max out your single-core CPU, and that meant they were efficient. Now they can barely use a quarter of our available power, and that's a good thing?

I'd love to see programmers be more efficient and take advantage of the power of today's CPUs (even lower-end ones are dual-core)... which are really under-used in everything apart from 3D rendering. Nothing else comes close to using 100% of my CPU apart from CPU benchmarks.

I've got an AMD dual-core running at 3.3GHz with quite a bit of headroom left, too. Nearly everything except number crunching has been pushed off to the GPU.

I don't think CPUs are being under-utilized. I suspect there simply isn't much left for CPUs to do now that GPUs rule the roost... at least in terms of gaming.
 
I have seen single-core CPUs stay at nearly 100% even in Real Arcade games, so I am sure that doesn't necessarily mean that games were/are efficient.

Indeed, the CPU usage in itself doesn't say a thing about efficiency.
Something like:
while (true) DoNothing();

will take 100% CPU, and ... do nothing.
Sometimes a polling loop is actually the most efficient solution for something, even though it never releases the CPU to the OS for other tasks. It all depends on what you're doing.
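The point can be made concrete with a small sketch (all names here invented for illustration): a spin-wait pegs a core while accomplishing nothing, a blocking wait yields the CPU to the OS, and both arrive at the same answer.

```python
import threading

# Two ways a consumer can wait for a producer. The spin-wait keeps one
# core at ~100% while waiting; the Event-based wait sleeps at ~0% CPU.
# Both return the same value, so CPU usage alone says nothing about
# how much useful work is being done.

def wait_spinning(state):
    while not state["ready"]:      # burns CPU doing nothing useful
        pass
    return state["value"]

def wait_blocking(event, state):
    event.wait()                   # sleeps until signaled; releases the CPU
    return state["value"]

def demo():
    state = {"ready": False, "value": 0}
    event = threading.Event()

    def producer():
        state["value"] = 42
        state["ready"] = True
        event.set()

    t = threading.Thread(target=producer)
    t.start()
    spun = wait_spinning(state)
    blocked = wait_blocking(event, state)
    t.join()
    return spun, blocked
```

In a real engine the blocking version frees that core for other threads, which is why, between two builds of the same game doing the same work, the one with lower CPU usage is the more efficient one.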

However in this case the exact same game was tested under different circumstances, so that means we can do relative comparisons, and in this case lower CPU usage indeed means that something is running more efficiently.

It's been common knowledge for years among Microsoft, nVidia and ATi that there was considerable overhead in Direct3D and OpenGL calls, and there are quite a few papers and presentations on their developer sites addressing this issue and advising programmers to keep the number of calls and state changes to an absolute minimum. The most well-known one is probably the one titled "Batch, Batch, Batch", by nVidia:
http://developer.nvidia.com/docs/IO/8230/BatchBatchBatch.pdf
This was released way back in the early days of DirectX 9.
Therefore it wasn't very surprising when Microsoft announced that Direct3D 10 would introduce a new driver model that would minimize CPU overhead.
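A toy cost model makes the batching argument visible (the per-call and per-triangle numbers below are invented; only the shape of the formula matters):

```python
# Toy model of the draw-call overhead the batching advice is about:
# CPU time per frame ~= calls * per-call overhead + triangles * per-tri cost.
# Batching many small meshes into few draw calls shrinks the first term.

def frame_cpu_cost_us(draw_calls, triangles,
                      per_call_overhead_us=30.0,   # assumed driver overhead
                      per_triangle_us=0.001):      # assumed per-triangle cost
    return draw_calls * per_call_overhead_us + triangles * per_triangle_us

# 10,000 objects of 100 triangles each, one draw call per object...
naive = frame_cpu_cost_us(draw_calls=10_000, triangles=1_000_000)
# ...versus the same million triangles batched into 50 calls.
batched = frame_cpu_cost_us(draw_calls=50, triangles=1_000_000)
```

Same geometry either way, but the 10,000-call version spends almost all of its CPU time in per-call driver overhead, which is exactly the overhead the DX10 driver model set out to reduce.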
 
Well, I took a look at my results again and found something interesting. The "Single GPU" setting in the nVidia control panel doesn't work for FC2 in DX10 mode, but it does in DX9 mode. I used the SLI visualization to test it, plus ran the benchmark several times. DX10 with one card is almost as good as DX9 SLI.

e3110 (e8400, 3GHz stock) @ 3.6GHz
4GB
8800GTS 512 SLI
Vista x64 SP1
180.48

DX10 SLI: avg FPS 54
DX10 Single GPU setting: avg FPS 55
DX10 SLI disabled: avg FPS 63

DX9 SLI: avg FPS 67
DX9 Single GPU setting: avg FPS 46

I turned SLI visualization on and benched both DX9 and DX10. Then I set FC2 to Single GPU mode and benched DX9 just fine, but when I benched DX10 the indicators were back on. So I tried DX9 again, and it was off. So I disabled SLI, rebooted, and tried DX10 again, and performance went up by 20 FPS max, 10 FPS avg, and 5 FPS min.

So, either DX10 SLI doesn't work in FC2, or just not at my resolution (1440x900). But I just don't see the resolution being the issue. My min FPS is 40% higher, avg FPS 45% higher, and max FPS 60% higher with SLI on in DX9 mode, yet absolutely nothing is gained in DX10 SLI mode. I think DX10 SLI is either broken, or so extremely efficient that it doesn't need SLI to maintain FPS at my resolution the way DX9 does.

I also noticed the DX10 benchmark stutters quite a bit with SLI disabled, but the stuttering is much reduced in DX10 SLI. There was none in DX9 mode. This stuttering doesn't seem to carry over into actual gameplay, as far as I've noticed.

*edit*

Tried the new 180.84 and it does help the DX10 stuttering, but the min FPS takes a big drop. Now SLI mode is not working by default; you have to force it on for FC2 with the .84. Definitely SLI issues with this game. DX9 SLI benches are virtually identical to the .48.
 
100% agree. In fact, I have played this game on various hardware configurations and DX9 feels smoother every time. Sometimes walking or just looking around a room in DX10 causes little hitches or stuttering.

I also experience that microstuttering, or little hitches, and it's annoying. The funny thing is that it happens only when the game is played fullscreen; if you play it in windowed mode, it disappears. Also, if you use FRAPS and start recording a video while you're playing, it disappears too. I think it has something to do with CPU usage on multi-core systems. A friend of mine has a similar setup, but with a QX9550 and a vanilla Asus P5Q instead of the QX6850, and he still experiences the same bothersome little hiccups.
 
181.00 and 1.02 still do not fix the DX10 bug I'm experiencing. Regardless of my Far Cry 2 settings, whether Single GPU or Nvidia Recommended (SLI) is set, DX10 always runs in SLI. I have to disable SLI altogether to run with one GPU in DX10 mode. I manually disabled SLI in order to do this test in DX10 single.

Here are some new results

3.0GHz e3110 (e8400)
8800GTS512 (x2)
4GB Ram
Vista x64
1440x900 Maximum settings

DX9 @3ghz
SLI: Avg FPS - 61
Single: Avg FPS - 46

DX9 @3.6ghz
SLI: Avg FPS - 68
Single: Avg FPS - 48

DX10 @3ghz
SLI: Avg FPS - 48
Single: Avg FPS - 48

DX10 @3.6ghz
SLI: Avg FPS - 56
Single: Avg FPS - 49

Seems the 600MHz OC did almost nothing for me in the single-GPU setup. The min/max FPS were almost unchanged, just like the avg. The 600MHz OC earned 11-15% more FPS under SLI. I've benched these settings at lower resolutions, just for shits and giggles, and my FPS actually dropped.

What I'm wondering is, if I dropped my SLI and picked up, say, the new GTX 260 216 and ran a single GPU, would my FPS pick up noticeably? The 600MHz OC is gaining me nothing in single-GPU mode. What do you guys think? I'd consider switching if I could run my CPU at stock speed with one 260 216 (probably wait for 55nm) and still match the SLI speeds of my old cards with the new GPU.
 