Watch Dogs on AMD CPUs

Simmonz

Anyone care to share their experience with Watch Dogs on their AMD rigs?

I haven't been able to play more than a few hours yet, but in what I have played I've averaged 40 FPS so far with Depth of Field off, Ambient Occlusion off, and AA off, and the rest on High at 1680x1050.

The game makes good use of my 6 cores but won't max out any of them. Usually I'm at 40-46% usage, spread evenly across all 6 cores. My GPU usage doesn't go over 40-50% either. Makes me wonder why my FPS isn't higher.
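(For anyone who wants to log this over a whole session instead of eyeballing Task Manager, a rough Python sketch along these lines works. It assumes the third-party psutil package is installed, and the duration/interval values are just placeholders, nothing Watch Dogs specific; GPU usage would still have to come from your vendor tool.)

# Rough sketch: sample per-core CPU load while the game runs, to capture the
# "spread evenly across all cores but never maxed out" pattern described above.
# Requires the third-party psutil package; duration/interval are placeholders.
import time
import psutil

DURATION_S = 60    # how long to sample
INTERVAL_S = 1.0   # seconds between samples

samples = []
end = time.time() + DURATION_S
while time.time() < end:
    per_core = psutil.cpu_percent(interval=INTERVAL_S, percpu=True)
    samples.append(per_core)
    print(" ".join(f"{c:5.1f}" for c in per_core))

num_cores = len(samples[0])
averages = [sum(s[i] for s in samples) / len(samples) for i in range(num_cores)]
print("average per core:", [round(a, 1) for a in averages])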
 
I got the FX-8350 and I find Watch Dogs uses 6 cores. I don't see much activity on the remaining 2 cores. The only games I've found that make significant use of all 8 cores are ones that run on Frostbite.
 
The game runs everybody's cores at 50%.
My 4670K's four threads are running at 50%.

People with 4770Ks are sitting at 50% across all 8 threads.
It's silly.
 
I got the FX-8350 and I find Watch Dogs uses 6 cores. I don't see much activity on the remaining 2 cores. The only games I've found that make significant use of all 8 cores are ones that run on Frostbite.

From what I saw, the game doesn't have any trouble scaling from 6 to 12 cores when you enable/disable HT on Sandy/Ivy-E.
 
It seems the game tries to hug vsync or vsync/2 fps.
At 60Hz refresh I have seen around 33 to 45fps as well, but at times I see my CPU and GPU running just over half occupied and it hits 30fps-ish.
If I turn vsync off, I get much higher occupancy and framerate.

So it looks like the implementation of vsync/buffering is holding up the framerate.
Changing the number of buffers doesn't fix this, as raising it above 2 causes terrible stutter.
I tried forcing vsync in CCC but it is ignored (Cat 14.6 beta).

Due to this I play with vsync off and it's a "much" better experience.

I'm running a 2500K @ 4.5GHz and a 290X @ 1080p/60Hz, FYI.
[ULPS disabled so the GPU clock hugs 1000MHz]


Edit:

After further testing, the situation is a little different but still odd.
With vsync on, the area I'm currently in hugs 60fps unless I turn towards a crowd, at which point it drops to around 52fps and looks jerky (tried buffering of 1 to 4).
[CPU use 50 to 75% on each core, GPU use 75%]

With vsync off, I get well over 60fps (64 to 67fps) looking at the same crowd and it is smooth.
[CPU use 79 to 91% on each core, GPU use 95%]
Seeing as it can achieve over 60fps sustained, why isn't it doing 60fps with vsync on?

There is a problem with how the engine runs with vsync on.
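For what it's worth, that 60-to-roughly-30 snap is exactly what plain double-buffered vsync does: a frame that misses a 16.7ms vblank waits for the next one, so output locks to refresh/1, refresh/2, refresh/3 and so on. A back-of-the-envelope Python sketch of that quantisation (the frame times below are invented examples, not measurements from the game):

# Model of double-buffered vsync: a frame that isn't ready at a vblank waits
# for the next one, so the output rate snaps to refresh/1, refresh/2, ...
# Frame times below are made-up examples, not measurements from Watch Dogs.
import math

REFRESH_HZ = 60.0
VBLANK_MS = 1000.0 / REFRESH_HZ   # ~16.7 ms per refresh interval

def vsynced_fps(frame_time_ms):
    intervals = math.ceil(frame_time_ms / VBLANK_MS)  # whole vblanks consumed
    return REFRESH_HZ / intervals

for ft in (15.0, 16.0, 17.0, 25.0, 34.0):
    print(f"{ft:5.1f} ms frame ({1000 / ft:5.1f} fps uncapped) "
          f"-> {vsynced_fps(ft):4.1f} fps with vsync")

Which would explain why a machine that can do 64-67fps uncapped still gets dragged down hard the moment vsync is on and a single frame goes long.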
 
This is not that surprising to me. Most developers struggle with coding for more than 4 cores.
 
It's not overloading any of the cores with vsync on though, so it's not a CPU limitation.
It's not even overloading the cores with vsync off, and I get more fps than the monitor's refresh rate.
 
People with 4770Ks are sitting at 50% across all 8 threads.
It's silly.
Usually that's because the application is bypassing Hyper-Threading, so "logically" it appears to only be utilizing 50% of the CPU. I wonder what would happen if 4770K owners disabled HT.
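Rough arithmetic behind that, assuming a 4-core/8-thread chip and a game that only keeps one busy worker per physical core (illustrative numbers, not measurements):

# If a game keeps one busy thread per physical core, a 4C/8T CPU reports
# roughly busy_threads / logical_threads overall utilisation.
physical_cores = 4              # e.g. a 4770K
logical_threads = 8             # with Hyper-Threading enabled
busy_threads = physical_cores   # one busy worker per physical core

reported_utilisation = busy_threads / logical_threads * 100
print(f"reported CPU utilisation: {reported_utilisation:.0f}%")   # -> 50%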
 
Watch Dogs is revealing itself as the worst-coded game ever. Next to it, the original Crysis looks like great code.
 
Watch Dogs is revealing itself as the worst-coded game ever. Next to it, the original Crysis looks like great code.

Is the original Crysis badly coded? Can you point out some articles which reveal bad coding on Watch Dogs? Not being rude - I'm genuinely curious.
 
Crysis 1 was horrible: memory leaks everywhere, and there was a whole fucking tessellated ocean under the land.
 
I thought that was Crysis 2? Crysis 1 had that too?

No, that was definitely Crysis 2.

The first game's controversy stemmed from the fact that you could get nearly the same image quality in DX9 mode as DX10 if you dug through config files.

The improved quality seen in the DX10 version was a fabrication.
 
Crysis 2 was such a weird conspiracy. NVIDIA knew the AMD 6800 and 5800 series ran like crap with tessellation and probably took advantage.
 
It seems the game tries to hug vsync or vsync/2 fps. [...] Seeing as it can achieve over 60fps sustained, why isn't it doing 60fps with vsync on? There is a problem with how the engine runs with vsync on.
That is an interesting analysis. I can confirm that with Vsync off, at least, my i7-4770 utilization is typically 60-70% across all 8 threads with HT enabled, while both GPUs sit around 90-95%. I'll have to test with Vsync on to see what happens. But the fact that GPU1 utilization is under 99% is very telling of something weird or wrong in the way the game was programmed. In fact, GPU1 is always lower in utilization in my case with SLI than GPU2, which is the opposite of the typical SLI scenario.
 
I was just flamebaiting the thread when I talked about the original Crysis, which is still my reference when talking about a game with massive hardware requirements for measly image quality gains.
The joke "But can it run Crysis?" was not undeserved. :p
 
I was just flamebaiting the thread when I talked about the original Crysis, which is still my reference when talking about a game with massive hardware requirements for measly image quality gains.
The joke "But can it run Crysis?" was not undeserved. :p

Measly gains? That game was the defining benchmark for how realistic/awesome graphics could look, and it still looks better than some current-gen games! It was a slideshow on the hardware of the era, but considering how "forward thinking" Crytek was with it...
 
Cousin says it's pretty bad... He did some patch that made the game think it was using a GTX 660 and it somehow helped a lot. Pretty weird >.<
 
According to the Techspot benchmarks, the difference between the 4770K and FX-8350 is 3 FPS...

http://www.techspot.com/review/827-watch-dogs-benchmarks/page5.html

With DX12 "supposedly" reducing CPU overhead, I may hang on to this 8350 a bit longer and SLI Maxwell when it comes out... CPU bottleneck? These days games are well threaded, so I don't have to worry about titles like Skyrim that only use 2 cores and rely on high IPC...
 
According to the Techspot benchmarks, the difference between the 4770K and FX-8350 is 3 FPS...

http://www.techspot.com/review/827-watch-dogs-benchmarks/page5.html
The entire CPU bench is capped at 82fps, which means they're hitting a GPU bottleneck.
Not to mention their results are nearly TWICE as high as they should be. I assume they benchmarked the game by staring at a wall.

http://www.tomshardware.com/reviews/watch-dogs-pc-performance,3833-8.html
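A crude way to sanity-check a chart like that: if every CPU lands within a few percent of the same ceiling, the test is GPU-bound and says very little about the processors. Quick Python sketch with invented numbers (not figures taken from either article):

# Crude check: if every CPU in a scaling chart sits within a few percent of
# the same ceiling, the test is GPU-bound and tells you little about the CPUs.
# The fps figures below are invented for illustration only.
results = {
    "i7-4770K": 82,
    "i5-4670K": 82,
    "FX-8350": 79,
    "FX-6300": 81,
}

ceiling = max(results.values())
spread = (ceiling - min(results.values())) / ceiling * 100
gpu_bound = spread < 5  # all within ~5% of each other -> likely GPU-limited

print(f"spread: {spread:.1f}% -> {'GPU-bound' if gpu_bound else 'CPU-bound'} test")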
 
Tom's Hardware didn't test the 4770K, so that chart is of limited use, since the 4770K is what most people reference when comparing AMD and Intel processors. That 3960X is $1,100. But going by previous titles, i5s usually come close to the i7-4770K in games anyway.
HardOCP tested at a different resolution than the others and only used a single Intel processor. Knock off about 5% and you've got your AMD frame rate, which would look awfully similar to the Techspot review.
That leaves Techspot as the only review that tested everything, even if they were probably standing still somewhere.
 
I have had pretty good performance here. After reading how unoptimized the game is, I have it maxed out at 4x MSAA and 16x AF and it's very smooth.
 
On an unrelated note, it's nice to see some guys hitting 5GHz on their 8320s. I am thinking of picking one up soon to replace my FX-6100 and was worried I wouldn't be able to equal the 4.0GHz I got on it.
 