Fable Legends DX12 benchmark

Quartz-1

There's a new DX12 benchmark.

http://www.extremetech.com/gaming/2...o-head-to-head-in-latest-directx-12-benchmark

The results are quite interesting: the Radeon cards are ahead at 1080p but just behind at 4K.

FableLegends.png


AMD-Perf2.png


Note in particular the R9 Nano's performance.
 
Just be aware that only the first set of numbers comparing the Fury X and 980ti are actually from ExtremeTech testing. The other numbers are supplied from AMD as noted in the ExtremeTech article.
 
Uh oh. Looks like Anand will be added to AMD's blacklist. It's not fair to have NVIDIA beating AMD in DX12 benchmarks.

Doubt it. This test seems to show much of the same: the 290X/390 cards get a significant boost in DX12. Even though Anand left out the 980, the 290X/390 actually seem to outperform a 980, which sells at $500 while the 290X/390 goes for what, $370? Seems like whoever stuck with their 290X will make out pretty nicely when DX12 games come out.

TR shows much of the same and they add the 980 so that you can see where it sits.

http://techreport.com/review/29090/fable-legends-directx-12-performance-revealed/3

The weird part is how close the 290X performs to the Fury line. I think AMD needs to fix that if they can. No one is going to buy the high-end cards when their year-old cards perform so well; who isn't going to want to save $150? It only happens in DX12, but it's still something to think about.
 
The review over at The Tech Report shows the Radeons faring a lot better in this benchmark. Overall they're almost on par with the 980 Ti.
 
? The 980 Ti is 24% faster at 4K than the Fury X in TechReport's benchmarks.

The GeForce cards perform well generally, in spite of this game's apparent use of asynchronous compute shaders. Cards based on AMD's Hawaii chips look relatively strong here, too, and they kind of embarrass the Fiji-based R9 Fury offerings by getting a little too close for comfort, even in 4K. One would hope for a stronger showing from the Fury and Fury X in this case.
 
?

there was never any question that nvidia was faster.

Even in Ashes, the 980TI was faster.

It was the uplift from DX11 to DX12 that was the issue.

None of these benchmarks show this.

Also, does anyone have any idea if Async is used at all in this benchmark?
 
http://www.pcper.com/reviews/Graphi...-Benchmark-DX12-Performance-Testing-Continues

Fable Legends is a gorgeous looking game based on the benchmark we have here in-house, thanks in some part to the modifications that the Lionhead Studios team has made to the UE4 DX12 implementation. The game takes advantage of Asynchronous Compute Shaders, manual resource barrier tracking and explicit memory management to help achieve maximum performance across a wide range of CPU and GPU hardware.
Compute shader simulation and culling is the cost of our foliage physics sim, collision and also per-instance culling, all of which run on the GPU. Again, this work runs asynchronously on supporting hardware.
Per the second quote, Maxwell 2 has lower latency with async than Fiji or other GCN hardware, which we can see here:
http://www.pcper.com/reviews/Graphi...erformance-Testing-Continues/Results-4K-Ultra

fable-4k-timings.png


Dark green line
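The async compute benefit described in those quotes boils down to overlapping independent GPU work. Here's a toy timing model of the idea (a hypothetical sketch, not engine code; the functions and all millisecond numbers are made up for illustration):

```python
# Toy model of asynchronous compute. A frame has a graphics pass and an
# independent compute pass (e.g. the foliage sim/culling quoted above).
# Without async queues the passes run back to back; with async queues
# they overlap, so the frame cost is bounded by the longer pass plus
# whatever synchronization overhead the hardware/driver adds.

def frame_time_serial(gfx_ms, compute_ms):
    """Compute pass waits for the graphics pass to finish."""
    return gfx_ms + compute_ms

def frame_time_async(gfx_ms, compute_ms, sync_overhead_ms=0.2):
    """Compute pass runs concurrently on a separate hardware queue."""
    return max(gfx_ms, compute_ms) + sync_overhead_ms

# Illustrative numbers only (not measured from Fable Legends):
gfx, compute = 10.0, 3.0                      # milliseconds per frame
serial = frame_time_serial(gfx, compute)      # 13.0 ms
overlapped = frame_time_async(gfx, compute)   # 10.2 ms
print(f"serial: {serial:.1f} ms, async: {overlapped:.1f} ms")
```

Note the model also shows why the win shrinks when sync latency is high relative to the compute pass, which is the "less latency at async" point being made about Maxwell 2 vs. GCN.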
 
And like 20% faster than the 980 Ti Anand used. So 980 Tis are finicky now...


You can't compare benchmarks across different sites that way; you can interpret benchmarks indirectly, but not draw direct correlations.
 
?

there was never any question that nvidia was faster.

Even in Ashes, the 980TI was faster.

It was the uplift from DX11 to DX12 that was the issue.

None of these benchmarks show this.

Also, does anyone have any idea if Async is used at all in this benchmark?


nV's hardware won't show as much of a gain from DX11 to 12 as AMD GPUs will, because nV's DX11 drivers have lower CPU overhead to begin with.
 
nV's hardware won't show as much of a gain from DX11 to 12 as AMD GPUs will, because nV's DX11 drivers have lower CPU overhead to begin with.

I mentioned it before: it's hard to tell how much AMD gains from running DX12 code... and how much of the gain is due to AMD's CPU overhead in DX11 code.

One has to wonder what is at fault in AMD's DX11 drivers.
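The reasoning in these two posts can be made concrete with a toy model (hypothetical sketch; the draw-call counts and per-draw overheads below are invented, not measured driver data): if two drivers converge on the same thin DX12 submission path, the one with the heavier DX11 path shows the bigger DX11-to-DX12 uplift whenever the CPU is the bottleneck.

```python
# Toy model: CPU-side cost of submitting one frame's draw calls.
# The DX11-to-DX12 "uplift" here measures driver overhead removed,
# not any absolute GPU speed advantage.

def cpu_frame_ms(draw_calls, per_draw_overhead_us):
    """CPU milliseconds spent on draw-call submission for one frame."""
    return draw_calls * per_draw_overhead_us / 1000.0

DRAWS = 10_000
DX12_OVERHEAD_US = 2.0    # assume both vendors land near the thin DX12 path
DRIVER_A_DX11_US = 4.0    # leaner DX11 driver (lower CPU overhead)
DRIVER_B_DX11_US = 9.0    # heavier DX11 driver

for name, dx11_us in [("A", DRIVER_A_DX11_US), ("B", DRIVER_B_DX11_US)]:
    dx11 = cpu_frame_ms(DRAWS, dx11_us)
    dx12 = cpu_frame_ms(DRAWS, DX12_OVERHEAD_US)
    print(f"driver {name}: DX11 {dx11:.0f} ms -> DX12 {dx12:.0f} ms "
          f"({dx11 / dx12:.1f}x uplift when CPU-bound)")
```

Driver B shows the larger uplift purely because its DX11 path was slower to begin with, which is exactly why a big DX12 gain by itself doesn't prove better DX12 hardware.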
 
This benchmark is rather meh, it's a canned benchmark that has no actual game play happening.

AOTS was a far more realistic benchmark due to it simulating actual gameplay, this is literally a tech demo of the environment. Pretty much identical to all the other UE4 Tech Demos people have released.
 
The engine is still Unreal 4. I don't get the surprise some of you got. It is still an engine closely related to Nvidia.
 
This benchmark is rather meh, it's a canned benchmark that has no actual game play happening.

AOTS was a far more realistic benchmark due to it simulating actual gameplay, this is literally a tech demo of the environment. Pretty much identical to all the other UE4 Tech Demos people have released.

Read the description of the benchmark.
It is that way in order to show off the technical aspects of DirectX 12.

Dismiss at your own peril.
 
This benchmark is rather meh, it's a canned benchmark that has no actual game play happening.

AOTS was a far more realistic benchmark due to it simulating actual gameplay, this is literally a tech demo of the environment. Pretty much identical to all the other UE4 Tech Demos people have released.
Still canned benchmarks are canned.
 
The AMD defense force is going to be busy today.

What defence? The Fury X is neck and neck with the 980 Ti, and the 390X (two-year-old tech) is faster than the 980.

All that on an engine that traditionally runs better on Nvidia.
 
The GTX 980 Ti is doing horribly compared to the Fury X on frame times and delivering smooth gameplay. And I thought that PCPer was an outright nvidia shill website. What's going on here?!


Right away in that first frame time graph you can see the additional variance and larger amount of slow frame times on the GTX 980 Ti compared to the R9 Fury X. Even though the average frame rate is higher with the GeForce card, it would be easy to say that the experience between the two products is much more evenly matched. Even with the bottom two comparisons (GTX 960 vs R9 380 and GTX 950 vs R7 370) AMD has an advantage when it comes to consistency of frame times, even if it's at frame rates that no one would want to play at.
fable-4k-gtx980ti.png


fable-4k-gtx980.png



Don't buy any graphics cards until Nvidia drops prices some more.
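PCPer's point that average frame rate and frame-time consistency can disagree is easy to demonstrate with a small sketch (the two frame-time traces below are invented for illustration, not measured card data):

```python
# A card with the higher AVERAGE frame rate can still deliver the
# rougher experience if a fraction of its frames are slow spikes.
import statistics

def avg_fps(frame_times_ms):
    """Average frame rate implied by a list of per-frame times."""
    return 1000.0 / statistics.mean(frame_times_ms)

def percentile_99(frame_times_ms):
    """99th-percentile frame time: the 'slow frame' experience."""
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, round(0.99 * (len(ordered) - 1)))
    return ordered[idx]

# Card X: faster on average, but 5% of frames are big spikes.
card_x = [24.0] * 95 + [70.0] * 5
# Card Y: slower on average, but perfectly consistent.
card_y = [28.0] * 100

print(f"X: {avg_fps(card_x):.1f} fps avg, 99th pct {percentile_99(card_x):.0f} ms")
print(f"Y: {avg_fps(card_y):.1f} fps avg, 99th pct {percentile_99(card_y):.0f} ms")
```

Card X wins the FPS bar chart while Card Y wins the frame-time plot, which is exactly the 980 Ti vs Fury X situation in the graphs above.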
 
What defence? The Fury X is neck and neck with the 980 Ti, and the 390X (two-year-old tech) is faster than the 980.

All that on an engine that traditionally runs better on Nvidia.

He forgot in the midst of his fav hobby lol.:D
 
Read the description of the benchmark.
It is that way in order to show off the technical aspects of DirectX 12.

Dismiss at your own peril.

Yes, I was just saying that everyone proclaiming this a definitive example of nVidia DX12 superiority might want to look at the actual benchmark, since it doesn't show any gameplay, whereas AOTS actually had a more realistic benchmark.
 
Yes, I was just saying that everyone proclaiming this a definitive example of nVidia DX12 superiority might want to look at the actual benchmark, since it doesn't show any gameplay, whereas AOTS actually had a more realistic benchmark.

What part of the benchmark description eludes your comprehension?
Are you deliberately trolling?
 
I had to laugh a little at the lack of problems with Async compute on NVidia :)
Poor red fanboys.

The 390x is very competitive, impressed.

Results from different sites vary quite a bit on 980ti vs Fury.
 
I had to laugh a little at the lack of problems with Async compute on NVidia :)
Poor red fanboys.

The 390x is very competitive, impressed.

Results from different sites vary quite a bit on 980ti vs Fury.

I would bet that the engine doesn't fully support async compute on the PC, to make sure the games run well on Nvidia hardware. Of course on console it would have to, for more performance. That's a guess, though. It would be interesting to see some documentation on UE4 to see whether it does or doesn't. The numbers kinda look like Ashes of the Singularity with async disabled. Then again, Nvidia could have worked some driver magic.

:)
 

because of this

SilverforceG (on reddit):
For those unaware, Unreal Engine 4 which powers Fable Legends ONLY SUPPORTS ASYNC COMPUTE FOR XBONE. Not on PC.
You would have to ask, why not on PC? Why are consoles given special treatment? The consoles run AMD's GCN GPU, if they can handle Async Compute, then surely PC can too?
But wait, dig further... UE4 is nvidia sponsored. Full PhysX integration.
Interesting? You bet.
 
because of this


That guy doesn't know what he is talking about. UE4 doesn't have full PhysX integration, never had and never will out of the box. UE4 only has CPU PhysX, and it's very rudimentary at that, covering basic animation and collision detection for certain objects. UE4 has a completely separate, GPU-driven physics engine for particles, although Flex and APEX can be added to it relatively easily.


Also the developer stated they are using async compute for foliage culling and physics on the PC version, which are not done with physX at all.
 
That guy doesn't know what he is talking about. UE4 doesn't have full PhysX integration, never had and never will out of the box. UE4 only has CPU PhysX, and it's very rudimentary at that, covering basic animation and collision detection for certain objects. UE4 has a completely separate, GPU-driven physics engine for particles, although Flex and APEX can be added to it relatively easily.


Also the developer stated they are using async compute for foliage culling and physics on the PC version, which are not done with physX at all.

I just searched a little bit more and tada...

http://www.overclock.net/t/1574740/anand-fable-legends-dx12-benchmark-analysis/50

enjoy
 
The GTX 980 Ti is doing horribly compared to the Fury X on frame times and delivering smooth gameplay. And I thought that PCPer was an outright nvidia shill website. What's going on here?!

Did you read the entire paragraph? Who cares if the frame times are better when they're at unplayable frame rates anyway.

And that is just one game, the initial Fury review at PCPer showed very poor frame times against the 980 Ti across a wide variety of games.
 
That guy doesn't know what he is talking about. UE4 doesn't have full PhysX integration, never had and never will out of the box. UE4 only has CPU PhysX, and it's very rudimentary at that, covering basic animation and collision detection for certain objects. UE4 has a completely separate, GPU-driven physics engine for particles, although Flex and APEX can be added to it relatively easily.


Also the developer stated they are using async compute for foliage culling and physics on the PC version, which are not done with physX at all.

Please post a source for UE4 already supporting async, because thus far Unreal only implements DX12 async compute for the Xbox One.

https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html
 
Did you read the entire paragraph? Who cares if the frame times are better when they're at unplayable frame rates anyway.

And that is just one game, the initial Fury review at PCPer showed very poor frame times against the 980 Ti across a wide variety of games.
I see that the NVIDIA DEFENSE SQUAD is finally showing up. Allow me to explain the results to you: since the Fury X is capable of delivering smoother gameplay at 4K Ultra, I can turn down a few settings and enjoy better gameplay, unlike with the 980 Ti.
 
FL is still under NDA... It pops up in bright red colors every time I open the XBO client. How/why are they posting these results?

Ahh, I see it's a MS Benchmark.
 