Fable Legends DX12 benchmark

So, a second data point for DX12 performance? Holy shit! This constitutes a preponderance of evidence now! Everyone celebrate by spreading more FUD!
 
I see that the NVIDIA DEFENSE SQUAD is finally showing up. Allow me to explain the results to you: since the Fury X is capable of delivering smoother gameplay at 4K Ultra, I can turn down a few settings and enjoy better gameplay, unlike with the 980 Ti.

I actually looked at the chart and frame times are poor for both. Unplayable regardless of which card you have. So that's a win for Fury? lol

As far as turning settings down - you would have to re-test the frame time results then. You can't just assume they would be better.
 
Please post a source for UE4 already supporting async, because thus far Unreal only supports a DX12 implementation for the XBOne.

https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html

Lionhead isn't using the same engine that's available to everyone else, OK? And they have stated that for this DX12 demo benchmark they are using async compute on PCs. If you are too lazy to read the articles, I suggest you don't post. I don't understand why people don't read... it doesn't make any sense; they just want to be spoon-fed.
 
There's gotta be something hardware-wise that's bottlenecking the Fury (or GCN 1.2 for that matter). If the 390X and even the 390 can outpace the 980 at DX12, why does the Fury seem so horribly gimped by comparison?
 
Why are people so focused on $600+ video cards and 4K?
I wish they'd take a closer look at the 960/285/380 and the 390/970/390X/980 all at 1080p and some 1440p. That's where the competition is.

And definitely throw AMD's official benchmarks out the window right from the start.

Also worth mentioning this game doesn't support async whatsoever, from what I'm reading.
 
There's gotta be something hardware-wise that's bottlenecking the Fury (or GCN 1.2 for that matter). If the 390X and even the 390 can outpace the 980 at DX12, why does the Fury seem so horribly gimped by comparison?

The AMD Fury X and R9 390X have the same number of ROPs (64) despite the Fury X having ~45% more shaders. Because of this, the Fury X will show limited improvement over the R9 390X in situations where brute shading power isn't required.
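To put rough numbers on that, here's a back-of-the-envelope sketch using the commonly published specs (assumed here, not measured in this benchmark): both cards list 64 ROPs at roughly the same ~1050 MHz clock, so pixel fill rate is essentially identical, and the Fury X's extra shaders only show up as raw FLOPS.

```cpp
// Back-of-the-envelope Fury X vs R9 390X comparison using commonly published specs.
#include <cstdio>

int main() {
    struct Gpu { const char* name; int shaders; int rops; double clock_ghz; };
    const Gpu cards[] = {
        {"Fury X",  4096, 64, 1.05},
        {"R9 390X", 2816, 64, 1.05},
    };
    for (const Gpu& g : cards) {
        double tflops    = 2.0 * g.shaders * g.clock_ghz / 1000.0; // FMA counts as 2 FLOPs/clock
        double gpixels_s = g.rops * g.clock_ghz;                   // pixel fill rate in GPixel/s
        std::printf("%-8s %.1f TFLOPS, %.1f GPixel/s fill rate\n", g.name, tflops, gpixels_s);
    }
    // Same ~67 GPixel/s fill rate on both; only shader throughput (8.6 vs 5.9 TFLOPS) differs.
    return 0;
}
```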
 
My biggest takeaway from this is anyone that continues to recommend a 970 over a 390 should get an infraction for trolling.
 
because of this

I think those people will find that others have a hard time taking them seriously if all they do is accuse a developer of colluding with Nvidia every time a benchmark result doesn't show what they want to see.

I don't think it's healthy for anyone to be so emotionally invested in a company that they go beyond reason.
 
Lionhead isn't using the same engine that's available to everyone else, OK? And they have stated that for this DX12 demo benchmark they are using async compute on PCs. If you are too lazy to read the articles, I suggest you don't post. I don't understand why people don't read... it doesn't make any sense; they just want to be spoon-fed.

Please post a source for UE4 already supporting async, because thus far Unreal only supports a DX12 implementation for the XBOne.

https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html

Please post a source for Fable Legends fully supporting DX12 features.
 
I actually looked at the chart and frame times are poor for both. Unplayable regardless of which card you have. So that's a win for Fury? lol

As far as turning settings down - you would have to re-test the frame time results then. You can't just assume they would be better.

Ok, but at playable 1080p res we see the following results.
[chart: fable-1080p-avg.png, 1080p average FPS]


[chart: fable-1080p-95th.png, 1080p 95th percentile]
 
My biggest takeaway from this is anyone that continues to recommend a 970 over a 390 should get an infraction for trolling.

If all you're ever going to play is one cartoon F2P then sure. And that's assuming you can get a good price on one, but then factor in that in so many current DX11 games it gets beaten handily by a mildly OC'd 970. By the time DX12 really hits its stride (end of 2016 at the earliest) and there's more than a few token titles, the landscape will have shifted again in that price bracket, in part due to Pascal.

In general though you can't go wrong either way if it's a choice between those two in particular.
 
If all you're ever going to play is one cartoon F2P then sure. Not to say the 390X is a bad value if you can get a good price on one, but in so many current DX11 games it gets beaten handily by a mildly OC'd 970.
So far there's no indication that Maxwell is going to pull ahead of Hawaii in DX12 games. We now have two benchmarks and the results are "meh" at best for Nvidia, so there's no reason to continue buying 970s.
The 390 is at least as good as the 970 in DX11 games, and it has over twice the VRAM. With the performance gains of DX12 (which may or may not show up in every game), you're taking a gamble with the 970 at this point. There's really no payoff.

The real question is simply, why buy a GTX 970? You just shouldn't unless you have no choice... So, if you own a G-Sync monitor then buy a 970, I guess.
 
My biggest takeaway from this is anyone that continues to recommend a 970 over a 390 should get an infraction for trolling.

[image: hGnvNxw.jpg]


The 390 is not much faster in actual games on the market, but holy shit, it sucks up enough power to require its own flux capacitor. :eek:
 
So far there's no indication that Maxwell is going to pull ahead of Hawaii in DX12 games. We now have two benchmarks and the results are "meh" at best for Nvidia, so there's no reason to continue buying 970s.
The 390 is at least as good as the 970 in DX11 games, and it has over twice the VRAM. With the performance gains of DX12 (which may or may not show up in every game), you're taking a gamble with the 970 at this point. There's really no payoff.

The real question is simply, why buy a GTX 970? You just shouldn't unless you have no choice... So, if you own a G-Sync monitor then buy a 970, I guess.

Eh, how about day-one drivers? ShadowPlay? OC headroom? Cooler? Less power? All the current DX11 titles the 970 is faster at? Plenty of reasons. A tech demo and a cartoon game are a bit premature to be declaring DX12 victories for one side or the other. Let's wait for meaningful and reproducible benches on games people actually want to play. Way too much assumption and extrapolation going on. It definitely looks good for the 390, but it's far from a no-brainer.

[image: 89kLm6v.jpg]
 
[image: hGnvNxw.jpg]


The 390 is not much faster in actual games on the market, but holy shit, it sucks up enough power to require its own flux capacitor. :eek:

Hmmm
[image: 74804.png]


So the 980 Ti is just trash because it sucks down more than a 290X in Uber mode? Seriously, power is like 7c/kWh on average, and went as low as 2c for us here. Not a big deal.
 
Eh, how about day-one drivers? ShadowPlay? OC headroom? Cooler? Less power? All the current DX11 titles the 970 is faster at? Plenty of reasons. A tech demo and a cartoon game are a bit premature to be declaring victories for one side or the other. Let's wait for meaningful and reproducible benches on games people actually want to play.

Anandtech wasn't using the latest Catalyst either.

AMD Catalyst 15.201.1102 vs. 15.201.1151, which is already out.
 
Anandtech wasn't using the latest Catalyst either.

AMD Catalyst 15.201.1102 vs. 15.201.1151, which is already out.
I thought drivers didn't matter in DX12? Or is that only true when Nvidia says their drivers are outdated... :p
 
Please post a source for Fable Legends fully supporting DX12 features.

The PC Perspective review. I already linked it, talked about it, and quoted it. READ instead of just posting without reading.

http://hardforum.com/showpost.php?p=1041873255&postcount=13

http://www.pcper.com/reviews/Graphi...-Benchmark-DX12-Performance-Testing-Continues

Compute shader simulation and culling is the cost of our foliage physics sim, collision and also per-instance culling, all of which run on the GPU. Again, this work runs asynchronously on supporting hardware.

That is the developer talking about async compute for this specific benchmark.
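For anyone wondering what "runs asynchronously on supporting hardware" looks like at the API level: in D3D12 you submit compute work on a separate compute-type command queue so it can overlap with graphics work. Here's a minimal sketch of just the queue setup; this is not Lionhead's code, and it omits the pipeline state, root signature, and barriers you would need to actually dispatch anything:

```cpp
// Minimal sketch of D3D12 async compute: a dedicated COMPUTE queue alongside the
// usual DIRECT (graphics) queue. Illustrative only, not Fable Legends' implementation.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter (error handling trimmed to the minimum).
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    // Graphics work goes on the usual DIRECT queue...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...while compute work (e.g. a foliage physics sim) is recorded into COMPUTE-type
    // command lists and submitted on a separate COMPUTE queue. On hardware and drivers
    // that support it, the two queues can execute concurrently: that is "async compute".
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // A fence lets the graphics queue wait (on the GPU, not the CPU) only at the point
    // where it actually consumes the compute results, instead of serializing the frame.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1); // compute queue marks "results ready"
    gfxQueue->Wait(fence.Get(), 1);       // graphics queue won't pass this point until then

    return 0;
}
```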
 
The PC Perspective review. I already linked it, talked about it, and quoted it. READ instead of just posting without reading.

http://hardforum.com/showpost.php?p=1041873255&postcount=13

http://www.pcper.com/reviews/Graphi...-Benchmark-DX12-Performance-Testing-Continues



That is the developer talking about async compute for this specific benchmark.
I don't know the validity of your post, but move the "Dark green line" bit above the graph so it's easier to see.

What about dynamic lighting and post processing? Aren't those handled asynchronously? Looks like Maxwell loses at both of those.
AnandTech found similar results.

When we do a direct comparison for AMD’s Fury X and NVIDIA’s GTX 980 Ti in the render sub-category results for 4K using a Core i7, both AMD and NVIDIA have their strong points in this benchmark. NVIDIA favors illumination, compute shader work and GBuffer rendering where AMD favors post processing, transparency and dynamic lighting.

[image: mpvCATw.png]
 
I thought drivers didn't matter in DX12? Or is that only true when Nvidia says their drivers are outdated... :p


Drivers still matter, lol. People who thought DX12 and low-level APIs were going to make driver optimizations go the way of the dodo are just fooling themselves.
 
[image: hGnvNxw.jpg]


The 390 is not much faster in actual games on the market, but holy shit, it sucks up enough power to require its own flux capacitor. :eek:

Let's see....

200w 8hrs a day

(200x8)/1000 = 1.6kwh per day

$0.16 per kWh (here in CT, generation plus delivery cost)

1.6 x 0.16 = $0.256

Wow. If you game 8 hours a day with the GPU pegged at 100% the whole time, it costs you about 26 cents a day to run a 390 vs a 970.

Guess I'm not that worried about that $0.26 a day.
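Here's that math generalized a bit (a quick sketch; the 200W delta, 8 hours/day, and $0.16/kWh are the numbers assumed in the post above, and the yearly and 3-year figures are just that daily cost extrapolated):

```cpp
// Quick sketch of the power-cost math: cost of a given wattage difference
// (e.g. 390 vs 970 under full load) at a given usage pattern and electricity rate.
#include <cstdio>

int main() {
    const double watts_delta   = 200.0; // assumed extra draw of the 390 vs the 970 at full load
    const double hours_per_day = 8.0;   // gaming hours per day with the GPU pegged
    const double usd_per_kwh   = 0.16;  // example CT rate from the post (generation + delivery)

    const double kwh_per_day  = watts_delta * hours_per_day / 1000.0; // 1.6 kWh
    const double cost_per_day = kwh_per_day * usd_per_kwh;            // ~$0.26

    std::printf("per day:   %.2f kWh -> $%.2f\n", kwh_per_day, cost_per_day);
    std::printf("per year:  $%.0f\n", cost_per_day * 365.0);          // ~$93
    std::printf("per 3-year upgrade cycle: $%.0f\n", cost_per_day * 365.0 * 3.0);
    return 0;
}
```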
 
I don't know the validity of your post, but move the "Dark green line" bit above the graph so it's easier to see.

What about dynamic lighting and post processing? Aren't those handled asynchronously? Looks like Maxwell loses at both of those.
AnandTech found similar results.



[image: mpvCATw.png]


In this specific benchmark it looks like they aren't using async compute for lighting, only for the foliage physics and occlusion culling. That work is represented by the compute shader simulation & culling line; in the PC Perspective graphs it was labeled as compute shaders (the dark green line).
 
The real question is simply, why buy a GTX 970? You just shouldn't unless you have no choice... So, if you own a G-Sync monitor then buy a 970, I guess.

Why buy a GTX 970? Because you're broke, plain and simple. If you weren't, and you still cared about power/heat, you would've bought a 980 instead.
 
I don't know the validity of your post, but move the "Dark green line" bit above the graph so it's easier to see.

What about dynamic lighting and post processing? Aren't those handled asynchronously? Looks like Maxwell loses at both of those.
AnandTech found similar results.



[image: mpvCATw.png]

Anandtech screwed up their labeling here. Is AMD the orange bars as the graph suggests or black bars as the paragraph says?
 
Can someone give me a 30,000 ft view, especially of the mid-to-lower-end stuff? (One would hope it's fairly general, architecture vs. architecture.)

From what I understand, it *seems* like everyone is benefiting from DX12's API to some degree, but AMD hardware is seeing a larger jump from DX11? Nvidia's chips at the top end are still faster, but the mid-range seems to be much closer to parity on performance per dollar? (This is ignoring that Nvidia is still way, way lower power.)

I'm utterly disinterested in who's "winning", "told you so", and all that.
 
The Anandtech benchmark has the i3 > i5 > i7... that's suspicious.

That is on 4K Ultra only; the lower settings scale accordingly.

Hmm, OK, let's say it really is effectively the same performance... still a grain of salt, but interesting nonetheless.

WTF, at 1080p the Fury has inverse scaling, actually being faster on the i3 than the i5 than the i7, similar to the overall 4K Ultra results.
Are the extra threads being generated actually adding latency on top of the already low overhead of DX12?
 
I don't see anything here that changes what we saw in AotS.

AMD defeats Nvidia handily in the midrange. Deciding between the 980/980 Ti/Fury X/Fury is now an exercise in what you're actually going to use the card for and where your next upgrade timeframe lies. Legitimate arguments independent of price can be made for either brand at the $450+ price point.

There is no reason to buy a GTX 970, 950, or 960 right now, none whatsoever.
 
Eh, how about day-one drivers? ShadowPlay? OC headroom? Cooler? Less power? All the current DX11 titles the 970 is faster at? Plenty of reasons. A tech demo and a cartoon game are a bit premature to be declaring DX12 victories for one side or the other. Let's wait for meaningful and reproducible benches on games people actually want to play. Way too much assumption and extrapolation going on. It definitely looks good for the 390, but it's far from a no-brainer.

[image: 89kLm6v.jpg]

Asking those types of questions while not knowing that AMD has had an equivalent to ShadowPlay for over a year now, as well as direct Twitch streaming...
 
We spent the last month listening to people ramble about the 290X beating a 980 Ti in DX12. So... There's a bit of a difference in Fable.

[image: E30xoY3.png]

That was a false narrative.

AMD performing well in AotS was/is always about midrange cards offering unexpected performance gains relative to top-tier cards, comparatively speaking of course.
 
If all you're ever going to play is one cartoon F2P then sure. And that's assuming you can get a good price on one, but then factor in that in so many current DX11 games it gets beaten handily by a mildly OC'd 970. By the time DX12 really hits its stride (end of 2016 at the earliest) and there's more than a few token titles, the landscape will have shifted again in that price bracket, in part due to Pascal.

In general though you can't go wrong either way if it's a choice between those two in particular.

Look at these Amazon sales charts.

http://www.amazon.com/gp/bestsellers/pc/284822/ref=pd_zg_hrsr_pc_1_3_last


The number 1 selling GPU out of all the GPUs there is a 970 (with multiple variants not far below that spot). The number of people looking to get a GPU in the $300-350 range is absolutely enormous.

So whoever offers the best card in that price range can make incredible sales gains. Right now, based on the facts, that card should not be a 970 for most people. If you only have a 400W power supply and an overclocked 8350, OK, then you might need to save some power. But anyone who is building a system and deliberately chooses such crippled hardware just to constrain themselves to a 970 is doing it wrong.
 
Let's see....

200w 8hrs a day

(200x8)/1000 = 1.6kwh per day

$0.16 per kWh (here in CT, generation plus delivery cost)

1.6 x 0.16 = $0.256

Wow. If you game 8 hours a day with the GPU pegged at 100% the whole time, it costs you about 26 cents a day to run a 390 vs a 970.

Guess I'm not that worried about that $0.26 a day.

I agree, but at the same time that's about $90/year, which could otherwise be put toward a GPU upgrade and is instead wasted powering a GPU that draws too much power relative to its performance. That still makes the 970 a better buy, especially for someone who plans to upgrade, is on a limited budget, and can't afford to burn $90/yr on electricity for a GPU.
 
Yeah, but that's 8 hours of full load per day, and the 970 runs up your power bill too.

True, but his calculation was based on the 200W difference, and there are a lot of gamers who play all day long. Maybe not at peak load, but even if they're above half load and burn $50-60+ a year on electricity (over what a 970 would burn) just for that GPU, it diminishes the value of owning it, especially if you plan to upgrade every 3 years. That's roughly $150-270 per upgrade cycle depending on how much you game and the load you put on the GPU. This is amplified even more if you plan to Crossfire two of them.
 