Fable Legends DX12 benchmark

You guys should really bring your 390s to the tropics sometime and try gaming there, without air conditioning. Then try with a 970. You tell me which is more pleasant in a room with little to no airflow (no wind).

100W less heat doesn't just mean lower bills, it means more comfort as well. It also means the card works well in many more situations out there (bad cases with very little airflow).

Either way, what I don't see people questioning enough is AMD's released results. Why do they differ so much from actual results? They say it's the Ultra preset, but nobody has managed to reproduce those results.
You should bring your 970 to Canada sometime; that piddly 3.5 GB card would mean bringing a large heater into my room, so that's $200 plus the heater's power usage. The Fury X looks so tempting because I could aim it at my feet.
 
Even better, you can run the rad out of your case, put it on the floor, and set your feet right on top of it.
I'd pay money to see a picture of that.
 

That's brilliant!

So clearly AMD is the superior brand just based on foot warming utility. :D
 
When will that benchmark be available for download?

Any results with the cards overclocked?
 
You should bring your 970 to Canada sometime; that piddly 3.5 GB card would mean bringing a large heater into my room, so that's $200 plus the heater's power usage. The Fury X looks so tempting because I could aim it at my feet.

Hahaha, that's a good one. Maybe I'll do the same when winter finally arrives here with my Fury X :D

Back to topic: it feels like everybody is missing the point with this test. It is an Nvidia-sponsored Unreal Engine 4 game, and Nvidia has traditionally been running circles around AMD cards in UE4 under DX11.
Like this:
http--www.gamegpu.ru-images-stories-Test_GPU-Action-ARK_Survival_Evolved-test-arc_1920m.jpg

http--www.gamegpu.ru-images-stories-Test_GPU-Action-Unreal_Tournament_-cach-UE4_2560.jpg

http--www.gamegpu.ru-images-stories-Test_GPU-Action-Kholat-test-Kholat_2560.jpg

http--www.gamegpu.ru-images-stories-Test_GPU-Action-The_Vanishing_of_Ethan_Carter_Redux-test-EthanCarter_2560.jpg


Nvidia cards are roflstomping their AMD counterparts in DX11. Now let's add DX12 to that still-Nvidia-sponsored UE4 engine and we suddenly get this:

Untitled.jpg


Btw, those are all custom cards with the exception of Fury, Nano and TX

Now the tables have evened out considerably and AMD cards are actually beating their Nvidia counterparts: 290 vs 780 Ti and 970, 390X vs 980, 280X vs 770, 380 vs 960, etc. How the hell did this happen? :confused:
Sure, the 980 Ti is still the king of the hill, but that doesn't downplay the fact that AMD got considerable boosts with their older cards. Because the Fury X didn't beat the 980 Ti, people conclude that DX12 is not doing anything for the AMD lineup, even though they could just look at the data. My guess is that most people just look at the top of the chart, see the 980 Ti sitting there, conclude there's nothing to see here, and then come to the forums to say that AMD loses even in DX12, when the reality is that AMD only loses the top-of-the-line battle and wins in every other category.

It feels like the whole driver overhead bottleneck that haunts AMD in DX11 is gone from AMD drivers when using DX12. Of course there are only two data points so we still can't draw any final conclusions, but I for one actually like this turn of events. It evens the playing field when comparing Nvidia and AMD cards. Now you can't just flat out say that the 970 is the better card compared to the 390, as the 390 might actually get quite a big performance boost in DX12 games judging by these initial DX12 results.
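To make the driver-overhead point a bit more concrete, here's a rough, hypothetical sketch (not from Fable or any shipping engine, the function name and worker count are made up) of why DX12 changes the picture: the application records command lists on as many threads as it wants and hands them to the queue in one batch, instead of squeezing every draw through a single driver-managed immediate context like in DX11.

```cpp
// Hypothetical illustration of multithreaded command recording in D3D12.
// Fences, pipeline state and the actual draw recording are omitted.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

void RecordInParallel(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        // Each worker gets its own allocator and command list, so recording
        // needs no locking and no hidden driver state tracking.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // ... record this worker's slice of the scene here ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // Everything is submitted in one batch; the heavy lifting of building the
    // commands happened on the worker threads, not on one driver thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```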
 
Can we really make blanket statements about an entire engine? The AMD community are the ones who were originally downplaying UE4 due to Epic's heavy involvement with Nvidia. Some people even pinned ARK's DX12 delay on Nvidia... Keep in mind ARK is a GameWorks game, Fable is not.

Unreal Engine 3, DX11...
Thief, Gaming Evolved: http://www.techpowerup.com/reviews/EVGA/GTX_970_SC_ACX_Cooler/18.html
Arkham Origins, GameWorks: https://www.techpowerup.com/reviews/ASUS/GTX_970_STRIX_OC/7.html

I don't know anything about Fable's implementation but according to PCPer they're using their own version of a "heavily modified" UE4.
Seeing Fable's DX11 results could shed some light but it looks like the game forces DX12 as long as you have capable hardware.

AMD released their driver for this game already (AnandTech didn't use it, apparently) but we haven't heard anything from Nvidia. Their drivers have been shitty over the last few months, especially in Windows 10, so I would chalk this up to a driver issue more than anything else. But hey, 'drivers don't matter in DX12'. Maybe Nvidia is being lazy/incompetent or maybe they just don't give a damn about games that aren't even released yet. AMD has always struggled to stay afloat in DX11, so it makes sense they would switch their efforts to early DX12 benchmarks just to build some hype.

"Don't look at our mediocre bottleneck-induced DX11 benchmarks, check out these new flashy DX12 benchmarks for games you can't actually play for another 6+ months! Aren't those fantastic?"
 

Tell me how much of AMD's gain is from DirectX12...and how much is from their DirectX11 driver being FUBAR?

Think about it...and then tell me when there will be more DirectX 12 games out than DirectX 11 games.

Funny mental exercise.
 

Make no mistake, Sweeney hates AMD but is stuck having to optimize UE4 for consoles. PC games can inherit some of these optimizations for AMD. On Gameworks titles... Some, or all, of these optimizations seem to disappear and go back to default... Sleazy stuff... Anyways...

That being said, the difference comes down to basic GPU design. A gross summary would be: Nvidia built their architecture around feeding the GPU a sequential series of events, while AMD built their GPU to handle inherently parallel requests.

On DX11 and under, Nvidia's approach works better with the way DirectX feeds the GPU information.

In DX12 this approach still works for Nvidia, but now AMD's GPU architecture can finally be fully utilized.
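If anyone wants to see what that difference looks like at the API level, here's a minimal, hypothetical D3D12 sketch (not taken from Fable, UE4 or any real engine): besides the normal graphics ("direct") queue, the app can create a dedicated compute queue, and work submitted there is allowed to overlap with graphics work, which is exactly the kind of parallel feeding GCN's front end was built for.

```cpp
// Hypothetical D3D12 setup: a graphics queue plus a separate compute queue.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

bool CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The usual "direct" queue: takes graphics, compute and copy work in order.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    if (FAILED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue))))
        return false;

    // A separate compute queue: work submitted here may run concurrently with
    // whatever the direct queue is doing, if the hardware can schedule it that
    // way -- this is where the async compute debate lives.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return SUCCEEDED(device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue)));
}
```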
 

No, AMD's DX11 drivers weren't FUBAR; it was the Fiji and Hawaii architectures being enlarged derivatives of console GPUs, which reduced their sequential-thread DX11 performance.

I think we've all discussed enough async-related architecture crap to know this by now. DX12 allows AMD to use the full power of the parallelized Fiji and Hawaii architectures.
 
Holy moly, I said that we can't draw any final conclusions, but it seems that even posting something positive about AMD's DX12 performance gets these kinds of responses. The hate in this thread is unreal.

I would really like to see what this thread would look like if AMD actually lost in those initial DX12 benchmarks. I guess some would be jumping with joy and others would join with doom and gloom posts.

I'll say it again: We can't make any final conclusions about the performance because we don't have enough data points (actual DX12 games) but for now, it is looking quite bright from AMD perspective.

Next time I guess I'll just join the Nvidia defence team and sing the same song, even if the data shows that the song is not sung in the correct tune.
 
I'll say it again: We can't make any final conclusions about the performance because we don't have enough data points (actual DX12 games) but for now, it is looking quite bright from AMD perspective.
When you have people using the data as a 'final conclusion' then you're going to get a proportionate response from the other side, too. If DX12 games start hitting the market with results like these then I think you'll start to see more of a movement towards AMD. AMD loves their hype machine. If they were actually being quiet through all of this (See: Rob Hallock, Richard Huddy, AMD's official benchmarks) then I would actually be more inclined to trust the data. The more AMD hypes something, the more skeptical I become.

Meanwhile Nvidia is silent as the grave. No response to async yet, it's been over a month. No new drivers for AotS or Fable. At least AMD let reviewers know they had a driver.
 
Nvidia has also shut up the Ark developers on DX12. They've even gone as far as having the DX12 thread over on Steam, with hundreds and hundreds of posts, unpinned. Probably in hopes of burying the issue for now.

Edit: let's not forget this came after the devs had announced DX12 for later in the week and had even booked a free weekend on Steam. That's not cheap to do, Valve is quite expensive, so it had to be working.
 
Meanwhile Nvidia is silent as the grave. No response to async yet, it's been over a month. No new drivers for AotS or Fable. At least AMD let reviewers know they had a driver.

What "response" are you expecting from Nvidia for an indie alpha tech demo and a cartoon game that's not out yet and still under NDA? Chances are they have bigger fish to fry, and they're beating AMD on these benches anyway. These are not exactly important, even while tech sites try to generate clicks by being the first with Mickey Mouse "DX12 benches" that everyone can jerk off over like it means anything.
 
Edit: let's not forget this came after the devs had announced DX12 for later in the week and had even booked a free weekend on Steam. That's not cheap to do, Valve is quite expensive, so it had to be working.
The patch was delayed within days of the Async stuff popping up. Presumably, ARK has been working on their DX12 patch for months. Nvidia itself has been working with UE4, as well. It seems really unlikely that Nvidia would just now realize they had performance issues in DX12 + ARK at the same time the news breaks about async, which is also 2 weeks after the Ashes benchmarks went live.

That would mean Wildcard Studios, Epic Games, and Nvidia are like the 3 stooges. "Hey guys we're about to release this DX12 patch tomorrow, let's go ahead and benchmark it first. Oh shit it runs like crap on our sponsored hardware!" :confused::confused::confused: Although TO BE FAIR maybe they never actually tested it on AMD cards this entire time, so they never realized how much faster AMD hardware actually was... lol.

If Nvidia had such a heavy hand in ARK, as people are implying, they would have been well aware of their performance problems from day one. And they never would have promised a DX12 patch this early, only to subsequently delay it. Just because it's news to us, doesn't mean it's news to them; surely the people designing the game, the people designing the engine, and the people designing the hardware would have seen this issue earlier on. I mean they are literally the 3 groups of people responsible for the entire thing.
 
Wonder how hard it is to throw in an ASYNC on/off switch like Ashes of the Singularity has. Nvidia users seem pleased with that game when you turn off ASYNC. AMD users seem to be pleased when it is turned on. Oxide seemed happy to include it as an option.

I think that would be the best solution for everyone.
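If it works the way the Ashes toggle does, the switch itself probably isn't much more than choosing which queue the compute work gets submitted to. Here's a hypothetical sketch (not Oxide's or Fable's actual code), assuming the engine already keeps both a direct and a compute queue around:

```cpp
// Hypothetical "async compute on/off" toggle -- purely illustrative.
#include <d3d12.h>

struct RendererQueues {
    ID3D12CommandQueue* graphics; // D3D12_COMMAND_LIST_TYPE_DIRECT
    ID3D12CommandQueue* compute;  // D3D12_COMMAND_LIST_TYPE_COMPUTE
};

void SubmitComputeWork(const RendererQueues& q,
                       ID3D12CommandList* const* lists, UINT count,
                       bool asyncComputeEnabled)
{
    // Toggle on: compute goes to its own queue and may overlap the graphics
    // work (what GCN owners want). Toggle off: it rides the graphics queue and
    // simply serializes (what the Maxwell users in this thread seem to prefer).
    // A real engine would record the lists with the command-list type that
    // matches whichever queue is used.
    ID3D12CommandQueue* target = asyncComputeEnabled ? q.compute : q.graphics;
    target->ExecuteCommandLists(count, lists);
}
```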
 
That's not a solution, it's a band-aid... More like a non-solution.
Same as the people who say "Just play it in DX11 mode". wat?
 

Or AMD's post-Windows 10 launch drivers have been pretty much flawless, steadily improving the new Fury lineup while also improving their older cards' performance. Almost to the point where you can't tell the 200 series from the 300 series or the new Fury series.
 
No, AMD's DX11 drivers weren't FUBAR; it was the Fiji and Hawaii architectures being enlarged derivatives of console GPUs, which reduced their sequential-thread DX11 performance.

I think we've all discussed enough async-related architecture crap to know this by now. DX12 allows AMD to use the full power of the parallelized Fiji and Hawaii architectures.

Only trying (and quite badly) to answer half the point is worse than no answer.
When will DirectX12 game releases surpass DirectX11 games?

You sound like you are trying to sell today's GPUs for tomorrow's games (aka: This GPU is futureproof... Trust me!!!!).


That has NEVER worked FYI.
 
Sounds like you are trying to create an issue that doesn't exist.

Tomorrow's GPUs aren't here.

Today's are.
 
That's not a solution, it's a band-aid... More like a non-solution.
Same as the people who say "Just play it in DX11 mode". wat?

Well it's not like Nvidia is going to sell a DX12 helper card that adds ASYNC support to the older hardware. It is what it is. Let the Nvidia users use DX11 features within DX12 for certain things if it's faster for them. There isn't a visual difference in DX12 vs DX11 yet. As long as it looks the same and plays the same who cares?
 

DX12 support in the released versions of UE4 is still marked as experimental.

If you enable DX12 on the Infiltrator demo, both AMD and Nvidia see a performance regression - https://www.youtube.com/watch?v=llzhKw6-s5A

Wonder how hard it is to throw in an ASYNC on/off switch like Ashes of the Singularity has. Nvidia users seem pleased with that game when you turn off ASYNC. AMD users seem to be pleased when it is turned on. Oxide seemed happy to include it as an option.

I think that would be the best solution for everyone.

Supposedly there is a toggle in the Fable bench. The Fury X has roughly a 20% performance drop with it off compared to the current results you are seeing.
 

Could be a case of the right hand not knowing what the left hand is doing. A bit of additional info... Nvidia only bought into Ark after seeing it greenlit on Steam like everyone else, which is later in the process than usual (you might even be able to Google that info now). It could just be the lag time between getting word from Wildcard that they were ready to ship the DX12 codepath and comparing that against results from their technical marketing department (yes, they test against the competition, of course). What I wouldn't give to have seen that OH CRAP! moment when they used their GameWorks contract to force Wildcard to STFU.

In any case, Nvidia was blindsided.

Well it's not like Nvidia is going to sell a DX12 helper card that adds ASYNC support to the older hardware. It is what it is. Let the Nvidia users use DX11 features within DX12 for certain things if it's faster for them. There isn't a visual difference in DX12 vs DX11 yet. As long as it looks the same and plays the same who cares?

You mean like a separate Physx card? ;). It's all in the branding dude!

Nvidia could call it a DX12+ card! NOW doesn't that sound better than admitting they lied and don't properly support async compute units in hardware? A 980 Ti with a DX12+ sidekick card for the "ultimate DX12 experience." The sad part is, people would actually buy into that marketing.
 
Although TO BE FAIR maybe they never actually tested it on AMD cards this entire time, so they never realized how much faster AMD hardware actually was... lol.

Now, this isn't even that far-fetched a scenario when you look at a couple of GameWorks titles released recently, where the studios didn't even list recommended specs for AMD GPUs and only added them later on, and which also had more than horrendous performance on AMD GPUs in general at launch.

Of course I don't really believe this is the case with ARK, but oh god, how hilarious would it be if someone from Wildcard Studios suddenly came in and said "we didn't release the DX12 patch because it made AMD cards look way better than we expected and we only got sub-par improvements with Nvidia cards, so we delayed the patch to fine-tune the code for Nvidia hardware". :D
 
Can we really make blanket statements about an entire engine? The AMD community are the ones who were originally downplaying UE4 due to Epic's heavy involvement with Nvidia. Some people even pinned ARK's DX12 delay on Nvidia... Keep in mind ARK is a GameWorks game, Fable is not.

Unreal Engine 3, DX11...
Thief, Gaming Evolved: http://www.techpowerup.com/reviews/EVGA/GTX_970_SC_ACX_Cooler/18.html
Arkham Origins, GameWorks: https://www.techpowerup.com/reviews/ASUS/GTX_970_STRIX_OC/7.html

Keep in mind that Thief on Mantle is using asynchronous shading.
 

AMD wrote that codepath essentially themselves with input from Square Enix. It is an impressive feat they managed to do years ago already. With equally impressive results.

Curiously... PS4 started looking into Async Shader use... Around that time... ;)
 
And the gains from it are hilariously small.

What are the minimum frames vs DX11 and equivalent Nvidia cards?

What are the frame times, to see how much more or less fluid the Mantle version plays vs DX11 and Nvidia cards?
 

Considering that none of the nvidiots commented on the PCPer graphs showing the 980 Ti stuttering much more than the Fury X, I'm sure average FPS will become the only important metric, the same way they've done with power and temperature.
 

It's not only about max FPS.

Unless you've actually played the Mantle version of Thief, you can't understand how huge the difference is, ESPECIALLY in Crossfire. The minimum fps doubles, even triples in some cases under Mantle, yes.

But what caught my attention, and it seems other people's as well, is the increase in responsiveness (fluidity)! I couldn't explain how extremely, extremely responsive the game was until I learned this was a game where AMD introduced async shaders, same as in their LiquidVR, to reduce lag.

The DX11 version feels very laggy by comparison, but again until you know better you can't compare.
 
Only trying (and quite badly) to answer half the point is worse than no answer.
When will DirectX12 game releases surpass DirectX11 games?

You sound like you are trying to sell today's GPUs for tomorrow's games (aka: This GPU is futureproof... Trust me!!!!).


That has NEVER worked FYI.

What are you even talking about? You said AMD's DX11 drivers were shit, which is not true, and I corrected you. The latest two generations of AMD graphics architectures weren't built around DX11; they were built primarily for console workloads. The parallel architecture that AMD developed for the Sony and Xbox consoles wasn't well suited to DX11.

Deus Ex: Mankind Divided is around the corner, loaded with DX12 and proprietary AMD graphics features.
 
Sounds like you used it with a crappy CPU. But using Crossfire sure explains a lot.

And I don't know anyone who has even said a word about some super fluidity effect.

thief_mantle_cpu_4770koc.png
 
There seem to be issues with AMD drivers and high CPU usage. This has nothing to do with parallel execution of ops; there should be no CPU involvement for calculations that run solely on the GPU. The problem doesn't seem to be localized to certain applications either, it's pretty widespread. I wouldn't call the drivers shit, but optimization-wise, yeah, they are behind with their DX11 drivers. Is that because they decided to focus on Mantle and DX12 drivers? Possibly. I say this because if we look at Fiji, it's running into similar (similar, not the same) issues as Maxwell in DX12 benchmarks, which we don't see on any other GCN hardware.
 
So why aren't more people upset about AMD selling underperforming console-based tech on the PC market? They had to suffer through two generations of weak performance for it to finally pay off in DX12... right before they replace their card with a new 16nm model. But hey, at least they get to show off Deus Ex for a few months until Pascal/Greenland. :D

And this is an insider's perspective. I owned AMD exclusively since 2007. I suffered through the worst of it and continued to suffer up to the moment I pulled the 280X out of my rig. It looks like I still made the right decision, since the Fury X continues to get smoked by the 980 Ti even in DX12. :rolleyes:

edit: I'm only half serious about AMD's DX11 performance over the years. They've been mostly competitive and DX12 is good news for anyone keeping a Hawaii-based card for another year or two.
 
Sounds like you used it with a crappy CPU. But using Crossfire sure explains a lot.

And I don't know anyone who has even said a word about some super fluidity effect.

thief_mantle_cpu_4770koc.png

Crossfire under Mantle for this game is amazing. As for fluidity, it would show up with FCAT, but the reduction in lag wouldn't.

Edit: to read about the reduction in lag, you can look up The Tomorrow Children on the PS4; they talk about it more there.
 

GCN is looking more and more like a competently designed architecture for graphics rendering (not saying it's better than Maxwell, and certainly not on power consumption) that was wildly misaligned in its resources to the APIs that were using the hardware for the past several years (and continuing). Not visionary or anything like that, but more competent than we had assumed for years (which doubly means that AMD, while still having PLENTY of work cut out for them, isn't as far behind architecturally). Having two good manufacturers of GFX cards is a good thing, no?

I realize this is [H]ardForum, but not all of us can/want to own a 980ti, even if the groups most loudly shouting back and forth are the 980ti/Fury crowd. The 980ti is an absolutely fantastic card, but also worth more than my entire system (now, at least, not when I got it years ago). As has been stated, the 390/970 price point is where things probably matter the most for enthusiasts, and it'd be great if we had a lot of good offerings around this mark.

I'm pretty hopeful that I'll be able to get a few more years of decent 1080p (not amazing, reasonable expectations here) out of my 280X (recently bought used, it hits a very nice performance/$ spot even at DX11 and under). With DX12/Vulkan allowing better utilization of AMD hardware, I might get a few more good years (rather than mediocre years) out of this sucker. That's kinda cool, IMO--who doesn't like the idea of getting 7+ years out of a system with only an SSD and a graphics card added to the mix over time? It's not always about the leading edge.
 
It's not only about max FPS.

Unless you've actually played the Mantle version of Thief, you can't understand how huge the difference is, ESPECIALLY in Crossfire. The minimum fps doubles, even triples in some cases under Mantle, yes.

But what caught my attention, and it seems other people's as well, is the increase in responsiveness (fluidity)! I couldn't explain how extremely, extremely responsive the game was until I learned this was a game where AMD introduced async shaders, same as in their LiquidVR, to reduce lag.

The DX11 version feels very laggy by comparison, but again until you know better you can't compare.

I have to agree here on the "fluidity" thing. I never did see Thief, but I saw it firsthand in BF4 on Mantle on a 290X, and then again with Civ 5: Beyond Earth on Mantle with its split-frame rendering. It's actually pretty compelling, and it's what excites me about Vulkan and DX12 going forward.

That said, I don't think it had much to do with async shaders, and I don't see that buzzword of the moment being the Christmas miracle that brings AMD back, at least not in the way some are hoping it will.
 