Fable Legends DX12 benchmark

I have to agree here on the "fluidity" thing. I never saw Thief, but I saw it firsthand in BF4 on Mantle, and then again with Civ: Beyond Earth on Mantle with its split-frame rendering. It's actually pretty compelling, and it's what excites me about Vulkan and DX12 going forward.

That said, the fluidity had fuckall to do with async shaders, and if you're hoping that buzzword of the moment is going to be the Christmas miracle that brings AMD back, you're kidding yourself.

All different techniques.

Don't know why you bring up Christmas, but don't care.

Who knows what kinds of optimizations Johan Andersson and his team are using in their Frostbite engine, but they must be significant. We've only seen the first version of SFR in Civ: BE; it works great and should get even better going forward. As for async and fluidity, I obviously haven't seen the LiquidVR SDK yet, but apparently AMD is using it to reduce movement lag and keep it under the 16-20 ms maximum to reduce nausea.
 
So why aren't more people upset about AMD selling underperforming console-based tech on the PC market? They had to suffer through two generations of weak performance for it to finally pay off in DX12... right before they replace their card with a new 16nm model. But hey, at least they get to show off Deus Ex for a few months until Pascal/Greenland. :D

And this is an insider's perspective. I'd owned AMD exclusively since 2007. I suffered through the worst of it and continued to suffer up to the moment I pulled the 280X out of my rig. It looks like I still made the right decision, since the Fury X continues to get smoked by the 980 Ti even in DX12. :rolleyes:

edit: I'm only half serious about AMD's DX11 performance over the years. They've been mostly competitive and DX12 is good news for anyone keeping a Hawaii-based card for another year or two.

The irony: you probably switched at the worst possible time, as we are finally nearing the inflection point where the DX12 API will favor AMD, or at least make them much more competitive across the line.
 
The irony: you probably switched at the worst possible time, as we are finally nearing the inflection point where the DX12 API will favor AMD, or at least make them much more competitive across the line.

Saying that a card sucks in games now but may or may not suck in the future is like the worst sales pitch ever.
 
Saying that a card sucks in games now but may or may not suck in the future is like the worst sales pitch ever.

Dude, that's AMD's and/or their shills' main selling point, and their fans eat it up:

"Fury X will demolish 980Ti/Titan X, just wait until it's released"
"Fury X will be an overclockers dream...just wait till we can actually overclock it!"
"Oops ok so Fury X doesn't overclock like we thought..wait until voltage is properly unlocked! Just wait."
"AMD will have HBM 2 priority, too bad NVIDIA! Pascal will be late and AMD will be ahead of the game. Just wait"
"AMD won all 3 console contracts, RIP NVIDIA! All games will be optimized for GCN. Just wait."
"FresSync will take over the world and GSync will be dead. Just wait"
"Async Compute, just wait till late 2016, it will smoke 780 Ti! Just wait."
"AMD APUs will put NVIDIA and Intel out of business someday, just wait."
"Lisa Su will save AMD, just wait"
"Zen"
and of course
"Bulldozer"
 
Dude, that's AMD's and/or their shills' main selling point, and their fans eat it up:

"Fury X will demolish Titan X"
"Fury X will be an overclockers dream...just wait till we can actually overclock it!"
"Oops ok so Fury X doesn't overclock like we thought..wait until voltage is properly unlocked!"
"AMD will have HBM 2 priority, too bad NVIDIA!"
"FresSync will take over the world and GSync will be dead"
"Async Compute, just wait till late 2016, it will smoke 780 Ti!"
"AMD APUs will put NVIDIA and Intel out of business someday"
"Zen"
and of course
"Bulldozer"

Between you and Prime1, I can't tell who is the bigger troll these days.

No one from AMD said it would destroy the Titan X

They did say it would be an overclocker's dream; that was a blunder on their part.

AMD has never promised voltage control for any of their cards; you're just parroting fanboys there.

With Intel adopting FreeSync, G-Sync is in trouble; I doubt it will live much longer.

AMD's async compute is light years ahead of what Nvidia can do right now on their cards; perhaps Nvidia will close that gap with next-gen hardware.

AMD makes a better APU, but I don't think anyone thought it would bankrupt Nvidia or Intel. In fact, with every new GPU or CPU, people usually say it will be the death of AMD, not the other way around.

Zen is coming; we'll see what it can do when it's here, though it does sound good so far.

Bulldozer is a great chip at multi-threaded tasks; however, it sucked at single-threaded tasks. Not surprising, since it was built as a server chip first. Sadly, that chip got overhyped and expectations were way too high.

I could go over all the BS you post that's just as bad, FUD and pro-Nvidia, but I don't have all day. It's hardware; learn to enjoy it and stop trying to justify your purchase to everyone else. After all, you bought it for your use, not mine.
 
Between you and Prime1, I can't tell who is the bigger troll these days.

No one from AMD said it would destroy the Titan X

I stopped reading here. AMD posted benchmarks showing it would destroy nVidia and called it an "overclocker's dream". No one got close to their results, and we know it's tapped out.

I own a Fury X and think it's a good SFF card, but let's not rewrite history.


[AMD benchmark slide]
 
The irony: you probably switched at the worst possible time, as we are finally nearing the inflection point where the DX12 API will favor AMD, or at least make them much more competitive across the line.
If it makes you feel any better, if I had gone with a mid-range part instead, I probably would have bought a 390 despite how much I dislike Hawaii.
 
I stopped reading here. AMD posted benchmarks showing it would destroy nVidia and called it an "overclocker's dream". No one got close to their results, and we know it's tapped out.

I own a Fury X and think it's a good SFF card, but let's not rewrite history.


[AMD benchmark slide]

Still waiting to see that quote AMD made about destroying a Titan X, cool graph tho.

Destroying, failure, etc. are all in the eye of the beholder; some would look at that graph and say it's barely faster. Of course, marketing never shows their product in its best light... oh wait, that's what they get paid to do.
 
I don't always give a shit about benchmarks but whenever I visit this forum, AMD is losing them.
 
Saying that a card sucks in games now but may or may not suck in the future is like the worst sales pitch ever.

Nvidia has the high-end crown, and nothing else, in DX12. This is not to be contested; we've all seen the charts.

390 ~ 980 > 970
390x > 980
280 > 960

980 Ti >= Fury, depending on the degree of async compute going on (not much in the Fable benchmarks shown)


The response to this is essentially: yeah, but so what? In DX11 Nvidia still performs better, and I care about games now.

Not a bad retort, and it would make a lot more sense if AMD cards were in the gutter in DX11, but they are not (unless we're talking about AotS, where a game like that is unusable on AMD if only DX11 is available, or games like Project CARS, where they forced PhysX into a DX11 code path). If I had the choice between a card whose performance in current DX11 games is ~10% worse on average but 10-15% better in the DX12 games coming up fast, I'd choose the decent-performance-today, better-performance-tomorrow card.

But perhaps that is because I don't treat graphics cards like a cheap stable of slatterns to be rotated through like I'm Charlie Sheen.

I've had my 290 since around late 2013 / early 2014. I'm typically on a two-year GPU cadence, but I felt no pressing need to upgrade and wanted to wait for the die shrink, which will bring the single biggest generation-over-generation performance gains we've seen in a long while. For me, and people like me, it is better to err towards the cards with longer legs.

If you are the kind of person who rotates high-end cards/SLI/CrossFire with each new infinitesimal iteration, then longevity and performance over time are kind of meaningless.

It's like a guy who buys a BMW M3 this year and buys a new one next year because they upped the horsepower by 20.

You could have made a better argument if we were still well within the DX11 gaming phase, because getting a 980 over a 290 would give tangible performance gains. But we are NOT well within the DX11 gaming era. The games that push performance will be under enormous pressure to shift towards the latest APIs like DX12 and Vulkan, and in that world, the time when it was easy to brush off AMD's offerings looks to be closing.

The 390 is based on a card that was released in EFFING 2013, for god's sake, and it is running neck and neck in DX12 with a 980 that Nvidia released a year later?!


Those "rebrands" when unshackled had far more staying power than we ever knew. The dark reign of shackled amd performance in dx11 is ending, and in the words of Saruman:

A New Power is Rising.

https://www.youtube.com/watch?v=TQq4LjSF2rc#t=42s
 
Compute shader simulation and culling is the cost of our foliage physics sim, collision and also per-instance culling, all of which run on the GPU. Again, this work runs asynchronously on supporting hardware.

Do we know what this really means? It's kind of unclear what is implied. If it simply disables async for NVidia cards, that is great news for everyone, clearly. I am all for a game taking advantage of a GPU's features as long as it doesn't cripple other GPUs.
 
Do we know what this really means? It's kind of unclear what is implied. If it simply disables async for NVidia cards, that is great news for everyone, clearly. I am all for a game taking advantage of a GPU's features as long as it doesn't cripple other GPUs.

Except async code was never supposed to cripple anything; it's a legitimate feature of DX12. This is a problem Nvidia created by themselves, for themselves, by lying.

It would be nice to confirm whether async code is only active on the Xbox version and artificially disabled on the PC version, or not.
 
Except async code was never supposed to cripple anything; it's a legitimate feature of DX12. This is a problem Nvidia created by themselves, for themselves, by lying.

It would be nice to confirm whether async code is only active on the Xbox version and artificially disabled on the PC version, or not.

Glancing at the engine source, it looks like it's not implemented on PC yet.
 
Except async code was never supposed to cripple anything; it's a legitimate feature of DX12. This is a problem Nvidia created by themselves, for themselves, by lying.

It would be nice to confirm whether async code is only active on the Xbox version and artificially disabled on the PC version, or not.

I am not a part of the rest of the thread's fanboyism. I am not suggesting that AMD or Microsoft is scheming against NVidia. I am simply pointing out that the simple solution was to support it on hardware that supports it, and to disable it otherwise. Kind of like Tomb Raider and TressFX: if you didn't have the option to turn it off, there wouldn't be much point for NVidia owners to buy the game. Some developers want to put features in their engine and release benchmarks showing how a feature cripples a certain card. That is interesting and useful information, but what happens when that feature is bypassed? I wonder what amazing graphics features the latest versions of WindowBlinds and Start10 will support?
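
For what it's worth, the "support it where it's supported, disable it otherwise" part is mundane at the API level. Below is a minimal sketch (my own assumptions, not Lionhead's or Unreal's actual code) of how an engine could route a pass like the foliage sim/culling either to a dedicated D3D12 compute queue or onto the graphics queue. Note that D3D12 has no capability bit that says async compute will actually overlap with graphics, so the useAsyncCompute toggle here stands in for an engine/config decision (e.g. a per-vendor setting).

```cpp
// Sketch: conditionally using a dedicated compute queue for "async compute".
// Every D3D12 device can create a COMPUTE queue; whether work on it actually
// overlaps with graphics is a hardware/driver property, so the flag below is
// an assumed engine-side toggle, not an API query.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

struct GpuQueues {
    ComPtr<ID3D12CommandQueue> graphics;
    ComPtr<ID3D12CommandQueue> compute;   // left null when async compute is disabled
    ComPtr<ID3D12Fence>        fence;     // lets the graphics queue wait on compute results
    UINT64                     fenceValue = 0;
};

bool CreateQueues(ID3D12Device* device, bool useAsyncCompute, GpuQueues& out)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&out.graphics))))
        return false;

    if (useAsyncCompute) {
        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&out.compute))))
            return false;
    }
    return SUCCEEDED(device->CreateFence(0, D3D12_FENCE_FLAG_NONE,
                                         IID_PPV_ARGS(&out.fence)));
}

// Submit a recorded compute pass (e.g. foliage physics + per-instance culling).
// The command list must be recorded with a type matching the queue it targets:
// COMPUTE when async compute is on, DIRECT when it will run on the graphics queue.
void SubmitComputePass(GpuQueues& q, ID3D12CommandList* computePass)
{
    if (q.compute) {
        // Async path: run on the compute queue, and make the graphics queue wait
        // (GPU-side) on the fence before it consumes the results.
        q.compute->ExecuteCommandLists(1, &computePass);
        q.compute->Signal(q.fence.Get(), ++q.fenceValue);
        q.graphics->Wait(q.fence.Get(), q.fenceValue);
    } else {
        // Fallback path: the same work, simply serialized on the graphics queue.
        q.graphics->ExecuteCommandLists(1, &computePass);
    }
}
```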
 
A New Power is Rising.

You mean AMD? Doubtful. I predict Nvidia will continue to outsell them. Irregardless of these benches. I honestly don't see this changing much of anything.
 
You mean AMD? Doubtful. I predict Nvidia will continue to outsell them. Irregardless of these benches. I honestly don't see this changing much of anything.

Well, Saruman did fail as well. But hope springs eternal. If there is any justice in this world, the 390 should start to cut into sales of the 970, STILL the number-one-selling video card on Amazon. I want cards to win on the merits, and the 970 no longer deserves to sit at that perch.
 
Well, Saruman did fail as well. But hope springs eternal. If there is any justice in this world, the 390 should start to cut into sales of the 970, STILL the number-one-selling video card on Amazon. I want cards to win on the merits, and the 970 no longer deserves to sit at that perch.

Name brand sells, unfortunately. The closest-selling equivalent AMD card on Amazon (aside from adapters) is the MSI 390, and it's at spot 20. It also appears Nvidia is gearing up to dump out Pascal.

Most consumers aren't even knowledgeable of these benches. It's mostly fodder for diehard fans to argue among themselves.
 
The problem is that some enthusiasts place way too much importance on what are really rather marginal raw performance differences. Even more so in terms of how this applies to the broader market, which is less discerning.

From a practical standpoint, if you were to blind-test graphics cards within 1-2 segments of each other, the vast majority of people would not be able to tell the difference. What also compounds this is that products that close together will effectively trade performance wins as well (some games run better on one or the other).
 
The problem is that some enthusiasts place way too much importance on what are really rather marginal raw performance differences. Even more so in terms of how this applies to the broader market, which is less discerning.

From a practical standpoint, if you were to blind-test graphics cards within 1-2 segments of each other, the vast majority of people would not be able to tell the difference. What also compounds this is that products that close together will effectively trade performance wins as well (some games run better on one or the other).

You're right.
 
How is he right? If we are talking about solutions at the same price, why would you want to settle for less?
 
How is he right? If we are talking about solutions at the same price, why would you want to settle for less?

Well, that's where [H]-type performance reviews come in. Do you actually get a better experience? If we're talking about +/-10%, probably not. That's what he's talking about.

Then you can consider things like past history, location of production, etc.

Still waiting to see that quote AMD made about destroying a Titan X, cool graph tho.

Destroying, failure, etc. are all in the eye of the beholder; some would look at that graph and say it's barely faster. Of course, marketing never shows their product in its best light... oh wait, that's what they get paid to do.

The OP you responded to said "AMD and/or their shills". At some point this was said. I'm sure I could get a quote from AMD Roy if someone hadn't deleted his entire account.
 
You mean AMD? Doubtful. I predict Nvidia will continue to outsell them. Irregardless of these benches. I honestly don't see this changing much of anything.

I would feel bad for anyone who would buy a card based on benchmarks for a game that's not even out. A lot can change by the time the game is released and the drivers are finished. DX12 games will probably not be the standard until 2020.
 
I would feel bad for anyone who would buy a card based on benchmarks for a game that's not even out. A lot can change by the time the game is released and the drivers are finished. DX12 games will probably not be the standard until 2020.

A good continuation of the funny tech predictions of the past. Can we repost this as a reply endlessly on all your posts from now on?
 
A good continuation of the funny tech predictions of the past. Can we repost this as a reply endlessly on all your posts from now on?

I'm sure some "temperature/power draw/efficiency doesn't matter, this is [H]" lines were thrown around a few years ago.
 
That isn't a word.


No, it is a word. Just a word that contains a double negative. When used (mostly incorrectly) it irks the obsessively grammar-conscious. I have a bad habit of using it sometimes. I think I like the way it sounds for some reason.

There you go.
 
I would feel bad for anyone who would buy a card based on benchmarks for a game that's not even out. A lot can change by the time the game is released and the drivers are finished. DX12 games will probably not be the standard until 2020.

If anyone knows Prime IRL, you're going to need to console him hard as the DX12 titles start raining from the sky next year.
 
NVIDIA will do fine on DX12. The idea that their hardware is just a nicely polished turd and we're only now seeing it, and that all nvidia has is drooling morons for engineers who don't know how to design GPUs might make for great forum talk, but that's just not reality.
 
NVIDIA will do fine on DX12. The idea that their hardware is just a nicely polished turd and we're only now seeing it, and that all nvidia has is drooling morons for engineers who don't know how to design GPUs might make for great forum talk, but that's just not reality.

By the time dx12 games become prevalent, I imagine nvidia will have new tech on the market.
 
Still waiting to see that quote AMD made about destroying a Titan X, cool graph tho.

Destroying, failure, etc. are all in the eye of the beholder; some would look at that graph and say it's barely faster. Of course, marketing never shows their product in its best light... oh wait, that's what they get paid to do.

Read what I wrote: I said AMD and/or their shills. Lest we forget this amusing pic their shills/fanboys made:

[fan-made image]


Ah here's something else:

Matt Skynner, CVP and General Manager of AMD's GPU and APU products, touted the company's upcoming flagship Fury X as the world's fastest GPU. He proclaimed today at the company's Computex press conference that "HBM enables us to build the fastest GPU in the world" when referring to Fiji, reports Hardwareluxx.

Read more: http://wccftech.com/amd-hbm-fury-x-fastest-world/#ixzz3msH5MI14

Fastest GPU in the world indeed...just not yet.
 
No, it is a word. Just a word that contains a double negative. When used (mostly incorrectly) it irks the obsessively grammar-conscious. I have a bad habit of using it sometimes. I think I like the way it sounds for some reason.

There you go.


English has pretty flexible grammatical rules, but still, adding multiple prefixes doesn't make an acceptable word. If this were German, where you can string a couple of things together to make a new complex word, then sure.

Thereyougothisisn'tawordeitherbutaccordingtoyouitcouldbeawordeventhoughweallknowthatitisn'tthisisjustareductioadabsurdumofyouritisaworddefense.
 
NVIDIA will do fine on DX12. The idea that their hardware is just a nicely polished turd and we're only now seeing it, and that all nvidia has is drooling morons for engineers who don't know how to design GPUs might make for great forum talk, but that's just not reality.

Shall we take bets now on which team will top the DeusEx:MD DX12 charts? Any takers? All in the spirit of friendly, heightened civility and basic, common respect and such in this new DX12 era to prove things are different.
 
English has pretty flexible grammatical rules, but still, adding multiple prefixes doesn't make an acceptable word. If this were German, where you can string a couple of things together to make a new complex word, then sure.

Thereyougothisisn'tawordeitherbutaccordingtoyouitcouldbeawordeventhoughweallknowthatitisn'tthisisjustareductioadabsurdumofyouritisaworddefense.

But it is a word, "regardless" of how ridiculous it may sound. Here you go:

http://i.word.com/idictionary/irregardless

Who cares, though? Perhaps you'll send some death threats Merriam-Webster's way. I'm sure they'll be intimidated. Perhaps they'll be amazed by your grammar skills, which are beyond reproach, yes? How much effort did you spend making that erroneous/rude response?

I could care less about the word "irregardless". :cool:

Stop! Grammar time! If you don't rectify this error, I'll be most perturbed.
 