AMD R9 390X, Nvidia GTX 980 Ti and Titan X Benchmarks Leaked

The results look interesting, though I find it a bit odd that at 2560x1600 the top-dog cards don't pull ahead much, but at 4K they really shine.
 
The numbers are percentages, not fps; 290X = 100%.

Fiji XT was 149.2% @ 4K, so ~50% faster than the 290X.
Fiji XT was 139.9% @ 1600p, so ~40% faster than the 290X.
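
Since the leak indexes everything to a 290X at 100%, "percent faster" is just the score minus 100; a quick sanity check in Python on the two figures above:

```python
# Leaked scores are relative to a 290X = 100%, so "% faster" is score - 100.
for label, score in [("4K", 149.2), ("1600p", 139.9)]:
    print(f"Fiji XT @ {label}: {score - 100:.1f}% faster than a 290X")
```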

If the $549 price and slightly-better-than-$1K-Titan-X performance are correct, AMD looks to have a winner here.

Price gouging will likely be insane if they're even slightly supply constrained at launch though.
 
Coffee's bad for your blood pressure. :)

If, as I expect, you end up with one or more cards of whichever flavour (I'm assuming Titan X), please benchmark them in use cases with >4 GB of VRAM in use, >6 GB in use, and, if possible, >8 GB in use. Shadow of Mordor or a fully up-textured Skyrim on 3x 4K monitors should satisfy the last two. And please also try DX12.
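
For anyone actually running those VRAM tests on Nvidia hardware, here's a rough Python sketch for logging usage during a session; it assumes nvidia-smi is installed and on the PATH (GPU-Z or Afterburner logging on Windows would do the same job):

```python
# Rough VRAM logger for the tests above; assumes an Nvidia card and that
# nvidia-smi is available on the PATH. Polls every 5 seconds until stopped.
import subprocess
import time

def vram_usage_mb(gpu_index=0):
    out = subprocess.check_output([
        "nvidia-smi", "-i", str(gpu_index),
        "--query-gpu=memory.used,memory.total",
        "--format=csv,noheader,nounits",
    ], text=True)
    used, total = (int(v.strip()) for v in out.split(","))
    return used, total

if __name__ == "__main__":
    while True:
        used, total = vram_usage_mb()
        print(f"{used} / {total} MiB VRAM in use")
        time.sleep(5)
```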

I just read coffee is good for your heart. Do I believe it is true? Doesn't matter. I like coffee so it's true.

I still can't believe people take these "leaks" as true...
 

Well, considering the Titan X is probably in reviewers' hands, I can believe some of the Titan X benchmarks.

OCUK has said they have the Titan X, they just can't sell them yet.
 
One of our local portals was teasing a big review for next week in their article about the 960 in SLI, so it will probably be the Titan.
 
Perhaps. But I still need to see how it OC's and if it's voltage unlocked.

I was also serious about coffee being good for your heart. I wasn't trying to make a weird analogy.

http://www.hsph.harvard.edu/nutritionsource/coffee/

Drink on Brent!

LOL. Yeah, I would "think" Nvidia would keep the Titan X voltage available to play with, but I honestly don't see them doing it.

I mean, hell, they got pissed at MSI for having a BIOS with everything unlocked...
 
I'm interested to see that the 980Ti is expected to have 6GB VRAM. As Baasha showed, that's not enough.
 
http://wccftech.com/amd-r9-390x-8-gb-hbm/

Something to think about: if AMD really does release an 8GB 390X (which I highly doubt), that would basically double the memory bandwidth, which would make these benchmarks obsolete, since these are for the 390X with 4GB of memory.

So is it possible AMD will gain even more performance? Is that the right way of thinking about memory bandwidth with HBM?

They aren't making an 8-chip-high stack. It will still be 4 chips high, just 4x2 instead of 4x1, so the memory bandwidth would be the same.
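
For what it's worth, here's the back-of-the-envelope math using the commonly cited first-gen HBM figures (4 stacks, 1024-bit per stack, ~1 Gbps per pin; treat the numbers as approximate). Capacity per stack doesn't appear in the formula at all, which is why denser stacks don't change bandwidth:

```python
# Rough HBM bandwidth math with commonly quoted first-gen figures (approximate).
# Bandwidth depends on stack count, bus width and pin speed -- not on how much
# capacity each stack holds.
def hbm_bandwidth_gbs(stacks, bus_width_bits_per_stack, gbps_per_pin):
    return stacks * bus_width_bits_per_stack * gbps_per_pin / 8

bw_4gb_card = hbm_bandwidth_gbs(4, 1024, 1.0)  # 4 x 1GB stacks
bw_8gb_card = hbm_bandwidth_gbs(4, 1024, 1.0)  # 4 x 2GB (denser) stacks, same interface
print(bw_4gb_card, bw_8gb_card)                # both ~512 GB/s
```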
 
/\/\/\/\/\/\/\/\ HBM2 will be able to give you 8GB of VRAM in the same space. The slides from the Hynix presentation have been floating around for a while now. Should be available before the end of the year. Whether AMD uses it or not is a different story. There has been nothing about it in the rumor mill.

I think for at least the next couple of years you won't need more than 4GB of VRAM, even at 4K. There are just going to be too few games that show any benefit, and the benefit you do see will only be visible in benchmarks, not in-game while you're playing.

http://www.tweaktown.com/tweakipedia/68/amd-radeon-r9-290x-4gb-vs-8gb-4k-maxed-settings/index.html
 

I will counter the TweakTown article, which has no VRAM usage data, with the Dying Light [H] review. :cool:

http://www.hardocp.com/article/2015/03/10/dying_light_video_card_performance_review/10

"Our recommendations, based on our experiences, lead us to recommend these VRAM capacities for the best enjoyment of Dying Light.



1080p - 3GB VRAM Video Cards

1440p - 4GB VRAM Video Cards

4K - 6GB+ VRAM Video Cards"

The article you linked mentioned the 4GB cards didn't do well in Shadow of Mordor.
 
Sounds like someone is just mad Nvidia did not come out on top with a card that will probably cost twice the price.

If it makes you feel better, the Nvidia cards will probably overclock well with the better thermal/power headroom, so you always have that to fall back on.

I'm mad because I didn't put any stock in made-up benchmarks? Yeah, that makes sense. I should go leak some made-up graphs on Chiphell as a social experiment.
 
I hope these go viral, AMD could use some good press... Even if it's a batch of overly-fake Chinese benchmarks.
When the news breaks in 24 hours that these are fabricated numbers, nobody will spread that news. Then the 390X launches at half the speed shown here and everybody gets mad at AMD for lying to them through leaked benchmarks or something.
 

I was reading some of the comments on wccftech and I couldn't stop laughing at how excited people got at unsubstantiated benchmarks, especially the AMD fans. It's like a starving dog that starts salivating and wagging its tail at the sight of some scraps.

 
390X looks sweet.
Titan X is a bloody joke anyways at the price.

It's not a joke when you get 12GB of VRAM and SLI profiles before new games get released, etc.

The 390X has only 4GB of VRAM, do you know that? Did you know that AMD never releases Crossfire support for most games until months later?

It's not fun waiting for new AMD drivers, like I did with the R9 200 series before I pulled the cards and threw them out of my 4th-floor apartment.

AMD hasn't released a new driver for, what, 3-4 months now? People are still waiting for Crossfire support for many new games like FC4, etc...

If you are only looking at the price for your next purchase, I must say you are stupid; don't take it personally.

I'd rather pay more money and go with Nvidia for better game support than wait months or years for AMD drivers.

Right now I have two 980s in SLI and everything runs perfectly without a single issue; getting new drivers days before new games come out is a big deal to me.

For those who are looking to save some money by going with AMD, all I want to say is: GOOD LUCK!
 
If we're talking about the difference between $600 and $1300 then I believe the downsides are justified... And if you're only running 1 card, crossfire problems are a non-issue.

Benchmarks could be considered a global average of a card's performance, which would include games where AMD has poor optimization. So that would mean, despite AMD's bad performance, the 390X would still be as fast or faster than the Titan X averaged across all games.
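
If you want to see what "averaged across all games" looks like in practice, here's a tiny sketch using a geometric mean over made-up per-game scores (290X = 100); the geometric mean is a common way reviewers roll up relative results:

```python
# Toy aggregation of relative scores (290X = 100) across several games.
# The per-game numbers here are invented purely for illustration.
from math import prod

per_game_scores = [152, 138, 160, 144, 149]          # hypothetical "% of 290X"
overall = prod(per_game_scores) ** (1 / len(per_game_scores))
print(f"Overall: {overall:.1f}% of a 290X (geometric mean)")
```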
 
I will counter the TweakTown article, which has no VRAM usage data, with the Dying Light [H] review. :cool: The article you linked mentioned the 4GB cards didn't do well in Shadow of Mordor.
Shadow of Mordor is an inefficient console port that only has issues with 4GB if you max it out on Ultra at 4K. Dying Light is just one game, and it's GPU-intensive enough that even at 1440p a single 290X or GTX 980 can't maintain a 60fps average, so the VRAM issue with that game is largely moot.

4GB of VRAM will be fine for a long time. By the time it isn't anymore, it'll be time to upgrade anyway.

NH sources seem to point to 8GB for the 390X.
Yeah, there is a Fudzilla rumor from a couple of days ago too, which is largely a rehash of the Hynix presentation about HBM2. The NH article you linked seems to have the same information, if Google Translate is correct.

For a June launch the timing doesn't pan out for HBM2 on the 390X. If they're doing 4GB of HBM + 4GB of GDDR5 it might be doable, but there are no rumors of that, just forum talk and theorycrafting. An 8GB HBM2 390X could be doable very late in 2015.
 
I was reading some of the comments on wccftech and I couldn't stop laughing at how excited people got at unsubstantiated benchmarks, especially the AMD fans. It's like a starving dog that starts salivating and wagging its tail at the sight of some scraps.


They are the crowd who still use CRT monitors and 10-year-old GPUs, and they also wear $10 shoes.
 
If we're talking about the difference between $600 and $1300 then I believe the downsides are justified... And if you're only running 1 card, crossfire problems are a non-issue.

Benchmarks could be considered a global average of a card's performance, which would include games where AMD has poor optimization. So that would mean, despite AMD's bad performance, the 390X would still be as fast or faster than the Titan X averaged across all games.

First of all, the Titan X won't cost $1300, lol; it is not a dual-GPU card.

Second, even when I was forced to use a single 290X I still had issues with BF4 and many other games: fps would just drop by 40 for no reason, low GPU usage, lock-ups, memory leaks, etc.

I am not arguing that AMD's perf/price isn't great, it's better than great, but as I learned in the past with 290X Crossfire, it comes at a big cost.

It felt like I'd been ripped off and then kicked in the groin. I'd rather pay more and feel I got my money's worth, and I do feel that with my two 980s in SLI.
 
Shadow of Mordor is an inefficient console port that only has issues with 4GB if you max it out on Ultra at 4K. Dying Light is just one game, and it's GPU-intensive enough that even at 1440p a single 290X or GTX 980 can't maintain a 60fps average, so the VRAM issue with that game is largely moot.

4GB of VRAM will be fine for a long time. By the time it isn't anymore, it'll be time to upgrade anyway.

4GB of VRAM might be OK for a single card, but you're cutting it close. For multi-GPU setups with cards this powerful, it's a no-buy for me if it only has 4GB.

I couldn't care less if something is a console port, coded wrong, etc. If a card can't run it properly, it is what it is. The card isn't good enough.
 
With everyone arguing about 4GB not being enough: it has been confirmed that with DX12, when you SLI/Crossfire cards, the memory pools as one.

So if you Crossfire two 390Xs in DX12, you'd have 8GB of memory overall.

Same with the Titan X: two of them would be 24GB of memory in DX12.

Not sure if this works with just DX12 titles, or with Windows 10 in general.

Anyway, just something to think about.
 
By the time DX12 matters, people will be buying the R9 490X and GTX 1080.

This.

I am slightly skeptical anyway. How do you get data from GPU 2 (let's say 2GB of it) if you need it on GPU 1? The bandwidth between the cards blows compared to the bandwidth between the VRAM and the GPU core on the same card.
 
With everyone arguing about 4GB not being enough: it has been confirmed that with DX12, when you SLI/Crossfire cards, the memory pools as one.

So if you Crossfire two 390Xs in DX12, you'd have 8GB of memory overall.

Same with the Titan X: two of them would be 24GB of memory in DX12.

Not sure if this works with just DX12 titles, or with Windows 10 in general.

Anyway, just something to think about.

Someone has to write code specifically to use such a feature. Considering how shitty modern games are with SLI/CF support at release, it sounds extremely optimistic.
 
For multi-GPU setups with cards this powerful, it's a no-buy for me if it only has 4GB.
Multi-GPU from either vendor is pretty problematic, though. Even if the VRAM is there, the drivers can cause performance or stuttering issues. Very irritating, to say the least.

I couldn't care less if something is a console port, coded wrong, etc. If a card can't run it properly, it is what it is. The card isn't good enough.
There is always some game out there that brings the hardware to its knees for whatever reason but isn't necessarily a good indicator of the shape of things to come or an effective benchmark. Remember Crysis?

It has been confirmed that with DX12, when you SLI/Crossfire cards, the memory pools as one.
The developer has to code the game properly with this in mind, though. They can't just recompile with DX12 flagged, and AMD/nV can't just write a driver to do it for them either.

I hope the developers do this but I'm not too optimistic.
 
How do you get data from GPU 2 (let's say 2GB of it) if you need it on GPU 1? The bandwidth between the cards blows compared to the bandwidth between the VRAM and the GPU core on the same card.
DX12 makes memory and buffer management explicit, versus DX11, which was a 'black box' designed to hide all of that in order to make programming for the GPU easier.

A properly written DX12 game would 'know' what asset would have to go where beforehand, so it would just split everything up appropriately between two or more GPUs while the game was loading in order to minimize bandwidth and timing issues.
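
To make that concrete, here's a toy Python sketch (not real D3D12 code; the asset names and sizes are made up) of the kind of up-front split an explicit multi-adapter engine could plan at load time:

```python
# Toy illustration of planning an asset split across two GPUs before loading.
# This is not D3D12 code; names and sizes are invented for the example.
assets_mb = {
    "terrain_textures": 1400,
    "character_textures": 900,
    "geometry_buffers": 800,
    "shadow_maps": 600,
    "post_fx_targets": 300,
}

gpu_load = {"gpu0": 0, "gpu1": 0}   # MB planned per GPU
placement = {}

# Greedy split: largest assets first, each onto the least-loaded GPU, so
# neither card's local VRAM (or upload bandwidth) gets overcommitted.
for name, size in sorted(assets_mb.items(), key=lambda kv: kv[1], reverse=True):
    target = min(gpu_load, key=gpu_load.get)
    placement[name] = target
    gpu_load[target] += size

print(placement)
print(gpu_load)
```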

Some good info. about this:
http://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019

Many years ago, I briefly worked at NVIDIA on the DirectX driver team (internship). This is Vista era, when a lot of people were busy with the DX10 transition, the hardware transition, and the OS/driver model transition. My job was to get games that were broken on Vista, dismantle them from the driver level, and figure out why they were broken. While I am not at all an expert on driver matters (and actually sucked at my job, to be honest), I did learn a lot about what games look like from the perspective of a driver and kernel.



The first lesson is: Nearly every game ships broken. We're talking major AAA titles from vendors who are everyday names in the industry. In some cases, we're talking about blatant violations of API rules - one D3D9 game never even called BeginFrame/EndFrame. Some are mistakes or oversights - one shipped bad shaders that heavily impacted performance on NV drivers. These things were day to day occurrences that went into a bug tracker. Then somebody would go in, find out what the game screwed up, and patch the driver to deal with it. There are lots of optional patches already in the driver that are simply toggled on or off as per-game settings, and then hacks that are more specific to games - up to and including total replacement of the shipping shaders with custom versions by the driver team. Ever wondered why nearly every major game release is accompanied by a matching driver release from AMD and/or NVIDIA? There you go.



The second lesson: The driver is gigantic. Think 1-2 million lines of code dealing with the hardware abstraction layers, plus another million per API supported. The backing function for Clear in D3D 9 was close to a thousand lines of just logic dealing with how exactly to respond to the command. It'd then call out to the correct function to actually modify the buffer in question. The level of complexity internally is enormous and winding, and even inside the driver code it can be tricky to work out how exactly you get to the fast-path behaviors. Additionally the APIs don't do a great job of matching the hardware, which means that even in the best cases the driver is covering up for a LOT of things you don't know about. There are many, many shadow operations and shadow copies of things down there.



The third lesson: It's unthreadable. The IHVs sat down starting from maybe circa 2005, and built tons of multithreading into the driver internally. They had some of the best kernel/driver engineers in the world to do it, and literally thousands of full blown real world test cases. They squeezed that system dry, and within the existing drivers and APIs it is impossible to get more than trivial gains out of any application side multithreading. If Futuremark can only get 5% in a trivial test case, the rest of us have no chance.



The fourth lesson: Multi GPU (SLI/CrossfireX) is fucking complicated. You cannot begin to conceive of the number of failure cases that are involved until you see them in person. I suspect that more than half of the total software effort within the IHVs is dedicated strictly to making multi-GPU setups work with existing games. (And I don't even know what the hardware side looks like.) If you've ever tried to independently build an app that uses multi GPU - especially if, god help you, you tried to do it in OpenGL - you may have discovered this insane rabbit hole. There is ONE fast path, and it's the narrowest path of all. Take lessons 1 and 2, and magnify them enormously.



Deep breath.



Ultimately, the new APIs are designed to cure all four of these problems.

* Why are games broken? Because the APIs are complex, and validation varies from decent (D3D 11) to poor (D3D 9) to catastrophic (OpenGL). There are lots of ways to hit slow paths without knowing anything has gone awry, and often the driver writers already know what mistakes you're going to make and are dynamically patching in workarounds for the common cases.

* Maintaining the drivers with the current wide surface area is tricky. Although AMD and NV have the resources to do it, the smaller IHVs (Intel, PowerVR, Qualcomm, etc) simply cannot keep up with the necessary investment. More importantly, explaining to devs the correct way to write their render pipelines has become borderline impossible. There's too many failure cases. it's been understood for quite a few years now that you cannot max out the performance of any given GPU without having someone from NVIDIA or AMD physically grab your game source code, load it on a dev driver, and do a hands-on analysis. These are the vanishingly few people who have actually seen the source to a game, the driver it's running on, and the Windows kernel it's running on, and the full specs for the hardware. Nobody else has that kind of access or engineering ability.

* Threading is just a catastrophe and is being rethought from the ground up. This requires a lot of the abstractions to be stripped away or retooled, because the old ones required too much driver intervention to be properly threadable in the first place.

* Multi-GPU is becoming explicit. For the last ten years, it has been AMD and NV's goal to make multi-GPU setups completely transparent to everybody, and it's become clear that for some subset of developers, this is just making our jobs harder. The driver has to apply imperfect heuristics to guess what the game is doing, and the game in turn has to do peculiar things in order to trigger the right heuristics. Again, for the big games somebody sits down and matches the two manually.

Part of the goal is simply to stop hiding what's actually going on in the software from game programmers. Debugging drivers has never been possible for us, which meant a lot of poking and prodding and experimenting to figure out exactly what it is that is making the render pipeline of a game slow. The IHVs certainly weren't willing to disclose these things publicly either, as they were considered critical to competitive advantage. (Sure they are guys. Sure they are.) So the game is guessing what the driver is doing, the driver is guessing what the game is doing, and the whole mess could be avoided if the drivers just wouldn't work so hard trying to protect us.

So why didn't we do this years ago? Well, there are a lot of politics involved (cough Longs Peak) and some hardware aspects but ultimately what it comes down to is the new models are hard to code for. Microsoft and ARB never wanted to subject us to manually compiling shaders against the correct render states, setting the whole thing invariant, configuring heaps and tables, etc. Segfaulting a GPU isn't a fun experience. You can't trap that in a (user space) debugger. So ... the subtext that a lot of people aren't calling out explicitly is that this round of new APIs has been done in cooperation with the big engines. The Mantle spec is effectively written by Johan Andersson at DICE, and the Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a couple guys at Valve into the fold.

Three out of those four just made their engines public and free with minimal backend financial obligation.

Now there's nothing wrong with any of that, obviously, and I don't think it's even the big motivating raison d'etre of the new APIs. But there's a very real message that if these APIs are too challenging to work with directly, well the guys who designed the API also happen to run very full featured engines requiring no financial commitments*. So I think that's served to considerably smooth the politics involved in rolling these difficult to work with APIs out to the market, encouraging organizations that would have been otherwise reticent to do so.

[Edit/update] I'm definitely not suggesting that the APIs have been made artificially difficult, by any means - the engineering work is solid in its own right. It's also become clear, since this post was originally written, that there's a commitment to continuing DX11 and OpenGL support for the near future. That also helped the decision to push these new systems out, I believe.

The last piece to the puzzle is that we ran out of new user-facing hardware features many years ago. Ignoring raw speed, what exactly is the user-visible or dev-visible difference between a GTX 480 and a GTX 980? A few limitations have been lifted (notably in compute) but essentially they're the same thing. MS, for all practical purposes, concluded that DX was a mature, stable technology that required only minor work and mostly disbanded the teams involved. Many of the revisions to GL have been little more than API repairs. (A GTX 480 runs full featured OpenGL 4.5, by the way.) So the reason we're seeing new APIs at all stems fundamentally from Andersson hassling the IHVs until AMD woke up, smelled competitive advantage, and started paying attention. That essentially took a three year lag time from when we got hardware to the point that compute could be directly integrated into the core of a render pipeline, which is considered normal today but was bluntly revolutionary at production scale in 2012. It's a lot of small things adding up to a sea change, with key people pushing on the right people for the right things.
 
Nvidia drivers and SLI support are just as bad as AMD and Crossfire right now. Also, Crossfire 290/290X is smoother than Nvidia SLI.

The thing is, we have no idea if this is the top AMD card or the cut-down version. We do know the Titan X is not cut down.

Anyway too much speculation, not enough [H]ard evidence.

Edit: How can you be upset with it using the same power as a 290X? Think about it: it's ~40% faster using the same power as the previous generation. That is impressive.
The power needs of the 290X (I have two) are nuts compared to the 980 (I have two).
The heat and noise from the 290s were unbearable; I had to watercool them just to have some peace and quiet. I don't care how good the 390X will be, the noise and heat won't be worth it.
Not to mention AMD hasn't put out a driver since December.

FreeSync driver will be out very soon, as well as displays.

Please.
AMD hasn't released a driver or hotfix since the Omega in December.
Maybe you have inside information, but AMD's support has fallen off the radar.
I don't care how good or bad Nvidia has been lately; at least they are still putting out releases.

On the other side of the argument, nearly every game released lately has been FUBAR on release day, and that isn't the driver software's fault.
 
Freesync driver is Mar 19th.
http://www.guru3d.com/news-story/single-gpu-amd-freesync-driver-march-19th.html

On the other side of the argument, nearly every game released lately has been FUBAR on release day, and that isn't the driver software's fault.
Every game with the "Nvidia" logo attached to it.
According to AMD there is no Far Cry 4 crossfire support since they're waiting on Ubisoft. Just one example...

I'm curious how anyone expects AMD to succeed in the PC gaming industry that is now heavily owned by Nvidia.
They'll just keep buying out game developers/publishers until eventually AMD becomes obsolete... Then people will come on these forums and praise Nvidia for doing so, and shame AMD for failing.
 
I'm curious how anyone expects AMD to succeed in the PC gaming industry that is now heavily owned by Nvidia.
They'll just keep buying out game developers/publishers until eventually AMD becomes obsolete... Then people will come on these forums and praise Nvidia for doing so, and shame AMD for failing.

Wouldn't that violate anti-trust laws?
 
Hard to prove in a court of law.

Also, enforcement of US antitrust laws is a joke these days.
 
Freesync driver is Mar 19th.
http://www.guru3d.com/news-story/single-gpu-amd-freesync-driver-march-19th.html


Every game with the "Nvidia" logo attached to it.
According to AMD there is no Far Cry 4 crossfire support since they're waiting on Ubisoft. Just one example...

I'm curious how anyone expects AMD to succeed in the PC gaming industry that is now heavily owned by Nvidia.
They'll just keep buying out game developers/publishers until eventually AMD becomes obsolete... Then people will come on these forums and praise Nvidia for doing so, and shame AMD for failing.

And this is why I can't go AMD regardless of how good their new cards are. The games will run better on Nvidia, and NV will keep releasing drivers for their TWIMTBP games.


It is shitty, but it is the reality of the AAA PC gaming segment. I mean, just look at the last six months of big game releases; they have all played like ass on AMD. I really wish Samsung would just buy them out and give them the capital to really compete with NV & Intel.
 
And this is why I can't go AMD regardless of how good their new cards are. The games will run better on Nvidia, and NV will keep releasing drivers for their TWIMTBP games.
This statement makes no sense. What makes a video card "good" is its performance relative to its price, both measured against the competition.

If one 390X matches the Titan X across 19 games, and you can buy two 390Xs for the price of one Titan X, why would you buy the Titan X for any reason other than VRAM limitations? Now you're paying nearly twice as much just to dodge Crossfire issues, giving up what is essentially 'free' performance, and to bank on whatever special optimizations Nvidia floats in future games beyond the ones benchmarked. Presumably some of the games tested in these benchmarks were Far Cry 4, Dying Light, Unity, etc., which favor Nvidia hardware and still lost to (or tied with) AMD.

The games might "run better" on Nvidia hardware, but it doesn't really matter if AMD still offers more performance for the price. So it all evens out, unless you happen to have a bottomless wallet.
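
Put in rough numbers, using the rumored $549 price, the "$1K" Titan X figure from earlier in the thread, and the premise above that single-card performance is about equal (all of which is unconfirmed):

```python
# Back-of-the-envelope value math for the argument above. Prices are the
# rumored $549 (390X) and ~$999 (Titan X); performance parity is assumed
# from the leak, so treat everything here as speculative.
r9_390x_price = 549
titan_x_price = 999
relative_perf = 1.0   # premise: one 390X ~= one Titan X

value_ratio = (relative_perf / r9_390x_price) / (relative_perf / titan_x_price)
print(f"390X: ~{value_ratio:.2f}x the performance per dollar")
print(f"Two 390Xs: ${2 * r9_390x_price} vs one Titan X: ${titan_x_price}")
```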
 
Well, I will always buy Nvidia. They have always worked for me all these years. The same cannot be said for ATI, in my experience.
 
This statement makes no sense. What makes a video card "good" is its performance relative to its price, both measured against the competition.

If one 390X matches the Titan X across 19 games, and you can buy two 390Xs for the price of one Titan X, why would you buy the Titan X for any reason other than VRAM limitations? Now you're paying nearly twice as much just to dodge Crossfire issues, giving up what is essentially 'free' performance, and to bank on whatever special optimizations Nvidia floats in future games beyond the ones benchmarked. Presumably some of the games tested in these benchmarks were Far Cry 4, Dying Light, Unity, etc., which favor Nvidia hardware and still lost to (or tied with) AMD.

The games might "run better" on Nvidia hardware, but it doesn't really matter if AMD still offers more performance for the price. So it all evens out, unless you happen to have a bottomless wallet.

I would have just stopped at buying the same performance for half the price. :D
 