R9 290X goes toe-to-toe with GTX 980 Ti on DirectX 12

They look like CPU-limited benchmarks. Can't tell much about GPU performance there.
 
Hardly levels the playing field - we're talking about a game that is still in alpha and there are no other DX12 games out yet that I'm aware of.

When complete DX12 games come out then we can judge.
 
Drivers are still quite fresh; I wouldn't jump to conclusions on either side until the Win10 drivers are mature and stable.
 
Had a good feeling DirectX 12 would level the playing field.

Drawing those kinds of conclusions based on a single benchmark of a game that is still in pre-beta makes you look like one of those people who point to a single unusually hot or cold day as proof of climate change.
 
They look like CPU-limited benchmarks. Can't tell much about GPU performance there.

Umm, no, they are not CPU limited. The only time you can see CPU limitation is at 2160p with the 980 Ti.

[attached benchmark charts]
 
Yep, looking forward to getting even more performance out of my R9 290 at 4k resolutions. :D Almost feel sorry for those Nvidia owners who might have to upgrade to just keep up. :p Good to see AMD's hard work finally starting to pay off.
 
Drawing those kinds of conclusions based on a single benchmark of a game that is still in pre-beta makes you look like one of those people who point to a single unusually hot or cold day as proof of climate change.

Ummm, no. Mantle had already proven it, DX12 is just icing on the cake. :)
 
From what I'm reading elsewhere, the benchmark is being limited by draw calls, which may be an intentional decision by Stardock, given that this was originally a Mantle game. Perhaps they did it on purpose to emphasize AMD's overhead improvements from DX11 --> DX12.

In DX11 that would be a CPU bottleneck. Not sure about DX12.

Realistically speaking, if these kinds of draw call numbers are consistent across other DX12 games then Nvidia is in trouble.

[attached chart]
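The draw-call argument can be sketched as a toy model (all the numbers below are illustrative assumptions, not measurements from this benchmark): if the CPU pays a fixed overhead per draw call, the CPU-side submission time caps the frame rate no matter how fast the GPU is, and cutting that per-call overhead is exactly what DX12 promises.

```python
# Toy model of a draw-call-limited frame (illustrative numbers only).

def cpu_frame_time_ms(draw_calls, overhead_us_per_call):
    """CPU time spent submitting one frame, in milliseconds."""
    return draw_calls * overhead_us_per_call / 1000.0

def fps(draw_calls, overhead_us_per_call, gpu_frame_time_ms):
    """Frame rate is limited by whichever side (CPU or GPU) is slower."""
    cpu_ms = cpu_frame_time_ms(draw_calls, overhead_us_per_call)
    return 1000.0 / max(cpu_ms, gpu_frame_time_ms)

# Hypothetical scene: 20,000 draw calls, GPU renders the frame in 16 ms.
# Assume ~2 us/call of DX11-style overhead vs ~0.25 us/call DX12-style.
dx11_fps = fps(20_000, 2.0, 16.0)   # CPU needs 40 ms -> CPU-bound, 25 fps
dx12_fps = fps(20_000, 0.25, 16.0)  # CPU needs 5 ms  -> GPU-bound, 62.5 fps
print(f"DX11-style: {dx11_fps:.1f} fps, DX12-style: {dx12_fps:.1f} fps")
```

Under those made-up numbers the same GPU goes from CPU-bound to GPU-bound purely by lowering per-call overhead, which is why a draw-call-heavy benchmark flatters whichever vendor has the lower submission cost.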
 
Umm, no, they are not CPU limited. The only time you can see CPU limitation is at 2160p with the 980 Ti.

This is a benchmark that prides itself on having 600 gazillion draw calls. The CPU is probably the bottleneck.

Look at the blue bars for Nvidia and AMD. They are almost identical, which is a pretty clear indication that the GPU is not the bottleneck in the system. Some other piece of hardware common to both sets of runs is likely the bottleneck, since they're producing basically identical results.
 
This is a benchmark that prides itself on having 600 gazillion draw calls. The CPU is probably the bottleneck.

Look at the blue bars for Nvidia and AMD. They are almost identical, which is a pretty clear indication that the GPU is not the bottleneck in the system. Some other piece of hardware common to both sets of runs is likely the bottleneck, since they're producing basically identical results.

Jeez... look at the tops of the charts. One chart says six cores; the other chart says four cores, no HT. If you read the article, it spells it out for you: this was not CPU limited.
 
Jeez... look at the tops of the charts. One chart says six cores; the other chart says four cores, no HT. If you read the article, it spells it out for you: this was not CPU limited.

Haven't we already seen that DX12 doesn't scale beyond four cores?

I did read the article... at least as much as I can read something on Ars. I mean it is kind of scraping the bottom of the tech barrel.
 
A better way to test for a CPU bottleneck would be to adjust the CPU clocks lower / higher and see how it affects performance.
Changing the amount of threads is not a good method, especially beyond 4 cores.
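That clock-scaling method can be sketched like this (the runs and numbers are hypothetical, just to show the decision rule): if frame rate scales roughly in proportion to CPU clock, the CPU is the bottleneck; if it stays flat, something else is.

```python
# Sketch of the clock-scaling test described above (hypothetical data).
# Near-linear fps scaling with CPU clock implies a CPU bottleneck;
# a flat curve implies a GPU (or other) bottleneck.

def is_cpu_bound(results, threshold=0.8):
    """results: list of (cpu_clock_ghz, fps) pairs, sorted by clock.
    Returns True if the fps gain tracks the clock gain closely enough."""
    (c0, f0), (c1, f1) = results[0], results[-1]
    clock_gain = c1 / c0 - 1.0
    fps_gain = f1 / f0 - 1.0
    return fps_gain >= threshold * clock_gain

# Hypothetical runs: a +50% clock bump yields +48% fps -> CPU-bound.
print(is_cpu_bound([(3.0, 50.0), (4.5, 74.0)]))   # True
# A +50% clock bump yields only +4% fps -> not CPU-bound.
print(is_cpu_bound([(3.0, 60.0), (4.5, 62.5)]))   # False
```

The 0.8 threshold is an arbitrary choice for this sketch; real runs would compare several clock steps, not just two.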
 
A better way to test for a CPU bottleneck would be to adjust the CPU clocks lower / higher and see how it affects performance.
Changing the amount of threads is not a good method, especially beyond 4 cores.

Even though DX12 was designed with multithreading in mind...

The article clearly states at what clock the CPU was and how everything was benchmarked.
 
I'm curious to see what happens. This is a single benchmark, and early in the DX12 driver cycle, so if it's a pure software issue it won't take long for that gap to disappear.

If this is a real problem, then I'd be shocked. According to this article Maxwell has a deeper mixed shader queue than the 290X:

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

But could Nvidia's claims be bullshit? Or is this just a case of early drivers?
 
Haven't we already seen that DX12 doesn't scale beyond four cores?

I did read the article... at least as much as I can read something on Ars. I mean it is kind of scraping the bottom of the tech barrel.

no?
 
I'm curious to see what happens. This is a single benchmark, and early in the DX12 driver cycle, so if it's a pure software issue it won't take long for that gap to disappear.

If this is a real problem, then I'd be shocked. According to this article Maxwell has a deeper mixed shader queue than the 290X:

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

But could Nvidia's claims be bullshit? Or is this just a case of early drivers?

NV's Windows 10 drivers seem really half baked right now and I say this as a 980 Ti owner. I've had weird graphical corruption just on the desktop and the GeForce forums are flooded with issues. I had no issues prior to installing W10...

I don't even think the Windows 10 OS itself was ready for prime time, still feels like a beta and I wished I kept my rig on 7. Already did a clean install and stuff so not worth the hassle to revert now though.

As long as they get this sorted in time it isn't a big deal. For now, a win for AMD yes, but it only matters if the lead lasts when real DX12 games come out.
 
^ Win 10 is nothing more than a tweaked Win 8.1

NVidia has a long storied history of not having their drivers ready for new iterations of Windows...
 
I'm curious to see what happens. This is a single benchmark, and early in the DX12 driver cycle, so if it's a pure software issue it won't take long for that gap to disappear.

If this is a real problem, then I'd be shocked. According to this article Maxwell has a deeper mixed shader queue than the 290X:

http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

But could Nvidia's claims be bullshit? Or is this just a case of early drivers?

That was an interesting article to read.
 
Seems like AMD has planned for DirectX 12 at the cost of performance on DirectX 11. An interesting call to make, but if those numbers hold up, Nvidia will be in real trouble; that might not be something they can fix with a driver update. Prime1 will be most displeased if this is true.
 
Nvidia doesn't even have proper DX12 drivers yet; everything is super pre-alpha.

DX12 is basically Mantle. Go back and look at how much Mantle gained... basically nothing, sometimes even worse.

Remember, Nvidia has 85% of the market. Even if the benches are true, Nvidia simply won't allow it and will gimp AMD cards on purpose.
 
Yep, looking forward to getting even more performance out of my R9 290 at 4k resolutions. :D Almost feel sorry for those Nvidia owners who might have to upgrade to just keep up. :p Good to see AMD's hard work finally starting to pay off.

This is what is really hilarious to me. Whenever we see the frequent benchmarks showing Nvidia cards outperforming their AMD counterparts, it's always "b-but <something>". But then we see exactly ONE benchmark of an AMD-sponsored game in alpha, and because AMD is outperforming Nvidia it's suddenly an established fact for the whole future, with AMD fanboys going full "LALALA I CAN'T HEAR YOU". Not like we've been through this exact same thing a thousand times already. No, seriously, just how many of these wishful-thinking parades from AMD that always end in major disappointment and damage control have we been through already, ranging from their attempts at fixing their CPU fuckups to their rebrands? Speaking of CPUs, we've been hearing from AMD fanboys that DX12 would definitely make the FX line outstanding, and this very same test posted here completely destroyed that notion.

Haven't you guys simply had enough flak already? I think there's been plenty of time to have learned to shut your mouths until the thing actually exists in consumer-tangible form, because I'm pretty sure you must be tired of biting your own tongues this much already, right? You guys cling so blindly to the one glimmer of hope you find, and you always get burned in the end. Stop this mad post-purchase rationalization, jesus.
 
Jeez...look at the top of charts... One chart says Six Cores...other chart says Four Cores no HT. If you read the article, it spells it out for you, this was not CPU limited.

That just proves that it doesn't benefit from more than 4 threads. It can still be CPU limited.
 
Remember, Nvidia has 85% of the market. Even if the benches are true, Nvidia simply won't allow it and will gimp AMD cards on purpose.

Sure, they walk over to AMD manufacturers and sabotage their production lines. What the hell are you talking about?
 
Sure, they walk over to AMD manufacturers and sabotage their production lines. What the hell are you talking about?

Well, DX12 is supposed to be down-to-the-metal coding, so I guess it would be easy to make code that favours Maxwell as long as Nvidia pays for it.
 
This was with a 290x?
What does Fury 390X score?
Well, if we go the parallelism route, Fury scores the exact same. They all have the same number of compute engines (8).

As with "Hawaii", "Fiji" carries eight Asynchronous Compute Engines.

Maxwell's Asynchronous Thread Warp can queue up 31 compute tasks and 1 graphics task. Now compare this with AMD GCN 1.1/1.2, which is composed of 8 Asynchronous Compute Engines, each able to queue 8 compute tasks for a total of 64, coupled with 1 graphics task on the Graphics Command Processor.

If we go the CPU bottleneck route, Fury scores the exact same, too. All 3 cards will be limited by the CPU at the same point. :rolleyes:

Bottom line: the framerate is being limited by a draw-call bottleneck, either in the GPU hardware (which causes AMD to run faster than typical) or in CPU overhead (which causes the 980 Ti to run slower than typical). Both scenarios produce the same result.
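The queue figures quoted above tally up like this (the numbers are taken from the post itself, not verified against vendor documentation):

```python
# Tally of the async queue figures quoted in this thread (unverified,
# repeated from the post above for arithmetic only).

# Maxwell (as claimed): 31 compute task slots plus 1 graphics task.
maxwell_compute = 31
maxwell_graphics = 1

# GCN 1.1/1.2 "Hawaii"/"Fiji" (as claimed): 8 Asynchronous Compute
# Engines, each queuing 8 compute tasks, plus 1 graphics task on the
# Graphics Command Processor.
gcn_aces = 8
gcn_queues_per_ace = 8
gcn_compute = gcn_aces * gcn_queues_per_ace  # 64
gcn_graphics = 1

print(f"Maxwell: {maxwell_compute} compute + {maxwell_graphics} graphics")
print(f"GCN:     {gcn_compute} compute + {gcn_graphics} graphics")
```

Which is why, on these figures, every GCN 1.1/1.2 card from Hawaii to Fiji presents the same 64+1 queue front regardless of shader count.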
 
Well, DX12 is supposed to be down-to-the-metal coding, so I guess it would be easy to make code that favours Maxwell as long as Nvidia pays for it.

If these APIs really are console-like low-level, then yeah people have no idea what kind of disaster we're headed for in PC gaming.
 
Well, DX12 is supposed to be down-to-the-metal coding, so I guess it would be easy to make code that favours Maxwell as long as Nvidia pays for it.

The "to the metal" stuff is mostly marketing noise.

But the ideology and abstraction _have_ seen a blatant paradigm shift with the new APIs. The driver does less than ever behind the scenes, and it's up to you now to take care of that. It's a different (and lighter) abstraction, not the total absence of one.

If these APIs really are console-like low-level, then yeah people have no idea what kind of disaster we're headed for in PC gaming.

Yeah man, because it's definitely in the best interests of the industry to doom itself, and you alone have foreseen what a sea of engineers could not.
 
Yeah man, because it's definitely in the best interests of the industry to doom itself, and you alone have foreseen what a sea of engineers could not.

Right. Because broken and poor-performing PC games are rare for this industry.
 
Yeah man, because it's definitely in the best interests of the industry to doom itself, and you alone have foreseen what a sea of engineers could not.

Well, it's been 10-15 years since the great API wars of the 3dfx era, so I'm not exactly surprised that a lot of people have forgotten why Microsoft was seen as a saviour back then, thanks to providing a higher-abstraction option.
 
Sure, they walk over to AMD manufacturers and sabotage their production lines. What the hell are you talking about?

A few years ago there was a scandal because games funded by Nvidia performed 30% worse on AMD cards.
 
If these APIs really are console-like low-level, then yeah people have no idea what kind of disaster we're headed for in PC gaming.

Well, it's been 10-15 years since the great API wars of the 3dfx era, so I'm not exactly surprised that a lot of people have forgotten why Microsoft was seen as a saviour back then, thanks to providing a higher-abstraction option.

The difference being, there are options and developers aren't being forced to go low-level if they don't want to.
 
This is more about the game engine than the driver, since this is closer to the metal of the GPU's design, and Nvidia just hasn't been built for it; their hardware depends too heavily on driver support. AMD has been planning for this since the HD 7950/7970 with GCN: the hardware was built with DX12 in mind, to limit the need for driver support and let game developers build their engines closer to the hardware, which allows better game support right out of the box.
 
Right. Because broken and poor-performing PC games are rare for this industry.

And who is buying these things? It's not me.

Nobody has to use DX12 / Vulkan. The most played games on Steam are still stuck in DX9 land...
 
And who is buying these things? It's not me.

Nobody has to use DX12 / Vulkan. The most played games on Steam are still stuck in DX9 land...

This is why I'm not too worried about a single benchmark. Even if this turns out to be a real hardware issue (FX versus R300 slamdance anyone?), and Nvidia hardware is just "DX12 in name only," it won't matter to most people.

Back in the day people actually cared about Half-Life 2 performance numbers while riding the train of the freshly minted DX9. But today they still make games in DX9c, and DX11 will be much more popular than DX12 for cutting-edge games for some time.

The only reason we're hearing about this is because the CEO of Stardock knows how to ruffle feathers and whine about just the right things to get free press. He has a history of doing so, even though his company is 1/4th the size and 1/10th the revenue of a tier-2 developer like Gearbox. The top-rated Stardock games I've played have not lived up to the hype, so I don't have much hope for this one. But Brad Wardell would sure like to give YOU a reason to care :D
 
This is what is really hilarious to me. Whenever we see the frequent benchmarks showing Nvidia cards outperforming their AMD counterparts, it's always "b-but <something>". But then we see exactly ONE benchmark of an AMD-sponsored game in alpha, and because AMD is outperforming Nvidia it's suddenly an established fact for the whole future, with AMD fanboys going full "LALALA I CAN'T HEAR YOU". Not like we've been through this exact same thing a thousand times already. No, seriously, just how many of these wishful-thinking parades from AMD that always end in major disappointment and damage control have we been through already, ranging from their attempts at fixing their CPU fuckups to their rebrands? Speaking of CPUs, we've been hearing from AMD fanboys that DX12 would definitely make the FX line outstanding, and this very same test posted here completely destroyed that notion.

Haven't you guys simply had enough flak already? I think there's been plenty of time to have learned to shut your mouths until the thing actually exists in consumer-tangible form, because I'm pretty sure you must be tired of biting your own tongues this much already, right? You guys cling so blindly to the one glimmer of hope you find, and you always get burned in the end. Stop this mad post-purchase rationalization, jesus.

I think the MAIN issue is that Nvidia and their "fanboys" have constantly shamed AMD for their drivers.
Now that the tables have turned and Nvidia has been caught with their pants down, they just get a pass. As illustrated by this comment.
NV's Windows 10 drivers seem really half baked right now and I say this as a 980 Ti owner. I've had weird graphical corruption just on the desktop and the GeForce forums are flooded with issues. I had no issues prior to installing W10...

I don't even think the Windows 10 OS itself was ready for prime time, still feels like a beta and I wished I kept my rig on 7. Already did a clean install and stuff so not worth the hassle to revert now though.

As long as they get this sorted in time it isn't a big deal. For now, a win for AMD yes, but it only matters if the lead lasts when real DX12 games come out.

If this was AMD, the Nvidia "fanboys" would be all over this. This particular thread would be over 10 pages long with Nvidia "fanboys" saying "Told you so".
 
I think the MAIN issue is that Nvidia and their "fanboys" have constantly shamed AMD for their drivers.
Now that the tables have turned and Nvidia has been caught with their pants down, they just get a pass. As illustrated by this comment.


If this was AMD, the Nvidia "fanboys" would be all over this. This particular thread would be over 10 pages long with Nvidia "fanboys" saying "Told you so".

There's a bit of a difference here, in that this is one solitary example from a game that's still in alpha, rather than one of multiple examples of AAA titles already on the market, which have been the reason for AMD's bad reputation over the last few years. The safe assumption is that Nvidia's performance will have improved by the time the game actually gets released and DX12 actually starts to become more widespread.

So it's good to see AMD doing a good job in this example. However, I think most of us are going to reserve full judgement until we start seeing real-world performance.
 