R9 290X goes toe-to-toe with GTX 980 Ti on DirectX 12

If you have current-gen Nvidia cards, they won't do well with the DX12 titles coming down the line fast. Next gen? Perhaps. But don't hang your hopes on something unrealistic.


There seems to be a lot of menstruating about benchmarks from either camp when one does better than the other.

Can't we all just game?

Can't we all just game?

[attached image: l4d_strategery.png]
 
A lot of you are talking out of your butt and clearly don't understand the first thing about what you are talking about. Seeing how Nvidia's performance actually goes down in DX12 versus DX11, clearly the drivers are not ready. If you say DX12 means that drivers don't matter, you're just going on your own theories, which are proven only in your own mind.

Also, a game being "just a bunch of draw calls" does not make it very CPU limited. That is the most absurd thing I have ever heard. I am not trying to insult you guys, I just want to let you know that it is absurd. Nothing gets on that screen, or to the GPU in general, without being put there by a draw call. That's what a draw call is. What makes a game CPU limited is having all those draw calls executed from a single core. Artificial intelligence being calculated for every enemy on the map, even though they are miles away, can be taxing on the CPU. Calculating how the wind affects a million blades of grass can tax the CPU.

AMD is showing a huge increase in these early demos because they were already a step ahead of the game driver-wise thanks to their horribly failed Mantle project.
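For readers who want to see what "a bunch of draw calls" actually looks like at the API level, here is a minimal, generic D3D11-style sketch (the `Renderable` struct and `RenderScene` function are made up for illustration, not taken from any shipping engine). The point is that every object on screen costs at least one draw call, and in D3D11 they all funnel through a single immediate context, which is why one CPU core ends up doing the work:

```cpp
// Illustrative D3D11-style render loop: every visible object needs at least
// one draw call, and all of them go through the one immediate context, so a
// single CPU thread ends up issuing them.
#include <d3d11.h>
#include <vector>

struct Renderable {                 // hypothetical per-object data
    ID3D11Buffer* vertexBuffer;
    ID3D11Buffer* indexBuffer;
    UINT          indexCount;
    UINT          stride;
};

void RenderScene(ID3D11DeviceContext* ctx, const std::vector<Renderable>& scene)
{
    for (const Renderable& r : scene) {
        UINT offset = 0;
        // State changes and the draw itself are all serialized on this context.
        ctx->IASetVertexBuffers(0, 1, &r.vertexBuffer, &r.stride, &offset);
        ctx->IASetIndexBuffer(r.indexBuffer, DXGI_FORMAT_R32_UINT, 0);
        ctx->DrawIndexedInstanced(r.indexCount, 1, 0, 0, 0);
    }
}
```

DX12 does not make those draws free; it lets the recording be spread across cores and cuts per-call driver overhead, which is exactly where the benchmarks in this thread differ.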
 

Wrong.
Drivers have nothing to do with performance in DX12.

Don't expect an Nvidia fix through driver intervention either. DirectX 12 allows far less driver intervention because it is closer to the metal than DirectX 11. Therefore Nvidia's penchant for replacing shaders at the driver level is nullified in DirectX 12. DirectX 12 will be far more hardware limited than DirectX 11.

Oxide confirmed it here:
DirectX 11 vs. DirectX 12 performance

There may also be some cases where D3D11 is faster than D3D12 (it should be a relatively small amount). This may happen under lower CPU load conditions and does not surprise us. First, D3D11 has 5 years of optimizations where D3D12 is brand new. Second, D3D11 has more opportunities for driver intervention. The problem with this driver intervention is that it comes at the cost of extra CPU overhead, and can only be done by the hardware vendor’s driver teams. On a closed system, this may not be the best choice if you’re burning more power on the CPU to make the GPU faster. It can also lead to instability or visual corruption if the hardware vendor does not keep their optimizations in sync with a game’s updates.
 
When it comes to magic API performance improvements, this is something that is constantly claimed by the industry, but never in the history of the world has it happened -- not once. We heard the same malarkey during the moves from DX7 to 8, 8 to 9, and 9 to 10, and the claimed x% performance increase did not materialize a single cotton-picking time. The planted and "directed" tests sure must help to sell cards, though...
 
Nvidia sells you outdated tech called the 980 Ti, etc...
I've said so for a long time.
Buy AMD; buy the future today.
 
When it comes to magic API performance improvements, this is something that is constantly claimed by the industry, but never in the history of the world has it happened -- not once. We heard the same malarkey during the moves from DX7 to 8, 8 to 9, and 9 to 10, and the claimed x% performance increase did not materialize a single cotton-picking time. The planted and "directed" tests sure must help to sell cards, though...

Exactly. I've been saying this all along. Although it would be really nice to see the claims about performance increases come to fruition, I'll remain very skeptical until we have a thorough list of real-world examples.
 
What blatantly ignorant commentary... God... :eek: It's really impressive...

Suddenly someone becomes a DirectX programmer/designer, and also a game developer/programmer/designer, and decides to teach and enlighten us. :rolleyes:

I still don't know whether it was trolling or serious. :(

Don't worry, I'm sure Nvidia will just spend a few billion on PR convincing us all once again that AMD cards can't play games and have horrible drivers.
 
If you Google "Fury X DX12 benchmark" you get:

http://www.extremetech.com/gaming/2...he-singularity-amd-and-nvidia-go-head-to-head

It's the same company. Between the two companies they pretty much lay it all on the table.

Nvidia focused on an aggressive, straight, single-pipeline approach and did a stellar job.
AMD focused on asynchronous calls using more pipelines.

Intel would be the first to tell you that out-of-order execution and multiple lines of computation running side by side always win.

Nvidia is architecturally built for DX11 and suffers some performance hit in DX12 because its optimizations don't work there.

I'm not saying they won't be able to make it... better, but the benchmark was certified by Microsoft, Nvidia, and AMD and passed the DX12 and DX11 certification requirements. Both companies had access to it for over a year, and Nvidia, right out of the gate, was blaming the game.

I think we're seeing a turn of favor, just like when Intel jumped into HT tech and AMD went for multiple pipelines. Moving forward, if the 290X can almost overtake a 980 Ti and scales better, it casts some doubt on Nvidia's current gen and Maxwell as well.

The Fury is hot off the press, posted terrific numbers, and shows AMD has moved away from the single-pipeline ideal and come out swinging in the new world of DX12.
 
Nvidia has always been great at squeezing performance out of their drivers and delivering 100% of what the hardware can output.

AMD has always had terrific hardware and questionable drivers, usually landing on a sweet spot with particular revisions.

For once, Nvidia might have dropped the ball.

AMD is killing it with their Async Shaders and Multi-Threaded Command Buffer Recording.

http://www.dsogaming.com/news/amd-e...ders-multi-threaded-command-buffer-recording/

Maybe it's time for them to shine again?
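For anyone wondering what the "Multi-Threaded Command Buffer Recording" in that link boils down to, here is a rough, generic D3D12 sketch (the `RecordChunk` helper and thread count are hypothetical; this is the standard API pattern, not code from any particular game): each worker thread records its own command list, and only the final submission is serialized on the queue.

```cpp
// Generic D3D12 multi-threaded command recording sketch (illustrative only).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Hypothetical helper: binds state and issues the draws for one slice of the scene.
void RecordChunk(ID3D12GraphicsCommandList* list, int chunkIndex)
{
    // ... PSO binding, resource binding, DrawIndexedInstanced calls ...
}

void RenderFrame(ID3D12Device* device, ID3D12CommandQueue* queue, int numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<std::thread>                       workers;

    for (int i = 0; i < numThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each thread records into its own list: no shared context, no lock.
        workers.emplace_back([&lists, i] {
            RecordChunk(lists[i].Get(), i);
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // Submission is the only serialized step.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```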
 

Where do you see that shining?

All I see is that in the most favourable case the Fury X slightly beats the 980 Ti.

Meanwhile the 980 Ti still has 20% OC headroom and offers superior performance everywhere else.

And I get that performance without being forced to install a spyware OS ;)
 

DX12 is the future, whether you like it or not. The 980 Ti is currently trading blows with a two-year-old GPU, lol.

You can keep Win7 and enjoy old games. Microsoft is pushing DX12 on Xbox games, and most of them will be ported to PC.
 
I wouldn't be so quick to trust the validity of this game's performance either -- Nvidia was seemingly performing much better than AMD on the same engine six months ago. It seems more like AMD's sponsorship money got them further than anything.

The Star Swarm test makes use of 100,000 draw calls, which does not bottleneck either Nvidia's Maxwell or AMD's GCN 1.1/1.2. That is just about the only DirectX 12 feature it uses (the ability to draw more units on the screen). More units means more triangles. Star Swarm does not make use of Asynchronous Shading (parallel shading) or the subsequent post-processing effects seen in Ashes of the Singularity.

Once you throw in Asynchronous Shading, Nvidia's serial architectures take a noticeable dive in performance, while AMD's parallel architectures take little to no performance hit.

Since both the Xbox One and the PlayStation 4 include Asynchronous Compute Engines (2 for the Xbox One, like those found in a Radeon 7970, and 8 for the PS4, like those found in Hawaii/Fiji), it is almost a given that developers will make use of this feature. Microsoft added it to DirectX 12 for this reason.

This is a case where AMD's close collaboration with the consoles will surely pay off. This feature should be welcomed by any R9 200 series owner, as it will breathe new life into their aging hardware.
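To put the "Asynchronous Shading" claim in concrete terms: in D3D12 it amounts to creating a compute queue alongside the graphics queue and letting the GPU overlap work from both, with a fence where one side depends on the other. A minimal, generic sketch of that setup (not from Ashes or any driver; the fence value and the split of work into `computeWork`/`gfxWork` are just illustrative):

```cpp
// Minimal D3D12 async-compute sketch: a direct (graphics) queue plus a compute
// queue, with a fence so the graphics queue waits only where it must.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue,
                  ComPtr<ID3D12Fence>& fence)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics (can also do compute/copy)
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // fed to the GPU's compute engines
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
}

void SubmitFrame(ID3D12CommandQueue* gfxQueue, ID3D12CommandQueue* computeQueue,
                 ID3D12Fence* fence, UINT64 fenceValue,
                 ID3D12CommandList* computeWork, ID3D12CommandList* gfxWork)
{
    // Kick off compute work that can run concurrently with other GPU work.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, fenceValue);   // mark completion on the compute queue

    // GPU-side wait: the graphics queue holds its work until the compute work
    // is done, and the CPU never blocks.
    gfxQueue->Wait(fence, fenceValue);
    gfxQueue->ExecuteCommandLists(1, &gfxWork);
}
```

Whether the hardware actually overlaps the two queues is up to the GPU and its scheduler, which is exactly the difference being argued about in this thread.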
 
Oh, we're back at the old argument: the future of gaming is Linux, er, AMD?

You can dream about the performance of yet-nonexistent tech while I'm playing The Witcher 3 with HairWorks on my 980 Ti.

By the time DX12 titles hit properly, I will have a new card. :D
 
Wrong.
Drivers have nothing to do with performance in DX12.

Don't expect an Nvidia fix through driver intervention either. DirectX 12 allows far less driver intervention because it is closer to the metal than DirectX 11. Therefore Nvidia's penchant for replacing shaders at the driver level is nullified in DirectX 12. DirectX 12 will be far more hardware limited than DirectX 11.

Oxide confirmed it here:

Then please explain to me how DX12 going "closer to the metal" actually reduces performance in Nvidia's case? Please don't throw around catchphrases from developer presentations like that. A touchpad implementation can be developed with lower-level programming as well, but it still needs a functioning driver.
 
You should really add that as a separate post, tons of great detail and work done and it will sadly get buried in this thread.
 
Then please explain to me how DX12 going "closer to the metal" actually reduces performance in Nvidia's case? Please don't throw around catchphrases from developer presentations like that. A touchpad implementation can be developed with lower-level programming as well, but it still needs a functioning driver.

Because Nvidia started to rely more and more on driver intervention for shader processing starting with Kepler. Technically this was a step backwards, because Kepler's shader hardware (which Maxwell inherits) is less complex than Fermi's and does less scheduling work at the hardware level, relegating the rest of the work to software.

The end result is an interesting one, if only because by conventional standards it’s going in reverse. With GK104 NVIDIA is going back to static scheduling. Traditionally, processors have started with static scheduling and then moved to hardware scheduling as both software and hardware complexity has increased. Hardware instruction scheduling allows the processor to schedule instructions in the most efficient manner in real time as conditions permit, as opposed to strictly following the order of the code itself regardless of the code’s efficiency. This in turn improves the performance of the processor.

However based on their own internal research and simulations, in their search for efficiency NVIDIA found that hardware scheduling was consuming a fair bit of power and area for few benefits. In particular, since Kepler’s math pipeline has a fixed latency, hardware scheduling of the instruction inside of a warp was redundant since the compiler already knew the latency of each math instruction it issued. So NVIDIA has replaced Fermi’s complex scheduler with a far simpler scheduler that still uses scoreboarding and other methods for inter-warp scheduling, but moves the scheduling of instructions in a warp into NVIDIA’s compiler. In essence it’s a return to static scheduling.

Ultimately it remains to be seen just what the impact of this move will be. Hardware scheduling makes all the sense in the world for complex compute applications, which is a big reason why Fermi had hardware scheduling in the first place, and for that matter why AMD moved to hardware scheduling with GCN. At the same time however when it comes to graphics workloads even complex shader programs are simple relative to complex compute applications, so it’s not at all clear that this will have a significant impact on graphics performance, and indeed if it did have a significant impact on graphics performance we can’t imagine NVIDIA would go this way.

http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3
 
What is clear at this time though is that NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute, which says a lot right there. Given their call for efficiency and how some of Fermi’s compute capabilities were already stripped for GF114, this does read like an attempt to further strip compute capabilities from their consumer GPUs in order to boost efficiency. Amusingly, whereas AMD seems to have moved closer to Fermi with GCN by adding compute performance, NVIDIA seems to have moved closer to Cayman with Kepler by taking it away.


With that said, in discussing Kepler with NVIDIA’s Jonah Alben, one thing that was made clear is that NVIDIA does consider this the better way to go. They’re pleased with the performance and efficiency they’re getting out of software scheduling, going so far to say that had they known what they know now about software versus hardware scheduling, they would have done Fermi differently. But whether this only applies to consumer GPUs or if it will apply to Big Kepler too remains to be seen.

More points. Basically, Nvidia gimped the Fermi architecture's successor by making it simpler and more linear, and moved the rest of the work to their drivers. Undoubtedly, and obviously, since it is already happening, Maxwell will suffer even more under DX12, where the driver's ability to make up for hardware shortfalls is nullified.
 
It's going to be hilarious when the first DX12 game actually comes out and Nvidia blows the AMD cards out of the water.
 
Considering Nvidia's iron grip on the market, devs have no reason to make separate considerations for AMD's benefit unless it happens by accident. And even then, Nvidia will do what they always do and intentionally prevent it from happening... they will do everything in their power to stop it from seeing the light of day.

If nothing else they will design Pascal to take advantage of the same feature, market it to hell and back, and pretend Maxwell/Kepler don't exist. It won't matter by then anyway; you won't be able to buy any of the current cards. Nvidia has no reason to care about their old GPUs' performance when they're trying to sell the new hotness.

What I find most interesting here is that the Fury X, 980 Ti, and 290X are all within spitting distance of each other. The 290X and Fury X should perform identically under the same draw-call bottleneck, based on the analysis other people are providing. If all of this is true, then it looks like AMD intentionally gimped their own $650 halo product compared to the rest of their line-up.
 
It's going to be hilarious when the first DX12 game actually comes out and Nvidia blows the AMD cards out of the water.

If that does not happen, will you please post a picture of your tears of anguish? :D So how exactly is Nvidia going to blow away AMD if there are no Nvidia-specific software optimizations like they have in DX11?

Considering Nvidia's iron grip on the market, devs have no reason to make separate considerations for AMD's benefit unless it happens by accident. And even then, Nvidia will do what they always do and intentionally prevent it from happening... they will do everything in their power to stop it from seeing the light of day.

So Nvidia will attempt to kill the DX12 market with market manipulation instead of making legitimately better hardware? Well, at least we now know why Nvidia was showing better in GameWorks games, and you have admitted to it. :D
 
The idea is that this game might represent one specific game where AMD slips ahead of Nvidia. You can do the same thing with DX11 right now -- try Shadow of Mordor. AMD leads Nvidia in pretty much every SoM test I could find.

http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/7
http://www.tomshardware.com/reviews/shadow-of-mordor-performance,3996-4.html

So if you pretend for a second that SoM is the first and only DX11 game on the market, suddenly AMD looks very appealing. We might even get some fun articles about why AMD is winning DX11 thanks to their architecture, and a bunch of other nonsense that may or may not actually mean anything. And then you have threads like this popping up: "AMD leads in first SoM tests, is Nvidia doomed in DX11?"

And of course, the reality is that that's simply not the case. With AotS we're looking at some kind of bottleneck that might have originally been implemented to show off Mantle. It may not even be an issue in any other DX12 game... or maybe just in RTS games in particular. Or maybe, as DX12 games become more advanced, the bottleneck on Nvidia hardware will become more obvious. It's similar to AMD's issues with tessellation in DX11: it can really hurt in certain games where Nvidia intentionally goes overboard to kill AMD's performance, but for most games it has no effect. As far as I'm concerned, AotS's DX12 benchmark is the equivalent of Nvidia releasing a DX11 tessellation benchmark. Neither is representative of real-world gameplay; they exist solely to emphasize one feature.
 
If that does not happen, will you please post a picture of your tears of anguish? :D So how exactly is Nvidia going to blow away AMD if there are no Nvidia-specific software optimizations like they have in DX11?

So Nvidia will attempt to kill the DX12 market with market manipulation instead of making legitimately better hardware? Well, at least we now know why Nvidia was showing better in GameWorks games, and you have admitted to it. :D
Simple: get developers to use more geometry instead of tricky shaders. Nvidia cards already crush AMD cards in that regard.
 
If that does not happen, will you please post a picture of your tears of anguish? :D So how exactly is Nvidia going to blow away AMD if there are no Nvidia-specific software optimizations like they have in DX11?

So Nvidia will attempt to kill the DX12 market with market manipulation instead of making legitimately better hardware? Well, at least we now know why Nvidia was showing better in GameWorks games, and you have admitted to it. :D

Sure, why not? I'll post myself buying an AMD card as well.

I am more concerned that everyone is now saying DX12 is going to save AMD.

I think it's foolish this early in the game to say anything for sure about DX12 and AMD vs. Nvidia.

I think AMD has an advantage due to the similarities between Mantle and DX12; however, it would be foolish to think that a company with the resources of Nvidia isn't ready for DX12 with the next generation of their cards.

AMD might have been the first out of the gate with good DX12 performance, but their cards are on the table; you won't see much of a difference between Fiji and the next generation of AMD cards.

With Nvidia, however, the difference between the Pascal architecture and Maxwell will be significant.

This is just my opinion on what's going on.
 
I'm an Ashes Founder, and I own a GTX 980 and an R9 290. The game looks great. If you like RTS games with big battles and low amounts of micro, then this looks like it will be a fantastic game.
That's what I'm looking at: how the game looks and plays on my hardware. If my experience is great, I don't care if someone else might be getting higher frame rates than me. I care about my experience, period.
 
When it comes to magic API performance improvements, this is something that is constantly claimed by the industry, but never in the history of the world has it happened -- not once. We heard the same malarkey during the moves from DX7 to 8, 8 to 9, and 9 to 10, and the claimed x% performance increase did not materialize a single cotton-picking time. The planted and "directed" tests sure must help to sell cards, though...

Yes, it did, with better visuals.
 
The only way that's going to happen is in GameWorks titles.

Yes, this is quite worrisome. In relation to this, however, one phrase from the Oxide blog caught my attention:

It has passed the very thorough D3D12 validation system provided by Microsoft specifically designed to validate against incorrect usages.

At first glance it's hopeful that this was brought to Microsoft's attention and that certain safety measures were put in place as part of minimally optimized code. I need to look into this.
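For reference, the validation most of us can actually poke at is the D3D12 debug layer, which flags incorrect API usage at runtime; whether that is the same system Oxide is referring to, the blog does not say. A minimal sketch of enabling it before device creation:

```cpp
// Enable the D3D12 debug (validation) layer before creating the device; all
// subsequent API usage is then checked for correctness at runtime.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

bool CreateValidatedDevice(ComPtr<ID3D12Device>& device)
{
    ComPtr<ID3D12Debug> debug;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
        debug->EnableDebugLayer();                 // must happen before device creation

    return SUCCEEDED(D3D12CreateDevice(nullptr,    // default adapter
                                       D3D_FEATURE_LEVEL_11_0,
                                       IID_PPV_ARGS(&device)));
}
```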
 