DirectX 12 Preview: Star Swarm benchmarks

glNext will just be the AZDO stuff plus the new command list thingy from nvidia, and a re-do of how the shaders are going to be done and probably an easier programming API since everybody complains about that. I also suspect they're going to merge all the different GLs like WebGL, GLES, etc. into one.

The two Valve employees presenting glNext at GDC are actually former nvidia devs.

I'm going to call it now: not a chance - it's going to be a massive API change, or that will be all she wrote.
 
TDP = Thermal Design Power

This represents how much heat the cooling solution is designed to dissipate, in watts. The GTX 980 has a cooler that can handle 165w of heat, while the R9 290 needs a cooler that can deal with 275w of heat.

The GTX 980 is an efficient card, and doesn't throw out as much heat (even when drawing similar amounts of power).
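For a rough feel of what a cooler rating like that means in practice, here's a back-of-the-envelope sketch. The 0.3 C/W thermal resistance is an invented figure purely for illustration, not a spec from either vendor:

Code:
# Steady-state estimate: temperature rise over ambient = heat load x thermal resistance.
# The 0.3 C/W value is an invented illustrative number, not a real cooler spec.
def gpu_temp(heat_watts, ambient_c=25.0, cooler_r_c_per_w=0.3):
    return ambient_c + heat_watts * cooler_r_c_per_w

print(gpu_temp(165))  # 74.5 C  -> what a cooler sized for a 165w card has to hold
print(gpu_temp(275))  # 107.5 C -> the same cooler would be hopeless on a 275w card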

False. We already discussed this...
The Nvidia Maxwell TDP numbers are the average power consumption in specific gaming situations, cherry-picked by Nvidia, at the base clocks, aka without boost.

When it draws similar amounts of power, it is dissipating the same amount of heat. The coolers are designed for ~200-225w.
 
TDP = Thermal Design Power

This represents how much heat the cooling solution is designed to dissipate, in watts. The GTX 980 has a cooler that can handle 165w of heat, while the R9 290 needs a cooler that can deal with 275w of heat.

The GTX 980 is an efficient card, and doesn't throw out as much heat (even when drawing similar amounts of power).

No, that's not even close to right. POWER = HEAT, didn't they learn you anything in skool? :D It is a more efficient card, so it DOES MORE WORK for the same unit of power.

First of all, let's assume they ran the power consumption test at Extreme settings, because they didn't include that much information.

The DX12 performance increase of the GTX 980 is 30% over the 290X at Extreme, but the cards are more like 15-20% apart in most games when you crank the settings. This means the CARD is doing marginally more work, which means that the GTX 980 is running at a much higher voltage/frequency. Let's assume that the card is running at full-tilt (165w; it can get as high as 190w in Furmark).

Since dynamic power consumption scales with frequency * voltage^2, and since the highest frequencies always require the biggest bump in voltage, the Nvidia card is likely at full-tilt (165w) while the AMD card (assuming it's stock) is running at around ~800 MHz, which should drop its voltage as well, so the power is way down from the typical 250w you get in games.
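To put rough numbers on that f * V^2 scaling, here's a quick sketch. The clocks and voltages are guesses for the sake of the arithmetic, not measured values from either card:

Code:
# Dynamic power scales roughly as P ~ C * f * V^2.
# Operating points below are illustrative guesses, not measurements.
def relative_power(freq_mhz, volts, ref_freq_mhz, ref_volts):
    return (freq_mhz / ref_freq_mhz) * (volts / ref_volts) ** 2

# 290X dropping from ~1000 MHz @ 1.20 V down to ~800 MHz @ 1.05 V:
print(relative_power(800, 1.05, 1000, 1.20))  # ~0.61 -> roughly 60% of full-tilt dynamic power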

I would expect that both companies have aggressive frequency/voltage throttling based upon graphics load, so it's not hard to believe that the AMD card is under 200w. And at that point you're 15-20w away from a fully-floored GTX 980, so it makes sense.
 
No, that's not even close to right.

First of all, let's assume they ran the power consumption test at Extreme settings, because they didn't include it.

The DX12 performance increase of the GTX 980 is 30% over the 290X. This means the CARD is doing 30% more work, which means that the GTX 980 is running at a much higher voltage/frequency.

Since dynamic power consumption scales with frequency * voltage^2, and since the highest frequencies always require the biggest bump in voltage, the Nvidia card is likely at full-tilt (165w) while the AMD card (assuming it's stock) is running at around 700-800 MHz, which should drop its voltage as well, so the power is way down from the typical 250w you get in games.

I would expect that both companies have aggressive frequency/voltage throttling based upon graphics load, so it's not hard to believe that the AMD card is under 200w.


Wrong, just because the core is doing more work doesn't mean the voltage has to scale up with it. Overclocking with the 980 shows this time and again.
 
I'm going to call it now: not a chance - it's going to be a massive API change, or that will be all she wrote.

Oh it'll definitely be a massive API change. We just won't see anything that you can't already do.
 
Wrong, just because the core is doing more work doesn't mean the voltage has to scale up with it. Overclocking with the 980 shows this time and again.

This is not about YOU overclocking your GTX 980. This is about the automatic power control built into all modern GPUs.

The automatic power control has several different speed/voltage settings that it can step between in a heartbeat based upon load. In a STOCK card (from the article), the user has NO CONTROL over these voltage levels. When you overclock the card (without modifying the voltage), it simply adds a higher frequency to the top power state.
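If it helps, here's a toy model of that stepping. The state table is invented for illustration; real boost tables are vendor-specific and far finer-grained:

Code:
# Conceptual sketch of automatic voltage/frequency stepping on a stock card.
# The (frequency MHz, voltage V) pairs are invented; on a stock card they're fixed by the vBIOS.
P_STATES = [
    (540, 0.90),
    (900, 1.00),
    (1126, 1.15),
    (1216, 1.21),  # top state; an overclock without a voltage mod just raises this frequency
]

def pick_state(gpu_load, thermal_ok, power_ok, current):
    # Step up one state when heavily loaded and within limits, step down when idle.
    if gpu_load > 0.8 and thermal_ok and power_ok:
        return min(current + 1, len(P_STATES) - 1)
    if gpu_load < 0.3:
        return max(current - 1, 0)
    return current

state = pick_state(gpu_load=0.95, thermal_ok=True, power_ok=True, current=2)
print(P_STATES[state])  # (1216, 1.21) -> stepped into the top frequency/voltage pair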

If you don't believe me, why don't you talk to all the people having problems with overclocking SLI GTX 980s because the cards are using different voltages?
 
If I was AMD I'd take the Mantle funding and throw it at their hardware. Get xfire to work as well as a single card and I'll give them a few grand next year.

It's too bad this review didn't look at tri-SLI 980s and if there's better scaling with more cores.
Have you used CrossFire recently? When I had 5870s it was a disappointing experience but the frametimes and compatibility on my 290X CF setup have been fantastic. Crossfire is arguably smoother than SLI right now. NVIDIA will probably need to drop the bridge soon too.

Mantle isn't open source, how is that good for the gaming community? Windows is the dominant gaming platform and it matters the most. A tiny subset who play games on osx or Linux are largely irrelevant.
AMD has stated that it will be open once Mantle is out of beta. Obviously this is one where we'll have to wait and see, but I can't remember a time they have promised something like that and then lied.

I'm going to call it now: not a chance - it's going to be a massive API change, or that will be all she wrote.
It's a ground-up rewrite from what they have stated. It pretty much has to be. The current version of OpenGL's biggest strength, and its Achilles' heel, is the massive amount of legacy support it provides. They have to ditch that to really provide performance improvements on the level of what Mantle and DX12 are going to offer.
 
This is not about YOU overclocking your GTX 980. This is about the automatic power control built into all modern GPUs.

The automatic power control has several different speed/voltage settings that it can step between in a heartbeat based upon load. In a STOCK card (from the article), the user has NO CONTROL over these voltage levels. When you overclock the card (without modifying the voltage), it simply adds a higher frequency to the top power state.

If you don't believe me, why don't you talk to all the people having problems with overclocking SLI GTX 980s because the cards are using different voltages?

I'm well aware of the different voltage and boost states of the 980. If, for example, the card is at P0 and the voltage is at the peak level, it won't go any higher even though the frequency can scale based on power draw and heat. So you will get the card working faster at a higher frequency but the temperature will remain the same. I've seen this on modified vBIOS and stock - the frequency can scale way up, but if the voltage peaks at a certain point, the power draw and heat don't scale linearly with it.
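Rough sketch of that point, with invented numbers: once the voltage is pegged, extra frequency only adds power linearly, while a voltage bump adds it quadratically on top.

Code:
# P ~ C * f * V^2: at a fixed (peaked) voltage, power grows only linearly with frequency.
# Clocks and voltages are invented for illustration.
def rel_power(f_mhz, volts, f0=1216.0, v0=1.21):
    return (f_mhz / f0) * (volts / v0) ** 2

print(rel_power(1400, 1.21))  # ~1.15 -> +15% power from a frequency-only overclock
print(rel_power(1400, 1.31))  # ~1.35 -> same clocks plus a voltage bump cost far more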

AMD has stated that it will be open once Mantle is out of beta. Obviously this is one where we'll have to wait and see, but I can't remember a time they have promised something like that and then lied.

By the time it's out of beta it will be of no use to anyone since DX 12 will be in full swing so in essence the point remains - Mantle was only ever good for AMD products.
 
I wish game developers would really put more focus on the new APIs.

I think people like Valve (wait, do they still make games...?) will be doing that. But a lot of these "AAA" games where the shops don't even know what a peecee is are probably not going to be pushing the envelope.

Indie games will probably be getting it automatically through Unity, etc.
 
Man. The AMD cards really benefit from DX12/Mantle over DX11 in those tests.

I wonder why there's such a disparity in the performance improvement for the AMD cards vs the NVIDIA ones? The 290X numbers go from 8.3FPS to 42.9FPS - an increase of 400%. But the 680 shows a 50% increase, since it's already pulling 23FPS and hits 36. Likewise the 980, about a 150% increase.

It's also odd that the 285 and 260X both had framerates right around 8FPS in DX11. I wonder if there's room for optimization in the DX11 driver path on the AMD cards still, or if that was a bug in the current driver for Win10 that's putting some other kind of limitation in place?
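The percentage math, for anyone double-checking (using the FPS figures quoted above; they round to roughly the 400% / 50% I mentioned):

Code:
# Relative improvement = (new - old) / old, using the figures above.
def pct_gain(old_fps, new_fps):
    return (new_fps - old_fps) / old_fps * 100

print(round(pct_gain(8.3, 42.9)))   # 417 -> the 290X gain, ~400%
print(round(pct_gain(23.0, 36.0)))  # 57  -> the 680 gain, ~50%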
 
Who cares where the idea for DX12 came from? It's a much better alternative than mantle because it's not locked to a single video card vendor. If AMD was the catalyst then I say thank you AMD. If not, then thank you MS. Time to get W10 out and some games working with DX12!!


Mantle is locked to AMD like freesync is locked to AMD, i.e. not at all.

There might be some additional perks to better highlight amd hardware, but I doubt there is anything on the scale of gameworks and their suite of nvidia only tools.


Nvidia could choose to write a mantle driver, or build discrete gpu cards with displayport 1.2a support and freesync; they just choose not to so they can charge their Apple-like captive fanbase extra money for the same features. Actually I'll be fair: up until this point they did that to get the tech to the market on the desktop earlier. Going forward, if nvidia continues to refuse to allow variable refresh rate support through any method other than gsync modules, then it WILL be due to an Apple-like desire to create a walled garden for better experiences.

Thus far the market has not punished them, so I bet they feel they have enough Reverse Charlie Demerjians (PRIME) who buy their stuff and are bewitched by the branding that they will pay the extra toll for the privilege of using nvidia's hardware.
 
Man. The AMD cards really benefit from DX12/Mantle over DX11 in those tests.

I wonder why there's such a disparity in the performance improvement for the AMD cards vs the NVIDIA ones? The 290X numbers go from 8.3FPS to 42.9FPS - an increase of 400%. But the 680 shows a 50% increase, since it's already pulling 23FPS and hits 36. Likewise the 980, about a 150% increase.

It's also odd that the 285 and 260X both had framerates right around 8FPS in DX11. I wonder if there's room for optimization in the DX11 driver path on the AMD cards still, or if that was a bug in the current driver for Win10 that's putting some other kind of limitation in place?

To me it shows how poor AMD drivers are in relation to NVIDIA. Much higher batch submission times in DX 11 - makes you wonder how bad they are in other non-Mantle titles. So the gains should be expected in DX 12 given their poor DX 11 performance in this game. NVIDIA is fairly consistent so it does have gains across the board and just doesn't have a poor starting point like AMD. At the end of the day, the 980 still wipes the floor with the 290X in DX 12 or Mantle.
 
Well, I'm not debating that the 980 is the superior card. That's obvious. But seeing that kind of leap in performance, and this weird 8FPS ceiling on the GCN cards they tested, it is making me wonder if they could improve performance on their existing cards with further driver optimization (or if there's a bug in there somewhere). I mean, the 290X isn't exactly a "slow" card, so what's the deal with the 8.3FPS?

Of course, if it is just an issue of performance optimizations and not a bug, with Win10 being a free upgrade to anyone on Win7/8, it also kind of makes the point moot. They could just say "why optimize for DX11 when Win10 with DX12 is free?" and it'd be kind of hard to argue with that.
 
I tried Linux for a limited amount of time. It ran stuff just fine. I'm probably going to give it a real test soon. My only issue was that my Creative SoundBlaster ZX didn't work with it because of a lack of drivers.
 
It's too bad this review didn't look at tri-SLI 980s and if there's better scaling with more cores.

This is not a review. It's a preview based on an alpha tech demo of alpha API code. The numbers being given are generally meaningless. The important takeaway is the dramatic performance improvement in general.

You'll get your tri-SLI benchmarks in 8-10 months when Windows 10 ships, finalized drivers ship, and new hardware ships.
 
Man. The AMD cards really benefit from DX12/Mantle over DX11 in those tests.

I wonder why there's such a disparity in the performance improvement for the AMD cards vs the NVIDIA ones? The 290X numbers go from 8.3FPS to 42.9FPS - an increase of 400%. But the 680 shows a 50% increase, since it's already pulling 23FPS and hits 36. Likewise the 980, about a 150% increase.

It's also odd that the 285 and 260X both had framerates right around 8FPS in DX11. I wonder if there's room for optimization in the DX11 driver path on the AMD cards still, or if that was a bug in the current driver for Win10 that's putting some other kind of limitation in place?

Nvidia put a large amount of work into reducing the driver overhead in DX11 with Kepler. They saw a healthy boost early last year with the 337.50 release.
 
Maybe I read a different preview article than some of you, but I do not see this as anything other than a win for AMD. Sure their Windows 10 drivers could use some work in the "legacy" D3D11 department, but otherwise Mantle is more efficient than DX12 for their hardware (this may change with further optimized drivers). You cannot compare those results to current Win 7 D3D11 numbers; if you are, then you did not read their test setup and what OS they are using.

Perhaps the biggest thing that many of you are forgetting is that AMD has been trying to push their APU business. This preview shows that you can now push some top-notch GPUs with them even though they have weaker CPU cores. The market has been moving to laptops over desktops for a while now and has lately been trending into tablets and other portables. APUs are the more ideal chips for those places and for the general user. We power users are a very small piece of the pie.

MS gains from this with Xbone due to the AMD APU inside. They also get to decrease fragmentation in the Windows user base (ask any Android dev what this means).

What did Nvidia gain? Their current GPUs stay in the same performance positions they currently hold in Win 7 and D3D 11. Their mobile SoCs are generally garbage, though the K1 shows some promise; if they can get it to run actual Windows, it may gain from DX12. So really they just get a bump in current GPU performance, a bump which AMD gets as well.
 
Maybe I read a different preview article than some of you, but I do not see this as anything other than a win for AMD. Sure their Windows 10 drivers could use some work in the "legacy" D3D11 department, but otherwise Mantle is more efficient than DX12 for their hardware (this may change with further optimized drivers). You cannot compare those results to current Win 7 D3D11 numbers; if you are, then you did not read their test setup and what OS they are using.

Perhaps the biggest thing that many of you are forgetting is that AMD has been trying to push their APU business. This preview shows that you can now push some top-notch GPUs with them even though they have weaker CPU cores. The market has been moving to laptops over desktops for a while now and has lately been trending into tablets and other portables. APUs are the more ideal chips for those places and for the general user. We power users are a very small piece of the pie.

MS gains from this with Xbone due to the AMD APU inside. They also get to decrease fragmentation in the Windows user base (ask any Android dev what this means).

What did Nvidia gain? Their current GPUs stay in the same performance positions they currently hold in Win 7 and D3D 11. Their mobile SoCs are generally garbage, though the K1 shows some promise; if they can get it to run actual Windows, it may gain from DX12. So really they just get a bump in current GPU performance, a bump which AMD gets as well.

That right there is an example of really looking hard for the silver lining in the cloud. Mantle is dead on arrival and AMD has piss poor drivers in dx 11 as shown by this test.
 
That right there is an example of really looking hard for the silver lining in the cloud. Mantle is dead on arrival and AMD has piss poor drivers in dx 11 as shown by this test.

Actually it took me about 30 seconds to see the big picture. Sad that so many others just see poor Windows 10 drivers and rush to bash AMD for an OS that is not even released other than a preview. GPU and ARM SoC is all Nvidia has and so far only one of those markets can gain from DX12. AMD has GPU, CPU, and APU of which all 3 can benefit. If the GPU market was all gravy then Nvidia would not be trying so hard to break into mobile which is where the majority of "gamers" are these days. I would rather have 3 market segments get an improvement in their value rather than 1, perhaps you disagree? If the tables were turned I would be saying the exact same thing in Nvidia's favor. Simple math 3 > 1.

How is Mantle dead on arrival? It was/is in games and showed an improvement, yes? Are there more games coming out with it? Yes. Will DX12 launch with tons of games using it? No. Will it catch on faster than Mantle? Yes. In the meantime AMD users can get that boost without an unreleased Windows 10 and non-beta drivers; can Nvidia users? Overall Mantle is about as useful as PhysX: it's an item that boosts one camp's appeal but does not really give it total market dominance due to slow adoption. While yes, one is an API and the other is a feature set, the principle of why they were created is the same: provide value to their brand (AMD states they did it to further the market, but nobody is 100% altruistic in business).
 
Actually it took me about 30 seconds to see the big picture. Sad that so many others just see poor Windows 10 drivers and rush to bash AMD for an OS that is not even released other than a preview. GPU and ARM SoC is all Nvidia has and so far only one of those markets can gain from DX12. AMD has GPU, CPU, and APU of which all 3 can benefit. If the GPU market was all gravy then Nvidia would not be trying so hard to break into mobile which is where the majority of "gamers" are these days. I would rather have 3 market segments get an improvement in their value rather than 1, perhaps you disagree? If the tables were turned I would be saying the exact same thing in Nvidia's favor. Simple math 3 > 1.

How is Mantle dead on arrival? It was/is in games and showed an improvement, yes? Are there more games coming out with it? Yes. Will DX12 launch with tons of games using it? No. Will it catch on faster than Mantle? Yes. In the meantime AMD users can get that boost without an unreleased Windows 10 and non-beta drivers; can Nvidia users? Overall Mantle is about as useful as PhysX: it's an item that boosts one camp's appeal but does not really give it total market dominance due to slow adoption. While yes, one is an API and the other is a feature set, the principle of why they were created is the same: provide value to their brand (AMD states they did it to further the market, but nobody is 100% altruistic in business).

Yeah, how's AMD's bottom line these days with their awesome APUs and Mantle? Oh that's right, they've got one foot in the grave. And Mantle is DOA, and even with it in BF4, the 290X still loses to the 980.
 
Just looked at the CPU used, and again it's a very expensive high-end Core i7-4960X. There is more to it than just threads, IPC, cache, and other aspects, so just reducing how many cores are used and the clock speed is not an accurate way of emulating how other CPUs scale; they should have at least used an AMD CPU as well, because you simply cannot use an Intel CPU to represent it.
 
Just looked at the CPU used, and again it's a very expensive high-end Core i7-4960X. There is more to it than just threads, IPC, cache, and other aspects, so just reducing how many cores are used and the clock speed is not an accurate way of emulating how other CPUs scale; they should have at least used an AMD CPU as well, because you simply cannot use an Intel CPU to represent it.

True but it is just a quick preview. Also, Star Swarm is a sort of "best case" scenario and probably not representative of the performance improvement that we'll see with typical games.

When the hell are we going to get UE4 games?
 
When the hell are we going to get UE4 games?

Pretty soon I'd guess. I've been playing with the SDK and it's a pretty amazing engine. There are a lot of devs with released WIP/Beta games done in UE4 if you look around.
 
Yeah, how's AMD's bottom line these days with their awesome APUs and Mantle? Oh that's right, they've got one foot in the grave. And Mantle is DOA, and even with it in BF4, the 290X still loses to the 980.

Who is talking about bottom lines? Man, you really don't want to admit that AMD gains value in 3 market segments while Nvidia gains value in 1. Simple math man, simple math: 3>1. Yeah, AMD needs more revenue, and things like this can help add value and sway people over to their side. More competition is a good thing for consumers.

Again how is Mantle DOA? You have provided no evidence. You are comparing a GPU released a few months ago to a GPU released almost 1.5 years ago and think that is making your case stronger? Instead how does it compare to the 780Ti? Show me some facts here.
 
Again how is Mantle DOA? You have provided no evidence.

I think that DX12 and glNext are going to kind of make Mantle redundant. Right now Mantle is still in closed beta with no real indication that it's coming to other platforms or other GPUs, other than a pile of hopes from Robert Hallock. At this point, it would be easier for devs to just target DX12 / glNext, since you get pretty much all the same benefits of Mantle while targeting a wider swath of the customer base.

There's really no technical need for AMD to continue supporting Mantle. It was devised as a way to generate some excitement to sell some video cards. Just about everything in Mantle could have been done with OpenGL at the time, but bragging about a vendor-neutral library would not have helped AMD sell AMD cards.

I said on day one that Mantle is not about a new graphics API standard but about selling AMD cards. And 18 months later we see that we are not even an inch closer to seeing Mantle on non-AMD or non-Windows setups.
 
I too do believe that with DX 12 and glNext, AMD's Mantel will be redundant. There's just no way Mantel can compete with both API that works independent of vendor.

Perhaps Mantel may have pushed the industry to put the development of DX and OpenGL into high gears, and we can all be grateful if it did. But there's just simply no reason for dev to continue (or start) working with Mantel when you have API that will work everywhere in terms of hardware vendor.
 
I too do believe that with DX 12 and glNext, AMD's Mantel will be redundant. There's just no way Mantel can compete with both API that works independent of vendor.

Perhaps Mantel may have pushed the industry to put the development of DX and OpenGL into high gears, and we can all be grateful if it did. But there's just simply no reason for dev to continue (or start) working with Mantel when you have API that will work everywhere in terms of hardware vendor.

are you doing that on purpose? lol.
 
False. We already discussed this...
The Nvidia Maxwell TDP numbers are the average power consumption in specific gaming situations, cherry-picked by Nvidia, at the base clocks, aka without boost.

When it draws similar amounts of power, it is dissipating the same amount of heat. The coolers are designed for ~200-225w.
Not false, TDP = heat related, not a direct measure of power consumption.

And a card can throw out more heat than its cooler is designed for (TDP) for short periods of time because the sheer mass of the cooler can soak up non-sustained temperature spikes and then dissipate them over time. The heatsink acting as a temporary heatsoak is accounted for when determining TDP.
Basically, as long as heat-output averages below the TDP (without the cooler reaching thermal saturation), the cooling solution is good to go and the rating is fine.
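Toy illustration of that averaging idea (the power trace is made up): short spikes past the TDP are fine as long as the running average the heatsink has to soak stays at or under it.

Code:
# Instantaneous draw can spike past the TDP; the heatsink's thermal mass soaks short spikes
# as long as the average heat output stays under the rating. Trace values are made up.
TDP_W = 165
power_trace_w = [150, 160, 190, 185, 140, 155, 170, 145]  # e.g. one sample per second

avg = sum(power_trace_w) / len(power_trace_w)
print(avg)            # 161.875
print(avg <= TDP_W)   # True -> cooler rating holds despite the 190w spikes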

TDP is NOT power consumption.
TDP is NOT the maximum heat output of the components.
TDP IS how much heat the cooling system is designed to deal with.

No, that's not even close to right. POWER = HEAT, didn't they learn you anything in skool? :D It is a more efficient card, so it DOES MORE WORK for the same unit of power.
Power consumption does NOT directly equal the heat output of a graphics card.
 
Not false, TDP = heat related, not a direct measure of power consumption.

And a card can throw out more heat than its cooler is designed for (TDP) for short periods of time because the sheer mass of the cooler can soak up non-sustained temperature spikes and then dissipate them over time. The heatsink acting as a temporary heatsoak is accounted for when determining TDP.
Basically, as long as heat-output averages below the TDP (without the cooler reaching thermal saturation), the cooling solution is good to go and the rating is fine.

TDP is NOT power consumption.
TDP is NOT the maximum heat output of the components.
TDP IS how much heat the cooling system is designed to deal with.


Power consumption does NOT directly equal the heat output of a graphics card.
Yes, I know. This was also already discussed.
It still isn't accurate in this discussion.
 
Not false, TDP = heat related, not a direct measure of power consumption.

And a card can throw out more heat than its cooler is designed for (TDP) for short periods of time because the sheer mass of the cooler can soak up non-sustained temperature spikes and then dissipate them over time. The heatsink acting as a temporary heatsoak is accounted for when determining TDP.
Basically, as long as heat-output averages below the TDP (without the cooler reaching thermal saturation), the cooling solution is good to go and the rating is fine.

TDP is NOT power consumption.
TDP is NOT the maximum heat output of the components.
TDP IS how much heat the cooling system is designed to deal with.


Power consumption does NOT directly equal the heat output of a graphics card.

Dear god not this again
 
Power consumption does NOT directly equal the heat output of a graphics card.
If the heat from the gfx card isn't responsible for carrying away the power it uses, where and how does that power get dissipated?

Perhaps it's one of those kilowatt LEDs attached to the gfx card?
 
If the heat from the gfx card isnt responsible for the removal of power used, where and how does the power used get dissipated?

Perhaps its one of those KWatt LEDs attached to the gfx card?

We all know that it's the color of the paint on the graphics card that makes it dissipate heat better. You should know that by now!
 
Ah yes I forgot.
Red/Silver stripes are great, multiple 0.15mm thick with 20nm spacing work best.
 
Ah yes I forgot.
Red/Silver stripes are great, multiple 0.15mm thick with 20nm spacing work best.

Someone should redo this Steve Jobs compilation video and insert a GTX 980 in it. T.D.P. Totally maDe uP
 
I heard it was That Damn Propaganda.
Gosh, it might be true!
 
Mantle is garbage, waste of time and money... Both of which are in short supply at AMD.
What were they thinking? Total failure on their part.

It's comforting to know every game I play will run ~20% slower than the equivalent Nvidia GPU, and AMD is too busy tweaking Mantle rather than fix their DX11/12 performance.
Thanks AMD.

It's not garbage, it's just in its infancy, and even in its current state it's far beyond what the original DX started with many years ago. Also, you sound like an nvidia fanboy. So this doesn't backfire on me... I currently have a 970 G1 installed :p
 
Mantle is garbage, waste of time and money... Both of which are in short supply at AMD.
What were they thinking? Total failure on their part.

It's comforting to know every game I play will run ~20% slower than the equivalent Nvidia GPU, and AMD is too busy tweaking Mantle rather than fix their DX11/12 performance.
Thanks AMD.
GTFO. My junk 2x 780s would like to have a word with you.
Swapped those for 2x 290 @ 290X and made $140 in the process.
 