Windows 10, Nvidia being left in the dust.

Sorry to say, but we need better info than those comparisons. The Win7 review had the lower-tier cards and the Win10 review had the higher ones, with very little overlap between the two.
 
And to everyone: in my experience, the OS has not been the cause of any performance improvements. Where the games I have used have improved, it has come through driver updates, not the OS.

There are no games out yet that support the one Win10 feature that could actually be the cause of a benefit: DX12.

Expecting any DX11 game to run better on Win10 than on Win 8.1 boggles my mind, since it is the same API. The placebo effect is at play: out of sheer willpower and wanting performance to be better on a new OS, people believe that it is so. The mind is a powerful trickster.

Something else to consider: there have been new patches for games as well, which can improve performance. GTA V, Project Cars, and The Witcher 3 have all received patches recently, even in the past month.
 
WDDM 1.3 vs WDDM 2.0. Wouldn't be surprised if CPU overhead was much lower with WDDM 2.0.
 
Only if running DX12. The CPU overhead is actually similar with WDDM 2.0 when running DX11 (it is slightly lower, but nothing significant, something like 1-2%).
 
I thought WDDM 2.0 showed great memory-management capability. My Win7 memory usage was 3.--GB; on Win10 it's 2.5GB with the exact same programs open (idle usage).
 
Hmm, I don't think it does that much for system memory usage. If anything it probably uses a bit more, at least on the graphics memory side, but it's pretty much negligible.
 
It seems clear AMD has been putting all of their driver effort into Windows 10 for the last few months, maybe even starting this spring before Win10 launched. It's a free upgrade for everyone on 7 and 8, so with their limited resources it makes sense to drop the older OSes. Obviously AMD still supports previous versions of Windows, but future optimizations will probably be limited to Windows 10 exclusively.

On the flip side, Nvidia is having the opposite problem with Windows 10.
I believe that unless Nvidia pulls an ace from their sleeve with Pascal, AMD will dominate the DX12 era. They already have a huge head start in both DX12 and Windows 10 drivers, especially considering all of the recent flubs Nvidia has made.
 
And nothing has changed with how the cards are positioned from what we have seen thus far, so I don't know how you are drawing that conclusion. The one DX12 benchmark that was shown to be a win for AMD, AOS, now looks to be positioned just like DX11 games were, as long as there is no CPU bottleneck. DX12 won't matter as much for AMD as they hoped, mainly because nV has the resources to ramp things up on the driver side if they need to. We are talking about pre-alpha and beta games; nV has time, and just because people see these as early indicators doesn't mean they will reflect performance once these games launch.

With WDDM 2.0 running DX11 games there won't be much change, outside of AMD possibly fixing its CPU overhead problems, but even that won't change anything, because we can already look at games and tell when it's happening.
 
I don't know how you can say nothing has changed. Even for the Fury X, just check benchmarks since its release: it has gained around 10% more performance, and at this rate it will surpass the 980 Ti across the board by mid-2016. The same could be said about their entire line-up; the 280X is nipping at the 780's heels, same for the 290 and 970. AMD has basically moved all of their GPUs up a tier.

Both AotS and Fable show a ~10% performance gain for AMD over their average DX11 performance. Of course there are some DX11 games that perform similarly to current DX12 benchmarks (Battlefront, for example), so it's too early to claim they dominate the entire API, but it's certainly looking good for them.

Anyone denying these performance margins is simply burying their head in the sand. How many benchmarks will it take before people acknowledge the gains? You can't deny it forever. It's right there in front of your eyes.
 
I do agree AMD looks pretty good initially in DX12/W10, but at the least we can say it has added parity. In most of the DX12 benches/games there is no clear winner at the top, or at least not by the same margin as was the case in DX11. And I have to admit the 7970 crowd has to be the most content and happy owners, given the longevity of their cards.

Hey Razor, I've got a question I have been wondering about regarding Nvidia's boost. Say a card boosts to 1500MHz from its stated base of 1100MHz. What does overclocking to 1386MHz do to the 1500MHz boost? I ask because when it is stated that a card was boosting to 1500 and received a given score, most say that Nvidia cards still have a lot of OCing headroom, yet I would think that Boost already accounts for the OCing and the overclock would only achieve the same results as the original 1100MHz bench. Is there something I am missing?
 
Not quite right. Nvidia boosts to the default boost clock. After that, overclocking adds an offset to said clock.

Example: NV's advertised boost clock is 1386. You add a 114 offset to the core clock. The final boost clock will be 1500, provided you don't hit thermal/power limits. If you don't add an offset, it'll never go above the advertised boost clock.
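Here's a minimal sketch of that offset arithmetic, assuming the behaviour described above (the clocks and the limit value are hypothetical examples, not measurements):
Code:
# Sketch of GPU Boost offset math as described above -- illustrative only.
def effective_boost(advertised_boost_mhz, offset_mhz, limit_mhz=None):
    # Final boost clock = advertised boost + user offset,
    # unless a thermal/power limit forces the card lower.
    target = advertised_boost_mhz + offset_mhz
    return min(target, limit_mhz) if limit_mhz is not None else target

print(effective_boost(1386, 0))          # stock: stays at the advertised 1386
print(effective_boost(1386, 114))        # +114 offset -> 1500
print(effective_boost(1386, 114, 1463))  # hypothetical limit kicks in -> 1463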
 
Dude, different games, different settings; read the review properly. Some of the benchmarks in the new review aren't using AA like the old ones were, and we know nV cards tend to perform better with AA since they tend to take less of a hit in most games. Did you even notice that? I already stated this, though I didn't point it out point blank. I think you just don't want to read the review and are instead just looking at the final numbers.

If you want me to break down the TPU reviews that you linked to in the first post of this thread, I can, but it was easy to see that.

You know what, I will do it.

Assassin's Creed: old review 4x AA, new review no AA

Battlefield 3: old review 4x AA, new review no AA

Battlefield 4: old review 4x AA, new review no AA

Crysis 3: old review 4x AA, new review no AA

Shadow of Mordor: old review 4x AA, new review no AA

There are two other games in the old review that used AA; in the new review those games were replaced with games that don't use AA.

It's a pretty big change in the way the review was done, and it no longer shows any of the performance benefit from AA that nV got in the first review you posted. This is why you are seeing the changes that you think you see.
 
Hrmm my Surface Book with dedicated GPU could DEFINITELY use some better nVidia drivers for REAL! It has win10
 
I don't know why they're not labeling the use of AA in their latest review, but I'm pretty sure TPU is still using AA, or at least maintaining the same test settings.

http://www.techpowerup.com/reviews/AMD/R9_Nano/9.html

http://www.techpowerup.com/reviews/MSI/GTX_980_Ti_Lightning/7.html

Comparing BF3 & looking at 980 Ti #s

@1080 it's 163.9 vs 164.2
@1440 it's 102.6 vs 102
@2160 it's 49.5 vs 49
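To put those deltas in perspective, here's a quick sketch using the fps values quoted above (just the percentage spread between the two reviews):
Code:
# Spread between the 980 Ti's BF3 numbers quoted above from the two TPU reviews.
bf3 = {"1080p": (163.9, 164.2), "1440p": (102.6, 102.0), "2160p": (49.5, 49.0)}
for res, (a, b) in bf3.items():
    print(f"{res}: {100 * (b - a) / a:+.1f}%")  # about +0.2%, -0.6%, -1.0%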
 
Oh, hey, you're back! You forgot to do something while you were gone:

Normalization doesn't "skew" anything; it's just a scale. I un-normalized the data and calculated the % change for each AMD card relative to its Nvidia counterpart (listed in the post). If you'd like a basic math explanation of how that works, I'd be happy to provide it.
 
At 4x AA nV cards don't take much of a performance hit; it's pretty much free in most games, and that has been the case since the G80.

Edit: and if you look at the same games without AA in both reviews, AMD cards have similar frame rates across the two reviews. The only major change is what I mentioned: in the games that no longer use AA in the new review, the impact is significant for AMD cards, close to 20% at times (depending on resolution, game, and card).
 
You forget that AMD cards do what you tell them, whilst Nvidia cards don't.
 
What? Are you saying drivers do whatever they want to do? Where do you get that? If the driver is set to use in-game settings, then that is what it does; I haven't seen any evidence to the contrary, outside of bugs.

Unless you are talking about using a game profile, which I'm not, since TPU doesn't use game profiles in their reviews.
 
Since TaintedSquirrel doesn't seem to be interested, I'll do it.

Taking the 370 vs 950 figure he gave, it looks like he used the following formula to work out the percentage change
Code:
x = ((Pn1 / Pa1) - (Pn2 / Pa2)) * 100

where
x = the percentage change
Pn1 = the relative performance of the 950 in Win7
Pa1 = the relative performance of the 370 in Win7
Pn2 = the relative performance of the 950 in Win10
Pa2 = the relative performance of the 370 in Win10
Plugging in the numbers gives
Code:
((88 / 76) - (31 / 32)) * 100 = 18.9%

But you want a demonstration that using these percentages is the same as calculating from the raw performance numbers! That requires a little algebra...
Code:
Pn1 = (N1 / R1) * 100
Pa1 = (A1 / R1) * 100
Pn2 = (N2 / R2) * 100
Pa2 = (A2 / R2) * 100

where
N1 = the raw performance of the 950 in Win7
A1 = the raw performance of the 370 in Win7
R1 = the raw performance of the reference card (950 XtremeGaming) in Win7
N2 = the raw performance of the 950 in Win10
A2 = the raw performance of the 370 in Win10
R2 = the raw performance of the reference card (980 Ti Lightning) in Win10

Substitute those into the original equation...
Code:
x = ((((N1 / R1) * 100) / ((A1 / R1) * 100)) - (((N2 / R2) * 100) / ((A2 / R2) * 100))) * 100
= (((N1 / R1) / (A1 / R1)) - ((N2 / R2) / (A2 / R2))) * 100
= ((N1 / A1) - (N2 / A2)) * 100
... and you can see that the reference values cancel, so it doesn't matter whether you use the absolute or relative performance numbers.

I know most people aren't mathematically inclined, but I expect this level of algebra is taught to children in most countries.

One thing using the relative numbers doesn't tell you is what caused the change - it could be because the 370 is now faster, or because the 950 is slower for whatever reason. And I don't want to comment on the validity of comparing the two sets of benchmarks in the first place.
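And a quick numeric sketch of that cancellation, with made-up raw fps values chosen only so the relative scores come out to the 88/76 and 31/32 figures used above:
Code:
# Hypothetical raw fps numbers -- picked only so the relative scores match the
# 88/76 (Win7 review) and 31/32 (Win10 review) figures quoted above.
N1, A1, R1 = 44.0, 38.0, 50.0    # 950, 370, reference card in the Win7 review
N2, A2, R2 = 31.0, 32.0, 100.0   # 950, 370, reference card in the Win10 review

# Route 1: via the relative-performance percentages (as in the summary graphs).
Pn1, Pa1 = 100 * N1 / R1, 100 * A1 / R1   # 88, 76
Pn2, Pa2 = 100 * N2 / R2, 100 * A2 / R2   # 31, 32
x_rel = ((Pn1 / Pa1) - (Pn2 / Pa2)) * 100

# Route 2: straight from the raw fps -- the reference cards cancel out.
x_raw = ((N1 / A1) - (N2 / A2)) * 100

print(round(x_rel, 1), round(x_raw, 1))   # both print 18.9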
 
Good post. This is true, but only if the test scenarios stay the same (same games), which isn't the case; that is why the results get skewed, along with the settings changes. You can see it in the raw numbers.
 
TaintedSquirrel is using 76% from this graph and 32% from this graph to claim that the "370 gained 18%, putting it on-par with the GTX 760 & 950."

On the pages that contain both of those graphs, TPU states: "The graphs on this page show a combined performance summary of all the tests on previous pages, broken down by resolution." I looked for benchmarks for the 370 in the Win10 review and found none.

Where does the 32% score for the 370 in that review come from?
 
Damn, this makes me want to switch over to team red with the way AMD is going, unless team green can figure out a solution as well.
 
THIS makes you want to switch over? I'll assume you read the first post and nothing else. Oh boys.
 
Wait, so a guy who posts sensationalist headlines on Reddit has no idea what he's talking about?
 


What, you haven't been paying attention to why the Fury X seems to lose ground with AA on?

Both cards can do 4 samples per clock, but there seems to be something in the GCN architecture that handles AA a bit less efficiently. It's not much, 5% on average, but the penalty is there.

AA and AF performance isn't investigated in reviews like it used to be, but when the AMD benchmarks for the Fury X were leaked with no AA or AF, it got looked into.

It's great how people just post without thinking about the reasons. Marketing benchmarks are just that: they show best-case scenarios... in AMD's case, pure shader performance.
 
Well I'm not sure what kind of AA you're talking about, but 4x MSAA is most definitely not "pretty much free in most games", at least if we're talking modern DX11 AAA titles.
 
Well, it depends on bandwidth and memory limits, which are affected by resolution and of course the game itself; as long as those limits aren't hit there is pretty much no hit on frame rates on nV cards at 4x AA. But that doesn't really matter: there is about a 5% extra penalty for AMD cards when 4x AA is used, and the same goes for 8x and even 2x AA.

Here is one with 8x AA:

http://www.hardwarezone.com.sg/revi...ut-maxwell/test-setup-and-performance-results

nV loses 40% and AMD loses 45%...
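For reference, this is all those "loses X%" figures mean; a trivial sketch with placeholder fps numbers (not the hardwarezone results):
Code:
# How an AA performance-hit percentage is computed -- placeholder numbers only.
def aa_hit_pct(fps_no_aa, fps_with_aa):
    return 100 * (fps_no_aa - fps_with_aa) / fps_no_aa

print(f"{aa_hit_pct(100.0, 60.0):.0f}% hit")  # 100 fps -> 60 fps = 40% hit
print(f"{aa_hit_pct(100.0, 55.0):.0f}% hit")  # 100 fps -> 55 fps = 45% hit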
 
Don't know about the 5% more penalty bit for AMD cards, but no hit on nV cards at 4x AA just isn't true.

[Chart: GTA V anti-aliasing performance]

[Chart: Far Cry 4 anti-aliasing performance]

[Chart: Watch Dogs anti-aliasing performance]

[Chart: COD: Ghosts anti-aliasing performance]

[Chart: COD: Black Ops II anti-aliasing performance]

[Chart: Max Payne 3 MSAA performance]

[Chart: BF3 deferred AA performance]

Granted it's not a huge sample, but even in the best case scenario with Black Ops II there's still a 10% penalty. The norm seems to be around 20-25%, but can be almost 40%.

In any case, I'm simply trying to explain (guessing really) why psyside :rolleyes: at your post.
 
Hmm, interesting. OK, I might be wrong about them not taking a hit.
 
Are you mixing up AF and AA? AF is what AMD was disabling in the marketing slides and what is generally considered "free" now.

It was an interesting revelation, though; I'm wondering if anyone has actually looked into the impact of AF and why that is (or at least why AMD felt that way).
 
ah yeah sorry bah lol.

But in any case AMD does have a disadvantage even with AA, and definitely with AF.

http://www.hardwareluxx.com/index.p...5798-reviewed-amd-r9-fury-x-4gb.html?start=10

more with AA and AF tests. AMD just takes more of a hit.

http://www.pcworld.com/article/2982...e-pcs-incredible-shrinking-future.html?page=2

And another; this one is 4x AA, and again AMD cards take a greater hit.

Not sure what you are reading; your links contradict your argument. The Fury X has a smaller loss in performance when turning on AA and AF compared to the 980 Ti. Now, if you mean SSAA instead of AA (too general), I would agree, since the Fury X does not have enough VRAM. I didn't bother reading the entire second link; they are comparing a temp-constrained card to a full-TDP card. I'm not sure what you expected to happen: when the 970 is TDP/temp constrained, as seen in the [H] review, it does poorly too, even though it is touted as an efficient card.
 
Well, multisample AA hits memory, and that could be influenced by Fury's smaller VRAM pool. Other than that, it is only slightly more inefficient percentage-wise than Nvidia's 980 Ti; I mean, if the Fury were as efficient, then at 4x it should see 46 fps instead of 44 fps.

BTW, the Fury X is more efficient than the non-Ti 980.
 
Sorry, but if you're taking 4 samples for every pixel then you are necessarily doing 4x as much work. Period.

The trick with MSAA is that it doesn't take 4 samples for EVERY pixel, just ones that are deemed to be 'edge' pixels. So the hit is significantly reduced.
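A toy cost model of that idea (following the description above; the edge-pixel fraction is made up, and real MSAA cost also depends on bandwidth and the renderer):
Code:
# Toy shading-cost model: SSAA shades every pixel N times, while MSAA only pays
# the extra samples on pixels flagged as edges. Illustrative assumptions only.
def relative_cost(samples, edge_fraction, msaa=True):
    if not msaa:                                   # supersampling: N x everywhere
        return float(samples)
    return (1 - edge_fraction) * 1 + edge_fraction * samples

print(relative_cost(4, 0.10, msaa=False))  # 4.0 -> 4x SSAA, 4x the work
print(relative_cost(4, 0.10))              # 1.3 -> 4x MSAA with 10% edge pixels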


In any case, MSAA performance is a moot point in modern games thanks to deferred rendering, courtesy of consoles.

CONSOLES STOLE OUR MSAA!!! :p
 
Yeah these days I either go for SMAA or SSAA if I have the power to spare.
 