[WCCFTECH]AMD Rolling Out New Polaris GPU Revisions With 50% Better Perf/Watt

Oh, so you are saying that Global Foundries is producing these cards?


Yes, it is the same process, but Zen doesn't use the same automated layout flow as AMD's GPUs, so it should be able to hit higher clocks. Again, it's the architecture-vs-node argument: architecture drives clock speed, and the node gives very little in increased clocks nowadays.

This is also why the respin theory is just BS. It doesn't fit with everything we know about how target clock speeds, power usage, etc. are set in the very first steps of designing an architecture. If AMD didn't plan for it, it's not going to happen. In essence, the claim here is that P10 could get to 95 watts at its current base and boost clocks, which would match the power usage of P11, a chip half its size. Does that even make sense lol? Nope. Nothing logical about it.

Also, looking at other chips and their features versus Polaris, namely Pascal: the amount of money nV has invested in transistor layout and architecture to reduce power consumption is money AMD simply doesn't have to spend. So don't expect crazy theories like this to pan out.

Just one quarter of nV's R&D spending on its chips is like an entire year of R&D for AMD's GPUs. Although I'm a staunch believer that R&D expense isn't everything, things like hand-laid-out transistors are not something you can skip without spending lots of money. We saw the difference with Bulldozer, with AMD engineers talking about hand layouts vs. automated tools: roughly 30% larger and 30% higher power usage when fully automated. That is something smarts can't overcome.
 
Basic factors affecting processor power:
P ~ f * C * V^2
f = frequency
C = capacitance
V = voltage

Lowering the voltage, since it's a square term, has a huge effect on overall power. JZ's RX 480 was running lower-than-normal voltages. Firmware fine-tuning the voltage throughout the GPU, where possible, could improve this as well. Node enhancements that reduce leakage would also allow lower voltages (I think that is what we are seeing here).

C as in capacitance: this is affected by chip traces and the surface area between them. Firmware shutting down portions of the GPU that aren't being used, where possible, reduces both the switching capacitance and the power going to those idle portions, since with parts of the GPU shut down you have less active area contributing to capacitance. Another fine-tuning avenue.

Frequency, in this case, is the one factor you actually want higher.
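To put rough numbers on those three factors, here is a minimal back-of-the-envelope sketch of the P ~ f * C * V^2 relationship (dynamic power only, leakage ignored; the scaling factors below are illustrative examples, not measured RX 480 values):

Code:
# Rough dynamic-power scaling: P ~ f * C * V^2 (static leakage ignored).
# The ratios below are illustrative examples, not measured RX 480 numbers.

def relative_power(f_ratio, c_ratio, v_ratio):
    """Power relative to baseline when frequency, capacitance and voltage scale."""
    return f_ratio * c_ratio * v_ratio ** 2

# Undervolt 10% at the same clocks on the same silicon:
print(relative_power(1.00, 1.00, 0.90))   # ~0.81 -> roughly 19% less power

# Undervolt 15% and power-gate ~10% of the switching capacitance:
print(relative_power(1.00, 0.90, 0.85))   # ~0.65 -> roughly 35% less power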

Of course a revision can address capacitance as well as leakage in a chip, thus allowing lower voltages. AMD may have been working on a revision but had to meet that June deadline, since that is the date they published for Polaris availability. Also, GloFo will be improving its process as time goes on; I would think TSMC will be improving its process as well.

I really don't think AMD has much choice in this if they want to get back into mobile in a big way: they will have to improve Polaris every way possible. Vega does not look like it will be profitable on the low end.

Reference:
http://www.ijaiem.org/volume2issue7/IJAIEM-2013-07-23-077.pdf
 
Yep, but a 50% reduction in power is HUGE lol.

Another example: take a look at Intel. They have lower-power variants of their high-end chips, denoted by the suffix S, which cost much more. As node sizes drop we will start seeing GPUs with a wide range of voltages, but for GPUs the market really isn't there for such a small volume of low-power variants. Intel gets away with it because of the sheer volume they produce; they end up with a nice stockpile of low-voltage chips to address the market that wants them.

Intel Core i7-4790 Haswell Quad-Core 3.6 GHz LGA 1150 84W BX80646I74790 Desktop Processor Intel HD Graphics 4600 - Newegg.com

regular version

Intel Core i7-4790S Haswell Quad-Core 3.2 GHz LGA 1150 65W BX80646I74790S Desktop Processor Intel HD Graphics 4600 - Newegg.com

S version

Same price, but the S version consumes about 23% less power (65 W vs. 84 W TDP) at roughly 11% lower clocks.

I think Skylake uses R and T as its suffixes for these kinds of chips.

6th Generation Intel® Core™ i7 Processors

There is roughly a 2.6x difference in power between the T version and the K version (35 W vs. 91 W TDP), even though the chips are the same silicon and the clock-speed difference is only about 15%.
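For reference, here is the arithmetic behind those two comparisons, using the TDPs and clocks from the listings linked above (a sketch only; TDP is a coarse proxy for real power draw, and the Haswell figures are base clocks while the Skylake figures are max turbo):

Code:
# Perf/watt arithmetic for the Intel SKUs quoted in this thread.
# Clocks (GHz) and TDP (W) taken from the listings above; TDP is only a
# rough proxy for actual power draw under load.
skus = {
    "i7-4790":  (3.6, 84),   # Haswell, regular (base clock)
    "i7-4790S": (3.2, 65),   # Haswell, S variant (base clock)
    "i7-6700K": (4.2, 91),   # Skylake, K (max turbo)
    "i7-6700":  (4.0, 65),   # Skylake, regular (max turbo)
    "i7-6700T": (3.6, 35),   # Skylake, T variant (max turbo)
}

def compare(base, variant):
    (fb, pb), (fv, pv) = skus[base], skus[variant]
    print(f"{variant} vs {base}: {1 - fv/fb:.0%} lower clock, "
          f"{1 - pv/pb:.0%} lower TDP ({pb/pv:.1f}x)")

compare("i7-4790", "i7-4790S")   # ~11% lower clock, ~23% lower TDP (1.3x)
compare("i7-6700K", "i7-6700T")  # ~14% lower clock, ~62% lower TDP (2.6x)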

So, actually, I think it was with Sandy Bridge that Intel introduced a new step into its verification process called class testing (it could have been earlier, but I wasn't paying much attention to low-power Intel chips before Nehalem lol), which essentially splits the low-voltage chips out from the regular batch to improve binning.
 
Yep, but a 50% reduction in power is HUGE lol.

another example, take a look at Intel, they have lower power variants of their high end chips denoted by the suffix S, which cost much more. As node size drops we will start seeing GPU's with wide range of voltages, but for GPU's the market really isn't there for such a low amount of those low power variants. Intel gets away with it because the sheer amount of production they do, they will have a nice stock pile of those low voltage chips to address that market that wants those chips.
Sure is indeed!
It may mean AMD just did not have everything right at launch, and was also affected by GloFo's new node, while Nvidia had it down pat from the get-go. If that's the case, AMD could see some hefty improvements. It's all conjecture, but when you do see some Polaris cards blowing away others way beyond the normal deviations, that does hint things can get much better for Polaris. I really don't think AMD has a choice in the matter: they will need to improve Polaris or sell it at prices so low they can't make money on it (the story of AMD's recent history).
 
Too many tries; they would have had to mess up both P10 and P11, and that's a lot of metal and mask respins to still not fix the problem ;)

Variation in core voltages across the same chips on one node is not uncommon. It's just that the market for GPUs, and the volume they produce, isn't worth the effort of splitting them out. This is pretty much the same thing nV did with ASIC quality levels for its super-enthusiast boards.
 
too many tries, they would have had to mess up with both P10 and P11, that's a lot of metal and mask respins to not be able to fix the problem ;)

The variation in core voltages on the same chips on one node is not uncommon. Its just that the market for GPU's and volume they produce isn't worth the effort to split them out. This is pretty much the same thing nV did with the asic levels for their super enthusiast boards.
Except the deviation is nowhere near 50%. Intel's lower-power chips are also clocked way slower, with reduced voltage, not at the same clock speed. Nor do Nvidia GPUs have this much variance within the same SKU. In other words it looks very abnormal, meaning there is probably a good chance for improvement, mostly via reduced voltage settings once less leaky chips can be made. Other fine-tuning will probably help as well. It almost looks like AMD just set the voltage high so more GPUs would be usable, to get the cards out. I expected improvement in AMD's power, max clocks, etc.; I didn't expect samples using way less power and overclocking way better to show up after just a couple of months. Binning has separated the higher-performing parts from the others, but that was more like 10-20% variation at best in the past.

So in a way this is good news that Polaris may see a big improvement - bad news that AMD has to stumble along to get there.
 
Except the deviation is nowhere near 50%, Intel lower power chips are also clocked way slower with reduced voltage - not the same clock speed. Nor has Nvidia gpu's have this much variance on the same sku. In other words looks to be very abnormal. Meaning there is probably a good chance for improvement mostly with reduced voltage settings once less leaky chips can be made. Other fine tuning will probably help as well. Almost looks like AMD just raised the voltage high so more gpu's will be usable, to get the cards out. I expect improvement with AMD power, max clocks etc. I didn't expect after a couple of months samples now using way less power and OCing way better that is showing up now. Binning has separated the higher performing ones from the others, more like 10%-20% variations at best in the past.

So in a way this is good news that Polaris may see a big improvement - bad news that AMD has to stumble along to get there.


No, it's much more than 50%. I just linked a 15% difference in clocks with close to a 3x reduction in power for the latest Skylake chips :)

Intel® Core™ i7-6700T Processor (8M Cache, up to 3.60 GHz) 35 W

Intel® Core™ i7-6700K Processor (8M Cache, up to 4.20 GHz) 91 W

We can always go by this one too, where the T variant still shows close to a 2x power reduction:

Intel® Core™ i7-6700 Processor (8M Cache, up to 4.00 GHz) 65 W

If you want to see what the T-variant chip can do, the mobile chips of the same family are very close to, if not the same as, the T variants, and yeah, they have very good overclockability, just like the desktop processors, at a much lower power envelope. (The mobile versions of the same chip sit at 45 watts at around the same frequency, and they are the same silicon as the desktop chips.)

No, there isn't that much flexibility in a node once it is ready for mass production of a given chip. We have never seen node maturity affect power consumption to that degree.

Variations and errors in the wafer, plus node variation, compound the factor. What AMD might have been expecting to get out of Polaris they weren't able to get, because they couldn't predict the percentage of lower-voltage cores vs. higher-voltage cores. That could have happened. Something like that won't change with node maturity, however.
 
No, it's much more than 50%. I just linked a 15% difference in clocks with close to a 3x reduction in power for the latest Skylake chips :)

Intel® Core™ i7-6700T Processor (8M Cache, up to 3.60 GHz) 35 W

Intel® Core™ i7-6700K Processor (8M Cache, up to 4.20 GHz) 91 W

We can always go by this one too which the T variant is still close to two times the reduction of power

Intel® Core™ i7-6700 Processor (8M Cache, up to 4.00 GHz) 65 W

If you want to see what the T-variant chip can do, the mobile chips of the same family are very close to, if not the same as, the T variants, and yeah, they have very good overclockability, just like the desktop processors, at a much lower power envelope. (The mobile versions of the same chip sit at 45 watts at around the same frequency, and they are the same silicon as the desktop chips.)

No there isn't that much flexibility in nodes once they are ready for mass production for a certain chip. We have never seen node maturity affect power consumption to that degree.

Variations in errors in the wafer and node variations compound the factor, what AMD might have been expecting to get out of Polaris they weren't able to get because they weren't able to figure out the % of lower voltage cores vs. higher voltage cores. That could have happened. Something like that won't change with node maturity how ever.
We need to see real tested data in the mix. These CPUs include GPUs, and the power ratings reflect the GPU as well; under what kind of load? What about speed/voltage differences on the GPUs? My i5-6500 is rated at 65 W and never comes close to that: with the CPU at 100% load the whole system drew less than 65 W, with the CPU itself around 35 W. I will be building almost the same system with that same CPU for my daughter and will probably re-run those tests; I had the iGPU turned off since there is a discrete video card in it.

Also, we need to be talking about something like an i7-6700K, the same SKU, with a 50% deviation just on the CPU, undervolting, etc., to see how far you can actually push the power down. There are differences between the K and non-K chips. I almost think we are talking apples and oranges with GPUs and CPUs, but maybe it applies as well. Does anyone see 50% power deviations on the i7 (just the CPU) at the same clock speed, while it also overclocks >10% better than the others? No.
 
We need to see real tested data in the mix - these CPU's include GPU's and the power ratings are reflecting the GPUs as well, under what kind of load? Speed/voltage differences on the gpu's? My I5 6500 is rated at 65w - it never even comes close to 65w - I had the whole system less than 65w with the I5 cpu at 100% as in 35w. I will be building almost the same system using that same CPU for my daughter I will probably re-run those tests, I had the GPU turned off since having a discrete video card in it.

Also we need to be talking like an I7-6700K - same skew - with a 50% deviation just on the cpu - undervolting etc. to see how low you can affect the actual power. There are differences between the K and non-K's in the chip. I almost think we are talking apples and oranges with GPU's and CPU's but maybe would apply as well. Does anyone see 50% deviations with power on the I7 (just the cpu) at the same clock speed and OC at the same time >10%. No.


That is the reason Intel has split those chips out into another SKU: those are the only chips that will show you that kind of power differential when you push the chip to the max.

I don't have white papers on Intel's class testing; all I know are the basics. But they do this before they bin, and it's specific to what the chip can do characteristic-wise: TDP, voltage, and frequency are three of the main things they look at.

http://download.intel.com/pressroom/kits/chipmaking/Making_of_a_Chip.pdf

Page 12

Most companies don't do this. They do have an IC test step, which Intel does too, even before class testing, but after the IC step they go straight to binning. The IC step is basically what GPU manufacturers do, where they look at functional units, errors in the chip, etc.

AMD does kind of do class testing with its CPUs too (it might call it something else), but not to the degree Intel has done recently. We might see it used more extensively with Zen, though.
 
That is the reason why Intel has split those chips out into another sku, because those are the only chips that will show you that kind of power differential with pushing the chip to the max.

I don't have white papers on Intel's class testing all I know are the basics but they do this before they bin, and its specific to what the chip can do characteristic wise, TDP, voltage, frequency are three of the main things they look at.

http://download.intel.com/pressroom/kits/chipmaking/Making_of_a_Chip.pdf

Page 12

Most companies don't do this. They do have an IC test step, which Intel does too, even before class testing, but after the IC step they go straight to binning.
I believe you are taking a leap of faith with that data. If you take that lower-power SKU, raise its clock speed to the i7-6700K's (you can't, due to the locked multiplier, and adjusting the bus speed makes such a test null and void) and use whatever voltage it takes to get the same reliability, I would bet the power requirements end up very close or even higher. What I'm saying is that you can't reliably compare different SKUs with different firmware, different voltage control based on the lower clock speeds, etc., including different GPU speeds and different controls for staying within a given power and thermal limit.

The only way to see the real deviation is to take the same SKU, such as the i7-6700K, in a large sample, and test the deviations. I do not believe 50% exists there. Yet here, on the RX 480, one sample out of many is showing 95 W versus 153 W at a higher sustained clock speed. That is a huge, rather abnormal difference. Now, is that a fluke? Maybe. I would want to see more examples of what JZ had out in the wild.
 
The only way to see real deviation is to take the same skew such as an I7 6700K - a large sampling - and test the deviations. 50% there I do not see will exist. Yet here on the RX 480 one sample out of many is showing 95w vice 153w at a higher sustain clock speed. A huge difference that is rather abnormal. Now is that a fluke - maybe - I would want to see more examples of what JZ had out in the wild.


You won't see that deviation within that single SKU; those chips have already been binned out. That is the problem: GPUs are not binned the same way.

So the only way you can do it is, let's take a mobile chip vs. the T variant.

The mobile chips are 45 watts max; the T variant at the same clocks is still at 35 watts. So you are getting roughly a 20% power reduction (22%, to be exact) at the same frequency, with less voltage. And the mobile chips are binned for low power usage and voltage too.
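As a rough sanity check on that comparison, assuming the two parts really do run the same frequency and ignoring leakage, the P ~ V^2 relationship implies only a modest voltage gap between them (a sketch using the 45 W and 35 W figures above):

Code:
# If two otherwise-identical chips run the same frequency, dynamic power
# scales with V^2, so a 45 W vs 35 W gap implies a fairly small voltage delta.
p_mobile, p_t = 45.0, 35.0          # figures quoted above
v_ratio = (p_t / p_mobile) ** 0.5
print(f"implied T-variant voltage: ~{v_ratio:.0%} of the mobile part's")  # ~88%
print(f"power reduction: ~{1 - p_t / p_mobile:.0%}")                      # ~22%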
 
You won't see that deviation within that single SKU; those chips have already been binned out. That is the problem: GPUs are not binned the same way.
Yeah, I understand what you are saying, but what I'm thinking is that you also can't use the other SKUs to prove much, because there are too many variables thrown in.
 
Yeah I understand what you are saying but you also can't just use the other skews to prove much due to too many variables thrown in is what I am thinking as well.


The other SKUs came from the same wafers :)

They are the same chips, just binned.

They don't have different wafers or production lines for mobile vs. desktop (same-configuration chips, that is).
 
The other sku's came from the same wafers :)

They are the same chips, just binned.

They don't have different wafers or production lines for mobile vs desktop *same configuration chips
Yes, I know that. AMD does the same thing with its mobile chips, keeping the power at 15 W, except the firmware straps the power limit to 15 W even though it reduces performance severely by underclocking the hell out of it. You can't compare that to a desktop processor where power and temperature are less limited. You have other variables, like firmware, that keep the chip's power rating that low and keep it cool, such as underclocking and totally different voltage/temperature curves. You would have to match up the frequencies and behavior (fix the frequency), apply a voltage both chips are stable at, and then compare the two SKUs with a reliable benchmark to see the real power difference, not just use the basic ratings Intel publishes for the two different SKUs.

Let's just say I disagree with you; not saying you are wrong either.
 
Yes I know that, AMD does the same thing with their mobile chips keeping their power at 15w except the firmware straps that power limit to 15w even though it reduces the performance severely by underclocking the hell out of it. You can't compare that to the desktop processor where power is less limited or temperature less limited. You have other variables as in firmware that will keep the power rating of the chip that low and cool such as by underclocking it and having totally different voltage/temperature curves. You would have to match up the frequencies, behaviors (fix the frequency) apply the voltage that both chips are stable at and then compare the two different skews using a reliable benchmark to see what power differences you have - not use the basic ratings of the two different skews posted by Intel.

Let just say I just disagree with you, not saying you are wrong either.


You are not understanding this part: it's like building a desktop with the T variant. You have the same limitations, from a chassis standpoint, as with the K version or the non-K version; no differences whatsoever.

My mom upgraded her office computers to slim desktops last year, and they all came with T variants; that is what the T variants are used for when you have limited cooling capability. But those chips could overclock like crazy and still didn't draw as much power as the full desktop ones, not even close. Of course you are paying for that lol.

But again, I couldn't push them to the max overclocks of the other chips, because I couldn't test beyond what the coolers in those systems could handle, unfortunately.

Also, as you well know, starting with less voltage doesn't mean the chip can reach the same voltage, or the same frequency, as the other chips, even if the cooling were available.
 
You are not understanding this part: it's like building a desktop with the T variant. You have the same limitations, from a chassis standpoint, as with the K version or the non-K version; no differences whatsoever.

My mom upgraded her office computers to slim desktops last year, and they all came with T variants; that is what the T variants are used for when you have limited cooling capability. But those chips could overclock like crazy and still didn't draw as much power as the full desktop ones, not even close. Of course you are paying for that lol.

But again, they won't get to the max overclocks of the other chips because I couldn't test beyond what the coolers those system had unfortunately.

Also as you well know having less voltage to begin with doesn't mean you can get up to the same voltage as the other chips too or the frequency even if the cooling was available.
Well, the proof will be in the pudding. We will see if this magical new Polaris shows up. Knowing AMD's past, they will roll better (even revised) GPUs in transparently before the next launch of refreshed cards. So if we see more JZ-type results, and especially if it is confirmed that some cards have a revised Polaris on board, then that will be that.
 
I completely agree. The Nvidia Fan Force is out in force. :ROFLMAO: I bet if Jayztwocents posted a 1080 video that showed more performance with less power usage and the board was different from what came before, they would be pointing to him as a perfect example of unbiased reporting. :ROFLMAO: Not really sure why they get their jollies off bashing AMD but then again, I guess we all need a hobby or two. :D

Same thing you said when we were opposed to believing the 1600mhz rumors by WCCF; 'oh look it's the nvidia fans feeling threatened by Polaris hee-hee-ha-ha'.

Same exact thing you said when we were opposed to believing the 110W TDP AMD claimed.

Same exact thing you said when we were opposed to believing that two RX480s operating with 50% mGPU scaling were more powerful and more efficient than a GTX 1080 (another lie from AMD, and an utterly retarded one at that) - not that it stopped you from believing it.

Maybe razor1 and I are indeed hired PR guys whose sole aim is to defend NV from the encroaching revised Polaris menace - but from a purely statistical point of view I would say the odds of you being right are 1/1000000000000000000000000000
 
Same thing you said when we were opposed to believing the 1600mhz rumors by WCCF; 'oh look it's the nvidia fans feeling threatened by Polaris hee-hee-ha-ha'.

Same exact thing you said when we were opposed to believing the 110W TDP AMD claimed.

Same exact thing you said when we were opposed to believing that two RX480s operating with 50% mGPU scaling were more powerful and more efficient than a GTX 1080 (another lie from AMD, and an utterly retarded one at that) - not that it stopped you from believing it.

Maybe razor1 and I are indeed hired PR guys whose sole aim is to defend NV from the encroaching revised Polaris menace - but from a purely statistical point of view I would say the odds of you being right are 1/1000000000000000000000000000

You forgot Polaris would have 2560-3072SP too :D
 
You forgot Polaris would have 2560-3072SP too :D

I also forgot about the RX 480 not having a power issue, and the claim that it is not PCI-E compliant being false; I guess PCI-SIG is part of the nvidia defense force as well.

I have no *feelings* whatsoever about AMD or NV, couldn't care less what happens to them so long as it doesn't affect me, but thinking about all the arguments we had and all the accusations of bias and shit while reading this: High PCIe Slot Power Draw Costs RX 480 PCI-SIG Integrator Listing

Was really just priceless, ah - my sides.
 
The 1070m is lackluster as hell, so hopefully AMD can deliver something better in power draw and perf. Overpriced and overhyped for 1 hour of battery life, meh; makes me wonder wtf the power draw is.
 
1070m is lackluster as hell so hopefully amd can deliver something better power draw and perf. Overpriced and overhyped for 1hr battery life meh, makes me wonder wtf the power draw is.


When was the last time you saw a desktop-replacement laptop with great battery life? And there is no such thing as a 1070m; it's a full 1070, a bit different from the standalone cards: it has more CUDA cores, clocked a bit lower.
 
1070m is lackluster as hell so hopefully amd can deliver something better power draw and perf. Overpriced and overhyped for 1hr battery life meh, makes me wonder wtf the power draw is.

Uh, this has been tested, and frankly I'm confused by your statement "1070m is lackluster" - relative to what ? Is there some other graphics card that is available in mobile that offers better perf/w ?

The mobile 1070 and 1080 perform almost the same as their desktop counterparts; if that is lackluster to you, then we don't agree on the meaning of the term.
 
When was the last time you saw a desktop replacement laptop have great battery life and there is no such thing as a 1070m, its a full 1070 a bit different then the stand alone cards, it has more CUDA cores clocked a bit lower.


Hence why I specified mobile, it has more cores and is different than the desktop variant.


Uh, this has been tested, and frankly I'm confused by your statement "1070m is lackluster" - relative to what ? Is there some other graphics card that is available in mobile that offers better perf/w ?

The mobile 1070 and 1080 perform almost the same as the desktop counterparts, if that is lackluster to you then we don't agree on the meaning of the term lackluster

Lackluster in terms of the massive power savings that were touted: it lasts about as long as a 980M on a 60 Wh battery, and that is with an FPS cap set.
The price sucks big time too; it's way more expensive than previous gens and not that much faster in some of the latest benchmarks. The 980 Ti has beaten it in how many of the VR benchmarks? In BF1 DX12 performance, older GPUs are very close to it at 1440p; the 390X and 980 Ti were less than 10 FPS behind and all over 60 FPS, and 4K is even closer.



You expected battery life out of a gaming laptop? I have a bridge to sell you.

Should have clarified: I expected the 1070, with its highly touted low power draw, to at least outperform previous-gen laptops by a good margin. That said, I run it plugged in 95% of the time; it's meant to be a portable machine for running VMs along with streaming Kodi.
 
You stated that you expected "better performance" but you are running a frame rate cap. So color me confused. If you're capping the FPS then both GPUs are probably not able to stretch their legs, depending on the game and settings. What exactly did you expect to be much better? Battery life?
 
And seriously how often are you gaming on a laptop where you are running off the battery and can't plug it in?
 
Same thing you said when we were opposed to believing the 1600mhz rumors by WCCF; 'oh look it's the nvidia fans feeling threatened by Polaris hee-hee-ha-ha'.

Same exact thing you said when we were opposed to believing the 110W TDP AMD claimed.

Same exact thing you said when we were opposed to believing that two RX480s operating with 50% mGPU scaling were more powerful and more efficient than a GTX 1080 (another lie from AMD, and an utterly retarded one at that) - not that it stopped you from believing it.

Maybe razor1 and I are indeed hired PR guys whose sole aim is to defend NV from the encroaching revised Polaris menace - but from a purely statistical point of view I would say the odds of you being right are 1/1000000000000000000000000000

Done speaking about off topic stuff, bye.
 
Nah, I just say it when you guys come in and bash AMD all day and all night. You do realize that everything you said up there about me is a lie, don't you? You kept saying I said the exact same thing, and then deflect from your obvious bashing. Heck, all the Nvboys liked your post; that is pretty telling right there. If you and the others truly have no feelings one way or the other, then why are you ALWAYS in here bashing AMD? Hmm? Facts and actions speak for themselves.


Do you want us to link your posts? I'm sure it can be done if you like. There has definitely been more than one occasion where you have stated something similar.
 
Do you want us to link your posts? I'm sure it can be done if you like? There has definitely been more then one occasion you have stated something similar.

:) Personally, the GTR is not a binned or unique card, at least not in a one-off, XFX-only sort of way. Two different manufacturers use the same board, and it was shown to be using far less power with far better overclocking ability. Not for me though, since I don't really game all that much anymore. Having two of those would be nice, though.
 
No, you did not say something similar; it was said that it was exactly the same thing, pointing toward very specific circumstances. LOL. Have fun, Nvboy, I know I am. :) Notice also that the bashing does not occur on the Nv side with the AMD users here? Yep. Personally, the GTR is not a binned or unique card, at least not in a one-off, XFX-only sort of way. Two different manufacturers use the same board, and it was shown to be using far less power with far better overclocking ability. Not for me though, since I don't really game all that much anymore. Having two of those would be nice, though.


Well, it's not a binned chip; the card doesn't matter, since GPUs aren't binned the same way CPUs are. The voltage doesn't matter for the most part, since they can't sell a lower-voltage chip in a different segment of the GPU market unless they want to raise prices, because if they bin that way the cost of binning has to be included. If you can't understand that, you just don't know how binning and the GPU industry work..... Sorry to say it, but you shouldn't assume others here don't know that. I have given specific examples of when things like that happen (voltage ranges based on node differences on CPUs), why they are separated into different classes, and how those variations get larger as the node shrinks.

This has nothing to do with respins, it has to do with normal silicon growth curves and production.
 
Out of curiosity, did anyone see if that Jayz guy had a GPU-Z screenshot from that XFX card? I'd really like to see what it reports as the chip revision as that would tell us if they have made any changes. From what I recall the RX 480 launched with a chip revision of C0. I didn't see anything in that youtube video but maybe someone else knows.

Edit: The launch chip revision is C7, not C0.
 
Out of curiosity, did anyone see if that Jayz guy had a GPU-Z screenshot from that XFX card? I'd really like to see what it reports as the chip revision as that would tell us if they have made any changes. From what I recall the RX 480 launched with a chip revision of C0. I didn't see anything in that youtube video but maybe someone else knows.


If there was a respin, all cards after a certain point would carry the same new revision, and model numbers at retailers would change too. None of that has happened.
 
You stated that you expected "better performance" but you are running a frame rate cap. So color me confused. If you're capping the FPS then both GPUs are probably not able to stretch their legs, depending on the game and settings. What exactly did you expect to be much better? Battery life?
I think you missed my replies. I expect it to outperform a 980 Ti in VR, which it doesn't, yet it has VR slapped all over it. Likewise I expected it to outperform the 390X in DX12 BF1 by a large margin, not deliver similar performance.

So I run my shooters at 60 fps, sipping power; I play to win, not to have particle effects blocking my view, so 4K BF1 hitting 40 fps on ultra is promising. But it looks like the 480 GTR from that JZ video could do that at a fraction of the price and possibly less power draw, which would be amazing given the savings, and let's face it, DX12 seems a lot more stable on AMD GPUs currently.

But how dare I hope that AMD can outdo that, save $700, and get a similar system that does what I want; I'm such a bad person. The bar has been set pretty friggin low, at least.
 
I think you missed my replies. I expect it to outperform a 980 Ti in VR, which it doesn't, yet it has VR slapped all over it. Likewise I expected it to outperform the 390X in DX12 BF1 by a large margin, not deliver similar performance.

So run my shooters at 60fps sipping power; I play to win not have particle effects blocking me view, so the 4k bf1 hitting 40fps on ultra is promising. But it looks like the 480 GTR from that JZ video could do that at a fraction of the price and possibly less power draw which would be amazing due to the savings and lets face it DX12 seems a lot more stable on AMD GPUs currently.

But how dare I hope that AMD can outdo that, save $700 and get similar system that does what I want, I'm such a bad person. The bar has been set pretty friggin low at least.

What the hell are you saying mate.

The 980Ti outperforms the 1070 by a hair in VR, a hair. Wait for SMP to be widely used and you'll see that delta widen in favor of the 1070. A 980Ti draws around 230w on average (reference). My ~1455mhz core, ~8Ghz memory 980ti can draw up to 320W (sustained) when I'm playing demanding games. A GTX 1080 can do the same thing in a 150W envelope.

Does Kyle use a reference 980Ti in his VR reviews ? (just checked, yes he does)

A GTX 1070 has 3 GPCs at around 1800mhz, a 980Ti has 6 at ~1200 mhz if reference, I suspect that's the main reason it's outperforming the 1070 by a hair, the 1070 is faster in terms of shaders by about 5% vs a reference 980ti

what the hell are you saying about the RX480 now ? 1070 runs circles around it, and draws less power, how is it even relevant to what you were saying earlier ?

Something something AMD DX12

All the DX12ness in the world, with DICE at the helm, produces this fantastic result; it performs exactly the fucking same as a well coded DX11 title. GO FIGURE
Edit:

Apologies for my atrocious lie, the 390 gains 4 fps from 112 to 116 at 720p. WELCOME TO THE FUTURE
 
I think you missed my replies. I expect it to outperform a 980 Ti in VR, which it doesn't, yet it has VR slapped all over it. Likewise I expected it to outperform the 390X in DX12 BF1 by a large margin, not deliver similar performance.

So run my shooters at 60fps sipping power; I play to win not have particle effects blocking me view, so the 4k bf1 hitting 40fps on ultra is promising. But it looks like the 480 GTR from that JZ video could do that at a fraction of the price and possibly less power draw which would be amazing due to the savings and lets face it DX12 seems a lot more stable on AMD GPUs currently.

But how dare I hope that AMD can outdo that, save $700 and get similar system that does what I want, I'm such a bad person. The bar has been set pretty friggin low at least.


You do realize that the 1070 is a cheaper card than the 980 Ti? Its performance is, for the most part, supposed to be about 5% above the 980 Ti, and yes, at times you will see the 980 Ti catch up to the 1070; different applications behave differently on different hardware. VR is not very mature yet, and there is an entire thread about this, actually an entire forum section. I suggest you look over there and see what it means to be good at VR; it's not only about frame rates.....

And BF1 is a good example of properly coded DX12 lol? When both AMD and nV cards lose performance going from DX11 to DX12? Yeah, OK, I think you need to take a step back and reassess what you are posting.

Yeah, buy a dozen 480 GTRs and test the power for us, will ya? Let's see if that theory holds up. Cause someone has to do it lol, why not you? If every single one of them is as good as JZ's, you should be able to sell them on eBay for more than what you bought them for.

Here is another review of the 1070 running circles around the 480 in both DX11 and DX12.

[BF1 benchmark charts]


If you want other resolutions

Test wydajności Battlefield 1 - Wymagania sprzętowe pod kontrolą | PurePC.pl

Guess what: still running circles around it, even at 4K.
 
I think you missed my replies. I expect it to outperform a 980 Ti in VR, which it doesn't, yet it has VR slapped all over it. Likewise I expected it to outperform the 390X in DX12 BF1 by a large margin, not deliver similar performance.

So run my shooters at 60fps sipping power; I play to win not have particle effects blocking me view, so the 4k bf1 hitting 40fps on ultra is promising. But it looks like the 480 GTR from that JZ video could do that at a fraction of the price and possibly less power draw which would be amazing due to the savings and lets face it DX12 seems a lot more stable on AMD GPUs currently.

But how dare I hope that AMD can outdo that, save $700 and get similar system that does what I want, I'm such a bad person. The bar has been set pretty friggin low at least.
If you think you could save $700 getting rx480 system, i have another bridge to sell you.

Also, if you play to win, you don't play at 60 fps :).
 
You do realize that the 1070 is a cheaper card than the 980 Ti? Its performance is, for the most part, supposed to be about 5% above the 980 Ti, and yes, at times you will see the 980 Ti catch up to the 1070; different applications behave differently on different hardware. VR is not very mature yet, and there is an entire thread about this, actually an entire forum section. I suggest you look over there and see what it means to be good at VR; it's not only about frame rates.....

And BF1 is a good example of properly coded DX12 lol? When both AMD and nV cards lose performance going from DX11 to DX12? Yeah, OK, I think you need to take a step back and reassess what you are posting.

Yeah, buy a dozen 480 GTRs and test the power for us, will ya? Let's see if that theory holds up. Cause someone has to do it lol, why not you? If every single one of them is as good as JZ's, you should be able to sell them on eBay for more than what you bought them for.

Here is another review of the 1070 running circles around the 480 in both DX11 and DX12.

[BF1 benchmark charts]


If you want other resolutions

Test wydajności Battlefield 1 - Wymagania sprzętowe pod kontrolą | PurePC.pl

Guess what: still running circles around it, even at 4K.

According to PCGH the AMD cards gain nothing from DX12, at least in their benchmark run. These results are somewhat odd; why does the Fury gain more than the Fury X? Something seems off here. Lol, this one here says the 390 gains 15% with DX12, and so does the 390X, while PCGH shows them all the same. Meh. It still shows a regression for NV, albeit a little less drastic than in the PCGH test.
 
According to PCGH the AMD cards gain nothing from DX12, at least in their benchmark run. These results are somewhat odd; why does the Fury gain more than the Fury X? Something seems off here. Lol, this one here says the 390 gains 15% with DX12, and so does the 390X, while PCGH shows them all the same. Meh. It still shows a regression for NV, albeit a little less drastic than in the PCGH test.


Well, that's the problem: it's all over the map, and PCGH even stated that frame times are just horrible on either IHV in DX12. In other words, something is fubar in the DX12 version of DICE's engine. Frame times get screwed up when the engine and the drivers aren't playing nicely together: either the engine is waiting on something, or the drivers (GPU/CPU) are waiting for things to happen.
 
Well, that's the problem: it's all over the map, and PCGH even stated that frame times are just horrible on either IHV in DX12. In other words, something is fubar in the DX12 version of DICE's engine. Frame times get screwed up when the engine and the drivers aren't playing nicely together: either the engine is waiting on something, or the drivers (GPU/CPU) are waiting for things to happen.
Coordinating multiple threads, I would presume, adds some latency due to dependencies between them causing stalls; or can that be hidden?

DX12 starts to shine when you exceed DX11 draw-call limitations, i.e., even more objects/shaders plus additional compute work, with multiple CPU cores driving the GPU. At this time I do not see developers wanting to push beyond DX11 boundaries yet. Now, in most of the BF1 benchmarks it does appear AMD improves over DX11 with DX12; does that mean it will give a better gaming experience than Nvidia? At this time it does not look like it, but it does look competitive nonetheless. Why does Nvidia not do as well in DX12, or more exactly, do worse? That I do not understand. Is it a lack of threading ability in the Nvidia GPU, meaning that for DX12 workloads with multiple threads it will always have limitations?
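To make the multi-core point concrete, here is a toy sketch of the DX12-style submission model: command recording happens on several worker threads, while submission to the queue stays a single, ordered step. This is only an illustration of the structure (plain Python with made-up names), not real D3D12 API code:

Code:
# Toy analogy for DX12-style multi-threaded command recording: each worker
# thread records its own "command list" independently; submission stays serial.
# Illustrates the structure only (no real GPU work, and Python's GIL means no
# actual speedup here).
from concurrent.futures import ThreadPoolExecutor

def record_command_list(thread_id, num_draws):
    # Per-thread recording is the part DX12 parallelizes, which is why very
    # high draw-call counts benefit from more CPU cores.
    return [("draw", thread_id, i) for i in range(num_draws)]

def submit(command_list):
    # Stand-in for queue submission (think ExecuteCommandLists in D3D12).
    return len(command_list)

def render_frame(num_threads=4, draws_per_thread=2000):
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        lists = list(pool.map(lambda t: record_command_list(t, draws_per_thread),
                              range(num_threads)))
    # Submission order is still well defined and single-threaded.
    return sum(submit(cl) for cl in lists)

print(render_frame(), "draw calls recorded across 4 threads")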
 