RUMOR: Radeon 480 to be priced to replace 380, similar output as 390.

Hmm, nope. How do you know it's overpriced?
Just because you can't accept that Nvidia is selling mid-tier GPUs as high-end parts doesn't mean it isn't true. A 560 Ti ($240) replacement sold for $380. A 570 Ti 448 ($280) replacement sold for $500. A year later, a true high-end part sold for $650-1000 when the older high end was $500.

PS: the SM layout itself hasn't changed; it got wider, but that's it. That helps throughput but increases latency, so the latency has to be hidden by more registers, which it does have.

ALU performance per ALU will only change with frequency. The rest of the chip can change once throughput and latency are addressed.
It seems it could have more TPCs, based on Nvidia's P100 data.

AMD will not say they are getting 2.5x perf/watt unless it's the best they are getting, and that matches the figures GF, Samsung, and AMD have stated for the 14nm node. ALU counts are dropping or staying the same (which way you want to look at it is up to you: relative to nV parts they are dropping percentage-wise; relative to what they replace they are staying the same) on Polaris, which at this point is a given for their entire lineup. So we can figure out the best case for Polaris and how much extra throughput they are getting over what they are replacing, and it ends up around 10% more performance; at most I would put it at 15% more than Tonga.
If there is a 15-20% gain (which is possible, especially with the extra cache, the better geometry processor, and probably more ROPs), then it should be around 290X/390 performance for an R9 380 replacement; another 10% SP would put it above the R9 390X.
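One way to sanity-check that 10-15% throughput figure is a quick peak-FLOPS calculation. The Tonga numbers below are the real R9 380 specs, but the Polaris clock is a placeholder guess for illustration, not a confirmed spec:

```python
def fp32_gflops(alus, clock_ghz):
    """Peak single-precision throughput: one FMA (2 ops) per ALU per clock."""
    return alus * 2 * clock_ghz

tonga = fp32_gflops(1792, 0.97)           # R9 380: 1792 ALUs at ~970 MHz
polaris_guess = fp32_gflops(1792, 1.10)   # hypothetical: same ALU count, ~1.1 GHz

gain = polaris_guess / tonga - 1
print(f"Tonga: {tonga:.0f} GFLOPS, Polaris guess: {polaris_guess:.0f} GFLOPS")
print(f"Raw throughput gain: {gain:.1%}")
```

With the ALU count held flat, the whole raw-throughput gain has to come from clocks, which is why a ~13% bump lands right in the 10-15% range discussed here.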

This is exactly what I have been saying: they shouldn't be compared, as they are in two totally separate brackets.

It's a Tonga replacement, not a 290X replacement; that's clear from the ALU counts.


These are not high-end parts.
It's that many people keep comparing GP104 to what could be a GP106 equivalent.


The underlying changes in Polaris will bring better performance in many areas, that's a given; how much, we don't know beyond the 10-15% increase in throughput.
Still, there is missing information, like the shared cache for the SIMDs and CUs.
 
Just because you can't accept that Nvidia is selling mid-tier GPUs as high-end parts doesn't mean it isn't true. A 560 Ti ($240) replacement sold for $380. A 570 Ti 448 ($280) replacement sold for $500. A year later, a true high-end part sold for $650-1000 when the older high end was $500.

The tiers are set up by performance, not by the size of the GPU. Just like in the past: the 7900, a much smaller GPU than the X1900 XTX, was at the same performance level in the games of the day, so it could command the same price premium as the X1900 XTX. Likewise, the 4xxx series chips were smaller and had better perf/watt and perf/mm^2 than the GTX 2xx series.

In any case, Pascal, which is to be released soon, is around 300 mm^2, so it fits into the performance (high-end) bracket. These are not enthusiast-bracket parts. So I don't see the problem if they charge the same amount as last generation, as long as the performance boost is there.


It seems it could have more TPCs, based on Nvidia's P100 data.

It's possible, as the texture units are decoupled from the shader array.

If there is a 15-20% gain (which is possible, especially with the extra cache, the better geometry processor, and probably more ROPs), then it should be around 290X/390 performance for an R9 380 replacement; another 10% SP would put it above the R9 390X.

Geometry processing shouldn't be included in this, because it sits outside that path, and we only see a decrease when too much geometry work is pushed through current AMD products. So I would wager we just won't see that hit any more when high levels of tessellation are used, if it's fixed. So yeah, it will be around the R9 390X; it could be a bit more, but not much more.

It's that many people keep comparing GP104 to what could be a GP106 equivalent.

I haven't seen this, at least not that much, nor does it make any sense unless nV is also doing their midrange first; but we have seen the GP106 chip already too.


Still, there is missing information, like the shared cache for the SIMDs and CUs.

Some of that information is verified already.
 
The tiers are set up by performance, not by the size of the GPU. Just like in the past: the 7900, a much smaller GPU than the X1900 XTX, was at the same performance level in the games of the day, so it could command the same price premium as the X1900 XTX. Likewise, the 4xxx series chips were smaller and had better perf/watt and perf/mm^2 than the GTX 2xx series.

In any case, Pascal, which is to be released soon, is around 300 mm^2, so it fits into the performance (high-end) bracket. These are not enthusiast-bracket parts. So I don't see the problem if they charge the same amount as last generation, as long as the performance boost is there.
I am not comparing them by die size, but rather by how they name their GPUs and how they prioritize bus width for each segment.

High-end GPUs have 100, 110, or 200 in the codename, with a 384-bit bus.
GF100 and GF110 were high-end GPUs with a 384-bit bus.
Midrange GPUs have 104, 114, or 204 in the codename, with a 256-bit bus.
GF104 and GF114 were midrange GPUs with a 256-bit bus.

GK104/GM204 with a 256-bit bus were midrange GPUs.
GK110/GM200 with a 384-bit bus were high-end GPUs.

The 680 was sold as a high-end GPU and then later rebranded from 680 to 770, as the midrange GPU it really is. Then the high-end GPUs, which used to be $500, are $650-1000 with the new series.

If you apply that logic to the other generations, GF100 should have been more expensive because it was better than the 200 series, and each time performance improves the price should increase 20-30%.


Some of that information is verified already.
None of it talks about the local cache, nor how the SIMDs are arranged.
 
Bracket prices don't change much unless there is a need to change them (increased wafer cost being one reason). Bus width doesn't matter if the GPU can utilize, or is more efficient at using, the available bandwidth.

Rebranding happens; using a higher-bracket chip in a lower bracket is at times no big deal. How many times has AMD done the same?

Caching information isn't that important for end users, and AMD won't talk much about it until they are ready to release. But in any event, 2.5x perf/watt gives you an overall idea of what they were going for. The reason that 2.5x perf/watt wasn't a comparison against Fiji is that Fiji used HBM, which gave AMD a 30% power reduction right off the bat (a 1.75x perf/watt increase), and Polaris is not using HBM.
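As a quick sketch of how a power cut maps to perf/watt: at constant performance, cutting board power by 30% improves perf/watt by the reciprocal of the remaining power. All numbers below are placeholders, not real board specs:

```python
# Placeholder numbers: same performance, memory change cuts board power by 30%.
perf = 100.0          # arbitrary performance units
power_before = 275.0  # watts (illustrative board power)
power_after = power_before * (1 - 0.30)  # 30% reduction

ppw_gain = (perf / power_after) / (perf / power_before)
print(f"perf/watt factor from the 30% power cut alone: {ppw_gain:.2f}x")
```

This isolates just the power-reduction component; any quoted perf/watt figure for a real product would also fold in performance changes on top of it.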
 
Assuming none of the repeated leaks are fake, the only way Polaris 10 can compete with the Fury X is if AMD puts two of them on the same card. That looks increasingly likely.

So, in a couple months, in order of decreasing performance, something like:

Radeon Pro Duo (unchanged)
490x2 (dual polaris 10)
490x (polaris 10)
480x2 (dual binned polaris 10?)
480x (binned polaris 10)
470x and lower (various levels of polaris 11, maybe some dual GPU cards here too)
 
Assuming none of the repeated leaks are fake, the only way Polaris 10 can compete with the Fury X is if AMD puts two of them on the same card. That looks increasingly likely.

So, in a couple months, in order of decreasing performance, something like:

Radeon Pro Duo (unchanged)
490x2 (dual polaris 10)
490x (polaris 10)
480x2 (dual binned polaris 10?)
480x (binned polaris 10)
470x and lower (various levels of polaris 11, maybe some dual GPU cards here too)

I really hope not. Given how little effort both AMD and Nvidia have put into multi-GPU solutions of late (software-wise), I certainly won't be going there. I want a single high-performance card.
 
Everybody in enthusiast forums like [H] wants the best and baddest. But remember, (rumored) Polaris 10 performance comes in roughly equivalent to a 390/GTX 970, so you can imagine a 390x2 with dramatically lower heat and power usage; that could be pretty interesting against GP104, depending on pricing.
 
Everybody in enthusiast forums like [H] wants the best and baddest. But remember, (rumored) Polaris 10 performance comes in roughly equivalent to a 390/GTX 970, so you can imagine a 390x2 with dramatically lower heat and power usage; that could be pretty interesting against GP104, depending on pricing.

I'm simply not interested in a dual-chip card from either vendor. I'm hoping the rumored performance of Polaris is better than that, more in line with the Fury X. Yes, that might be asking a bit too much, but otherwise I can't see a point in upgrading.
 
Oh it's definitely not ideal. Thing is, Vega ain't ready yet and AMD needs some way to answer to GP104.
 
I am not comparing them by die size, but rather by how they name their GPUs and how they prioritize bus width for each segment.

Chip code names and bus widths aren't really correlated to market positioning or branding. Those are based on competition and relative performance.

Hence the same chip can serve in many different products as those dynamics change over time.
 
Bracket prices don't change much unless there is a need to change them (increased wafer cost being one reason). Bus width doesn't matter if the GPU can utilize, or is more efficient at using, the available bandwidth.

Rebranding happens; using a higher-bracket chip in a lower bracket is at times no big deal. How many times has AMD done the same?

Caching information isn't that important for end users, and AMD won't talk much about it until they are ready to release. But in any event, 2.5x perf/watt gives you an overall idea of what they were going for. The reason that 2.5x perf/watt wasn't a comparison against Fiji is that Fiji used HBM, which gave AMD a 30% power reduction right off the bat (a 1.75x perf/watt increase), and Polaris is not using HBM.
Yeah, if it were a GPU-only comparison, the GPU-core perf/watt increase would be quite good, since the Fiji XT core had a higher power draw than a 390; but the memory system also uses power and would change the numbers.


Chip code names and bus widths aren't really correlated to market positioning or branding. Those are based on competition and relative performance.

Hence the same chip can serve in many different products as those dynamics change over time.
Then why do they have internal codenames with the same nomenclature? It is another way to divide SKUs.

And another difference: between a midrange GPU (GM204/GK104) and the high-end GM200/GK110 there is a 30-35% gap in performance, like there was with GF110/100 and GF114/104.
 
Oh it's definitely not ideal. Thing is, Vega ain't ready yet and AMD needs some way to answer to GP104.

Sounds like it may be 3870x2 all over again, then. AMD couldn't touch G80 with a single GPU, and so started their dual-GPU card pedigree. (Although I think there was a dual-GPU Trident or Matrox card in the late '90s, if I recall; not counting that one.)
 
Yeah, if it were a GPU-only comparison, the GPU-core perf/watt increase would be quite good, since the Fiji XT core had a higher power draw than a 390; but the memory system also uses power and would change the numbers.

We still have to factor in the memory bus for the GDDR5, which has a major effect on the GPU's power draw. It's not just the memory modules; the bus-specific silicon runs at the same frequency as the memory.

Then why they have internal codenames with same nomenclature? it is another way to divide SKUs

Organizational purposes. Traditionally for nV, 104 has always been one level below the 100 series, but we have seen them use 102 for their high end, and other numbers for other chips. It's never set in stone.
And another difference: between a midrange GPU (GM204/GK104) and the high-end GM200/GK110 there is a 30-35% gap in performance, like there was with GF110/100 and GF114/104.

I don't think anyone expects there won't be that much difference; it has been trending that way for quite some time now.
 
Sounds like it may be 3870x2 all over again, then. AMD couldn't touch G80 with a single GPU, and so started their dual-GPU card pedigree. (Although I think there was a dual-GPU Trident or Matrox card in the late '90s, if I recall; not counting that one.)

ATi itself had the Rage Fury MAXX. It wasn't all that super great, but it existed.
 
[Image: amd_roadmap_2016_2018_gpu.jpg (AMD GPU roadmap, 2016-2018)]


This is the latest AMD roadmap. It seems to settle pretty conclusively the delineation between Polaris 10 and 11.

The split point would be about the 380X/390. I'll speculate that the 390 and up are Polaris 10.
 
ATi itself had the Rage Fury MAXX. It wasn't all that super great, but it existed.

The Rage Fury MAXX (the first card ever with AFR rendering) was terrible and had artifact issues in a lot of games. But it came out after the 3dfx Voodoo 2, which was essentially two Voodoo chips on one card. No one thinks of that as a multi-GPU card, because everything was handled in hardware and done so well, but it was.

I find it pretty hard to believe that AMD would put out a lot of multi-GPU products, because of the increased cost of having multiple GPU dies on a card. Maybe on a top-of-the-range product, but not on lower-end cards. It's entirely possible that a 2x Polaris 10 card would outperform the Fury X by a decent margin and cost less to make, due to the cheaper cooling requirements, no HBM, and the massive size of the Fury GPU.
 
[Image: amd_roadmap_2016_2018_gpu.jpg (AMD GPU roadmap, 2016-2018)]


This is the latest AMD roadmap. It seems to settle pretty conclusively the delineation between Polaris 10 and 11.

The split point would be about the 380X/390. I'll speculate that the 390 and up are Polaris 10.


That split line isn't for that, I think. Do you think Vega is going to replace the entire lineup with HBM2?
 
Razor1 posted that yesterday. I completely agree with TaintedSquirrel: the only way AMD can compete with the Fury X using Polaris GPUs on GDDR5 is to put multiples on a single card. And I think that is what they will do.

The dotted line doesn't mean Vega will replace their entire lineup. It will be their Fury or Rage highest-end card, to compete with GP102 in early 2017.
 
Razor1 posted that yesterday. I completely agree with TaintedSquirrel: the only way AMD can compete with the Fury X using Polaris GPUs on GDDR5 is to put multiples on a single card. And I think that is what they will do.

The dotted line doesn't mean Vega will replace their entire lineup. It will be their Fury or Rage highest-end card, to compete with GP102 in early 2017.


I really hope you are wrong. If it turns out you're right, I will skip this generation from AMD and look at what Pascal can do. Until I see some hard evidence of how dual cards run in DX12, I'm simply not interested. Recent DX11 games have been a bit of a joke when it comes to multi-GPU solutions.
 
Razor1 posted that yesterday. I completely agree with TaintedSquirrel: the only way AMD can compete with the Fury X using Polaris GPUs on GDDR5 is to put multiples on a single card. And I think that is what they will do.

The dotted line doesn't mean Vega will replace their entire lineup. It will be their Fury or Rage highest-end card, to compete with GP102 in early 2017.

I'm pretty sure Nvidia already surpasses the Fury X on GDDR5. It's not magic. GDDR5X will make that even easier.
 
Multi-die GPUs are the future whether you like it or not. I, for one, welcome our multi-die, multi-GPU overlords! As long as the downsides (compatibility, latency, and the rest) are sorted out in DX12/Vulkan/whatever you want, then I'm happy.
 
Multi-die GPUs are the future whether you like it or not. I, for one, welcome our multi-die, multi-GPU overlords! As long as the downsides (compatibility, latency, and the rest) are sorted out in DX12/Vulkan/whatever you want, then I'm happy.

Tell that to DX11 support.
 
Multi GPU will only get worse before it gets better.

Here's the thing. Multi-GPU has to become the norm before it gets proper support and love from the devs. And it won't become the norm any time soon because of lack of proper support and love from the devs.

As a company hurting badly for cash and revenue, AMD is not in any position to try strong-arming the market like this. It's simply strategic suicide if that is indeed what they will do, unless they've come up with some magic multi-GPU implementation that works.
 
The per-die performance will be sufficient to do 4K as-is once they mature 14nm for a year or so, so it won't really matter for DX11 compatibility going forward; perhaps for these first two releases more than later ones, if at all. Either way, a 390X isn't anything to sneeze at in the worst case.

If they market it right, making mGPU the midrange standard, they will force the market. Plus, the marketing is a dream: almost anyone in the midrange would jump at a dual card, given the choice between an okay single card and a dual that pulls well ahead when working, or does well even when not.
Someone has to do it first, and DX12 makes this easier for them.

Devs always follow hardware; that's how it is.


And if they can make a solution where the load is handled in hardware, so the card is 'seen' as a single card, you never know... a little bit like the Voodoo 2 approach.
 
The per-die performance will be sufficient to do 4K as-is once they mature 14nm for a year or so, so it won't really matter for DX11 compatibility going forward; perhaps for these first two releases more than later ones, if at all. Either way, a 390X isn't anything to sneeze at in the worst case.

If they market it right, making mGPU the midrange standard, they will force the market. Plus, the marketing is a dream: almost anyone in the midrange would jump at a dual card, given the choice between an okay single card and a dual that pulls well ahead when working, or does well even when not.
Someone has to do it first, and DX12 makes this easier for them.

Devs always follow hardware; that's how it is.


And if they can make a solution where the load is handled in hardware, so the card is 'seen' as a single card, you never know... a little bit like the Voodoo 2 approach.

Would you pay $500 for a dual card that has a (much less than) 30% chance of working well, or for a single Pascal that will always perform at 980 Ti level and above, possibly more if overclocked?

If the rumours are true, you're talking about 2x 390X performance in the best case, vs. 20-30% faster than a 980 Ti. AFAIK the 980 Ti is around 30% faster than a 390X, and 30% on top of that works out to around 70% faster than a stock 390X.
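The compounding in that estimate works like this (both 30% figures are the ballpark/rumored numbers from the thread, not measurements):

```python
# Relative performance compounds multiplicatively, not additively.
ti_over_390x = 1.30    # 980 Ti vs. 390X (ballpark)
pascal_over_ti = 1.30  # new part vs. 980 Ti (rumored)

pascal_over_390x = ti_over_390x * pascal_over_ti
print(f"~{pascal_over_390x - 1:.0%} faster than a stock 390X")
```

1.3 x 1.3 = 1.69, hence the "around 70% faster" figure.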

You're talking about sometimes double the performance of a 390X, vs. a constant 70% improvement over the 390X at the same price point.

I still don't see a reason to go the AMD route unless they can afford to optimise CFX for nearly EVERY RELEVANT game.
 
If support is there for the main games I want to play, yes I will. The 30% becomes moot then, as the rest don't need such high levels of performance. I don't play 20 games a year.

The jury is still out on DX12 performance, and DX12 also leaves mGPU up to the developer. That's why developers will be bitched at and have to patch or support it if AMD plays that card.
So we'll see how it plays out and what Polaris can do performance-wise, which is the main determining factor in this debate.


Edit: I don't think I'd personally be in the market for one of those cards. If it's the only option, then possibly two or more for 4K; then Vega/big Pascal is more appealing next year. The next big upgrade round is this year; I want to enjoy No Man's Sky in 4K for a start.
 
Would you pay $500 for a dual card that has a (much less than) 30% chance of working well, or for a single Pascal that will always perform at 980 Ti level and above, possibly more if overclocked?

If the rumours are true, you're talking about 2x 390X performance in the best case, vs. 20-30% faster than a 980 Ti. AFAIK the 980 Ti is around 30% faster than a 390X, and 30% on top of that works out to around 70% faster than a stock 390X.

You're talking about sometimes double the performance of a 390X, vs. a constant 70% improvement over the 390X at the same price point.

I still don't see a reason to go the AMD route unless they can afford to optimise CFX for nearly EVERY RELEVANT game.
You're completely missing how this would be implemented. It's effectively gluing two chips together so they appear and work as one. You wouldn't need Crossfire support any more than you would for a single card. This likely won't be a board with a GPU at each end; it will look more like Fiji, with the GPU and 4 HBM stacks, except with two GPUs on the package alongside the HBM. That would likely allow a level of interconnection that makes Crossfire irrelevant.

The whole point of using multiple dies is to improve yields (smaller dies yield better) while being able to create more products from fewer chips. A Fury-level product could be 6x Polaris 11 chips, for example. Taken further, a single chip could be one CU or SM, and you pack 40 of them onto one package. That's probably a bit extreme still, but we're getting there.
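The "smaller dies yield better" point can be illustrated with the standard Poisson die-yield model. The defect density and die sizes below are illustrative assumptions for the sketch, not real 14nm process data:

```python
import math

def poisson_yield(area_mm2, d0):
    """Classic Poisson die-yield model: yield = exp(-area * defect_density)."""
    return math.exp(-area_mm2 * d0)

D0 = 0.002  # defects per mm^2; illustrative only

big = poisson_yield(600, D0)    # one large enthusiast-class die
small = poisson_yield(232, D0)  # one Polaris-10-sized die (~232 mm^2 per rumors)

print(f"600 mm^2 yield: {big:.0%}")
print(f"232 mm^2 yield: {small:.0%}")
print(f"two good small dies per card: {small ** 2:.0%}")
```

Even after needing two good small dies per dual-GPU card, the combined probability beats one large die under these assumptions, and the small die also serves single-GPU SKUs, which is the economic appeal.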
 
You're talking about sometimes double the performance of a 390X, vs. a constant 70% improvement over the 390X at the same price point.
Not at the same price point, no. AMD would need to price substantially below NV for that to be attractive. And obviously they don't want to do that, but what's the alternative: not compete at all until Vega?

Heck, it's not like AMD has pride to swallow. Their CPUs have underperformed Intel's for a decade now, and they simply price them accordingly.
 
And here's the official confirmation from AMD:

AMD confirms Polaris is not for the high-end market

" AMD also confirmed the new 14nm Polaris architecture is on track to ship towards the middle of this year. "We remain on track to introduce our new 14-nanometer FinFET based Polaris GPUs midyear. Polaris delivers double the performance per watt of our current mainstream offerings, which we believe provides us with significant opportunities to gain share." In response to a question by an analyst from JPMorgan Securities about channel fill and Polaris related revenue, AMD CEO Lisa Su replied the majority of Polaris is a second half of the year story."
 
I guess we'll see outrageous pricing on Pascal SKUs, then. I just don't see AMD making money if they price their Furies at $400 (which is what I expect the new price to be, given their performance) while competing against the new x70 Pascal parts.
 
Tell that to DX11 support.

This has nothing to do with current DX implementations, since we haven't had a multi-GPU on-die solution yet. If you are referring to current dual-GPU solutions, then yes, most of us would probably agree on that.

I don't know. If the current roadmap is true, AMD is either going innovative or going bust. Let's hope it is the former and not the latter.
 
There's a lot more money in mobile and mainstream than in the high end. If Polaris releases with the tiny die sizes and power-efficiency improvements AMD promises, it can be priced really low and sell in massive volume.

Imagine a notebook with the dGPU performance of a 970/390! Imagine plugging your 2lb ultrabook into a little thunderbolt 3 dock with a bunch of ports and a 970/390 dGPU and gaming at 1080p with max settings! These are cool. AMD will do just fine.

Enthusiasts will largely skip over Polaris, true. I doubt the multi-GPU solutions will make anyone drool. But Vega is coming up soon enough, they haven't given up on enthusiasts.
 
And here's the official confirmation from AMD:

AMD confirms Polaris is not for the high-end market

" AMD also confirmed the new 14nm Polaris architecture is on track to ship towards the middle of this year. "We remain on track to introduce our new 14-nanometer FinFET based Polaris GPUs midyear. Polaris delivers double the performance per watt of our current mainstream offerings, which we believe provides us with significant opportunities to gain share." In response to a question by an analyst from JPMorgan Securities about channel fill and Polaris related revenue, AMD CEO Lisa Su replied the majority of Polaris is a second half of the year story."
Hardly surprising that it will be a 2H story, as they're launching around June.
 
Certainly, but on the other hand we have leaked pictures of actual GP104 chips floating around, so it seems NV may be ahead on actual production as well.
 
Certainly, but on the other hand we have leaked pictures of actual GP104 chips floating around, so it seems NV may be ahead on actual production as well.
Maybe, but AMD demoed the cards first; it's just that nobody has seen one publicly. They could be really ugly, or the form factor is a dead giveaway. If they all come out looking like Nanos...
 