AMD Launching Polaris 10 400 Series GPU June 1st At Computex. $299 (rumor)

That is, if the clock speeds are even correct...

So, ignoring the clock speeds, the scores are likely good. Also, they won't be changing clock speeds if launch is close. As I stated before, they don't change clock speeds on a whim, because of validation needs.
 
To me though, that mentality treats it as a replacement for the 390 and the 390X. If that were the case, then why not name the cards the 490 and 490X instead of the 480 and 480X?

The pricing tiers have been standardized for years now.

Have they? Because for as long as I've been following, the pricing tiers have been like the stock market. They fluctuate, but overall trend is up.

My Radeon 9800 Pro, based on a Google search, launched at $399, and that was their top end card until the XT refresh. But, that's going back pretty far, so let's look at recent history.

The GTX 580 launched at $499
680 = $499
780 = $649
980 = $549
1080 = $599 (not counting FE)

I wouldn't call that set. That's fluctuation with an overall upward trend. But let's look at the mid-range cards, Nvidia's x60 lineup.

GTX 560 = $199 (vanilla, not ti)
GTX 660 = $229
GTX 760 = $249
GTX 960 = $199/$229 (2/4GB)

There's some fluctuation there, and we haven't seen the 1060 yet, but if the 1080/1070 to 980/970 prices are any indication, $279 is realistic for the 1060.
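
For what it's worth, here's the back-of-envelope version of that guess, using launch MSRPs (the 1070's $379 non-FE price is the number doing the work, and assuming the same bump carries down the stack):

```python
# Apply the 970 -> 1070 launch-MSRP bump to the 960 to ballpark a 1060 price.
# Assumes the same ratio carries down the stack, which is just a guess.
ratio = 379 / 329              # GTX 1070 vs GTX 970 launch MSRP -> ~1.15x
print(f"${229 * ratio:.0f}")   # applied to the 4GB 960's $229 -> ~$264
```

So $279 would bake in a slightly bigger bump than last time, but it's in the right neighborhood.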

So yes, there should be some expectation of a price increase for AMD. I wouldn't say that the segment prices have been locked in stone with Nvidia, and I don't know enough about AMD to know how their prices went. I will say that the GTX 970/R9 390 were priced too low compared to their higher end parts*, and this caused some cannibalization, so NV may be trying to correct for that. AMD may do the same.

*GTX 970 - $329
GTX 980 - $549
67% higher cost for 15% higher performance. NV doesn't want to relive that.
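
Spelled out (launch MSRPs; the ~15% performance delta is a rough figure, not a measured benchmark):

```python
# Price premium of the GTX 980 over the GTX 970 at launch vs. the rough
# performance premium quoted above.
gtx970_price, gtx980_price = 329, 549
perf_delta = 0.15   # approximate performance advantage of the 980

price_delta = gtx980_price / gtx970_price - 1
print(f"{price_delta:.0%} more money for {perf_delta:.0%} more performance")
```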
 
67DF is the one with 2304 stream processors. We still haven't seen 67C0.

They show 2 versions of 67DF in that benchmark (not talking about the Xfire result) : AMD Radeon R9 480 3DMark11 benchmarks | VideoCardz.com
:C7 and the :C4.
The :C4 is about 5% below that of a 390X in the table.
No idea whether they are meant to be the same GPU or not, VideoCardz thinks not but who knows.. However...

TBH I still feel all these pre-release benchmarks, from before cards go out to reviewers, are fake, whether it be for AMD or NVIDIA.
Just to add, Eteknix suggests they are different: AMD Polaris 10 Samples Clock at 1.2 Ghz
Cheers
 
They show 2 versions of 67DF in that benchmark (not talking about the Xfire result) : AMD Radeon R9 480 3DMark11 benchmarks | VideoCardz.com
:C7 and the :C4.
The :C4 is about 5% below that of a 390X in the table.
No idea whether they are meant to be the same GPU or not, VideoCardz thinks not but who knows.. However...

TBH I still feel all these pre-release benchmarks, from before cards go out to reviewers, are fake, whether it be for AMD or NVIDIA.
Just to add, Eteknix suggests they are different: AMD Polaris 10 Samples Clock at 1.2 Ghz
Cheers

I would actually suspect that C7 and C4 are A1 and A0 samples.
 
Dude didn't you say the same thing about the Fury line lol?

If I remember correctly... it might have been another person, though I'm pretty sure it was you because of the system build...

If there's one thing we should have learned from AMD, it's to temper expectations... a lot.
 
Well, so far I think they have done a good job; expectations of 390X to Fury X performance seem about right. I would love to see it higher, but realistically, that seems to be where they are, and it's pretty much what they have been stating, just not directly. At a decent price and power consumption level, it will do well.
 
LoL, quite a lot of amusing expectations lately... just hoping to be surprised, as my expectations for the 1080 were also quite low...

All we have is speculation to tide us over until release. I normally don't engage in it, but I have a vested interest in how this generation pans out. I'm finally getting a Freesync monitor, and due to the "Gsync tax," it's cheaper to get a Freesync monitor + GPU than the Nvidia equivalent.*

*Using this FreeSync monitor ($400) and buying an AMD GPU that can drive it the way that I want (I'm ok with display scaling) is cheaper than buying this GSync equivalent ($800) and no GPU (since a P10 would be marginally faster than my existing GTX 970, I consider this the fairest comparison for what I want).
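
Roughly, the math behind that looks like this (the $300 Polaris figure is just a placeholder for whatever P10 ends up costing):

```python
# FreeSync monitor + new AMD card vs. the GSync version of the same panel
# with no GPU upgrade. The Polaris price is a placeholder, not a real number.
freesync_route = 400 + 300   # FreeSync monitor + hypothetical Polaris 10
gsync_route    = 800 + 0     # GSync monitor, keep the existing GTX 970
print(freesync_route, gsync_route)   # 700 vs 800
```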

Now, if the comparison was FS Monitor + Vega vs. GS monitor + GTX 1080/1080ti (or even 1070, if Vega disappoints), using those same monitors, the comparison becomes even more skewed towards AMD. But I'll stop there, because going any further on this subject should occur in the displays forum.
 
Did you guys see this slide at wccftech.com?

[Slide: AMD-Radeon-Graphics.png]

So we are looking at top-end Polaris at $329.99 and up.

It doesn't make sense; this whole thing is confusing. They can't possibly be that dumb. Something doesn't add up: if mainstream Polaris costs $329.99 for the top model, then we have a big gap and no GPU until $500. If it's already matching Fury, then that's gone as well and we only have the Fury X, which is sitting up there with a big gap below it. I'm thinking there has to be a Polaris chip with 2560 shaders coming soon at $400 or $429, in between the 1070 and 1080. AMD might as well load these up on shelves and put a fire sale on Furys, because if these cards have any overclocking headroom they will make the Fury X obsolete.

I am betting there is something higher than the 480 and 480X, most likely a 490 and 490X, just like now, with 2560 and 2816 shaders.

There is nothing stopping them from releasing a 490/490X with those specs but AMD themselves, unless they really don't care about filling that spot, which would be odd.
 
nah, they were talking about today's VR ready GPU price range. This has nothing to do with Polaris.

390 launch MSRP is $329
 
Vega is a different animal altogether: a larger chip with HBM2, and it would come at a premium price.

If Polaris is to succeed at the mid-range, it needs a price which compels people to leave Nvidia, or not look at them at all.
 
nah, they were talking about today's VR ready GPU price range. This has nothing to do with Polaris.

390 launch MSRP is $329

Makes sense I guess, lol, but they can still replace everything up to the 390/390X with the same number of shaders until Vega comes out. I don't see why they won't. There will be a big price gap from their mainstream to their top end, and with these cards you can't even call Fury top end, because they might already be reaching Fury territory and beating it if they overclock another 100MHz.
 
Assuming a 2560 SP model exists, and we add 10% to the existing score (slightly generous), it ends up being about 3-5% slower than the 980 Ti. That would actually be faster than the Fury X, which would surpass all previous estimates. More likely it would trade blows with the Fury X.

Depending on how this all ends up with the pricing Polaris could directly compete with the 1070. We shall see.

Oh, it exists. The 36CU chip shown here is the same as the SiSoft leak earlier which was said by the leak to be a NOTEBOOK chip.

You know how notebook chips are often cut-down and running lower clocks? 1.26ghz is very low for FinFet LPP (an enhanced version of LPE that clocks higher!).

Think about 2560 SP @ 1.5ghz

Now we're at beyond Fury X levels of performance. If it's priced at $299, that would be irresponsible levels of performance, perf/w and perf/$! ;)
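
For rough scale, here's the raw-FLOPS comparison (raw FP32 throughput only, ignoring any per-shader gains, and the 2560 SP clock is pure speculation):

```python
# shaders * 2 FLOPs per clock per shader * clock, in TFLOPS
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

print(tflops(2816, 1.05))   # R9 390X                 -> ~5.9 TFLOPS
print(tflops(2304, 1.26))   # leaked 36 CU Polaris 10 -> ~5.8 TFLOPS
print(tflops(2560, 1.50))   # speculated full chip    -> ~7.7 TFLOPS
```

If the leaked 36 CU part really is in 390X territory on slightly lower raw throughput, that alone points at per-shader gains.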
 
Oh, it exists. The 36CU chip shown here is the same as the SiSoft leak earlier which was said by the leak to be a NOTEBOOK chip.
You know how notebook chips are often cut-down and running lower clocks? 1.26ghz is very low for FinFet LPP (an enhanced version of LPE that clocks higher!).
Think about 2560 SP @ 1.5ghz
Now we're at beyond Fury X levels of performance. If it's priced at $299, that would be irresponsible levels of performance, perf/w and perf/$! ;)

Even though I like what you are saying, have you seen other 14nm products from Samsung or GF that can easily do this?
 
Even though I like what you are saying, have you seen other 14nm products from Samsung or GF that can easily do this?

Apple's SOC hit >2ghz on LPE, the early version of 14nm FF. Heck, Samsung's Exynos SOC hit even higher clocks.

LPP builds upon that to improve performance.

Now I don't expect to see 2ghz GCN, because it's historically been a lower clocked architecture. But 1.5ghz is certainly within expectations, given these notebook leaks earlier have it running 1.26 to 1.3ghz.
 
The interesting thing, though, is that in AMD's recent Q&A webinar, they said AMD's reference design focuses on perf/W and efficiency, but they allow AIB partners to crank up the clocks to whatever they want.

The beauty of FinFET is that it can tolerate much higher clocks with less leakage, so increasing clocks doesn't quickly result in runaway power use and instability.
 
The interesting thing, though, is that in AMD's recent Q&A webinar, they said AMD's reference design focuses on perf/W and efficiency, but they allow AIB partners to crank up the clocks to whatever they want.

The beauty of FinFET is that it can tolerate much higher clocks with less leakage, so increasing clocks doesn't quickly result in runaway power use and instability.

But if we look at serious Intel CPU overclocks, they still require a lot of power.
 
But if we look at serious Intel CPU overclocks, they still require a lot of power.

The correct way to think about it is that FinFET allows a much bigger range of optimal clocks, where increasing the clock won't lead to major power use increases. But if you go beyond that and push lots of +vcore, you will still get a big power increase and hit the limits of the chip.
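
The usual first-order way to picture it: dynamic power goes roughly as C·V²·f, so clock bumps inside the voltage sweet spot cost about as much power as the clock increase itself, while bumps that need extra vcore cost far more. Illustrative numbers only:

```python
# Relative dynamic power for a given voltage and clock scaling (P ~ C*V^2*f).
# Purely illustrative; not measured Polaris or Pascal figures.
def rel_power(voltage_ratio, clock_ratio):
    return voltage_ratio ** 2 * clock_ratio

print(rel_power(1.00, 1.15))   # +15% clock in the sweet spot     -> ~1.15x power
print(rel_power(1.15, 1.30))   # +30% clock that needs +15% vcore -> ~1.72x power
```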
 
All we have is speculation to tide us over until release. I normally don't engage in it, but I have a vested interest in how this generation pans out. I'm finally getting a Freesync monitor, and due to the "Gsync tax," it's cheaper to get a Freesync monitor + GPU than the Nvidia equivalent.*

*Using this FreeSync monitor ($400) and buying an AMD GPU that can drive it the way that I want (I'm ok with display scaling) is cheaper than buying this GSync equivalent ($800) and no GPU (since a P10 would be marginally faster than my existing GTX 970, I consider this the fairest comparison for what I want).

Now, if the comparison was FS Monitor + Vega vs. GS monitor + GTX 1080/1080ti (or even 1070, if Vega disappoints), using those same monitors, the comparison becomes even more skewed towards AMD. But I'll stop there, because going any further on this subject should occur in the displays forum.

Hey, I bought the same monitor, arriving tomorrow. Not sure if I will upgrade with Polaris when Vega is 6 months out; I may use the 290X/290 CFX or the Nano rig to drive this. With FreeSync I might be able to live with lower frame rates, as long as they stay north of 30fps, until then.

Raja said Polaris covers low to high. To me that means Polaris 11 for the low to lower-mid range (470, 470X), Polaris 10 for the mainstream range (480, 480X), and Polaris 10 again for the high or performance range (490, 490X). That would seem to have Polaris 10 covering too much, unless the high range uses something like HBM1 and is clocked much higher, for instance.
 
I'm pretty sure we will see

Polaris 11:

R9 465, 470 and 470X

Polaris 10:

R9 480 and 480X

Vega little:

R9 490 and 490X

Vega Fat:

R9 Fury equivalent.
 
BS

All they are doing is rebadging old shit.

You don't want to buy an Intel, you just wait till Bulldozer blows it away.
This is all they do now.
 
BS

All they are doing is rebadging old shit.

You don't want to buy an Intel, you just wait till Bulldozer blows it away.
This is all they do now.
Chill. These are not rebadges lol. Far from it.
 
The correct way to think about it is that FinFET allows a much bigger range of optimal clocks, where increasing the clock won't lead to major power use increases. But if you go beyond that and push lots of +vcore, you will still get a big power increase and hit the limits of the chip.

Density, man, density. You can't go with all of those assumptions without taking the transistor density of a 14nm package into consideration, and you can't go even further and compare CPUs to GPUs as if they would behave the same. Just to follow the example: a modern i7 Skylake CPU packs 1.7B transistors into ~120mm2, versus how many rumored for Polaris 10? 8B? 10B? 12B? And in a 232mm2 package, which is also rumored to be LPP, which is a smaller process than LPE; that by itself says nothing about clocking lower or higher. The major problems there will be the same ones Intel started to feel with modern CPUs, and they come down to density: heat is hard to spread at such high density, which reduces the effective frequency that can be achieved and requires more voltage and more power the higher you want to go.

Also, you may want to take into consideration that this will be the first batch of GPUs on an immature 14nm FinFET process, so density may also cause an appreciable issue with clocks, at least during the first stage of that process, which is why I think Nvidia decided to go with the more mature 16nm.
 
Why are you thinking 16nm is more mature than 14nm? Are you not aware that both of these nodes have been in production for a long time already?
 
Increasing transistor density also increases thermal hot spots; although we don't know enough about the processes (14nm vs 16nm) or the chip design for Polaris, this much is a given. It's not about the maturity of the processes, it's the nature of increasing transistor density. This is why, when people say the 14nm process is "more advanced," it's not really more advanced; yes, you get more transistor density, but everything else should be pretty much identical to the 16nm process being used for GPUs now.

Also, you have commented about 1.5GHz on AMD's new chips. The only way they reach that is if they designed the chip for it; the process will not give them that much of an increase in clock speed. You are thinking along the lines of Moore's law, where new nodes deliver performance increases through higher frequency. This has not been the case for the past two nodes; Moore's law is pretty much dead. This is why NV's chip design for Pascal is quite different from Maxwell: it was designed for higher frequency. If they hadn't changed Maxwell's design and just dropped a node, they would only get a hundred or two hundred more MHz, not the 500+ (we don't even know what the real limit is yet because of the power and thermal limits on Pascal).

You actually answered the reason why clock speeds don't automatically increase by leaps and bounds with node transitions: FinFETs don't tolerate increased frequency well when voltage has to increase. It's like an insulator around the end of a wire: the moment you have enough electricity running through it, it doesn't matter how much insulation is around it, it's going to jump to the closest point, the path of least resistance. In the FinFET's case the same thing happens; this is why there is such a low tolerance for voltage increases once that boundary is hit, because leakage becomes uncontrollable past that point. Unlike planar transistors, where leakage increases more slowly, so there is a gradual rise, though they too will reach a point where it becomes uncontrollable.
 
The process gives them increased clock speeds. That's based on the actual FinFET claims from GloFo, Samsung and even TSMC.

30-40% is the process advantage. The rest has to come from the architecture.

The point here is that GCN seems to be targeting efficiency, i.e., more perf per shader. So even if it gets a 30% clock speed boost (to ~1.3GHz), add some extra uarch performance and it will be much better than the 390X.
 
The process gives them increased clock speeds. That's based on the actual FinFET claims from GloFo, Samsung and even TSMC.

30-40% is the process advantage. The rest has to come from the architecture.


Yes, but with a power draw that is not linear. What does that mean? If you push the clocks up to that point, your power draw is going to be much higher, and that's assuming transistor-for-transistor equivalence, which is definitely not the case with these new chips; they are using more transistors, even though they are smaller.

I don't give a damn about all those calculations, node, architecture, etc.; we already know the best case is 2.5x perf/watt if you use Hawaii as the baseline, and 2.0x perf/watt if it's Tonga. So the rest of it, why even try to figure it out?


Easiest math in the world: they need to get to 150 watts to be in 390X to Fury X performance territory, nice and simple. If they want more, they need more power. The rest, does it really matter? This is giving them the benefit of the doubt that their top-end chip is the one with the best perf/watt numbers...
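
Spelled out, that easy math is just this (taking the 2.5x-over-Hawaii perf/W claim at face value and assuming roughly 275W board power for the 390X):

```python
# Performance at 150W if Polaris really delivers 2.5x the 390X's perf/W.
perf_390x, power_390x = 1.0, 275.0
polaris_perf_per_watt = 2.5 * (perf_390x / power_390x)

print(150 * polaris_perf_per_watt)   # ~1.36x a 390X, i.e. roughly Fury X territory
```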
 
The latest investor briefing has Polaris at 2x perf/W compared to current mainstream cards, which, given the positioning, I am assuming means the 380X Tonga.

The earlier 2.5x perf/W figure Raja threw around is likely against the 390X or Fury X, since those are less efficient.

And yes, I agree with you, 150W Polaris should match Fury X.
 
If they get the top Polaris to compete with, or be very close to, the 1070, priced at $350 at launch and going down to $320 or so, it would cannibalise Nvidia sales. Likely the dumbfuck shareholders will bitch and force them to price parity, which will fail.
If not, and they price it how we reckon they should, then the rest of the chips can directly compete with the 1060 and the rest of the chain on price/performance.

edit: WTF is a 'Dual Pro'.... AMD marketing throwing us a leak bone or a major marketing fuckup? I don't see someone in marketing messing that up so bad. Pro Duo to 'Dual Pro'.....?
 
Hey, I bought the same monitor, arriving tomorrow. Not sure if I will upgrade with Polaris when Vega is 6 months out; I may use the 290X/290 CFX or the Nano rig to drive this. With FreeSync I might be able to live with lower frame rates, as long as they stay north of 30fps, until then.

This monitor's FreeSync range is 40-60hz and doesn't support LFC (no 4K FreeSync monitors currently support LFC). So, running in the 30s on that monitor gives you zero FreeSync benefit.
 
This monitor's FreeSync range is 40-60hz and doesn't support LFC (no 4K FreeSync monitors currently support LFC). So, running in the 30s on that monitor gives you zero FreeSync benefit.
What is LFC? Thanks

Looks like I need to keep it above 40fps then.
 
Some monitors can be hacked to increase the FreeSync range. It often seems to be artificially limited in order to push people toward the G-$ync models. Can't have the same thing for 300 bucks less...
 
Some monitors can be hacked to increase the FreeSync range. It often seems to be artificially limited in order to push people toward the G-$ync models. Can't have the same thing for 300 bucks less...

Not to diverge too much off topic, but I hacked my already decent MG279Q range (35-90hz) to do 31-110hz (which basically keeps me in Freesync range all the time). Pretty much every Freesync monitor out there can go at least a bit lower/wider in range with a hack.

To get back on topic, I'm really hoping Polaris 10 can deliver above-390X performance, since that would let me keep my monitor. Otherwise, I'm either going to keep it and go with a 1080 or 1070 and deal with not having FreeSync, or swap it out.
 
Btw, the reason there's a 99% chance Polaris 10 is a 2560 SP chip is that way back in the manifest, we had C4, C7 and C10. The C10 was listed as 2560 SP, the C7 as 2304 and the C4 as 2048 (IIRC). These are the 3 SKUs of Polaris 10; it's getting 2 cuts.

The reason why it's 2 cuts is quite obvious: Polaris 11 is only 20 CU, 1280 SP. There's a huge gap between the two in performance, and it needs to be filled.

Think 480X, 480, 475 (or 480 SE) as Polaris 10.

Then 470X, 470 for Polaris 11 (it's really too small for further cuts, pointless due to decent yields according to Samsung).

I also believe the C7 and C4 chips are going to be in notebooks, hence all the early leaks come from 2048 and 2304 SP Polaris 10: the OEMs build the notebook ES units, run benchmarks to test them, and the results inadvertently end up in online databases (SiSoft and 3DMark).

If 2304 Polaris 10 @ 1.26ghz is already 10% ahead of the 390X, that's just amazing already for a 480 replacement ($229?), just imagine 480X with 2560 SP @ 1.4ghz+ for $299! Can't wait!
 
I wonder who has more inventory on the shelves and in warehouses, AMD or Nvidia? This could get nasty for Nvidia, with all their old hardware gathering dust even when prices are reduced to nothing.
 
I wonder who has more inventory on the shelves and in warehouses, AMD or Nvidia? This could get nasty for Nvidia, with all their old hardware gathering dust even when prices are reduced to nothing.
Both will suffer, but AMD would be in the worse situation with their sales channel (their own logistics/distributors/retailers), as NVIDIA has already started winding down existing Maxwell 2 while also pricing the 1080/1070 in a way that the previous generation still has some kind of value (yeah, I agree the 970/980 would be a really bad buy, but the average consumer may see it as worthwhile).
The 980 has not yet been cut dramatically in price, but it will be soon, IMO, to shift those.

So initially I see it doing more harm to AMD's chain than NVIDIA's, but where it will hurt both equally is on products sitting just below the Polaris tier.
Cheers
 
Both will suffer, but AMD would be in the worse situation with their sales channel (their own logistics/distributors/retailers), as NVIDIA has already started winding down existing Maxwell 2 while also pricing the 1080/1070 in a way that the previous generation still has some kind of value (yeah, I agree the 970/980 would be a really bad buy, but the average consumer may see it as worthwhile).
The 980 has not yet been cut dramatically in price, but it will be soon, IMO, to shift those.

So initially I see it doing more harm to AMD's chain than NVIDIA's, but where it will hurt both equally is on products sitting just below the Polaris tier.
Cheers

I can't see the logic in this. If you followed the Nvidia hype train, it said how many billions they invested in Pascal, and suddenly AMD gets shafted by a low-price strategy? I don't think so...
Nvidia wants to dominate the high end, and that is where they get the money; see how much grief they got for the Founders Edition. If anything, you would favour AMD to come out of this better than Nvidia, and not by a small margin...
 
Pricing? Well, if we're all just gonna guess...

If P10 480 perf is about on par with 390, but lower power use, updated GCN, etc., then I can't see P10 480 at $229 or $249. That'd be insanely low. I'd guess it'd go for close to current 390 pricing ($300 to $350, depending on cooling solution).

Similarly for P10 480x: about $375 range (or higher).

Ken "I'm not a "money guy", but I did stay in a Holiday Inn Express last night"
 