AMD Launching Polaris 10 400 Series GPU June 1st At Computex. $299 (rumor)

I know; why don't you understand that it was something specifically designed for OEMs? I am not arguing with you on that. What you don't understand is that Polaris is a launch of a new architecture, so they don't need to worry about special parts for Apple. They will all be low wattage, so there is no need to make anything special for Apple. Not happening this time.

And you know this how?
 
And you know this how?
lol, because they are not launching the cards for Apple. You know Tonga wasn't even hyped up; it was more like a frickin' specially cooked product. What proof do you have that Apple is somehow going to get the fastest part when all the parts meet or exceed Apple's requirements? I mean, do you think AMD is trying to launch these cards for Apple? There is no reason for that. That was a once-in-a-blue-moon thing. Apple has mobility chips in iMacs, so I have no idea why you are starting this thing about Apple getting the fastest part. It has never happened at a launch. All desktop parts have been faster than anything Apple has had, or the same. Tonga wasn't even the fastest card available, so I don't see your point, to be honest.
 
Still, there are more rumors supporting there being at least two cards. Polaris has been rumored at up to 40 CUs; there is more reason for me to believe they will have at least two cards than to say they somehow need to change strategy. I don't think you are seeing what I am trying to say. You are firm on saying that AMD will have just one desktop part and one laptop part, which does not make sense whatsoever when we are seeing so many parts floating around. Maybe we can agree to disagree? lol


I don't really care about the rumors about unit counts and whatnot; I'm going by AMD's stated perf/watt split and what the node gives in perf/watt. If those are best case, that's what you get. And midrange cards are usually the best perf/watt cards, excluding HBM cards.
 
I don't really care about the rumors about unit counts and whatnot; I'm going by AMD's stated perf/watt split and what the node gives in perf/watt. If those are best case, that's what you get.
I feel like we are on totally separate pages. I am talking about different SKUs, but you are saying AMD, through their secret messages, has confirmed that there will only be one card. My head hurts lol
 
I'm not saying one card. I'm saying two GPUs for P10 and two GPUs for P11; that's four cards total. All four of those cards aren't going to have the same perf/watt increases, right? Some will have more and some will have less. If we take the best case that AMD has stated, 2.0x from the current midrange, and put that on their most performant card, a full P10 or the one with the most ALUs active, you end up with what I stated. This is why the rumors about ALU counts, clocks, all that stuff, don't matter.
 
I'm not saying one card. I'm saying two GPUs for P10 and two GPUs for P11; that's four cards total. All four of those cards aren't going to have the same perf/watt increases, right? Some will have more and some will have less. If we take the best case that AMD has stated, 2.0x from the current midrange, and put that on their most performant card, a full P10 or the one with the most ALUs active, you end up with what I stated. This is why the rumors about ALU counts, clocks, all that stuff, don't matter.

Right now I want to take Raja and choke him. Frickin' end this madness. I think this is the tightest damn launch I have ever seen. No benchmark leaks, no one knows the exact specs, fuck, no one even knows clock speeds; the rumors run from it failing validation to 1300 MHz. Madness.
 
The tightest GPU launch ever with no concrete leaks, that's for certain.

Come on, right? At least a 3DMark pseudo-bench or something. All we got is Hitman at 1440p, 60 fps, vsync locked. Supposedly on "Ultra", but who knows.
 
lol, because they are not launching the cards for Apple. You know Tonga wasn't even hyped up; it was more like a frickin' specially cooked product. What proof do you have that Apple is somehow going to get the fastest part when all the parts meet or exceed Apple's requirements? I mean, do you think AMD is trying to launch these cards for Apple? There is no reason for that. That was a once-in-a-blue-moon thing. Apple has mobility chips in iMacs, so I have no idea why you are starting this thing about Apple getting the fastest part. It has never happened at a launch. All desktop parts have been faster than anything Apple has had, or the same. Tonga wasn't even the fastest card available, so I don't see your point, to be honest.

No, they aren't launching the cards for Apple. Nor did they launch Tonga for Apple either. It was an OEM product which we never saw.

Something you have to think about: Tonga was a mid-range card. Apple won the OEM design and the top chip of that design.

We know Apple is now going to get Polaris in the iMac. Polaris 10 is a midrange card. This is why I think we "might" not even see a top Polaris chip for the retail market.
 
No, they aren't launching the cards for Apple. Nor did they launch Tonga for Apple either. It was an OEM product which we never saw.

Something you have to think about: Tonga was a mid-range card. Apple won the OEM design and the top chip of that design.

We know Apple is now going to get Polaris in the iMac. Polaris 10 is a midrange card. This is why I think we "might" not even see a top Polaris chip for the retail market.

Polaris might be a midrange part, but like you said, Tonga was something we never saw. That is not Polaris. It's an actual launch for the masses, and Apple is not just going to get a special part. This is a replacement for the 300 series, so yeah, AMD might have a bunch of design wins; I am sure Apple will be more than satisfied with its performance compared to last gen. It's just a different scenario this time. It's not a card hidden from the masses that AMD squeezed into the OEM market quietly. You are right about Tonga, but this is an all-around launch for all segments. I doubt anyone gets anything special. There will be different specs to pick from, but the highest one will surely be in desktop card form.
 
Polaris might be a midrange part, but like you said, Tonga was something we never saw. That is not Polaris. It's an actual launch for the masses, and Apple is not just going to get a special part. This is a replacement for the 300 series, so yeah, AMD might have a bunch of design wins; I am sure Apple will be more than satisfied with its performance compared to last gen. It's just a different scenario this time. It's not a card hidden from the masses that AMD squeezed into the OEM market quietly. You are right about Tonga, but this is an all-around launch for all segments. I doubt anyone gets anything special. There will be different specs to pick from, but the highest one will surely be in desktop card form.

Uhhh, we did see Tonga, just not the full version. AMD Radeon R9 285 Introduction - MSI Radeon R9 285 GAMING OC Video Card Review

Even the semi-full version of Tonga, the 380X, was not a 384-bit card like the ones in the MacBooks.

The reason why I say we "might" not see a full Polaris 10 GPU: I have no idea, tbh; rumors have been tight-lipped. But isn't it strange that we have only seen a 2300 GCN card in the leaks and not the full version?

Makes ya wonder why...(Apple maybe?)
 
LOL, the reason I think they will have cards based on 40, 36, 32 CUs and below is that at every launch they have made, I have seen them replace the x80 and x90 series cards (x being the generation number). Now they are going to release 400 series cards, and it's odd that they wouldn't replace the 390/390X and everything below them. I am skeptical that they will release just a 480X and call it a day, and there is no market for the 390/390X since no one is going to buy them. I expect them to have 480X and 490 series cards. Leave Fury alone. They have done this at almost every launch for the last few generations, and it's odd that they would only release one Polaris 10 desktop card. I may be wrong, but the logic doesn't make sense.

When was the last time they only announced one desktop card? I don't remember, ever. The Fury series was two cards, the 390 series was two, the 290 series was two. It's something they have done for some time now, and they never make notebook chips a major announcement, so I don't know why you would count those, honestly. That is a separate segment and always has been. Then the 380, 280, and so on. They won't replace all of this with one card. They have always replaced from the top down.

Raja said/indicated this
Speaking with PCPer's Ryan Shrout after the event, Raja revealed a bunch of interesting stuff about Polaris that wasn't touched upon at the event. Chief among which is performance per dollar and market positioning. To that effect Raja said that he'd like to bring 14nm technology and all its goodies to as many people as possible. Furthermore, Raja said to expect Polaris based graphics cards across the entire performance stack, from the entry level to the very top-end. He also stressed that everyone will be "very pleased" and "surprised" at what he and his team are going to do in terms of positioning.
AMD : Polaris Is All About Sweet Performance Per Dollar

Polaris is top to bottom, so yes, multiple cards. This is not Vega but Polaris. Why some stumble over simple English (not you) is perplexing :joyful:
 
The entire stack, which he also mentions later, includes multi-GPU cards. I really hope they don't do this to fill in the performance bracket; personally, I think that is just a bad idea.
 
The entire stack, which he also mentions later, includes multi-GPU cards. I really hope they don't do this to fill in the performance bracket; personally, I think that is just a bad idea.

No, he didn't. The only thing he talked about in that interview was the inevitability of multi-GPU since Moore's Law is ending.
 
Even the semi-full version of Tonga, the 380X, was not a 384-bit card like the ones in the MacBooks.

The 384-bit configuration was never used in any SKU. The 6x64-bit memory controllers were speculated based on die shots and weren't confirmed until the end of 2015 by AMD.

A 384-bit bus actually is a problem for mobility GPUs due to the number of chips and traces required for a >256-bit GDDR5 bus.
 
The 384-bit configuration was never used in any SKU. The 6x64-bit memory controllers were speculated based on die shots and weren't confirmed until the end of 2015 by AMD.

A 384-bit bus actually is a problem for mobility GPUs due to the number of chips and traces required for a >256-bit GDDR5 bus.

Um, you realize the 7970/280X has a 384-bit config? Tonga was the replacement for the 280X.

The Apple version of Tonga was 384-bit.
 
The tightest GPU launch ever with no concrete leaks, that's for certain.

Come on, right? At least a 3DMark pseudo-bench or something. All we got is Hitman at 1440p, 60 fps, vsync locked. Supposedly on "Ultra", but who knows.

And there won't be any new material info until late next week at the earliest. Marketing is waiting for the fracas over Pascal to die down a touch to ensure they're heard.
 
And there won't be any new material info until late next week at the earliest. Marketing is waiting for the fracas over Pascal to die down a touch to ensure they're heard.

Personally, I'd like to see Polaris take Pascal head-on. Take some of the wind out of the sails of the Pascal launch.
 
Um, you realize the 7970/280X has a 384-bit config? Tonga was the replacement for the 280X.

The Apple version of Tonga was 384-bit.

Tahiti was never used in a mobility product. There is no mobility GPU I am aware of with a >256-bit bus.

Which Apple product used Tonga with a 384-bit bus? Apple used Tonga mobility variants, the M295X, M390X, M395, and M395X, all of which were 256-bit.

The only reason we know Tonga even has the additional controllers is because an AMD official (Raja) confirmed it - AMD Confirms Tonga 384-bit Memory Bus, Not Enabled For Any Products | PC Perspective
Prior to that statement it was speculated based upon die shots.
 
No, he didn't. The only thing he talked about in that interview was the inevitability of multi-GPU since Moore's Law is ending.


Nope, check the interview, in the first 4 minutes, if I remember correctly.


Yeah, what you are talking about is at ~4 minutes; what I'm talking about is at ~7:30. Ryan asks Raja about mGPU Polaris (that was the conversation they were having), and Raja didn't deny it; he actually said yes, you will see it. Not sure to what degree we will see it, as he didn't explain that.
 
Right now I want to take Raja and choke him. Frickin' end this madness. I think this is the tightest damn launch I have ever seen. No benchmark leaks, no one knows the exact specs, fuck, no one even knows clock speeds; the rumors run from it failing validation to 1300 MHz. Madness.

Probably nothing good to leak. ;)

If there was, I'd leak it on purpose to hold people off from buying the 1080/1070, or just make an announcement.
 
Probably nothing good to leak. ;)

If there was, I'd leak it on purpose to hold people off from buying the 1080/1070, or just make an announcement.

You can't buy the 1080, and especially the 1070, yet.

The 1070's launch is June 10th.

AMD's event is at Computex before that.
 
You can't buy the 1080, and especially the 1070, yet.

The 1070's launch is June 10th.

AMD's event is at Computex before that.

The 1080 launch is before Computex, I thought?

We'll see if they actually announce anything.
 
They might announce for an E3 release; that is what I'm guessing right now. They have done this in the past.

And people, food for thought about clock speeds:

Don't expect AMD cards to hit the same clocks as nV cards even though their chips are smaller; it's a design thing.

Hexus has a good explanation of why Pascal does what it does with clocks.

Review: Nvidia GeForce GTX 1080 Founders Edition (16nm Pascal) - Graphics - HEXUS.net - Page 2

Jonah Alben, who oversaw Pascal, said to us that a huge amount of work had been done to minimise the number of 'violating paths' that stand in the way of additional frequency. This is critical-path analysis by another name, where engineers pore over the architecture to find and eliminate the worst timing paths that actually limit chip speed. If successful, as appears to be the case here, the frequency potential is shifted to the right, or higher. Alben reckoned that Nvidia managed a good 'few hundred megahertz' by going down this path, if you excuse the pun, so Pascal is Maxwell refined to within an inch of its life.
 
They might announce for an E3 release; that is what I'm guessing right now. They have done this in the past.

And people, food for thought about clock speeds:

Don't expect AMD cards to hit the same clocks as nV cards even though their chips are smaller; it's a design thing.

Hexus has a good explanation of why Pascal does what it does with clocks.

Review: Nvidia GeForce GTX 1080 Founders Edition (16nm Pascal) - Graphics - HEXUS.net - Page 2

So true. Nvidia, to be honest, didn't do a major redesign. Their chips always wanted to clock higher; you could see this with Maxwell.

So Pascal is basically Maxwell on steroids. The 980 Ti is 1000 MHz base, and now you have the 1080 at 1600 base, and it looks like it's always boosting around 1700+; that is more than a 70% increase in clock speed at stock, which is pretty insane. Architecture-wise they focused on VR enhancements and memory compression. Other than that, this was a safe route, and they focused on the one thing Maxwell already loved: clock speed. I bet if you clock a 1080 down to 980 Ti speeds it might actually end up being slower. But that really doesn't matter, because now we have a stable 1700 clock at stock. Pretty insane; kudos to Nvidia.

I think AMD has to go their own route; they can't get that insane a clock speed bump, so they invested in making the shaders more efficient.
 
Nvidia in the past had double-clocked shaders but for power reasons went to synchronously clocked shaders like AMD. AMD has a tendency to get performance more from additional shaders than from more clock speed, with potential power savings from doing this. Higher frequencies do add a lot of extra power (exponential). Still, at 1700 boost the 1080 remains a 180 W card; I think that is rather amazing. With faster clocks you can make smaller chips for the same performance, or maximize your big chip's performance beyond anyone else's.
 
AMD shouldn't need insane clocks if they do more work per cycle, which is what really matters, because even the lower-clocked Fury X can keep up with a 980 Ti. I hope this doesn't turn into a Netburst vs. Athlon deal like the old days, where Intel had the GHz advantage but still got pummeled. I suspect this could happen with Polaris and Vega, especially in DX12.
 
Nvidia in the past had double-clocked shaders but for power reasons went to synchronously clocked shaders like AMD. AMD has a tendency to get performance more from additional shaders than from more clock speed, with potential power savings from doing this. Higher frequencies do add a lot of extra power (exponential). Still, at 1700 boost the 1080 remains a 180 W card; I think that is rather amazing. With faster clocks you can make smaller chips for the same performance, or maximize your big chip's performance beyond anyone else's.


The exponential increase in power with frequency only happens when voltage is increased too; if they don't need to increase voltage, you only get linear or close-to-linear increases.
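That's the standard CMOS dynamic-power relation, P ≈ C·V²·f: linear in frequency, quadratic in voltage. A minimal sketch of how the two cases differ (all numbers are illustrative, not real GPU figures):

```python
def dynamic_power(c_eff, voltage, freq_hz):
    """Classic CMOS dynamic-power approximation: P = C * V^2 * f."""
    return c_eff * voltage ** 2 * freq_hz

base = dynamic_power(c_eff=1.0, voltage=1.00, freq_hz=1.6e9)

# +25% frequency at the same voltage: power rises linearly (+25%).
same_v = dynamic_power(1.0, 1.00, 2.0e9)

# +25% frequency but needing +10% voltage to stay stable:
# power rises superlinearly (1.25 * 1.10^2 ~= 1.51x).
more_v = dynamic_power(1.0, 1.10, 2.0e9)

print(same_v / base)  # 1.25
print(more_v / base)  # ~1.51
```

This is why overclocks that stay inside the stock voltage curve cost relatively little power, while pushing voltage for the last few hundred MHz costs a lot.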
 
AMD shouldn't need insane clocks if they do more IPC, which is what really matters. I hope this doesn't turn into a Netburst vs. Athlon deal like the old days, where Intel had the GHz advantage but still got pummeled. I suspect this could happen with Polaris and Vega, especially in DX12.


No changes in IPC; changes in throughput from the removal of bottlenecks, and we know the throughput increases are around 10%, max 20%, if they don't consider any changes to power consumption (this is why 10% is a safe bet, at least for midrange: best of both worlds, a slight increase in throughput with a slight decrease in power consumption). IPC has not changed, as that would change the shader compiler greatly and also affect how the cache, memory management, and other parts of the GPU interact, which would create other bottlenecks too. This is why AMD shifted to a scalar architecture to begin with, as did nV. IPC stays the same: 2 operations per clock per ALU.
 
The exponential increase in power with frequency only happens when voltage is increased too; if they don't need to increase voltage, you only get linear or close-to-linear increases.

Good testing here on power requirements for the 1080 when OC'd:
Power Consumption Results - Nvidia GeForce GTX 1080 Pascal Review

The 1080 when OC'd got up to 392 W at 2100 MHz boost (max spikes, but for the most part less than 300 W, which the 980 would exceed much more often). Results: a 12% clock increase gave the 1080 an 8.9% performance increase with an overall power increase of 19.1%, and that is for one game. That is not linear, nor is it exponential, so we are both wrong :D.
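Taking those quoted figures at face value, +19.1% power for a +12% clock is actually consistent with the P ∝ f·V² model plus a small voltage bump. A rough sanity check (assuming the review's numbers are accurate):

```python
# Quoted OC result: a 12% clock increase cost 19.1% more power.
clock_gain = 1.12
power_gain = 1.191

# If P ~ f * V^2, the voltage term scaled by power_gain / clock_gain,
# so the implied voltage increase is the square root of that ratio.
implied_v = (power_gain / clock_gain) ** 0.5
print(f"implied voltage bump: {implied_v - 1:.1%}")  # ~3.1%
```

So "neither linear nor exponential" is just what a frequency bump plus ~3% extra voltage looks like under the usual model.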
 
No changes in IPC; changes in throughput from the removal of bottlenecks, and we know the throughput increases are around 10%, max 20%, if they don't consider any changes to power consumption (this is why 10% is a safe bet, at least for midrange: best of both worlds, a slight increase in throughput with a slight decrease in power consumption). IPC has not changed, as that would change the shader compiler greatly and also affect how the cache, memory management, and other parts of the GPU interact, which would create other bottlenecks too. This is why AMD shifted to a scalar architecture to begin with, as did nV. IPC stays the same: 2 operations per clock per ALU.

Performance will go up in real usage: better cache, prefetch, etc. Also good culling, so it is not spending as much time wasting shaders on something that is never going to be seen. In other words, it may do 2 operations per clock a hell of a lot more times when needed, and not when not needed or when waiting for an instruction; less waste, way more efficient. Not sure what your point is here. Slight decrease in power consumption? o_O A 2.5x decrease in power for the same performance; in other words, kick ass. You are spending way too much time worrying about AMD Polaris :sneaky:
 
Personally, I'd like to see Polaris take Pascal head-on. Take some of the wind out of the sails of the Pascal launch.
It might be better if they just sit below that with a better price. Let Vega do the heavy lifting.

AMD is better off selling more cards in the lower price bracket and leaving the "high end" to Nvidia for now.
 
Performance will go up in real usage: better cache, prefetch, etc. Also good culling, so it is not spending as much time wasting shaders on something that is never going to be seen. In other words, it may do 2 operations per clock a hell of a lot more times when needed, and not when not needed or when waiting for an instruction; less waste, way more efficient. Not sure what your point is here. Slight decrease in power consumption? o_O A 2.5x decrease in power for the same performance; in other words, kick ass. You are spending way too much time worrying about AMD Polaris :sneaky:


I'm worried because it gives me fewer companies to negotiate with, lol, once my cinematic and demo are ready.

If they have a solid product and good earnings potential, that's better for devs, as they would be more willing to push more money into a potentially good product through marketing or engineers.
 
Good testing here on power requirements for the 1080 when OC'd:
Power Consumption Results - Nvidia GeForce GTX 1080 Pascal Review

The 1080 when OC'd got up to 392 W at 2100 MHz boost (max spikes, but for the most part less than 300 W, which the 980 would exceed much more often). Results: a 12% clock increase gave the 1080 an 8.9% performance increase with an overall power increase of 19.1%, and that is for one game. That is not linear, nor is it exponential, so we are both wrong :D.

The new Boost 3.0 increases voltage too ;)
 
Performance will go up in real usage: better cache, prefetch, etc. Also good culling, so it is not spending as much time wasting shaders on something that is never going to be seen. In other words, it may do 2 operations per clock a hell of a lot more times when needed, and not when not needed or when waiting for an instruction; less waste, way more efficient. Not sure what your point is here. Slight decrease in power consumption? o_O A 2.5x decrease in power for the same performance; in other words, kick ass. You are spending way too much time worrying about AMD Polaris :sneaky:

Indeed, as we all know, AMD's current setup doesn't equate its TFLOPS with gaming performance. A 390X has similar TFLOPS to a 980 Ti, and in situations where it can put those TFLOPS to work, it actually matches a 980 Ti.

So that's where AMD focused. Remember the 30:70 claim from Raja: out of the 2.5x perf/W increase, 30% of that is from architecture improvements and 70% from the node.

If they manage to increase throughput or shader efficiency by 30%, that puts a 390X into 980 Ti territory or even beyond (the 390X is ~20% behind at 1440p).

Because of this focus, they don't need huge raw clock boosts. The leaks show ~1.3 GHz for the notebook SKUs; it's not hard to imagine desktop running at 1.4 or 1.5 GHz. That's roughly a 33-43% improvement in clocks over the current 390/390X's 1050 MHz.
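As a quick check on those clock ratios, taking the 390X's 1050 MHz reference clock as the baseline (that figure is my assumption, not something stated in the thread):

```python
baseline_mhz = 1050  # R9 390X reference clock (assumed baseline)

for desktop_mhz in (1400, 1500):
    uplift = desktop_mhz / baseline_mhz - 1
    print(f"{desktop_mhz} MHz -> +{uplift:.0%}")  # +33%, then +43%
```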
 
Good testing here on power requirements for the 1080 when OC'd:
Power Consumption Results - Nvidia GeForce GTX 1080 Pascal Review

The 1080 when OC'd got up to 392 W at 2100 MHz boost (max spikes, but for the most part less than 300 W, which the 980 would exceed much more often). Results: a 12% clock increase gave the 1080 an 8.9% performance increase with an overall power increase of 19.1%, and that is for one game. That is not linear, nor is it exponential, so we are both wrong :D.

Just to add, it is rather tricky to calculate real-world values because of brief spikes versus continuous, sustained behaviour; case in point, 300 W (let alone 390 W) actually breaks the power spec of the 1080/PC with its PCIe slot + 6-pin.
In reality it is still inside the threshold, because the measurement windows are such small intervals that they capture spikes and bursts that are not relevant.

Cheers
 
Indeed, as we all know, AMD's current setup doesn't equate its TFLOPS with gaming performance. A 390X has similar TFLOPS to a 980 Ti, and in situations where it can put those TFLOPS to work, it actually matches a 980 Ti.

So that's where AMD focused. Remember the 30:70 claim from Raja: out of the 2.5x perf/W increase, 30% of that is from architecture improvements and 70% from the node.

If they manage to increase throughput or shader efficiency by 30%, that puts a 390X into 980 Ti territory or even beyond (the 390X is ~20% behind at 1440p).

Because of this focus, they don't need huge raw clock boosts. The leaks show ~1.3 GHz for the notebook SKUs; it's not hard to imagine desktop running at 1.4 or 1.5 GHz. That's roughly a 33-43% improvement in clocks over the current 390/390X's 1050 MHz.


30% of the 2.5x perf per watt doesn't equate to 30% throughput...

Nor does nV's use of the 16nm process give them a 50% clock increase; it's more like 25% from the node and 75% from Pascal's design for clock speed. That's easy to figure out, as Maxwell's max clocks end up around 1500, whereas Pascal seems to do just fine above 2000, though we'll have to see how far Pascal can really go with the thermal and power barriers removed.
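One way to make the 30:70 split concrete without conflating it with throughput: treat it as a multiplicative decomposition of the 2.5x perf/W figure. The decomposition method below is my assumption (AMD never published the math), but it shows why a "30% share" is nowhere near a 30% throughput gain:

```python
total_gain = 2.5  # AMD's claimed perf/W improvement

# Split the gain 30:70 between architecture and node as exponents,
# so the two factors multiply back to the 2.5x total.
arch_factor = total_gain ** 0.30
node_factor = total_gain ** 0.70

print(round(arch_factor, 2))  # ~1.32x from architecture
print(round(node_factor, 2))  # ~1.9x from the process node
```

Under that reading, the architecture contributes roughly a 1.3x efficiency factor, not a 30% throughput bump.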
 
From what I've read, it will have 390X performance (maybe better overclocking as well?) with excellent efficiency for less than $299 (I'm guessing $279). If all that turns out to be correct, then AMD will make a decent amount of money from this card. Of course, a solid argument can be made for a $379 1070, but that $100 can make a big difference in a new-build budget. Also, the 490X will probably max out 1440p while not being robust enough for 4K, and I'm guessing the same can be said for the 1070. In other words, the 1070 might find itself in an awkward price/performance position. Does that make sense?
 
From what I've read, it will have 390X performance (maybe better overclocking as well?) with excellent efficiency for less than $299 (I'm guessing $279). If all that turns out to be correct, then AMD will make a decent amount of money from this card. Of course, a solid argument can be made for a $379 1070, but that $100 can make a big difference in a new-build budget. Also, the 490X will probably max out 1440p while not being robust enough for 4K, and I'm guessing the same can be said for the 1070. In other words, the 1070 might find itself in an awkward price/performance position. Does that make sense?

I expect it to be faster than a 390X. If they call it the 490X, then it will definitely be faster. I highly doubt they replace the 390X this time with the same performance; otherwise people will just call it a rebadge, and I doubt AMD wants to be labeled as that after launching a whole new product. They have really worked on the architecture. Even at $299, I expect it to be 10-20% faster than a 390X.

If people see the 490X matching the 390X's performance, it leaves a bad taste no matter what the price is. They had better call it a 480X, just for the name's sake and to send a better message to consumers. 490X = 390X is bad marketing for AMD, and they can use less of that right about now. lol
 
From what I've read it will have 390X performance (maybe better overclocking as well?) with excellent efficiency for less than $299 (guessing $279). If all that turns out to be correct, then AMD will make a decent amount of money from this card. Of course, a solid argument can be made for a $379 1070, but that $100 can make a big difference in a new build budget. Also, the 490X will probably max-out 1440p while not being robust enough for 4K.........and I'm guessing the same can be said for the 1070. In other words, the 1070 might find itself in an awkward price/performance position. Does that make sense?
Yes.

The 970 sold so well because it offered a hell of a lot for $349 and was not far off the 980 in performance. It forced AMD to bring the 290X down from over $500, as well as the 290. Now Nvidia's 1070 is $449 out the door and looks like it will have significantly less performance than a 1080: a quarter fewer shaders and a slower clock speed. Earlier in the year, Raja specified that Polaris will cover the low to high end of the market. Not sure how high is high for Polaris: $399, $299, etc. I do believe AMD will have cards performing at 390X level or better for less than $300. That does not mean they won't have better-than-Fury-performing cards in the $300 range. Anyway, if AMD can have a $349 card that competes with or beats a 1070, that would shake things up.

Nvidia went for speed with the Maxwell design; AMD went for efficiency, the biggest updates to the GCN arch ever, and a smaller fab process. Can't wait to see how this pans out.
 
Yes.

The 970 sold so well because it offered a hell of a lot for $349 and was not far off the 980 in performance. It forced AMD to bring the 290X down from over $500, as well as the 290. Now Nvidia's 1070 is $449 out the door and looks like it will have significantly less performance than a 1080: a quarter fewer shaders and a slower clock speed. Earlier in the year, Raja specified that Polaris will cover the low to high end of the market. Not sure how high is high for Polaris: $399, $299, etc. I do believe AMD will have cards performing at 390X level or better for less than $300. That does not mean they won't have better-than-Fury-performing cards in the $300 range. Anyway, if AMD can have a $349 card that competes with or beats a 1070, that would shake things up.

Nvidia went for speed with the Maxwell design; AMD went for efficiency and a smaller fab process. Can't wait to see how this pans out.

I think we will see Polaris up to the $350 mark. Maybe a version at $350 faster than the 1070 and a version at $300 trading blows with the 1070. Who knows; we will see. But it seems like that would work.
 