AMD Radeon RX 480 Supplies at Launch

Status
Not open for further replies.
Wait, actual power reported from rumors is around 110W at 1266MHz? So I disagree that it's the same power usage wise. To say it is at stock speeds is simply wrong. Now yes, we need to wait for reviews, but you can't be so sure given that every rumor has been saying it's around 110W at stock.

Same TDP. AMD said 150W, the same as what Nvidia said for the 1070. I genuinely have no idea what the rumors say about power consumption.
 
Same TDP. AMD said 150W, the same as what Nvidia said for the 1070.

lol, so we are going by TDP now, not actual in-game power usage? So if the 1070 uses 160W under gaming and Polaris uses 110W at stock clocks, it's the same? To me 110W actual usage is not the same as 160W, so I couldn't care less about TDP. The 1070 is using around 160W on average. Yes, it is faster and probably more efficient, but 110W vs 160W at stock clocks is not the same.

You are implying that at stock clocks it uses the same power as the 1070, which is not true! Yes, wait for the reviews, but from what we have seen from several rumors it's around 110W at stock.
 
lol, so we are going by TDP now, not actual in-game power usage? So if the 1070 uses 160W under gaming and Polaris uses 110W at stock clocks, it's the same? As educated as you guys are about this technical stuff, sometimes you will do anything to support your argument. To me 110W actual usage is not the same as 160W, so I couldn't care less about TDP. The 1070 is using around 160W on average. Yes, it is faster and probably more efficient, but 110W vs 160W at stock clocks is not the same.
Of course not! Power usage matters! The 1070 is 150W average if I'm not mistaken, but whatever; just pointing out they have the same announced TDP.
 
Of course not! Power usage matters! The 1070 is 150W average if I'm not mistaken, but whatever; just pointing out they have the same announced TDP.

Cool! Yeah, I checked a few sites. The 1070 seems to be using about 160W on average.
 
power_maximum.png
Maximum, not average.

TPU actually measures at the 12V rails, the most accurate measurement in a review AFAIK.
 
Now that it seems like RX 480 is the full die -

Pitcairn: 212mm^2 at 28nm (then the leading-edge process), using 2GB (highest density) 5Gbps (second speed bin, compared to 6Gbps) GDDR5. Launch MSRP was $350, cut to $300 a few months later due to pressure from Nvidia.

Polaris 10: 232mm^2 at 14nm (the current leading-edge process), using 8GB (highest density) 8Gbps (current highest bin) GDDR5. Launch MSRP is $229.

Current 14nm/16nm processes are generally considered to have higher cost per mm^2 and per transistor relative to 28nm at the respective points in their lifetimes.

Further, let's look at Pitcairn relative to GK104 and P10 relative to GP104. Keep in mind that the GTX 1070 is further cut down than the GTX 670 was.

GK104 die size 292mm^2 (37.7% larger than Pitcairn). GTX 670 at $400 (33.3% more than the 7870 at $300, 14.3% more at its $350 launch price).
GP104 die size 314mm^2 (35.3% larger than P10). GTX 1070 at $380 (65.2% more than the RX 480 at $230).

Basically, P10 is both larger and cheaper compared to Pitcairn relative to the competition. Interestingly, the 7870 also had higher theoretical TFLOPS than the GTX 670, while the RX 480 is lower than the GTX 1070.

AMD is conceding quite a bit more in terms of pricing relative to Nvidia this time around.
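The percentages in this post can be double-checked with a quick script. All die sizes and prices are the figures quoted above (the 7870's $350 launch price versus its later $300 price accounts for the two 670 comparisons):

```python
# Sanity check of the die-size and price ratios quoted in the post above.
# Die areas in mm^2, prices in USD; all figures are the poster's.

def pct_larger(a, b):
    """How much larger (or more expensive) a is than b, as a percentage."""
    return (a / b - 1) * 100

# Kepler generation: GK104 (GTX 670) vs Pitcairn (HD 7870)
print(f"GK104 vs Pitcairn die:  {pct_larger(292, 212):.1f}% larger")   # ~37.7%
print(f"GTX 670 vs 7870 @$300:  {pct_larger(400, 300):.1f}% more")     # ~33.3%
print(f"GTX 670 vs 7870 @$350:  {pct_larger(400, 350):.1f}% more")     # ~14.3%

# Pascal generation: GP104 (GTX 1070) vs Polaris 10 (RX 480)
print(f"GP104 vs P10 die:       {pct_larger(314, 232):.1f}% larger")   # ~35.3%
print(f"GTX 1070 vs RX 480:     {pct_larger(380, 230):.1f}% more")     # ~65.2%
```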
 
More opinion you are claiming to be fact? Thanks for your opinion. While well founded, it is still your opinion... I disagree with it. I think it is wise to wait until failure rates drop, HBM2 is readily available, and they have time to evaluate and meet/beat Nvidia's offerings. Considering a lot of people recently jumped on the Titan X/980 Ti/Fury X trains, the high-end market isn't exactly impatient for the next round. I believe you or ledra himself are gloating about holding off on upgrading because your 980 Ti is plenty, so where is the rush for AMD to fill this market segment?

High end market is always impatient for more performance.
 
AMD confirms high-end Polaris GPU will be released in 2016

In an interview with VentureBeat, graphics chief Raja Koduri explained that one of those GPUs is aimed at thin-and-light laptops and entry-level desktops, while the other is a larger, high-performance GPU designed to take back the premium graphics card market currently dominated by rival Nvidia. However, the overall target for Polaris is still "console-class gaming on a thin-and-light notebook."

I interviewed Koduri at the 2016 International CES, the big tech trade show in Las Vegas last week. He acknowledged that AMD intends to put graphics back in the center. And he said that 2016 will be a very big year for the company as it introduces its advanced FinFET manufacturing technology, which will result in much better performance per watt — or graphics that won’t melt your computer. Koduri believes this technology will help AMD beat rivals such as Nvidia. AMD’s new graphics chips will hit during the middle of 2016, Koduri said.

A separate statement issued by AMD's Robert Hallock confirmed that Polaris will use HBM (high bandwidth memory) or GDDR5, depending on the "market segment."

Well, AMD was very, very vague with their statements around the unveiling of Polaris in January. My first impression after reading this article and the interview is that Polaris 11, the smaller die, will be targeted at mobiles and entry desktop, while Polaris 10 will be the high-end performance part. There are also mentions of GDDR5 or HBM depending on market segment, suggesting Polaris was going to span multiple segments, not just mainstream.

We can also infer that Polaris 10 will use either HBM or GDDR5, since Polaris 11 already spans mobiles and entry desktop, which has no need for HBM.

This is Polaris 10 and that’s Polaris 11. In terms of what we’ve done at the high level, it’s our most revolutionary jump in performance so far. We’ve redesigned many blocks in our cores. We’ve redesigned the main processor, a new geometry processor, a completely new fourth-generation Graphics Core Next with a very high increase in performance. We have new multimedia cores, a new display engine.

I don't know but Kyle sounds very correct right about now. Note that he said jump in performance, not performance per watt.

Oh and here's the quote regarding their supposed advantage in transitioning to FinFET.

This is very early silicon, by the way. We have much more performance optimization to do in the coming months. But even in this early silicon, we’re seeing numbers versus the best class on the competition running at a heavy workload, like Star Wars—The competing system consumes 140 watts. This is 86 watts. We believe we’re several months ahead of this transition, especially for the notebook and the mainstream market. The competition is talking about chips for cars and stuff, but not the mainstream market.

This was the same time they were demoing Star Wars Battlefront on Polaris 11.
 
Last edited:
The 1070 has about a fifth of its die completely cut off, and P10 is made on a smaller process.

If you actually took the AMD Gaming Evolved stickers off your glasses and did the basic math with the 10% size difference between TSMC's and Samsung's/GloFo's processes, you would come to the conclusion that P10 should come very close to the 1070 at least.

It does not, so no, P10 is indeed short of where it should have been. It may have happened because the 1070 turned out too good for nV, but that's irrelevant.
I wouldn't jump to conclusions on how good Pascal turned out yet (it's looking very good at the moment) until yields and sufficient supply are established. Also, GP106 will be competing against a bigger chip than probably planned for. Pascal should excel at the P11 size, though, as long as Nvidia can get TSMC to make enough of them.

Bottom line: it does not matter. What matters is what you need, what is available, and at what price. The RX 480 looks rather good in that department, minus the real reviews. Also, how high will it overclock, neglecting power usage? Is it a super OCer with some major power? A good OC, just average, or worse? If it can handle 250W plus (I doubt it), there will be plenty of folks who will have some fun pushing it. The cards are not dealt yet or shown.
 
I wouldn't jump to conclusions on how good Pascal turned out yet (it's looking very good at the moment) until yields and sufficient supply are established. Also, GP106 will be competing against a bigger chip than probably planned for. Pascal should excel at the P11 size, though, as long as Nvidia can get TSMC to make enough of them.

Bottom line: it does not matter. What matters is what you need, what is available, and at what price. The RX 480 looks rather good in that department, minus the real reviews. Also, how high will it overclock, neglecting power usage? Is it a super OCer with some major power? A good OC, just average, or worse? If it can handle 250W plus (I doubt it), there will be plenty of folks who will have some fun pushing it. The cards are not dealt yet or shown.
Well, it was not a jump; I am just keeping open the possibility that it was Pascal clocking higher than expected rather than Polaris underperforming as severely.
 
Now that it seems like RX 480 is the full die -

Pitcairn: 212mm^2 at 28nm (then the leading-edge process), using 2GB (highest density) 5Gbps (second speed bin, compared to 6Gbps) GDDR5. Launch MSRP was $350, cut to $300 a few months later due to pressure from Nvidia.

Polaris 10: 232mm^2 at 14nm (the current leading-edge process), using 8GB (highest density) 8Gbps (current highest bin) GDDR5. Launch MSRP is $229.

Current 14nm/16nm processes are generally considered to have higher cost per mm^2 and per transistor relative to 28nm at the respective points in their lifetimes.

Further, let's look at Pitcairn relative to GK104 and P10 relative to GP104. Keep in mind that the GTX 1070 is further cut down than the GTX 670 was.

GK104 die size 292mm^2 (37.7% larger than Pitcairn). GTX 670 at $400 (33.3% more than the 7870 at $300, 14.3% more at its $350 launch price).
GP104 die size 314mm^2 (35.3% larger than P10). GTX 1070 at $380 (65.2% more than the RX 480 at $230).

Basically, P10 is both larger and cheaper compared to Pitcairn relative to the competition. Interestingly, the 7870 also had higher theoretical TFLOPS than the GTX 670, while the RX 480 is lower than the GTX 1070.

AMD is conceding quite a bit more in terms of pricing relative to Nvidia this time around.



AMD confirms high-end Polaris GPU will be released in 2016

Well, AMD was very, very vague with their statements around the unveiling of Polaris in January. My first impression after reading this article and the interview is that Polaris 11, the smaller die, will be targeted at mobiles and entry desktop, while Polaris 10 will be the high-end performance part. There are also mentions of GDDR5 or HBM depending on market segment, suggesting Polaris was going to span multiple segments, not just mainstream.

We can also infer that Polaris 10 will use either HBM or GDDR5, since Polaris 11 already spans mobiles and entry desktop, which has no need for HBM.

I don't know, but Kyle sounds very correct right about now. Note that he said jump in performance, not performance per watt.

Oh and here's the quote regarding their supposed advantage in transitioning to FinFET.

This was the same time they were demoing Star Wars Battlefront on Polaris 11.

Well, at the end of the day they are right. They brought their mainstream cards to market first, ahead of the competition. That part is absolutely true. Now, Vega is essentially based off the Polaris architecture, maybe with the difference that it is supposed to be using IP level 9.0. Polaris is using GDDR5; with a little OC it could get to Fury level. That is not bad for a chip with 2300 shaders competing with 3500 shaders. If it were using GDDR5X we would probably be talking better performance due to better bandwidth. It looks like AMD went with the card they could produce the most of, in mainstream. They hit that just fine; now ramp it up to the higher end in Vega.

I think a lot of people here say this over and over, but now they forget it. AMD didn't wake up six months ago and say let's redesign Polaris 10 for midrange because it can't do high end. They would have had to have some expectations of where this chip would land with 36 CUs, and it looks like it was the only part they planned until Vega. It takes time to design this stuff. It may have to do with designing GCN 4.0 for midrange first and getting all the kinks out of the way before ramping it up to high end on the new process. It just makes sense. AMD can't live through another year of a high-end part that they can't produce enough of.
 
Polaris will never beat Pascal, because Polaris is a GPU and Pascal is an architecture family.

Anyway.

The 1070 is a 25% cut of a 314mm^2 die. We can assume it would have been about 280mm^2 on the GloFo process; consider the 25% cut as a 20% reduction in die area and we have roughly 224mm^2.
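Spelled out, the back-of-envelope normalization in that paragraph looks like this. Note that both scaling factors (a ~10% density advantage for GloFo/Samsung 14nm over TSMC 16nm, and treating the 25% unit cut as a 20% area reduction) are this poster's assumptions, not measured figures:

```python
# Back-of-envelope die-area normalization from the post above.
# Both factors are the poster's assumptions, not measured values:
# GloFo/Samsung 14nm assumed ~10% denser than TSMC 16nm, and the
# 1070's 25% unit cut treated as roughly a 20% area reduction.

GP104_AREA = 314.0              # mm^2, full GP104 die on TSMC 16nm
on_14nm = GP104_AREA * 0.90     # ~283 mm^2 ("280mm" in the post)
cut_down = on_14nm * 0.80       # hypothetical die-area cost of a "14nm 1070"

print(f"GP104 rescaled to 14nm:    ~{on_14nm:.0f} mm^2")
print(f"Hypothetical '14nm 1070':  ~{cut_down:.0f} mm^2")  # ~226, vs P10's 232
```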

Polaris 10 could well have been trading blows with a 1070, with a price tag to match. Instead, it is trading blows with a 980.

Where are the Polaris high-end parts you speak of? In the AnandTech feature. It's conjecture, because no sane person would have thought they were letting the performance segment go.

There's no reason why P10 couldn't cover the 1070/1060 Ti segment, and P11 the entry-level 960/950 segment.

No reason at all. Why do you categorically exclude the possibility that AMD were counting on higher clocks and better scaling with P10?

You realize Nvidia and AMD don't have to have matching performance in every segment, right?

There's no reason why P10 couldn't compete with the 1070, and a cut-down Vega 10 compete with the 1080.


You keep shrinking down the 1070 due to it being a cut-down and 16nm vs 14nm, but I don't know if that last part is as valid and simple as you are making it out to be. If Nvidia had designed a chip from the outset to use up a 224mm^2 die, I don't know what kind of real-world performance we'd see, and it will probably take the release of a 1060 to get a closer test of how a smaller-die Pascal chip performs.


But you still avoided something basic: Nvidia's Pascal chip in the 1080/1070 was engineered and designed to fill out a 314mm^2 die. Polaris tops out in the 230s. You can argue all day that the cut-down 1080 in the 1070 should be the comparison, but it does not get rid of the FACT that the top-of-the-line Polaris part was designed to have a lower top-end potential than Pascal.

Do we need to get into car analogies? A 3-liter engine vs a 2-liter engine. BUT the 3-liter has a cut-down variation so that only a bit over 2 liters is usable... That does NOT change the fact that the 2-liter engine was targeting a lower top-end performance range than the engine designed to use a full 3 liters.


I do think the relative performance of Polaris will be useful information in terms of whether it was on target or below AMD's goals. Can it beat out a 390X, for example? If it can match or tie a 390X, then it's within the 35% die-increase premium of the full 1080 when the stars are aligned with DX12 for AMD cards, like in Hitman.

Nvidia GeForce GTX 1080 review

If it's significantly slower than the 390X (some overclocking is allowed for Polaris, since the 390X is kind of heavily pre-overclocked), then that will suggest it's less than ideal.

But if it can match or exceed a 390X in DX12 titles, it should be in the ballpark compared to cards like the 1080 once chip size is factored in (and again, I'm not sure how valid adjusting the density for the differences between 14nm and 16nm is; does one process from one fab clock higher than another? There seem to be a lot of variables there that are not one-to-one).
 
You keep shrinking down the 1070 due to it being a cut-down and 16nm vs 14nm, but I don't know if that last part is as valid and simple as you are making it out to be. If Nvidia had designed a chip from the outset to use up a 224mm^2 die, I don't know what kind of real-world performance we'd see, and it will probably take the release of a 1060 to get a closer test of how a smaller-die Pascal chip performs.

But you still avoided something basic: Nvidia's Pascal chip in the 1080/1070 was engineered and designed to fill out a 314mm^2 die. Polaris tops out in the 230s. You can argue all day that the cut-down 1080 in the 1070 should be the comparison, but it does not get rid of the FACT that the top-of-the-line Polaris part was designed to have a lower top-end potential than Pascal.

Do we need to get into car analogies? A 3-liter engine vs a 2-liter engine. BUT the 3-liter has a cut-down variation so that only a bit over 2 liters is usable... That does NOT change the fact that the 2-liter engine was targeting a lower top-end performance range than the engine designed to use a full 3 liters.

I do think the relative performance of Polaris will be useful information in terms of whether it was on target or below AMD's goals. Can it beat out a 390X, for example? If it can match or tie a 390X, then it's within the 35% die-increase premium of the full 1080 when the stars are aligned with DX12 for AMD cards, like in Hitman.

Nvidia GeForce GTX 1080 review

If it's significantly slower than the 390X (some overclocking is allowed for Polaris, since the 390X is kind of heavily pre-overclocked), then that will suggest it's less than ideal.

But if it can match or exceed a 390X in DX12 titles, it should be in the ballpark compared to cards like the 1080 once chip size is factored in (and again, I'm not sure how valid adjusting the density for the differences between 14nm and 16nm is; does one process from one fab clock higher than another? There seem to be a lot of variables there that are not one-to-one).

It was not designed to fill out a 314mm^2 die; it ended up filling a 314mm^2 die, while the target was probably "around 300, like GK104". Also, even with that said, the only thing a "designed" die size really affects is the number of chips you get from a wafer, nothing else.

Next, your car analogy is horrid. To an extent, it's true that the overall chassis, transmission, et cetera of a car that offers a 3-liter engine model are probably better than those of a car that tops out at a 2-liter engine. But we know that transistor-wise, the GP104 and P10 uncore are fairly similar (256-bit bus with possible GDDR5X support and mostly similar display output/codec support), so the analogy does not work at all.

And you keep ignoring the fact that the die premium of a hypothetical 280mm^2 GP104 at 14nm would be more like 50mm^2 (less than a quarter), rather than "35%".

Either way, embargo is over in at most a week, so we'll see soon. I'll stand by past bets, either way.
 
Vega and HBM2 are another problem, namely that Nvidia will have no trouble matching the performance and the price without using HBM2.
Because you typed it first right?
You're ignoring everything you're unhappy with, you're ignoring numerous lengthy posts, you're ignoring what people said and twisting words. So you're clearly not in this for an actual discussion, you're here to speculate about our intentions. Color me surprised it's the negative kind. Lol
I would ignore anything you wrote after the previous post
It is a performance failure. Its die size puts it right in the 1070 zone. It's performing 20% slower than a 390X in the only official AMD benchmarks available. Do you consider that a resounding success? You were one of the people saying GCN is a better architecture than Maxwell because of "hardware scheduler and async" and whatever else it was you said. So why is AMD focusing on fixing the things that made Maxwell great and GCN not so much, like front-end improvements and the addition of primitive discard, which will finally allow GCN not to die from tessellation?
At the same time they're losing 20% compared to a 390X in their beloved AotS benchmark. AMD is playing catch-up, man, and they're losing performance where you said it counts before, to gain it where you said it didn't count, lol. Nvidia "Gimpworks" will soon become the premier proving ground for the RX 480.
You are the only person who consistently posts Nvidia garbage in AMD threads. Not only do you draw conclusions no one else does for the sake of pissing off people in the AMD forums, they're false, as is this statement. You purposely draw the RX 480 into the 1070 area, which it does not compete with. AMD set out at GDC 2016 (March) that the performance part would be Vega, coming next year. You ignore this and just strap the RX 480 onto your own little tour of failure while parading Nvidia superiority.

Let me make it really clear to the Nvidia fanboys in this thread: IF Nvidia is so superior, where is their $199/$229 part? Can't they compete? Are Nvidia's Pascal parts failures?

AMD confirms high-end Polaris GPU will be released in 2016
Well, AMD was very, very vague with their statements around the unveiling of Polaris in January. My first impression after reading this article and the interview is that Polaris 11, the smaller die, will be targeted at mobiles and entry desktop, while Polaris 10 will be the high-end performance part. There are also mentions of GDDR5 or HBM depending on market segment, suggesting Polaris was going to span multiple segments, not just mainstream.
We can also infer that Polaris 10 will use either HBM or GDDR5, since Polaris 11 already spans mobiles and entry desktop, which has no need for HBM.
I don't know but Kyle sounds very correct right about now. Note that he said jump in performance, not performance per watt.
Oh and here's the quote regarding their supposed advantage in transitioning to FinFET.
This was the same time they were demoing Star Wars Battlefront on Polaris 11.
Actually, find the 390X [H] article where Kyle responds to the discussion saying the next line of cards is very special. Even AMD thought they had a winner back then, but it seems they had to adapt, hence what you saw at GDC 2016.
 
No they didn't. AMD didn't even know Pascal was that far along prior to March 24th, when nV talked about P100 and its TFLOP levels.

A few weeks after that they stated midrange.

AMD confirmed the Computex launch to those in the loop, prior to Nvidia launching the 1070/1080.
Sure, AMD probably also had inside information on the performance of Nvidia's cards, but I'd say they just decided to run with it and hope they could get some more process improvements between then and release.
 
Agreed that regions with lower income and/or VAT seem to be more excited about 480 launch than the US.

Lower than US VAT would be negative tax and the government would pay you after buying an RX 480. :D
 
It was not designed to fill out a 314mm^2 die; it ended up filling a 314mm^2 die, while the target was probably "around 300, like GK104". Also, even with that said, the only thing a "designed" die size really affects is the number of chips you get from a wafer, nothing else.

Next, your car analogy is horrid. To an extent, it's true that the overall chassis, transmission, et cetera of a car that offers a 3-liter engine model are probably better than those of a car that tops out at a 2-liter engine. But we know that transistor-wise, the GP104 and P10 uncore are fairly similar (256-bit bus with possible GDDR5X support and mostly similar display output/codec support), so the analogy does not work at all.

And you keep ignoring the fact that the die premium of a hypothetical 280mm^2 GP104 at 14nm would be more like 50mm^2 (less than a quarter), rather than "35%".

Either way, the embargo is over in at most a week, so we'll see soon. I'll stand by past bets either way.


Some other things we don't know: who is having an easier time with production? TSMC with those 314mm^2 dies, or GloFo with the Samsung assist on 14nm? Even if Nvidia clearly takes the perf-per-watt and perf-per-mm^2 categories, they will radically lose out on the most important factor of all for some time: perf per dollar.

The majority of gamers are still on 1080p displays. Do they get a much more widely available 480 that runs their games at 60+ fps for around $200-230, or a 1070 that runs them at 80fps for twice the price, where they can't even see the increased frames on their 60Hz display? This is not even a contest. Take your performance crown. Much like the Protoss Zealot: strong and proud, more expensive to produce, but stronger as a singular unit. Polaris is the Zerg swarm. Based on good-enough performance that lets gamers thrive at 200 bucks, and a seemingly greater ability to produce cards off Samsung's 14nm process with a smaller-die strategy, Nvidia will be overwhelmed. Kyle and razor and leidra will consider this a failure; the Nvidia cards are technically superior! So was Betamax.

That 1070, so strong! so proud! so dead

 
Lower than US VAT would be negative tax and the government would pay you after buying an RX 480. :D

The issue is, especially in the UK and Eurozone, retailers are really pushing the envelope this generation with how much they're bloating the price. In the UK, for example, the 1070 is selling for £400 and up; that's starting at nearly $600 US for a 1070! Now, Europeans expect a certain level of disparity between what they pay and the announced US price, but retailers are bloating the cost far past the additional 20% VAT and added import costs. And now there are rumors that the 480 will start at £250 for the 8GB model (around $370 US). Even over at OCUK there are at least four threads now with people complaining about current prices, and that forum is usually very accepting of cost increases.

It's absurd, and I have several friends in the UK who have washed their hands of it and said they're sticking with their old cards this round unless prices normalize. Be thankful if you are in the US.
 
Kyle has no f'ing idea how many cards are in the retail channel, because AMD wouldn't tell him anyway, even if there were a dialogue open and he were free to discuss NDA-related matters. I doubt too many channel partners will talk to him either, because who would want to be traced to Kyle by AMD?

Kyle isn't bound by the NDA because he hasn't signed one; he's persona non grata, cut off, presumably for all time. In the meantime, other sites have been running their benchmarks and preparing their reviews for the NDA embargo lifting on the 29th.

Kyle in the interim begins to resemble Yibada or some other clone site, suckling off the tit of whatever rumor-mill site posts something so he can post the negative rumor and spin it to damage AMD. Walk up to the edge of WCCFTech and look down: there's Kyle respinning rumors.

This thread is the most active I've seen him in a long time, considering the page count. Darting here and there, shoring shit up and trying to put out the spot fires. Kyle, I told you you'd become irrelevant, and you damn well have. You shit the nest, and you have nothing to post that is relevant until you can actually BUY a card and test it.

In the meantime I'll have already read reviews from six or eight other sites across the globe, and given your attitude I wouldn't read yours, as it's likely tainted and biased given your recent behavior.

Sad state of affairs I must say.
 
The issue is, especially in the UK and Eurozone, retailers are really pushing the envelope this generation with how much they're bloating the price. In the UK, for example, the 1070 is selling for £400 and up; that's starting at nearly $600 US for a 1070! Now, Europeans expect a certain level of disparity between what they pay and the announced US price, but retailers are bloating the cost far past the additional 20% VAT and added import costs. And now there are rumors that the 480 will start at £250 for the 8GB model (around $370 US). Even over at OCUK there are at least four threads now with people complaining about current prices, and that forum is usually very accepting of cost increases.

It's absurd, and I have several friends in the UK who have washed their hands of it and said they're sticking with their old cards this round unless prices normalize. Be thankful if you are in the US.

Well... UK prices aren't really that high, tbh. $449 MSRP + 20% UK VAT + about 10% shipping, customs, and whatnot = $593 = £404.

The cheapest one I could find here in puny Hungary is around 165,000 HUF, so about £403; mind you, that one is not even in stock. The cheapest in stock is a Gigabyte FE model from 180,000 HUF (£440)... So about as much as my monthly salary.

You can order one directly from nVIDIA in the UK; it's £399 on their own site, with free shipping in the UK.
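The arithmetic behind that £404 figure can be sketched out like this. The ~10% shipping/customs markup is the poster's estimate, and the 0.681 USD-to-GBP rate is my assumption to match the quoted mid-2016 numbers, not an official rate:

```python
# Rough UK street-price estimate from the post above: US MSRP plus
# 20% VAT plus ~10% shipping/customs, converted to GBP.
# The 0.681 USD->GBP rate is an assumed mid-2016 figure chosen to
# match the poster's numbers, not an official exchange rate.

USD_TO_GBP = 0.681

def uk_price(usd_msrp, vat=0.20, shipping=0.10):
    """Return (landed USD price, equivalent GBP price)."""
    usd_landed = usd_msrp * (1 + vat) * (1 + shipping)
    return usd_landed, usd_landed * USD_TO_GBP

usd, gbp = uk_price(449)  # GTX 1070 Founders Edition MSRP
print(f"${usd:.0f} landed, about £{gbp:.0f}")  # ~$593, ~£404
```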
 
There seem to be a lot of people here convinced that the high-end market is more important than the range Polaris is now headed for. There are really few reliable sources for sales figures, and one of the few things to rely on is the Steam Hardware Survey. For the high-end market to really be significant in sales figures, it would require a lot of people with high-end cards not to use Steam at all. I personally can't imagine that is the case, and hence it seems like GTX 970/GTX 960/GTX 750 Ti is the main territory. If AMD sells enough cards and isn't selling them at a loss, it will be just fine. Probably the only real question is how much people base their purchase on how the high end performs, as in: do people buy a GTX 970/960/750 Ti or equivalent based on the fact that a GTX 1080 is the current fastest card and hence Nvidia is "better"? Personally I have kids' PCs to think of, where the RX 480 is going to be a good replacement unless Nvidia launches something more interesting in the same price bracket.
 
The claim is based on what? Just your opinion on how you feel the market should work?




There was a good post as to the reasons why... Wait until 14nm yields improve to push the big chips, and by that time HBM2 will be in higher availability. In the meantime, slam the largest segment of the market ($100-$300). You may not agree with the logic, but that certainly doesn't lead to the conclusions you are trying to draw.
I have answered your first question numerous times. Reading is fundamental.
So if the $100 to $300 space is so rich with opportunities, where is the new $100 card?
 
If AMD can keep supply lines open and flooded, these cards will sell quickly. $200 for a card that performs (speculatively) at 390X or better levels, with less heat and power usage, is huge. Real huge. The 390X is no slouch, still, for 1080p and 1440p gaming. The majority of people buy in the $200 range, and there are a lot of people running older cards because there hasn't been an option in the $200 price range offering a substantial upgrade without having to upgrade other components like the PSU. Given a good review showing at launch, they'll sell themselves. NV can't compete at that price point on performance.
 
Common sense: use it. Read the history of GPUs. Has any company ever succeeded doing what AMD is doing right now? No; in fact, companies that have tried this strategy have failed. So you believe AMD is so incompetent that they decided to go down this path because they thought they could be the one to defy history? You think multibillion-dollar corporations think and operate like that? What happened is what always happens: the company, in this case AMD, aimed at a target and missed, so now it has to improvise. It's really as simple as that. More concerning is that they have missed all their GPU targets for a while now, sometimes by a little and in some cases by a lot, and there is nothing to indicate that they are ever going to get it right. Forget about the consumer; I think they need Vega to hit just for their confidence as much as their bottom line. It has got to wear on you as an engineer to keep coming up short generation after generation.
 
Common sense: use it. Read the history of GPUs. Has any company ever succeeded doing what AMD is doing right now? No; in fact, companies that have tried this strategy have failed. So you believe AMD is so incompetent that they decided to go down this path because they thought they could be the one to defy history? You think multibillion-dollar corporations think and operate like that? What happened is what always happens: the company, in this case AMD, aimed at a target and missed, so now it has to improvise. It's really as simple as that. More concerning is that they have missed all their GPU targets for a while now, sometimes by a little and in some cases by a lot, and there is nothing to indicate that they are ever going to get it right. Forget about the consumer; I think they need Vega to hit just for their confidence as much as their bottom line. It has got to wear on you as an engineer to keep coming up short generation after generation.

Which company tried which strategy exactly?
 
Use common sense. Read the history of GPUs: has any company ever succeeded doing what AMD is doing right now? No; in fact, companies that have tried this strategy have failed. So you believe AMD is so incompetent that they decided to go down this path because they thought they could be the one to defy history? Do you think multibillion-dollar corporations think and operate like that? What happened is what always happens: the company, in this case AMD, aimed at a target and missed, so now it has to improvise. It's really as simple as that. More concerning is that they have missed all their GPU targets for a while now, sometimes by a little and in some cases by a lot, and there is nothing to indicate that they are ever going to get it right. Forget about the consumer; I think they need Vega to hit just for their confidence as much as for their bottom line. It has to wear on you as an engineer to keep coming up short generation after generation.

Nvidia Maxwell?

Just to take the most straightforward source, HardOCP in September 2014:
The "Big" Maxwell is here! Well..."the "bigger" Maxwell is here," would be more accurate. The much anticipated next generation GPU from NVIDIA is ready for prime time. We've actually seen a hint of what NVIDIA had in store for us with the release of the GeForce GTX 750 Ti back in February of this year. The GeForce GTX 750 Ti was actually the first next generation Maxwell chip release from NVIDIA.

So AMD needs to plan a high-performance card with a similar pedigree to Polaris 10 within 6 months ... Sounds a bit like Vega to me.

Judging companies by history in a market dominated by so few players is not exactly reliable, BTW. Nor is it in a lot of other markets. Nearly all market upsets started with someone doing exactly what history would have predicted to be an assured failure.
 
Kyle has no f'ing idea how many cards are in the retail channel, because AMD wouldn't tell him anyway even if there was a dialogue open and he was free to discuss NDA related matters. I doubt too many channel partners will talk to him either because who would want to be traced to Kyle by AMD?

Kyle isn't bound by the NDA because he hasn't signed one; he's persona non grata, cut off, presumably for all time. In the meantime other sites have been running their benchmarks and preparing their reviews for the NDA embargo lifting on the 29th.

Kyle in the interim begins to resemble Yibada or some other clone site suckling off the tit of whatever rumor mill site posts something so he can post the negative rumor and spin it to damage AMD. Walk up to the edge of WCCFTech and look down, there's Kyle respinning rumors.

This thread is the most active I've seen him in a long time, considering the page count. Darting here and there, shoring shit up and trying to put out spot fires. Kyle, I told you you'd become irrelevant, and you damn well have. You shit the nest, and you have nothing relevant to post until you can actually BUY a card and test it.

In the meantime I'll have already read reviews from 6 or 8 other sites across the globe, and given your recent behavior I wouldn't read yours, as it's likely tainted and biased.

Sad state of affairs I must say.

Be careful.. you might get a temp ban for telling it how you see it. (How a lot of people see it).
 

Ars are just plain wrong.

A separate statement issued by AMD's Robert Hallock confirmed that Polaris will use HBM (high bandwidth memory) or GDDR5, depending on the "market segment."
Please show me where AMD stated that Polaris will use HBM. That is just the writer's assumption, unless the quote is incomplete?

Koduri's comments confirm the company does have a high-end GPU in the works for release this year.
Please show me where Koduri's comments show this? He explicitly states Mainstream and perf/watt. He even mentions that they were not "driven by 'the benchmark score this year is X. Next year we need to target 20 percent better at this cost and this power. [...]The target we set was to do console-class gaming on a thin and light notebook."

Indeed, when talking about both Polaris 10 and Polaris 11 he explicitly mentions "The competing system consumes 140 watts," which probably means 970/980 and definitely not 980 Ti performance, and he also says "With Polaris we want to bring that down to a much larger part of the market."

He never once says "we'll take the performance crown" or "we'll beat our Fury line" or anything similar.


Seriously, if Polaris 11 was supposed to be the 480 line, Polaris 10 the 490 line, what would Vega 10/11 be?
A new 5xx line 6 months after the 4xx line?
 
Be careful.. you might get a temp ban for telling it how you see it. (How a lot of people see it).
Why the hell would he get a ban for his statements above? You are more than welcome to share your opinions here. You guys that say this shit are laughable. I addressed this yesterday, but will address it again. If you come in here calling names, yes, you will get banned. If you come in here telling the forum that HardOCP accepts bribes and writes reviews for cash, you will get banned. Unless of course you can prove that, in which case please do. I would, however, like to know exactly where that money is, because I have not seen a penny of it. I see people write on other forums about how we have gone ban-happy because someone's opinion is not to our liking. That is horseshit. Come in here ranting, calling names, and making statements of fact about illegal or unethical activity, and that will get you banned. When people gripe about getting banned, have you noticed one thing? They never quote what got them banned. Think about that for a minute.
 
Use common sense. Read the history of GPUs: has any company ever succeeded doing what AMD is doing right now? No; in fact, companies that have tried this strategy have failed. So you believe AMD is so incompetent that they decided to go down this path because they thought they could be the one to defy history? Do you think multibillion-dollar corporations think and operate like that? What happened is what always happens: the company, in this case AMD, aimed at a target and missed, so now it has to improvise. It's really as simple as that. More concerning is that they have missed all their GPU targets for a while now, sometimes by a little and in some cases by a lot, and there is nothing to indicate that they are ever going to get it right. Forget about the consumer; I think they need Vega to hit just for their confidence as much as for their bottom line. It has to wear on you as an engineer to keep coming up short generation after generation.

Yes, big time. Commodore is one example of a company.

Nvidia did it with little Maxwell, which was out way before big Maxwell.

If AMD has no competition in the $100-$300 range with a superior-performing product, you do the math. Still, AMD has to be able to make what the market will buy, and products that can turn a profit. In the end time will tell; there are probably already way more buyers ready to buy a 480 or below than the last month of sales of the 1080/1070, and that is only a start.
 
Some other things we don't know: who is having an easier time with production? TSMC with those 314 mm² dies, or GlobalFoundries with the Samsung assist on 14 nm? Even if Nvidia clearly takes the perf-per-watt and perf-per-mm² categories, they will radically lose out on the most important factor of all for some time: perf/dollar.

The majority of gamers are still on 1080p displays. Do they get a much more widely available 480 that runs their games at 60+ fps for around $200-230, or a 1070 that runs them at 80 fps for twice the price, when they can't even see the extra frames on their 60 Hz display? This is not even a contest. Take your performance crown: much like the Protoss zealot, strong and proud, more expensive to produce, stronger as a singular unit. Polaris is the Zerg swarm. Based on good-enough performance that lets gamers thrive at $200, and a seemingly greater ability to churn out cards on Samsung's 14nm process with a smaller die, Nvidia will be overwhelmed. Kyle and razor and leidra will consider this a failure, since the Nvidia cards are technically superior! So was Betamax.

That 1070, so strong! So proud! So dead.


You were doing fine until you made a StarCraft comparison to someone who knows a thing or two about it.

Either way, it all comes down to DPW (dies per wafer). Someone fire up the calculator.
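For a rough sense of the gap, here's a back-of-the-envelope dies-per-wafer sketch using the standard edge-loss approximation. GP104's ~314 mm² comes from this thread; Polaris 10's ~232 mm² is an assumed figure from contemporary reports, and yield is ignored entirely:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Gross dies per wafer: wafer area / die area, minus an
    edge-loss term for partial dies at the wafer's perimeter."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for name, area in [("Polaris 10", 232), ("GP104", 314)]:
    print(name, dies_per_wafer(area))
# Polaris 10 260
# GP104 187
```

So before yield differences, the smaller die buys AMD roughly 40% more gross candidates per 300 mm wafer, which is exactly the perf/dollar lever being argued here.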
 
A Chinese retailer claims the stock RX 480 does a 5600 graphics score in 3DMark Extreme.

Might not be the newest driver though.

212611vdq5hrqgihwrbsb9.jpeg
 
Kyle has no f'ing idea how many cards are in the retail channel, because AMD wouldn't tell him anyway even if there was a dialogue open and he was free to discuss NDA related matters. I doubt too many channel partners will talk to him either because who would want to be traced to Kyle by AMD?
Yes, I do have an idea of how many cards are in the retail channel; otherwise I would never have reported on it. AMD does not report shipping card inventories to any journalists that I am aware of, so your statement singling HardOCP out is off base and shows a lack of understanding of what you are saying here. AIBs still very much want HardOCP reviews of their cards if they think it is beneficial to them, I assure you of that. Again, you have no understanding of the situation and are making assumptions.

In the meantime other sites have been running their benchmarks and preparing their reviews for the NDA embargo lifting on the 29th.
Interesting. I talked to several prominent hardware review sites yesterday, and they did not have RX 480 samples from AMD. Again, you are just making things up that you think are true and reporting them as facts. Another failure on your part.

Kyle in the interim begins to resemble Yibada or some other clone site suckling off the tit of whatever rumor mill site posts something so he can post the negative rumor and spin it to damage AMD. Walk up to the edge of WCCFTech and look down, there's Kyle respinning rumors.
I am not even sure what that all means or how it is pertinent, but whatever. I assure you that whenever we publish original content about a company it is based on our own industry sources. Again, you are making statements based on whatever you dreamed up.

This thread is the most active I've seen him in a long time, considering the page count. Darting here and there, shoring shit up and trying to put out spot fires. Kyle, I told you you'd become irrelevant, and you damn well have. You shit the nest, and you have nothing relevant to post until you can actually BUY a card and test it.
This simply shows how little time you spent researching your statements.

In the meantime I'll have already read reviews from 6 or 8 other sites across the globe, and given your recent behavior I wouldn't read yours, as it's likely tainted and biased.
OK. You are one of those guys.....got it. You don't read anything we publish, but you know all about our content and how it is sourced. Got it.

Sad state of affairs I must say.
Your thoughts are noted. Thanks for once again visiting our sad site, which you don't read, to tell us how sad our content is, which you don't read. ;) I won't tell anyone your secret if you don't.
 
Use common sense. Read the history of GPUs: has any company ever succeeded doing what AMD is doing right now? No; in fact, companies that have tried this strategy have failed. So you believe AMD is so incompetent that they decided to go down this path because they thought they could be the one to defy history? Do you think multibillion-dollar corporations think and operate like that? What happened is what always happens: the company, in this case AMD, aimed at a target and missed, so now it has to improvise. It's really as simple as that. More concerning is that they have missed all their GPU targets for a while now, sometimes by a little and in some cases by a lot, and there is nothing to indicate that they are ever going to get it right. Forget about the consumer; I think they need Vega to hit just for their confidence as much as for their bottom line. It has to wear on you as an engineer to keep coming up short generation after generation.

Funny you should say that, because Hibben posted a column yesterday discussing exactly that :p
The point he makes is about the dissembling of future prospects, and he provides case studies examining the turnaround thesis.

The bigger question is if those concerns can also be leveled at Zen too :/

Hibben himself has been subject to immature behavior by AMD fanboys - not unlike what we are witnessing here on [H].
Negative reaction to my position: my critical articles of these companies have usually been greeted with a vituperative outpouring of personal attacks in comments.
 
Ars are just plain wrong.


Please show me where AMD stated that Polaris will use HBM. That is just the writer's assumption, unless the quote is incomplete?

Robert Hallock, the Technical Marketing Lead at AMD, explains: "We have the flexibility to use HBM or GDDR5 as costs require. Certain market segments are cost sensitive, GDDR5 can be used there. Higher-end market segments where more cost can be afforded, HBM is viable as well".

Well, when you read this comment and couple it with the fact that Raja, in the interview with VentureBeat, mentioned Polaris 10 being a "premium graphics" part, is it that hard to draw the logical conclusion? Of course AMD had to be vague; they'd be stupid to say anything outright, because at that point they had no idea what Pascal was capable of.

They didn't announce Vega back then, so is it that hard to understand that that comment pertains to Polaris?

Please show me where Koduri's comments show this? He explicitly states Mainstream and perf/watt. He even mentions that they were not "driven by 'the benchmark score this year is X. Next year we need to target 20 percent better at this cost and this power. [...]The target we set was to do console-class gaming on a thin and light notebook."

And he said that 2016 will be a very big year for the company as it introduces its advanced FinFET manufacturing technology, which will result in much better performance per watt — or graphics that won’t melt your computer. Koduri believes this technology will help AMD beat rivals such as Nvidia. AMD’s new graphics chips will hit during the middle of 2016, Koduri said.

Middle of 2016 obviously meant Polaris. This part meant that they planned on beating existing GPUs from NVIDIA, such as Titan X and GTX 980Ti.

Indeed, when talking about both Polaris 10 and Polaris 11 he explicitly mentions "The competing system consumes 140 watts," which probably means 970/980 and definitely not 980 Ti performance, and he also says "With Polaris we want to bring that down to a much larger part of the market."

But even in this early silicon, we’re seeing numbers versus the best class on the competition running at a heavy workload, like Star Wars—The competing system consumes 140 watts. This is 86 watts.

This is probably what he was talking about:

AMD Reveals Polaris GPU Architecture: 4th Gen GCN to Arrive In Mid-2016
For their brief demonstration, RTG set up a pair of otherwise identical Core i7 systems running Star Wars Battlefront. The first system contained an early engineering sample Polaris card, while the other system had a GeForce GTX 950 installed (specific model unknown). Both systems were running at 1080p Medium settings – about right for a GTX 950 on the X-Wing map RTG used – and generally hitting the 60fps V-sync limit.

Which is entirely irrelevant to the discussion regarding the 480 because that was Polaris 11, i.e. entry desktop.
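For scale only: since both systems in that demo were pegged at the 60 fps V-sync cap, the quoted wattages reduce directly to a perf-per-watt ratio. A quick sanity check on the numbers quoted above (these are full-system wall figures, not GPU-only):

```python
# Both systems hit the 60 fps V-sync limit in the Battlefront demo,
# so at equal frame rates perf/watt is just the inverse power ratio.
fps = 60
polaris_system_w = 86     # wattage quoted by Koduri
competing_system_w = 140  # wattage quoted by Koduri
ratio = (fps / polaris_system_w) / (fps / competing_system_w)
print(f"{ratio:.2f}x system-level perf/watt advantage")  # 1.63x
```

Which tells you nothing about absolute performance, only efficiency at a frame-capped workload against a GTX 950-class card.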

He never once says "we'll take the performance crown" or "we'll beat our Fury line" or anything similar.


Seriously, if Polaris 11 was supposed to be the 480 line, Polaris 10 the 490 line, what would Vega 10/11 be?
A new 5xx line 6 months after the 4xx line?

And he said that 2016 will be a very big year for the company as it introduces its advanced FinFET manufacturing technology, which will result in much better performance per watt — or graphics that won’t melt your computer. Koduri believes this technology will help AMD beat rivals such as Nvidia. AMD’s new graphics chips will hit during the middle of 2016, Koduri said.

And no, nobody said Polaris 11 was supposed to be the 480 line. Entry desktop for AMD has always been x70 and below. But it's not implausible that Polaris 10 was supposed to have at least 2 cuts, 480 and 490, is it?
 
Good morning, lol. Nice to see nobody is upset by comments made last night :p
 
Which company tried which strategy exactly?


AMD, lol. What do you think the 3xxx line was? A midrange strategy. It held up well against nV's midrange, but in overall sales it got killed.

Anyone who thinks AMD would forgo the performance segment on purpose...

We are talking about the prospect of the RX 480 as the performance part if Pascal wasn't around, or if Pascal's performance was a bit lower than it is.

Just answer me this question:

Tell me: what would you do if you could make $200 million in the performance segment in one quarter vs. $100 million in the mainstream segment? Which would you target, all of this while your company is in crushing debt with negative cash flow? Keep in mind that once you make a choice, you lose the rest of the market to the competition for the next year; there are no take-backs.

It's a fantasy question. YOU ARE THE CEO OF AMD for this question: what would you do?
 
How does AMD expect to take market share from Nvidia when (it appears) they literally have less quantity available than the 1070 or 1080 (certainly less than both combined)?
Even if AMD sells every single 480 available, they will gain less share than Nvidia. Fab issue, maybe?

You have to remember that AMD claimed this card was never meant to compete with the 1080 (not sure about the 1070, but seems doubtful). So two different targets.

That said, I'd expect quite a few people will be looking to purchase a pair of 480s at launch so they can outperform a 1080 like the AMD slides showed.
 