AMD Radeon RX 480 Supplies at Launch

You do realize the 1070/1080 are Nvidia mid-range chips? They are just selling them at high-end prices.

Now you might ask why? Because Nvidia has no competition in the high-end segment.

Which is why AMD is going after the low-end market.
The 1060 (Ti?) is Nvidia's mid-range. The 1070 and 1080 are high-end. The 1080 Ti, if and when it arrives, will be the high-end flagship. And no, the Titan isn't a whole other level except in cost, so let's leave that out, okay?
Same as last gen.
 
The 1060 (Ti?) is Nvidia's mid-range. The 1070 and 1080 are high-end. The 1080 Ti, if and when it arrives, will be the high-end flagship. And no, the Titan isn't a whole other level except in cost, so let's leave that out, okay?
Same as last gen.

And where is that card? There are no rumors or leaks on a 1060 Ti. Nvidia can't even produce enough 1000-series cards, let alone a 1060 Ti.

But you never know. I am more interested in DX12 performance. Reviews will show whether the RX 480 is even worth it.
 
Why didn't AMD overclock it to like 1400 from factory and run it at 95C like Fury? lol
 
Its max TDP is 150 W. That's not power draw. Even if it were, 150 W for 1400-1500 MHz is pretty good if the returns are there.


Yeah, that is what I stated: we know it won't draw the max. It's going to be less than 150 watts at stock; how much less is the question.
 
But yeah, it may hit higher clocks, but at some point it won't be the performance/watt champ that it is at 1266. How much power it takes at 1500 is still to be determined; we will see. If it takes too much, then yes, it clocks fine, but it does use more power. So the "uses more power" part would be true, even if the "can't clock" part isn't. I mean, no one expected AMD to hit 1800, because their architecture is just that way.
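A rough sketch of that perf/watt trade-off in Python. Every number here is an assumption made for illustration (2304 SPs, FMA counted as two FLOPs per SP per clock, and guessed board power at each clock), since the real figures weren't public at the time:

```python
# Rough perf/watt sketch. All numbers are illustrative assumptions:
# 2304 stream processors, FMA counted as 2 FLOPs per SP per clock,
# and guessed board power at each clock speed.
SPS = 2304
FLOPS_PER_SP_PER_CLOCK = 2

def gflops(clock_mhz):
    """Theoretical FP32 throughput in GFLOPS at a given core clock."""
    return SPS * FLOPS_PER_SP_PER_CLOCK * clock_mhz / 1000.0

for clock_mhz, assumed_watts in [(1266, 120), (1500, 160)]:
    g = gflops(clock_mhz)
    print(f"{clock_mhz} MHz: ~{g:.0f} GFLOPS at an assumed {assumed_watts} W "
          f"-> ~{g / assumed_watts:.1f} GFLOPS/W")
```

With those guessed power figures it works out to roughly 49 GFLOPS/W at 1266 MHz versus roughly 43 GFLOPS/W at 1500 MHz, which is the trade-off being described.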
But 1500 MHz will not really change its positioning (well, it might, if it is in practice slower than the 980), though it might justify a $299 AIB custom card. In order to reach Fury X performance, you would need 1866 MHz(*) - and here I agree this would mean a ~78% clock increase compared to Fiji, which is kind of far-fetched. And since Fury X is slower than both the 980 Ti and the 1070, it still would not be good enough (at least in DX11; DX12 might paint a bit of a different picture).

(*) ~1866 = 4096 (no. of Fury X SPs) × 1050 (Fury X MHz) / 2304 (no. of RX 480 SPs)
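For anyone who wants the footnote spelled out, here is a minimal sketch in Python using the same figures:

```python
# The footnote math written out: the clock at which 2304 Polaris SPs would match
# Fury X's theoretical FP32 throughput (4096 SPs at 1050 MHz), assuming equal
# per-clock shader efficiency (an assumption the replies below question).
fury_x_sps, fury_x_mhz = 4096, 1050
rx480_sps = 2304

parity_mhz = fury_x_sps * fury_x_mhz / rx480_sps
print(f"Parity clock: ~{parity_mhz:.0f} MHz")                     # ~1867 (the post rounds to 1866)
print(f"Increase over Fiji: ~{parity_mhz / fury_x_mhz - 1:.0%}")  # ~78%
```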
 
But 1500 MHz will not really change its positioning (well, it might, if it is in practice slower than the 980), though it might justify a $299 AIB custom card. In order to reach Fury X performance, you would need 1866 MHz(*) - and here I agree this would mean a ~78% clock increase compared to Fiji, which is kind of far-fetched. And since Fury X is slower than both the 980 Ti and the 1070, it still would not be good enough (at least in DX11; DX12 might paint a bit of a different picture).

(*) ~1866 = 4096 (no. of Fury X SPs) × 1050 (Fury X MHz) / 2304 (no. of RX 480 SPs)

No one is comparing it to the 980 Ti, 1070, or Fury X... so what do you mean, "it still would not be good enough"?

The only ones comparing it to those cards are the people trying to show it doesn't hit those levels. No one who is enthusiastic about the product has made those claims.

For $229-$299, 980/390X/Fury (non-X) performance will be amazing.
 
But 1500 MHz will not really change its positioning (well, it might, if it is in practice slower than the 980), though it might justify a $299 AIB custom card. In order to reach Fury X performance, you would need 1866 MHz(*) - and here I agree this would mean a ~78% clock increase compared to Fiji, which is kind of far-fetched. And since Fury X is slower than both the 980 Ti and the 1070, it still would not be good enough (at least in DX11; DX12 might paint a bit of a different picture).

(*) ~1866 = 4096 (no. of Fury X SPs) × 1050 (Fury X MHz) / 2304 (no. of RX 480 SPs)
But that is only if Fury were 100% efficient, which it is not. That's the thing: it won't necessarily need 1866. Yes, in pure math it would, but that assumes Fury X had 100% shader efficiency, which it doesn't. I know it won't reach Fury X, but I also know it won't require 1866.
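To make that concrete, here is the same parity math with an efficiency correction bolted on. The two efficiency fractions below are made-up illustrative values, not measurements:

```python
# Parity-clock math with a shader-efficiency correction. The efficiency
# fractions are made-up values purely to illustrate the argument: if Fury X
# realizes a smaller share of its theoretical throughput than Polaris does,
# the clock needed for parity drops below the naive figure.
fury_x_sps, fury_x_mhz = 4096, 1050
rx480_sps = 2304

naive_parity_mhz = fury_x_sps * fury_x_mhz / rx480_sps      # ~1867 MHz

fury_x_efficiency = 0.80   # assumed, for illustration only
rx480_efficiency = 0.95    # assumed, for illustration only
adjusted_parity_mhz = naive_parity_mhz * fury_x_efficiency / rx480_efficiency

print(f"Naive parity clock:    ~{naive_parity_mhz:.0f} MHz")
print(f"Adjusted parity clock: ~{adjusted_parity_mhz:.0f} MHz")  # ~1572 MHz with these guesses
```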
 
Why didn't AMD overclock it to like 1400 from factory and run it at 95C like Fury? lol

Curious if he was mixing up the 290X launch reviews - those actually did run at 90°C+ and were well known for clock throttling:
AMD's Radeon R9 290X graphics card reviewed

OTOH, Fury X didn't even get close to that. Most of the attention was on pump noise or AIO discussion in general, haha.

I had a Sapphire and a Gigabyte Fury X at launch; both had the pump whine and got returned. I was playing around with a PowerColor one the other day, bought new a week or so ago, and it also had it. Seems like it was never really fixed and it just varied whether you got it or not =/. So-called "premium design", but they cheaped out on the pump, which was the one part that could have done with a premium solution.
 
Am I missing something that is causing people to think AMD should have expected higher clocks with their architecture? A lot of people seem to be implying that AMD should have expected higher clocks, closer to nVidia's...

For the longest time, a die shrink from, let's say, 100nm to 50nm (or in this case 28nm -> 14nm) would have meant close to 100% scaling in speed. It would only take half the time for signals to reach their destination, and each transistor would need significantly less power to switch.

However, things like heat density and power leakage put a stop to that, and those are just a couple of the issues. FinFET helped reduce the leakage, but other problems persisted. Process yields also improve with time: even a small defect count per unit area can severely hamper a new node's scaling, but over time that defect count drops. It is very possible AMD could scale better as the 14nm process improves (NVIDIA is on 16nm), but I wouldn't expect a huge increase. It should be noted that Intel and AMD have had trouble scaling much past 4.5GHz ever since Sandy Bridge at 32nm. Die shrinks do not necessitate faster clocks any more.

I should also note that faster clocks do not directly mean faster speeds. Deeper pipelines can lead to faster clocks, but overall latency increases and efficiency drops (e.g., the Pentium III and AMD Athlon were significantly more efficient per clock than the Pentium 4). Even though this is an ASIC that specializes in matrix-style operations and not a general-purpose CPU, there are parallels when it comes to instruction loading.

I've done a little math based on the estimated number of transistors per mm² and GFLOPS per MHz. AMD is rather high in transistor count per GFLOP compared to NVIDIA. This might explain why they are clocked lower... they have a little more overhead. So AMD might have been aiming for efficiency over raw clock speed, but it looks like they missed the mark. They might actually have done better on the 16nm node that NVIDIA is using, because wider signal paths are less susceptible to defects in the die.
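A sketch of the kind of comparison being described, left with placeholder inputs since the transistor and shader estimates floating around pre-launch varied (none of the numbers below are real chip data):

```python
# Transistors spent per unit of theoretical throughput (GFLOPS per MHz).
# Plug in your own estimates; the values in the example call are placeholders,
# not real chip data.
def transistors_per_gflops_per_mhz(transistors: float, shaders: int,
                                   flops_per_shader_per_clock: int = 2) -> float:
    gflops_per_mhz = shaders * flops_per_shader_per_clock / 1000.0
    return transistors / gflops_per_mhz

# Placeholder example: 5 billion transistors, 2000 shaders.
print(f"{transistors_per_gflops_per_mhz(5e9, 2000):.3g} transistors per (GFLOPS/MHz)")
```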
 
Die shrinks do not necessitate faster clocks any more.

That's making me somewhat bearish on Zen's initial overclocking prospects then :/
I'm recalling the refinement that AMD has typically experienced. See TBred-A vs TBred-B.
Phenom I vs Phenom II.
 
For the longest time, a die shrink from, let's say, 100nm to 50nm (or in this case 28nm -> 14nm) would have meant close to 100% scaling in speed. It would only take half the time for signals to reach their destination, and each transistor would need significantly less power to switch.

However, things like heat density and power leakage put a stop to that, and those are just a couple of the issues. FinFET helped reduce the leakage, but other problems persisted. Process yields also improve with time: even a small defect count per unit area can severely hamper a new node's scaling, but over time that defect count drops. It is very possible AMD could scale better as the 14nm process improves (NVIDIA is on 16nm), but I wouldn't expect a huge increase. It should be noted that Intel and AMD have had trouble scaling much past 4.5GHz ever since Sandy Bridge at 32nm. Die shrinks do not necessitate faster clocks any more.

I should also note that faster clocks do not directly mean faster speeds. Deeper pipelines can lead to faster clocks, but overall latency increases and efficiency drops (e.g., the Pentium III and AMD Athlon were significantly more efficient per clock than the Pentium 4). Even though this is an ASIC that specializes in matrix-style operations and not a general-purpose CPU, there are parallels when it comes to instruction loading.

I've done a little math based on the estimated number of transistors per mm² and GFLOPS per MHz. AMD is rather high in transistor count per GFLOP compared to NVIDIA. This might explain why they are clocked lower... they have a little more overhead. So AMD might have been aiming for efficiency over raw clock speed, but it looks like they missed the mark. They might actually have done better on the 16nm node that NVIDIA is using, because wider signal paths are less susceptible to defects in the die.

If the rumored 1500 MHz clocks hold, they scaled just as well as nVidia did with the die shrink: GCN 3 -> 4 vs Maxwell -> Pascal.

Explain to me what 'mark' they were supposed to hit, and how they missed it...
 
Performance, power consumption, perf/watt - pick your poison. They will be behind Pascal, at least that is what it looks like, and it's not going to be very close at all. See, to make a direct comparison you need either similar performance, so you can see what the relative power usage difference is, or similar power consumption, so you can see what the relative performance difference is.

If the rumor is true that AIBs are doing the binning to figure out which chips they will use for reference designs vs. overclocked cards, that means there is a wide variation in clocks per wafer, something AMD didn't want to take the risk of worrying about. To reduce that risk they picked a clock speed that would work for the reference design at a set price and power level. But that wasn't what they were going for.

Chip design treats the base clock target as a fundamental design choice. If the variance in clock speeds is real, they were going for something higher, because you normally shouldn't see that much spread in clock speeds (higher and lower). From a design point of view this isn't AMD's issue, it's the fab's problem, but the end result is still AMD's. This all goes back to when AMD locked themselves into GlobalFoundries for production, so it's not even the current management's fault.
 
If the rumored 1500 MHz clocks hold, they scaled just as well as nVidia did with the die shrink: GCN 3 -> 4 vs Maxwell -> Pascal.

Explain to me what 'mark' they were supposed to hit, and how they missed it...

The fastest RUMORED overclocks I've seen are around 1300-1350 MHz, give or take.
 
Chip design treats the base clock target as a fundamental design choice. If the variance in clock speeds is real, they were going for something higher, because you normally shouldn't see that much spread in clock speeds (higher and lower). From a design point of view this isn't AMD's issue, it's the fab's problem, but the end result is still AMD's. This all goes back to when AMD locked themselves into GlobalFoundries for production, so it's not even the current management's fault.

I agree. Low base clocks lead to higher yields. Defect density can wildly affect the stable clock range you get, and it can vary things drastically at smaller nodes. Hence why I said AMD might have done better with the 16nm node NVIDIA is using.

Is the RX 480 a good product for the price? Undoubtedly.
But at the end of the day, a Ferrari still beats an Ariel Atom on most courses.
 
If the rumored 1500 MHz clocks hold, they scaled just as well as nVidia did with the die shrink: GCN 3 -> 4 vs Maxwell -> Pascal.
They don't.
New AMD RX 480 CrossFire benchmarks hit the web, exclusive first look at new overclocking tool | VideoCardz.com
In CrossFire they don't cope well with even a +22 MHz overclock; final clocks are 1288 MHz.
They are pushing more than 150 W and getting very hot - 82°C and 87°C after being under load a short time, and temps were still climbing.

Recent leaks show single cards struggle to reach 1400 MHz. I haven't seen those myself, just saw them mentioned, FYI.
 
They don't.
New AMD RX 480 CrossFire benchmarks hit the web, exclusive first look at new overclocking tool | VideoCardz.com
In CrossFire they don't cope well with even a +22 MHz overclock; final clocks are 1288 MHz.
They are pushing more than 150 W and getting very hot - 82°C and 87°C after being under load a short time, and temps were still climbing.

Recent leaks show single cards struggle to reach 1400 MHz. I haven't seen those myself, just saw them mentioned, FYI.
Source for "more than 150W"??
With FinFET the temperature level is much less of a concern. Let's not forget the 480 is using an HSF assembly typically seen on entry-level parts (aluminum with a copper "slug"). Throw that heatsink on a couple of GTX 1070s and SLI them, then compare temps.
 
They don't.
New AMD RX 480 CrossFire benchmarks hit the web, exclusive first look at new overclocking tool | VideoCardz.com
In CrossFire they don't cope well with even a +22 MHz overclock; final clocks are 1288 MHz.
They are pushing more than 150 W and getting very hot - 82°C and 87°C after being under load a short time, and temps were still climbing.

Recent leaks show single cards struggle to reach 1400 MHz. I haven't seen those myself, just saw them mentioned, FYI.

The problem here is that half of the fan is covered in CrossFire. Thus the airflow is reduced on the backside of the primary card, which has severely restricted openings.
 
They don't.
New AMD RX 480 CrossFire benchmarks hit the web, exclusive first look at new overclocking tool | VideoCardz.com
In CrossFire they don't cope well with even a +22 MHz overclock; final clocks are 1288 MHz.
They are pushing more than 150 W and getting very hot - 82°C and 87°C after being under load a short time, and temps were still climbing.

Recent leaks show single cards struggle to reach 1400 MHz. I haven't seen those myself, just saw them mentioned, FYI.
Those were factory OCed cards, I think. Plus, every single time I have done SLI or CrossFire, the fucking temps suck! There is usually at least a 10°C temp difference between the cards. And 150 W? Stop spreading bullshit. There is not a word on wattage on that page. Pushing above factory specs? At almost stock clocks? You sound like an overjoyed fanboy right now, lol.
 
Source for "more than 150W"??
With FinFET the temperature level is much less of a concern. Let's not forget the 480 is using an HSF assembly typically seen on entry-level parts (aluminum with a copper "slug"). Throw that heatsink on a couple of GTX 1070s and SLI them, then compare temps.

True. But the over-150 W figure is confirmed, and OUT of spec, on the same page that cites the overclocking. You had better darn well have PSU PCIe cables thick enough to support >6.25 amps.
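For reference, the 6.25 A figure falls straight out of the connector spec (a 75 W budget for the 6-pin on the 12 V rail, plus 75 W from the slot):

```python
# Where 6.25 A comes from: a PCIe 6-pin connector is budgeted for 75 W on the
# 12 V rail, and the slot itself for another 75 W, so anything much past 150 W
# total board power is outside one of those budgets.
SIX_PIN_BUDGET_W = 75.0
SLOT_BUDGET_W = 75.0
RAIL_VOLTAGE_V = 12.0

print(f"6-pin current at spec: {SIX_PIN_BUDGET_W / RAIL_VOLTAGE_V:.2f} A")       # 6.25 A
print(f"Total budget (slot + 6-pin): {SIX_PIN_BUDGET_W + SLOT_BUDGET_W:.0f} W")  # 150 W
```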
 
Source for "more than 150W"??
With FinFET the temperature level is much less of a concern. Let's not forget the 480 is using an HSF assembly typically seen on entry-level parts (aluminum with a copper "slug"). Throw that heatsink on a couple of GTX 1070s and SLI them, then compare temps.
Zoom in the image and count the pixels.
1 pixel is approx 10W (shown when at idle).
The peaks are 16 pixels high.
 
Those were factory OCed cards, I think. Plus, every single time I have done SLI or CrossFire, the fucking temps suck! There is usually at least a 10°C temp difference between the cards.
1288 MHz is the overclock; 1266 MHz is the factory stock. The poor temps are a consequence of the two-sided fan intake with one side covered. A board designed for CrossFire with 3-slot spacing would be your friend here.
 
We will see on the 1500 MHz thing, I suppose. I have a hard time believing that the stock cooler will represent AIB clocks.

Still, even 1400 isn't far off the mark, since last gen only got to 1050-1100 max... That's still around 30% max clock gains, which is around what nVidia saw... Why would more have been expected of AMD?
 
True. But the over-150 W figure is confirmed, and OUT of spec, on the same page that cites the overclocking. You had better darn well have PSU PCIe cables thick enough to support >6.25 amps.

Confirmed?
Source?

Zoom in the image and count the pixels.
1 pixel is approx 10W (shown when at idle).
The peaks are 16 pixels high.
[attached image]


1288 MHz is the overclock; 1266 MHz is the factory stock. The poor temps are a consequence of the two-sided fan intake with one side covered. A board designed for CrossFire with 3-slot spacing would be your friend here.

And the nice thing about CFX (compared to SLI): no need for bridges, i.e. no need to stack the cards on top of one another; you can have one card in PCIe slot 1 and another in PCIe slot 6 (if you have the slots).
 
To count pixels instead of looking at the actual figure listed? ...? lol
Are you discounting the images I linked?
I can create graphs on my gfx card that show different results, so what?
We are looking at what happens at the limits, or as close as we can get.
The images I linked show power maxing out at over 150 W.
 
And the nice thing about CFX (compared to SLI): no need for bridges, i.e. no need to stack the cards on top of one another; you can have one card in PCIe slot 1 and another in PCIe slot 6 (if you have the slots).

That is, if PCIe slots 2 through 6 have as many lanes dedicated to them. And a PLX chip introduces latency.

The screenshot shows the current power reading. However, if you trace the graph, it does go higher. Also, momentary power spikes != average usage. It's those momentary spikes that look to be exceeding the 150 W limit when overclocked.
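A tiny sketch of that spikes-vs-average distinction, using made-up sample values purely for illustration:

```python
# Spikes vs. average: a trace can blip above 150 W for a sample or two while
# the average over the whole run stays comfortably below it. The numbers here
# are made up purely to illustrate the distinction.
samples_w = [118, 124, 131, 162, 127, 119, 155, 133, 126, 121]

peak_w = max(samples_w)
average_w = sum(samples_w) / len(samples_w)
print(f"Peak: {peak_w} W, average: {average_w:.1f} W")   # Peak: 162 W, average: 131.6 W
```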
 
Are you discounting the images I linked?
I can create graphs on my gfx card that show different results, so what?
We are looking at what happens at the limits, or as close as we can get.
The images I linked show power maxing out at over 150 W.
Where did you get the detail that 1 pixel = 10 W? You are looking at a 10.1 W reading and trying to determine the prior power usage from the pixels of the timeline?

Did you determine it by seeing 10 W and 1 pixel on the graph? If 1 pixel = 10 W, and that graph is almost full at 160 W, how do they show 250 W? Do they just make the graph taller? Change the pixels per watt? lol... What if the graph jumps a pixel every 7 W, so it still only shows 1 pixel at 10 W? That would mean 16 pixels = 112 W...

Just curious how you came to the conclusion that 1 pixel = 10 W in your picture...
 
no no no, you're supposed to count the pixels :p
I guess it really has come down to this... "count the pixels, the pixels man, THE PIXELS"

[attached image]


Where did you get the detail that 1 pixel = 10 W? You are looking at a 10.1 W reading and trying to determine the prior power usage from the pixels of the timeline?

Did you determine it by seeing 10 W and 1 pixel on the graph? What if the graph jumps a pixel every 7 W, so it still only shows 1 pixel at 10 W? That would mean 16 pixels = 112 W... Just curious how you came to the conclusion that 1 pixel = 10 W in your picture...

EXACTLY... it's near impossible to say conclusively what the draw is using such a rudimentary method.

[attached image]
 
Look, first people were saying the rumors of the RX 480 being at around 970-to-980 performance were false; that rumor is turning out to be true.

Now for the frequency: people were saying it was going to get up to 1500 stock. When I stated that wasn't going to happen in any reasonable way, I got flamed. I stated AMD would get ~200 MHz from the node transition and that's it; I never believed they were going to make the architectural changes needed for 1500 MHz on a consistent basis.

I'm going to say it again: even with that pic, it doesn't show us average consumption. It's probably higher than that 112, not to mention that if it's Firestrike, it's not going to push wattage as much as regular games do.
 
Look, first people were saying the rumors of the RX 480 being at around 970-to-980 performance were false; that rumor is turning out to be true.

Now for the frequency: people were saying it was going to get up to 1500 stock. When I stated that wasn't going to happen in any reasonable way, I got flamed. I stated AMD would get ~200 MHz from the node transition and that's it; I never believed they were going to make the architectural changes needed for 1500 MHz on a consistent basis.

I'm going to say it again: even with that pic, it doesn't show us average consumption. It's probably higher than that 112, not to mention that if it's Firestrike, it's not going to push wattage as much as regular games do.

Some were claiming sub-970 because of the VR score, which was contested... the 970-980 claims were questioned in the sense of what it will do OCed. Some say above the 980, some say below. We shall see there.

I believe the base clock of the 380X was 950 MHz, so AMD got 300+ MHz out of stock clocks alone... The 380X only OCed about 150 MHz, so I'd expect the 480 to OC at least 200 MHz if all things remain fairly proportional (see the sketch after this post). I'm actually expecting a bit more because of non-die-shrink changes, but that's optimism :)

Any OCing results we see now only reflect the stock cooler and a 6-pin power connector, so they don't represent the 480's full potential. Yes, higher clocks will require more power and more cooling, exactly the same as they do with nVidia cards, so no knock on AMD there.
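The proportional-headroom reasoning from the second paragraph, sketched out with the clocks quoted there; treat it as a rule of thumb, not a prediction:

```python
# Proportional OC-headroom rule of thumb, using the clocks quoted above:
# the 380X went from a ~950 MHz base with roughly +150 MHz of headroom,
# and the same percentage applied to a 1266 MHz base gives roughly +200 MHz.
r9_380x_base_mhz, r9_380x_headroom_mhz = 950, 150
rx480_base_mhz = 1266

headroom_fraction = r9_380x_headroom_mhz / r9_380x_base_mhz
rx480_headroom_mhz = rx480_base_mhz * headroom_fraction

print(f"380X headroom: ~{headroom_fraction:.0%}")                                 # ~16%
print(f"Proportional RX 480 OC: ~{rx480_base_mhz + rx480_headroom_mhz:.0f} MHz "
      f"(+{rx480_headroom_mhz:.0f} MHz)")                                         # ~1466 MHz
```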
 
Where did you get the detail that 1 pixel = 10 W? You are looking at a 10.1 W reading and trying to determine the prior power usage from the pixels of the timeline?

Did you determine it by seeing 10 W and 1 pixel on the graph? If 1 pixel = 10 W, and that graph is almost full at 160 W, how do they show 250 W? Do they just make the graph taller? Change the pixels per watt? lol... What if the graph jumps a pixel every 7 W, so it still only shows 1 pixel at 10 W? That would mean 16 pixels = 112 W...

Just curious how you came to the conclusion that 1 pixel = 10 W in your picture...

The power level momentarily blips to 2 pixels quite often despite there being no load at all on the GPU.
Idle is just over 10W.
So 1 pixel is very close to 10W.
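Since this pixel-counting estimate keeps coming up, here it is written out with both of the contested scales; neither is a measurement, just two readings of the same screenshot:

```python
# The pixel-counting estimate being argued over. The entire disagreement is
# about the watts-per-pixel scale, so both contested values are shown.
peak_height_px = 16
for watts_per_pixel in (10.0, 7.0):
    estimate_w = peak_height_px * watts_per_pixel
    print(f"At {watts_per_pixel:g} W/pixel: ~{estimate_w:.0f} W peak")
# -> ~160 W at 10 W/pixel, ~112 W at 7 W/pixel
```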
 