AMD Radeon RX 460 Official Specification Information @ [H]

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,532
AMD Radeon RX 460 Official Specification Information - AMD is launching the AMD Radeon RX 460 today, completing its lineup of Polaris architecture GPUs in the sub-$250 product space. Here is all the press information, specifications, and pricing in regards to the AMD Radeon RX 460 in today's launch. No card was sent for review. All three Polaris GPUs are now on the table.
 
Doesn't look bad. More options for the low end are nice. I agree with your comment on product compression though; I guess that's what happens when the top of the stack isn't that great of a performer.
 
PurePC has benchmarks up. Site is not in English.

It seems not to have done better than 40fps at 1080p in the games tested, and was often doing 24-30fps.
Not sure it's much of an upgrade over an iGPU.
 
Hoping that some third party makes a half-height variant. It's silly that the half-height reigning champ is still the GTX 750 Ti.
 
If some cards end up around the $100 price range, I can see this being the new GTX 750 Ti: the card you recommend to people who either don't have a decent power supply or can't spend more than $100 and still want to game. It looks like it hovers around the GTX 960 in benchmarks, which is quite an upgrade over the GTX 750 Ti. Boy, am I glad I sold my 750 Ti last month.

What this chip will really be good for is thinner laptops. It doesn't pull very much power even on a desktop card.

Based on this review:
ASUS Radeon RX 460 STRIX Gaming 4 GB review
 
Does the 4XX series have 10 bit colour capability as standard? I thought that was coming to domestic cards this year rather than just Quadro/FireGL.

Also I really love the look of those compact AMD reference 460 cards.
 
Does the 4XX series have 10 bit colour capability as standard? I thought that was coming to domestic cards this year rather than just Quadro/FireGL.

Also I really love the look of those compact AMD reference 460 cards.
Would HDR require that? If so then it should be 10bit.
 
Would HDR require that? If so then it should be 10bit.

Well I'm referring to being able to activate 10bit colour in Photoshop etc. with a 10bit capable monitor. AFAIK only Quadro and FireGL cards have been able to do this.

Actual info on this ranges from sketchy assumptions to downright archaic.
 
I must say, I do enjoy AMD's policy!!
Some companies aim for the top and better performance with their new GPUs, while AMD continuously and proudly announces that their new GPU managed to surpass the medium / bottom end of NVidia's previous-generation GPUs.
I'm looking forward to AMD's next announcement, which will probably proudly declare that another of their new GPUs is able to surpass the GTX 940 & GTX 930 !! :ROFLMAO: ( and after a couple of years they might be able to compete with the GTX 1080 :nailbiting:)
 
Doesn't look bad. More options for the low end are nice. I agree with your comment on product compression though; I guess that's what happens when the top of the stack isn't that great of a performer.
I would say that it doesn't look very good either. I expected a bit more.
 
If some cards end up around the $100 price range, I can see this being the new GTX 750 Ti: the card you recommend to people who either don't have a decent power supply or can't spend more than $100 and still want to game. It looks like it hovers around the GTX 960 in benchmarks, which is quite an upgrade over the GTX 750 Ti. Boy, am I glad I sold my 750 Ti last month.

What this chip will really be good for is thinner laptops. It doesn't pull very much power even on a desktop card.

Based on this review:
ASUS Radeon RX 460 STRIX Gaming 4 GB review

In DX 11 it is on par with or behind the GTX 950, while in DX 12/Vulkan it is on par with or ahead of the GTX 960, about where I expected it to be. I got an EVGA GTX 950 SC+ for $99.99 over a month ago brand new from Newegg (for my secondary rig). I don't regret it.
 
Is AMD on crack, or is all their capacity being gobbled up by console makers?

the product stack should be this
460 2GB = $90
460 4GB = $110
470 4GB = $150
480 4GB = $190
480 8GB = $230

As reported, it was originally:

460 2GB = $99
460 4GB = $119 (or $129)
470 4GB = $149
470 8GB = $179
480 4GB = $199
480 8GB = $229 (or $239)

There are two problems with that stack. First, the 460 and 470 4GB versions are too close in price for the massive performance gap. Who would buy a 4GB 460 at that price!? The problem seems temporarily solved, however, as the cheapest 4GB 470 I can find today is $199 and out of stock.

The second problem is the 4GB 480. Granted, it doesn't seem to currently exist, but if it did, it's hovering too close to the 8GB 470 in the ladder. Again, problem solved due to price gouging. What we're really seeing is pretty much this:

460 2GB = NIS yet, $99 doubtful
460 4GB = NIS yet, $150-ish likely
470 4GB = $199+
470 8GB = $229+
480 4GB = LOL
480 8GB = $279 from AIB partners as reference are perpetually out of stock

Solution - Decompress the damn stack

460 2GB = $100
470 4GB = $150
470 8GB = $200
480 8GB = $250

We don't need a 4GB 460 or 480. Yes, I know that this kills the "$199 480" bit, but oh well.
 
As reported, it was originally:
Solution - Decompress the damn stack

460 2GB = $100
470 4GB = $150
470 8GB = $200
480 8GB = $250

We don't need a 4GB 460 or 480. Yes, I know that this kills the "$199 480" bit, but oh well.

The problem with that pricing is that 4GB of RAM for the 470 card doesn't equate to $50...
 
I wanted the RX 460 to be a competitive card but after reading the review over on Bit-tech, I don't believe it is. In the usual battery of AAA games (probably not the target audience of the product though certainly the audience of their readers) it was well behind the GTX 950 in 1080p high and medium. When you factor in the fact that you can grab an MSI GTX 950 GAMING 2GB for $130 USD - $30 rebate card on Newegg.com, the RX 460 is a tough sell to those who are on a tight budget and actually read reviews.
 
So what happened to this:

[image: RX470Lisa_575px.jpg]


The little ~50W GPU that could supposedly match the GTX 950 at 86W total system power:

[image: Radeon Technologies Group_Graphics 2016, page 016]


Instead we get a GPU that matches the 950... at about the same power. I thought the whole point was to be as energy efficient as possible and comfortably fit into the <75W bracket (where you can fill in lots of interesting form factors, like low-profile, single-slot, or passively cooled). Most versions seem to have the 6-pin connector, and those that don't seem to trail the 950 by a fair amount.
 
So what happened to this:



The little ~50W GPU that could supposedly match the GTX 950 at 86W total system power:

[image: Radeon Technologies Group_Graphics 2016, page 016]


Instead we get a GPU that matches the 950... at about the same power. I thought the whole point was to be as energy efficient as possible and comfortably fit into the <75W bracket (where you can fill in lots of interesting form factors, like low-profile, single-slot, or passively cooled). Most versions seem to have the 6-pin connector, and those that don't seem to trail the 950 by a fair amount.

It seems that particular scene had settings lowered and Vsync enabled so that both cards ran at a locked 60fps (appearance of equal performance) and both cards consumed less than their full power (appearance of lower power draw).
 
So what I'm seeing is that AMD has about 5 models between $100-$200 (including the unicorn 4GB 480). Seems like a lot of engineering effort to carve the market into so many pieces... was this really the best use of their R&D resources? Yes, we all know that this is the "bread and butter" market and that all the volume supposedly comes from the mainstream... but still, here's a card based on a unique chip that isn't really competitive with a product one generation older from NV, and that they're going to try to sell for peanuts and make almost no margin on (probably). They'd better hope this is the card that brings them back to relevance in mobile gaming, because they've mostly passed on the market that's growing (higher-end GPUs) and focused on the one that's shrinking (low end of mainstream).
 
I get that it's on the new 14nm process... but it seems to me that if you are on a budget and want some cheap/silent/low-power gaming, according to everything I've seen here and elsewhere, the R9 270 looks to be a better card. I see them going for 80 to 100 dollars online; it uses only about 40W more power and performs around 30% better.

I have one in my media PC and it's fantastic (granted, I don't do any gaming on it). 1070 in my main rig.

Just seems like for the money there are way better options out there, with the only downside being they're not on 14nm.
 
So what I'm seeing is that AMD has about 5 models between $100-$200 (including the unicorn 4GB 480). Seems like a lot of engineering effort to carve the market into so many pieces... was this really the best use of their R&D resources? Yes, we all know that this is the "bread and butter" market and that all the volume supposedly comes from the mainstream... but still, here's a card based on a unique chip that isn't really competitive with a product one generation older from NV, and that they're going to try to sell for peanuts and make almost no margin on (probably). They'd better hope this is the card that brings them back to relevance in mobile gaming, because they've mostly passed on the market that's growing (higher-end GPUs) and focused on the one that's shrinking (low end of mainstream).

To piggy back on this:

Integrated/mobile GPUs (laptops, AIOs, etc.) - market is growing, AMD doesn't have a new product for it yet.
High-end discrete GPUs - market is growing, AMD doesn't have a new product for it yet.
Mainstream $100-$300 GPUs - market is shrinking, AMD saturates this market with multiple SKUs, though availability remains limited.

The strategy doesn't make sense to me. However, I hope that AMD does succeed at it. We can't afford to drop down to one GPU maker.
 
So what happened to this:

[image: RX470Lisa_575px.jpg]


The little ~50W GPU that could supposedly match the GTX 950 at 86W total system power:

[image: Radeon Technologies Group_Graphics 2016, page 016]


Instead we get a GPU that matches the 950... at about the same power. I thought the whole point was to be as energy efficient as possible and comfortably fit into the <75W bracket (where you can fill in lots of interesting form factors, like low-profile, single-slot, or passively cooled). Most versions seem to have the 6-pin connector, and those that don't seem to trail the 950 by a fair amount.

It was a flat-out rigged lie. Just like the 2.8x claims.
 
To piggy back on this:

Integrated/mobile GPUs (laptops, AIOs, etc.) - market is growing, AMD doesn't have a new product for it yet.
High-end discrete GPUs - market is growing, AMD doesn't have a new product for it yet.
Mainstream $100-$300 GPUs - market is shrinking, AMD saturates this market with multiple SKUs, though availability remains limited.

The strategy doesn't make sense to me. However, I hope that AMD does succeed at it. We can't afford to drop down to one GPU maker.

AMD's strategy only really makes sense if you assume AMD thought all their cards would clock better than they actually did. If the RX 480 clocked near 2,000MHz like the GTX 1070 and 1080, it would be more competitive with a GTX 1070. Basically, if all the cards had clocked better, they'd all match up to Nvidia's cards that are currently one notch up and be competitive on power efficiency. And right before launch they needed to mark everything down because they couldn't hit those clock speeds. Designs are finalized years in advance; the only last-minute variables are how fast the chips can clock and remain stable, and drivers. It seems the most reasonable explanation to me.
 
AMD's strategy only really makes sense if you assume AMD thought all their cards would clock better than they actually did. If the RX 480 clocked near 2,000MHz like the GTX 1070 and 1080, it would be more competitive with a GTX 1070. Basically, if all the cards had clocked better, they'd all match up to Nvidia's cards that are currently one notch up and be competitive on power efficiency. And right before launch they needed to mark everything down because they couldn't hit those clock speeds. Designs are finalized years in advance; the only last-minute variables are how fast the chips can clock and remain stable, and drivers. It seems the most reasonable explanation to me.

I think it's a stretch to assume AMD thought they would be able to reach 1070 levels of performance with the number of CUs and memory bus width they included on the 480. I do think they probably expected better, but not radically better. If they were expecting to hit 2GHz with those chips, they probably wouldn't have gone with just a 256-bit memory bus that would have left a chip running 66% faster starved for bandwidth. I think they probably did expect the 480 to hit the 1.5-1.6GHz range reliably, in that 150W TDP range, though. The 480 didn't even have as many CUs as the 290 did, but it could have had 390X kind of performance if they'd been able to hit 1.5GHz+.
 
I think it's a stretch to assume AMD thought they would be able to reach 1070 levels of performance with the number of CUs and memory bus width they included on the 480. I do think they probably expected better, but not radically better. If they were expecting to hit 2GHz with those chips, they probably wouldn't have gone with just a 256-bit memory bus that would have left a chip running 66% faster starved for bandwidth. I think they probably did expect the 480 to hit the 1.5-1.6GHz range reliably, in that 150W TDP range, though. The 480 didn't even have as many CUs as the 290 did, but it could have had 390X kind of performance if they'd been able to hit 1.5GHz+.

The GTX 1070 also has a 256-bit bus and basically the same RAM the 480 has.

You're probably right that it wouldn't perform as well as a GTX 1070, but I didn't say I thought it would. Just that AMD expected it would.
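For reference, the bandwidth arithmetic behind the bus-width argument above is easy to check: bandwidth = bus width × per-pin data rate. The 8 Gbps GDDR5 rate used here matches the launch specs of the 8GB RX 480 and the GTX 1070 as far as I can tell, but treat the exact figures as assumptions.

```python
# Memory bandwidth back-of-envelope: bus width (bits) times
# per-pin effective data rate (Gbps), divided by 8 bits per byte.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

rx480_8gb = bandwidth_gbs(256, 8.0)  # 8GB RX 480: 256 GB/s
gtx_1070 = bandwidth_gbs(256, 8.0)   # GTX 1070: also 256 GB/s

# A hypothetical 2GHz RX 480 would have ~66% more core throughput
# but the exact same 256 GB/s to feed it -- the "starved" scenario.
print(rx480_8gb, gtx_1070)
```

Identical on paper; per the tile-based-rasterization point raised later in the thread, the difference is in how efficiently that bandwidth gets used.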
 
Solution - Decompress the damn stack

460 2GB = $100
470 4GB = $150
470 8GB = $200
480 8GB = $250

We don't need a 4GB 460 or 480. Yes, I know that this kills the "$199 480" bit, but oh well.

Simplify it, and don't skimp on the VRAM; RIP 2GB cards, imho.

460 4GB = $130
470 4GB = $190
480 8GB = $250
 
The GTX 1070 also has a 256-bit bus and basically the same RAM the 480 has.

You're probably right that it wouldn't perform as well as a GTX 1070, but I didn't say I thought it would. Just that AMD expected it would.

The GTX 1070 also has a tile-based renderer, drastically reducing memory bandwidth constraints. AMD isn't dumb; they know what their own architecture can do per cycle, they also knew what Maxwell did per cycle, and they probably even knew nVidia had implemented tiling.

Unless they tried to implement tiling and it's just not exposed currently because it's broken or not fully implemented in drivers, I can't see AMD thinking they'd reach Pascal levels of efficiency with regard to memory bandwidth.

The most obvious explanation is that AMD knew they were targeting a mid range part but hoped they'd gain more from the new process node than they did.
 
I must say, I do enjoy AMD's policy!!
Some companies aim for the top and better performance with their new GPUs, while AMD continuously and proudly announces that their new GPU managed to surpass the medium / bottom end of NVidia's previous-generation GPUs.
I'm looking forward to AMD's next announcement, which will probably proudly declare that another of their new GPUs is able to surpass the GTX 940 & GTX 930 !! :ROFLMAO: ( and after a couple of years they might be able to compete with the GTX 1080 :nailbiting:)

Well, you can't really say that when AMD is kicking some serious ass on the DX12 front, and Doom on Vulkan is another spanking...
AMD planned Vega for 2017; I'm sure that will be better performance with HBM2 than what is out now.
 
AMD's strategy only really makes sense if you assume AMD thought all their cards would clock better than they actually did. If the RX 480 clocked near 2,000MHz like the GTX 1070 and 1080, it would be more competitive with a GTX 1070. Basically, if all the cards had clocked better, they'd all match up to Nvidia's cards that are currently one notch up and be competitive on power efficiency. And right before launch they needed to mark everything down because they couldn't hit those clock speeds. Designs are finalized years in advance; the only last-minute variables are how fast the chips can clock and remain stable, and drivers. It seems the most reasonable explanation to me.

I'd have to agree that this scenario seems fairly likely. If the 480 was supposed to have a good 30-40% higher clocks at the same TDP, it would have been a $300+ GPU instead of where it is now, and that opens up the range in the stack fairly well. Likewise, the 470 would have had more room to maneuver up or down the food chain with better clocks as well. Huh, it's almost like that Kyle Bennett guy had good information!

Well, you can't really say that when AMD is kicking some serious ass on the DX12 front, and Doom on Vulkan is another spanking...

Well, thank goodness it's kicking ass in a game that the competition doesn't exactly struggle with, then loses to it in many other places! By the time DX12/Vulkan really kick off, the 460 will be obsolete.
 
Well, thank goodness it's kicking ass in a game that the competition doesn't exactly struggle with, then loses to it in many other places! By the time DX12/Vulkan really kick off, the 460 will be obsolete.

Most cards will be obsolete by then; what kind of silly argument is that? If DX12/Vulkan games progress to the point where they're hitting around 300K batches, then nothing out today will suffice.
 
AMD's strategy only really makes sense if you assume AMD thought all their cards would clock better than they actually did. If the RX 480 clocked near 2,000MHz like the GTX 1070 and 1080, it would be more competitive with a GTX 1070. Basically, if all the cards had clocked better, they'd all match up to Nvidia's cards that are currently one notch up and be competitive on power efficiency. And right before launch they needed to mark everything down because they couldn't hit those clock speeds. Designs are finalized years in advance; the only last-minute variables are how fast the chips can clock and remain stable, and drivers. It seems the most reasonable explanation to me.

This was a popular theory for a while, but there is no way that AMD intended for a 32-ROP part to compete with the 1070.

I do think that the card did come in below expectations, however. I think that they wanted GTX 960/970/980 performance with the 460/470/480, but with lower power draw. The problem is that these cards are nowhere near as efficient when pushed that hard. The MSI and Sapphire 480s come close to 980 performance, but at higher power draw.
 
I'd have to agree that this scenario seems fairly likely. If the 480 was supposed to have a good 30-40% higher clocks at the same TDP, it would have been a $300+ GPU instead of where it is now, and that opens up the range in the stack fairly well. Likewise, the 470 would have had more room to maneuver up or down the food chain with better clocks as well. Huh, it's almost like that Kyle Bennett guy had good information!

AMD never intended for the 480 to compete with the 1070. AMD explicitly said that they were releasing mid range first, which is the same thing Nvidia did with Maxwell, which launched first with the 750 Ti. I don't know why people insist these cards were going to compete with Nvidia's high-end cards when that wasn't the case at all.

Now, is it possible these cards missed their target? Of course, but let's not pretend that a card whose chip is 220mm² was competing with a chip that's 314mm². That's just being intentionally obtuse.
 
I think AMD's Polaris product stack is hurting because of a poor GF implementation of Samsung's 14LPP process. The fact that AMD is pumping a lot more voltage to qualify as many RX 480 chips as possible is a good indicator. Undervolting reference RX 480 cards leads to better performance, since the chip doesn't hit the TDP limit until higher clocks, so there is less clock throttling.


Google Translate

On average the GTX 1060 is 10% faster than the reference RX 480, which clocks around 1200MHz. The RX 480 could be a better GPU if it had been able to hit 1400MHz and stay below 150W. The problem can be fixed when and only when GF fixes their process issues or AMD fabs chips at Samsung (who have the best available 14LPP implementation). The RX 400 naming scheme suggests a second revision is planned, and an RX 485 could launch in H1 2017.

AMD Radeon RX 400 series naming scheme explained | VideoCardz.com

AMD Officially Diversifies 14nm Manufacturing With Samsung
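The undervolting observation above is consistent with the standard dynamic-power approximation P ≈ C·V²·f: at a fixed board power limit, shaving voltage frees headroom that the boost algorithm can spend on clocks. A rough sketch; the voltage, clock, and capacitance figures here are purely illustrative, not measured RX 480 values.

```python
# Dynamic power approximation: P ~ C * V^2 * f.  At a fixed board
# power limit, lowering voltage lets the chip sustain a higher
# clock before throttling.  All numbers are illustrative only.
def dynamic_power(c, volts, freq_mhz):
    return c * volts**2 * freq_mhz

C = 1.0  # arbitrary effective switching capacitance

stock = dynamic_power(C, 1.15, 1120)      # throttled stock clocks
undervolt = dynamic_power(C, 1.05, 1266)  # undervolted, full boost

# Despite the ~13% higher clock, the undervolted operating point
# draws less power, because power scales with voltage squared.
print(stock, undervolt)  # stock > undervolt
```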
 
AMD never intended for the 480 to compete with the 1070. AMD explicitly said that they were releasing mid range first, which is the same thing Nvidia did with Maxwell, which launched first with the 750 Ti. I don't know why people insist these cards were going to compete with Nvidia's high-end cards when that wasn't the case at all.

Now, is it possible these cards missed their target? Of course, but let's not pretend that a card whose chip is 220mm² was competing with a chip that's 314mm². That's just being intentionally obtuse.

I didn't say I thought it was going to compete with the 1070; that's you trying to read between the lines. But IMO it's not really the same as what NV did with the 750 Ti either. I think it's obvious that they were never shooting to be a direct competitor to the 1070 for the reasons you and others mentioned (chip size, ROP count, etc.), but they probably wanted to be this generation's 970, and it's not. The 1070 is a $400 GPU... I think if they had been able to clock the 480 design higher, they would have been able to charge more for it and offer a more compelling alternative to the 1070 at a lower price. As it currently stands the 480 is zero competition for the 1070, and instead slots in just around (or even below) the 1060. They're far enough behind that people who were looking to spend $300-$350 aren't going to be impressed by the 480, and many will likely trade up to the 1070 rather than sit and wait for Vega.
 
I didn't say I thought it was going to compete with the 1070; that's you trying to read between the lines. But IMO it's not really the same as what NV did with the 750 Ti either. I think it's obvious that they were never shooting to be a direct competitor to the 1070 for the reasons you and others mentioned (chip size, ROP count, etc.), but they probably wanted to be this generation's 970, and it's not. The 1070 is a $400 GPU... I think if they had been able to clock the 480 design higher, they would have been able to charge more for it and offer a more compelling alternative to the 1070 at a lower price. As it currently stands the 480 is zero competition for the 1070, and instead slots in just around (or even below) the 1060. They're far enough behind that people who were looking to spend $300-$350 aren't going to be impressed by the 480, and many will likely trade up to the 1070 rather than sit and wait for Vega.

First you say that I'm reading between the lines (putting words in your mouth) and then do exactly what you said you weren't doing. The 970 is a $400 GPU; it's high end, not mid range. Even with the Ti, $300-350 isn't mid range, nor is it what AMD charges for the 480. This year's 970, as you put it, would be the 1070. The 980 Ti was a single-SKU release that went above what the 980 released at. Again, if Nvidia can release a new generation with mid range first, then AMD should be able to do the same thing, and everyone should be able to comprehend it like we've been looking at this stuff for years, not pretend like we don't know what mid range is. Based on price alone it wouldn't be a 970, or a 1070 for that matter.

Now can we guess and do creative writing exercises? Sure, but let's not pretend those are facts either. At that point it's no more legitimate than eating glue or playing with your feet.
 
In DX 11 it is on par with or behind the GTX 950, while in DX 12/Vulkan it is on par with or ahead of the GTX 960, about where I expected it to be. I got an EVGA GTX 950 SC+ for $99.99 over a month ago brand new from Newegg (for my secondary rig). I don't regret it.

After going back and looking it over again, in DX12/Vulkan at 1080p, RotTR is the only place where the 960 is close, although I'd wonder why the 960 doesn't appear in the AotS benchmark. Even the OpenGL benchmark of Doom has the 480 substantially ahead of the 960.

Step into DX11, though, and Maxwell comes alive again and things are close.

I'm guessing they threw out the 1440p numbers just to illustrate what segment is the target for this card.

However, I look at that card and just get the impression of someone who puts spinners on their Toyota Yaris.

Edit: For the first time in... a while at least, there seems to be almost perfect dollar/performance scaling from budget to mid(ish) range. A 460 costs about half as much as a 1060 and performs about half as well. The 470 (though this really should be the 4GB 480) sits between the 460 and the 1060.

Usually there was at least some falloff in performance purchasing power when you crossed $200, but now it seems like it just scales up to wherever your max budget is, until you cross around $280.
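The near-linear scaling described above can be sanity-checked with a quick price-per-unit-of-performance ratio. The prices and performance indices below are rough illustrative figures (assumptions, not measured data), normalized to the GTX 1060 = 1.0.

```python
# Price per unit of relative performance, normalized to GTX 1060.
# All prices and perf indices are rough illustrations only.
cards = {
    "RX 460":   {"price": 140, "perf": 0.50},
    "RX 470":   {"price": 200, "perf": 0.75},
    "GTX 1060": {"price": 280, "perf": 1.00},
}

for name, c in cards.items():
    ratio = c["price"] / c["perf"]
    print(f"{name}: ${ratio:.0f} per unit of performance")

# Near-constant ratios across the tiers = the "perfect
# dollar/performance scaling" the post describes.
```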
 
First you say that I'm reading between the lines (putting words in your mouth) and then do exactly what you said you weren't doing. The 970 is a $400 GPU; it's high end, not mid range. Even with the Ti, $300-350 isn't mid range, nor is it what AMD charges for the 480. This year's 970, as you put it, would be the 1070. The 980 Ti was a single-SKU release that went above what the 980 released at. Again, if Nvidia can release a new generation with mid range first, then AMD should be able to do the same thing, and everyone should be able to comprehend it like we've been looking at this stuff for years, not pretend like we don't know what mid range is. Based on price alone it wouldn't be a 970, or a 1070 for that matter.

Now can we guess and do creative writing exercises? Sure, but let's not pretend those are facts either. At that point it's no more legitimate than eating glue or playing with your feet.

Again, I never made that comparison in my original post, but since you felt compelled to bring up the 1070, I figured the comparison was fair game. The 970 was a $329 GPU and was widely considered one of the best values at the time of its release... that's what I meant by "this year's 970": that it could be considered the obvious bargain of this product cycle (which it isn't). I guess we can argue the semantics of what is or isn't "mid range", but I'd consider it to be the $200-$300 market. The 480 is at the bottom of that; if it were a faster card, I think it could have been closer to the top, decompressed AMD's product stack to the point of rationality, and won back key market share in an obviously large segment of the market. But hey, if you want to keep making smart little remarks at the end of every post, feel free... we're done here.
 
Again, I never made that comparison in my original post, but since you felt compelled to bring up the 1070, I figured the comparison was fair game. The 970 was a $329 GPU and was widely considered one of the best values at the time of its release... that's what I meant by "this year's 970": that it could be considered the obvious bargain of this product cycle (which it isn't). I guess we can argue the semantics of what is or isn't "mid range", but I'd consider it to be the $200-$300 market. The 480 is at the bottom of that; if it were a faster card, I think it could have been closer to the top, decompressed AMD's product stack to the point of rationality, and won back key market share in an obviously large segment of the market. But hey, if you want to keep making smart little remarks at the end of every post, feel free... we're done here.

Um, is $329 higher than $300? If so, then it isn't mid range, and in reality the card really released at $349; the $329 price was for the reference boards, which were not available at launch. As I said before, it wasn't mid range, and creating a narrative that the 480 was supposed to compete around the level of a 1070, or that a 1070 or 970 were mid range cards, is nuts, especially when Nvidia considers its competition to be the 1060. So if you want to believe that the card was supposed to compete in a bracket higher than even what Nvidia thinks, then so be it. It just doesn't make any sense is all. And what's the die size of a GTX 1060? 200mm². Hmm...
 