From ATI to AMD back to ATI? A Journey in Futility @ [H]

When I first saw 170W I swear I thought it was total system power using a stock 6600k or something.

lol I don't blame you, I so wanted to believe this was a 100-120W part...

How the hell is a 7850 beating it in power consumption!?
 
I honestly think it's because it would look less efficient.

Stupid part is it's still much more efficient than their last-gen cards. It's getting twice the performance of Tonga. Kind of a stupid move, oh well. I need a break from the forums. My eyes are going blind, and Newegg decided to do a bi-annual inventory and said it won't ship until Friday. Damn these guys! Got an 850 EVO and a Corsair H100i sitting here still sealed, waiting for these puppies. Yes, you are right, no GPU yet lol.

 

Best of luck upgrading :) Don't CF until we see if Kyle manages to blow up his board. All this debating aside, watching Kyle blow up his board, no matter the cause, is something everyone can look forward to.
 
Hell no, can't even go CrossFire, I don't do multi-GPU crap lol. That's a mini-ITX board, no CrossFire anyway. I eliminated that option from the start. I shall never be tempted to make that mistake.
 
I would get a better power supply; an extra 25 bucks there would get you pretty far.
 
Na, I read reviews on it. Those are damn good power supplies: fully modular and quiet. It's not cheap by any means; it's 80 Plus Gold certified with slim cabling. I never go cheap on the power supply, that one was just on sale, and the 650W was the same price as the 750W. The 750W will do me solid for any single high-end card. I bought all the parts other than the GPU; I pulled the trigger after a month of looking.
 
Oh, and I decided to go with the 6700K. I was like, wtf, I don't want any regrets after putting it together. I spent the extra hundred just for peace of mind lol. I think that's what people undervalue the most. lol
 
I have a 1TB 850 EVO and a 1TB WD Blue, love this thing, wish I could afford the Pro in this size though.
 
I was going to go 1TB, but my last SSD was 240GB, so I went double; I wanted a decent SSD. It was either 500GB or 1TB, but I couldn't justify another $170 jump in price, from $150 to $320. Plus I am very picky about which games I install, so 500GB should do me solid. I have a fat Steam library only because I always jumped on those $30 games going for $5 and $10 in the summer and winter sales.
 
Yeah, I spent a good week re-downloading my library to the EVO; only filled 300GB though.
 
Hey Kyle, did you IP-ban me or something?

HardOCP takes a long time to load, and when I click an article it says 404 Not Found.

I'm on mobile. Can't switch to the full site because it won't load. The forums work fine.
 
m.hardocp.com isn't working for me, but the normal site is (on my phone).

On the PC, the normal site is working as well.
 
Disband AMD. Spin off their GPU division. Scrap their CPU division since no one would want that liability. Bring former ATI to glory.

I foresee Zen as a huge flop, seeing how AMD has been misleading the general audience. I wouldn't take anything from them seriously.
 
You must really love getting assfucked by Intel.
 
Like Intel has any real competition from AMD. Their prices have been quite stable for the last few years, and that's without any pressure from AMD.
 
I also see a Sandy Bridge processor having relatively the same performance as a Skylake one. Intel is absolutely price gouging for what you get.
 
So Intel's been competing with itself? There's been zero pressure from AMD for the last 8 years. You're lucky to get a 4790K or 6700K for ~$300, and that's without any competition.
 
I also see a Sandy Bridge processor having relatively the same performance as a Skylake one. Intel is absolutely price gouging for what you get.

Idk, you should check out the reviews. Skylake is much better than Sandy.
 
So Intel's been competing with itself? There's been zero pressure from AMD for the last 8 years.
Pretty much. When people are replacing their GPUs more often than their CPUs, Intel's revenues suffer. Lately, though, most of the impetus to upgrade that Intel has been providing for desktop users has been in the platform (for example, M.2, USB 3.1, and more PCIe 3.0 lanes), not the processor. The laptop market, however, is a different story.
 
I also see a Sandy Bridge processor having relatively the same performance as a Skylake one. Intel is absolutely price gouging for what you get.
Not if you actually have enough GPU to be CPU-limited. I went from an i7-930 @ 4.4GHz to an i7-4770K @ 4.4GHz, and my minimum frame rates in games went up 10+ fps.

The CPU performance itself is quite a bit better than people realize, it's just that most people aren't actually CPU limited or using CPU intensive applications.
 
Getting back on track to this actual thread-

The one thing I don't understand: if the fully enabled Polaris 10 chip (which became the RX480) was supposed to be a competitor to Nvidia's next gen, how did AMD expect to do that with such a small die (I'm seeing somewhere between 220 and 232mm^2, compared to the 1080's 314mm^2), with so few shaders and ROPs?

I'm no GPU designer, but history shows that each generation has the top end parts with more shaders (or at least in the same ballpark - like 5870->6970). A smaller process size gives you room to pack more stuff into an equal or smaller space.

So either AMD engineers decided - "these new shaders and ROPs are so great, we don't need anywhere near as many of them to equal the performance of our current top parts", or what? It actually wasn't supposed to be a top end part?

I think the benchmarks support a lot of Kyle's original editorial, but I'd really like someone with more in-depth knowledge (than me) to take a stab at piecing together what AMD wanted to happen.

Also, where does Vega fit in this? The earliest info I can find on it was Capsaicin in March. If RX480 was a top-end part at that point, what was Vega, a Titan competitor?
 
Or was it that things looked good on paper but the process was garbage and ran hot?
Long story short: AMD expected Polaris 10 to clock way better, according to Kyle.

Also, if you think about it, transistor-count-wise the P10 vs. cut-GP104 relationship is similar to Tonga XT vs. cut-GM204. We know how the 380X and 970 compared.
 
The one thing I don't understand: If the fully enabled Polaris 10 chip (which became RX480) was supposed to be a competitor with Nvidia's next gen, how did AMD expect to do so with such a small die size chip (I'm seeing somewhere between 220-232 mm^2, compared to 314mm^2 1080), with so few shaders and ROPs?

They probably expected it to clock higher (my guess would be around 1.5 or 1.6GHz before boost) and planned on using GDDR5X instead of GDDR5; then Polaris 10 would be much, much closer to a 1080 once boost clocks of about 1.7 or 1.8GHz are applied. The RX480 is obviously bandwidth-starved, judging from the 1080p vs. 1440p benches, but AMD went cheap on the RAM since they had no choice once they repositioned it as a mid-range instead of a high-end card.

If you look at the 1080, it's narrower than the previous 900 generation, but it makes up for it with extra MHz.
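For what it's worth, the clock speculation can be put in rough FLOPS terms. A back-of-the-envelope sketch, assuming peak FP32 throughput = 2 FLOPs per shader per clock; the RX480's 2304 shaders and the 1080's 2560 shaders @ ~1733MHz boost are published specs, while the 1.5-1.6GHz figures are just the speculation above:

```python
# Back-of-the-envelope FP32 throughput: 2 FLOPs per shader per clock.
def tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0

# Shipped RX 480 (2304 shaders @ 1266 MHz boost) vs the speculated clocks,
# against a GTX 1080 (2560 shaders @ ~1733 MHz boost).
print(f"RX 480 @ 1.266 GHz: {tflops(2304, 1.266):.2f} TFLOPS")
print(f"RX 480 @ 1.5 GHz:   {tflops(2304, 1.5):.2f} TFLOPS")
print(f"RX 480 @ 1.6 GHz:   {tflops(2304, 1.6):.2f} TFLOPS")
print(f"GTX 1080 @ 1.733:   {tflops(2560, 1.733):.2f} TFLOPS")
```

Even at the speculated 1.6GHz, the raw throughput still sits well below the 1080's, which fits the idea that P10 was aimed at the 1070 slot at best.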
 
So Intel's been competing with itself? There's been zero pressure from AMD for the last 8 years. You are lucky to get a 4790K or 6700K for ~300$ and that too without any competition.
Skylake has been getting such good yields (in excess of 90% now, compared to ~30% at launch) that there should be tons of inventory. Funny thing, there should be tons of Haswell out there too, as Intel ramped up production on it in case Skylake yields didn't pan out quickly. Anyway....

The one thing I don't understand: If the fully enabled Polaris 10 chip (which became RX480) was supposed to be a competitor with Nvidia's next gen, how did AMD expect to do so with such a small die size chip (I'm seeing somewhere between 220-232 mm^2, compared to 314mm^2 1080), with so few shaders and ROPs?
I was told that Polaris was supposed to take the Fury/Fury X spot in the stack, with Vega still to be on top when it gets here......late. AMD was taken by surprise by 1080/1070 perf. They got caught with their pants down and are struggling to pull them up.
 
I also see a Sandy Bridge processor having relatively the same performance as a Skylake one. Intel is absolutely price gouging for what you get.

It doesn't and they aren't.


Back to the topic at hand: I can see AMD possibly selling off the Radeon division to get an influx of cash, with an exclusive license to the GPU tech for an extended period. That would be the best option for both sides. Let Radeon get a new parent with money to spend, and give AMD some breathing room.
 
Agreed on the CPUs, but back on topic.

I was told from a single source week before last that the RTG licensing deal with Intel is still very much on track....and top secret.
 
So, it sounds like some combination of the process (ability to clock higher) and AMD's design on said process (Nvidia really touted the work they did to hit their clocks at 16nm).

I know there's a sizable performance diff between the 390X and the 980, but it's close enough for this horrible armchair math:

980 GM204 = 5.2B Transistors = 13.07M Transistors/mm^2
1080 GP104 = 7.2B Transistors = 22.93M Transistors/mm^2

390X Grenada XT = 6.2B Transistors = 14.15M Transistors/mm^2
RX480 Polaris 10 = 5.7B Transistors = 24.57M Transistors/mm^2

So things are in the same ballpark as far as density goes; the 14nm vs. 16nm difference easily accounts for the 1080/RX480 delta.
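A quick sketch to sanity-check the armchair math above. The die areas used here (398mm^2 for GM204, 314mm^2 for GP104, 438mm^2 for Grenada XT, and 232mm^2 for Polaris 10) are the commonly reported figures, not anything confirmed in this thread:

```python
# Rough transistor-density check for the figures quoted above.
chips = {
    # name: (transistors in billions, die area in mm^2)
    "GM204 (980)":        (5.2, 398),
    "GP104 (1080)":       (7.2, 314),
    "Grenada XT (390X)":  (6.2, 438),
    "Polaris 10 (RX480)": (5.7, 232),
}

for name, (billions, area_mm2) in chips.items():
    density = billions * 1000 / area_mm2  # million transistors per mm^2
    print(f"{name}: {density:.2f}M transistors/mm^2")
```

The numbers land right where the post says: the two 28nm chips cluster around 13-14M/mm^2, and the two FinFET chips around 23-25M/mm^2.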

Is the 1080 faster than the 980? Yes; all things being equal, they packed more transistors into it. (In real life it's also clocked a heck of a lot higher, has better IPC, etc.)

Is the RX480 faster than the 390X? No; all things being equal, it contains fewer transistors (again, there are some other improvements, but they don't help enough). So the only way for the RX480 to be faster than the 390X would have been to seriously clock it up (like Kyle suggested, and way more than it currently is).

So Nvidia played it safe: if the new part couldn't hit the clocks, they could always fall back on the fact that they threw more transistors at the problem. The new chip would likely be more efficient and eventually cheaper, as the die is smaller.

AMD tried to get away with less: they used a super small die, hoping they could clock it higher. When they couldn't, it became a mid-tier part.

That really sounds like a hail-mary move from AMD. Shame.
 
Might tie in with Intel's looming FreeSync support, or perhaps their support of VESA Adaptive Sync was just a coincidence.
 
As the proud owner (really, they are awesome!) of two Acer XB321HK 32" 4K G-Sync monitors, I find myself wishfully wondering whether they might someday get a firmware patch to make them VESA Adaptive Sync compatible ... I don't know enough about the two protocols to know if that's practical.
 
I'm really having a hard time understanding how AMD creates a part on a smaller process that's getting killed in perf/watt by 16nm Pascal.

I keep wondering the same thing. I assumed (incorrectly) that it would draw less power than a 1070, but it doesn't. I wish someone would post an in-depth analysis of how/why Nvidia can be so much more efficient than AMD.

I wonder if it's just because GloFo is such a sub-par foundry and AMD is hamstrung by that terrible, unbreakable contract they signed with them :(
 
As the proud owner (really, they are awesome!) of two Acer XB321HK 32" 4K G-Sync monitors, I find myself wishfully wondering whether they might someday get a firmware patch to make them VESA Adaptive Sync compatible ... I don't know enough about the two protocols to know if that's practical.

Even if it's possible, I'm not sure who would be motivated to do that. Acer? Nope, they'd prefer that you "upgrade" to the FreeSync model. Nvidia? Even if they move to support VESA Adaptive Sync, they will likely continue to support G-Sync. Making the monitor FreeSync compatible would only provide incentive to not stay with their GPU.
 
What Kyle said, plus that graphic giving the impression that P10/11 is an entire product stack, and Vega is another complete top to bottom stack, gives the impression that P10 was supposed to do a LOT more than just offer GTX 970 performance. Unless, "AMD was taken by surprise on 1080/1070 perf" is code for "AMD legitimately thought that Nvidia's high end 2016 card would only match the GTX 970 in performance."

That said, I'll repeat what I've said before: AMD likely expected a 780Ti-to-980 performance jump, not the 980Ti-to-1080 jump we actually got. And P10 was targeting that 780Ti-to-980 performance leap but came nowhere close. I know people will disagree; there's evidence for and against it. But that's my speculation.

That is what I believe. AMD wants a product on par with the Fury X in order to retire the Fury lineup, since it is costing them more than they'd like.
 
As far as I'm concerned, the RX480 reviews made me buy a $400 1070 non-FE.

Even if the perf/$ is good enough to ignore the ~160W TDP and the crappy cooler, when people have to choose between that combination and avoiding the PCIe power-overdraw FUD, they will choose the latter, aka "not fucking fry my motherboard".

Nvidia is already at 80% market share, and yet these AMD jokers still never learn from their PR disasters.
 
It's tempting to go back over the 62 pages of this thread to see how many times Kyle was insulted and demeaned for telling the truth, here and elsewhere across the web. Anyone buying the reference RX480 in its first days on the market should have heeded this report. Kyle, you're officially vindicated.
 
What Kyle said, plus that graphic giving the impression that P10/11 is an entire product stack, and Vega is another complete top to bottom stack, gives the impression that P10 was supposed to do a LOT more than just offer GTX 970 performance. Unless, "AMD was taken by surprise on 1080/1070 perf" is code for "AMD legitimately thought that Nvidia's high end 2016 card would only match the GTX 970 in performance."

That said, I'll repeat what I've said before. AMD likely expected 780ti-980 performance jump, and not the actual 980ti-1080 that we got. And, P10 was targeting that 780ti-980 performance leap, but came nowhere close. I know, people will disagree. There's evidence for and against it. But, that's my speculation.

Well, Kyle was right. It's just that a lot of people thought he was mad, but in reality that's just his style. Here is what happened.

AIB cards are coming out, and Kyle himself reported that they can do between 1.48 and 1.6GHz, but within that range it really depends on the chip.

That proves his point.

AMD wanted close to Fury performance, but the card was drawing too much power at those clocks, and they were only designing one chip, with 2304 shaders, going for max clocks and efficiency.

So they turned that bitch down to 1266MHz, priced it cheap, and made the mistake of putting a 6-pin connector on it. I don't know why they didn't put an 8-pin connector on and call it 160W TDP for the power usage. Blows my mind.
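The connector complaint is simple budget math. Per the PCIe CEM spec, the x16 slot is rated for 75W, a 6-pin auxiliary plug for 75W, and an 8-pin for 150W; a rough sketch, using ~160W as the approximate stock board power reviewers measured:

```python
# PCIe power budget math behind the 6-pin vs 8-pin complaint.
# Ratings per the PCIe CEM spec: x16 slot 75 W, 6-pin 75 W, 8-pin 150 W.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

board_power = 160  # approximate RX 480 board power in watts

budget_6pin = SLOT_W + SIX_PIN_W    # 150 W total
budget_8pin = SLOT_W + EIGHT_PIN_W  # 225 W total

print(f"6-pin budget: {budget_6pin} W -> over by {board_power - budget_6pin} W")
print(f"8-pin budget: {budget_8pin} W -> headroom {budget_8pin - board_power} W")
```

With a 6-pin the card is over its combined budget before any overclocking, which is exactly why the excess ends up being pulled through the slot; an 8-pin would have left ~65W of headroom.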

Kyle's report about AIBs hitting 1.48-1.6GHz just shows that this chip is capable of more, but with a much higher power draw.

Which actually makes me glad they didn't try to throw Vega onto the brand-new 14nm process and make a mess of it. I think they will probably have a few more revisions of this chip by the time Vega comes out. We might even see a 485 that is a different revision and improves on clock speed and efficiency.

It seems that AMD is getting all kinds of variation in chips. We are getting 1266MHz now, and probably 1.5-1.6GHz max, but in the current state at GF it requires more power to run at those speeds than they were comfortable with.

I truly think it should use much less power at 14nm; it's just taking them longer to get there. I hope another six months will do the trick and they can refine the process well enough in time for Vega.
 
AMD got lucky with the HBM2 delays; otherwise they most likely would have tried to release Vega on the new process too. But we don't even know for sure yet whether the new process is to blame or the aging GCN arch.
 