From ATI to AMD back to ATI? A Journey in Futility @ [H]

Discussion in 'AMD Flavor' started by Kyle_Bennett, May 27, 2016.

  1. madgun

    madgun [H]ard|Gawd

    Messages:
    1,727
    Joined:
    May 11, 2006
    Disband AMD. Spin off their GPU division. Scrap their CPU division since no one would want that liability. Bring former ATI to glory.

I foresee Zen as a huge flop, seeing how AMD has been misleading the general audience. I wouldn't take anything from them seriously.
     
  2. rat

    rat [H]ardness Supreme

    Messages:
    6,085
    Joined:
    Apr 16, 2008
You must really love getting assfucked by Intel.
     
    NeoNemesis likes this.
  3. madgun

    madgun [H]ard|Gawd

    Messages:
    1,727
    Joined:
    May 11, 2006
Like Intel has any real competition from AMD. Their prices have been quite stable for the last few years, and that's without any pressure from AMD.
     
    Kyle_Bennett likes this.
  4. rat

    rat [H]ardness Supreme

    Messages:
    6,085
    Joined:
    Apr 16, 2008
I also see a Sandy Bridge processor with roughly the same performance as a Skylake one. Intel is absolutely price gouging for what you get.
     
    Maddness likes this.
  5. madgun

    madgun [H]ard|Gawd

    Messages:
    1,727
    Joined:
    May 11, 2006
So Intel's been competing with itself? There's been zero pressure from AMD for the last 8 years. You are lucky to get a 4790K or 6700K for ~$300, and that's without any competition.
     
  6. TheLAWNoob

    TheLAWNoob Limp Gawd

    Messages:
    346
    Joined:
    Jan 10, 2016
    Idk, you should check out the reviews. Skylake is much better than Sandy.
     
  7. pcgeekesq

    pcgeekesq Limp Gawd

    Messages:
    451
    Joined:
    Apr 23, 2012
Pretty much. When people are replacing their GPUs more often than their CPUs, Intel's revenues suffer. Lately, though, most of the impetus to upgrade that Intel has been providing for desktop users has been in the platform (for example, M.2, USB 3.1, and more PCIe 3.0 lanes), not the processor. The laptop market, however, is a different story.
     
  8. Rizen

    Rizen [H]ardForum Junkie

    Messages:
    8,957
    Joined:
    Jul 16, 2000
Not if you actually have enough GPU to be CPU limited. I went from an i7-930 @ 4.4GHz to an i7-4770K @ 4.4GHz and my minimum frame rates in games went up 10+ fps.

    The CPU performance itself is quite a bit better than people realize, it's just that most people aren't actually CPU limited or using CPU intensive applications.
     
    Red Falcon likes this.
  9. thingi

    thingi [H]Lite

    Messages:
    86
    Joined:
    May 15, 2006
Looks like we were both right, Kyle; props to you and your source. Your quoted post ties in perfectly with mine (1922). The only thing I'm surprised about is how close to the redline the 1266MHz boost of the standard card is.

That's a pretty serious process node issue they'll need to resolve before the PS4 Neo and Xbox Scorpio get anywhere near the wild. MS may well have trouble meeting their announced schedule for it once console GPU validation time is taken into account.

RX480 is still great bang for buck though. Even with the PCIe slot power draw issue, I'm still tempted to buy the fastest AIB version as I've intended all along (I fully expect AIB boards to have an 8-pin and draw proportionately less from the PCIe slot).
     
    Kyle_Bennett likes this.
  10. Track Drew

    Track Drew Limp Gawd

    Messages:
    488
    Joined:
    Dec 6, 2007
    Getting back on track to this actual thread-

The one thing I don't understand: if the fully enabled Polaris 10 chip (which became the RX480) was supposed to compete with Nvidia's next gen, how did AMD expect to do so with such a small die (I'm seeing somewhere between 220-232mm^2, versus 314mm^2 for the 1080), with so few shaders and ROPs?

    I'm no GPU designer, but history shows that each generation has the top end parts with more shaders (or at least in the same ballpark - like 5870->6970). A smaller process size gives you room to pack more stuff into an equal or smaller space.

    So either AMD engineers decided - "these new shaders and ROPs are so great, we don't need anywhere near as many of them to equal the performance of our current top parts", or what? It actually wasn't supposed to be a top end part?

I think the benchmarks support a lot of Kyle's original editorial, but I'd really like someone with more in-depth knowledge (than me) to take a stab at piecing together what AMD wanted to happen.

Also, where does Vega fit into this? The earliest info I can find on it was Capsaicin in March. If the RX480 was a top-end part at that point, what was Vega, a Titan competitor?
     
    Last edited: Jun 30, 2016
    extide likes this.
  11. lolfail9001

    lolfail9001 [H]ard|Gawd

    Messages:
    1,547
    Joined:
    May 31, 2016
    Long story short: AMD expected Polaris 10 to clock way better, according to Kyle.

Also, if you think about it, transistor-count-wise the P10 vs. cut GP104 relationship is similar to Tonga XT vs. cut GM204. We know how the 380X and 970 compared.
     
  12. thingi

    thingi [H]Lite

    Messages:
    86
    Joined:
    May 15, 2006
They probably expected it to clock higher (my guess would be around 1.5 or 1.6GHz before boost, with plans to use GDDR5X instead of GDDR5); then Polaris 10 would be much, much closer to a 1080 once boost clocks of about 1.7 or 1.8 are applied. The RX480 is obviously bandwidth starved, judging by the 1080p vs 1440p benches, but AMD went cheap on the RAM since they had no choice once they decided to reposition it as mid-range instead of a high-end card.

If you look at the 1080, it's less wide than the previous 900-series gen, but it makes up for that with extra MHz too.
     
  13. Kyle_Bennett

    Kyle_Bennett El Chingón Staff Member

    Messages:
    47,139
    Joined:
    May 18, 1997
Skylake has been getting such good yields (in excess of 90% now, compared to ~30% at launch) that there should be tons of inventory. Funny thing, there should be tons of Haswell out there too, as Intel ramped up production on that in case Skylake yields did not pan out quickly. Anyway....

I was told that Polaris was supposed to take the Fury/Fury X spot in the stack, with Vega still to be on top when it gets here......late. AMD was taken by surprise on 1080/1070 perf. They got caught with their pants down, and are struggling to pull them up.
     
  14. Cr4ckm0nk3y

    Cr4ckm0nk3y Gawd

    Messages:
    743
    Joined:
    Jul 30, 2009
    It doesn't and they aren't.


Back to the topic at hand: I can see AMD possibly selling off the Radeon division for an influx of cash, with an exclusive license to the GPU tech for an extended period of time. That would be the best option for both sides. Let Radeon get a new parent with money to spend and give AMD some breathing room.
     
    Kyle_Bennett likes this.
  15. Kyle_Bennett

    Kyle_Bennett El Chingón Staff Member

    Messages:
    47,139
    Joined:
    May 18, 1997

    Agreed on the CPUs, but back on topic.

    I was told from a single source week before last that the RTG licensing deal with Intel is still very much on track....and top secret.
     
    razor1 likes this.
  16. Araxie

    Araxie [H]ardness Supreme

    Messages:
    5,657
    Joined:
    Feb 11, 2013
But due to some guy who is now a vindicated, respected and acclaimed CHEF, that's no "secret" anymore. ;) Guess you should know at least one of that kind of chef.
     
    razor1 and Kyle_Bennett like this.
  17. Track Drew

    Track Drew Limp Gawd

    Messages:
    488
    Joined:
    Dec 6, 2007
So, it sounds like some combination of the process (ability to clock higher) and AMD's design on said process (Nvidia really touted the work they did to hit their clocks at 16nm).

    I know there's a sizable performance diff between the 390X and the 980, but it's close enough for this horrible armchair math:

    980 GM204 = 5.2B Transistors = 13.07M Transistors/mm^2
    1080 GP104 = 7.2B Transistors = 22.93M Transistors/mm^2

390X Grenada XT = 6.2B Transistors = 14.15M Transistors/mm^2
RX480 Polaris 10 = 5.7B Transistors = 24.57M Transistors/mm^2

So things are in the same ballpark as far as density goes; 14nm vs 16nm easily accounts for the 1080/RX480 density delta.
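For anyone who wants to redo the armchair math, it checks out; here's a quick Python sketch. Note the GM204 (398mm^2) and Grenada (438mm^2) die sizes aren't in this thread, they're the commonly reported figures, so treat the results as approximate (rounding in the die sizes can shift the last digit).

```python
# Transistor density = transistor count / die area.
# Counts in billions, die sizes in mm^2 (die sizes are the commonly
# reported figures for each GPU, so results are approximate).
gpus = {
    "980 GM204":        (5.2e9, 398),
    "1080 GP104":       (7.2e9, 314),
    "390X Grenada XT":  (6.2e9, 438),
    "RX480 Polaris 10": (5.7e9, 232),
}

for name, (transistors, die_mm2) in gpus.items():
    # Divide by 1e6 to express density in millions of transistors per mm^2.
    density = transistors / die_mm2 / 1e6
    print(f"{name}: {density:.2f}M transistors/mm^2")
```

The output lines up with the figures quoted above to within rounding, including the roughly 1.75x density jump from 28nm to 14/16nm on both sides.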

Is the 1080 faster than the 980? Yes; all things being equal, they packed more transistors into it. (In real life it's also clocked a heck of a lot higher, has more IPC, etc.)

Is the RX480 faster than the 390X? No; all things being equal, it contains fewer transistors (again, there are some other improvements, but they don't help enough). So the only way for the RX480 to be faster than the 390X would have been to seriously clock it up (like Kyle suggested, and way more so than it currently is).

So Nvidia played it safe: if the new part couldn't hit the clocks, they could always fall back on the fact that they threw more hardware at the problem. The new chip would likely be more efficient and eventually cheaper, as the die is smaller.

AMD tried to get away with less, using a super small die and hoping they could clock it higher. When they couldn't, it became a mid-tier part.

    That really sounds like a hail-mary move from AMD. Shame.
     
  18. Cr4ckm0nk3y

    Cr4ckm0nk3y Gawd

    Messages:
    743
    Joined:
    Jul 30, 2009
[IMG]
     
  19. Daniel_Chang

    Daniel_Chang [H]ard|Gawd

    Messages:
    1,331
    Joined:
    Jan 4, 2016
    Might tie in with Intel's looming FreeSync support, or perhaps their support of VESA Adaptive Sync was just a coincidence.
     
  20. pcgeekesq

    pcgeekesq Limp Gawd

    Messages:
    451
    Joined:
    Apr 23, 2012
    As the proud owner (really, they are awesome!) of two Acer XB321HK 32" 4K G-Sync monitors, I find myself wishfully wondering whether they might someday get a firmware patch to make them VESA Adaptive Sync compatible ... I don't know enough about the two protocols to know if that's practical.
     
  21. illli

    illli [H]ard|Gawd

    Messages:
    1,046
    Joined:
    Oct 26, 2005
I keep wondering the same thing. I assumed (incorrectly) that it would draw less power than a 1070, but it doesn't. I wish someone would post an in-depth analysis of how/why Nvidia can be so much more efficient than AMD.

I wonder if it is just because GloFo is such a sub-par company and AMD is hamstrung by that terrible, unbreakable contract they signed with them :(
     
  22. Daniel_Chang

    Daniel_Chang [H]ard|Gawd

    Messages:
    1,331
    Joined:
    Jan 4, 2016
Even if it's possible, I'm not sure who would be motivated to do that. Acer? Nope, they'd prefer that you "upgrade" to the FreeSync model. Nvidia? Even if they move to support VESA Adaptive Sync, they will likely continue to support G-Sync. Making the monitor FreeSync compatible would only give owners an incentive not to stay with their GPUs.
     
  23. trandoanhung1991

    trandoanhung1991 Gawd

    Messages:
    1,023
    Joined:
    Aug 26, 2011
    Well, AMD did have this roadmap, after all:
[IMG]
     
    Presbytier likes this.
  24. Daniel_Chang

    Daniel_Chang [H]ard|Gawd

    Messages:
    1,331
    Joined:
    Jan 4, 2016
What Kyle said, plus that graphic showing P10/11 as an entire product stack and Vega as another complete top-to-bottom stack, gives the impression that P10 was supposed to do a LOT more than just offer GTX 970 performance. Unless "AMD was taken by surprise on 1080/1070 perf" is code for "AMD legitimately thought that Nvidia's high-end 2016 card would only match the GTX 970 in performance."

That said, I'll repeat what I've said before. AMD likely expected a 780 Ti-to-980 performance jump, not the actual 980 Ti-to-1080 jump that we got. P10 was targeting that 780 Ti-to-980 leap, but came nowhere close. I know people will disagree; there's evidence for and against it. But that's my speculation.
     
    Chimpee likes this.
  25. Chimpee

    Chimpee Gawd

    Messages:
    642
    Joined:
    Jul 6, 2015
That is what I believe: AMD wants a product on par with the Fury X in order to retire the Fury lineup, since it is costing them more than they'd like.
     
    Kyle_Bennett likes this.
  26. Palladium@SG

    Palladium@SG Limp Gawd

    Messages:
    283
    Joined:
    Feb 8, 2015
    As far as I'm concerned, the RX480 reviews made me buy a $400 1070 non-FE.

Even if the perf/$ is good enough to excuse the ~160W TDP and the crappy cooler, when people have to choose between that combination and avoiding the PCIe power overdraw FUD, they will choose the latter, aka "not fucking frying my motherboard".

Nvidia is already at ~80% market share, and yet these AMD jokers still never learn from their PR disasters.
     
    WorldExclusive likes this.
  27. 17seconds

    17seconds n00bie

    Messages:
    27
    Joined:
    Oct 16, 2012
    It's tempting to go back over the 62 pages of this thread to see how many times Kyle was insulted and demeaned for telling the truth, here and elsewhere across the web. Anyone buying the reference RX480 in its first days on the market should have heeded this report. Kyle you're officially vindicated.
     
    Kyle_Bennett, staknhalo and jwcalla like this.
  28. NKD

    NKD [H]ardness Supreme

    Messages:
    5,825
    Joined:
    Aug 26, 2007
Well, Kyle was right. It's just that a lot of people thought he was mad, but in reality that's just his style. Here is what happened.

AIB cards are coming out, and Kyle himself reported that they can do between 1.48 and 1.6GHz; the exact range really depends on the chip.

    That proves his point.

AMD wanted close to Fury performance, but the card was drawing too much power at those clocks, and they were only designing one chip with 2304 shaders while going for max clocks and efficiency.

So they turned that bitch down to 1266 and priced it cheap, and made the mistake of putting a 6-pin connector on there. I don't know why they didn't put an 8-pin connector on it and call it a 160W TDP. Blows my mind.

Kyle's report about AIBs hitting 1.48-1.6GHz just shows that this chip is capable of more, but with much higher power draw.

Which actually makes me glad they didn't try to throw Vega onto the brand-new 14nm process and make a mess of it. I think they will probably have a few more revisions of this chip by the time Vega comes out. We might even see a 485 that is a different revision and improves on clock speed and efficiency.

It seems that AMD is seeing all kinds of variation in chips. We are getting 1266 now, and probably 1.5-1.6 max, but in the current state at GF it requires more power to run at those speeds than they were comfortable with.

I truly think it should use much less power at 14nm; it is just taking them longer to get there. I hope another 6 months will do the trick and they can refine the process well enough in time for Vega.
     
    Armenius and Trimlock like this.
  29. Trimlock

    Trimlock [H]ardForum Junkie

    Messages:
    14,351
    Joined:
    Sep 23, 2005
AMD got lucky with the HBM2 delays; otherwise they most likely would have tried to release it on the new process. But we don't even know for sure yet whether the new process is to blame or the aging GCN arch.
     
  30. Chimpee

    Chimpee Gawd

    Messages:
    642
    Joined:
    Jul 6, 2015
Indeed, but time is not AMD's friend at the moment; they really cannot afford to sit around trying to iron this out while Nvidia is preparing their next release. Forget the whole PCIe spec thing; I agree with you and really hope AMD just gets the power consumption issue under control. I was just surprised how much power it is using on a 14nm process.
     
    FighterOH likes this.
  31. NKD

    NKD [H]ardness Supreme

    Messages:
    5,825
    Joined:
    Aug 26, 2007
I am pretty sure it is. I think they had about 3 revisions of the chip before they released it, with reports of the first one not hitting 850. Looks like they improved it a lot. Almost all the YouTube videos I have seen report that the problem is not voltage; if you up the power limit the card clocks higher just fine, but the cooler can't handle it, and the second problem is power draw from PCIe. I think it's just power hungry at this time. That, to me, sounds like the new node giving them a pain in the ass.

Remember the 290 and 290X? Those were horribly power hungry when they first came out and ran hot. Seems like the same to me, but this time the chip can run at high clocks; it just wants a whole lot of juice to do it.
     
    Trimlock likes this.
  32. Chimpee

    Chimpee Gawd

    Messages:
    642
    Joined:
    Jul 6, 2015
I am thinking new process. Correct me if I am wrong, but I believe 480 chips are produced by GlobalFoundries and Samsung, right? If so, would using standard libraries be detrimental on a new process?
     
    Trimlock likes this.
  33. NKD

    NKD [H]ardness Supreme

    Messages:
    5,825
    Joined:
    Aug 26, 2007
Yea, me too! Doesn't look like clocks are the issue, going by AIB reports; 1.48 to 1.6 is possible, and the range depends on the chip. AMD just decided to sell it cheap and let AIBs do their thing, and I am sure they will keep tweaking it.
     
  34. NKD

    NKD [H]ardness Supreme

    Messages:
    5,825
    Joined:
    Aug 26, 2007
I am thinking Vega might be using custom libraries; it seems they are using the newer graphics IP v9.0 on it, and some people have reported that might be the reason. So it seems like Vega might be tweaked for the new process. All speculation, but one can hope they are doing their best to get the most out of 14nm. They should know what they need by now, since they have had a year to play around with it on Polaris.
     
  35. Trimlock

    Trimlock [H]ardForum Junkie

    Messages:
    14,351
    Joined:
    Sep 23, 2005
    I thought it was purely GF?

    If it was the new process then they dodged at least that bullet.
     
  36. Chimpee

    Chimpee Gawd

    Messages:
    642
    Joined:
    Jul 6, 2015
I think you are right that it is GF; going by my googling, the Samsung part was just a rumor. I really wish AMD could just ditch GF; it feels like they have been nothing but trouble for AMD.
     
    FighterOH and Trimlock like this.
  37. The Unworthy

    The Unworthy n00bie

    Messages:
    22
    Joined:
    Sep 11, 2015
Hilarious how AMD is always so quick on social media when NV "supposedly" has issues, but is so quiet now. :D

I see mahigan is back to defend them in the meantime, though.
     
  38. NKD

    NKD [H]ardness Supreme

    Messages:
    5,825
    Joined:
    Aug 26, 2007
You know, AMD actually responded to the thread on reddit the same day. It's kinda damned if you do, damned if you don't; you can't have it both ways. They said they are testing it and working with reviewers. You want them to come out and give a half-assed answer? Would that satisfy you? I don't think this has anything to do with fanboy crap; it's just common sense.
     
    FighterOH, N4CR and Daniel_Chang like this.
  39. Trimlock

    Trimlock [H]ardForum Junkie

    Messages:
    14,351
    Joined:
    Sep 23, 2005
I think what would satisfy most people would be a response on their official page rather than on that useless subreddit.
     
    Armenius and Stev3FrencH like this.
  40. pcgeekesq

    pcgeekesq Limp Gawd

    Messages:
    451
    Joined:
    Apr 23, 2012
AMD officially responded and announced: "yes, it's broken, the fast memory is to blame, we will patch it via a firmware update."

Prepare to have your cards neutered, RX 480 owners.