From ATI to AMD back to ATI? A Journey in Futility @ [H]

Discussion in 'AMD Flavor' started by FrgMstr, May 27, 2016.

  1. razor1

    razor1 [H]ardForum Junkie

    Messages:
    10,155
    Joined:
    Jul 14, 2005
    Yeah, current Intel IGPs are all held back by their memory bandwidth; I don't see how AMD will overcome that unless they use something like HBM or something other than DDR4.
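    Back-of-the-envelope, the gap looks something like this (illustrative bus widths and transfer rates, not any particular SKU):

    Code:
    # Rough peak-bandwidth arithmetic; figures are illustrative, not a spec sheet.
    def peak_gbps(bus_bits, transfers_per_sec):
        # peak GB/s = bus width in bytes * transfer rate
        return (bus_bits / 8) * transfers_per_sec / 1e9

    # Dual-channel DDR4-2400: two 64-bit channels at 2400 MT/s.
    ddr4 = peak_gbps(2 * 64, 2400e6)   # ~38.4 GB/s, shared with the CPU cores

    # A single HBM2 stack: 1024-bit interface at up to 2.0 GT/s.
    hbm2 = peak_gbps(1024, 2.0e9)      # ~256 GB/s

    print(f"DDR4 dual channel: {ddr4:.1f} GB/s")
    print(f"HBM2 single stack: {hbm2:.1f} GB/s ({hbm2 / ddr4:.1f}x)")

    And the IGP has to share that DDR4 number with the CPU cores, which is the whole problem.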
     
  2. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,973
    Joined:
    Apr 22, 2006
    The reason Intel GPUs are failing in old games on GOG is not performance, but drivers.

    Also, AFAIK even AMD's current APUs are faster than Intel's. Intel has to use an expensive eDRAM cache to catch AMD.

    It will be interesting to see how it turns out, but it seems unlikely AMD would boost shader units by 50% only to deliver the same performance they have today.
     
    JustReason likes this.
  3. lolfail9001

    lolfail9001 [H]ard|Gawd

    Messages:
    1,509
    Joined:
    May 31, 2016
    The older the game, the lower the advantage, last time I checked. In GTA V, AMD is faster than even the eDRAM models, but in old stuff they lose that edge.

    Huh. How old are the games we're talking about, just in case?
     
  4. Ninjaman67

    Ninjaman67 n00b

    Messages:
    17
    Joined:
    Oct 3, 2012
    No idea, but I can tell you from work that Intel's graphics drivers still have a tough time with hardware acceleration on video and embedded videos in PowerPoint. I wouldn't even try games on those chips.
     
  5. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,973
    Joined:
    Apr 22, 2006
    10+ years old. I think there are more problems because back then games were never even tested on Intel IGPs, so there was zero work done assuring that they worked, and Intel probably doesn't run regression tests with games that old either...
     
  6. FrgMstr

    FrgMstr Just Plain Mean Staff Member

    Messages:
    48,298
    Joined:
    May 18, 1997
    Article pending. Taking me a bit to get enough solid information together to defend my position and explain what went on. It is still not all cut and dried.
     
    N4CR, Schmave, AlphaQup and 7 others like this.
  7. Bahanime

    Bahanime Limp Gawd

    Messages:
    392
    Joined:
    Sep 27, 2011
    This one is pretty simple. The Vega APU should be much more bandwidth-efficient thanks to three changes:

    1. Better primitive discard/culling steps in their new geometry engine.
    2. A binning cache for the rasterizer: less traffic between the APU and system RAM, more on-chip cache traffic (see the sketch below).
    3. ROPs becoming a client of the L2: for deferred-rendering games, less traffic to off-chip RAM.

    They don't need HBM for a +50% iGPU performance uplift. HBM would be required if they ever go big APU, like PS4 Pro or Scorpio class.

    One has to wonder whether there's a market for such a strong APU w/ 2-4GB of HBM2, though.
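    For point 2, a minimal sketch of what binning means (not AMD's actual rasterizer, just the idea: sort triangles into screen tiles first, so each tile's pixels can stay in on-chip cache while they're shaded):

    Code:
    # Toy binning pass: assign each triangle's screen-space bounding box to
    # the 32x32-pixel tiles it touches; a binning rasterizer then shades one
    # tile at a time so that tile's framebuffer stays in on-chip cache.
    from collections import defaultdict

    TILE = 32  # tile edge in pixels (made-up size for illustration)

    def bin_triangles(tris, width, height):
        # tris: list of three (x, y) pixel-coordinate vertices per triangle
        bins = defaultdict(list)
        for tri in tris:
            xs = [v[0] for v in tri]
            ys = [v[1] for v in tri]
            # clamp the bounding box to the screen, then find touched tiles
            tx0, tx1 = max(0, min(xs)) // TILE, min(width - 1, max(xs)) // TILE
            ty0, ty1 = max(0, min(ys)) // TILE, min(height - 1, max(ys)) // TILE
            for ty in range(ty0, ty1 + 1):
                for tx in range(tx0, tx1 + 1):
                    bins[(tx, ty)].append(tri)
        return bins

    tris = [((5, 5), (60, 10), (20, 50)), ((100, 100), (120, 110), (105, 130))]
    for tile, batch in sorted(bin_triangles(tris, 1920, 1080).items()):
        print(tile, f"{len(batch)} triangle(s) rasterized with the tile on-chip")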
     
  8. razor1

    razor1 [H]ardForum Junkie

    Messages:
    10,155
    Joined:
    Jul 14, 2005

    Can't assume that at all. It will be better, but we have no idea how much better; nV gave their estimates on Pascal, and most of its bandwidth savings came from compression....

    Also, Intel has had primitive discard for just about as long as nV.

    So we're kinda left with the last two. If AMD's binned rasterizer is as efficient as nV's and can be used at all times, that will get around 20%, and nV is on its second generation, so... let's say 10% for AMD sounds fair.

    The ROP L2 cache is coherent in Intel's IGP if I remember correctly, and that is what it looks like in the diagram, so yeah.

    Sorry, I was looking at the diagram wrong: the Intel IGP has eDRAM for that, so they have a pretty BIG advantage there.
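    To put those percentages in context: saving a fraction of traffic makes a fixed bus behave like a bigger one (baseline figure below is an illustrative dual-channel DDR4 budget):

    Code:
    # Traffic saved on a fixed memory budget is equivalent to extra bandwidth:
    # effective = raw / (1 - fraction_saved). Baseline figure is illustrative.
    baseline_gbps = 38.4  # dual-channel DDR4-2400

    for label, saved in [("~20% (nV-class binning)", 0.20),
                         ("~10% (conservative AMD guess)", 0.10)]:
        print(f"{label}: bus behaves like {baseline_gbps / (1 - saved):.1f} GB/s")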
     
    Last edited: May 26, 2017
    trandoanhung1991 and noko like this.
  9. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,973
    Joined:
    Apr 22, 2006
    Too niche for AMD.
     
  10. Bahanime

    Bahanime Limp Gawd

    Messages:
    392
    Joined:
    Sep 27, 2011
    That would be nice, if Intel had eDRAM on their regular APUs. But Intel figured such a design was too expensive to keep doing. Their eDRAM-equipped Crystal Well APU was ridiculous: an entire second die devoted to eDRAM.

    2GB of HBM2 should serve any APU well with Vega's HBCC. It keeps costs low, while its effectiveness as a cache keeps performance high despite low system RAM bandwidth.
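    Roughly how that cache math works (the hit rates and bandwidth figures below are assumptions, just to show the shape of it):

    Code:
    # Effective bandwidth when a fast HBM2 cache fronts slow system RAM:
    # a fraction `hit` of bytes moves at HBM2 speed, the rest at DDR4 speed,
    # so time-per-byte averages and the result is a weighted harmonic mean.
    hbm2_gbps, ddr4_gbps = 256.0, 38.4  # illustrative peaks

    for hit in (0.50, 0.80, 0.95):
        effective = 1.0 / (hit / hbm2_gbps + (1 - hit) / ddr4_gbps)
        print(f"HBCC hit rate {hit:.0%}: ~{effective:.0f} GB/s effective")

    The better the HBCC keeps the working set resident, the closer the APU gets to full HBM2 speed.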
     
  11. Bahanime

    Bahanime Limp Gawd

    Messages:
    392
    Joined:
    Sep 27, 2011
    Probably; such an APU only makes sense for boutique and mITX builds. Still, they could create an entirely new niche with overpowered APUs, with HBM2 freeing them from system RAM bandwidth limitations.

    Actually, now that I think about it, it makes sense: notebooks, ultrabooks especially, and NUCs. Even Intel puts their Crystal Well APUs into these higher-margin markets.
     
  12. razor1

    razor1 [H]ardForum Junkie

    Messages:
    10,155
    Joined:
    Jul 14, 2005

    Hmm, eDRAM is in some of the U-model Kaby Lake IGPs (mobile), but the desktop ones are connected directly to the L3 cache, so that's still more benefit than what AMD has in their current architecture. Vega should catch up to that with direct L2 access for the ROPs.

    It's not where the ROPs connect that matters; what's important is that they have direct access to the cache. Right now, the way AMD's uarchs are set up, there can be a lot of cache thrashing if the code isn't written with that in mind. This is why you can't estimate the bandwidth savings for such a change: it's highly application- and programmer-specific.
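    The "written with that in mind" part is the same effect you can measure on any CPU. A generic demo (nothing AMD-specific): the same number of reads costs wildly different amounts depending on access pattern:

    Code:
    # Cache-thrashing demo: both sums read the same number of elements, but
    # the strided walk touches a fresh cache line per element, so it streams
    # roughly 64x the memory of the contiguous walk.
    import time
    import numpy as np

    x = np.random.rand(64 * 1024 * 1024)  # 512 MB, far larger than any cache
    n = x.size // 64

    t0 = time.perf_counter()
    dense = x[:n].sum()     # contiguous: one cache line serves 8 doubles
    t1 = time.perf_counter()
    sparse = x[::64].sum()  # strided: every read is a cache miss
    t2 = time.perf_counter()

    print(f"contiguous: {t1 - t0:.4f}s, strided: {t2 - t1:.4f}s")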
     
  13. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,973
    Joined:
    Apr 22, 2006
    Yes, but not even Intel is making a special Kaby Lake with a bigger IGP for that; they are just adding some eDRAM as a separate die in an MCM package.

    If AMD can pull that off for a performance boost, sure.

    I am just saying they won't tape out a separate Raven Ridge with a big GPU section. AMD is all about cost containment.
     
  14. Simplyfun

    Simplyfun Gawd

    Messages:
    1,012
    Joined:
    Dec 17, 2016
    I believe that eventually you'll buy an AMD SoC that is literally all on one die, even the DRAM. In that context it's no wonder they started cultivating HBM. Sure, it's not ready for this now, but it will be.
     
  15. trandoanhung1991

    trandoanhung1991 [H]ard|Gawd

    Messages:
    1,096
    Joined:
    Aug 26, 2011
    It'll never be as good as a CPU + GPU combo. What's the point?

    An integrated GPU + HBM will probably not be much cheaper than an MXM GPU + GDDR. The CPU is a wash.

    You get more flexibility and existing economies of scale with MXM designs.

    Also, with external graphics becoming more affordable, why do you even want an APU?
     
  16. geok1ng

    geok1ng 2[H]4U

    Messages:
    2,135
    Joined:
    Oct 28, 2007
    A good, strong integrated GPU will always beat a half-assed MXM design. Truth is, these so-called hybrid builds are full of compatibility issues, fail to wake up from sleep, and deliver very poor battery life. Intel has gone the extra mile delivering very good iGPU upgrades over the last few generations. It is par for the course that AMD's APUs bring even better graphics performance per watt to the mobile market.

    External GPUs are another beast altogether: they are amazing, but they require the user to stay plugged in, because gaming on battery is not realistic with those.
     
  17. FrgMstr

    FrgMstr Just Plain Mean Staff Member

    Messages:
    48,298
    Joined:
    May 18, 1997
    And I did when I made the original statement. I have not been keeping up with all the changes going on, but there has been a lot to cover.
     
  18. noko

    noko [H]ardness Supreme

    Messages:
    4,279
    Joined:
    Apr 14, 2010
    Intel has the big $; AMD does not. For long-term survival, AMD may need to be very inventive to stay around.

    AMD does not have to sell outright; Intel and AMD could form a joint venture in which Intel's $ and AMD's intellectual assets are combined. I am not sure it's gaming graphics that Intel would be too worried about (chump change at best), but more so HPC, particularly deep learning with fast FP16/8 calculations, where Intel is weak. That is the new frontier that is growing exponentially. AMD would have to have something that can address that area, and they do. While Nvidia is scrambling to keep its edge there, AMD can be a very upsetting force in all of this.

    Whatever deal AMD strikes could leverage the potential China server-manufacturing deal, or head it off if Intel has a good enough proposition.
     
  19. razor1

    razor1 [H]ardForum Junkie

    Messages:
    10,155
    Joined:
    Jul 14, 2005
    I find it hard to believe AMD would hand its latest and greatest architecture over to Intel; Intel is already ahead of them in DL and HPC with Phi, too. I can see AMD giving away last-gen (GCN 1.3/1.4) to Intel, or for a very specific product.
     
  20. lolfail9001

    lolfail9001 [H]ard|Gawd

    Messages:
    1,509
    Joined:
    May 31, 2016
    Well, that's probably why it all fell through with Ryzen out.
     
  21. Simplyfun

    Simplyfun Gawd

    Messages:
    1,012
    Joined:
    Dec 17, 2016
    You know for a fact the licensing deal is a no-go?

    I see any potential licensing deal as a natural progression of the fact that they already cross-license each other's x86/64 CPU IP. A strong secondary supplier of x64/GPU is rather important for both Nvidia and Intel; otherwise they are operating monopolies, which makes doing business a little more difficult than it is today.
     
  22. lolfail9001

    lolfail9001 [H]ard|Gawd

    Messages:
    1,509
    Joined:
    May 31, 2016
    I know for a fact it was not done at the time of the denials.
     
  23. Simplyfun

    Simplyfun Gawd

    Messages:
    1,012
    Joined:
    Dec 17, 2016
    Yeah, that statement has no wiggle room. You are confirming 100% that there is no signed licensing deal, but not whether they are (for example, just an EXAMPLE) in the middle of pushing one through legal.
     
  24. DieAntw00rd

    DieAntw00rd n00b

    Messages:
    1
    Joined:
    May 31, 2017
    Hey Kyle, any updates or status? If not, perhaps an ETA on when you might have something more?
    Thanks in advance.
    Cheers
     
  25. N4CR

    N4CR 2[H]4U

    Messages:
    3,780
    Joined:
    Oct 17, 2011
    Just outed yourself. How are those shares going bro?
     
  26. lolfail9001

    lolfail9001 [H]ard|Gawd

    Messages:
    1,509
    Joined:
    May 31, 2016
    Waiting for AMD to drop a little after the Q2 report before going all-in.
    My statement has all the wiggle room it needs; both of them do, in fact. It seems too obvious to me that if Ryzen had been a flop, the deal would have been done a quarter ago.
     
    razor1 and N4CR like this.
  27. SighTurtle

    SighTurtle [H]ard|Gawd

    Messages:
    1,412
    Joined:
    Jul 29, 2016
    Once again, there is no sane reason why AMD needs to give Intel its graphics wholesale for Intel to sell their CPUs. None. Ryzen is perfectly capable of being used with Radeon tech and perfectly able to power laptops.

    If it were an exclusive deal for Apple, sure, I could see that happening, but not selling off the golden goose.
     
    razor1 likes this.
  28. Bahanime

    Bahanime Limp Gawd

    Messages:
    392
    Joined:
    Sep 27, 2011
    I think you are vindicated, Kyle.

    An engineering sample of this monstrosity has shown up on Sisoft and GFXbench. It's on the front page of VideoCardz.

    Intel CPU + Radeon gfx9 (Vega architecture?)... some weird CU and SP counts, 1720 SPs (might not be reading it right, or it's some rather strange configuration).

    I don't believe it's on the same die; more like Intel's MCM approach.

    Definitely for Apple, IMO. Only Apple would want a premium iGPU of such high performance.

    RTG could enable this without conflicting with AMD's business. Basically, RTG sells Intel GPU chips, and Intel packages them next to its CPU dies as a custom SoC. There would be zero need for licensing at all; Intel just becomes a customer for RTG chips.
     
    trandoanhung1991 and gigaxtreme1 like this.
  29. Araxie

    Araxie [H]ardness Supreme

    Messages:
    6,363
    Joined:
    Feb 11, 2013
    Awww boy.. 694C:C0

    intel CPU - AMD GPU.jpg
     
  30. Simplyfun

    Simplyfun Gawd

    Messages:
    1,012
    Joined:
    Dec 17, 2016
    1720 SPs? Da fuq did they build here, and what are they feeding it with....
     
  31. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,973
    Joined:
    Apr 22, 2006

    That doesn't make sense. That would be cramming a Polaris 10 into a CPU package, with no memory bandwidth.

    Apple has a much better choice: doing what they do today, putting AMD's GPUs on the motherboard along with GDDR memory to actually feed them.

    Also, the rest of the info is nonsensical. It says it is a 1-core Intel CPU.
     
  32. N4CR

    N4CR 2[H]4U

    Messages:
    3,780
    Joined:
    Oct 17, 2011
    It's a habbenin'!

    #KYLEWUZRITE

     
    ba294 and razor1 like this.
  33. Schmave

    Schmave [H]ard|Gawd

    Messages:
    1,719
    Joined:
    Jan 2, 2001
    Who knows, maybe it is a cut-down Vega chip with HBM2 on the same package as the CPU. Wouldn't that be interesting?
     
  34. Simplyfun

    Simplyfun Gawd

    Messages:
    1,012
    Joined:
    Dec 17, 2016
    It'd be more interesting if AMD did it.

    I've been thinking about this and I'm calling bullshit. It doesn't make economic sense, even for Apple.

    If I'm wrong I'll own up to that later.

    The only possibility that comes to mind is that this is an Intel design built on AMD IP, with its own dedicated memory in some form, even if that's two channels of a quad-channel controller reserved for the GPU. Something like that, because feeding that many SPs requires more than what's in an Intel CPU at the moment.
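    Rough numbers on the feeding problem (the clock speed here is an assumption, just to size it):

    Code:
    # Why 1720 SPs can't live off a CPU socket's memory bus.
    # The 1.0 GHz clock is an assumption for illustration.
    sps, clock_ghz = 1720, 1.0
    gflops = sps * 2 * clock_ghz  # 2 ops/SP/clock (FMA) -> 3440 GFLOPs

    for label, gbps in [("dual-channel DDR4-2400", 38.4),
                        ("one HBM2 stack", 256.0)]:
        print(f"{label}: {gbps / gflops:.3f} bytes per FLOP")

    Discrete cards in this class ship with a few hundredths of a byte per FLOP; a dual-channel DDR4 socket delivers about a quarter of that, so some dedicated memory has to be hiding in there.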
     
  35. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,973
    Joined:
    Apr 22, 2006
    Sure, with a 1-core Intel CPU, 528 KB of cache, and 10.4 GB of RAM. ;)

    Everything in the post is absurd, but hey, it says Intel and AMD in the same sentence, so: Unicorns are Real!!!
     
  36. N4CR

    N4CR 2[H]4U

    Messages:
    3,780
    Joined:
    Oct 17, 2011
    It's weird as hell, but we have no idea what design or drivers were used.
    Could be artificially restricted, like pre-release GPU drivers.

    Some of that RAM could be set aside as slower VRAM if the onboard memory is used up? Who knows... We need more details, but it's an odd thing to pop up amid all the talk going on about this.
     
  37. Anarchist4000

    Anarchist4000 [H]ard|Gawd

    Messages:
    1,659
    Joined:
    Jun 10, 2001
    A TV box or some sort of console might make sense with a shared memory pool. The current push with Metal 2 is GPU-driven graphics, so a fast CPU might not be required. An Intel CPU because of Apple.
     
  38. Bahanime

    Bahanime Limp Gawd

    Messages:
    392
    Joined:
    Sep 27, 2011
    Doesn't seem to be an iGPU on the CPU die.

    It's an MCM. The GPU seems to be 1536 SPs w/ 4GB (HBM @ 800MHz) plus Intel's HD 630, so the SP count is read as 1720.

    For clarification, the L2 on Intel's HD 630 is 512KB shared, while the GCN side reports 16KB of L2, which adds up to 528KB.

    A 1536 SP GCN part would be 24 CUs. The HD 630 reads as 23 CUs. Added up, that's 47 CUs.

    Sisoft apparently adds up iGPU + dGPU if you run the GPU benchmark on a dual-GPU system.
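    The arithmetic checks out, for what it's worth (HD 630 figures as the benchmark reads them in this leak, not official specs):

    Code:
    # Sanity-checking the leaked Sisoft numbers as a sum of iGPU + dGPU.
    gcn_sps = 1536
    gcn_cus = gcn_sps // 64        # 64 SPs per GCN CU -> 24 CUs
    hd630_eus = 23                 # as reported in the leak
    hd630_sps = hd630_eus * 8      # 8 lanes per EU -> 184 "SPs"

    assert gcn_sps + hd630_sps == 1720   # the weird combined SP count
    assert gcn_cus + hd630_eus == 47     # the weird combined CU count
    assert 512 + 16 == 528               # KB: HD 630 shared L2 + reported GCN L2
    print("All three oddball numbers fall out of summing the two GPUs.")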