HBM3: Cheaper, Up To 64GB On-Package, And Terabytes-Per-Second Bandwidth

Discussion in 'HardForum Tech News' started by Megalith, Aug 28, 2016.

  1. Megalith

    Megalith 24-bit/48kHz Staff Member

    Messages:
    13,004
    Joined:
    Aug 20, 2006
    Here is some talk about the next iteration of High Bandwidth Memory, which Samsung and Hynix are working on. HBM is basically a stacked RAM configuration, meaning space savings and bandwidth improvements. The third generation is expected not only to increase the density of the memory dies but also to allow more of them to be stacked on a single chip.

    HBM1, as used in AMD's Fury graphics cards, was limited to 4GB stacks. HBM2, as used in Nvidia's workstation-only P100 graphics card, features higher-density stacks of up to 16GB, but is prohibitively expensive for consumer cards. HBM3 will double the density of the individual memory dies from 8Gb to 16Gb (~2GB), and will allow for more than eight dies to be stacked together in a single chip. Graphics cards with up to 64GB of memory are possible. HBM3 will feature a lower core voltage than the 1.2V of HBM2, as well as more than twice the peak bandwidth: HBM2 offers 256GB/s of bandwidth per layer of DRAM, while HBM3 doubles that to 512GB/s. The total amount of memory bandwidth available could well be terabytes per second.
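
    For a rough sense of how those figures combine, here is a back-of-the-envelope sketch in Python. The eight-die stack height and four-stack package layout are assumptions, not figures from the article (which only says "more than eight" dies and up to 64GB), and the 512GB/s number is treated as a per-stack figure:

    ```python
    # Back-of-the-envelope HBM3 capacity and bandwidth from the figures above.
    # The four-stack package and eight-die stack height are assumptions;
    # the article only states ">8 dies per stack" and "up to 64GB".

    GBIT_PER_DIE = 16        # HBM3 die density: 16Gb (~2GB) per die
    DIES_PER_STACK = 8       # article says "more than eight"; eight used as the floor
    STACKS_PER_PACKAGE = 4   # assumed number of stacks on the package
    BW_PER_STACK_GBS = 512   # HBM3 peak bandwidth per stack (HBM2: 256GB/s)

    gb_per_stack = GBIT_PER_DIE / 8 * DIES_PER_STACK                      # Gb -> GB
    package_capacity_gb = gb_per_stack * STACKS_PER_PACKAGE               # total GB
    package_bandwidth_tbs = BW_PER_STACK_GBS * STACKS_PER_PACKAGE / 1000  # total TB/s

    print(f"per stack: {gb_per_stack:.0f} GB")
    print(f"package:   {package_capacity_gb:.0f} GB, {package_bandwidth_tbs:.1f} TB/s peak")
    ```

    Eight 16Gb dies give 16GB per stack, and four such stacks land on the 64GB and roughly 2TB/s ceilings mentioned above; taller stacks would push both numbers higher.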
     
  2. thesmokingman

    thesmokingman [H]ardness Supreme

    Messages:
    4,772
    Joined:
    Nov 22, 2008
    Insano!
     
  3. Spidey329

    Spidey329 [H]ardForum Junkie

    Messages:
    8,677
    Joined:
    Dec 15, 2003
    I look forward to games fully utilizing this in the year 2045 when the consoles finally refresh.

    :)
     
    pxc and JosiahBradley like this.
  4. chili dog

    chili dog Limp Gawd

    Messages:
    208
    Joined:
    Oct 23, 2014
    So if HBM2 is prohibitively expensive for consumer cards, does that mean Volta and Vega will both still be on GDDR5X at best?
     
  5. pxc

    pxc [H]ard as it Gets

    Messages:
    33,064
    Joined:
    Oct 22, 2000
    The summary/article is worded incorrectly. "per layer" should be "per stack" (i.e. a device).

    I wonder what the sweet spot will be for on-package memory, since it's almost certain it won't match full system memory size due to cost. In the Intel forum I noted that the way Crystal Well was changed in the Skylake-R models is somewhat similar to the goals of HBM.
     
    Last edited: Aug 28, 2016
  6. DedEmbryonicCe11

    DedEmbryonicCe11 [H]ard|Gawd

    Messages:
    1,553
    Joined:
    Jun 6, 2006
    I am under the impression HBM2 is only so expensive because NVidia put out a product before yields and production had ramped up. I would be very surprised if there weren't $400-500 cards with HBM2 by this time next year.
     
  7. Shintai

    Shintai [H]ardness Supreme

    Messages:
    5,691
    Joined:
    Jul 1, 2016
    From Nvidia, only GV100 (Tesla) will use HBM. For AMD, an HPC card would use HBM.

    But it doesn't matter to a gamer if there is no benefit. GM200 and GP104, for example, didn't need HBM to beat an HBM card.
     
  8. Cataulin

    Cataulin [H]ard|Gawd

    Messages:
    1,085
    Joined:
    Nov 22, 2010
    Here's hoping we see GDDR6 sooner rather than later.
     
  9. Shintai

    Shintai [H]ardness Supreme

    Messages:
    5,691
    Joined:
    Jul 1, 2016
    GDDR6 is essentially just another name for GDDR5X.
     
  10. Booyaah

    Booyaah [H]Lite

    Messages:
    109
    Joined:
    Jul 6, 2014
    Well I don't want to wait another 4 years for this stuff so I'll be fine with 16 GB HBM2 VRAM :)

     
  11. Arcygenical

    Arcygenical Will Watercool for Crack

    Messages:
    24,651
    Joined:
    Jun 10, 2005
    I was under the impression that the power requirements of these dies were increasing exponentially with capacity and bandwidth increases? I saw a slide somewhere.