PCI-E 4.0 SSD

Discussion in 'SSDs & Data Storage' started by Thatguybil, May 28, 2019.

  1. Thatguybil

    Thatguybil [H]Lite

    Messages:
    98
    Joined:
    Jan 21, 2017
  2. thecold

    thecold Limp Gawd

    Messages:
    259
    Joined:
    Nov 12, 2017
    Honestly, I only care about QD 1-4. Sequential throughput at those read and write speeds gains me nothing at this point.
     
    vegeta535 likes this.
  3. EniGmA1987

    EniGmA1987 Limp Gawd

    Messages:
    198
    Joined:
    May 2, 2017
    The question is how many GB can be written at those speeds before the cache runs out?
     
    lostin3d likes this.
  4. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    Bench it against Optane, then we can talk :D
     
    PhaseNoise likes this.
  5. Thatguybil

    Thatguybil [H]Lite

    Messages:
    98
    Joined:
    Jan 21, 2017
    Agreed. It is nice to see that we will get PCI-E 4.0 devices sooner rather than later.
     
  6. Maxx

    Maxx [H]ard|Gawd

    Messages:
    1,412
    Joined:
    Mar 31, 2003
    It's basically an E12 with 96L TLC... hooked up to a PCIe 4.0 interface. Most E12 drives have ~30GB of SLC cache when empty; they are not heavily reliant on it.
     
  7. ochadd

    ochadd Gawd

    Messages:
    871
    Joined:
    May 9, 2008
    Low queue depth performance is where it's at. I'll be impressed once we're at 100 MB/s or more 4K QD1 read performance. Intel Optane is over 200 MB/s and the Samsung 970 Pro is around 70 MB/s. The faster sequential speeds are good, but sequential improvements can be had today with RAID arrays in a straightforward solution. I can't tell the difference between an M500, 850 Pro, and 960 EVO in my day-to-day usage.
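    Back-of-envelope, those QD1 numbers translate to latency like this (a rough Python sketch using the figures above, nothing measured; the helper is just illustrative):

        # At QD1, each 4 KiB read must complete before the next is issued,
        # so throughput is just the I/O size divided by average latency.
        KIB = 1024

        def qd1_stats(mb_per_s, io_kib=4):
            iops = mb_per_s * 1_000_000 / (io_kib * KIB)
            latency_us = 1_000_000 / iops
            return iops, latency_us

        for name, mbs in [("Optane", 200), ("970 Pro", 70), ("100 MB/s target", 100)]:
            iops, lat = qd1_stats(mbs)
            print(f"{name:>16}: {iops:6.0f} IOPS, {lat:5.1f} us per 4K read")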
     
  8. EniGmA1987

    EniGmA1987 Limp Gawd

    Messages:
    198
    Joined:
    May 2, 2017
    Gigabyte just showed off a PCI-E 4.0 x16 card with NVMe drives in it that gets 15GB/s sustained and around 175MB/s 4K QD1 speed.
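    For scale, a quick sanity check on that 15GB/s figure (Python; assumes standard PCIe 4.0 signaling with 128b/130b encoding and four x4 drives on the card, as noted below):

        # PCIe 4.0 runs 16 GT/s per lane with 128b/130b encoding.
        lane_gbs = 16e9 * (128 / 130) / 8 / 1e9   # ~1.97 GB/s usable per lane

        slot_x16 = 16 * lane_gbs                  # ~31.5 GB/s for the full slot
        drives_4x4 = 4 * 4 * lane_gbs             # ~31.5 GB/s if all four x4 drives peaked

        print(f"PCIe 4.0 x16 slot : {slot_x16:.1f} GB/s")
        print(f"4 drives at x4    : {drives_4x4:.1f} GB/s theoretical")
        # 15 GB/s sustained = ~3.75 GB/s per drive, comfortably inside
        # each drive's x4 link and less than half the slot's ceiling.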
     
    {NG}Fidel likes this.
  9. Maxx

    Maxx [H]ard|Gawd

    Messages:
    1,412
    Joined:
    Mar 31, 2003
    Yep, it runs four of their Aorus NVMe Gen4 drives.
     
    Last edited: Jun 22, 2019
    {NG}Fidel likes this.
  10. Alienslare

    Alienslare Limp Gawd

    Messages:
    141
    Joined:
    Jan 23, 2016
    Yes, PCIe Gen 4 NVMe speeds are impressive, but they do generate a lot of heat.
     
    {NG}Fidel likes this.
  11. EniGmA1987

    EniGmA1987 Limp Gawd

    Messages:
    198
    Joined:
    May 2, 2017
    I wonder if it is simply the NVMe protocol design that is inefficient, or if we are just not good at making the controllers efficient right now.
    They make networking chipsets that can push 600 gigabytes per second in barely ~25W, and NICs with 50GB/s capability draw barely as much power as these newer NVMe drives.
    Seems weird that a chipset only capable of moving 15GB/s is so power hungry.
     
  12. Maxx

    Maxx [H]ard|Gawd

    Messages:
    1,412
    Joined:
    Mar 31, 2003
    The E16 is rated at 8W TDP, for what it's worth. But it's basically just an E12 with a 4.0 PHY (interface). The E12 is a quad-core design and, like most consumer SSD controllers, is based on ARM Cortex-R. These do not run at high clocks (we're talking like 500 MHz) but are specialized for the low-latency, high-count operations you find with SSDs in general. What really gets them hottest is sustained writes, as you're handling eight channels of relatively fast NAND at once in a relatively small form factor. Most such controllers start to throttle in the 70-80C range but of course can handle much more; it's more of a limit to maintain efficiency.
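    To picture the throttle behavior, a minimal sketch (Python; the trip points and speeds are made-up placeholders somewhere in that 70-80C range, and real firmware is far more granular):

        THROTTLE_C, RESUME_C = 78, 70   # hypothetical trip/resume temperatures

        def write_speed(temp_c, throttled, full_mbs=4400):
            """Hysteresis: cut speed above THROTTLE_C, recover below RESUME_C."""
            if temp_c >= THROTTLE_C:
                throttled = True
            elif temp_c <= RESUME_C:
                throttled = False
            return (full_mbs // 2 if throttled else full_mbs), throttled

        throttled = False
        for temp in [55, 65, 79, 82, 74, 69, 60]:    # simulated controller temps (C)
            speed, throttled = write_speed(temp, throttled)
            print(f"{temp:3d}C -> {speed} MB/s")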
     
  13. Alienslare

    Alienslare Limp Gawd

    Messages:
    141
    Joined:
    Jan 23, 2016
    That's not really the same case: an SSD includes the read/write process. When that's done on TLC, and at such speeds, the ICs do get hot, especially taking PCIe 4.0 into consideration.

    Transferring data is one thing; writing data to the IC at nearly 5,000 MB/s is another.
     
  14. Abula

    Abula Gawd

    Messages:
    942
    Joined:
    Oct 29, 2004
    Why is it that we haven't seen PCIe Gen4 from Samsung, Intel, or Crucial? Is it because board manufacturers got to play with PCIe 4.0 first?
     
  15. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    Not much point in rushing products with so very little benefit?
     
  16. Maxx

    Maxx [H]ard|Gawd

    Messages:
    1,412
    Joined:
    Mar 31, 2003
    Crucial does have one on the way. Phison had the advantage of AMD investment for the E16 since AMD wanted PCIe 4.0 NVMe to market with its X570 launch. SMI's controller (usually they partner with Intel) is looking at 2020. Samsung I cannot speak for as of yet.
     
  17. daglesj

    daglesj [H]ardness Supreme

    Messages:
    5,053
    Joined:
    May 7, 2005
    When I can copy an entire UserData folder at a minimum of 50 MB/s, then call me.
     
    westrock2000 likes this.
  18. Alienslare

    Alienslare Limp Gawd

    Messages:
    141
    Joined:
    Jan 23, 2016
    Waiting for Samsung’s answer to this drive.
     
  19. MrGuvernment

    MrGuvernment [H]ard as it Gets

    Messages:
    19,163
    Joined:
    Aug 3, 2004
    Optane is not that great, at least from all I have read; it helps a little, but versus just using an SSD in your rig... almost pointless.
     
  20. daglesj

    daglesj [H]ardness Supreme

    Messages:
    5,053
    Joined:
    May 7, 2005

    Which Optane are you referring to? The cache drives are pretty useless but the full SSD drives based on the tech are pretty darn nippy.
     
    IdiotInCharge likes this.
  21. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    I am somewhat looking forward to their 'hybrid' drives that pair Optane with QLC NAND, but overall, a pure Optane drive is as good as it gets. I wouldn't touch their 'cache' drives.
     
  22. Maxx

    Maxx [H]ard|Gawd

    Messages:
    1,412
    Joined:
    Mar 31, 2003
    Ah, you mean the H10. Basically a 660p with 32GB of XPoint (at the worthwhile capacities). Limited compatibility. Performance, eh, it's okay for an ultrabook or something. Better than a 660p but not really TLC-level; you still have that inconsistency.
     
    Red Falcon and IdiotInCharge like this.
  23. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    It's the inconsistency that I mind too. However, for a system with an OEM OS load and typical consumer applications, I bet it will fly.
     
  24. Maxx

    Maxx [H]ard|Gawd

    Messages:
    1,412
    Joined:
    Mar 31, 2003
    AnandTech has a good review of it, but they tend to test heavy; Storage Review is a bit more sanguine. It's hard to argue with the combination, because the 660p already relies on workloads falling within the (diminishing) SLC cache, which gets down to 12-24GB when fuller, versus the 32GB of XPoint we have on the 1TB SKU here. Intel pushes low-latency, low queue depth random (4K especially); they nudged SMI in this direction with the SM2262/63 in particular, and for the average user I think that's great. QLC has the density to stay single-sided and be power-efficient. So this drive is ideal for normal workloads in an ultrabook or something, where it will consistently fly. Options are good.

    I think I agree with Storage Review that ultimately it comes down to price. It's going to be competing with TLC-based products, so it's important you have the machine and workload to benefit at that price point, which I still think means mobile. But of course the H10 is an OEM product, so I guess that's a moot point.
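    To make the cache point concrete, a toy breakdown of where a big write burst lands on an H10-style layout (Python; the cache sizes are the figures above, but the strict tier ordering is my simplification, since the real caching is managed dynamically by RST):

        def burst_breakdown(burst_gb, drive_half_full):
            xpoint_gb = 32                              # the 1TB SKU's XPoint tier
            slc_gb = 12 if drive_half_full else 24      # diminishing SLC cache
            remaining = burst_gb
            for tier, cap in [("XPoint", xpoint_gb), ("SLC", slc_gb), ("QLC", float("inf"))]:
                took = min(remaining, cap)
                if took > 0:
                    print(f"  {tier}: {took:g} GB")
                remaining -= took

        print("100 GB burst, drive half full:")
        burst_breakdown(100, drive_half_full=True)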
     
    IdiotInCharge likes this.
  25. {NG}Fidel

    {NG}Fidel [H]ardness Supreme

    Messages:
    6,148
    Joined:
    Jan 17, 2005
    I think the heat will be manageable on good motherboards and that it's entirely worth the bandwidth upgrade. Can't wait to see PCIe 6.0 SSDs.
     
  26. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    For... what?
     
    kirbyrj likes this.
  27. {NG}Fidel

    {NG}Fidel [H]ardness Supreme

    Messages:
    6,148
    Joined:
    Jan 17, 2005
    For everything. I simply enjoy faster storage.
     
  28. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    Fail answer. We're well past the point of diminishing returns.
     
  29. x509

    x509 [H]ard|Gawd

    Messages:
    1,709
    Joined:
    Sep 20, 2009
    I can see your point, but answer me this: why are the manufacturers defining new versions of PCIe with higher and higher bandwidths?
     
    IdiotInCharge likes this.
  30. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    Because these are being used (today) in the enterprise. I get that. On the desktop front, however, you're going to have to be really specific as to your use case to justify something faster, especially since these implementations today come with drawbacks including cost, heat, power, and board flexibility in terms of layout.

    You can race benchmarks all you want, and that's cool, we're on the [H]!, but with essentially zero benefits from increasing the speed from commonly available and inexpensive PCIe 3.0 x4 SSDs to PCIe 4.0 x4, it's really difficult to get excited about.
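    For reference, the raw link numbers (Python; theoretical bandwidth after line encoding, not what any drive actually delivers):

        # (transfer rate, data bits, raw bits): 3.0 and 4.0 use 128b/130b encoding.
        GENS = {"3.0": (8e9, 128, 130), "4.0": (16e9, 128, 130)}

        for gen, (gt, data_b, raw_b) in GENS.items():
            lane_gbs = gt * data_b / raw_b / 8 / 1e9
            print(f"PCIe {gen} x4: {4 * lane_gbs:.1f} GB/s")   # ~3.9 vs ~7.9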

    I'd like to be proven wrong.
     
    Hakaba and kirbyrj like this.
  31. {NG}Fidel

    {NG}Fidel [H]ardness Supreme

    Messages:
    6,148
    Joined:
    Jan 17, 2005
    We will see.
     
  32. x509

    x509 [H]ard|Gawd

    Messages:
    1,709
    Joined:
    Sep 20, 2009
    I can't prove you wrong. Whether you realize it or not, you just made an excellent argument against favoring one of the new AMD Ryzen motherboards to the exclusion of all the AMD (and Intel) motherboards that support only PCIe 3.0.

    Check out the heatsink on this bad boy: https://www.anandtech.com/show/14416/corsair-announces-mp600-nvme-ssd-with-pcie-40

    x509
     
    IdiotInCharge likes this.
  33. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    It's hard to argue against a feature-equivalent X470 board. By the time PCIe 4.0 matters, you're going to want a platform upgrade regardless.
     
  34. Maxx

    Maxx [H]ard|Gawd

    Messages:
    1,412
    Joined:
    Mar 31, 2003
    The issue with X470 (for me) isn't PCIe 3.0; it's that the chipset is 8x PCIe 2.0 downstream. That's extremely limiting if you want more than one NVMe drive. At least there are dedicated CPU lanes for M.2; Intel is stuck going through the chipset for ALL its M.2 storage. Having 4.0 upstream is just so nice, but 4.0 downstream is overkill. I think the B550 will be PCIe 3.0 downstream, which would make it a better choice than X570 outside of niche cases.
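    The numbers behind that (Python; theoretical link bandwidth after encoding, ignoring protocol overhead):

        downstream = 8 * (5e9 * 8 / 10 / 8 / 1e9)       # 8 lanes PCIe 2.0 (8b/10b): ~4.0 GB/s shared
        one_gen3_ssd = 4 * (8e9 * 128 / 130 / 8 / 1e9)  # one x4 Gen3 drive: ~3.9 GB/s

        print(f"X470 chipset downstream : {downstream:.1f} GB/s total")
        print(f"one PCIe 3.0 x4 drive   : {one_gen3_ssd:.1f} GB/s")
        # A single fast NVMe drive behind the chipset can saturate the
        # whole downstream link, hence the value of dedicated CPU M.2 lanes.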
     
  35. Maxx

    Maxx [H]ard|Gawd

    Messages:
    1,412
    Joined:
    Mar 31, 2003
  36. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    Direct link to video:

    Also, it seems that the autotranslate to English is working pretty well. And Mandarin is easy to listen to, so give it a shot.
     
  37. daglesj

    daglesj [H]ardness Supreme

    Messages:
    5,053
    Joined:
    May 7, 2005

    Mainly market churn for the public/domestic market.

    But as I said earlier, you can have all the bandwidth you like; the AppData folder will still drop to 50 kbps...

    We need improvements other than just bandwidth.
     
  38. EniGmA1987

    EniGmA1987 Limp Gawd

    Messages:
    198
    Joined:
    May 2, 2017

    For me, I wish we had PCIe 6.0 right now. We could move all NVMe drives to x1 lane configurations and then easily connect 8-12 drives on a motherboard, with each still having the same bandwidth as current PCIe 4.0 x4 drives. If they won't give us enough lanes for more drives, then we should get higher bandwidth so we won't need as many lanes to reach the necessary speed.
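    The math roughly works, too (Python; PCIe 6.0 per the announced 64 GT/s PAM4 signaling, with FLIT overhead ignored for this rough cut):

        gen4_lane = 16e9 * (128 / 130) / 8 / 1e9   # ~1.97 GB/s per PCIe 4.0 lane
        gen6_lane = 64e9 / 8 / 1e9                 # ~8 GB/s per PCIe 6.0 lane (raw)

        print(f"PCIe 4.0 x4 : {4 * gen4_lane:.1f} GB/s")
        print(f"PCIe 6.0 x1 : {gen6_lane:.1f} GB/s")
        # One Gen6 lane per drive ~= a whole Gen4 x4 drive today, so 8-12
        # M.2 sockets would fit in a modest lane budget.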
     
  39. x509

    x509 [H]ard|Gawd

    Messages:
    1,709
    Joined:
    Sep 20, 2009
    Maybe, but I tend to "overbuy" a bit and keep my systems for 5-6 years. Also, I do a lot of photography (Lightroom, Photoshop), and the Intel vs. AMD reviews tend to favor Intel. There is a small company in the Northwest that builds custom Photoshop and Lightroom systems, and they use only Intel CPUs (and nVidia GPUs).

    x509
     
    nEo717 likes this.
  40. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,347
    Joined:
    Jun 13, 2003
    I get wanting to access your port collection at 8GB/s, but even at 8k120, that's overkill ;).

    On a more serious note: for what? Are you buying fistfuls of the largest NVMe drives, trying to shove them all into one desktop, needing full bandwidth from each, and running out of space?

    I use Lightroom quite a bit myself. Generally, even with a 50,000+ image catalog, I'm good keeping a local copy of the catalog itself on an SSD and then keeping the images on relatively fast storage. I currently use a ZFS-based NAS over 10GbE. If I need to work faster I'll just import to a local SSD first.

    That's not to say that I wouldn't see a speedup; however, I don't see much room for one. As for keeping systems... I have systems running today that have run Lightroom on SSDs that are 5-6 years old, and the systems themselves are even older ;).

    What I'm seeing is arguments for >3.5GB/s transfer rates out of x4 NVMe SSDs. I get the benchmark racing, but the real-world desktop applications seem exceedingly thin.