PCI-E 4.0 SSD

Honestly, I only care about QD 1-4. Sequential performance at those read and write speeds provides me with nothing at this point.
 
The question is how many GB can be written at those speeds before the cache runs out?

It's basically an E12 with 96L TLC... hooked up to a PCIe 4.0 interface. Most E12 drives have ~30GB of SLC cache when empty, and they are not heavily reliant on it.
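If anyone wants to find that point on their own drive, here's a rough sketch (Linux-only, Python; the scratch path, chunk size, and 100 GiB cap are all assumptions, so aim it at the drive under test and make sure there's room). It just writes big O_DIRECT chunks and prints per-chunk throughput; the line where the MiB/s falls off a cliff is roughly where the pseudo-SLC cache ran out.

    # Rough SLC-cache probe: write large aligned chunks with O_DIRECT and print
    # per-chunk throughput. The point where MiB/s drops sharply is roughly where
    # the pseudo-SLC cache ran out. Linux-only; the path and sizes below are
    # assumptions -- point it at a scratch file on the drive under test.
    import mmap, os, time

    PATH = "/mnt/testdrive/slc_probe.bin"    # assumed scratch location
    CHUNK = 256 * 1024 * 1024                # 256 MiB per timed write
    TOTAL = 100 * 1024 * 1024 * 1024         # stop after 100 GiB

    buf = mmap.mmap(-1, CHUNK)               # page-aligned buffer, required for O_DIRECT
    buf.write(os.urandom(1024 * 1024) * (CHUNK // (1024 * 1024)))  # incompressible-ish fill

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    written = 0
    while written < TOTAL:
        t0 = time.perf_counter()
        os.write(fd, buf)                    # one 256 MiB direct write
        elapsed = time.perf_counter() - t0
        written += CHUNK
        print(f"{written / 2**30:6.1f} GiB written: {CHUNK / elapsed / 2**20:8.0f} MiB/s")
    os.close(fd)
    os.remove(PATH)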
 
Low queue depth performance is where it's at. I'll be impressed once we're at 100 MB/s or more of 4K QD1 read performance. Intel Optane is over 200 MB/s and the Samsung 970 Pro is around 70 MB/s. The faster sequential speeds are nice, but sequential improvements can already be had today with RAID arrays in a straightforward way. I can't tell the difference between an M500, 850 Pro, and 960 EVO in my day-to-day usage.
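If you'd rather check your own drive's 4K QD1 number than trust spec sheets, here's a minimal sketch (Linux, Python, O_DIRECT so the page cache doesn't flatter the result; the /dev/nvme0n1 path is just a placeholder, and reading a raw block device needs root -- a large pre-existing file on the drive works too):

    # Minimal 4K QD1 random-read probe: one thread, one outstanding I/O,
    # O_DIRECT so the page cache doesn't flatter the result. The device path
    # is an assumed placeholder.
    import mmap, os, random, time

    PATH = "/dev/nvme0n1"      # assumed target
    BLOCK = 4096
    READS = 50_000

    fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
    size = os.lseek(fd, 0, os.SEEK_END)
    blocks = size // BLOCK
    buf = mmap.mmap(-1, BLOCK)               # page-aligned buffer, required for O_DIRECT

    t0 = time.perf_counter()
    for _ in range(READS):
        os.preadv(fd, [buf], random.randrange(blocks) * BLOCK)
    elapsed = time.perf_counter() - t0
    os.close(fd)

    print(f"4K QD1: {READS * BLOCK / elapsed / 2**20:.0f} MiB/s, "
          f"{elapsed / READS * 1e6:.0f} us average latency")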
 
Gigabyte just showed off a PCIe 4.0 x16 card with NVMe drives in it that gets 15 GB/s sustained and around 175 MB/s at 4K QD1.
 
Yes, PCIe Gen 4 NVMe speeds are impressive, but they do generate a lot of heat.
I wonder if it is simply the NVMe protocol design that is inefficient, or if we just are not good at making the controllers efficiently right now.
They make networking chipsets that can push 600 gigabytes per second through them in barely ~25 W, and NICs with 50 GB/s capability draw barely as much power as these newer NVMe drives.
Seems weird that a chipset only capable of moving 15 GB/s is so power hungry.
 
The E16 is rated at 8 W TDP, for what it's worth. But it's basically just an E12 with a 4.0 PHY (interface). The E12 is a quad-core design and, like most consumer SSD controllers, is based on ARM Cortex-R. These do not run at high clocks (we're talking like 500 MHz) but are specialized for the low-latency, high-count operations you find with SSDs in general. What really gets them hottest is sustained writes, since you're handling eight channels of relatively fast NAND at once in a relatively small form factor. Most such controllers start to throttle in the 70-80 C range but can of course handle much more; it's more of a limit to maintain efficiency.
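If you want to watch that throttling point approach, here's a quick sketch that polls the drive's temperature once a second while you hammer it with writes in another terminal. The sysfs layout is an assumption: recent Linux kernels register an "nvme" hwmon sensor like this, but the exact paths can vary.

    # Quick throttle watch: poll the NVMe temperature(s) exposed through hwmon
    # once a second while a sustained write runs elsewhere. The sysfs paths are
    # an assumption and may differ by kernel/driver version.
    import glob, time

    def nvme_temps():
        readings = {}
        for name_file in glob.glob("/sys/class/hwmon/hwmon*/name"):
            with open(name_file) as f:
                if f.read().strip() != "nvme":
                    continue
            hwmon_dir = name_file.rsplit("/", 1)[0]
            for temp_file in glob.glob(hwmon_dir + "/temp*_input"):
                with open(temp_file) as f:
                    readings[temp_file] = int(f.read()) / 1000.0  # reported in millidegrees C
        return readings

    while True:   # Ctrl+C to stop
        for sensor, celsius in sorted(nvme_temps().items()):
            print(f"{sensor}: {celsius:.1f} C")
        time.sleep(1)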
 
Why is it that we haven't seen PCIe Gen 4 from Samsung, Intel, or Crucial? Is it because board manufacturers got to play with PCIe 4.0 before?
 
Not much point in rushing products with so very little benefit?
 
Crucial does have one on the way. Phison had the advantage of AMD investment for the E16 since AMD wanted PCIe 4.0 NVMe to market with its X570 launch. SMI's controller (usually they partner with Intel) is looking at 2020. Samsung I cannot speak for as of yet.
 
Optane is not that great, at least from all I have read; it helps a little, but versus just using an SSD in your rig it's almost pointless.


Which Optane are you referring to? The cache drives are pretty useless but the full SSD drives based on the tech are pretty darn nippy.
 
I am somewhat looking forward to their 'hybrid' drives that pair Optane with QLC NAND, but overall, a pure Optane drive is as good as it gets. I wouldn't touch their 'cache' drives.
 
Ah, you mean the H10. Basically a 660p with 32GB of XPoint (at worthwhile capacities). Limited compatibility. Performance, eh, it's okay for an ultrabook or something. Better than a 660p, but not really TLC-level; it still has that inconsistency.
 
It's the inconsistency that I mind too. However, for a system with an OEM OS load and typical consumer applications, I bet it will fly.
 
AnandTech has a good review on it, but they tend to test heavy; Storage Review is a bit more sanguine. It's hard to argue with the combination, because the 660p already relies on workloads falling within its (diminishing) SLC cache, which gets down to 12-24GB when fuller, versus the 32GB of XPoint we have on the 1TB SKU here. Intel pushes low-latency, low-queue-depth random (4K especially); they nudged SMI in this direction with the SM2262/63, and for the average user I think that's great. QLC has the density to stay single-sided and be power-efficient. So this drive is ideal for normal workloads in an ultrabook or something, where it will consistently fly. Options are good.

I think I agree with Storage Review that ultimately it comes down to price. It's going to be competing with TLC-based products so it's important you have the machine and workload to benefit in that price range, which I still think is mobile. But of course the H10 is an OEM product so I guess that's a moot point.
 
I think the heat will be manageable on good motherboards and that it's entirely worth the bandwidth upgrade. Can't wait to see PCIe 6.0 SSDs.
 
I can see your point, but answer me this: why are the manufacturers defining new versions of PCIe with higher and higher bandwidths?

Because these are being used (today) in the enterprise. I get that. On the desktop front, however, you're going to have to be really specific about your use case to justify something faster, especially since these implementations today come with drawbacks including cost, heat, power, and board flexibility in terms of layout.

You can race benchmarks all you want, and that's cool, we're on the [H]!, but with essentially zero benefit from increasing the speed from commonly available and inexpensive PCIe 3.0 x4 SSDs to PCIe 4.0 x4, it's really difficult to get excited.

I'd like to be proven wrong.
 
I can't prove you wrong. Whether you realize it or not, you just made an excellent argument against favoring one of the new AMD X570 motherboards to the exclusion of all the AMD (and Intel) motherboards that support only PCIe 3.0.

Check out the heatsink on this bad boy: https://www.anandtech.com/show/14416/corsair-announces-mp600-nvme-ssd-with-pcie-40

x509
 
It's hard to argue against a feature-equivalent X470 board. By the time PCIe 4.0 matters, you're going to want a platform upgrade regardless.
 
The issue with X470 (for me) isn't PCIe 3.0; it's that the chipset's downstream lanes are 8x PCIe 2.0. That's extremely limiting if you want more than one NVMe drive. At least there are dedicated CPU lanes for M.2; Intel is stuck going through the chipset for ALL of its M.2 storage. Having 4.0 upstream is just so nice, but the 4.0 downstream is overkill. I think the B550 will be PCIe 3.0 downstream, which would make it a better choice than X570 outside of niche cases.
 
I can see your point, but answer me this: why are the manufacturers defining new versions of PCIe with higher and higher bandwidths?


Mainly market churn for the public/domestic market.

But as I said earlier you can have all the bandwidth you like. The AppData folder will still drop to 50kbps...

We need some other improvements other than bandwidth.
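To put a rough number on that: write the same total bytes as one big file and then as thousands of tiny AppData-sized files, and the per-file open/fsync/metadata overhead tanks throughput long before bandwidth matters. A sketch below; the target directory and sizes are assumptions, so point it at the drive you want to test.

    # Same total bytes, written as one big file vs. thousands of tiny ones.
    # Per-file overhead dominates the small-file case regardless of link speed.
    # The target directory and sizes are assumptions.
    import os, shutil, time

    TARGET = "/mnt/testdrive/smallfile_test"   # assumed scratch directory
    TOTAL = 64 * 1024 * 1024                   # 64 MiB either way
    SMALL = 4 * 1024                           # 4 KiB per small file
    payload = os.urandom(SMALL)

    def timed(label, fn):
        t0 = time.perf_counter()
        fn()
        print(f"{label}: {TOTAL / (time.perf_counter() - t0) / 2**20:.1f} MiB/s")

    def one_big_file():
        with open(os.path.join(TARGET, "big.bin"), "wb") as f:
            for _ in range(TOTAL // SMALL):
                f.write(payload)
            f.flush()
            os.fsync(f.fileno())

    def many_small_files():
        for i in range(TOTAL // SMALL):
            with open(os.path.join(TARGET, f"f{i:06d}.bin"), "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())

    os.makedirs(TARGET, exist_ok=True)
    timed("one 64 MiB file     ", one_big_file)
    timed("16384 x 4 KiB files ", many_small_files)
    shutil.rmtree(TARGET)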
 
For me, I wish we had PCIe 6.0 right now. We could move all NVMe drives to x1 lane configurations and then easily connect 8-12 drives on a motherboard, with each still having the same bandwidth as current PCIe 4.0 x4 drives. If they won't give us enough lanes to use that many drives, then we should get higher bandwidth per lane so we don't need as many lanes to hit the necessary speed.
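The back-of-the-envelope math roughly works out, assuming the usual approximate per-lane figures after encoding overhead (these are ballpark numbers, not spec quotes):

    # Ballpark per-lane throughput by PCIe generation (after encoding overhead),
    # times lane count. Rounded approximations only.
    PER_LANE_GBPS = {"3.0": 0.98, "4.0": 1.97, "5.0": 3.94, "6.0": 7.56}  # GB/s per lane

    for gen, lanes in [("3.0", 4), ("4.0", 4), ("5.0", 2), ("6.0", 1)]:
        print(f"PCIe {gen} x{lanes}: ~{PER_LANE_GBPS[gen] * lanes:.1f} GB/s")

    # PCIe 3.0 x4: ~3.9 GB/s
    # PCIe 4.0 x4: ~7.9 GB/s
    # PCIe 5.0 x2: ~7.9 GB/s
    # PCIe 6.0 x1: ~7.6 GB/s

So a hypothetical Gen 6 x1 link really does land in the same ballpark as today's Gen 4 x4 drives.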
 
It's hard to argue against a feature-equivalent X470 board. By the time PCIe 4.0 matters, you're going to want a platform upgrade regardless.
Maybe, but I tend to "overbuy" a bit and keep my systems for 5-6 years. Also I do a lot with photography (Lightroom, Photoshop) and the Intel vs. AMD reviews tend to favor Intel. There is a small company in the northwest that builds custom Photoshop and Lightroom systems and they use only Intel CPUs (and nVidia GPUs).

x509
 
connect 8-12 drives on a motherboard

I get wanting to access your port collection at 8GB/s, but even at 8k120, that's overkill ;).

On a more serious note: for what? Are you buying fistfuls of the largest NVMe drives, trying to shove them all into one desktop, needing full bandwidth from each, and running out of space?

I use Lightroom quite a bit myself. Generally, even with a 50,000+ image catalog, I'm good keeping a local copy of the catalog itself on an SSD and then keeping the images on relatively fast storage. I currently use a ZFS-based NAS over 10GbE. If I need to work faster I'll just import to a local SSD first.

That's not to say I wouldn't see a speedup; however, I don't see much room for one. As for keeping systems... I have systems running Lightroom today with SSDs that are 5-6 years old, and the systems themselves are even older ;).

What I'm seeing is arguments for >3.5GB/s transfer rates out of x4 NVMe SSDs. I get the benchmark racing, but the real-world desktop applications seem exceedingly thin.
 
My Lightroom catalog right now is only about 25 K images, but I have a big backlog of images to import. And then I need to scan years of slides. Within a year, I expect to have about 60-80 K images in the catalog. I've read about people with Lightroom catalogs of 500 K images! :woot:

So I do what you do: keep the catalog on an SSD and all the image files on a 7200 rpm HDD (Hitachi, of course). My current desktop is about 8 years old: an ASUS P9X79 Pro with an Intel 3930K. I'm planning to upgrade either later this year or early next year, after all the recently announced motherboards have started to ship.
 
Depending on resolution, you could probably put them all on a 2TB 660p, which has decent performance for user workloads. Biggest issue is platform support really, but as a non-bootable device, I'm betting you could get away with it on HEDT with a little research.
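Rough sizing check on that, with the average file size as a pure assumption (raw sizes vary a lot by camera), so plug in your own number:

    # Rough capacity check; the average file size is an assumed figure.
    IMAGES = 80_000
    AVG_MB_PER_IMAGE = 30          # assumed raw + sidecar average
    DRIVE_TB = 2.0

    needed_tb = IMAGES * AVG_MB_PER_IMAGE / 1_000_000
    print(f"~{needed_tb:.1f} TB needed vs a {DRIVE_TB:.0f} TB drive")
    # ~2.4 TB needed vs a 2 TB drive -- tight at 30 MB/image, comfortable nearer 20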
 