Super-duper moar faster PCIe 5 nvme drives coming soon (according to Adata anyways) !

I hope they come out with PCI-E Gen 6 NVME drives next week.
Then maybe I can buy a PCI-E Gen 3 drive for a reasonable cost. At this point speed isn't the issue. The problem is that for any degree of density the price is astronomical. 8TB NVME drives can cost more than some people's entire systems.
 
Those are getting around DDR3-1600 single-channel levels of bandwidth, even getting close to slow single-channel DDR4 for the 15GB/s one.
 
Cool news but won't buy ADATA storage devices. Hoping Samsung or Inland release PCIe Gen 5 drives.
 
I hope they come out with PCI-E Gen 6 NVME drives next week.
Then maybe I can buy a PCI-E Gen 3 drive for a reasonable cost. At this point speed isn't the issue. The problem is that for any degree of density the price is astronomical. 8TB NVME drives can cost more than some people's entire systems.
So they're like GPUs?
 
So they're like GPUs?
Not at all, considering this was a problem before the silicon shortage, and having a smaller-density drive doesn't necessarily carry a performance penalty the way a lesser GPU would. In that sense, NVME drives are a greater "luxury item" than even a GPU is, especially considering that it's possible to buy a $2000 consumer NVME drive, and $5000 enterprise drives. I'm not sure how many enterprises can afford an array of $5000 drives at this paltry density though. GPUs definitely have WAY more use cases and ROI than drives and are therefore far more justified in terms of cost.

You can ask folks around here who do any CAD/design work or anything involving GPGPU, and if a GPU essentially speeds up your workflow by 20%, it's pretty much an instant buy, as you'll get your money back in time. Buying a GPU for $2500, or well over $5000 in the enterprise space, is still totally worth it. To businesses, this increase in cost is just the cost of doing business.

There's nothing like that for drives; arrays have long been used to increase the speed of pools of data. SAS SSD drives pooled together are still more than fast enough; any gain from NVME drives is marginal. Any speed increase would be fractional and not multiplicative like a GPU's is. It would take eons for the cost increase vs the performance increase to be worth it. And of course by then there would be newer, faster, and (hopefully) less expensive technology that would replace it long before its ROI vs a "slightly slower" array is achieved.
 
That is some crazy title gore.
 
This is cool, but still 100% irrelevant until DirectStorage and other similar APIs become mainstream. For 99% of use cases there is still almost no difference between a SATA and NVMe drive because everything is held back by storage and file-system protocols that were designed in the 20th century. Even when copying between two brand new NVMe SSDs, if you try to copy a folder with 20,000 tiny text files in it for example, you will get floppy-disk-era transfer speeds.
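Easy to reproduce if anyone wants to see it for themselves (a quick bash sketch; the file count, names, and paths are arbitrary):

# create 20,000 tiny files, then time a plain recursive copy
mkdir -p src
for i in $(seq 1 20000); do echo "$i" > "src/file_$i.txt"; done
time cp -r src dst

Even drive to drive on fast NVMe, that cp step crawls compared to moving one file of the same total size, which is exactly the point above.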
 
This is cool, but still 100% irrelevant until DirectStorage and other similar APIs become mainstream. For 99% of use cases there is still almost no difference between a SATA and NVMe drive because everything is held back by storage and file-system protocols that were designed in the 20th century. Even when copying between two brand new NVMe SSDs, if you try to copy a folder with 20,000 tiny text files in it for example, you will get floppy-disk-era transfer speeds.
100% agree

And I want to be able to use my entire drive in SLC mode if I choose to. I have use cases where I need the endurance more than capacity.
 
This is cool, but still 100% irrelevant until DirectStorage and other similar APIs become mainstream. For 99% of use cases there is still almost no difference between a SATA and NVMe drive because everything is held back by storage and file-system protocols that were designed in the 20th century. Even when copying between two brand new NVMe SSDs, if you try to copy a folder with 20,000 tiny text files in it for example, you will get floppy-disk-era transfer speeds.
This is true but how does DirectStorage solve that problem?

I just want cheaper high-capacity NVMe drives. I don't care what PCI generation they are. Shit's plenty fast with anything made in the last few years. Trick is, once you go over 2TB, the prices skyrocket.
Also true, but it's not clear why this is the case.
 
This is true but how does DirectStorage solve that problem?

It doesn't; it takes advantage of the speeds the storage media is capable of and uses them to cache assets for games. He's saying that the main reason to have one of these drives isn't being implemented right now, so why drop the cash on bleeding-edge tech?
 
Not at all, considering this was a problem before the silicon shortage, and having a smaller-density drive doesn't necessarily carry a performance penalty the way a lesser GPU would. In that sense, NVME drives are a greater "luxury item" than even a GPU is, especially considering that it's possible to buy a $2000 consumer NVME drive, and $5000 enterprise drives. I'm not sure how many enterprises can afford an array of $5000 drives at this paltry density though. GPUs definitely have WAY more use cases and ROI than drives and are therefore far more justified in terms of cost.

You can ask folks around here who do any CAD/design work or anything involving GPGPU, and if a GPU essentially speeds up your workflow by 20%, it's pretty much an instant buy, as you'll get your money back in time. Buying a GPU for $2500, or well over $5000 in the enterprise space, is still totally worth it. To businesses, this increase in cost is just the cost of doing business.

There's nothing like that for drives; arrays have long been used to increase the speed of pools of data. SAS SSD drives pooled together are still more than fast enough; any gain from NVME drives is marginal. Any speed increase would be fractional and not multiplicative like a GPU's is. It would take eons for the cost increase vs the performance increase to be worth it. And of course by then there would be newer, faster, and (hopefully) less expensive technology that would replace it long before its ROI vs a "slightly slower" array is achieved.
I worked for one of the top 3 storage manufacturers until a couple of months ago. To answer how many enterprises can afford that - a @%@%# ton. Don't forget - most are doing dedupe/compression on flash, so 100TB usable of NVMe is normally 400+TB effective. Home filesystems don't do dedupe, or compression, or other data reduction.

The speed doesn't buy you anything directly - but it does let you get more aggressive with dedupe and compression, and that pulls cost down and density WAY up.

But that's the enterprise :)
 
Man, I thought it was crazy when I set up my first RAM drive and was getting less than what these are pulling.
You know what bothers me? My Amiga had a RAM drive built into the OS over 30 years ago.
Why can't Windows do it? Not sure if Linux has it built in or not.
 
You know what bothers me? My Amiga had a RAM drive built into the OS over 30 years ago.
Why can't Windows do it? Not sure if Linux has it built in or not.
Linux has had it forever. Windows - there are ways. The question is why - most people don't have a use for it.
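If anyone wants to try it, a Linux RAM disk is literally one mount command (the size and mount point below are just placeholder values):

# create a mount point and mount a 2 GB tmpfs RAM disk on it
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk
# anything written under /mnt/ramdisk lives in RAM and disappears on unmount/reboot

Most distros also mount /dev/shm as tmpfs out of the box, so you often don't even need to set anything up.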
 
I worked for one of the top 3 storage manufacturers until a couple of months ago. To answer how many enterprises can afford that - a @%@%# ton. Don't forget - most are doing dedupe/compression on flash, so 100TB usable of NVMe is normally 400+TB effective. Home filesystems don't do dedupe, or compression, or other data reduction.

The speed doesn't buy you anything directly - but it does let you get more aggressive with dedupe and compression, and that pulls cost down and density WAY up.

But that's the enterprise :)
Yep, still work at one of the big 3. Majority of biz is now NVMe, and lots starting to go NVMe protocol over fiber and soon 100gbe rocev2. Spinning being relegated to tiering/object/etc on prem in most cases.
 
Why can't Windows do it? Not sure if Linux has it built in or not.

Windows has had it running as a background service since Vista. All your RAM is getting used even if the OS doesn't tell you what it's using it for. It's like a hybrid of a variable ReadyBoost drive and the page file.

There are utilities to set up RAM drives in Windows, and they can be useful if you need a discrete cache for stuff. Content creation stuff, mostly. But I bet once DirectStorage and its other-OS equivalents become standard, that software will take advantage of them too, and by that time RAM drives will probably lose relevance.
 
You know what bothers me? My Amiga had a RAM drive built into the OS over 30 years ago.
Why can't Windows do it? Not sure if Linux has it built in or not.
If you have a Windows machine close to you, you can go to Task Manager -> Performance -> Memory.

Under Committed you should see the number of gigs of RAM currently used for the SuperFetch cache (since Vista, like said above):
https://www.technipages.com/windows-enable-disable-superfetch#:~:text=Superfetch caches data so that,improve performance with business apps.

I never tried to deactivate it and run Windows without its active RAM cache to see the difference, but right now I have 19-20 gigs of RAM used for it, which is about 100% of my "free" RAM. That one is active; it reacts to what you recently did and depends on your usage pattern.

You also have, since way back in the early '90s on the Windows NT side, something called Cache Resident Bytes:
https://systemcenter.wiki/?GetElement=Microsoft.Windows.Server.10.0.OperatingSystem.MemorySystemCacheResidentBytes.Collection&Type=Rule&ManagementPack=Microsoft.Windows.Server.2016.Monitoring&Version=10.1.0.5#:~:text=System Cache Resident Bytes is,memory pages not currently resident.

I think that one is purely for the OS file cache, isn't shown in the regular Task Manager, and is more hardcoded.
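If you'd rather grab the number from a command line than dig through Task Manager, that counter can be read directly (this assumes PowerShell with the standard Memory counter set available):

# read the system file cache resident bytes performance counter
Get-Counter '\Memory\System Cache Resident Bytes'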
 
This is cool, but still 100% irrelevant until DirectStorage and other similar APIs become mainstream. For 99% of use cases there is still almost no difference between a SATA and NVMe drive because everything is held back by storage and file-system protocols that were designed in the 20th century. Even when copying between two brand new NVMe SSDs, if you try to copy a folder with 20,000 tiny text files in it for example, you will get floppy-disk-era transfer speeds.
Thank you for the last quoted sentence. This is the first time I've ever heard anyone explain what "held back by [ancient] storage protocols" means, and without a concrete example, it sounds like marketing BS.
 
Even when copying between two brand new NVMe SSDs, if you try to copy a folder with 20,000 tiny text files in it for example, you will get floppy-disk-era transfer speeds.
Not really. Using robocopy with the multi-thread switch alone helps a ton with this. On SSDs and PCIe NVMe drives, I often set it to at least 8, if not 16, and they absolutely fly. I've done up to 64, but at some point it tapers off, though I haven't done rigorous testing to find that point.
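For anyone who hasn't used it, the switch in question is /MT; a minimal example (the source/destination paths here are just placeholders):

:: copy the whole tree with 16 copy threads; /E includes subdirectories (even empty ones)
robocopy C:\src D:\dst /E /MT:16

If you give /MT with no number it defaults to 8 threads.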
 
Thank you for the last quoted sentence. This is the first time I've ever heard anyone explain what "held back by [ancient] storage protocols" means, and without a concrete example, it sounds like marketing BS.
That's file system overhead. They have enough IOPS to do the operations, but copies are single-threaded processes for very good reason (with exceptions, of course). The open/close/metadata updates take time, and with thousands of files, that's the majority of the work (vs the actual transfer). There are options on some file systems to tune around this, with potential impacts to reliability or to detailed information being available, but it's a known limitation of effectively every consumer filesystem in existence.
 
Not really. Using robocopy with the multi-thread switch alone helps a ton with this. On SSDs and PCIe NVMe drives, I often set it to at least 8, if not 16, and they absolutely fly. I've done up to 64, but at some point it tapers off, though I haven't done rigorous testing to find that point.
Robocopy (and rsync with the right flags, etc) are the way around my above point (but also not default for various reasons).
 
That's file system overhead. They have enough IOPS to do the operations, but copies are single-threaded processes for very good reason (with exceptions, of course). The open/close/metadata updates take time, and with thousands of files, that's the majority of the work (vs the actual transfer). There are options on some file systems to tune around this, with potential impacts to reliability or to detailed information being available, but it's a known limitation of effectively every consumer filesystem in existence.
Didn't know rsync finally got multithread capability. Only way I knew how was with xargs, and that's kind of clunky. I only ever do multi-thread when it's single access and just source->dest. Beyond that, the concerns you mention become reality quickly :)
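For reference, the clunky xargs pattern I mean looks roughly like this (a sketch from memory, assuming GNU xargs and rsync; paths are placeholders):

# run one rsync per top-level entry of the source, up to 8 at a time
cd /path/to/src
find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 -P 8 -I{} rsync -a "{}" /path/to/dst/

Fine for a plain source->dest copy, but it's exactly the kind of thing that gets unsafe once anything else is touching those files.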
 
Didn't know rsync finally got multithread capability. Only way I knew how was with xargs, and that's kind of clunky. I only ever do multi-thread when it's single access and just source->dest. Beyond that, the concerns you mention become reality quickly :)
There are … ways. Xargs is one. I used to have another one scripted for when I regularly had to unpack and move tarballs with 100,000 small text files in them. Can't remember it as I haven't had to do it in a few years and I'd have to dig into my notes, but you nailed it. It can get real unsafe. Especially if the target is a network filesystem that's doing fun things with metadata to start with!
 
Cool news but won't buy ADATA storage devices. Hoping Samsung or Inland release PCIe Gen 5 drives.
Trudat ! I will NOT ever buy anything from any company that does the bait & switch bullshit that they did with their recent drives.

But on a moar cheery note, Sammy, WD and Crucial have also announced pcie gen 5 drives as coming soon :D
 
Trudat ! I will NOT ever buy anything from any company that does the bait & switch bullshit that they did with their recent drives.

But on a moar cheery note, Sammy, WD and Crucial have also announced pcie gen 5 drives as coming soon :D
Am I the only one that sees the irony? I can't be, right?
 
Am I the only one that sees the irony? I can't be, right?
While I know about ADATA being shady, I suspect I'm missing something. Haven't needed to buy drives in a bit, and I just buy Sabrent for high end and Inland from Micro Center otherwise.
 
I'd be somewhat concerned about max read cycles being exceeded, but I otherwise support this 100%. ;)

It's a good point; using an SSD as RAM would increase the reads/writes exponentially. Reliability has gotten much better but is still a long way off RAM.
 
It's a good point; using an SSD as RAM would increase the reads/writes exponentially. Reliability has gotten much better but is still a long way off RAM.
We have persistent RAM. It's expensive as hell.
 