Western Digital Ships 24TB & 28TB Hard Disks, Declares "Total Supremacy"

DPI

[H]F Junkie
Joined
Apr 20, 2013
Messages
12,833

Western Digital made both enterprise storage planners and dedicated porn hoarders swoon today with the availability of 24TB CMR hard disks, while production on 28TB SMR hard disks ramps up during enterprise trials.

The new lineup of 3.5-inch 7200 RPM hard drives includes Western Digital's Ultrastar DC HC580 24 TB and WD Gold 24 TB HDDs, which are based on the company's energy-assisted perpendicular magnetic recording (ePMR) technology. Both of these drives are further enhanced with OptiNAND to improve performance by storing repeatable runout (RRO) metadata on NAND memory (instead of on disks) and improve reliability.

The new drives are slightly faster than their predecessors due to higher areal density. Meanwhile, the per-TB power efficiency of Western Digital's 24 TB and 28 TB HDDs is around 10% to 12% higher than that of the 22 TB and 26 TB drives, respectively, thanks to higher capacity at more or less the same power consumption.
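The efficiency claim is easy to sanity-check: if power draw stays roughly flat while capacity grows, watts-per-TB falls by the capacity ratio. A minimal sketch in Python (the 6.5 W figure is a hypothetical placeholder, not WD's spec):

```python
# Rough sanity check of the per-TB efficiency claim (assumed figures):
# if a 24 TB drive draws about the same power as a 22 TB drive,
# the watts-per-TB improvement is simply the capacity ratio.
def per_tb_improvement(new_tb, old_tb, new_watts, old_watts):
    """Percent reduction in watts per TB going from old to new drive."""
    old_eff = old_watts / old_tb
    new_eff = new_watts / new_tb
    return (old_eff - new_eff) / old_eff * 100

# Assuming roughly equal power draw (hypothetical 6.5 W for both):
print(round(per_tb_improvement(24, 22, 6.5, 6.5), 1))  # 8.3
print(round(per_tb_improvement(28, 26, 6.5, 6.5), 1))  # 7.1
```

The capacity ratio alone gives roughly 8% and 7%, so WD's 10% to 12% figure implies the new drives also draw slightly less absolute power.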

Source
 
New Big Size Harddisk FAQ™

Q: Will this make smaller sized drives cheaper? This should push down 20TB & 18TB drive prices.
A: No, it won't. It's never worked that way.

Q: Why are they making hard disks bigger, I thought we were transitioning to SSD's?
A: One has nothing to do with the other.

Q: Boy that's a lot of data to lose, I'd hate to lose that much data. Doesn't that seem like a lot of data to lose?
A: No. Shut up.

 
Wait, they're offering SMR disks at higher capacities? Not that long ago, the attempt to cheapen manufacturing despite the limitations of SMR meant people were scrambling to ensure the new disks they were buying and/or pulling out of externals were CMR. I thought, unless things have changed considerably (which is possible, I suppose), that anyone buying an 8TB-16TB, much less a 24TB+, drive wouldn't accept the limitations of SMR for their use cases. I am a bit curious about the difference between Ultrastar and WD Gold in this case, though.
 
This hard drive is literally 121739 times bigger than the first hard drive I bought. (Which was a whopping 230 MB if I remember correctly. Oh how many 3.5" disks it could hold! Edit: 3.5" floppy disks, that is. High Density ones, even. ;) )
 
New Big Size Harddisk FAQ

Q: Will this make smaller sized drives cheaper? This should push down 20TB drive prices.
A: No, it won't. It's never worked that way.

It often does, gradually over time, but never instantly. Higher-density platters open up opportunities to make lower-capacity drives at a lower cost, but the price generally doesn't drop until at least one competitor has caught up and can offer a competing product at that lower price, so it takes a while.


Q: Why are they making hard disks bigger, I thought we were transitioning to SSD's?
A: One has nothing to do with the other.

Yeah, these are not intended for client workloads the way most SSDs are. In 2023, hard drives are 100% for the data center (or at the very least a NAS server, for us enthusiasts and small business owners).

Q: Boy that's a lot of data to lose, I'd hate to lose that much data. Doesn't that seem like a lot of data to lose?
A: Shut up.

Then don't lose it. Anyone running one of these drives as a standalone disk without both redundancy and regular backups deserves to lose that much data.
 
Backup ftw here. RAID 1 (or mirroring at a higher level) also ok to add to that.

The usual warning applies for anything relying on parity for reconstruction (to maximize space): large arrays of large drives produce gigantic reconstruction times. I could see an 8-drive array of these taking weeks, perhaps months.

Density is "fine" as long as you don't try to just blindly throw it into "what you used to do with 500GB drives".
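For a rough sense of scale, the floor on a rebuild is just capacity divided by sustained transfer rate; real parity rebuilds under production load run far slower. A back-of-envelope sketch (the 270 MB/s and 50 MB/s rates are assumptions, not measured figures):

```python
def rebuild_hours(capacity_tb, mb_per_s):
    """Best-case sequential rebuild time: capacity / sustained rate."""
    bytes_total = capacity_tb * 1e12
    return bytes_total / (mb_per_s * 1e6) / 3600

# Flat-out sequential at an assumed 270 MB/s:
print(round(rebuild_hours(24, 270), 1))   # 24.7 hours
# Same drive if the array stays busy and the rebuild only gets ~50 MB/s:
print(round(rebuild_hours(24, 50), 1))    # 133.3 hours (~5.5 days)
```

So even the absolute best case is about a day per drive, and an array that keeps serving I/O during the rebuild is already into multiple days per drive.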
 
I think in the next few years we're going to see more of these extremely high-capacity HDDs. Logically, it's the only way they can compete with SSDs. Flash is constantly dropping in price and increasing in capacity. Can't beat them at speed, only size.
 
I wonder how loud these things are. Noise is the only thing scaring me away from a big drive.
 
Sad we’re stuck at a performance wall. Is what it is.
Performance? Related to SATA 3? My understanding is that one drive alone cannot saturate a SATA 3 channel. Yes? No?

That said, within a NAS box it would be nice to have a much higher transfer rate. In fact, even for a regular channel, it would be nice to have SATA at 12 Gb/s or more. Better yet, I would like an NVMe or even a regular SSD that could hold 24 TB. :rolleyes::joyful::geek:
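On the saturation question: no, a single spinner can't. SATA 3 signals at 6 Gb/s on the wire, 8b/10b line coding leaves about 600 MB/s of payload, and a modern high-capacity 7200 rpm drive tops out somewhere near 290 MB/s on the outer tracks (an assumed ballpark figure, not a spec):

```python
# SATA 3 runs at 6 Gb/s on the wire; 8b/10b coding leaves 80% as payload.
sata3_payload_mb_s = 6e9 * 8 / 10 / 8 / 1e6
print(sata3_payload_mb_s)  # 600.0

# Assumed peak sustained rate for a big 7200 rpm HDD (outer tracks):
hdd_peak_mb_s = 290
print(f"{hdd_peak_mb_s / sata3_payload_mb_s:.0%} of the link")  # 48% of the link
```

So one drive uses under half the link even at its best; it's arrays and SSDs that actually need faster interfaces.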
 
Wait, they're offering SMR disks at higher capacities? Not that long ago, the attempt to cheapen manufacturing despite the limitations of SMR meant people were scrambling to ensure the new disks they were buying and/or pulling out of externals were CMR. I thought, unless things have changed considerably (which is possible, I suppose), that anyone buying an 8TB-16TB, much less a 24TB+, drive wouldn't accept the limitations of SMR for their use cases. I am a bit curious about the difference between Ultrastar and WD Gold in this case, though.

These are SMR for enterprise, where everyone knows they're SMR, and they're almost certainly running in host-managed mode. The idea is that you get denser recording, but each zone has to be written sequentially. There are some server workloads this is great for, because getting ~20% more storage helps a lot when you're running hundreds of storage boxes, and a lot of storage use cases are write once, read many, sometimes keep forever. It's not bad for storage that expires uniformly either: everything written to one zone is going to expire around the same time, so you can start writing to that zone again.

What happened at the low end was WD decided to ship device-managed SMR to consumers, because they thought they could get away with it, and maybe they managed to reduce the number of heads in the drive by one, so they saved costs. Of course, device-managed SMR is terrible if you use the drive much, and especially if you're doing an array rebuild.
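The host-managed zone model above can be sketched in a few lines. This is a toy model, not the real ZBC/ZAC command set: each zone only accepts writes at its write pointer, and reclaiming space means resetting a whole zone (which is why uniformly expiring data fits SMR so well):

```python
# Toy model of a host-managed SMR zone (illustration only, not a real API).
class Zone:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0
        self.data = {}

    def append(self, block):
        """Sequential write at the current write pointer."""
        if self.write_pointer >= self.size:
            raise IOError("zone full")
        self.data[self.write_pointer] = block
        self.write_pointer += 1
        return self.write_pointer - 1

    def write_at(self, lba, block):
        """Host-managed SMR rejects any non-sequential write outright."""
        if lba != self.write_pointer:
            raise IOError("write must land on the write pointer")
        return self.append(block)

    def reset(self):
        """Uniform-expiry case from the post: reclaim the whole zone at once."""
        self.write_pointer = 0
        self.data.clear()

z = Zone(size_blocks=4)
z.append(b"a")
z.append(b"b")
try:
    z.write_at(0, b"x")      # overwrite attempt -> rejected
except IOError as e:
    print(e)                 # write must land on the write pointer
z.reset()                    # everything in the zone expired together
print(z.write_pointer)       # 0
```

Device-managed SMR hides this same constraint behind internal rewriting, which is exactly what collapses under sustained random writes or an array rebuild.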
 
They're fine if you use a noise-dampening 3.5" to 5.25" adapter. I'm too lazy to link one.
I don't know, bud. I'm not really sold on the idea that the HDD noise is primarily caused by the drive's vibrations interacting with the case. Perhaps it might help in some cases (no pun intended). On the other hand, if you could stop the drive vibrating the air, then we'd be really getting somewhere...
 
In my experience at the consumer level, once a given capacity drops below a certain price it tends to disappear entirely, which creates an overall price floor.

To be fair, hard drives are rather complicated micro-mechanical devices.

As size shrinks, the value to the buyer decreases, so they expect a lower price. But the cost to manufacture shrinks more slowly than the storage size does, so eventually you reach an intersection point where the drives just can't be made profitably.
 
I would like an NVMe or even a regular SSD that could hold 24 TB. :rolleyes::joyful::geek:

They exist if you have enough money ;).


I'd settle for an open source standard for tiered storage that could be used under both Windows and Linux.

It could be great. Modular. And because it would be an open source standard, it could actually work with standard partition management/rescue tools, unlike the likes of AMD's StoreMI, which was a disaster.

I'm picturing a user interface where you just build an infinitely deep hierarchy, with ranking priority, and the ability to assign ties. Faster drives get filled first (leaving some percentage of open space for writing). Frequently accessed files stay on the fast storage. Less frequently accessed files get moved down the assigned hierarchy as the drives fill up.

Tied ranks would be striped with each other. A built-in feature to handle redundancy and backups to a remote NAS would also be nice, just in case the whole thing goes kablooie.

I'd build something like this:
Tied at #1 priority: Two 2TB NVMe Samsung 990 Pros
Tied at #2 priority: Two 8TB SATA SSDs (probably Samsung 870 QVO)
#3 priority: One or two slow spinners just to catch the rarely accessed stuff.

Heck, maybe even a feature for those who have a shit ton of RAM to have a tier 0 RAM drive.

The system would use idle time to move shit around and continuously optimize placement based on access patterns.

There would also be a feature to right click on a specific folder and manually promote it to faster storage, like - for instance - if you are about to re-play an older game you haven't played in a while, that has been downgraded to slower storage.

It would be a little complicated, but it would allow for infinite customization of cool solutions (which is fun!) and for some large, high-performing logical devices without needing to spend tens of thousands of dollars on extreme-sized NVMe drives.
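The fill-fastest-first idea sketches out to something like this (a hedged toy model: the tier names, sizes, and 10% headroom figure are made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    capacity_gb: float
    headroom: float = 0.10          # keep this fraction free for new writes
    files: dict = field(default_factory=dict)  # file name -> size in GB

    def used(self):
        return sum(self.files.values())

    def fits(self, size_gb):
        return self.used() + size_gb <= self.capacity_gb * (1 - self.headroom)

def place(tiers, name, size_gb):
    """Put a file on the fastest tier with room, else push down the ranking."""
    for tier in tiers:              # tiers listed fastest-first
        if tier.fits(size_gb):
            tier.files[name] = size_gb
            return tier.name
    raise IOError("all tiers full")

tiers = [Tier("nvme", 100), Tier("sata-ssd", 1000), Tier("hdd", 10000)]
print(place(tiers, "game.bin", 80))    # nvme (fits under the 90 GB usable)
print(place(tiers, "movie.mkv", 50))   # sata-ssd (nvme now lacks headroom)
```

Demotion would be the mirror image: when a fast tier breaches its headroom, an idle-time job would move the least-recently-accessed files down the ranking.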
 
I'd settle for an open source standard for tiered storage that could be used under both Windows and Linux.

It could be great. Modular. And because it would be an open source standard, it could actually work with standard partition management/rescue tools, unlike the likes of AMD's StoreMI, which was a disaster.

I'm picturing a user interface where you just build an infinitely deep hierarchy, with ranking priority, and the ability to assign ties. Faster drives get filled first (leaving some percentage of open space for writing). Frequently accessed files stay on the fast storage. Less frequently accessed files get moved down the assigned hierarchy as the drives fill up.

Tied ranks would be striped with each other. A built-in feature to handle redundancy and backups to a remote NAS would also be nice, just in case the whole thing goes kablooie.

I'd build something like this:
Tied at #1 priority: Two 2TB NVMe Samsung 990 Pros
Tied at #2 priority: Two 8TB SATA SSDs (probably Samsung 870 QVO)
#3 priority: One or two slow spinners just to catch the rarely accessed stuff.

Heck, maybe even a feature for those who have a shit ton of RAM to have a tier 0 RAM drive.

The system would use idle time to move shit around and continuously optimize placement based on access patterns.

There would also be a feature to right click on a specific folder and manually promote it to faster storage, like - for instance - if you are about to re-play an older game you haven't played in a while, that has been downgraded to slower storage.

It would be a little complicated, but it would allow for infinite customization of cool solutions (which is fun!) and for some large, high-performing logical devices without needing to spend tens of thousands of dollars on extreme-sized NVMe drives.


All of that said, I wonder what the performance of like five Samsung 990 Pros using motherboard RAID5 would be.

You could in theory get to the 16-20 TB volume level with five Samsung 990 Pros (depending on your fault tolerance) and a workstation motherboard with more PCIe lanes than a consumer board. Striping 5 of them = 20TB; setting up 5 in RAID5 for some minimal fault tolerance would give 16TB. A little pricey, but it should not be surprising that NVMe costs more than a mechanical hard drive per TB.

I'm guessing a nice little bump in sequential transfer speeds and IOPS would occur, but that it would probably be outweighed by a drop in Random 4k read performance at low queue depths and an increase in latency.
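The capacity arithmetic above is just drives-minus-parity:

```python
def usable_tb(drives, drive_tb, level):
    """Usable capacity for simple striping vs. single-parity RAID."""
    if level == "raid0":
        return drives * drive_tb             # no redundancy
    if level == "raid5":
        return (drives - 1) * drive_tb       # one drive's worth of parity
    raise ValueError(f"unsupported level: {level}")

print(usable_tb(5, 4, "raid0"))  # 20 -> five striped 4TB drives
print(usable_tb(5, 4, "raid5"))  # 16 -> survives one drive failure
```

RAID5 also adds read-modify-write overhead on small writes, which is part of why the low-queue-depth random numbers would likely suffer.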
 
it could actually work with standard partition management/rescue tools, unlike the likes of AMD's StoreMI, which was a disaster.

What happened here? I'm rocking an AMD 7900x CPU in an ASUS Strix 670e motherboard.
I'm picturing a user interface where you just build an infinitely deep hierarchy, with ranking priority, and the ability to assign ties. Faster drives get filled first (leaving some percentage of open space for writing). Frequently accessed files stay on the fast storage. Less frequently accessed files get moved down the assigned hierarchy as the drives fill up.

Sure, if you have that many drives in your system. I have 4 HDDs, 2 NVMe drives, and 2 SATA SSDs. But each drive has specific partitions, and I doubt that more than 2 drives/partitions are being used at any one time.
It would be a little complicated, but it would allow for infinite customization of cool solutions (which is fun!)

More than I would need, certainly.
without needing to spend tens of thousands of dollars on extreme-sized NVME drives.
My MEDIA partition (photos, videos, music, etc.) is 8 TB on spinning rust. I suppose I could get two 4TB NVMe drives, but that's a lot of scratch. 2-3 years from now, who knows about pricing.
 
All of that said, I wonder what the performance of like five Samsung 990 Pros using motherboard RAID5 would be.

You could in theory get to the 16-20 TB volume level with five Samsung 990 Pros (depending on your fault tolerance) and a workstation motherboard with more PCIe lanes than a consumer board. Striping 5 of them = 20TB; setting up 5 in RAID5 for some minimal fault tolerance would give 16TB. A little pricey, but it should not be surprising that NVMe costs more than a mechanical hard drive per TB.

I'm guessing a nice little bump in sequential transfer speeds and IOPS would occur, but that it would probably be outweighed by a drop in Random 4k read performance at low queue depths and an increase in latency.
If we're talking performance, I would much, much rather get lower-latency storage, and I'd sacrifice a lot of space to do it. A 256GB or 512GB latency-optimized SLC drive with a fat onboard DRAM cache would be dramatically more useful as a performance drive than 4TB of QLC. You could still RAID them if you really need the space, but at least any latency overhead of the RAID would be offset by the drives themselves.
 
What happened here? I'm rocking an AMD 7900x CPU in an ASUS Strix 670e motherboard.

To be clear, StoreMI is a software application; a license for it was (at least for a while there, not sure about anymore) included with the purchase of a Ryzen/Threadripper CPU. You didn't have to install it, and the system would work perfectly well without it, unless you wanted to use storage tiering. It was pretty basic though, and as I recall the license only allowed for two tiers, with one drive in each. My memory is a little vague here, as I haven't touched it in a few years, so don't take this as hard fact.

StoreMI works well, when it works. Firstly, it is Windows-only. Secondly, if you want to do partition dumps, edit your partitions using some sort of partition manager or resize tool, or have multiple partitions on the drives you are using with StoreMI (like in the case of a dual boot), it just doesn't work.

Common partition management tools just don't recognize StoreMI's data scheme, so the drives look blank to all of them, making it nearly impossible to duplicate partitions or attempt data rescue using other devices.
 
If we're talking performance, I would much, much rather get lower-latency storage, and I'd sacrifice a lot of space to do it. A 256GB or 512GB latency-optimized SLC drive with a fat onboard DRAM cache would be dramatically more useful as a performance drive than 4TB of QLC. You could still RAID them if you really need the space, but at least any latency overhead of the RAID would be offset by the drives themselves.

I'd agree for running programs and booting OSes. Low latency and good low-queue-depth 4K random read performance are key there.

For file storage? Who cares. You're not going to notice if that movie in your media library takes a few ms longer to start playing.

My MEDIA partition (photos, videos, music, etc.) is 8 TB on spinning rust. I suppose I could get two 4TB NVMe drives, but that's a lot of scratch. 2-3 years from now, who knows about pricing.

Personally, I've banished all file storage to my overkill NAS server since mid-2010. (Well, it wasn't overkill in the beginning, but it grew into that over time.)

My desktop has been 100% SSD since I set up my first NAS in 2010. All client machines in my house have been all-SSD since at least 2012. My desktop has been all-NVMe since 2015, when I got an Intel 750 PCIe drive. SATA SSDs have been slowly working their way out of the house for years now; the last few are in older laptops that lack NVMe slots.

I have twelve 16TB 7200rpm Seagate Enterprise (Exos?) drives in the NAS server though for mass storage, and since 2014 I've had 10gig networking to be able to interact with that storage quickly.

(I am currently scheming a move to 25gig, and maybe even a single 100gig dedicated line between my desktop and the server, just for fun and overkill, and a possible future with raided NVMe drives on the server)
 
Too large to fit in my laptop. I currently have 17TB of storage: 2x 4TB NVMe, 1x 8TB SATA SSD, and a 1TB microSD.
 
I don't know, bud. I'm not really sold on the idea that the HDD noise is primarily caused by the drive's vibrations interacting with the case. Perhaps it might help in some cases (no pun intended). On the other hand, if you could stop the drive vibrating the air, then we'd be really getting somewhere...
The noise these days is from the constant tick of the head sweeping across the platters to spread the lubricant. This is what I use to silence it: https://forums.guru3d.com/threads/clicking-hdd-is-now-a-feature-pwl.437452/
 
I don't know, bud. I'm not really sold on the idea that the HDD noise is primarily caused by the drive's vibrations interacting with the case. Perhaps it might help in some cases (no pun intended). On the other hand, if you could stop the drive vibrating the air, then we'd be really getting somewhere...

The noise these days is from the constant tick of the head sweeping across the platters to spread the lubricant. This is what I use to silence it: https://forums.guru3d.com/threads/clicking-hdd-is-now-a-feature-pwl.437452/

Honestly these days hard drives are pretty quiet.

There used to be constant spindle whine back in the day, but now it's just an occasional tick as they read. I don't find that bothersome at all. Granted, I don't sit in the same room as mine, so maybe I would be more bothered if I did.
 
Honestly these days hard drives are pretty quiet.

There used to be constant spindle whine back in the day, but now it's just an occasional tick as they read. I don't find that bothersome at all. Granted, I don't sit in the same room as mine, so maybe I would be more bothered if I did.
It's not occasional; that PWL is horrible. That's why I spent so much fixing it.
 
Honestly these days hard drives are pretty quiet.

There used to be constant spindle whine back in the day, but now it's just an occasional tick as they read. I don't find that bothersome at all. Granted, I don't sit in the same room as mine, so maybe I would be more bothered if I did.
This is one of my 10,000 RPM Seagate Cheetah SCSI drives from back in the day.

https://www.youtube.com/watch?v=AMFkmuhTgbY

The internals of it.
The platters are sitting on a single 3.5" platter from a standard drive, for scale.
 
Nice! I have 8 4TB Seagates (NAS models) I want to upgrade. Currently 6 for data and 2 for SnapRAID. I also use DrivePool and duplicate certain folders.
Thinking of going with 4 16TB WD Golds... 2 for data and 2 for parity... maybe 3 and 2, have not decided. Hope prices come down a little soon.
Any recommendations to replace SnapRAID? I have never had to recover. I used to use some GUI overlay to update the snapshots... don't want to mess with command prompts.
 
Nice! I have 8 4TB Seagates (NAS models) I want to upgrade. Currently 6 for data and 2 for SnapRAID. I also use DrivePool and duplicate certain folders.
Thinking of going with 4 16TB WD Golds... 2 for data and 2 for parity... maybe 3 and 2, have not decided. Hope prices come down a little soon.
Any recommendations to replace SnapRAID? I have never had to recover. I used to use some GUI overlay to update the snapshots... don't want to mess with command prompts.
A man of culture, I see. JBOD + SnapRAID + DrivePool is the way for home media. There isn't anything else GUI-based with SnapRAID-like functionality that I'm aware of, but restoring is trivial, and I definitely recommend practicing it a few times.

You could start a thread in the Storage subforum about replacing SnapRAID and you'll get plenty of recommendations; they'll boil down to ZFS, TrueNAS, or unRAID. But I'd avoid ZFS, since striping introduces unnecessary risk for home media IMO, never mind the inability to easily expand one disk at a time or mix different-sized disks, plus you lose individual drive spinup/spindown. unRAID is good and probably closest to DrivePool+SnapRAID in functionality.
 
A man of culture, I see. JBOD + SnapRAID + DrivePool is the way for home media. There isn't anything else GUI-based with SnapRAID-like functionality that I'm aware of, but restoring is trivial, and I definitely recommend practicing it a few times.

I know nothing about SnapRAID or DrivePool, but I am going to have to read up now.

I have always been a huge ZFS fan. Its JBOD does its magic in software, and it's reportedly one of the most reliable RAID-like solutions out there due to how well it combats bit rot.

I believe TrueNAS Core (formerly FreeNAS) still has a web-based GUI implementation of ZFS on top of a barebones FreeBSD install. I haven't used it in ages though, instead favoring the DIY approach using ZFS from the command line.

I currently have my main pool configured as follows:
Code:
 state: ONLINE
  scan: scrub repaired 0B in 10:02:09 with 0 errors on Sun Nov 12 10:26:14 2023
config:

    NAME                                               STATE        READ  WRITE CKSUM
    pool                                               ONLINE       0     0     0
      raidz2-0                                         ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
      raidz2-1                                         ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
    special  
      mirror-4                                         ONLINE       0     0     0
        Inland Premium NVMe 2TB                        ONLINE       0     0     0
        Inland Premium NVMe 2TB                        ONLINE       0     0     0
        Inland Premium NVMe 2TB                        ONLINE       0     0     0
    logs  
      mirror-3                                         ONLINE       0     0     0
        Intel 280GB Optane 900p                        ONLINE       0     0     0
        Intel 280GB Optane 900p                        ONLINE       0     0     0
    cache
        Inland Premium NVMe 1TB                        ONLINE       0     0     0
        Inland Premium NVMe 1TB                        ONLINE       0     0     0

errors: No known data errors

So, essentially a ZFS 12-drive RAID60 equivalent on the hard drives: a three-way mirror of 2TB Gen 3 MLC NVMe drives for small files and metadata, two mirrored 280GB Optanes for the log device (speeds up sync writes), and two striped 1TB Gen 3 NVMe drives for read cache.

It works pretty well for me. Well, maybe everything except the read cache. The hit rate on those is atrocious; despite being 2TB in total, the read cache does very little for me.
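For anyone eyeballing the layout above, the raw-usable math on the data vdevs is simple: each 6-wide raidz2 gives up two drives' worth of parity (ignoring ZFS metadata overhead, slop space, and TB-vs-TiB reporting):

```python
# Raw-usable space for two 6-wide raidz2 vdevs of 16TB drives.
def raidz_usable_tb(width, drive_tb, parity):
    """Usable TB for one raidz vdev: data drives times drive size."""
    return (width - parity) * drive_tb

vdevs = [raidz_usable_tb(6, 16, parity=2)] * 2
print(sum(vdevs))  # 128
```

In practice `zfs list` will show noticeably less than 128TB once overhead and binary units are accounted for.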
 
I know nothing about SnapRAID or DrivePool, but I am going to have to read up now.

I have always been a huge ZFS fan. Its JBOD does its magic in software, and it's reportedly one of the most reliable RAID-like solutions out there due to how well it combats bit rot.

I believe TrueNAS Core (formerly FreeNAS) still has a web-based GUI implementation of ZFS on top of a barebones FreeBSD install. I haven't used it in ages though, instead favoring the DIY approach using ZFS from the command line.
TrueNAS Core is BSD and TrueNAS Scale is Linux under the hood. I recently converted to Scale and quite like it.
 
This hard drive is bigger than all of my hard drives added together :O
Haha, I always have to 'remember' that I'm an outlier. If you work at all in the video world and you actually preserve your data (which you should), then single projects can often take 4-5TB of space.
It actually makes sense to charge clients for drives as a line item on their bill, called "Data Storage" or whatever.
Sad we’re stuck at a performance wall. Is what it is.
Generally for this kind of drive, it's going into a high-density server anyway. With 32+ of them pooled together, overall throughput is 'reasonable'.

And there is this:
They exist if you have enough money ;).
I mean, that was kind of the point; it's unobtainium for normal people. People who think NAND is cheap right now don't have to do any serious amounts of storage. It's not 'cheap' until magnetic media can be replaced for only a 'slight' premium, not for the price difference of a car.
 
TrueNAS Core is BSD and TrueNAS Scale is Linux under the hood. I recently converted to Scale and quite like it.

I used the original BSD-based FreeNAS first on bare metal, and then as a VM in ESXi with my SAS HBAs passed through to the FreeNAS VM.

Then in 2015 or 2016 (can't remember now) I got pissed off at ESXi and decided it was time for a change. I moved the server to Proxmox, which is a front end for KVM and LXC. At first I tried continuing to run a FreeNAS VM with pass-through, but then I realized that Proxmox supported ZFS natively, and that was way more efficient, so I just did that using the command-line tools. Towards the end with FreeNAS I had pretty much abandoned the GUI in favor of the command line anyway.

So now my main pool (one of four pools in that machine) runs bare metal under Linux / Proxmox.

The three other pools are much smaller.

There is my boot pool (rpool), which is just a basic two-SSD mirror.

Then there is a pool specifically for scheduled TV recordings from my MythTV container, using two mirrored 1TB Inland Premium NVMe drives.

And then there is a pool with two mirrored 256GB Inland Premium drives for my VM drive images. This one was a little bit of a mistake: it is absolutely tearing through the write endurance of those little NVMe drives. They've been in there since mid-to-late 2021 and have already burned through 37% of their write endurance (according to SMART values).

I guess it's not really THAT bad. At this rate they will live for about 5.5 years, and they were relatively inexpensive NVMe drives in 2021 at $39.99 each. I'll just have to remember to keep an eye on them, swap them out one by one, and rebuild when they get low.
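That lifetime figure is just a linear extrapolation of the SMART wear counter (the ~2-year in-service span is an assumption based on the dates above):

```python
# Linear wear extrapolation: total life = time served / fraction consumed.
def projected_life_years(years_in_service, pct_used):
    return years_in_service / (pct_used / 100)

print(round(projected_life_years(2.0, 37), 1))  # 5.4
```

Worth re-checking periodically, since VM write patterns can change and the wear rate with them.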

And yes, that is a lot of NVMe drives for one system: 14 of them, in fact, if I haven't miscounted. Three of those four-way x16 bifurcation risers, and one little x8-to-dual-U.2 bifurcation adapter for my two Optanes. I love systems with lots of PCIe lanes.

It's a pretty neat system, but it has nothing on some of the guys over in the Storage Showoff Thread. :p
 
I know nothing about SnapRAID or DrivePool, but I am going to have to read up now.

I have always been a huge ZFS fan. Its JBOD does its magic in software, and it's reportedly one of the most reliable RAID-like solutions out there due to how well it combats bit rot.

I believe TrueNAS Core (formerly FreeNAS) still has a web-based GUI implementation of ZFS on top of a barebones FreeBSD install. I haven't used it in ages though, instead favoring the DIY approach using ZFS from the command line.

I currently have my main pool configured as follows:
Code:
 state: ONLINE
  scan: scrub repaired 0B in 10:02:09 with 0 errors on Sun Nov 12 10:26:14 2023
config:

    NAME                                               STATE        READ  WRITE CKSUM
    pool                                               ONLINE       0     0     0
      raidz2-0                                         ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
      raidz2-1                                         ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
        Seagate Exos X18 16TB                          ONLINE       0     0     0
    special  
      mirror-4                                         ONLINE       0     0     0
        Inland Premium NVMe 1TB                        ONLINE       0     0     0
        Inland Premium NVMe 1TB                        ONLINE       0     0     0
        Inland Premium NVMe 1TB                        ONLINE       0     0     0
    logs  
      mirror-3                                         ONLINE       0     0     0
        Intel 280GB Optane 900p                        ONLINE       0     0     0
        Intel 280GB Optane 900p                        ONLINE       0     0     0
    cache
        Inland Premium NVMe 2TB                        ONLINE       0     0     0
        Inland Premium NVMe 2TB                        ONLINE       0     0     0

errors: No known data errors

So, essentially a ZFS 12-drive RAID60 equivalent on the hard drives: a three-way mirror of 1TB Gen 3 MLC NVMe drives for small files and metadata, two mirrored Optanes for the log device (speeds up sync writes), and two striped 2TB Gen 3 NVMe drives for read cache.

It works pretty well for me. Well, maybe all except the read cache. The hit rate is atrocious on those. Despite being 4TB total that read cache does very little for me.

That's a very nice array right there.
 