I need advice on a suitable SAS/SATA HBA

SaleB

So, when the project is finished I intend to have an Intel-based computer with an HBA controller card, and an external case with an HP SAS Expander and 4TB WD Red drives connected to it. At the moment my archives are about 20TB. I intend to buy enough drives to have reasonable security, and after that slowly expand. I assume 2-3TB/yr of expansion. I intend to build a case for 20 drives. It should be enough for many years.

The system on the machine will be Open Media Vault, so I need a JBOD-capable HBA that supports 2+TB drives, works fine with Debian, and is PCI Express. Also, I would rather take a card with two external SFF-8088 ports than go through the hassle of internal ports, internal cables, an internal-to-external bracket and then an external cable.

The expander should arrive shortly; the exact model is HSTNM-B017, the sticker on it says ROM 2.08800, 4f12, and it's green.

I have often used the post about expanders and the list from its first post. I almost bought an LSI 3081E-R, but learned at the last moment that it is limited to 2TB drives. Then I also found a post saying that expanders are not really such a good idea.

So, I would like to know: for my scenario, is the expander recommendable, and which HBA cards should I consider?

Thank you for your time,
Sasa
 
Just as an example, I have found on eBay various LSI 3082E cards and the HP H221 (SAS9207-8e), which even has external ports.

For this SAS9207, LSI states that the 9207 comes with IT firmware and the 9217 with IR firmware, but I cannot find the words passthrough or JBOD anywhere in the technical specs. I would not like to end up with a card that I cannot use.

And I have zero experience with all of this
 
The LSI 9207 is perfect if you just want to offer disks to the OS without any RAID functionality,
similar to SATA/AHCI (this HBA is one of the best for JBOD or ZFS software RAID).

Disks > 2TB: no problem
 
Excellent, that's very good news.

Yes, I would like to use it without HW RAID

Thank you
 
I intend to buy enough drives to have reasonable security, and after that slowly expand. I assume 2-3TB/yr of expansion. I intend to build a case for 20 drives. It should be enough for many years.

Of course I am reading into this without context on your plans (other than what you have written), but please follow the mantra of RAID IS NOT A BACKUP. A single bad power supply can take out your entire array, no matter how many parity disks you have. If your data is important to you, either buy enough external singles for staged backups, or build a second box to maintain a full backup of your important data.
 
I understand your concerns and I fully concur.
But at the moment I have four 4TB drives and a few 2TB drives spread across two computers working 24/7 without a backup. Over the last few years SMART helped me transfer data off three disks three to five days before those drives died. No data lost in the process. No backup. From that perspective RAID, while not a backup, will be a huge relief.
 
You should not expect that SMART will warn you. Many disk failures occur without a SMART warning. If you want the best possible data security, use ZFS as the filesystem with ZFS software RAID and a pool layout that allows any two disks to fail.

Aside from a real disaster (fire, a stolen server, or overvoltage from a lightning strike), this will help in nearly all cases that usually mean data loss; even a virus that deletes or encrypts data is covered, thanks to the versioning filesystem with read-only snapshots.
 
I have read about ZFS mostly on this forum and some blogs. I liked it very much, but it needs ECC RAM, and because of that it needs a real server board. When the ASRock C2550D4I was announced I was thinking about ZFS on that board, but then I read a few horror stories where people lost all of their data, scratched that idea, and focused on the option of Linux/mdadm RAID 6 for media/other files and mirror RAID for personal data.

If I remember correctly, in ZFS drives are arranged in a few vdevs that follow the same quantity rules as RAID (+1 for Z1, +2 for Z2, +3 for Z3), and the vdevs are then arranged in a zpool. The deal-breaker was when I found that if I lose a vdev for any reason, I lose the whole pool. That seemed like a huge risk for an inexperienced user such as myself. On the other hand, with regular software RAID (mdadm) I can have an eight-drive RAID 6 for some data, another six-drive RAID 6 for other data, and a 1+1 disk mirror for the most important data, and these arrays are fully independent of one another.
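
To make that quantity rule concrete, here is a minimal sketch (illustrative only; it assumes whole drives of equal size and ignores ZFS metadata/padding overhead and TB-vs-TiB differences):

```python
# Usable space of one RAIDZ vdev = (drives - parity) * drive size.
def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    return (drives - parity) * drive_tb

print(raidz_usable_tb(6, 4, 1))  # RAIDZ1 (+1): 6x4TB -> 20.0 TB usable
print(raidz_usable_tb(6, 4, 2))  # RAIDZ2 (+2): 6x4TB -> 16.0 TB usable
print(raidz_usable_tb(6, 4, 3))  # RAIDZ3 (+3): 6x4TB -> 12.0 TB usable
```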

I had an opportunity to run an Open Media Vault virtual machine with a few virtual drives on my main machine. I simulated some disk failures and was able to recover without huge problems, so when a serious situation comes I will not be doing it for the first time. For a beginner like myself that is a huge bonus.

As for SMART, yes, I know it's limited to measurable problems. One of my provisions is SMART, and the other is that I try to replace drives after 3-5 years. There is also a burn-in period: in the first month I only add data that is backed up elsewhere, so I can cope with a DOA problem. It is interesting to me that there are Samsung drives with more than 1000 active hours (the only drawback is that they max out at 2TB), and there are Seagate drives that barely get through the warranty period (one died after 17 months, another at 26 months, a third at 21 months, the fourth is still active), while WD Green and Red are fine, with some 2-3 degree temperature difference.

Until now I have been lucky, but I intend to make some changes so I am not depending on luck alone.
 
There is also a financial point that may force me in the direction of mirror RAID for all the data, because it's much simpler to buy two drives for another array than to buy six or eight drives; but that is a story that has nothing to do with technology.
 
ZFS doesn't require ECC. It's just highly recommended, but it would be for any filesystem. ZFS is no different.

You can create different pools, and have a separate RAIDz1 (or VDEV of your choosing) in each pool. That would be equivalent to multiple RAID6 arrays. Plus, even if you did lose a VDEV in a pool, data recovery isn't impossible.

Gea is correct - ZFS is the best storage option out there. There's a learning curve, but it's worth it. I've been using ZFS for my primary file storage for years, and now wouldn't even consider standard RAID. It's more expensive, doesn't protect against many types of failures, and lacks all of the major benefits that ZFS has (integrity, expansion, flexibility).
 
There is also a financial point that may force me in the direction of mirror RAID for all the data, because it's much simpler to buy two drives for another array than to buy six or eight drives; but that is a story that has nothing to do with technology.

You can run drives in a ZFS mirror very easily. I do that for my SSDs that act as a VMware datastore. No offense, but your education in ZFS seems lacking. The problems you describe with ZFS are nonexistent.
 
I have read about ZFS mostly on this forum and some blogs. I liked it very much, but it needs ECC RAM, and because of that it needs a real server board. When the ASRock C2550D4I was announced I was thinking about ZFS on that board, but then I read a few horror stories where people lost all of their data, scratched that idea, and focused on the option of Linux/mdadm RAID 6 for media/other files and mirror RAID for personal data.

If I remember correctly, in ZFS drives are arranged in a few vdevs that follow the same quantity rules as RAID (+1 for Z1, +2 for Z2, +3 for Z3), and the vdevs are then arranged in a zpool. The deal-breaker was when I found that if I lose a vdev for any reason, I lose the whole pool. That seemed like a huge risk for an inexperienced user such as myself. On the other hand, with regular software RAID (mdadm) I can have an eight-drive RAID 6 for some data, another six-drive RAID 6 for other data, and a 1+1 disk mirror for the most important data, and these arrays are fully independent of one another.

I had an opportunity to run an Open Media Vault virtual machine with a few virtual drives on my main machine. I simulated some disk failures and was able to recover without huge problems, so when a serious situation comes I will not be doing it for the first time. For a beginner like myself that is a huge bonus.

As for SMART, yes, I know it's limited to measurable problems. One of my provisions is SMART, and the other is that I try to replace drives after 3-5 years. There is also a burn-in period: in the first month I only add data that is backed up elsewhere, so I can cope with a DOA problem. It is interesting to me that there are Samsung drives with more than 1000 active hours (the only drawback is that they max out at 2TB), and there are Seagate drives that barely get through the warranty period (one died after 17 months, another at 26 months, a third at 21 months, the fourth is still active), while WD Green and Red are fine, with some 2-3 degree temperature difference.

Until now I have been lucky, but I intend to make some changes so I am not depending on luck alone.

It doesn't "require" it to work; it is strongly recommended to get the full benefit of ZFS (and most file systems, really), but many people run without it. There are more affordable boards that have ECC. I use a Supermicro X9SCL (might be an X9SCM), both of which I would call affordable, with a Core i3 and 16GB of ECC. Not an expensive combo.

You can have as many pools as you want. There is nothing keeping you from having two RAIDz2 pools in one server, or twenty pools. You do not have to combine them into one pool as multiple vdevs unless you want one pool with all that space.

Losing a vdev is bad news, but a pool made up of two RAIDz2 vdevs would have to lose three drives from the same vdev at the same time to lose the pool. A pool with two RAIDz2 vdevs could lose four drives (two from each vdev) and keep on going.
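
As a rough illustration of that failure arithmetic, here is a minimal sketch (it assumes two RAIDz2 vdevs and only models whole-drive failures):

```python
# A pool survives only if every vdev has lost no more than `parity` drives.
def pool_survives(failures_per_vdev, parity=2):
    return all(failed <= parity for failed in failures_per_vdev)

print(pool_survives([2, 2]))  # True: four failures, two per RAIDz2 vdev -> pool lives
print(pool_survives([3, 0]))  # False: three failures in one RAIDz2 vdev -> pool lost
```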
 
It doesn't "require" it to work; it is strongly recommended to get the full benefit of ZFS (and most file systems, really), but many people run without it. There are more affordable boards that have ECC. I use a Supermicro X9SCL (might be an X9SCM), both of which I would call affordable, with a Core i3 and 16GB of ECC. Not an expensive combo.

What? As far as I know, that ECC is useless because the processor does not support it.
 
These Supermicro boards support ECC with the i3 and the Xeon.
 
You can have as many pools as you want. There is nothing keeping you from having two RAIDz2 pools in one server, or twenty pools. You do not have to combine them into one pool as multiple vdevs unless you want one pool with all that space.

I did not know this. This changes everything.

As for ECC, I know it is not required, but for a filesystem that does most of its work in memory it seems impractical in the long run to treat ECC as optional.

On the i3 subject, I have read that there are specific models in each series that support ECC while others do not, but it is interesting that, according to Intel ARK, the Pentium G2020 and G2030 support ECC memory. Would this processor be too slow for a ZFS machine?

I want to think again about using ZFS; I still need to figure out whether there is an option that combines Open Media Vault and ZFS.
 
I did not know this. This changes everything.

As for ECC, I know it is not required, but for a filesystem that does most of its work in memory it seems impractical in the long run to treat ECC as optional.

On the i3 subject, I have read that there are specific models in each series that support ECC while others do not, but it is interesting that, according to Intel ARK, the Pentium G2020 and G2030 support ECC memory. Would this processor be too slow for a ZFS machine?

I want to think again about using ZFS; I still need to figure out whether there is an option that combines Open Media Vault and ZFS.

Yes, most people consider it required. But it will work without it.

I've never heard of the G2020 and G2030, so I don't know. The specs seem fine.
 
I have researched a bit. There is no (recommendable) way to combine ZFS and Open Media Vault, so that leaves FreeNAS. It seems that most, if not all, of the services I need (even UPnP) also exist for FreeNAS, so there is no problem.

I have also researched the motherboards. At the time of my last comment I did not realize that the X9 boards are LGA1155 Intel gen 2/3 boards. So I researched availability and prices: in a specific German shop the X9SCM-F costs 192 EUR, and a gen 4 LGA1150 board, the X10SLM-F, costs 194 EUR. The plus point there is that I already have a G3220 processor, which is ECC compatible. Can you comment on this board? There is no FreeBSD driver data on the Supermicro site, but there are documented cases of its use in the FreeNAS forum.

There is still the question of memory. I have to find which Kingston modules are compatible, because the recommended Samsung/Micron modules are either unavailable or way too expensive in Europe. I have been thinking about trying Kingston Server Premier or the Kingston KFJ-PM316E/8G; they are around 60-70 EUR per 8GB in Germany, and 120 EUR in my country. I intend to take two for a start and see where that gets me.

One additional question: is the SAS expander usable with ZFS, or is it also a no-go?
 
ZFS runs just fine on "lower end" CPUs. If you're just doing basic storage (and not something like deduplication), then just about any CPU you buy today will suffice. I believe one hard recommendation is using a 64-bit CPU, but you'd have a tough time finding one that isn't anyway.

Years back when I did my first ZFS build (2011?) I used the cheapest CPU I could find for my motherboard and I had no issues with performance. Didn't use ECC RAM either. Had zero issues, and in fact that same pool I migrated all the way up to the system I'm using today.

Another huge benefit to ZFS. You can "export" an existing pool, "import" it to an entirely new machine, and all your data is intact. Just move all the drives in a pool to a new machine and off you go. I've even done it between OmniOS (Gea's brainchild w/ napp-it) and FreeNAS. Or you could do it to ZFS on Linux. You're not stuck with a specific RAID controller and OSs that support it.
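
A minimal sketch of that export/import cycle, driven from Python (the pool name "tank" is only an example, and this assumes the ZFS command-line tools are installed on both machines and the script runs with root privileges):

```python
import subprocess

# On the old machine: cleanly export the pool before pulling the drives.
subprocess.run(["zpool", "export", "tank"], check=True)

# After moving the drives to the new machine: import the pool by name.
subprocess.run(["zpool", "import", "tank"], check=True)
```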
 
I have researched a bit. There is no (recommendable) way to combine ZFS and Open Media Vault, so that leaves FreeNAS. It seems that most, if not all, of the services I need (even UPnP) also exist for FreeNAS, so there is no problem.

I have also researched the motherboards. At the time of my last comment I did not realize that the X9 boards are LGA1155 Intel gen 2/3 boards. So I researched availability and prices: in a specific German shop the X9SCM-F costs 192 EUR, and a gen 4 LGA1150 board, the X10SLM-F, costs 194 EUR. The plus point there is that I already have a G3220 processor, which is ECC compatible. Can you comment on this board? There is no FreeBSD driver data on the Supermicro site, but there are documented cases of its use in the FreeNAS forum.

There is still the question of memory. I have to find which Kingston modules are compatible, because the recommended Samsung/Micron modules are either unavailable or way too expensive in Europe. I have been thinking about trying Kingston Server Premier or the Kingston KFJ-PM316E/8G; they are around 60-70 EUR per 8GB in Germany, and 120 EUR in my country. I intend to take two for a start and see where that gets me.

One additional question: is the SAS expander usable with ZFS, or is it also a no-go?

Either of those boards should work. You don't need specific "FreeBSD" drivers. I myself have an X10SL7-F and it works great. For memory, use what's on the approved list and you shouldn't have any issues.

SAS expanders are fine with ZFS. You just want direct connections to the drives, not going through RAID controllers. I have a SAS HBA with two SFF-8086 ports. One port connects to a 16-drive chassis through an expander, and ZFS can natively see all 16 drives. My throughput isn't great, but it's more than enough for data storage.
 
I have been researching over the last few days and I have a few additional questions, if I may.

I know the rule of thumb of 1GB RAM per 1TB of HDD in a ZFS system. Now that I know separate storage pools are doable in ZFS, my intention is (once I buy all the disks) to have three pools: two Z2 pools of six 4TB drives, giving me 16TB of usable space each, and a mirror of 1+1 drives giving me an additional 4TB. This totals 56TB gross and 36TB net. I have also read that using a few pools with one vdev each gives better performance than using one pool with three vdevs.
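
Spelling the arithmetic out, a minimal sketch of the planned layout (gross counts raw drive capacity, net subtracts parity and mirror copies; real formatted capacity will be somewhat lower):

```python
# Two 6x4TB RAIDZ2 pools plus one 2x4TB mirror pool.
pools = [
    {"drives": 6, "tb": 4, "redundancy": 2},  # RAIDZ2 pool #1
    {"drives": 6, "tb": 4, "redundancy": 2},  # RAIDZ2 pool #2
    {"drives": 2, "tb": 4, "redundancy": 1},  # 1+1 mirror pool
]
gross = sum(p["drives"] * p["tb"] for p in pools)
net = sum((p["drives"] - p["redundancy"]) * p["tb"] for p in pools)
print(gross, net)  # 56 36 -> 56TB gross, 36TB net
```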

The question is: does the RAM rule apply at the level of the whole ZFS system or per pool, and does it take into account gross or net capacity?

I am asking because this board (X10SLM-F) is limited to 32GB, so I would like to know if it would become a problem at some point down the road. Also, in this particular case, how much would an L2ARC help if there is an unfavorable RAM[GB]/HDD[TB] ratio? And how much do the read/write speeds really suffer from the lower memory?
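
For what it's worth, here is how the rule of thumb works out for this layout under either reading (a sketch only; as the replies below note, the rule is closer to a FreeNAS suggestion than a hard requirement):

```python
# 1GB RAM per TB of storage, applied system-wide to the planned layout.
gross_tb, net_tb = 56, 36   # totals from the layout above
board_limit_gb = 32         # X10SLM-F maximum

print(gross_tb * 1, board_limit_gb)  # 56 vs 32 -> over the limit if counted on gross
print(net_tb * 1, board_limit_gb)    # 36 vs 32 -> still slightly over on net
```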
 
The rule of 1GB RAM per TB of data is something you always hear in the FreeNAS world, and even there I would see it more as a suggestion for when you need a very fast system with the option to use all features like dedup.

On Solaris the Oracle specs demand 1-2GB RAM without any restriction on pool size. If you avoid dedup, I would say use at least 4GB; everything above that gives better performance, as it is used as a read cache, but it is not needed for stability.

About performance:
If you use several pools, this may be faster for concurrent access by different people to different pools.
If different people access the same pool, a pool made of multiple vdevs is faster, as sequential pool performance scales with the number of data disks and IOPS scale with the number of vdevs (ZFS stripes over vdevs in a RAID-0 manner).

Mostly you use several pools for different use cases, for example an SSD-only pool for a datastore and virtual machines, and a slower pool with spindles for backup or a general-use filer.
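
As a very rough way to picture that scaling, here is a sketch with invented per-disk numbers (placeholders only, not benchmark figures):

```python
# Sequential throughput ~ number of data disks; IOPS ~ number of vdevs.
def pool_estimate(vdevs: int, disks_per_vdev: int, parity: int,
                  disk_mbps: float = 150.0, vdev_iops: float = 250.0):
    data_disks = vdevs * (disks_per_vdev - parity)
    return data_disks * disk_mbps, vdevs * vdev_iops

print(pool_estimate(1, 6, 2))  # one 6-disk RAIDZ2 vdev  -> (600.0, 250.0) MB/s, IOPS
print(pool_estimate(2, 6, 2))  # two 6-disk RAIDZ2 vdevs -> (1200.0, 500.0) MB/s, IOPS
```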
 
What you'll find with a pool is that it'll show both gross capacity and net capacity, obviously affected by your choice of VDEV(s) for those pools. Memory, to my knowledge, isn't accounted per pool but for the system as a whole, inclusive of all pools. You shouldn't have to worry about it; ZFS should handle memory management just fine on its own. For you, more memory means more performance, but depending on what you're doing you may not notice a difference. 32GB of RAM should give you plenty of headroom. If you're not doing deduplication, then you won't hit any bottlenecks anytime soon.

To Gea's points, you may want to combine both your RAIDz2 VDEVs in a single pool. Matching VDEV types in a single pool should improve performance. I would keep your mirrored drives in a separate pool. Mixing VDEVs is generally something to avoid in a single pool.

I'm running a ~22TB pool on my setup and have 16GB of RAM for ZFS. I'm primarily serving media files, and if I check the ARC hits (memory cache), I get well over 90% hits. If that number were lower, it would mean I was likely hitting memory constraints.
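
If anyone wonders where that percentage comes from, it is just the hit counter over total lookups; a sketch with invented counter values (the real numbers come from the OS's arcstats output):

```python
arc_hits, arc_misses = 9_600_000, 400_000   # made-up example counters
hit_ratio = 100 * arc_hits / (arc_hits + arc_misses)
print(f"{hit_ratio:.1f}%")  # 96.0% -> comfortably over the ~90% mentioned above
```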
 
Mostly you use several pools for different use cases, for example an SSD-only pool for a datastore and virtual machines, and a slower pool with spindles for backup or a general-use filer.
That is exactly my configuration, and it works very well for the money spent. I run the usual 9211-8i HBA in IT mode.
 
Thank you all. I have just read up, understood, and ordered one SAS9207-8E at a very low price. I also ordered the Supermicro X10SLM-F and two 8GB Kingston KVR16E11/8HB sticks (Hynix die) to wait for me somewhere in Germany.

When the time comes for installation and tuning I may have some more questions.
 