RAID Card or ZFS?

HICKFARM

So I have been doing a bit of reading about ZFS, and it seems to have some advantages and disadvantages.

I mostly just use Windows for all the PCs in my house, so a ZFS server would be the only Linux system in the house. Would any of the Windows PCs be able to access the server? If I used it solely for Plex, no big deal, but I have pictures and personal video on there as well. How can I access that data from a Windows system?

I currently have this Adaptec RAID 51645 and I can't seem to find a reason not to just use it instead.

With a ZFS system I would need to have ECC RAM. With the Adaptec RAID card I shouldn't have to, since it controls all the data.

I have seen a lot of ZFS builds on here and am just curious how everyone accesses the data. Maybe I am just missing something. Are all these people using Linux for their main OS? Also, how would I copy data from my current Windows 7TB RAID 5 array over to the ZFS array that is in Linux?
 
If you need ECC RAM for ZFS, you definitely also need it for NTFS.

I was just reading that memory corruption is more catastrophic in ZFS, where it is compounded and can ruin your whole pool, whereas with NTFS only a few files within the RAID will corrupt. I've been doing some research into ECC memory as well, and it looks like I need a good Intel server motherboard for it. It seems the only consumer CPUs that support ECC are a few i3s and Celerons, or it looks as if AMD supports ECC memory even in desktop processors.
 
Nope, you've read it wrong. ZFS without ECC memory is still safer than NTFS without ECC memory.

Matthew Ahrens:
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.

I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
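For reference, on ZFS on Linux that debug flag is just a module parameter, so turning it on looks roughly like this (a sketch; the sysfs path and modprobe file assume a stock ZoL install, so check your platform's docs):

  # enable ZFS_DEBUG_MODIFY for the currently loaded module
  echo 0x10 | sudo tee /sys/module/zfs/parameters/zfs_flags
  # make it stick across reboots
  echo "options zfs zfs_flags=0x10" | sudo tee -a /etc/modprobe.d/zfs.conf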
 
Alright, so I should use ECC memory no matter what. How about communication with Windows sharing? How would I transfer data from a RAID card array in Windows to a Linux ZFS array?
 
Copy and paste to the share. The shares on the ZFS server will act like any share on the network. You would not be able to pull a drive from the ZFS server and read it in a Windows box, but over the network it will work just fine, as that is SMB etc.

From the questions you are asking, you might instead look at a Synology, QNAP, or Asustor NAS for your needs. Easier and less technical. Unless your plan is to learn and you are not in a hurry to put it into use with critical files right away (i.e. you can afford to lose the data in the process of learning).
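As for getting the data over, one common route is to mount the Windows share on the Linux box and copy it onto the ZFS dataset over the network. A very rough sketch; the host name WIN7BOX, the share name media, and the dataset mount point /tank/media are all placeholders:

  # mount the existing Windows share read-only on the ZFS box
  sudo mkdir -p /mnt/winshare
  sudo mount -t cifs //WIN7BOX/media /mnt/winshare -o username=youruser,ro
  # copy everything onto the ZFS dataset, resumable if interrupted
  rsync -avh --progress /mnt/winshare/ /tank/media/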
 

Well, I am already running a RAID 5 array with five 2TB drives using Windows 7, so I know a little bit of what is going on here. I have just never mixed operating systems and multiple file systems together. My current 7TB RAID 5 array is a bit lacking in space, and I want to build another system with a lot more room for improvement. And yeah, this build is to learn as well; I will still have all my files backed up on my current server, so any catastrophic event while setting up won't be a problem. It's mostly for Plex, so no critical files.

I am leaning towards just using Windows 7 and the RAID card I linked. I can easily hook up 16 HDDs with just the internal connections alone. I have a lot of extra 2TB drives kicking around to make up another RAID 5 array, and that will free up the space on my three 4TB drives that I have media stored on, since I have been procrastinating for the last year on what I want to do. Then I can make a RAID with the 4TB drives and just expand the volume if I need more space. I have read you can't easily expand a volume ("vdev") in ZFS like you can with a hardware RAID.
 

I'm running ZFS on Linux with SMB and using it just fine, mounted as a drive on Windows 8; couldn't be happier.
Hardware RAID is expensive, adds another failure point, and is more limited: with ZFS you can pick RAIDZ2 if you want to be sure your data survives two dead disks, or RAIDZ3 for three.
ZFS really is the way to go. ECC is not a must-have, but a should-have.

It's true that vdevs are hard to expand, but plan ahead: if you need an array of a certain size, figure out how many disks you'll need and which RAIDZ level you'll use, then get an enclosure that supports that or use your existing case. It's not hard, it just seems so. Spend 30 minutes thinking hard about it, or spend a lot of time swearing after you lose your data.
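To make the "plan ahead" part concrete, creating a pool and the two usual ways of growing it look roughly like this (the pool name tank and the device names are placeholders; real setups usually use /dev/disk/by-id paths):

  # six-disk RAIDZ2 pool: any two disks can die
  zpool create tank raidz2 sda sdb sdc sdd sde sdf
  # option 1: grow capacity later by adding a whole second vdev
  zpool add tank raidz2 sdg sdh sdi sdj sdk sdl
  # option 2: swap every disk for a bigger one, letting each resilver finish before the next
  zpool set autoexpand=on tank
  zpool replace tank sda sdm    # repeat for each disk in turn

What you cannot do (at least with current ZFS) is add a single disk to an existing RAIDZ vdev, which is the expansion trick hardware RAID cards offer.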
 
Hi there,

I've been using the same 51645 controller for about 6 years with 16x 3TB Seagates. You can also swap the existing HDDs for bigger ones, one by one; after you have replaced every disk (and each rebuild has completed) you can expand the array. It is possible, I want to say.
You can also swap out the whole controller if you have to, for the same model or a newer one with the same number of ports. You could also buy an HP MSA60 and hook it up to your external SAS port, which gets you an extra 12 slots; also useful for migration work :)
 

Yeah, I am currently using a HighPoint card with my current array. My previous HighPoint RAID card fried on me, but it wasn't too hard to recover the array using a new card from the same manufacturer.

So are you running RAID 6? Also, do you use the battery pack on that RAID card? And how does one SAS port handle 12 drives at once when the other internal connectors can only do 4? Is the bandwidth per drive reduced?

I am still considering software RAID; I believe I can configure that Adaptec 51645 card to work as a JBOD card as well. Seems like I just need to configure Samba to share the folder to Windows?
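From what I've read so far, a bare-bones Samba share of a ZFS dataset only needs a few lines; the share name, path, and user below are placeholders I made up, so adjust to your setup:

  # append to /etc/samba/smb.conf
  [media]
      path = /tank/media
      read only = no
      valid users = hickfarm

  # create the Samba user and reload the daemon
  sudo smbpasswd -a hickfarm
  sudo systemctl restart smbd

After that the share should show up to Windows like any other network share.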
 
Any exact reason why? My guess is ZFS is a little bit much for my smaller setup.
Personally I'd suggest ZFS over a RAID card for any home setup. That way there's zilch chance of a failed card and the hunt for an identical one.
Motherboard or other bit fails on a ZFS setup? Transplant to a new system and you're done.

As far as which host OS: I've seen Linux, OpenSolaris, and FreeNAS (FreeBSD) ZFS builds.
 
Any exact reason why? My guess is ZFS is a little bit much for my smaller setup.

Exactly. I pretty much stick with AMD-based motherboards and use the onboard AMD RAID controller. I've moved my RAIDed drives between 3 or 4 AMD motherboards, going to newer-generation boards, and every single one of them was able to read the drives when moved over, with zero problems. The only time you'll have problems with the AMD motherboard RAID controller is if you get in there, start jacking around with settings, and configure yourself into an unsupported configuration.

ZFS is rife with issues, it's overly complicated, and it has steep requirements. Yes, it has its benefits, and you can move your data somewhat easily between setups, but if you watch this forum there are a fair number of threads where ZFS melted down and folks needed help.

And besides, whether it's ZFS, motherboard, or drive failure, using RAID does not replace a good backup strategy.
 
Personally I use ZFS for absolutely everything, from small two-disk mirrors all the way up to my 12-hard-drive server with two mirrored SSDs for a ZIL/SLOG and two striped SSDs for read cache.


ZFS is rife with issues

Such as? The only one I can think of right now is that there is no way to defragment it, but my current 12-disk 48TB pool has been running for 2-3 years now, my fragmentation levels are still pretty low, and even then the way ZFS works limits the performance impact of fragmentation.
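For reference, fragmentation is easy to keep an eye on. A quick check, assuming a pool named tank:

  zpool list tank                         # the FRAG column is metaslab fragmentation
  zpool get fragmentation,capacity tank   # or query the properties directly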

it's overly complicated

Personally I find even managing ZFS manually from the command line to be dead simple, and if you aren't comfortable with the command line there are tools (FreeNAS, napp-it, etc.) you can use.
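To give a feel for it, the day-to-day commands really are short. A sketch, assuming a pool named tank:

  zfs create tank/media                # carve out a dataset for media
  zfs set compression=lz4 tank/media   # cheap, transparent compression
  zfs list                             # datasets and space usage
  zpool status tank                    # pool health and resilver progress
  zpool scrub tank                     # verify every block against its checksum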

and it has steep requirements.

Nope. The oft-quoted recommendation of 1GB of RAM per TB of drive space is not a requirement at all. More RAM helps performance because you get a bigger ARC, but the 1GB-per-TB "requirement" is mostly fiction propagated on the FreeNAS forums. I'd even argue that for a basic server ZFS has LOWER requirements, as you can get away with using onboard SATA or cheap HBAs rather than fancy RAID controllers.
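And if RAM is tight, the ARC can simply be capped. On ZFS on Linux that is a module parameter; a sketch that limits it to 4 GiB (value in bytes, pick whatever fits your box):

  # cap the ARC for the running module
  echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
  # and persist it across reboots
  echo "options zfs zfs_arc_max=4294967296" | sudo tee -a /etc/modprobe.d/zfs.conf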

And besides, whether it's ZFS, motherboard, or drive failure, using RAID does not replace a good backup strategy.

This part I agree with completely.

Drive redundancy protects you from drive failures, but not from accidental deletions, loss due to fire/flood, ransomware, etc.
 
I would go as far as to say that it doesn't matter whether you're running ZFS, hardware RAID, software RAID, Storage Spaces, or unRAID, as long as you have viable backups. These can be off-site (Crashplan) or on-site.

At the moment I'm running ZFS on my primary file server and unRAID on my backup file server. Still working on a viable off-site backup solution though.

Each approach has its good and bad points. If you're insisting on sticking with Windows, why not run your OS on two SSDs (120GB will work) in a software RAID 1 and your storage array in Storage Spaces? You can then either back up to an additional server (an Ubuntu server running two 8TB archive drives in a software RAID 1, with a backup task set to run every N days) or buy a Crashplan account and keep your backups off-site.
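On the Ubuntu side, that "backup task every N days" can be as simple as a cron job mirroring the primary share. A sketch only; the paths are placeholders and it assumes the primary's share is already mounted (e.g. via CIFS in /etc/fstab):

  # /etc/cron.d/nas-backup: mirror the primary share every 3 days at 02:00
  0 2 */3 * * root rsync -a --delete /mnt/primary/ /srv/backup/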
 
I've been using ZFS in various forms for years, wouldn't consider anything else. Try FreeNAS if you want a storage appliance. Gets you a nice GUI where you can set everything up, including Windows (SMB) shares. I've moved my data pool between multiple systems including new motherboards/CPUs, upgraded the hard drives, all without ever losing data. In one case, I copied my datapool to a new set of drives, then gave the old ones to a friend. Once FreeNAS was installed on his entirely new set of hardware, he imported my pool with movies & tv shows on it. Worked great.

You can also take snapshots, backups, etc, all very easy. Perfect to get data protected off-site.
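For anyone curious what "moving the pool" actually involves, it's roughly two commands (the pool name tank is a placeholder):

  zpool export tank     # on the old box: cleanly release the pool
  zpool import          # on the new box: scan the attached disks for importable pools
  zpool import tank     # import it by name; datasets and their properties come along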
 
So while this does NOT replace backups, ZFS does have a neat feature that gives your data even more safety: snapshots. ZFS is a copy-on-write filesystem, which gives you basically performance-free snapshots (you must set them up). So if you take regular snapshots and get hit by a ransomware virus, you just restore the files from a snapshot taken before the infection. This is usually easier than restoring from a backup and certainly much faster. Again, even with this feature you still want backups, but it does cover another failure case that backups also handle.
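The day-to-day snapshot handling is short too. A sketch, assuming a dataset tank/media and a snapshot name I made up:

  zfs snapshot tank/media@snap1        # cheap point-in-time copy
  zfs list -t snapshot                 # see what snapshots you have
  zfs rollback tank/media@snap1        # roll the whole dataset back
  # or pull individual files out of the hidden snapshot directory
  cp /tank/media/.zfs/snapshot/snap1/somefile /tank/media/

Regular snapshots are usually driven by a cron job or a tool like zfs-auto-snapshot rather than by hand.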
 
Personally I use ZFS for absolutely everything, from small two-disk mirrors all the way up to my 12-hard-drive server...

I can't argue with this response, because you clearly have the skill set and the experience to make this work. For the average Joe who's never touched a Linux command line in their life? Not a good fit.
 
All my personal pictures and videos are backed up on separate external USB drives, so if I were to lose all my data on the RAID I really wouldn't care. I mean, sure, it would be a pain to re-catalog everything, but nothing is irreplaceable.

So I don't need to worry about off-site backups and other overhead. It seems everyone uses a different storage solution, and I can see ZFS being great in a small company where people accidentally delete files and snapshots would be super handy. But for what I am doing it may be a lot more than I need; it is mostly a Plex server. In the meantime, while I am looking for a cheap Xeon CPU solution with ECC RAM, I am installing Linux in a VM to mess around with it and ZFS some more. I do have a Xeon server already with 2x E5335, but it is just a power hog. I'm looking into finding a cheaper E3-1220 setup that is a little more future-proof as well.
 
I can't argue with this response, because you clearly have the skill set and the experience to make this work. For the average Joe who's never touched a Linux command line in their life? Not a good fit.

Yeah, I agree. I have fiddled with Linux a bit, but not all that much. And it can be really hard when every explanation or tutorial is full of abbreviations and other things you would need to have used Linux as your main OS to know. Then I end up with five Google windows open looking up what they are referencing.
 
You should really look into FreeNAS. I'm running it as a VM on a Xeon server w/ ECC memory, but it works just as well on bare metal. You can use FreeNAS to create "jails", which are basically mini-VMs, to run apps like Plex, SABnzbd, Sonarr, etc. Aside from setting up your network, you never need to hit a command line. Gets you all the benefits of ZFS without having to directly fiddle with it. I've been running my Plex server for several years now like this. SO much easier than having separate VMs for everything.
 

I always just ran Plex right on the base OS itself. I don't know if that is technically the right thing to do, but I don't see the harm in it; let it have all the resources it needs for transcoding. I will admit my collection is very diverse when it comes to .mkv, .avi, .mp4, and the like.

I will look into FreeNAS as well. I just like the versatility of Windows: just use TeamViewer and remote into the OS, and I can install a game server and other things on it. I will admit I need to set up more automation with my downloads. I haven't used Sonarr, but that's another thing to look into.
 
For my setup I'm running ESXi, so I can run multiple VMs, FreeNAS w/ Plex being one of them. My motherboard has IPMI, so I can remote in and turn everything on/off if needed, and with a VPN on my router I always have remote access. In your case, you could easily run a second VM for Windows alongside FreeNAS.

It's been a great setup, but it does require an extra level of technical knowledge, as you need to pass through an HBA (disk controller) to the FreeNAS VM. My Supermicro board has an on-board HBA, so that part is easy. You just need a Xeon CPU with the proper virtualization extensions.
 

So let me get this straight: ESXi is installed on the computer and allows virtual machines to basically talk to the hardware more effectively than a virtual machine running inside Windows? I also have a VPN on my router that I need to configure. I don't believe the motherboard I plan on using has IPMI. How do I know what CPU extensions are needed? I currently have 2x E5335 chips installed on the board.

So it seems ESXi is meant for installing several different OSes on one server. What is the point if I only end up with two or so? Is it easier to fix if an OS corrupts or something?
 
I don't know if those CPUs will pass through the HBA. You need the VT-d extension, but looking at the Intel spec page I only see VT-x. So you'd need a new setup if you wanted to run FreeNAS as a VM and pass through a controller.

And yes, ESXi is a VM "hypervisor" letting you install multiple virtual machines onto a single piece of hardware. I'm using it because I have 32GB of RAM but don't need all of it on a single machine. I can take snapshots of VMs, save those for backups, restore back if an OS update breaks something, etc. So there are benefits even if you only run a couple of machines. There is a performance benefit too vs. running one VM on top of another OS. Here's a link describing the difference between Type 1 and Type 2 hypervisors: Comparison Type 1 vs Type 2 Hypervisor ~ GoLinuxHub
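If you want to double-check on the box itself, booting any Linux live USB gives a rough answer (the BIOS/UEFI also has to have VT-d enabled for it to actually work):

  grep -c -E 'vmx|svm' /proc/cpuinfo     # non-zero = VT-x/AMD-V, basic virtualization
  sudo dmesg | grep -i -e DMAR -e IOMMU  # DMAR/IOMMU lines indicate VT-d / directed I/O is active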
 
So are you running RAID 6? Also, do you use the battery pack on that RAID card?
I'm running RAID 6 right now, and yes, I've got the BBU (sometimes you can get it for 35€ on eBay).
The external port has 4 channels, so 4x 3Gb/s, meaning you are "limited" to 12Gb/s shared across all the drives. The MSA works like a JBOD (please correct me if I'm wrong), but as long as you don't put any SSDs in the MSA you won't run into performance trouble.

Sure, you can also use the 51645 in JBOD mode and build software RAIDs...
 
IBM M1015
Dell H200
...flashed to LSI 9211 IT firmware P19: best performance/price for ZFS.
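For completeness, the crossflash itself is done with LSI's sas2flash tool. A very rough outline only, since the full procedure (especially wiping the original IBM/Dell firmware first) varies by card, so read a proper guide before touching yours:

  sas2flash -listall                         # confirm the controller is detected
  sas2flash -o -f 2118it.bin -b mptsas2.rom  # flash the 9211-8i IT firmware (and optional boot ROM)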
 