NAS/SAN/Plex server redesign

Joust

Gents:

I've been using a 16-bay Supermicro chassis with 3570K-based hardware, filled with 4TB drives. I have a 24-bay that I plan to populate with 14TB drives as resources allow.

I have been using 2 HBAs for my current usage, and running the Plex server from the machine. It transcodes fine. All this via FreeBSD and FreeNAS.

I think I see benefits in just having a NAS with separate transcoding units pulling from it. I think I want to do an energy-efficient setup with hardware RAID 6, using Windows 10 or Windows Server as the OS.

What considerations would you all have? I figured I'd do one 24-port RAID card - probably just one gigabit NIC would be enough, I'd think.
 
RAID 6 is not efficient at all. What most people miss with RAID 6 is that you take a double-parity write penalty: you may get the redundancy, but you roughly halve the write performance of the card running your RAID 6 array. If you are transcoding, you must be using ffmpeg. If you are serious about performance, or any kind of transcoding, you don't use a RAID 6 volume as your work space. You've got a bunch of drives: do 4 drives in a RAID 10 as your transcoding work space, then move the data off to a RAID 6 volume when your renders are complete. Renders normally = finals = write once/read many. RAID 6 vs RAID 5 in reads, especially sequential, will perform about the same. Writes will always be slower with RAID 6 due to the double parity. It comes down to your tolerance for how fast your renders come out.
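To put rough numbers on that penalty (back-of-the-envelope only - the drive count and per-drive IOPS below are made-up examples, and a controller's write cache will soften the hit in practice):

# rule-of-thumb small-random-write penalty factors: 2 for RAID 10, 6 for RAID 6
drives=12; write_iops=150                                        # hypothetical spinners
echo "raw:    $((drives * write_iops)) IOPS"                     # 1800
echo "RAID10: $((drives * write_iops / 2)) effective write IOPS" # 900
echo "RAID6:  $((drives * write_iops / 6)) effective write IOPS" # 300

Sequential streaming largely hides this, which is why the disagreement below is really about random I/O.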
 
Any video encoding is going to be CPU-limited, not disk-limited. It's trivially easy to get 1GB/s+ reads and writes with RAID 6 these days. A separate RAID 10 volume is unnecessary.
 
What I have works well for me...
Host OS: ESXi 6.7
Server: Dell T630 (it uses an LSI hardware RAID card), everything is VMFS
RAID: 10x 4TB SAS in RAID 6

Windows 2012R2 VM as the NAS, read-only fileshare to the Plex user (I use the guest account)
CentOS VM for Plex, with the Plex repository added for easy updating
Startup script to mount fileshares at boot time, example: mount -t cifs -o ro,noperm,username=guest,password= //192.168.1.11/TV_Shows /plex/TV_Shows
I also have my Plex VM on a PCIe SSD so that all the tiny thumbnails load nice and quick on my TVs
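The same mount can also live in /etc/fstab instead of a startup script - something along these lines (a sketch, untested against that exact share):

# /etc/fstab - same share, mounted read-only as guest at boot
//192.168.1.11/TV_Shows  /plex/TV_Shows  cifs  ro,noperm,guest,_netdev  0  0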

I've allocated 4 cores of an E5-2640 to plex, so far haven't had any issues, but I have very little concurrent usage.
My array does north of 800 MB/s, not that it needs to be that fast.
I'm currently 10Gbit-limited (PCIe 2.0 10Gbit card); going 40Gb next month when I rebuild my rack.
I also share the files over OpenSSH SFTP to the internet (IP-filtered, but whole ISPs opened up for low-rent DHCP users), and Plex is completely open to the internet, so it's nice having that little bit of extra security of everything being read-only. I hated it when Plex messed with my files.
 
Thank you all for the opinions. I haven't done any VMs, and I have some concern that they will cause some difficulty I cannot foresee. Though, if I can manage to run one VM as the Plex host and one VM as the NAS, that'd be of interest to me.

A Ryzen product seems like it would be well-suited to such a setup.

Then again, perhaps a Windows 10 box with a SMB share open to other machines would be suitable.
 
Most NAS operating systems expect to have real control of underlying disks. Hell, most of them don't even like RAID controllers because they handle their data redundancy in the NAS software.

With that said, your last option - a Windows 10 box with a SMB share - is actually what I use. In my signature, the system marked "Server" is actually running Windows 10 Pro. It's got a RAID6 array (12x 3TB disks) for the data that, in Windows, is just a big D drive. Plex runs on this Windows 10 machine natively.

But since I do need VM support, I then run VMware Workstation on top of Windows, and use it to virtualize a few systems. I run VMware (which is paid) rather than Hyper-V (which is free) because my work involves a ton of VMware stuff, so it's a closer test environment than Hyper-V. But you could 100% do whatever you want on Hyper-V or even VirtualBox or whatever.

As for the RAID, I bought an inexpensive LSI 9265-8i card for like $50. I paired that with one of the cheap HP SAS expander cards I bought on eBay for I think $18. It's a full hardware RAID 6 implementation, and with my exceptionally shitty 3TB 5400 RPM drives I get sequential read speeds in the 1 GB/s range. Sequential writes are way more moderate at ~300 MB/s, obviously, but still plenty fast to write out various media to disk. I've got a couple inexpensive SATA SSDs in the system as well, configured as a RAID 1 array, which is where my VMs live so that they're snappy in terms of performance.

I run an actual 'server' mobo with an ES 10-core CPU, but that's because I got the RAM as cast-offs from my company. If I was building it from scratch and using my own money, I would consider a Ryzen setup for sure.
 
Ah. I have several bits of hardware, most of which I don't want to use for this purpose.

I have two Supermicro dual-socket boards. One is quite old and is like 2 cores / 4 threads per socket; the other is 4 cores / 8 threads per socket. Both are very power hungry. In total, for server use I have 16x 4TB SAS drives, 6x 4TB SATA drives, and 6x 1TB SATA drives. All enterprise drives.

I do not need to run a lab for any purpose. I just want a solid, stable NAS for media (with an expandable RAID array, something that ZFS sucks at), and a transcoder/host for Plex. It seems like a pretty low-demand situation.
 
How many simultaneous users of Plex do you anticipate? More specifically, how many Plex users will simultaneously need transcoding service?

Most NAS devices, even those running Plex, are super low power - think cellphone-CPU low power. Plex supports Nvidia (NVENC) and Intel (Quick Sync) hardware encoding acceleration, which can allow dramatically lower-power CPUs to keep up with the transcoding requirements.

If you don't really use transcoding - for example, most devices played within the local network will direct play on Plex where they aren't transcoded and instead the original media file is streamed directly (my situation)- then essentially any CPU will work. I would get something super low end like a Ryzen 3000G or something for cost/heat/noise/power reasons.

If you are going to need some transcoding support, then something like a Core i3 8100/9100 or perhaps even a Pentium G5400 would work, because they have hardware Quick Sync. A moderate Ryzen CPU would likely work as well, though they'll be doing the encoding in software and thus under a bit heavier load; one of the Ryzen 5 1600 AF CPUs might be handy here.

If you want *better* transcoding, then pairing any CPU with a Turing-block NVENC GPU to handle the encoding will result in better quality by offloading the transcoding work to the Nvidia GPU. Essentially any Turing GPU *except* the basic 1650 would handle this.

If you want a *lot* of transcoding at decent quality, then the easiest way to achieve it is with a Turing-block NVENC GPU of the Quadro variety. GeForce GPU NVENC blocks are driver-limited to 2 simultaneous encodes, where the Quadros have more than that. You can look at Nvidia's NVENC matrix for comparison: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix

The Non-Turing NVENC blocks work, they are just lower quality.
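Plex drives its own transcoder internally, but if you want to sanity-check what NVENC buys you before committing to a card, a rough ffmpeg equivalent of a hardware transcode looks something like this (filenames and bitrate are placeholders):

# decode on the GPU (NVDEC), encode on the GPU (NVENC), pass audio through untouched
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv \
       -c:v h264_nvenc -preset slow -b:v 8M -c:a copy output.mkv

Watching CPU usage while that runs gives you a decent feel for how much a given GPU offloads.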
 
I had no problem running FreeNAS in a Hyper-V VM on top of Server 2016. The NAS drives were passed through to the VM, and I was getting full performance at 10Gbit (~700MB/s reads).

With ZFS, they've focused on RAID expansion from the enterprise perspective first, not unexpectedly -- so you do need to plan it out, but it's not difficult.
 
ZFS works very well, but I need an *expandable* RAID array. Candidly, I don't want to come off the wallet for $4k+ in HDDs at once. I'd rather take it in two or three bites.
 
So, as mentioned, my RAID 6 array is based on an inexpensive LSI 9265-8i card combined with a SAS expander to get me 24 ports. I've never done it, but online capacity expansion (OCE) is a feature supposedly supported by this (and other) LSI cards. So right now I have 12x 3TB drives, and if I for example got 4 more I would, in theory, be able to incorporate them into the array while it was online. I've never done it personally though, since every time I outgrow a RAID array I just start with a new one - I went from 8x 2TB drives to these 12x 3TB drives, for example.

The LSI cards, or any 'real' RAID array that I'm aware of, still won't let you really mix drives of different capacities though. If I was to add some 4TB drives to my array, it would treat them as 3TB drives because that's what the array is built on. If you want the ability to mix drives like that, you might consider something like FreeNAS or Storage Spaces.

Lastly, you could always consider StableBit DrivePool. It sits atop Windows and can combine space from multiple drives, with the redundancy level for your files set to whatever level you deem necessary. It's not *RAID*, in that the data isn't uniformly striped across all the drives, but it *does* combine all the drive capacity into a single "big disk" and can add redundancy to protect against the failure of 1+ drives. It also has the feature of being able to add/remove drives from the pool in an on-demand fashion and not caring about the drives being wildly different in capacity or speed.
 
I have a pretty fair amount of data without a backup - I guess about 50-55TB. So, RAID redundancy is already the bare minimum.

I'm not trying to mix drives. I'm not using the current chassis or HDDs for the new setup. That's why the array expandability matters. I can buy 7x 14TB drives and migrate all my data to that array - even accounting for two parity drives. Then buy more drives later and add them to the volume.
 
Well, just FYI, adding drives into an array via OCE is supposed to be a process that takes a long time and harms the performance of the array a significant amount during the process. Again, my info is all third hand as I've never done it. But it says it can do it on the box, so there you go.
 
Any video encoding is going to be CPU-limited, not disk-limited. It's trivially easy to get 1GB/s+ reads and writes with RAID 6 these days. A separate RAID 10 volume is unnecessary.

Totally right. However, from the OP's post, it looks like this will be a mixed-use environment: a bunch of streaming, a bunch of transcoding, and most likely other "stuff" on top. Depending on the workload and user connection count, that could easily murder one giant RAID 6 volume, especially if the workload is bouncing around between heavy sequential reads, sequential writes, and random I/O. RAID 6 is trash for random I/O. A couple of users? Most likely no big deal. The OP didn't say if he's populating it with SSDs or spinners. Spinners? I'd still go with a RAID 10 volume as a work/staging volume, then move the data to the RAID 6 volume. Hell, two SSDs mirrored together and a bunch of spinners RAID 6'ed together would be ideal.
 
Totally right. However, from the OP's post, it looks like this will be a mixed-use environment: a bunch of streaming, a bunch of transcoding, and most likely other "stuff" on top. Depending on the workload and user connection count, that could easily murder one giant RAID 6 volume, especially if the workload is bouncing around between heavy sequential reads, sequential writes, and random I/O. RAID 6 is trash for random I/O. A couple of users? Most likely no big deal. The OP didn't say if he's populating it with SSDs or spinners. Spinners? I'd still go with a RAID 10 volume as a work/staging volume, then move the data to the RAID 6 volume. Hell, two SSDs mirrored together and a bunch of spinners RAID 6'ed together would be ideal.

Well. It transcodes on demand for the most part. Prefer direct play, but not all users are ...considerate.

Yes, spinners.

I'm currently using a 16-drive ZFS RAIDZ2 array - and it's worked pretty well.

User connection is less than 10 streams. No more than 4-5 simultaneous transcodes.
 
In ZFS it is common to build smaller 'arrays' and then add them to a pool. A typical example is 8 drives in 4 individual mirrors, with all the mirrors pooled into one storage group. This also makes the pool expandable by adding more drives. It could also be multiple small RAIDZ2 sets, or whatever, all pooled.
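Something like this on the command line, if that helps picture it (disk names here are just placeholders for your own):

# four drives as two mirrors pooled together, then grow the pool later
zpool create tank mirror da0 da1 mirror da2 da3
zpool add tank mirror da4 da5     # expansion: just add another mirror vdev
zpool status tank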

FreeNAS is simple, and works on a domain, or with Windows shares and group or user permissions. I actually wrote the first KVM FreeNAS VM guide with full passthrough and everything, on an AMD host to boot. Now, with hardware being so cheap, I usually say it is not worth the effort. An old Opteron and 64GB of ECC RAM is under $200 and handles 10Gb to the FreeNAS box with CPU to spare.

Proxmox makes doing the VM work a lot easier than Windows Hyper-V or VMware; this is just a simple fact. I have administered all of those and even dabbled in Xen. For a home config it is Proxmox, no argument.

Oh, and I might as well mention Emby (instead of Plex). I do not know if it is my desire to seek efficiency, or what, but I usually use the 'not quite as mainstream' approach to everything. Maybe they are better products, maybe I am a masochist. The world may never know.
 
Totally right. However, from the OP's post, it looks like this will be a mixed-use environment: a bunch of streaming, a bunch of transcoding, and most likely other "stuff" on top. Depending on the workload and user connection count, that could easily murder one giant RAID 6 volume, especially if the workload is bouncing around between heavy sequential reads, sequential writes, and random I/O. RAID 6 is trash for random I/O. A couple of users? Most likely no big deal. The OP didn't say if he's populating it with SSDs or spinners. Spinners? I'd still go with a RAID 10 volume as a work/staging volume, then move the data to the RAID 6 volume. Hell, two SSDs mirrored together and a bunch of spinners RAID 6'ed together would be ideal.
Streaming media takes so little disk, it's not even funny. Raw Blu-ray rips are ~6MB/s. Even with a bunch of users, they'll have no issues. I have a mixed-use environment similar to what the OP wants and even with software RAID 6 (using an Atom CPU no less), it's unusual for disk utilization to break 25% for me.
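Quick math, assuming ~6MB/s per raw Blu-ray remux stream: 10 simultaneous streams x ~6MB/s = ~60MB/s total, a rounding error against an array doing 800-1000MB/s sequential.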
 
Well. It transcodes on demand for the most part. Prefer direct play, but not all users are ...considerate.

...

User connection is less than 10 streams. No more than 4-5 simultaneous transcodes.

I would consider adding a Turing GPU then, obviously with consideration for your budget. The 1650 Super (the Super part is important, since the regular 1650 doesn't have the full Turing NVENC block) is ~$160 or so, and natively supports 2 concurrent accelerated streams. If you're willing to tinker, though, you can look at removing that limit in software: https://github.com/keylase/nvidia-patch

Here's a thread on folks using the 1660 with the patched driver having lots of luck, with between 8 and 20 simultaneous streams running and not breaking a sweat. The 1650 Super has the same NVENC block as the 1660 so it'd be equal there.
 
In ZFS it is common to build smaller 'arrays' and then add them to a pool. A typical example is 8 drives in 4 individual mirrors, with all the mirrors pooled into one storage group. This also makes the pool expandable by adding more drives. It could also be multiple small RAIDZ2 sets, or whatever, all pooled.

FreeNAS is simple, and works on a domain, or with Windows shares and group or user permissions. I actually wrote the first KVM FreeNAS VM guide with full passthrough and everything, on an AMD host to boot. Now, with hardware being so cheap, I usually say it is not worth the effort. An old Opteron and 64GB of ECC RAM is under $200 and handles 10Gb to the FreeNAS box with CPU to spare.

Proxmox makes doing the VM work a lot easier than Windows Hyper-V or VMware; this is just a simple fact. I have administered all of those and even dabbled in Xen. For a home config it is Proxmox, no argument.

Oh, and I might as well mention Emby (instead of Plex). I do not know if it is my desire to seek efficiency, or what, but I usually use the 'not quite as mainstream' approach to everything. Maybe they are better products, maybe I am a masochist. The world may never know.
I used Proxmox for a while (a couple of years), but dropped it and moved to just running Ubuntu Server with Docker, and my life is so much simpler now. I also am (impatiently) waiting on Emby to add Linux DTV support; sadly they still only support Windows. I do prefer their interface and wish I could switch, but they can't support my config, so I can't use it.
 
Linux and Docker is nice. I have a wider-ranging use case and prefer a VM hypervisor, and out of the available options Proxmox is only a bit more management than Linux and Docker.

The people that use some sort of TV with Emby pipe it into Emby via a programmable tuner over IP (like doing a web IPTV setup). I do get the feeling that importing any sort of live TV into Emby is not a top-10 task for them, though, so it is a bit of a kludge. Still, for 95% of users Emby just works and has no negatives compared to Plex.
 
ZFS works very well, but I need an *expandable* RAID array. Candidly, I don't want to come off the wallet for $4k+ in HDDs at once. I'd rather take it in two or three bites.
ZFS mirrored stripes then (RAID10). You can upgrade two at a time.
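Besides adding another mirror vdev, the other two-at-a-time route is swapping both disks of an existing mirror for bigger ones - roughly like this (a sketch, hypothetical device names):

zpool set autoexpand=on tank
zpool replace tank da0 da6    # swap one side of a mirror for a bigger disk, let it resilver
zpool replace tank da1 da7    # replace the other side; the vdev then grows to the new size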
 

What considerations would you all have? I figured I'd do one 24-port RAID card - probably just one gigabit NIC would be enough, I'd think.

Why? ZFS RaidZ is so much better than hardware raid these days.
 
I used Proxmox for a while (a couple of years), but dropped it and moved to just running Ubuntu Server with Docker, and my life is so much simpler now.
What didn't you like about Proxmox?
 
What didn't you like about Proxmox?
Just allocating HDD space to containers - I never guessed right about how much I needed and often over- or under-allocated. Also, unless you purchase a subscription, there are the annoying popups. I often had to drop to the command line in order to do things, which partially defeated the purpose of the web interface. It also wastes more space than Docker: with Docker I can use the base Ubuntu image for multiple containers, so there aren't many copies of it. You can do something similar with Proxmox by creating a new LXC based off another, but upgrades are a pain in the a$$.

It just seemed to work OK, but wasn't really great for me and my workflow; Docker fits much better. A good example is that I can run 5 Minecraft servers based on the same image and it barely takes 10MB for each additional one. Proxmox used much more, unless I did that based-on-another-image thing, but then when I want to update I have to either start over or update each one, which defeats the purpose and increases disk space used. With Docker I just re-pull the latest parent and restart the containers. Much faster to create, test, remove and modify them for my purposes. I did have Proxmox and Docker running on the same server for a little while when I was transitioning/learning, but finally pulled the band-aid off when I replaced the main drive with an SSD and upgraded my RAID at the same time.

Nothing majorly wrong with Proxmox, but with Docker I feel much more in control. I can more easily generate my own packages - try making your own LXC container based on an ISO...
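For anyone who hasn't seen that pattern, it looks roughly like this (image and container names are placeholders, not a recommendation):

# several containers share one image's layers, so each extra instance costs very little
docker pull example/minecraft-server:latest
docker run -d --name mc1 -p 25565:25565 example/minecraft-server:latest
docker run -d --name mc2 -p 25566:25565 example/minecraft-server:latest
# updating everything: re-pull the image, then recreate the containers from the new one
docker pull example/minecraft-server:latest
docker rm -f mc1 mc2
docker run -d --name mc1 -p 25565:25565 example/minecraft-server:latest
docker run -d --name mc2 -p 25566:25565 example/minecraft-server:latest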
 

What considerations would you all have? I figured I'd do one 24-port RAID card - probably just one gigabit NIC would be enough, I'd think.

Why? ZFS RaidZ is so much better than hardware raid these days.
Better for what? If that were the case, they would stop selling hardware RAID cards. I run a hardware RAID card with its own RAM + battery backup. Not sure I would see a benefit going to ZFS and offloading that load onto my processor rather than the hardware taking care of it. One day maybe I'll get bored enough to test and benchmark them both, but I doubt it.
 
Better for what? If that were the case, they would stop selling hardware RAID cards. I run a hardware RAID card with its own RAM + battery backup. Not sure I would see a benefit going to ZFS and offloading that load onto my processor rather than the hardware taking care of it. One day maybe I'll get bored enough to test and benchmark them both, but I doubt it.
Better for data resiliency and integrity.
 
Better for what? If that were the case, they would stop selling hardware RAID cards. I run a hardware RAID card with its own RAM + battery backup. Not sure I would see a benefit going to ZFS and offloading that load onto my processor rather than the hardware taking care of it. One day maybe I'll get bored enough to test and benchmark them both, but I doubt it.
Better for data resiliency and integrity.
Not sure I see a superiority there. Seems pretty equal to me. How do you figure?
 
According to the link, I can't even duplicate my RAID 10 setup... Also, they assume power loss means lost data; that's why real RAID cards have battery backups like mine. I don't have those issues on power loss. And from your link: "Overall tradeoff is a risk of write hole silently damaging limited area of the array (which may be more or less important) versus the risk of losing the entire system to a catastrophic failure if something goes wrong with a ZFS pool. ZFS fans will say that you never lose a ZFS pool to a simple power failure, but empirical evidence to the contrary is abundant."
So, I could maybe possibly lose a block in my data, or I could possibly lose my entire data set... Lol, yeah, so much safer, let me switch ASAP! :). Don't take this personally - I know plenty who use RAIDZ/ZFS with good results - but I wouldn't consider it better than proper RAID hardware by any stretch.
It seems to mostly be a comparison between RAIDZ and RAID 5, and most people recommend no longer using RAID 5 due to the potential for a secondary failure during a rebuild. RAIDZ being tied to a single file system does give it an advantage: it can read the filesystem and only rebuild the data that's actually in use, so even if it's slower to rebuild, it does less rebuilding. For the best and average cases that probably gives RAIDZ the advantage, while nearly-full arrays probably favor RAID 5 (though you shouldn't be that close to full on your array, so IMHO the benefit stays with RAIDZ). If you can't get proper RAID hardware, then it seems a reasonable alternative, but to just say it's better... This is why I asked: better for what? In some instances it could make more sense, but I'm not dropping my hardware RAID to switch.
 
Why? ZFS RaidZ is so much better than hardware raid these days.

That's personal opinion, based on your experiences. Not wrong, or bad, or anything, but that is not my experience. I've had nothing but great luck with hardware RAID on my home stuff. I'm supporting over 8PB of storage during the day, and absolutely none of those platforms use ZFS.
 
ZFS is the de facto standard for reliable storage and it didn't get there by chance.

Not saying you're wrong, but that's personal opinion. I'd say Isilon's OneFS is far superior. That's also personal opinion.
 
According to the link, I can't even duplicate my RAID 10 setup... Also, they assume power loss means lost data; that's why real RAID cards have battery backups like mine. I don't have those issues on power loss. And from your link: "Overall tradeoff is a risk of write hole silently damaging limited area of the array (which may be more or less important) versus the risk of losing the entire system to a catastrophic failure if something goes wrong with a ZFS pool. ZFS fans will say that you never lose a ZFS pool to a simple power failure, but empirical evidence to the contrary is abundant."
So, I could maybe possibly lose a block in my data, or I could possibly lose my entire data set... Lol, yeah, so much safer, let me switch ASAP! :). Don't take this personally - I know plenty who use RAIDZ/ZFS with good results - but I wouldn't consider it better than proper RAID hardware by any stretch.
It seems to mostly be a comparison between RAIDZ and RAID 5, and most people recommend no longer using RAID 5 due to the potential for a secondary failure during a rebuild. RAIDZ being tied to a single file system does give it an advantage: it can read the filesystem and only rebuild the data that's actually in use, so even if it's slower to rebuild, it does less rebuilding. For the best and average cases that probably gives RAIDZ the advantage, while nearly-full arrays probably favor RAID 5 (though you shouldn't be that close to full on your array, so IMHO the benefit stays with RAIDZ). If you can't get proper RAID hardware, then it seems a reasonable alternative, but to just say it's better... This is why I asked: better for what? In some instances it could make more sense, but I'm not dropping my hardware RAID to switch.

Lots of good notes here. Really, at the end of the day, no single RAID type is the best RAID type. You have to consider your workload, the hardware you have access to, manageability, and finally... budget. Arguing which RAID type is better ranks up there with the Windows vs Apple debates: both stupid, filled with misinformation, and lots of personal opinion.
 
Yeah, I don't mean to start a war. Use what you know and what you trust. After all, it's data, and it's not easily recoverable, even with backups. And that's not to say there aren't enterprise-grade filesystems, open and closed source, that are probably better for different applications or use cases. And ZFS can, when run with default settings, be a bit of a RAM hog. And then there's the write penalty: I've never been able to get my ZFS setups to be as fast as, say, XFS, but that's due to copy-on-write and all the data-integrity work it's doing, every block getting a checksum, etc. For me, I trade performance for peace of mind.
 
Totally fair.
Yeah, I don't mean to start a war. Use what you know and what you trust. After all, it's data, and it's not easily recoverable, even with backups. And that's not to say there aren't enterprise-grade filesystems, open and closed source, that are probably better for different applications or use cases. And ZFS can, when run with default settings, be a bit of a RAM hog. And then there's the write penalty: I've never been able to get my ZFS setups to be as fast as, say, XFS, but that's due to copy-on-write and all the data-integrity work it's doing, every block getting a checksum, etc. For me, I trade performance for peace of mind.

Totally, totally fair. Professionally, I would laugh ZFS out of the building. I have zero love for it, due to the sector I'm in; it just wouldn't cut it for the criticality of my data and workloads. At home? I have 3 monster machines kicking around for my development environment. I do enough computing during the day; I have no time for ZFS tinkering when I'm home. The closest I come to using it is my Synology NAS. Again, my post is all personal opinion.
 
Lots of good notes here. Really, at the end of the day, no single RAID type is the best RAID type. You have to consider your workload, the hardware you have access to, manageability, and finally... budget. Arguing which RAID type is better ranks up there with the Windows vs Apple debates: both stupid, filled with misinformation, and lots of personal opinion.
Hopefully you see I wasn't trying to dig, and I said it makes sense in some cases. My argument was simply that he said software was better than hardware, which isn't always true. Learn the pros and cons of each and make your decision. Just because RAID can be redundant doesn't mean it's perfect: RAID 5, RAID 1, RAID 10, RAID 50, RAID 60 can all still fail. It's just reducing the chances, but you can still be unlucky; that's what backups are for. I have an external HDD that I use for backups along with my hardware RAID 10. Overkill for my home server that has no critical data? Lol, yeah. It was mostly for the speed and learning. I get about 1GB/s from my spinning rust in sequential reads (not from cache).
 
I use four 8TB USB pulls in RAIDZ1 (RAID5) to back up four 6TB IronWolfs in RAID10, still on ZFS...

Still getting ~750MB/s sequential across 10GbE. I had planned to add a few drives to that just to get performance up to 1GB/s, but I simply have no use for the space (yet).

I don't think the use of hardware controllers would be helpful in my case, while at work we use them for two-drive mirrors... ...because...
 
Well. I decided to go with two 24-port RAID cards with battery backup and VMware, running on a 1950x with some ECC RAM. Hardware incoming from every which way.
 
Hopefully you see I wasn't trying to dig, and I said it makes sense in some cases. My argument was simply that he said software was better than hardware, which isn't always true. Learn the pros and cons of each and make your decision. Just because RAID can be redundant doesn't mean it's perfect: RAID 5, RAID 1, RAID 10, RAID 50, RAID 60 can all still fail. It's just reducing the chances, but you can still be unlucky; that's what backups are for. I have an external HDD that I use for backups along with my hardware RAID 10. Overkill for my home server that has no critical data? Lol, yeah. It was mostly for the speed and learning. I get about 1GB/s from my spinning rust in sequential reads (not from cache).

No, no, no digs at all. We are both saying a lot of the same things, just worded differently. Agreed, software RAID isn't always better; both have a place depending on the use case. The biggest driver usually comes down to budget.
 