Got 160 TB ZFS FreeNAS. Thinking of converting to ReFS Storage Spaces on Server 2016

pclausen

I currently have a FreeNAS box with 160 TB of raw space as follows:

vdev1: 10 x 2TB drives in raidz2 (16TB net)
vdev2: 10 x 2TB drives in raidz2 (16TB net)
vdev3: 10 x 2TB drives in raidz2 (16TB net)
vdev4: 10 x 4TB drives in raidz2 (32TB net)
vdev5: 10 x 6TB drives in raidz2 (48TB net)

I was at 80% capacity and since this is ZFS, it was time to expand. So I started replacing the 2TB drives in vdev5 one at a time with 6TB drives until I was done. This was after running badblocks on them all, which took about 100 hours per drive. Yes, I did some in parallel, but it was still a huge time sink. It was a pain, but I preferred this method to adding a vdev6 and then having to deal with 60 spinners consuming power at all times.
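For reference, here's the rough capacity math behind that layout (just the usual raidz2 approximation of two drives' worth of parity per vdev, ignoring metadata overhead and TB-vs-TiB differences):

```python
# Rough raidz2 capacity math for the pool above (nominal TB figures).
vdevs = [
    (10, 2),  # vdev1: 10 x 2TB
    (10, 2),  # vdev2: 10 x 2TB
    (10, 2),  # vdev3: 10 x 2TB
    (10, 4),  # vdev4: 10 x 4TB
    (10, 6),  # vdev5: 10 x 6TB (after the drive-by-drive upgrade)
]

raw = sum(n * size for n, size in vdevs)
net = sum((n - 2) * size for n, size in vdevs)   # raidz2 gives up 2 drives per vdev
print(f"raw: {raw} TB, net: {net} TB")           # raw: 160 TB, net: 128 TB
print(f"80% fill mark: {0.8 * net:.0f} TB")      # ~102 TB before it's "time to expand" again
```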

So I was wondering: what other options are out there today for home media storage in the 150 - 250TB range?

Is ReFS finally getting to the point of being halfway decent? I know that Storage Spaces is limited to 64TB per volume, but I suppose I can learn to live with that.

The FreeNAS server is rock stable, I run a handful of plugins on it, and I have a 10GigE connection to my core switch, but I don't like always having to expand when I reach 80% capacity. I'd also much prefer to run Windows, since that is what all my other machines on the network are running. Large file copies over CIFS don't even come close to saturating my 10GigE network (I run 10Gig to my primary workstation as well).

A friend of mine has a similar setup, only he is hardware RAID 6 based with a pair of servers, each sporting an Areca 1882 controller with 24 x 4TB drives in a single RAID 6 volume. They mirror each other, so if he loses 3 drives at once in one of them, he can replicate from the other. Besides that, I act as his offsite backup and vice versa. He's running Windows Server 2008 on those, and is able to saturate his 10Gig network copying files back and forth between the servers or to a workstation. And he doesn't have to worry about staying at 80% capacity.

So should I stay with ZFS and FreeNAS, or are there other options out there for "home storage" in the 100TB+ range? I have tried FlexRAID (tRAID), DrivePool/SnapRAID, and hardware RAID 6 using Areca controllers, but I don't really want to go there again. FlexRAID and DrivePool were very slow on writes and couldn't even saturate 1GigE, let alone 10Gig. RAID 6 on the Areca controllers was OK, but very picky about drives timing out, etc. I thought about RAID 60, but all my drives would need to be the same size. I could set up multiple RAID 6 arrays and then pool them, but I suspect I would have issues with drives wanting to drop out and then losing the whole pool. Like I said, FreeNAS has been rock stable, but I don't like the 80% limit, nor the fact that I can't run Windows natively. My main plugin is Emby, but the FreeNAS version is always behind the Windows one, and with 4 or more clients online, my whole FreeNAS server bogs down. I didn't have this issue when I was running Emby natively under Windows.

My current server hardware is as follows:

SuperMicro X10SRL-F mobo
Xeon E5-2683 14 Core 28 Threads
4x Samsung 16GB DDR4 ECC 2133
1x Intel X520 SFP+ NIC
2x LSI SAS9200-8e
1x LSI SAS9211-8i
3x Supermicro 846 chassis with a total of 72 hot swap bays

So is ReFS a serious option at this time with Server 2016, or should I stick it out with FreeNAS/ZFS for another year or two? Anything else to consider?
 
I, uhh... need to see pics of the setup.

But honestly, I ran/run Storage Spaces on 2013 and it is fine, BUT I can't speak for storage of that size. I imagine it's got a better implementation in 2016, but I have yet to try 2016 Storage Spaces.
 
I, uhh... need to see pics of the setup.
Sure...

Main chassis:

846main.JPG


Disk only chassis:

846diskonly.JPG


Disk / HTPC chassis:

846htpc.JPG


Rack view (4th chassis is for testing stuff)

846bothracks.JPG


Here's the FreeNAS box starting to sweat a little with 6 1080p streams being transcoded:

6ffmpegs.PNG


But honestly, I ran/run Storage Spaces on 2013 and it is fine, BUT I can't speak for storage of that size. I imagine it's got a better implementation in 2016, but I have yet to try 2016 Storage Spaces.

Well, that's somewhat encouraging. Are you running a mirror-like config or with parity? I know that on the older implementations, write performance was terrible with parity. Hopefully that has improved. I'm not sure how "parity" works with something like 50 drives, but given the 64TB volume limit, I probably can't have more than 12ish drives (assuming 6TB) per volume. One thing that is nice about ZFS is how performance increases with the number of vdevs. With the 5 I currently have, a scrub of the entire volume at 80% capacity takes about 9 hours. Resilvering (like when I replace a 2TB drive with a 6TB drive) takes around 6 hours or so.
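To put that 64TB limit in numbers, here's a quick back-of-the-envelope sketch (assuming one drive's worth of overhead for single parity and two for dual parity; actual Storage Spaces column/interleave layout will shave off a bit more):

```python
# How many 6TB drives plausibly fit under a 64TB volume cap.
VOLUME_CAP_TB = 64
DRIVE_TB = 6

data_drives = VOLUME_CAP_TB // DRIVE_TB              # drives' worth of data under the cap
for parity_drives, label in [(1, "single parity"), (2, "dual parity")]:
    total = data_drives + parity_drives
    print(f"{label}: ~{data_drives} data + {parity_drives} parity = ~{total} drives per volume")
# single parity: ~10 data + 1 parity = ~11 drives per volume
# dual parity:   ~10 data + 2 parity = ~12 drives per volume
```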

I wonder what kind of "paralleling" takes place with ReFS / Storage Spaces?

Downloading Server 2016 Technical Preview 4 now. I guess I'll set up a test config using the 2TB drives I just swapped out for the 6TB ones to see what kind of performance I get over 10 Gbps.
 
That picture of your nice rack gave me a boner :D

Can you tell us about all of the hardware you have in those racks?

I am also interested in ReFS and Storage Spaces. I was thinking about using ZFS Guru but since I prefer using something more compatible with Windows, I just may go with ReFS if it is mature enough.
 
I'm personally using FreeNAS with 30TB of space for my Blu-ray rips, but from the few articles I've read I have not been impressed with Storage Spaces with ReFS. I am not sure I would trust 30TB, much less 100TB+, to it.

A quick Google search turned up this:
https://forums.servethehome.com/index.php?threads/server-2012-r2-storage-tiering-ntfs-vs-refs.3103/

and this:
https://forums.servethehome.com/ind...e-spaces-storage-pools-vs-hardware-raid.5344/

I've considered running UnRaid 6 since it's easy to expand raid arrays with it (or at least easier than FreeNAS), but the thought of having to move all of that data to a backup source makes me not want to bother with the trouble.
 
The rack pic is a bit dated in that the NetGear switch has now been replaced with Ubiquiti UniFi gear (a 48-port PoE switch with a pair of SFP+ 10Gig ports, and a USG-Pro4 router/gateway). There are UPS units in the bottom of the rack on the left, and the right rack is filled with audio power amps down low, followed by an Onkyo pre-amp and then a Denon AVR. In the right rack there's also an EQ to tune the subs in the main theater and a pair of dedicated Crown amps to drive them. The other amps are all Adcom for the main and surround channels.
 
@Sufu, thanks for the links. Very interesting reads. It will be interesting to see if Server 2016 does better with ReFS and Storage Spaces compared to 2012 R2. I do have some 128GB 840 Pro SSDs that I can add to the mix as well for benchmarking.

My test rig will be an Asus Z87-PRO mobo with an i5-4690K and 16GB RAM. That mobo has 8 6Gbps SATA ports I can play with for now. I'll probably also add an LSI 9211 HBA to the mix depending on how it goes initially, and of course an Intel X520 SFP+ 10Gbps NIC to test throughput on large file transfers, which is the key metric I'm interested in.
 
That is a ton of storage. I can't imagine how much it costs in electricity to run all those 4U disk shelves. We use those in my DC for some of our Cloud storage offerings.
 
The computer equipment in the left rack consumes just under 1000 watts according to the primary UPS. Looking at my electrical bill, my monthly usage varies from a low of around 2000 kWh in the spring and fall, to a high of around 4000 kWh in the winter. Dividing the kWh into the dollar amount on my last bill, it works out to $0.118 per kWh with distribution charges, taxes, etc. 3 years ago I upgraded my HVAC from the original 1998 8 SEER 6 ton unit to a Trane XL20i 20 SEER 6 ton unit. Peak winter consumption dropped from 6000 kWh down to 4000. So that alone more than made up for the cost of running the servers. At least that is what I keep telling myself. :D

So 1 kW x 24 hours x 30 days x $0.118/kWh ≈ $85 per month to run my servers. I "cut the cable" with DirecTV years ago, so in a way, this is actually less expensive. :D
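Spelling out that math (assuming the ~1 kW load and the $0.118 rate hold steady all month):

```python
# Monthly cost of a ~1 kW continuous load at $0.118/kWh.
load_kw = 1.0
rate_per_kwh = 0.118
hours_per_month = 24 * 30

kwh = load_kw * hours_per_month            # 720 kWh
cost = kwh * rate_per_kwh                  # ~$84.96
print(f"{kwh:.0f} kWh/month -> ${cost:.2f}/month")
```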

Running my servers represents between 25% and 50% of my total power bill, depending on the time of year. To me that's worth it... Over the next 5 years, I hope to convert over to all SSD, which I believe will significantly reduce my operating cost.
 
I've considered running UnRaid 6 since it's easy to expand raid arrays with it (or at least easier than FreeNAS), but the thought of having to move all of that data to a backup source makes me not want to bother with the trouble.

Unraid caps out at 26 disks, so sadly you won't be able to use all those shelves with UnRAID until Lime Tech allows for more disks in release 7.
 
Not to mention your physical footprint if you get something like 2-4TB drives in a 2.5" form factor. Then just throw them into something like this - http://www.supermicro.com/products/chassis/4U/417/SC417E16-RJBOD1.cfm

AND the ability to easily saturate a 40Gbps link should you ever get one.

The only problem with that rig is that he'd have disks in the back of the rack that he'd have to deal with. If any of them break, unless he has rails or a way in behind the servers (I see a door), maintenance is a giant pain in the balls.
 
You've got GREAT electric rates. Out here in PG&E country you'd be paying >$0.50/kWh and hating your electric bill from that 1kW rack (rates here are tiered - the more you use, the higher your rate per kWh - thank you enviro-wackos). Oddly, if you own an electric vehicle you can get onto an alternate rate plan that works out to about $0.21/kWh for heavy users. I've saved enough each month to more than cover the lease on the car... arbitraging stupid subsidy regulation can be fun.

OK - that was OT. Sorry.

FreeNAS does support spin-down. Do you use it or do you just let the arrays run all the time? You'll lose that with SS/ReFS as Microsoft will not support spin-down on SS arrays.

I did a lot of benching/testing with SS in 2012 R2. In general, it's OK but not great. With ReFS it just sucked for performance. Supposedly improved in Server 2016, but from what I've seen in the tech previews I'd say it's not worth it (yet). Stick with ZFS at least for another maturity cycle (maybe ready in 2016 R2).
 
After looking at the new #'s for ReFS you'd have issues saturating a 10GbE connection; NTFS MAY work. You'd be best sticking with ZFS for performance reasons.
 
The only problem with that rig is that he'd have disks in the back of the rack that he'd have to deal with. If any of them break, unless he has rails or a way in behind the servers (I see a door), maintenance is a giant pain in the balls.
Been looking at that chassis. :D I do have easy access behind the racks.

846backside.JPG


I need to work on my wiring. :)
 
You've got GREAT electric rates
Yep, we're on a co-op and do enjoy great rates. They were actually down around $0.07 until a few years back.

FreeNAS does support spin-down. Do you use it or do you just let the arrays run all the time? You'll lose that with SS/ReFS as Microsoft will not support spin-down on SS arrays.

Drives spin 24/7. I know folks are split on what's best for the drives. In my case, there is almost always something writing to the array, or reading from it, so spindown would likely not amount to much for me.

I did a lot of benching/testing with SS in 2012 R2. In general, it's OK but not great. With ReFS it just sucked for performance. Supposedly improved in Server 2016, but from what I've seen in the tech previews I'd say it's not worth it (yet). Stick with ZFS at least for another maturity cycle (maybe ready in 2016 R2).


I got 2016 up and running with just 3 drives in raid5 for now. Haven't had a chance to do any real testing yet, but I like the interface and how SS and virtual disks work in general. Very easy to deal with from what I can tell. Looks like the old disk manager is gone altogether, which is a good thing. Back when I was running FlexRAID with 48 physical drives (with 48 matching virtual drives), disk manager took forever to initialize under Windows 8.1.

I'll play with it some more and install Emby server and my other core apps and see how it does.
 
After looking at the new #'s for ReFS you'd have issues saturating a 10GbE connection; NTFS MAY work. You'd be best sticking with ZFS for performance reasons.

I'll definitely play around with lots of different configs and see how it does. Of course I won't be able to truly compare apples to apples since the FreeNAS hardware is so much stronger than my test rig, but the Z87 chipset with the 4690K shouldn't be too bad, I think. I'll keep an eye on resources to see where the bottlenecks are with 10Gig.
 
I doubt that Windows ReFS will give you comparable performance or options like ZFS.
NTFS may be faster than ReFS but is a huge step backwards regarding data security.
If performance is your main reason for switching, you may compare a Solarish ZFS option like
Oracle Solaris 11.3 (the most feature-rich ZFS server at the moment) or OmniOS, an open-source fork.

In most cases the multithreaded, kernel-based Solaris SMB server is faster than SAMBA.
Beside that, it offers better Windows integration, with NTFS-like ACLs, Windows SIDs, and
working Windows "Previous Versions" without any problems.

See some out-of-the-box 10G benchmarks, without server tuning aside from jumbo frames:
https://www.napp-it.org/doc/downloads/performance_smb2.pdf

I am currently adding some tuning options on the server side.
 
My one cent (not sure if it's worth two):

I have a Server 2012 system dedicated to storage (6 x 256GB Samsung 840 Pro + 10 x 2TB HDD), mostly serving up VMs like a SAN box over to my primary Hyper-V host via tiering (which requires mirroring). I played around with parity for bulk storage, but even aside from performance issues, it didn't strike me as being even close to as mature as ZFS in terms of maintenance and reliability. Seeing where MS is going with Server 2016 (resiliency), without having actually tried it, it looks to me like they are dedicating the bulk of their technical development to cloud-friendly requirements (server/cluster level) and not worrying about low-level parity performance. For example, note that even as they added a rebalancing capability in Server 2016, it only works on simple and mirrored spaces.

In other words, MS is so far behind ZFS on the parity front I wouldn't bet on Server 2016 reaching that level. Long run I wouldn't be surprised if BTRFS gets better bulk parity first.

I also have a 170TB (raw) ZFS server (Solaris 11.3), which has grown over the past few years.

What I did was to ignore "best practice" and create multiple pools, living with the minor pain of setting up my CIFS shares in a way that let me balance capacity usage. Oh, and I've pushed my pools past 94% full and somehow survived too.

At one point I did like you and did a drive-by-drive replacement within a 10-disk vdev (though I was buying 1 or 2 drives each month, so there was unusable capacity until I got to the end). After that I decided it was easier and personally less stressful to bring a new pool online and just copy over if I was going to obsolete disks.

My underlying assumption is that I'm going to be replacing entire pools with vdevs made of drives approaching twice the size (in fact, for my last expansion I replaced a 10x2TB single vdev pool with a 10x8TB single Seagate archive vdev pool, discovering that read-wise the Seagate archive HDDs are actually the fastest vdev in my system now and way more than adequate write-wise to keep up with my general use cases).

Of course, I'm on just a 1Gbps link (technically 2 x 1Gbps teamed). Without striping across vdevs it might be hard to keep up with 10Gbps, but the principle of having more than one pool in my system has given me more choices for expansion.
 
I think your biggest bottleneck is FreeNAS itself; there's a lot of tweaking of Samba to ensure compatibility rather than speed, unless that changed recently. For instance, I think SMB3 is still disabled, which hampers performance, and Samba 4.3 performs far better than the older versions. FreeNAS still seems to be based on the FreeBSD 9.X series instead of 10.X, which most likely doesn't help in the performance department. Not sure if you run compression or anything, but -HEAD handles load much better than 10.X; it is bleeding edge though, so you might want to be a bit careful. I've used -HEAD on my FreeBSD box and it's been very stable, and ZFS has also played nice. Obviously you won't get a WebUI in that case, however.
 
Appreciate all the feedback!

I would say that my primary goal is to be able to run Windows natively on the 14 core server. My primary application is Emby Media Server. I have about a dozen clients, most of which are directly connected via 1Gig, mostly under my roof, but a couple of clients are in another dwelling down the street, connected via a 1Gig SFP dark fiber link.

The issues I have with my current FreeNAS "appliance" setup are:

1. When 4 or more clients are on at once, and the server transcodes the media to at least 2 clients, and if those clients skip around in the material they watch, then the experience gets bad for everyone, even the ones just browsing the media library.

2. If I'm copying large amounts of new content from a workstation to the FreeNAS server, then this will impact the response for clients.

3. FreeNAS is an appliance of sorts and I don't really have a lot of visibility into what's going on. I'm running a number of plugins, which are basically VMs dedicated to specific functions. So none of them run "bare metal" on the primary OS.

4. Emby Server is my primary app, and it is primarily written for Windows. Ports exist for other OS flavors such as FreeBSD, Linux, OSX, QNAP and Synology. However, the stable releases lag behind the pace at which they are released on the primary platform, and beta and dev releases are non-existent on those other platforms.

Due to issue #1 and, to a lesser extent, issue #2, I upgraded my CPU from a 4-core Xeon E5-1620 (LGA 2011-v3) to a Xeon E5-2683 14-core CPU. I was expecting that to resolve my issues. It did not. It helped with scenario 1 a little (not much) and actually made scenario 2 worse.

So I think I'm pretty set on wanting to convert my primary server over to be Windows based and realize that I have a limited number of storage (direct attach) options to choose from.

At the end of the day, speed is not the #1 priority, but addressing item #1 above is.

I found this to be a very interesting read:

http://social.technet.microsoft.com...storage-spaces-designing-for-performance.aspx

I did play around with the FreeNAS 10 Alpha, but it is still quite a while away from even getting to the Beta stage due to the GUI being completely re-written. One thing that looks compelling to me about 10 is that jails and plugins are being replaced by iocage, which I understand will make it much simpler to run VMs. It might mean that I could run a true Windows VM and then run Emby Server natively there. It would not be bare metal, but perhaps it would alleviate some of the issues I'm currently experiencing.

Another option would be to remove the Emby plugin from the FreeNAS server and stand up a separate physical server running windows, dedicated to Emby, but I'm trying to decrease my server count and power consumption, not increase it. Plus there's the maintenance overhead and expense in general with that approach.

So I did move my 2016 test rig into a SuperMicro 846 chassis with a SAS2 expander backplane, added an LSI 9200 HBA, and moved the disks into the expander chassis as well. I was pleased to see that once I booted up, my Storage Pool came right up. I currently have a 6-disk pool configured (a mix of 5200 and 7200 RPM 2TB drives) in parity mode. I need to fiddle with NIC drivers since I swapped mobos, and then set up some performance testing.

As stated above, raw file transfer speed isn't my primary goal, and the fact that I have a buddy with 2 separate servers that contain full copies of my content means that resiliency isn't a top priority either. None of my data is mission critical. I toyed with the idea of just setting up a Simple Space, until I realized that would stripe all data across all drives. I'm sure that would be very fast with 50 drives, but I doubt I'd last a month before losing everything. Maybe once 16TB SSD drives are a dime a dozen, I'll just set up 2 identical Spaces and sync them frequently. LOL
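Here's a rough sketch of why a big Simple space scares me; the 3% annual failure rate is just an assumed number for illustration, not something I've measured:

```python
# Chance that at least one drive in a big stripe dies within a year.
# With no parity, one dead drive means the whole space is gone.
AFR = 0.03  # assumed annual failure rate per drive (illustrative only)

for drives in (10, 25, 50):
    p_any_failure = 1 - (1 - AFR) ** drives
    print(f"{drives} drives striped: ~{p_any_failure:.0%} chance of losing the space within a year")
# 10 drives: ~26%, 25 drives: ~53%, 50 drives: ~78%
```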
 
1. When 4 or more clients are on at once, and the server transcodes the media to at least 2 clients, and if those clients skip around in the material they watch, then the experience gets bad for everyone, even the ones just browsing the media library.

...

At the end of the day, speed is not the #1 priority, but addressing item #1 above is.

I think that means you're hijacking your own thread. ;) Or maybe that's an excuse for me to hijack it. :D

So anyway, I happen to be playing around with Emby myself as well - if I were in your situation (because, actually, I'm pretty close and am looking into it), I would take your option of building a standalone Emby hardware server and optimizing the storage separately. That way I could play around with specific hardware dedicated to that sole purpose... e.g. QuickSync today, maybe GPU-based H.265 hardware transcode support tomorrow. You're right about the cost, but my Solaris storage server is still doing fine on my original v1 Xeon E3-1240 and mobo from years ago, so the way I look at it is that I've realized enough TCO savings today to justify some additional expense for a specialized purpose. And I know I continue to keep storage server TCO lower when I don't have to mess with it trying to get it to do something else.

diizzy may be right - you might be bottlenecking at a Samba layer or similar abstraction. Solaris 11.3 has improved to support SMB 2.1 but that was literally only a couple of months ago.

Anyway let us know how it goes... I'm sure you're far from the only one trying to solve the same problems, at least conceptually.
 
Yeah, after playing a bit with parity SS and doing some testing, there's a huge penalty on writes, like 50% or so. The strange thing is that CPU utilization doesn't go above 20% or so, so something else in another layer is causing the issue. Switching to NTFS didn't make much of a difference, maybe a 10% improvement.

Storage Spaces might work with a Simple Pool of say 10x2TB drives for torrent activity and such, and then have Parity Pools for the permanent media, each with a pair of SSDs associated with them. But then things are already getting pretty complicated and I'm not sure the effort of tuning all that would be worth it.

I'll stick with ZFS on FreeNAS for the time being, I suppose. I might play around with standing up a dedicated server for Emby, instead of running it as a plugin, using the 4690K-based system I have lying around. I'll continue to play with the FreeNAS 10 Alphas as well. Hopefully sometime this year I'll stumble upon a solution that just works. I should have the server hardware and network infrastructure to support a dozen users, for crying out loud. I just need to discover the magic mix of software and configuration to properly leverage it. ;)
 
Have you considered running a Solaris-based OS with the napp-it GUI on top? It is actually pretty simple to install and I believe is a lot more robust than FreeNAS. I haven't noticed any performance issues with my setup, granted I only have a 1Gb network at home. At work I have similar setups that support hundreds of VMs at a time over 10Gb.

Even better would be to do an all-in-one system where you boot ESXi and run Solaris/napp-it as a VM, passing through the HBA cards. That way you could run Emby as a Windows VM. There is a massive thread on this topic in this forum: http://hardforum.com/showthread.php?t=1573272
 
In the picture labelled "Disk only chassis", what is the "motherboard" device and how is the "disk only chassis" connected to the "Main chassis" if you don't mind me asking?

PS: Awesome setup!
 
What kind of network topology do you have this connected to - for your neighborhood? Town? Apartment building? ...that allows for such backups and streaming?
 
In the picture labelled "Disk only chassis", what is the "motherboard" device and how is the "disk only chassis" connected to the "Main chassis" if you don't mind me asking?

PS: Awesome setup!

JBOD w/jbod power board
 
Unraid caps out at 26 disks, so sadly you won't be able to use all those shelves with UnRAID until Lime Tech allows for more disks in release 7.

Wow, that is really crappy; I did not think LimeTech had such a low limitation. I currently run two raidz2 arrays of 8 x 4TB disks each, and one shelf of 4 x 3TB disks in a 3U chassis. I guess I'd be halfway to the max disk point, and I still have plans to expand the disk shelves once I fill it to capacity.

OP - have you posted on the FreeNAS forums? My first suspicion would be either that you are not running enough memory (the rule of thumb is 1GB of RAM per 1TB of space), or that you need a level 2 cache (L2ARC). SSDs work great for cache once you've maxed out the memory on the server. I am definitely not a FreeNAS expert, but everyone always screams to max out RAM first.
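As a quick sanity check of that rule of thumb against the numbers earlier in the thread (purely the heuristic, not a measured requirement):

```python
# The common "1GB of RAM per 1TB of storage" heuristic applied to the OP's pool.
raw_tb = 160           # raw capacity from the first post
installed_ram_gb = 64  # 4 x 16GB DDR4 ECC from the hardware list

suggested_gb = raw_tb * 1  # 1GB per TB of raw space
print(f"heuristic suggests ~{suggested_gb}GB of RAM; {installed_ram_gb}GB installed")
```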
 
I would agree with the idea of separating out your media serving/transcoding from your storage hardware.

TL;DR: I have no input on anything not ZFS or BTRFS. FreeNAS is not bad, but a little Mickey Mouse for what you're doing. Migrate to something illumos-based. Put your media server on dedicated hardware or in a single-VM hypervisor.

I do basically what you do but on a smaller scale. I run illumos (OpenIndiana / OmniOS) with napp-it and run a separate hypervisor (ESXi) with a media virtual machine and a few others. In your situation I'd probably just run my media software of choice directly and not in a VM, but even a single-VM hypervisor would be tempting for snapshots during upgrades, tweaks, etc. Rolling back is awfully nice when you're maintaining something like this.

Anyhow, back on topic. Others have already mentioned reasons to divide and conquer, so I won't say any more on that. A couple of thoughts about your performance though. You're running 5 unbalanced vdevs; even if we pretend the data is balanced, you have to remember with ZFS that performance is generally 1 vdev = 1 drive of performance (this isn't a perfect truth, but true enough for our purposes). Fresh HDDs can run 120MB/sec but run more like 80MB/sec towards the end of their capacity. Let's split the difference and call it 100MB/sec. That means with 5 vdevs and an average of 100MB/sec per vdev, you can expect to see 500MB/sec throughput. Optimistically more like 750MB/sec on the right file, and possibly less on the wrong file (when most of the file is on your 48TB vdev, or on files written towards the ends of the disks), but you won't hit 750MB/sec on FreeNAS SAMBA, period. You need a more modern SMB implementation like you can find in the later illumos builds (Gea already mentioned this).

So yeah, I'm not surprised that you aren't saturating your 10GbE. Your configuration is built for capacity at the sacrifice of performance. If you compare that to your buddy's hardware RAID 6, well, that's a different animal. His parity raid does expand in throughput linearly with each drive added beyond 3, but he is configured for performance by sacrificing resiliency (more on that later).

You mentioned maybe running RAID 60. You *are* running RAID 60 right now (striping across double-parity raid). The reason this doesn't scale out in performance like one might think at first is because of how the data is arranged. I'm pretty sure this isn't just a ZFS thing when it comes to hybrid raids, btw... Each of your vdevs is only serving a portion of a top-level stripe (a raid0 stripe across vdevs), and you only have that portion once the slowest disk has coughed it up (because the portion itself is striped with double parity inside the vdev). Therefore you only have the full top-level stripe after all portions have finally been coughed up, each portion coming in whole only as fast as the slowest disk (sans parity devices).

So, your pool is roughly as fast as a traditional 7-disk RAID 6 volume, except that your pool is imbalanced, so oftentimes it's going to be slower. Compare that to your friend's 24-disk RAID 6 volume and yes, his is going to scream in comparison. However, he's very well beyond recommendation running that many drives with double parity, especially with drives that large. He's making sacrifices that you aren't making in order to see that performance. You could see it too with a different arrangement (10 vdevs of 5-disk raidz1 would maintain the same capacity and double the throughput and IOPS, plus a smidge because z1 is "cheaper" than z2, for example).
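Putting rough numbers on that, using the ~100MB/sec-per-vdev ballpark from above (real streaming performance will vary with fill level, fragmentation, and the SMB layer):

```python
# Ballpark sequential throughput: roughly one drive's worth per vdev.
PER_VDEV_MB_S = 100  # rough average for a spinning disk, per the estimate above

layouts = {
    "current: 5 x 10-disk raidz2": 5,
    "alternative: 10 x 5-disk raidz1": 10,
}
for name, vdev_count in layouts.items():
    print(f"{name}: ~{vdev_count * PER_VDEV_MB_S} MB/s sequential, best case")
# current: ~500 MB/s; alternative: ~1000 MB/s (10GbE tops out around 1.2 GB/s)
```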
 
Actually, Limetech is supposedly increasing the usable disk limit beyond 26 in their next release, 6.2, where they are also introducing a second parity drive. Supposed to be released 'soon'.
 
I think you are right sticking with FreeNAS (for now). FreeNAS, IMO, is the only consumer friendly data storage system that actually protects your data (excluding a backup).

Have you considered rotating a volume out and a new one in? Might be simpler (less stressful) than upgrading disk by disk. Add new vol, copy data, remove old vol.
 
I admit dealing with that much storage/hardware is out of my element, but if you wanted to try something different than FreeNAS (besides any OpenSolaris / illumos or similar ZFS-type systems), have you considered giving Rockstor a try? Like FreeNAS, it is free and open source, but it is built on Linux and uses BTRFS instead of ZFS - I know that is a conversation in and of itself, and people feel both file systems have their strengths and weaknesses, but they share some of the same features in terms of dealing with large amounts of data, redundancy, and the like.

I can't say I've used Rockstor myself (I used FreeNAS quite some time ago on a much smaller storage configuration), but I'm looking into it for when I upgrade my home server / NAS. It seems to be built on CentOS for storage-focused usage like NAS servers and the like, but it also has a lot of features and options if you wish to extend it, and unlike some of the other NAS/SAN-style distros, it seems to be tailored to something beyond the standard home file server. Rockstor can run quite a few CentOS (and other Linux) packages without any trouble, but they have special support for Docker plugins, which they call "RockOns" and which are basically super easy to set up. It just so happens that Emby is now an official RockOn plugin (alongside a lot of other applications like OwnCloud, Transmission, Syncthing, Plex), so that may make it easy if you want to try it out. Note that the Rockstor wiki does not yet list the Emby RockOn, as it's pretty new - http://emby.media/community/index.php?/topic/30126-rockstor/ - but there's a link to the Emby devs talking about it on their forums and heading over to GitHub, just in case it doesn't "automagically" list as a RockOn from within the Rockstor dashboard itself as of yet.

Not sure if any of this will be useful to you, but maybe worth checking on. Good luck!
 
Update:

I invested in an Areca 1882IX-16 controller and the OS is Windows 10 64-bit. I currently have 3 arrays configured on the controller as follows:

Array 1: 4 x Samsung 840 Pro 120GB in RAID 0. This is my "work" area for downloading torrents, unrar, converting ISO to MKV, etc.

Array 2: 12 x WD 6TB RED in RAID 6. This holds part of my media collection.

Array 3: 12 x Seagate 4TB ST4000DM000 in RAID 6. This holds the other part of my media collection.

Here's a view of the inside before I installed the SSD boot drive:

brama02.JPG


Performance-wise, I appear limited to about 750 MB/s transferring between arrays. CrystalDiskMark reports around 1700 MB/s on both reads and writes to the SSD RAID 0 array, as seen here:

crystalssdraid0.PNG


On the FreeNAS server I was running 64GB of RAM, so I should be good there.

I have not abandoned FreeNAS at this point. I actually just received a Supermicro 826 chassis with an X8DT6-F motherboard. It has 12 hot swap bays. I'll be dropping in a pair of Xeon L5630 quad cores along with 48GB of DDR3 RAM. It has a built-in 8-channel SAS controller connected directly to the SAS2 backplane.

Here's a shot of it:

IMG_1486.JPG


I convinced my friend to switch to running RAID 60. He has now reconfigured both of his servers to consist of RAID 60 arrays made up of two 12 x 4TB RAID 6 sub-arrays each. He's still able to saturate his 10Gig network running that config, which makes sense since he's striping the 2 RAID 6 arrays into the RAID 60.

He sold me his old 2TB drives - all 7200 RPM enterprise drives with 6Gbps interfaces, Seagate Constellation 3 and Hitachi. So in addition to my "production" server running Windows with the Areca 1882 controller, I now have 60 additional 2TB drives: 24 each in two 846 chassis and the last 12 in the newly acquired 826 chassis.

So my plan is to do another FreeNAS install with the 60 2TB drives. Based on the comments above about each vdev basically performing like a single drive, I figure I'll set up 12 five-disk raidz1 vdevs this time around vs. 5 ten-disk raidz2 vdevs like before.

This should allow me to sustain 750MB/s on transfers from the main server. My plan is to use the FreeNAS server as my local backup: fire up that monster once a month or so and sync it up against the production server. I might also experiment with running Emby on it again, but the server with the 4TB and 6TB drives will be the only one powered on all the time.
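Rough numbers for the planned backup pool, using the same one-drive-per-vdev ballpark as above (actual throughput will depend on the drives and the network):

```python
# Planned backup pool: 12 vdevs of 5 x 2TB in raidz1.
vdevs, drives_per_vdev, drive_tb = 12, 5, 2

raw = vdevs * drives_per_vdev * drive_tb             # 120 TB raw
usable = vdevs * (drives_per_vdev - 1) * drive_tb    # 96 TB usable (1 parity drive per vdev)
est_throughput = vdevs * 100                         # ~1200 MB/s best case at ~100 MB/s per vdev

print(f"raw: {raw} TB, usable: {usable} TB, est. ~{est_throughput} MB/s sequential")
# plenty of headroom to keep up with ~750 MB/s coming off the production server
```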

Can I RAID 60 my 12 x 6TB and 12 x 4TB arrays? I was reading somewhere that in RAID 60, all sub-arrays have to contain the same number of drives of the same capacity. If not, that's great news, and I should be able to increase my performance significantly by striping my current RAID 6 arrays. Down the road, I plan to upgrade the 12 x 4TB drives to 6TB REDs and then go RAID 60 for sure.

Visual of what I'm thinking of doing:

prodandbackup.JPG
 
Anyone reading this thread even in 2017: it is a bad idea to go from ZFS or even BTRFS to ReFS. ReFS doesn't even support atomic writes, compression, dedup, disk quotas, extended attributes, or object IDs, and Windows still cannot even boot from it 5 years after it was released. MS basically said, "Look, we have ReFS, it has checksums just like ZFS," and that is where the similarities end. It's bad that MS hasn't invested in ReFS; since 2012 it hasn't introduced any of the features needed to consider it even remotely close to a SAN storage file system.

Sad that MS hasn't put more effort into ReFS; all the lacking features make it a joke in the enterprise space.
 