Server OS and configuration.

Chellexelle
I was on here about three weeks ago asking for advice on what parts to get for a media server built around a Norco RPC-4224 case. I now have everything but the motherboard, which arrives tomorrow, so I would like to start a thread on how to set it all up once the final part arrives and everything is assembled.

I stated in my original thread that I wanted to use ZFS, set up as two arrays: one (the bottom three rows) hidden and mirroring the other (the top three rows). I would also like to be able to plug the server into my PC, view it like a NAS or external hard drive, and transfer files from my PC to it directly.

Any instructions or advice on the operating system, and on how to configure the two ZFS arrays the way I want, would be very helpful.
 
I can't recall the thread in question, but I can add this: what a waste of space, by the sounds of it.

If you are choosing to run ZFS, run two arrays of 12 drives each in RAIDz2 (RAID 6) and keep both as usable space. I fail to see why you would want to lose essentially half the capacity to store a mirror, especially on the same machine. If you are that worried about losing the data, the "mirror" or copy needs to be on another machine; otherwise, the chances of a hardware failure killing the whole system are too high.

As for asking questions on the OS and setup, I would have suggested that long before placing orders for gear.

As for my opinion, I won't/wouldn't use ZFS, but that's purely because I am a Windows person and know how to make Server 2012 R2 and Storage Spaces work for me.
 
Here is a LINK to the original thread for future reference.

Thanks for your feedback, Benji, but I am not a Windows person and do not trust a platform as unstable and flawed as Windows to manage my precious data. I cannot afford to build two servers, so I will have the mirror on the same server; the mirror is why I chose a 24-drive chassis. By the time I fill all the bays with data, I will be able to build another.

I have already decided on ZFS and will be using a Solaris OS. I just need to know how to set up my ZFS RAID and configure it as I specified. Can anyone help with that?
 
You can't nest vdevs, so you can't have a mirror of raidz's. You could do a raid10, but not a raid5+1 or raid6+1.

To create one pool out of two 12-disk raidz2 vdevs (the raid60 equivalent):
Code:
echo|format  (to get the drive identifiers)
zpool create -o version=28 -O version=5 NAME_OF_POOL raidz2 DISK1_IDENTIFIER DISK2_IDENTIFIER DISK3_IDENTIFIER DISK4_IDENTIFIER DISK5_IDENTIFIER DISK6_IDENTIFIER  DISK7_IDENTIFIER DISK8_IDENTIFIER DISK9_IDENTIFIER DISK10_IDENTIFIER DISK11_IDENTIFIER DISK12_IDENTIFIER
zpool add NAME_OF_POOL raidz2 DISK13_IDENTIFIER DISK14_IDENTIFIER DISK15_IDENTIFIER DISK16_IDENTIFIER DISK17_IDENTIFIER DISK18_IDENTIFIER  DISK19_IDENTIFIER DISK20_IDENTIFIER DISK21_IDENTIFIER DISK22_IDENTIFIER DISK23_IDENTIFIER DISK24_IDENTIFIER

Example: My raid60 pool!
Code:
root@backup:~# zpool status backup
  pool: backup
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        backup                     ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c0t5000C50034F447C7d0  ONLINE       0     0     0
            c0t5000C50034D58277d0  ONLINE       0     0     0
            c0t5000C50034E7A243d0  ONLINE       0     0     0
            c0t5000C50034F37E2Fd0  ONLINE       0     0     0
            c0t5000C50034E7B3AFd0  ONLINE       0     0     0
            c0t5000C50034F435C7d0  ONLINE       0     0     0
            c0t5000C50034FF4993d0  ONLINE       0     0     0
            c0t5000C50034F41723d0  ONLINE       0     0     0
            c0t5000C50034FF2B87d0  ONLINE       0     0     0
            c0t5000C50034F76637d0  ONLINE       0     0     0
            c0t5000C50034E9A92Bd0  ONLINE       0     0     0
            c0t5000C50034FBC2D7d0  ONLINE       0     0     0
          raidz2-1                 ONLINE       0     0     0
            c0t5000C50034EDB303d0  ONLINE       0     0     0
            c0t5000C50034EC07EBd0  ONLINE       0     0     0
            c0t5000C50034F3718Bd0  ONLINE       0     0     0
            c0t5000C50034F4426Bd0  ONLINE       0     0     0
            c0t5000C50034F38ADFd0  ONLINE       0     0     0
            c0t5000C50034FF5EB3d0  ONLINE       0     0     0
            c0t5000C50034F4467Bd0  ONLINE       0     0     0
            c0t5000C50034EB7D9Fd0  ONLINE       0     0     0
            c0t5000C50034F3DD6Bd0  ONLINE       0     0     0
            c0t5000C50034F3DF0Bd0  ONLINE       0     0     0
            c0t5000C50034F43AEBd0  ONLINE       0     0     0
            c0t5000C50034F39007d0  ONLINE       0     0     0

errors: No known data errors
And the raid10 equivalent (a striped set of mirrors):
Code:
echo|format  (to get the drive identifiers)
zpool create -o version=28 -O version=5 NAME_OF_POOL mirror DISK1_IDENTIFIER DISK2_IDENTIFIER
zpool add NAME_OF_POOL mirror DISK3_IDENTIFIER DISK4_IDENTIFIER
zpool add NAME_OF_POOL mirror DISK5_IDENTIFIER DISK6_IDENTIFIER
zpool add NAME_OF_POOL mirror DISK7_IDENTIFIER DISK8_IDENTIFIER
...Repeat until all 24 disks are used.

If you want the extra redundancy of a raidz2 array and just want your data duplicated once a night (or even once an hour), you could do that with a cron job. Just create two separate raidz2 arrays and rsync them.
Create array 1:
Code:
echo|format  (to get the drive identifiers)
zpool create -o version=28 -O version=5 NAME_OF_POOL1 raidz2 DISK1_IDENTIFIER DISK2_IDENTIFIER DISK3_IDENTIFIER DISK4_IDENTIFIER DISK5_IDENTIFIER DISK6_IDENTIFIER  DISK7_IDENTIFIER DISK8_IDENTIFIER DISK9_IDENTIFIER DISK10_IDENTIFIER DISK11_IDENTIFIER DISK12_IDENTIFIER

Create array 2:
Code:
echo|format  (to get the drive identifiers)
zpool create -o version=28 -O version=5 NAME_OF_POOL2 raidz2  DISK13_IDENTIFIER DISK14_IDENTIFIER DISK15_IDENTIFIER DISK16_IDENTIFIER DISK17_IDENTIFIER DISK18_IDENTIFIER  DISK19_IDENTIFIER DISK20_IDENTIFIER DISK21_IDENTIFIER DISK22_IDENTIFIER DISK23_IDENTIFIER DISK24_IDENTIFIER

Sync the two arrays every night at midnight:
Code:
crontab -e  (then at the bottom of the file, add the following line:)
0  0 * * * rsync -av --delete /NAME_OF_POOL1/ /NAME_OF_POOL2/
This could actually work out nicely if you wanted to hide the "backup" array. You could tell cron to import the pool, then rsync, then export the pool. Presto, instant "hidden" pool. Something like the following would do that every hour on the hour:
Code:
crontab -e  (then at the bottom of the file, add the following line:)
0  * * * * zpool import NAME_OF_POOL2; rsync -av --delete /NAME_OF_POOL1/ /NAME_OF_POOL2/; zpool export NAME_OF_POOL2

Note that if you're not planning on buying all the drives at once, the raid10 solution may be the best one. Only the raid10 solution allows you to add drives 2 at a time. The others all require you to destroy your pool, build a new one, and copy your data over, then destroy your backup pool, build a new one, and copy the data back. With 3TB drives, it took me about 24 hours to do this when going from an 8-drive raidz2 array to a 9-drive raidz2 array. Going from one chassis to another takes a couple of days unless you've got 10GigE or better.
 
My plan is to go with ZFS and just have two ZFS arrays, one mirroring the other. As I stated in my original post, I am a complete noob when it comes to this, so forgive me if I sound dumb, but since ZFS includes RAID-Z features, including mirroring, I assume I don't need to set up a separate "RAID". Again, noob here.

I don't have all 24 drives. I only bought 14 4TB drives and will add more later, and I plan on gradually replacing the 4TB drives with larger ones as they become more affordable, so a setup that lets me do that without having to destroy the data in the array is essential.
 
raidz = raid5.
raidz2 = raid6.
mirror = raid1.
regular ol' pool = raid0.

It's just how I refer to things. With traditional raid, you can mix and match. For example, if you wanted two raid5 arrays mirroring each other, you could create the two raid5 arrays, and then create a mirror out of the result. This was known as raid5+1. You can't do this in ZFS.

I would probably recommend the ZFS equivalent of a raid10 setup for ease of use. That way, if you want to add extra capacity, it's as simple as buying two drives, plugging them in, and adding them to the raid. It's easy and won't put a bad taste in your mouth about ZFS.

Suppose you decided to make two raidz arrays and sync them every night. Things are going good. You've got a backup of your data on the second array, and a little bit of uptime protection with your raidz configuration. You run out of space. You need to add two more drives. This is where it gets complicated. You need to destroy array1 completely. All data on that array is lost. You still have all of your data on array2, but array1 is toast. Now you stick one of the new disks in and create a new array out of the new disk and the old disks from array1. You now have a bigger array1, but it's empty. So, you copy your data from array2 over to array1. Array1 now has the extra capacity and the data. Now you destroy array2 entirely, add a new disk, create a new array2, and copy the data back. You now have two copies of your data again, and the extra capacity. It's very possible to do this (I do it regularly), but you can see it's a lot more complicated than issuing one command:
Code:
zpool add MyPool mirror New_Disk1 New_Disk2
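
For comparison, here is the long procedure from the paragraph above, sketched as commands. This is only a rough sketch with placeholder pool and disk names, and zpool destroy is irreversible, so triple-check the target:
Code:
zpool destroy NAME_OF_POOL1                                       (array1 and everything on it is gone)
zpool create NAME_OF_POOL1 raidz DISK1 DISK2 DISK3... NEW_DISK1   (rebuild array1 one disk bigger)
rsync -av /NAME_OF_POOL2/ /NAME_OF_POOL1/                         (copy the data back from the backup array)
zpool destroy NAME_OF_POOL2
zpool create NAME_OF_POOL2 raidz DISK8 DISK9 DISK10... NEW_DISK2
rsync -av /NAME_OF_POOL1/ /NAME_OF_POOL2/                         (restore the duplicate)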

What you described in your initial post is (to me) the equivalent of a raid5+1 or a raid6+1, which cannot be done in ZFS. The ultra-long procedure is as close as I could think of to it. There are numerous downsides to it, with only one upside: better uptime protection. If I were in your shoes, I'd do the raid10 equivalent instead.

Downsides to raid10:
If you lose two disks in the same vdev, your data is lost
Upsides to raid10:
Better capacity than raid5+1 (28TB vs 24TB)
Easier to expand
Faster

Downsides to raid5+1:
Complicated
Reduced capacity
Slower
Upsides to raid5+1:
You can lose EVERY disk in one vdev and still have your data
 
I removed your link; link to the thread, not a screenshot of it!

Sorry, dude, but when I see comments like those, followed up by the rest of your content, all I can suggest is: be prepared to lose data and learn the hard way.

Windows has been around longer than most of the alternatives; yes, it has occasional bugs, but so do all the others. Bagging out Windows in favour of something you have made clear you know nothing about either is just plain ignorant and arrogant. Good luck with it, but your current ideas and opinions are not looking good.

P.S. Placing the mirror in the same box, with the same PSU, same board, same controller/HBA, and same OS, defeats the purpose.
 
The thread no longer exists, which is why I linked to a screenshot of it.

You sound butthurt and I am sorry if I offended you but as a long time Windows user, I can attest to just how much it sucks. The latest Windows, 8.1, has given me so much trouble and made me lose my temper and snap in ways nothing else ever has. Windows 8, the Xbox One, Zune, Windows Phone, need I go on? Microsoft is full of incompetent dumbasses that couldn't make a decent product even if they had a gun to their heads. The company is one big f*ckup after another.

Anyways... back on topic. I have decided not to mirror the array, for obvious reasons, and will just implement RAID-Z3 along with ZFS. I still need help setting this up if anyone can assist. Should I go with something like OmniOS or FreeNAS? Can I configure the server to act as an external hard drive and connect it to my computer?

Any other useful information or tips you think I might need are welcome.
 
You'll still have issues when it comes time to upgrade your array. Unlike traditional raid, you cannot simply "add another disk". You'll need to add disks in sets called vdevs. You define how big a vdev is when you first build the system. Your array can consist of any number of vdevs, but they should all be the same size. For RAID-Z3, the minimum number of disks in a vdev is 4. With 14 disks, you could create 3x 4-disk RAID-Z3 vdevs and have a 12TB array. Obviously, a horrible capacity! You could also create two 7-disk RAID-Z3 vdevs, which would give you 32TB... a far cry from your 56TB raw capacity. Also note that these numbers don't include the ZFS overhead (~1-4%) or binary-decimal conversion losses (1024 vs. 1000).

For example:
You decide to put all 14 drives into one RAID-Z3 vdev. You buy ten more drives. In order to add the drives, you must back your data up somewhere, completely destroy the initial pool, and build a new one. Total capacity: 84TB, but you must have a backup! In addition, you're limited to the IO/s of a single disk.

Alternately:
You plan ahead, and only put 12 drives into a RAID-Z3 vdev. Two drives sit on your desk until you buy 10 more. You build a second 12-drive RAID-Z3 vdev and add it to the pool. Your capacity is now 72TB (2 x 9 data disks x 4TB). You gave up 12TB compared to the single-vdev option, but you have the IO/s of two disks instead of one.

As for acting like an external drive, I don't think that's likely. You can, however, have it act like a network drive. Solaris / OmniOS includes a CIFS server built in, so you can map it in Windows and treat it like a local disk.
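
To sketch what that looks like (the dataset name is just an example; sharesmb and the bundled kernel SMB service are standard on Solaris/OmniOS):
Code:
zfs create NAME_OF_POOL/media               (create a dataset to share)
zfs set sharesmb=on NAME_OF_POOL/media      (publish it over CIFS/SMB)
svcadm enable -r smb/server                 (make sure the SMB service is running)
In Windows, you would then map something like \\SERVERNAME\NAME_OF_POOL_media as a network drive.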

For OS choice, I personally have always used Solaris or OmniOS. I am partial to Solaris for no particular reason.

It's all a balancing act. What's important to you? Sequential transfer speed? IOs per second? Uptime? Integrity? Every configuration has pros and cons.

To create a raidz3 vdev:
Code:
zpool create -o version=28 -O version=5 POOLNAME raidz3 DISK1 DISK2 DISK3...
 
Data integrity and transfer speeds are most important to me. Before this build, I kept my data on 8TB external hard drives with RAID 1 and stored them in a large fireproof file cabinet when not in use.

I am fine with losing up to half of my capacity if it means my files will be safe, which is what I had planned with mirroring in the first place, but if I can achieve a certain level of data protection without sacrificing so much space, all the better.

I am going to go with OmniOS and ZFS, but I don't really know what RAID scheme to use, which is why I am coming to you guys. Before going into this, I read that I can replace drives with larger ones at any time by swapping out one drive at a time and rebuilding the array, and I figured I could do the same thing by adding drives and rebuilding. It is important to me that whatever I do (according to the advice I get), I am able to easily add and swap out drives at any time without having to destroy everything and start over, since I don't have another server to back everything up on, unless I go back to the mirroring idea.
 
If you place your data on a RAIDZ2, your chances of losing data to a hard drive failure are already slim. If you take regular snapshots, you are not likely to lose much data to accidental file deletion either. It is more likely that you lose a lot of data due to hardware malfunctions (controller, backplane, mainboard, memory, power supply), fire, natural disaster, or user interaction with root access. A second mirror pool in the same machine will not protect you against most of these. The right way to do this is to have a second server in a remote location, or at least offsite cold storage. If you can afford 24 drives, you can also build a second computer with less potent hardware for the backups, even if you cannot place it in a remote location.
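
For a flavour of what regular snapshots look like day to day, here is a minimal sketch (the pool/dataset and snapshot names are just examples):
Code:
zfs snapshot NAME_OF_POOL/media@2014-09-20     (take a point-in-time snapshot)
zfs list -t snapshot                           (list existing snapshots)
zfs rollback NAME_OF_POOL/media@2014-09-20     (undo everything since that snapshot)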

I think the first thing to do is to get familiar with the environment. Before I moved my data to ZFS, I had a system running for a few weeks to try the command-line interface and test some failure scenarios.
I just recently switched from an 11-disk RAIDZ3 pool to multiple single-disk ZFS pools with SnapRAID, because of the higher flexibility and much lower power consumption/noise.
Reading your comments I get the impression that you would be better off with an off-the-shelf NAS solution from Synology or QNAP.
 

I cannot afford a second server, nor off-site backup for the amount of data I have. I also could not afford 24 drives, which is why I only bought 14. I cannot afford to build a second server or PC at the moment; this server has already cost me $3,800 in parts. I will do anything I can to safeguard my files, but I am restricted by my means.


I don't have weeks; all my external and internal drives are full, and I am not going to spend more money on yet another external hard drive to use while I familiarize myself with a new operating system. Command line? What is this, 1980? I don't mean to come across as a dick, but eww; GUI or GTFO is who I am. As I said in my earlier posts, I am a noob and I do not know how to use, or care to ever use, a command-line interface. Next thing you will tell me is that I should learn to read maps, learn to drive a stick shift, or read books and use my imagination instead of waiting for them to come out as movies.

The whole reason I am doing this is the cost. Two Synology NAS units would cost me nearly $7,000, and I just cannot afford to keep buying external hard drives or NAS units; building my own server is the cheapest option in the end, plus I want to learn how to do this kind of stuff.



I have decided to go with RAID-Z2: one 12-drive vdev now, and a second one when I get 10 more drives, combined into a single storage pool.
 
You've got something completely backwards regarding GUI vs. CLI.

Even Microsoft is switching back to the CLI for their server OSes.

You're in the classic pick-two-out-of-three situation:
Enterprise features
GUI
Free

Pick any two. (napp-it being the only option that comes to mind that delivers all three)
 
Nitpick: Napp-it's enterprise features are not free, and you shouldn't count ZFS's (enterprise) features as being napp-it's.

But yeah, user dissing Windows and CLIs, without the proper budget and know-how but wanting it all is rather ironic ;)
 
Chellexelle, how much storage are you currently using? We can help you decide which configurations best fit your needs, but knowing the needs would help.
 
Just create two separate raidz2 arrays and rsync them.
Why rsync over ZFS send/recv?

Only the raid10 solution allows you to add drives 2 at a time. The others all require you to destroy your pool, build a new one, and copy your data over, then destroy your backup pool, build a new one, and copy the data back.
Yes, or you can replace every disk in your pool with larger drives. IOW, I agree that mirrors are a great solution for ZFS.
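
A rough sketch of that replace-in-place growth path (disk names are placeholders; let each resilver finish before swapping the next drive):
Code:
zpool set autoexpand=on NAME_OF_POOL                     (pool grows once every disk in the vdev is bigger)
zpool replace NAME_OF_POOL OLD_DISK1 NEW_BIGGER_DISK1    (swap one disk; repeat for each remaining disk)
zpool status NAME_OF_POOL                                (watch the resilver progress)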


I only bought 14 4TB drives and will add more later, and I plan on gradually replacing the 4TB drives with larger ones as they become more affordable, so a setup that lets me do that without having to destroy the data in the array is essential.
My 1st thought for 2 pools - a primary & a hot backup - with 14 total drives might sound a bit odd...

  • Primary: 8 HDs in 4 mirrors = 16TB usable capacity & solid performance. 8 is also a convenient number for affordable HBAs.
  • Backup: 6 HDs in RZ2 = 16TB usable capacity.

Downsides include:
  • no spare drives - You could degrade the backup pool to RZ1 (temporarily) if the main drops a disk or, perhaps, crack open 1 of your externals.
  • less flexibility expanding the backup pool - I'd personally feel OK with completely rebuilding the backup pool once in a while IFF the main pool was running well AND I had a spare HD for it just in case.
  • backup pool would offer less performance than main

I cannot afford to build a second server or PC at the moment; this server has already cost me $3,800 in parts.
Remember that the backup server doesn't need to be fancy; the main expense should be the drives, which you already own. An old dual-core with 2GB RAM would be 100% fine here, and using 6 HDs in the backup pool could let you skip an HBA by attaching direct to motherboard.
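
If it helps, a minimal sketch of that layout (pool names and disk identifiers are just examples):
Code:
zpool create main mirror DISK1 DISK2 mirror DISK3 DISK4 mirror DISK5 DISK6 mirror DISK7 DISK8
zpool create backup raidz2 DISK9 DISK10 DISK11 DISK12 DISK13 DISK14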
 
Chellexelle, how much storage are you currently using? We can help you decide which configurations best fit your needs, but knowing the needs would help.

[Photos of my external drive collection]

These are most of the external drives I have accumulated over the years. I also have a box in storage with old internal drives and external drives that have failed.

My current external drives are:

80GB
300GB
500GB
1TB x 3
1.5TB
2TB x 2
3TB
4TB x 4
6TB
8TB x 5

Total = 74.38TB

4 of my Micronet 8TB drives have a RAID 1 option, which I am using, so I am only getting 4TB out of each. I am not planning on moving all my data to this new server, just my music and videos. I will eventually build two more servers for backups, my porn, and everything else.

According to THIS, I will have 80TB total, which will give me 20TB of space for my media and 60TB of additional space for the future.

I am not doing all of this just to migrate my data from external drives to one single device and only get a few extra TB of space. I am doing this so I won't need to continue to waste thousands of dollars on external drives while giving me some level of data protection and ample space for the future.

With all the money I have spent on external drives, which can fail in any number of ridiculous ways (like losing its format when plugged into the computer; losing its format! Have you ever heard of such a stupid thing!?), I could have built two more servers just like the one I am building now.
 
I just had a drive die in my "raid60"-style array. (The one you're considering with the 12x drives now, 12x later, only I have 2TB drives instead of 4TB.) I thought you might like to see what it's like to replace a failed disk.
Code:
root@nas:~# devfsadm                                                                     (Scans for new disks)
root@nas:~# echo|format|grep SEA                                                         (Get the identifier of the new disk)
       0. c0t5000C50034F36CFFd0 <SEAGATE-ST32000444SS-0006-1.82TB>
       1. c0t5000C50034EB58BBd0 <SEAGATE-ST32000444SS-0006-1.82TB>
       2. c0t5000C50034F44577d0 <SEAGATE-ST32000444SS-0006-1.82TB>
       3. c0t5000C50034E85E4Bd0 <SEAGATE-ST32000444SS-0006-1.82TB>
       4. c0t5000C50034F422B7d0 <SEAGATE-ST32000444SS-0006-1.82TB>
       5. c0t5000C50034E85C3Fd0 <SEAGATE-ST32000444SS-0006-1.82TB>
       6. c0t5000C50040CF0C4Fd0 <SEAGATE-ST2000NM0001-0002-1.82TB>
       7. c0t5000C500409AE567d0 <SEAGATE-ST2000NM0001-0002-1.82TB>
       8. c0t5000C500409946FFd0 <SEAGATE-ST2000NM0001-0002-1.82TB>
       9. c0t5000C50034FBE17Bd0 <SEAGATE-ST32000444SS-0006-1.82TB>
      10. c0t5000C5003C95ABDFd0 <SEAGATE-ST32000444SS-0006-1.82TB>
      11. c0t5000C50034F3DFC7d0 <SEAGATE-ST32000444SS-0006-1.82TB>
      12. c0t5000C50034F3CC5Fd0 <SEAGATE-ST32000444SS-0006-1.82TB>
      13. c0t5000C50034F3E81Fd0 <SEAGATE-ST32000444SS-0006-1.82TB>
      14. c0t5000C50034EA0857d0 <SEAGATE-ST32000444SS-0006-1.82TB>
      15. c0t5000C50034FF6167d0 <SEAGATE-ST32000444SS-0006-1.82TB>
      16. c0t5000C50034F3DECFd0 <SEAGATE-ST32000444SS-0006-1.82TB>
      17. c0t5000C50034F421C7d0 <SEAGATE-ST32000444SS-0006-1.82TB>
      18. c0t5000C50034F3DAEBd0 <SEAGATE-ST32000444SS-0006-1.82TB>
      19. c0t5000C50034FF1B8Bd0 <SEAGATE-ST32000444SS-0006-1.82TB>
      20. c0t5000C50034F42DB7d0 <SEAGATE-ST32000444SS-0006-1.82TB>
      21. c0t5000C50034F3D3ABd0 <SEAGATE-ST32000444SS-0006-1.82TB>
      22. c0t5000C50034E011D3d0 <SEAGATE-ST32000444SS-0006-1.82TB>
      23. c0t5000C5003C95A907d0 <SEAGATE-ST32000444SS-0006 cyl 60798 alt 2 hd 255 sec 252>    (Gee, guess which one is the new disk?)
root@nas:~# zpool replace pool c0t5000C500409C667Fd0 c0t5000C5003C95A907d0               (zpool replace POOL_NAME BAD_DISK GOOD_DISK)
root@nas:~# zpool status pool                                                                      (Checking to make sure it worked.)
  pool: pool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function in a degraded state.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Sat Sep 13 17:56:35 2014
    29.8G scanned out of 11.7T at 587M/s, 5h46m to go
    1.24G resilvered, 0.25% done
config:

        NAME                         STATE     READ WRITE CKSUM
        pool                         DEGRADED     0     0     0
          raidz2-0                   DEGRADED     0     0     0
            c0t5000C50034F36CFFd0    ONLINE       0     0     0
            c0t5000C50034EB58BBd0    ONLINE       0     0     0
            c0t5000C50034F44577d0    ONLINE       0     0     0
            c0t5000C50034E85E4Bd0    ONLINE       0     0     0
            c0t5000C50034F422B7d0    ONLINE       0     0     0
            c0t5000C50034E85C3Fd0    ONLINE       0     0     0
            c0t5000C50040CF0C4Fd0    ONLINE       0     0     0
            c0t5000C500409AE567d0    ONLINE       0     0     0
            c0t5000C500409946FFd0    ONLINE       0     0     0
            replacing-9              DEGRADED     0     0     0
              c0t5000C500409C667Fd0  UNAVAIL      0     0     0
              c0t5000C5003C95A907d0  DEGRADED     0     0     0  (resilvering)
            c0t5000C50034FBE17Bd0    ONLINE       0     0     0
            c0t5000C50034F3DFC7d0    ONLINE       0     0     0
          raidz2-1                   ONLINE       0     0     0
            c0t5000C50034F3CC5Fd0    ONLINE       0     0     0
            c0t5000C50034F3E81Fd0    ONLINE       0     0     0
            c0t5000C50034EA0857d0    ONLINE       0     0     0
            c0t5000C50034FF6167d0    ONLINE       0     0     0
            c0t5000C50034F3DECFd0    ONLINE       0     0     0
            c0t5000C50034F421C7d0    ONLINE       0     0     0
            c0t5000C50034F3DAEBd0    ONLINE       0     0     0
            c0t5000C50034FF1B8Bd0    ONLINE       0     0     0
            c0t5000C50034F42DB7d0    ONLINE       0     0     0
            c0t5000C50034F3D3ABd0    ONLINE       0     0     0
            c0t5000C50034E011D3d0    ONLINE       0     0     0
            c0t5000C5003C95ABDFd0    ONLINE       0     0     0

errors: No known data errors

The raid calculators are good for rough estimates, but they're typically off by a bit. If I recall correctly, my array's usable space is 35.46TB. According to that calculator, I should have 40TB. If I were you, I'd expect closer to 70TB than 80.

Why rsync over ZFS send/recv?
Either works well. I'm just familiar with rsync, so I use it.
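
For anyone curious, a send/recv version of the nightly sync might look something like this (snapshot and dataset names are just examples; receive -F rolls the target back to match the source):
Code:
zfs snapshot NAME_OF_POOL1@night1                                             (snapshot the source)
zfs send NAME_OF_POOL1@night1 | zfs receive -F NAME_OF_POOL2/copy             (full copy the first time)
zfs snapshot NAME_OF_POOL1@night2
zfs send -i night1 NAME_OF_POOL1@night2 | zfs receive -F NAME_OF_POOL2/copy   (incrementals afterwards)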
 
I just had a drive die in my "raid60"-style array. (The one you're considering with the 12x drives now, 12x later, only I have 2TB drives instead of 4TB.) I thought you might like to see what it's like to replace a failed disk.

I know what it's like to experience hard drive failure; I have been through it 7 times that I can recall, mostly with external drives, which is the worst. I know that drives will fail; it is like rain, taxes, and death. I expect it to happen again. Losing a drive in a RAID-Z2 array isn't that bad: you just rip the damaged drive out, slot the new drive in, rebuild the array, and then get down on your knees and pray to whatever deity you believe in (if any) that two or more other drives don't fail while the array is rebuilding :).
 
The raid calculators are good for rough estimates, but they're typically off by a bit. If I recall correctly, my array's usable space is 35.46TB. According to that calculator, I should have 40TB. If I were you, I'd expect closer to 70TB than 80.

A 12-drive RAIDZ2 with 4TB disks should result in a ~36.48 TB or ~33.18 TiB pool using ashift=12 and predominantly 128k blocks.
With ashift=9 this would be ~39.84 TB or ~36.23 TiB. Of course these are theoretical maximums; the usable space gets smaller with a lot of small files and a lot of metadata.
What ZFS shows is lower because it reserves a bit for metadata. You can easily find out how much you would get by creating a pool on sparse files.
12 disks is not optimal from an efficiency point of view (with ashift=12); there is a lot of padding involved. 13 or 14 would be more efficient.
Normal RAID6 calculators are not very precise for ZFS, as they do not consider the stripe layout.
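
A rough sketch of the sparse-file test (scaled down to 4GB per "disk", since the usable/raw ratio is the point, not the absolute numbers; the paths and pool name are just examples):
Code:
mkfile -n 4g /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5 /var/tmp/d6 /var/tmp/d7 /var/tmp/d8 /var/tmp/d9 /var/tmp/d10 /var/tmp/d11 /var/tmp/d12
zpool create testpool raidz2 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5 /var/tmp/d6 /var/tmp/d7 /var/tmp/d8 /var/tmp/d9 /var/tmp/d10 /var/tmp/d11 /var/tmp/d12
zfs list testpool              (the AVAIL column shows the usable space for this layout)
zpool destroy testpool
rm /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4 /var/tmp/d5 /var/tmp/d6 /var/tmp/d7 /var/tmp/d8 /var/tmp/d9 /var/tmp/d10 /var/tmp/d11 /var/tmp/d12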
 
Nitpick: Napp-it's enterprise features are not free, and you shouldn't count ZFS's (enterprise) features as being napp-it's.

But yeah, user dissing Windows and CLIs, without the proper budget and know-how but wanting it all is rather ironic ;)

No, Napp-it adds a GUI to a free OS that has enterprise features. You get all three.
 
I finally finished putting my new server together, but I am now confused and frustrated by how difficult it is to get an operating system onto my damn machine. I had decided to go with OmniOS, but I am annoyed because it comes as a usb-dd file with no instructions on what to do with it. So I guess my options now are Solaris, OpenIndiana, or Ubuntu with "Native ZFS for Linux" added in. I strongly prefer Ubuntu because it is the only Linux OS I am familiar with, the only one I know how to install and use (somewhat), and the only one that actually looks like a real OS; but because this is a media server and I am hardly ever going to be looking at the OS, it really doesn't matter.

What I need is an OS that is easy to download, write to a USB stick (from within Windows), and install; that makes it easy to set up the ZFS array (with a GUI); and that makes it easy to set up a CIFS server (with a GUI) so I can map it in Windows and see it as a network drive.
 
It's not self-explanatory, but it is quite easy:

1. Download the OmniOS ISO and burn a CD, or
download the USB-dd image and copy it to a USB stick.
You can use any imager tool, for example the Windows imager tool
from my site: http://napp-it.org/manuals/to-go.html

2. Boot this CD or stick and run the setup.

3. Now you need to manually set up the network
(the most difficult step), see http://napp-it.org/downloads/omnios.html

4. Set up napp-it online via one command:
wget -O - www.napp-it.org/nappit | perl

Connect to the GUI via web browser and set up ZFS, shares, users...

More: http://www.napp-it.org/doc/downloads/napp-it.pdf
Download PDF manuals to learn Solaris: http://archive.today/snZaS
(the Oracle Solaris Express manuals are perfect for OmniOS)
 
Hey Gea, I am stuck. I installed OmniOS and tried following your instructions for the rest, but when I enter "ipadm create-if e1000g0" I get "Could not create e1000g0: Could not open DLPI link".

Not having a GUI environment to work in is pissing me off.
 
Lol... if you expected to get ZFS up and running on a custom-built server without ever using a command line, you are sorely misguided...
 
First, enter:
dladm show-link

This lists the available NICs.
e1000g0 is the name of the first 1Gb Intel NIC.
If you have another network adapter, use its name.
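
From there, a minimal sketch of the remaining steps, assuming dladm reports an interface named igb0 (substitute whatever name it actually shows):
Code:
ipadm create-if igb0                     (create the IP interface)
ipadm create-addr -T dhcp igb0/dhcp      (get an address via DHCP)
ipadm show-addr                          (confirm the address)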
 