OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Since it's a 3-way mirror, I'd say offline the drive (command line: zpool offline <pool> <device ID>), pull the drive, then go to Disks > Replace. As long as it doesn't report as a 4k drive you should be gold. If it gives you the error I got, you'll need to find another older-model drive that reports 512-byte sectors, or else use another set of drives to rebuild the pool with 4k sectors (ashift=12). You can always put a 512-byte drive in a 4k vdev, but you cannot place a 4k drive in a 512-byte vdev.
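For reference, a rough command-line sketch of that offline-then-replace sequence (the pool name tank and the device IDs are placeholders; check zpool status for yours):
Code:
# take the failing disk out of service; the 3-way mirror stays online on the other two
zpool offline tank c2t3d0
# after physically swapping the drive, replace the old device with the new one
zpool replace tank c2t3d0 c2t4d0
# watch the resilver progress
zpool status -v tank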

EDIT: DOH! You can use Disks > Hotswap > Set-Offline to prep the disk for removal.

- When using Hotswap to offline the drive, do I then replace the drive while everything is still running, or should I only use Hotswap to offline the drive, shut down napp-it + ESXi, replace the drive in the same slot, start ESXi + napp-it and then configure the drive?
- I also read somewhere in this thread that the disk has to be initialized before using it?
- The Hotswap menu states that hotswap is only possible on a supported disk controller; I'm assuming LSI-based M1015 cards are supported? (passed through by ESXi)

I'm pretty sure the replacement 2 TB drive uses 512-byte sectors, so that shouldn't pose any problem.
 
All the napp-it menus under Disks > Hotswap are only needed (and only sometimes helpful) with non-hotswap-capable controllers.
Your IBM 1015, when flashed to 9211-IT, is a real hotplug controller.

If you hot-plug/unplug a disk, it is discovered by the OS.
You do not need to care about anything.

- unplug the damaged disk
- insert a new disk (with a valid partition but without a ZFS label)
- do a Disks - Replace (missing -> new)

- your removed disk stays in the inventory as missing until the next reboot
(does not matter)

If your new disk is fresh from stock and has problems with parted:
- do a Disks - Initialize (rollover menu)

If your new disk has a ZFS label (was part of a ZFS pool):
- repartition/reformat; a Disks - Initialize should be enough

- This is the same with napp-it 0.8 and 0.9

If your new disk is a 4k disk:
- maybe you can force the disk to 512B in sd.conf
(I have not tried it myself but this may work too)
 

Thanks _Gea,
To be safe, I think I want to initialize the disk first before using it.

Can I use the same cable that the broken disk was attached to for my new drive?
 
Unless the cable is damaged in some way, or it was the cable that was causing the problems, it will be fine to reuse it :)
 
If your new disk is a 4k disk:
- maybe you can force the disk to 512B in sd.conf
(I have not tried it myself but this may work too)

I thought about that but since my array is all 4k 3TB drives I figured there was probably no way to force just one. I'm re-arranging my pools when I get my next shipment of drives in so I can have the new pool be ashift=12. It was my noobie mistake not forcing it on the pool in the first place.
 
Well, I just checked how much space I'm using on my 3-way mirror and saw that I have about 500 GB free, so I decided to order a 1 TB drive for my 3-way mirror and also a 2 TB drive to add to my other volume, so both volumes will become 3-way mirrors.
Going to visualize my actions :D and replace + add the drives tomorrow evening.
Does resilvering kill performance or is it a background process? I will try to shut down as many VMs as possible during the resilvering process.

Will report back with feedback.
 
Resilvers and scrubs are both background processes; they have knobs that affect how much they impact performance.

The defaults say scrubs should never impact performance, and resilvers only very slightly. This can be bad and cause scrubs to take months and resilvers to take weeks. I normally adjust mine at night to be very impacting, and during the day to be only slightly impacting.
 
Can you adjust the priority of resilvering from the napp-it GUI, or do you need to adjust this through the command line? (I'm running OI.)
 
command line, I use this:

echo "Going Fast"
echo "zfs_scrub_delay/W0t1" | sudo mdb -kw
echo "zfs_resilver_delay/W0t1" | sudo mdb -kw
echo "zfs_scan_idle/W0t2" | sudo mdb -kw

defaults are:
scrub_delay = 5
resilver_delay = 3
scan_idle = 50

This means scan_idle is how long there has to be NO I/O before it does anything, and the delay is how long to pause, in ms, between each I/O operation.

So the normal behaviour is to wait for 50 ms without any I/O on the disks before it even does an operation, and then only do an operation every 3 or 5 ms.

Now my fast settings make it perform more like a normal RAID rebuild, with the corresponding impact.
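And to drop back to the stock behaviour afterwards, the same mdb mechanism with the default values listed above should work (a sketch, untested on your box):
Code:
echo "Going back to defaults"
echo "zfs_scrub_delay/W0t5" | sudo mdb -kw
echo "zfs_resilver_delay/W0t3" | sudo mdb -kw
echo "zfs_scan_idle/W0t50" | sudo mdb -kw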
 
I'm stuck :(

I removed the faulted disk from my server and replaced it with a 1 TB seagate disk

Then I tried to replace the drive but the drive to be replaced was not listed:
Bf7Oxmt.png


I then removed the faulted disk from the list
YcxDWhp.png

This worked

I then tried to add the drive
z5GJvBT.png


But I'm getting the dreaded sector alignment message
FXNLC71.png


I tried to initialize the disk but I can't find that option in the menu..
7gqMQ9w.png


What to do..?
 
You cannot replace a 512B disk in an ashift=9 vdev with a 4k disk
- you either need a 512B replacement, or you may try to force 512B for that disk in sd.conf.

(Disk initialize will not help - that option is available in newer napp-it 0.9 releases)
 
Hi _Gea,

I've checked the sd.conf but it's a little confusing:
f400lxA.png


6uPNiIv.png


What do they want me to type here to force the disk to 512 bytes:
"ATA plus 6 spaces ST1000DM003-1CH1", "physical-block-size:512"; ???
Or to force the Samsung drives to use 4096:
"ATA-SAMSUNG HD103SJ-00E4", "physical-block-size:4096"; ??

Does the change take effect immediately or do I need to reboot first?

If I decide to re-create the vdev with ashift=12 (which seems like the best option going forward), is there already a consensus on the best way to create a future-proof ashift=12 vdev? (Looking at the options below.) I mean, if I move from OI to OmniOS, which option should I use? (Just the sd.conf option, or the modified zpool binary?)
Vx1lHQE.png
 
If you like to force ashift via sd.conf:
edit it with physical-block-size:512, reboot and create a test pool to check

(or unconfigure/configure the disk)

If you want to recreate a pool with ashift=12:
- edit sd.conf with physical-block-size:4096 for the older disks, or
- create the vdev with one disk that reports a 4096 block size (it can be replaced with a 512B disk later), or
- use the modified zpool binary (I have not heard of problems, but it is possibly unstable)
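A sketch of what such sd.conf entries could look like (the inquiry strings below are examples taken from this thread and must match your disk's vendor/product string exactly, padding included; use only one of the two alternatives, then reboot or unconfigure/configure the disk):
Code:
# alternative A: make a 4k disk report 512B physical sectors (keeps an ashift=9 vdev usable)
sd-config-list = "ATA     ST1000DM003-1CH1", "physical-block-size:512";

# alternative B: make an older 512B disk report 4k (so a new pool is created with ashift=12)
sd-config-list = "ATA     SAMSUNG HD103SJ ", "physical-block-size:4096";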
 

Well I did buy a 1TB and a 2 TB drive so might as well use them.

So would this be an option?
1) Create a new pool on the 2 TB drive, migrate my VMs off the ashift=9 pool to this temporary NFS pool.
2) Create an ashift=12 pool on the new 1 TB disk (since it appears to report as 4096 anyway :( )
3) Delete the old pool on the 512-byte disks
4) Edit sd.conf (+ reboot) to force the SAMSUNG drives to 4096, like so:
Code:
sd-config-list = "ATA     SAMSUNG HD103SJ ", "physical-block-size:4096";
5) Then add the 512-byte disks to the ashift=12 pool (napp-it: Disks, Add), selecting one of the 512-byte drives and adding it to the already existing pool (which then becomes a mirror automatically?)
(you mentioned that a 4096-blocksize disk can be replaced with 512 bytes later, why would you want to..?)
6) Move my VMs from the temporary 2 TB pool to the new ashift=12 pool?

EDIT: should I use atime=off for an NFS datastore for VMware?
 
Think about the following basics:

- ashift is a vdev property (not a pool or disk property)
- if you have an ashift=12 vdev, you can add (mirror; the ashift value is kept) or replace a faulted disk with 512B or 4k disks
- if you have an ashift=9 vdev, you can add or replace only 512B disks (newer disks, 1TB+, are nowadays always 4k)

This is the reason to build ashift=12 vdevs even with older 512B disks.
If you already have an ashift=12 vdev, you do not need the sd.conf manipulation.

atime=on is a performance killer; disable it when there is no need to log last file access times due to security policies.
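Turning it off is a one-liner per filesystem (a sketch; substitute your own pool/filesystem for tank/nfs_vm):
Code:
zfs set atime=off tank/nfs_vm
zfs get atime tank/nfs_vm    # verify the new setting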
 

Thanks _Gea,

That is what I'm so confused about.
If I click in napp-it on Pools, Add vdev, I can only add my new drive to an already existing pool (which is ashift=9 because its disks are 512 bytes):
2adNKjz.png


If I click on Pools, Create Pool, I can select my drive and the pool that is created is ashift=12.
7Vo82cj.png


CZchyJr.png


So to keep it simple for myself:
1) Does the 2nd picture show the correct menu option to create an ashift=12 vdev? (If not, what is the correct way to do this from napp-it?)
I currently only have the 1 new drive; the other 2 (512-byte) drives are pumping their VMs to another disk. When I create the pool I want to add all 3 at once as a mirror.
2) If I do this I have 3 disks, of which 2 still report 512 bytes. If my 1 disk that reports 4k dies, can I replace it with another 4k drive even though the remaining drives are 512 bytes? (Because the pool was first created with ashift=12?)


EDIT: I noticed that I can't create a pool with one disk and then add another drive to create a mirror. (so this needs to be done in step 1 by adding all drives)
 
Well, yes, this is the way to create a single-drive vdev/pool with ashift=12, but there is no point in doing that with the 1TB. Now, what you talked about with the 2TB makes sense: use zfs send/receive to copy your pool to the 2TB, destroy the original pool, create vdevs using the 1TB every time (physically), that way they're ashift=12, then zfs send/receive the data back.

I just did this for testing purposes, with old 1.5TB 512b drives, a 2TB 4k, and a 3TB 4k.

I made a 3-drive RAIDZ1 with the 1.5TBs, ashift=9. I zfs send/received this to the 3TB, then made a 5-drive RAIDZ1 with four 1.5TBs and one 2TB, which gave me an ashift=12 vdev. I then replaced the 2TB with a fifth 1.5TB and resilvered (the pool was empty so resilvering was very quick), then zfs send/received from the lone 3TB to the new pool.
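A rough sketch of that send/receive shuffle with placeholder names (vol01 as the old pool, temp2tb as the temporary pool on the 2TB):
Code:
# snapshot the old pool recursively and copy it to the temporary pool
zfs snapshot -r vol01@move
zfs send -R vol01@move | zfs receive -F temp2tb/vol01
# destroy the old pool and recreate it with ashift=12 (a 4k disk in the mix), then copy back
zpool destroy vol01
zfs snapshot -r temp2tb/vol01@back
zfs send -R temp2tb/vol01@back | zfs receive -F vol01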
 
create vdevs using the 1TB every time (physically).
Hi Aesma,

What do you mean by creating the vdevs using the 1TB every time (physically)?
My plan now is:

- Wait until I've moved everything of the 512 bytes drives (with ashift=9 pool)

- Then destroy the old zfsfolder + pool.

- Edit sd.conf just for good measure to make the old 512-byte drives report as 4k (0x1000),
using this method: http://hardforum.com/showthread.php?p=1039675048&highlight=conf#post1039675048

- Create a new Pool using the 3 drives (1 new + 2 old) to create a new mirror

- Check the status to see if ashift=12

- Move my data back

I think I get confused because of the mixed usage of the terms vdev and pool across the internet. (If I understand correctly, the mirror I'm creating is my vdev, ashift=12 is a property of that mirror, and the mirror in turn sits in a pool.)
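For what it's worth, zpool status lays out that same hierarchy; a sketch of how it lines up, plus a common (if low-level) way to confirm the ashift once a pool exists:
Code:
zpool status vol01
#   pool: vol01          <- the pool
#     mirror-0           <- the vdev (ashift is fixed here at creation time)
#       c3t0d0
#       c4t0d0
#       c5t0d0
# dump the cached pool config and look for the ashift value per vdev
zdb | grep ashift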
 
Ok,
Current status:
- I've moved all my data from the old pool, vol01
- destroyed the vol01 pool.
- edited the sd.conf (reboot)
- it did not do what I expected [notice un 4,6,7]:
YWtAm9B.png


- commented out my edit to sd.conf and saved.. the drives appear to behave without the sd.conf change? [the original pool with ashift=9 was once created under Nexenta, before I discovered napp-it!]
r07b2lq.png


- updated my OI with all the updates (reboot)
- updated Napp-IT to 0.9a9 (reboot) (@_Gea which looks very slick btw, excellent job my friend!)
- Tried to re-create pool vol01 (not possible, it already exists)
- Initialized all 3 drives
- Created a temporary mirrored pool vol077 (which did work)
- Deleted the pool vol077
- Again tried to create pool vol01 (because it wasn't listed under zpool import anymore after creating vol077), but still got the same error: "Pool or folder with this name exists"
- Gave up on that and created another mirrored pool: vol00
hxWf5DZ.png


Looks like I'm in the clear now..? [sweating while awaiting answer..]
kR3CEl3.png
 
Nice to hear - you did it.

btw
with current napp-it you can check the menu
Disks >> Details >> prtconf diskinfos to display the physical sector size of the disks
 
Thanks _Gea,
I'm so tired that even matchsticks don't work anymore.. next up is to re-create my jobs (I found something odd during testing, but I'll save that question for tomorrow :) ) and finally add my 2 TB disk to my vol02 mirror vdev.
Is there a proper way from napp-it to delete the ZFS folder + pool information and everything else on that new 2 TB drive, so that I don't get bugged by that "Pool or folder with this name exists" error?

Thanks again @_Gea and everyone else who chipped in..!
Off to bed...
 
What I meant was that as long as there is a 4K drive in the mix when you create a vdev, it will have ashift=12, so no need to play with drivers or whatever.
 
Thanks Aesma,

So I'm back after a good night's sleep for 2 tasks:

Task 1)
The reason I found that disk to be broken was just luck. So I was wondering if the "Alert To" email job would proactively warn me that a disk is failing.

I've set up the job and tried running it to see what the output would look like, but I'm not receiving any mails.
When I click the email job log, it says: cannot open /var/web-gui/_log/jobs/16556546544.log: no such file or directory.
Re-creating the job hasn't helped.

The "Status To" email job has also been configured, and when I run it I receive emails perfectly, so the mail configuration seems to be in order.
Does the Alert job produce mail output even if there is no alert when I run it by hand?

Task 2)
Add my new 2 TB drive to my mirrored vdev to create a 3-way mirror.
To do this from napp-it:

Use Disks > Add, select my new disk and select one of the 2 already existing disks of my mirrored vdev:
j0vSFXm.png
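Under the hood this should correspond to a zpool attach against one of the existing mirror members (a sketch with placeholder device IDs; vol02 is the pool from above):
Code:
# attach the new 2TB disk to an existing member of the mirror vdev
zpool attach vol02 c3t1d0 c3t5d0
# the vdev resilvers onto the new disk and becomes a 3-way mirror
zpool status vol02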

EDIT:
Task 2 is completed; the way to do this is described above! About 511 hours to go before resilvering is complete.
So I'll try that speed tip mentioned earlier..


Some feedback on napp-it:
I love the new way that napp-it shows what it's been doing at the bottom. If you mouse over the bottom, the log jumps up but also disappears quite quickly; it would be nice
if it would just stay put so you can read the text. [Seen with Google Chrome and Firefox.]
 
Alright, I'm stumped. I got all the rest of the disks online, and then when I tried to start a zfs send to the new pool it crashed. When I rebooted, I found this:

BLo7pel.png


I'm fairly sure I wouldn't go from 1 dead drive (unavail) to 5 dead drives.

What is the best way to mount this read-only and attempt to copy the data off (which is what I was planning on doing anyway)?

EDIT: And now it looks like a good chunk of my brand-new drives have started acting up. That pool was working just fine when I was trying to rsync last night.

tsMWdUZ.png
 
Right now it's running using SFF-8087 cabling on 3 IBM M1015 controllers. What would be the best way to test the controllers since I don't have any in reserve and no real funds to buy more at this point?
 
MistrWebmaster, try replacing the PSU and see if the errors go away.

Or perhaps you're daisy-chaining too many power connectors for the hdds?

EDIT: Try removing the power from most of the hdds and see if the remaining powered ones go online?
 
All the drives are showing up in the LSI control utility, and it's a single-rail, 900-something-watt PSU. However, it is running a Norco 1-to-8 molex expansion. I'll try re-wiring it with the first few rows using the straight PSU molex connectors. Otherwise I'll have to track down another PSU.
 
Gea, many thanks for putting together and maintaining napp-it. I am trying for the first time to back up my Nexenta boxes. As suggested, I used the latest OmniOS OVA:

http://omnios.omniti.com/media/OmniOS-bloody-first-boot.ova

Then I followed the all-in-one how-to (including the OmniOS-specific part on installing VMware tools).

I was able to install napp-it successfully and I can log in to the web GUI, but when I try to do anything, like create a pool, I get the following error on the web page:

Code:
Perl API version v5.16.0 of IO::Tty does not match v5.14.0 at /usr/perl5/5.14.2/lib/i86pc-solaris-thread-multi-64int/DynaLoader.pm line 213.
Compilation failed in require at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7.
Compilation failed in require at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/Expect.pm line 22.
Compilation failed in require at /var/web-gui/data/napp-it/zfsos/_lib/zfslib.pl line 1874.
BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/_lib/zfslib.pl line 1874.

Any suggestions on how to fix?
 

Seems like I have a newer version of IO::Tty than that of the installed Perl.. weird. Should I not be using bloody as a base for napp-it? I will try the stable version tomorrow and see if it's any different. I'm sadly too new to Solaris to just recompile/fix IO::Tty to the right version.
 
MistrWebmaster, try replacing the PSU and see if the errors go away.

Or perhaps you're daisy-chaining too many power connectors for the hdds?

EDIT: Try removing the power from most of the hdds and see if the remaining powered ones go online?

If I could hug you I would. I completely removed the new pool and the entire original array came back online. I absolutely overloaded the molex expander from Norco. Now I just need to figure out a way to power all 6 backplanes without the use of expanders, or at least not chain all of them off one massive expander like I did. Got any tips? This is the PSU my server is running (since it's damn near impossible to find a generic ATX server-grade PSU): http://www.amazon.com/gp/product/B00284AJ1G/ref=wms_ohs_product?ie=UTF8&psc=1
 
I was able to install napp-it successfully and I can log in to the web GUI, but when I try to do anything, like create a pool, I get the following error on the web page:

Code:
Perl API version v5.16.0 of IO::Tty does not match v5.14.0 at /usr/perl5/5.14.2/lib/i86pc-solaris-thread-multi-64int/DynaLoader.pm line 213.

Any suggestions on how to fix?

The regular wget installer for the ISO takes care of that IO::Tty mismatch.
You can manually copy in the correct IO::Tty:

try either
cp /var/web-gui/data/tools/omni_bloody/. /var/web-gui/data/napp-it/
or
cp /var/web-gui/data/tools/omni_stable/. /var/web-gui/data/napp-it/

more
http://napp-it.org/downloads/omnios.html
 

Task 1: email alert
- alerts are only sent when an error occurs

Task 2: add a disk to an already created basic or mirror vdev
- use menu Disks - Add

Menu Pool - Extend:
- creates a whole new vdev

About the mini log:
If you activate the top-level menu Edit and then click on Log, you can see the complete log
(requires the monitor extension or a monitor evalkey)
 
I just have a question regarding upgrading my ZFS pool. It has 2 x 6 HDDs in 2 vdevs, all 2TB HDDs. Can I upgrade just one vdev to 4TB without also upgrading the other to 4TB HDDs, or do I need to go all in?
 
Does anyone use Crashplan to back up their ZFS pool? I have Crashplan installed on OI and have it back up my ZFS pool. It works and backs everything up; however, whenever I reboot the machine, Crashplan thinks that everything in the pool has changed and backs it all up again.

Any ideas what might cause this issue?

Thanks,
P.S. I could have sworn I posted this earlier today but do not see my post. So if you see a dup I apologize.
 

Gea, thanks! I decided to just re-install with the stable ISO; I didn't realize the OVA was the "bloody" version. After redoing it, it works like a charm.
 

Thanks Gea,
Task 1:
In that case, I'll have the alert task run every day in case something happens!

Minilog:
That mini log has been hidden away quite nicely, but I found it now, thanks!


I just have a question regarding upgrading my ZFS pool. It has 2 x 6 HDDs in 2 vdevs, all 2TB HDDs. Can I upgrade just one vdev to 4TB without also upgrading the other to 4TB HDDs, or do I need to go all in?
I think you can add bigger drives at your own tempo. Say you have 2 drives of 1 TB: you can add a 3 TB drive today and another one next year, but you can't use the extra 2 TB of space (expand) until you've replaced all your 1 TB drives with 3 TB drives. It might also depend on what kind of vdev you've created.
 

That is my usual strategy too with my filer and backup systems.
I started with a pool built from 2 x RAID-Z2 of 1 TB disks.
After some time, when I needed more space, I replaced all 1 TB disks in one vdev with 2 TB disks.
The next step is to replace the other 1 TB disks in the second vdev with 3 or 4 TB disks.

No problem, apart from the pool being unbalanced, so sometimes performance is only the same as with one RAID-Z vdev; but mostly it is better and mostly not a limitation.
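The command-line mechanics behind that kind of in-place grow look roughly like this (a sketch with placeholder pool/device names; autoexpand has to be on for the extra space to show up automatically once the last disk of the vdev is replaced):
Code:
zpool set autoexpand=on tank
# replace the disks of one vdev with bigger ones, one at a time,
# waiting for each resilver to finish before starting the next
zpool replace tank c4t0d0 c5t0d0
zpool status tank        # wait for "resilver completed", then do the next disk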
 