OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

One nice thing about ZFS is that random writes tend to get turned into sequential writes...
 
You're probably right. Things like this are one reason I let other people slide down the bleeding edge :)
 
Hi,
Can anyone offer me some advice on setting up ZFS with my existing WD 4k 2TB drives?

I currently have 4 of them and am using 3 in a FreeNAS system set up as raidz, with the 4th drive as a spare.

I plan on moving to OpenIndiana and napp-it. Reading through this thread makes me wonder if this is possible with my drives. I have installed OI and napp-it, and napp-it recognises the pool of drives. Is it just a matter of importing them into the newer version of ZFS? If so, what about the special formatting that FreeNAS did for the 4k drives?

Because of the 4k problem, is it wise to destroy the data on the drives and start over again with the newer version of ZFS?
 
Hey... I have a problem when I try to install napp-it on OpenIndiana 151 dev. I think it is because of a Perl update.
see http://pastebin.com/VLeyqBQG

Thanks

Please retry the newest installer:
wget -O - www.napp-it.org/newest | perl

(I have removed the 'use strict' command, which seems to produce the error.)

Gea
 
I thought the issue was that if you had them formatted "the wrong way", OpenSolaris wouldn't even recognize the pool. I believe if you can import the pool, you should be good to go.

My suggestion: if you are using less than half the pool, remove the spare from the pool, create a temp pool on it, and copy everything from the working pool to the temp pool. Import the temp pool to OI. If it works, destroy the temp pool and add that drive back as a spare. If it fails, you can blitz the 3 drives under FreeNAS and format them the OpenSolaris way.

On a different note: do you really need a spare? Currently you are getting 2 drives' worth of usable space, so I would prefer a raidz2 - same number of drives and no resilvering while the spare is coming into service. Also, you might want to consider a raid10 (create pool foo mirror X Y mirror P Q). That gives you better all-around performance (particularly reads), and the odds of two drives failing such that you are hosed is only 33%.
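To make the raidz2-vs-mirrors comparison concrete, here is a rough sketch of the two layouts with four drives (the pool name "tank" and the cXtYd0 device names are placeholders; substitute your own from format or the napp-it disks page):

# raidz2 across all four drives: two drives of usable space, any two drives may fail
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0

# striped mirrors ("raid10"): also two drives of usable space, generally better random I/O
zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0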
 
Updated OI 151 desktop release
http://dlc-int.openindiana.org/151/oi-dev-151-x86-20110608-1.iso

Known outstanding issues for desktop users to be aware of:

* libusb regression causes gtkam seg fault (bug #697)
* Available languages/scripts in IIIM Input Method not shown (Bugs 1084,1086)
* Update to Java 6_u26 per CVE security notice

Fixes and improvements since oi_148:

* More bug and security fix backports
* Unicode 6.0.0 compatibility
* More bug fixes to JDS GNOME 2.30.2
* Perl 5.10.0 update
* GCC 4.6.0 (SFEgcc-46) & GDB 6.8
* Java 1.6_25
* Brasero 2.30.3
* Areca 18xx SAS/SATA controller support
* Nvidia 275.09.04 driver
* More bug fixes to ZFS pool 28 and ON_147 from Illumos project.
* Over 6634 software packages available from NetBSD pkgsrc 2011Q1 project

From: http://wiki.openindiana.org/oi/oi_151
 
Hi,
The spare drive is exactly that and not part of the pool. I was thinking of getting another WD drive and adding it to make a 5-drive raidz, but because of the problems I have read about with 4k-sector drives I am now not sure of this.
There's about 1.2TB of data on the FreeNAS, approx 30% used; I will back all of this up before proceeding any further. If I were to wipe all of the drives, do I let OI reformat them, or is there any special way of dealing with these 4k drives?
I am using the storage mainly for my photo and music collection. I have the capacity for up to 6 drives in the system, and any advice would be greatly appreciated...
 
I would just try using them under OI first. If you can import the pool, you should be okay. Afterward, do the zfs and zpool upgrades.
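Roughly, that looks like this from a shell ("tank" is a placeholder pool name; run zpool import with no argument first to see what OI can actually find):

zpool import                # list pools visible to OI
zpool import tank           # import the FreeNAS-created pool
zpool upgrade tank          # bring the pool up to the current pool version
zfs upgrade -r tank         # upgrade the filesystems on it as well

Note that once upgraded, the pool can no longer be imported by the older FreeNAS version, so only upgrade after you are happy it works under OI.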
 
I'm having problems getting my disks spun down.
It works, but the auto service is interfering with the device threshold.
Right now it's either spindown or the auto service enabled, as my drives are polled every 15 min.
Actually it makes sense, but is there a way around it?
It would be nice to have an option to run things once a day or something; I don't really understand the mechanics behind it.
It would be nice if both could be combined.

Gea could you shed some light on this?
Sorry for being impatient.
 
Hi,
Can anyone offer me some advice on setting up ZFS with my existing WD 4k 2TB drives?

I currently have 4 of them and am using 3 in a FreeNAS system set up as raidz, with the 4th drive as a spare.

I plan on moving to OpenIndiana and napp-it. Reading through this thread makes me wonder if this is possible with my drives. I have installed OI and napp-it, and napp-it recognises the pool of drives. Is it just a matter of importing them into the newer version of ZFS? If so, what about the special formatting that FreeNAS did for the 4k drives?

Because of the 4k problem, is it wise to destroy the data on the drives and start over again with the newer version of ZFS?

To get the best performance out of 4k drives, I would destroy the pool and then reformat it using a FreeBSD/ZFSGuru liveCD, which can create properly 4k-aligned pools. Then you can import the pool back into OI and upgrade it, and it will maintain the 4k alignment. I've done this a couple of times and it works fine so far (knock on wood).

My only caveat is that I don't know if _Gea or any other ZFS/OpenSolaris experts have officially "blessed" this technique, or if there are any hidden problems with it.
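For reference, the usual recipe those FreeBSD-based liveCDs use is the gnop trick: wrap one drive in a fake 4096-byte-sector device so ZFS picks ashift=12 for the vdev. A minimal sketch, assuming FreeBSD device names ada0-ada2 and a placeholder pool name "tank":

gnop create -S 4096 ada0             # present ada0 as a 4k-sector device
zpool create tank raidz ada0.nop ada1 ada2
zpool export tank
gnop destroy ada0.nop                # the ashift sticks with the vdev; the .nop device is no longer needed
zpool import tank
zdb | grep ashift                    # should report ashift: 12 for the new vdev

After that, the pool can be exported and imported on OI as described above.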
 
I'm having problems getting my disks spun down.
It works, but the auto service is interfering with the device threshold.
Right now it's either spindown or the auto service enabled, as my drives are polled every 15 min.
Actually it makes sense, but is there a way around it?
It would be nice to have an option to run things once a day or something; I don't really understand the mechanics behind it.
It would be nice if both could be combined.

Gea could you shed some light on this?
Sorry for being impatient.

Currently, the auto-service runs on 1 min and 15 min timeframes as a root cron job.
The autojob service itself only reads the OS disk, which you can never use
for energy management anyway.

Data pools are only affected by autojobs if you have set a status, alert, snap, scrub or replication job.
-> set these jobs to a daily timeframe

Another thing is the Solaris fault management; see
http://nexenta.org/boards/1/topics/1414#message-1450
If it's running, your disks will never sleep, but Solaris is always up to date with pool errors.
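If you want to test whether the fault manager is what keeps the disks awake, something along these lines should do it (standard SMF service name assumed; remember to re-enable it afterwards, since it is what watches for pool faults):

svcs fmd                                   # is the fault manager online?
svcadm disable svc:/system/fmd:default     # temporarily stop it and see if the disks spin down
svcadm enable svc:/system/fmd:default      # turn it back on when done testing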

Gea
 
Can anyone tell me how I can see if any of my drives have bad sectors or other SMART errors?

Also, is it possible to get an email if a drive fails or gets SMART errors? I can't find it in the menus.
 
Thanks danswartz. But it only says SMART OK, and how can I be sure the disk is not hiding anything? I'd really like to see the raw values to be sure. Is that possible?

Also, thanks for the e-mail info; I guess I need to set up a mail server first :)
 
Not sure where you are seeing the SMART OK thing? The top-level disks tab shows a field like S:0 T:0 H:0 - those are the error counts. What version are you running? I am on 0.500f.
 
I just moved my server to my new apartment and now it's not reporting its hostname. It's connected to a Linksys E4200 via a couple of gigabit switches. My Ubuntu laptop reports its hostname properly, but the Solaris box doesn't seem to, and I can't access it using the hostname. Is there something I can do from Solaris?

Thanks!

PS: Gea, napp-it is great!
 
Which OS are you using? OI? I think the default nowadays is NWAM (auto network config), which uses DHCP, and it should be registering the name with the DHCP server. Can you check the Linksys and see if that is true?
 
I am using NWAM (DHCP), but it's not registering the hostname with the router :(
 
I just moved my server to my new apartment and now it's not reporting its hostname. It's connected to a Linksys E4200 via a couple of gigabit switches. My Ubuntu laptop reports its hostname properly, but the Solaris box doesn't seem to, and I can't access it using the hostname. Is there something I can do from Solaris?

Thanks!

PS: Gea, napp-it is great!

Try disabling and re-enabling the SMB service.
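A rough sketch of what that looks like from a shell, plus a couple of quick hostname sanity checks (the SMF service name is the standard one on OI/Solaris; adjust if yours differs):

svcadm disable svc:/network/smb/server:default
svcadm enable svc:/network/smb/server:default

hostname              # the name the box currently thinks it has
cat /etc/nodename     # the persistent hostname used at boot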
 
The funny thing is that it used to work when the old Linksys router was also the DHCP server; it was running Tomato, so maybe that's why it was working. The new one is running stock firmware. I set a static DHCP IP and told it the hostname, but it still doesn't resolve properly.

Oh well. I can access it via the static IP. I'm running Solaris 11 Express.
 
Not sure where you are seeing the SMART OK thing? The top-level disks tab shows a field like S:0 T:0 H:0 - those are the error counts. What version are you running? I am on 0.500f.

I have that too. I just don't know what S, T and H mean :D
 
Soft errors, Hard errors, Transport errors. More info if you click on the Controller tab.
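Those counters look like the per-device Soft/Hard/Transport error counts that Solaris keeps (I'm assuming napp-it reads them from the same place); you can see them from a shell, along with the device model and a short description of recent errors, with:

iostat -En              # all devices
iostat -En c2t0d0       # a single disk (device name is a placeholder)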
 
Anyone using ramdisk for ZIL? I thought I read that ZIL device failure isn't a big deal anymore.
 
Not as big of a deal. If you are going to use a ramdisk, you may as well just run with sync=disabled, no?
 
How so? If your ZIL is in ramdisk and the appliance crashes, your data is just as gone, no? Also, why would it be slower? The whole point of the ZIL is to speed up sync writes :)
 
How so? If your ZIL is in ramdisk and the appliance crashes, your data is just as gone, no? Also, why would it be slower? The whole point of the ZIL is to speed up sync writes :)
I should have been more specific, sorry. I guess when I said "more dangerous" I was assuming a hard drive is more likely to die than the system crashing or losing power, but that may not be the case. As for slower, doesn't the ZIL soak up write I/O, allowing your spindles to write sequentially?
 
The purpose of the ZIL (AFAIK) is to ACK a synchronous client write. The sequential write buffering is a basic part of ZFS - it delays writing data until (if possible) it can coalesce blocks being written. Still not sure I understand your danger concern? As far as I know, if a drive fails and your pool is not redundant, you are hosed, whether you have a ZIL or not. If you are redundant, it doesn't matter...
 
The purpose of the ZIL (AFAIK) is to ACK a synchronous client write. The sequential write buffering is a basic part of ZFS - it delays writing data until (if possible) it can coalesce blocks being written. Still not sure I understand your danger concern? As far as I know, if a drive fails and your pool is not redundant, you are hosed, whether you have a ZIL or not. If you are redundant, it doesn't matter...

So is sync=disabled safer than a volatile ZIL?

What about a local ramdisk mirrored with an equal-size iSCSI LUN backed by a ramdisk on another host?
 
Not safer, equally safe. The risk is a host crash before the ZIL transaction(s) can be committed. A remote ZIL makes no sense either - if you don't have a separate ZIL, ZFS uses space on the local pool for the same purpose. Note that if the host crashes before an async transaction is committed, you won't get ZFS filesystem damage, but user data written might be lost.

Generally, folks use a small SSD - you don't need more than about 1/2 of physical RAM on the host for ZIL space, so a small, very fast SSD is good. Since my appliance is virtualized, the risk of a crash is not relevant, so I just run with sync=disabled.
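For anyone following along, here is roughly what the two approaches look like on the command line ("tank", the dataset name and the SSD device name are placeholders):

zfs set sync=disabled tank/vmstore    # per-dataset, so the risk is limited to data you can afford to lose/replay
zfs get sync tank/vmstore             # verify (values are standard / always / disabled)

zpool add tank log c4t0d0             # or keep sync writes honest and give them a dedicated fast SSD log device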
 
Soft errors, Hard errors, Transport errors. More info if you click on the Controller tab.

Thanks again. I found that too. I also tried Google; it seems like hard errors are what I should look for. But it doesn't state what kind of error is reported anywhere, does it?
 
Not as far as I know. If you are getting real errors, run the SMART tool manually and see...
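If smartmontools is installed, something like this should dump the raw SMART attribute table; the device path and the -d option depend on your controller (the line below is a common incantation for SATA disks behind SAS HBAs on Solaris, with a placeholder device name):

smartctl -a -d sat,12 /dev/rdsk/c2t0d0s0

Look at the raw values of attributes like Reallocated_Sector_Ct and Current_Pending_Sector rather than just the overall health verdict.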
 