OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Check three lines down from type: 'raidz' .... it says nparity: 2 .... this means raidz with 2 parity drives, i.e. raidz2.
 
The components of my All-in-One are arriving. I will be creating a 6x 2TB raidz2 pool using Hitachi 512-byte-sector drives. In the meantime, I have created a VM running OI 151a and napp-it to begin learning.

Question 1: What is the most stable release of OI that I should use?
Question 2: If Solaris Express 11 is more stable than OI, would you recommend SE11? (I have no need for encryption.)
Question 3: In my test VM, I created a 6-drive raidz2 pool; however, I can't tell whether the pool that was created is actually a raidz2 or a raidz pool (the pool status differs from the pool details).

1.
Most stable is OI 151a

2.
If you need a stable state or security bugfixes now, you must select Solaris Express,
and if you use it commercially, you must pay Oracle.

For an all-in-one I would say OI is stable, at least stable enough for me to use
it on my production servers. OI is expected to publish a stable release in the next months.
(The main missing things are security updates.)

3.
You have a raid-z in the flavour of a raid-z2, i.e. it really is a raidz2 pool.
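
If you want to double-check from a shell, something like this should work (the pool name "tank" is just an example):

zpool status tank
# the vdev should show up as raidz2 (raidz2-0 on current builds)
zdb -C tank | grep nparity
# nparity: 2 means two parity drives, i.e. raidz2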
 
Disk Spin Down.

1. Is it possible to employ disk spin-down if the drive pool is shared via NFS to store VMs?
2. I've enabled spin-down under power management but haven't seen a spin-down status under disks. I'm also unsure whether my CPU is scaling back its frequency at rest. Does my power.conf (below) look right?

device-dependency-property removable-media /dev/fb
autopm default
autoS3 default
cpu-threshold 1s
# Auto-Shutdown Idle(min) Start/Finish(hh:mm) Behavior
autoshutdown 30 9:00 9:00 noshutdown
cpupm enable
device-thresholds /dev/dsk/c3t0d0 1800s
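
(For reference, I assume these stock Solaris/OI commands are the way to check, though I may be missing something:)

pmconfig
# re-reads /etc/power.conf and applies the settings
kstat -p cpu_info:::current_clock_Hz
# current CPU clock; if cpupm is working it should drop at idle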
 
Okay, so if I understand you, you need SE11 due to encryption, but have hit a bug you need a fix for? If so, good luck...
 
"zpool status" hangs when you pull disks from a pool you've decommissioned. I'm sure there was a way to do it gracefully. But this was a test pool I'd created and I was just pulling out the hot swap SAS disks from the server. Napp-it wouldn't load, and I checked the console it would just hang after issuing the zpool status command. SSH in and run it same issue. I should have destroyed the pool I presume first. What I noticed which concerned me was that from a server I attempted to access a mapped drive which resolved to that pool I just decommissioned. The server lost all connectivity to the solaris box for a few seconds, this was connectivity to the other live pools.

Anyhow, somewhat worrisome that the other pools loss connectivity, but I believe it's related to my decomm. Of note, the drives are not jbod but RAID 0 single disk via HP E200I controller.
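
For next time, I assume the graceful sequence before pulling test disks would have been something like this (pool name is just a placeholder):

zpool destroy testpool
# if the data is no longer needed
zpool export testpool
# or this, if the pool should stay importable elsewhere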
 
Hi Folks,

First post here, first questions.
I am rather a Linux kinda guy and not a Solaris master, I have to admit....

For a couple of months now, I have been a new and happy user of an all-in-one,
based on Solaris Express 11 (I need encryption).
Many thanks to _Gea for this fine piece of work!

I have built a small pool with 3 vdevs, based on mirrors (2TB drives each), so far.
Based on some research here and in other forums around napp-it, I managed
to enable power management, and the drives are spinning down successfully.
However, this involved disabling the fmd services.

Now I am thinking of adding 1-2 spares to my pool, and the following questions
came up, where my Google-fu has not brought up an answer so far.

Is this feasible?
Will spares work at all without fmd running?
Can I re-enable fmd and still have disk spindown?
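
(For what it's worth, I assume the spare itself would be attached with something like the line below, with pool and device names as placeholders; my worry is the activation part, since as far as I understand it is the FMA zfs-retire agent under fmd that actually kicks a spare in.)

zpool add tank spare c5t0d0
# attach a hot spare to the pool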

TIA,
Hominidae
 
Can somebody explain how I can import (or transfer) a pool?

Right now I'm using a dedicated file server machine. It is using onboard SATA plus an LSI 9211-8i SAS card.

I'm moving everything into a brand new ESXi "all-in-one" box using 2x LSI 9211-8i cards and won't be using onboard SATA.

How can I transfer my pool from the old system to the new (and virtualized) one with the slightly different hardware?
 
Export the pool on the "old" server.
Disconnect the disks and reconnect them to the "new" server.
Import the pool on the "new" server.
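
On the command line that amounts to roughly this (the pool name "tank" is just an example):

zpool export tank
# on the old server, before moving the disks
zpool import
# on the new server: lists the pools available for import
zpool import tank
# imports the pool under its old name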

It really works - and it really is that easy. The only complication is if you are using a non-Solaris ZFS implementation on either end, which might use a different partition table format. Specifically, some pools built on FreeBSD might not transfer gracefully (including FreeBSD GUI wrappers like ZFSguru).

As always, if the data on the pool is important or difficult to replace then do a backup first. Just in case...
 
Thanks! I saw the import button in napp-it but somehow missed the export button...

I'll be using the same operating system (with napp-it), so hopefully it goes smoothly. Thanks again for all of your help and advice!
 
Export/import between servers with any non-hardware-RAID disk controller is that easy with ZFS and software RAID.
It even works without problems if you forget to export, or if your first server dies.

Problems:
your new server must support the ZFS version of your pool
partitions created on BSD can be a problem (it works if you format the disks as GEOM)
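
To check the version side of that, something like this should do (pool name is only an example):

zpool get version tank
# version of the existing pool
zpool upgrade -v
# lists the pool versions the target system supports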
 
I am about to dump Solaris Express 11 for OI 151a. Is that a good idea? I have no need for encryption...

It's easier to answer whether or not that is a good idea if we understand a bit about what motivates the change. Is there a specific feature available only in OI that you need? Are you impaired by a bug in SE11 that won't get fixed until the full release? Etc.
 
Partly thanks to this thread, I'm up and running with oi_151a and napp-it, 5 drives in RAID-Z2, and joined to the AD domain.

Does anyone know of a walkthrough somewhere for setting the ACLs (through Windows) on my ZFS folders?
 
Hmmm... I tried to export my pool (so I can move the disks to a new server) but it says "cannot export 'tank': pool is busy"

How can I fix that?
 
Figure out what is holding it active. The normal culprit is Comstar. If you use Comstar, disable it and reboot; then it should export.

If that doesn't do it, go into a shell as root and enter the command "zpool export -f poolname".
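
For example (assuming the pool is called "tank" and COMSTAR runs as the stmf service, as on my boxes):

svcadm disable stmf
# stop the COMSTAR target framework that may be holding the pool
fuser -c /tank
# show any processes still using the mountpoint
zpool export -f tank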
 
Partly thanks to this thread, I'm up and running with oi_151a and napp-it, 5 drives in RAID-Z2, and joined to the AD domain.

Does anyone know of a walkthrough somewhere for setting the ACLs (through Windows) on my ZFS folders?

In domain mode:
SMB-connect your share from another computer (also a member of the domain) as root
and set the desired ACL (right-click > Properties > Security)

or
add an ID mapping (see menu service-smb-mapping) and add a mapping like
winuser: somedomainuser = unixuser:root

If you SMB-connect as this user, you also have root permissions on the share and can modify the ACL.
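
From a shell the same mapping can be created with idmap; the domain and user names here are only placeholders:

idmap add winuser:someuser@yourdomain.local unixuser:root
# map the Windows domain user to the Unix root user
idmap list
# verify that the rule was added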

OI mostly behaves like a real Windows server. The main ACL issues, like the conversion between
Windows SIDs and Unix UIDs, are handled automatically. You should know that Windows ACLs look first
at deny rules and then at allow rules, while OpenIndiana cares about the order of the rules - but this is usually not
a problem (see also the napp-it menu extensions-acl)
 
I am getting slow READ performance under ESXi 5.

VM to VM is fine, so I know it's not the ZFS server that is slow. But going from a VM to a physical PC I am only getting 25MB/sec reads (80MB/sec writes).

The physical NIC is an Intel 1000MT and the vNIC is VMXNET3; I tried the E1000 but got the same results. Clearly, if the writes are capable of 80MB/sec, the reads should be as fast or faster.

On bare metal it's fine; the problem only occurs under ESXi. Any suggestions?

- VMware Tools are installed in the guest OS (Solaris Express 11)
- Jumbo frames enabled in both the OS and the ESXi vSwitch settings
- FreeNAS has the same problem
- The array reads at over 1400MB/sec
- Bare metal is fine, but I really need the setup to be All-In-One
- jPerf is fine, ~100MB/s both ways
 
I would try to disable jumbo frames and connect the PC directly with a cross-over
cable to reduce components; also try another PC (another Windows version?).

Is this problem new with ESXi 5?
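
To rule the MTU in or out on the Solaris side, dladm can show and reset it. The link name vmxnet3s0 is just what the VMXNET3 driver is called on my box; check dladm show-link first:

dladm show-link
# lists the links and their current MTU
dladm show-linkprop -p mtu vmxnet3s0
dladm set-linkprop -p mtu=1500 vmxnet3s0
# back to standard frames for testing (the link may need to be unplumbed first)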
 
What protocol are you using to communicate with the physical PC? If it's CIFS/SMB, this is a known issue. I get close to 100MB/s write and 25-30MB/s read when connecting to any outside physical Windows 7 PC I have using CIFS/SMB. I really wish I knew a workaround for this, but unfortunately I don't.
 
Also try a different Windows PC / NIC. Of course, try to stay away from Realtek NICs.
 
This may be a noob question, so please don't hate:

I'm ready to build my raid-z2 with 6 drives. What is the best way to label them ahead of time so I know which one to pull when one dies? Do I plug them in one by one and check the ID, or will that change?
 
The serial number should be on the drive, and it can also be viewed from your OS; I read my serial numbers from napp-it through my BR10i controller on Solaris 11 Express.

I also put a piece of tape on each drive and mark which SATA port on the controller/breakout cable it is connected to.
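
If it helps, the serial numbers can also be read from a shell with plain Solaris iostat, no napp-it needed:

iostat -En
# prints vendor, product and serial number for every cXtYdZ device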
 
I'm not using SATA ports but an HBA with two SFF-8087 connectors and SFF-8087-to-quad-SATA cables, so I'm not sure if marking the SATA ends will mean anything.
 
What are the least expensive disks recommended for an ESXi datastore (hosted by ZFS on an "All-In-One" box)?

I'm thinking about using RAID 10 (4 disks) with something like 250 GB disks, but 250 GB disks aren't much cheaper than 2 TB disks! :(
 
Gea, I'm wondering if you know whether OpenIndiana supports Sun's StorageTek Availability Suite.
 
What are the least expensive disks recommended for an ESXi datastore (hosted by ZFS on an "All-In-One" box)?

I'm thinking about using RAID 10 (4 disks) with something like 250 GB disks, but 250 GB disks aren't much cheaper than 2 TB disks! :(

With ESXi, NFS and sync writes you should look for I/O performance.
All large and cheap disks are quite bad at this.

"Least expensive" could mean a lot of things.
Depending on your money and capacity needs, I would prefer:

Use as many vdevs as possible, prefer mirrors
Prefer 10k disks like WD Raptors or at least 7200 rpm drives, avoid green ones
Avoid 4k disks, add an SSD write cache (preferred as a mirror; the used size is about 50% of RAM)

I would look at WD Raptors or a lot of Hitachi disks (7200 rpm, non-4k)
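
As a rough sketch, a pool of two mirrored vdevs plus a mirrored SSD log device would be built like this (pool and device names are made up):

zpool create vmpool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
# two mirrors striped together (raid-10 style)
zpool add vmpool log mirror c3t0d0 c3t1d0
# mirrored SSD write cache (ZIL/slog)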
 
In domain mode:
SMB connect your share from another computer (also member of domain) as root
and set desired ACL (right click-property-security)

I can get that far ;)

I mean, if I come up with an ACL scheme from scratch (for a group projects share, for example) I will probably do some dumb stuff or at least re-invent the wheel. I don't know why I can't find any Google results that say "set your permissions like this". And it seems like I need to do some things in a certain order to get them propagated the right way.
 
How would a RAID 10 of 4x Hitachi 5k3000 2 TB disks work?

...or is there another combination of 4x Hitachi 5k3000 2 TB disks that would result in better performance? I don't even really need a total of 2 TB of storage for VMs (but I already have two of those disks and was thinking about ordering two more).
 
Raid-10 is the best config you can do for an ESXi datastore.
It's not as fast as a pool of more, smaller mirrored vdevs, but it depends on your needs
whether it is good, good enough or too slow.

You may disable sync writes on this pool on an all-in-one to increase speed.
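
That is a per-dataset ZFS property; a minimal example, with the dataset name made up:

zfs set sync=disabled vmpool/nfs
# NFS writes get much faster, but the last few seconds of writes can be lost on a crash or power failure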
 