OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Hi Gea


I've had an issue lately. I thought it could be a cable or SATA port problem, but it doesn't look like either. I have 2 disks in a mirror that were originally connected to my HBA card, but after rearranging my cables I put them back on the onboard SATA ports.

Ever since, I keep getting this error. Any idea? Did I do something wrong? The moment I connect them back to the HBA's SATA ports, it works again.

oierror.jpg
 
I think it's because the device names are different. To migrate them, try this: move one drive over. If you run the 'format' command it should tell you the new name (example: 'xxx', where 'yyy' was the old one). Use the zpool replace command to replace the old name with the new one, then repeat with the other drive.
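
At the CLI that would look roughly like this (the pool name 'tank' and the device names c1t0d0/c2t0d0 are only placeholders - use whatever 'format' and 'zpool status' actually report on your box):

  format                              # lists disks with their current device names
  zpool status tank                   # shows the old device name the mirror still references
  zpool replace tank c1t0d0 c2t0d0    # old name first, new name second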
 
I think it's because the device names are different. To migrate them, try this: move one drive over. If you run the 'format' command it should tell you the new name (example: 'xxx', where 'yyy' was the old one). Use the zpool replace command to replace the old name with the new one, then repeat with the other drive.

Huh... I thought the RAID settings are stored on the disks and not in the OS? If my OS disk dies, would I not be able to attach the pool to another OI box?
 
Is it normal for OpenIndiana to peg the CPU usage of the vCPUs assigned to the VM? I'm running OI 148 with 8 GB of memory and 2 vCPUs assigned to the VM. Performing a Storage vMotion between two datastores being served by the OI VM maxes out the vCPUs assigned to that VM. Even just running something like CrystalDiskMark within a VM on one of those datastores almost pegs the vCPUs. Both dedup and compression are turned off and I disabled sync on the ZFS folders...
 
Hi Gea


I've had an issue lately. I thought it could be a cable or SATA port problem, but it doesn't look like either. I have 2 disks in a mirror that were originally connected to my HBA card, but after rearranging my cables I put them back on the onboard SATA ports.

Ever since, I keep getting this error. Any idea? Did I do something wrong? The moment I connect them back to the HBA's SATA ports, it works again.

If you move pools between ports/hosts, you need a reboot if your controller is not hot-plug capable, and (mostly) you must re-import the pool -
it depends on your hardware. The pool/ZFS configuration is on the disks; only the mount/device info is not.

Prior to a move you should export the pool, but import also works without exporting (e.g. after a hot-unplug or a sudden crash/power loss).
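
Roughly, with a placeholder pool name 'tank':

  zpool export tank     # before shutting down or moving the disks
  (move the disks to the other ports/host)
  zpool import          # lists pools that are visible but not yet imported
  zpool import tank     # add -f if the pool was not exported cleanly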
 
Huh... I thought the RAID settings are stored on the disks and not in the OS? If my OS disk dies, would I not be able to attach the pool to another OI box?

The HBA has a different controller number than the onboard ports, so a moved drive will get a different device ID. Yes, you can move it, but the problem here is that you are effectively renaming it on the same box - have you tried what I suggested?
 
I am reinstalling a fresh OI+napp-it SAN using ESXi 5 because I am completely changing pool setup and everything.

I got OI 151 up and running with napp-it installed but I can't get the disks to be recognized with the controller in pass-through mode. I destroyed the pool and everything on the old setup (using -f command even). I popped the disk controller in and out of passthrough mode and rebooted multiple times with that.

I can still create my old VM from the datastore and it still recognizes the HBA controller but the new one won't. I guess my question is how do I get it to recognize the controller in the new VM and not the old?
 
@mmmmmmdonuts: what MB are you using? Does it use a PCIe switch or bridge on the MB to increase the number of PCIe slots? ESXi passthrough does not work if there is a switch or bridge between the PCIe slot and the chipset. A good example of a server-class MB with this issue is the SM X8SIA-F: the two PCIe x8 slots are shared on a switch, so you can't do passthrough on those slots, though you can on the others that don't have the switch.

The symptoms you describe are exactly what you would see - ESXi lets you configure the passthrough, but it doesn't work.
 
I have a Supermicro X9SCM.

I had passthrough working in my old VM without a problem on both ESXi 4.1 and ESXi 5 (it still works when I recreate that VM). I am trying to install a new OpenIndiana VM and the passthrough no longer works, even though it says it does. The M1015 controller (or its disks) seems to be locked to that old VM, because I don't see the disks and am not allowed to configure them anywhere in the new VM. In the old VM I still have free rein to do as I please with the disks. Napp-it shows a PCIe controller 60 in the new VM but does not show the disks.
 
So you removed the passthrough controller from the old VM and added it to the new one, but it doesn't see it? Might be a vSphere bug. Try removing it from the old one, mark it as non-passthrough, reboot the host, mark it as passthrough again, add it to the new VM and try again?
 
So you removed the passthrough controller from the old VM and added it to the new one, but it doesn't see it? Might be a vSphere bug. Try removing it from the old one, mark it as non-passthrough, reboot the host, mark it as passthrough again, add it to the new VM and try again?

Yes, I tried your suggestion already - thank you though. The interesting thing is that if the old VM is still in the inventory (with no passthrough in its settings), it will not allow me to add the controller to the new VM; it gives me an error.

I am probably just going to reinstall ESXi and hope that resolves the problem.

Another quick question. When removing a VM I select remove from inventory. Is this the proper way to remove a VM but not delete it from the datastore or should I select delete from disk?
 
Another quick question. When removing a VM I select remove from inventory. Is this the proper way to remove a VM but not delete it from the datastore or should I select delete from disk?

When you only remove it from inventory, you can re-add it later: open the ESXi
datastore browser, open the VM folder, right-click the .vmx file and select 'Add to inventory'.

That is not an option if you delete it from disk (unless you have ZFS snapshots to recover from).
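
If you prefer the CLI over the datastore browser, the same registration can be done with vim-cmd (the datastore and VM names here are only placeholders):

  vim-cmd solo/registervm /vmfs/volumes/datastore1/myVM/myVM.vmx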
 
Has anyone else seen reduced performance after upgrading to 0.6?

I notice my disks read/write at about 30 MB/s max... I used to get about 80 MB/s.
 
Has anyone else seen reduced performance after upgrading to 0.6?

I notice my disks read/write at about 30 MB/s max... I used to get about 80 MB/s.

Switch back to 0.5 and compare.
0.6 adds an always-running background agent with low priority for background, error-handling
and monitoring tasks. It should not use much CPU, but some RAM.

0.5 is a pure CGI + cron system, which is not enough for a higher level of system control,
but this should not affect performance in such a way.
If there is no active background task, it is just a low-priority (niced) shell script running in a loop, looking for
task files on rpool every 3 s.

Or you may kill the agent process;
the only side effect is that replication will not work.

You may also call /etc/init.d/napp-it stop.
That will end all napp-it activity (web server and agent); then check performance with bonnie or dd at the CLI.
Use /etc/init.d/napp-it start (or restart) to re-enable napp-it.
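
A minimal sketch of that test (the pool name 'tank' is only a placeholder, and dd against /dev/zero is just a rough indicator, especially if compression is enabled):

  /etc/init.d/napp-it stop
  dd if=/dev/zero of=/tank/dd.tst bs=1024k count=8192    # rough sequential write test
  dd if=/tank/dd.tst of=/dev/null bs=1024k               # rough sequential read test
  rm /tank/dd.tst
  /etc/init.d/napp-it start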

Last word:
0.6 is in preview state and not ready for common use.
Some parts, like displaying job infos in the job overview, are not yet working
(some delay, I'm on holiday).
 
Switch back to 0.5 and compare.
0.6 adds an always-running background agent with low priority for background, error-handling
and monitoring tasks. It should not use much CPU, but some RAM.

0.5 is a pure CGI + cron system, which is not enough for a higher level of system control,
but this should not affect performance in such a way.
If there is no active background task, it is just a low-priority (niced) shell script running in a loop, looking for
task files on rpool every 3 s.

Or you may kill the agent process;
the only side effect is that replication will not work.

You may also call /etc/init.d/napp-it stop.
That will end all napp-it activity (web server and agent); then check performance with bonnie or dd at the CLI.
Use /etc/init.d/napp-it start (or restart) to re-enable napp-it.

Last word:
0.6 is in preview state and not ready for common use.
Some parts, like displaying job infos in the job overview, are not yet working
(some delay, I'm on holiday).

Oops... I guess I'll just revert :)
Thanks
 
Figured out my problem. Basically it boils down to the fact that I am an idiot and forgot that I needed this driver fix for my LSI card.

Actually, you should give yourself some credit. I too experienced issues recently. I rebooted my ESXi box and magically all my drives within Napp-It showed up as UNAVAILABLE. Here's my story:

Interesting, I thought. Eventually I checked the device drivers within OpenIndiana and found that the IBM M1015s (LSI 9240-8i) were "misconfigured". So I reinstalled the latest driver from LSI for the 9240-8i (v4.26). No go. "Crap!" I thought. So I reinstalled the OpenIndiana VM completely and updated the driver again. "Crap!" again, and then I messed around with a bunch of other stuff.

Now, for the solution.
Eventually I went back and tried the previous driver from last December (v3.01), and it worked! So it appears that OpenIndiana had grabbed the newest driver from the OpenIndiana repository on its own. I ran a manual update from the CLI ("pkg update --accept", since Package Manager is broken in OpenIndiana b151) and haven't had any issues since, but I'm keeping my eye on it. At least I know what's happening now.
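
For anyone hitting the same thing, a quick way to check which driver and version the OS currently has loaded (assuming the 9240-8i binds to the mr_sas driver, as it did here), plus the manual update command mentioned above:

  modinfo | grep -i mr_sas    # shows the loaded kernel module and its version string
  pkg update --accept         # pull updates from the configured OI repository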

So it seems that LSI decided to "break" the M1015s with their newest 4.26 driver. I'm assuming this is because they want us to buy full retail cards and not these OEM models. If someone is feeling ambitious, I'm sure there is a way to get the newer driver working with a hack, but I'm just not experienced enough with Solaris drivers to do it myself.

*FYI - I also posted this in the M1015 thread to further investigate if possible/necessary.
 
Any recommendations for improving file transfer speed?

I'm using a Marvell NIC in my OI box, an HP ProCurve 1800 with no VLANs, and my PC is using a Realtek NIC. Could my PC be the weak link?

I peak at about 60+ MB/s but usually get 40+, and sometimes not even 1 MB/s when copying lots of small files.

Also, could I use the remaining space in rpool for a ZIL or L2ARC? My rpool is an 80 GB Intel SSD :D
 
Small files will always be a lot slower due to the constant seeking a hard drive has to do to find them.

Realtek could be an issue - cheap, software-driven NICs don't always perform well versus, say, a $30 Intel NIC.
 
Hi Gea,

I'm hoping you or someone could answer this question for me!

I've created a raidz1 array with 6 x 2 TB hard drives, which gives me a 10.9T pool, but when I try to make a ZFS folder I notice there is already a folder in there called "Storage (pool)" where the available size is only 8.89T [100%].

Am I doing something wrong here? Shouldn't I be seeing a folder with a size of 10.9T? It seems (with my very limited ZFS knowledge) that I'm using two drives for redundancy??
 
Hi Gea,

I'm hoping you or someone could answer this question for me!

I've created a raidz1 array with 6 x 2 TB hard drives, which gives me a 10.9T pool, but when I try to make a ZFS folder I notice there is already a folder in there called "Storage (pool)" where the available size is only 8.89T [100%].

Am I doing something wrong here? Shouldn't I be seeing a folder with a size of 10.9T? It seems (with my very limited ZFS knowledge) that I'm using two drives for redundancy??

Disable the reserved space - when you create your pool it will ask you about using 90% of the space as a reservation.

Also, when you create a pool it automatically makes a ZFS folder. You have to add another ZFS folder inside the one the pool makes to be able to use it with SMB or NFS... at least that's the only way I could get it working.
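
At the CLI this boils down to something like the following (using the "Storage" pool from the post above and a placeholder dataset name "media"):

  zfs create Storage/media
  zfs set sharesmb=on Storage/media    # or sharesmb=name=media for a custom share name
  zfs set sharenfs=on Storage/media    # only if you also want NFS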
 
Anyone got any idea why my Seagate 1 TB drives (ST31000340AS) only get ~33 MB/s write speed in OI and SE11? I've got 13 of them and they are all slow under ZFS, but they bench 80-90 MB/s in Windows. They are hooked up to Intel SASUC8I controllers and I've tried both OpenIndiana 148 and Solaris Express 11 with the same results.
 
Disable the reserved space - when you create your pool it will ask you about using 90% of the space as a reservation.

Also, when you create a pool it automatically makes a ZFS folder. You have to add another ZFS folder inside the one the pool makes to be able to use it with SMB or NFS... at least that's the only way I could get it working.



Cheers for the reply, leeleatherwood - I have unticked the "overflow protection (use max 90% of current space)" option but I still get this in the automatically created ZFS folder - "8.89T [91%]" - and if I create a ZFS folder within that I get "8.00T [100%]".
 
Cheers for the reply, leeleatherwood - I have unticked the "overflow protection (use max 90% of current space)" option but I still get this in the automatically created ZFS folder - "8.89T [91%]" - and if I create a ZFS folder within that I get "8.00T [100%]".

I've been curious about this too - specifically, why do we have to create a second ZFS folder within the root ZFS folder?
 
Cheers for the reply, leeleatherwood - I have unticked the "overflow protection (use max 90% of current space)" option but I still get this in the automatically created ZFS folder - "8.89T [91%]" - and if I create a ZFS folder within that I get "8.00T [100%]".

Start over from scratch.

Delete your ZFS folders and your pool, then remake everything. It should take no more than a couple of minutes.
 
Start over from scratch.

Delete your ZFS folders and your pool, then remake everything. It should take no more than a couple of minutes.



Done that already - I've even decided to load the beta (151) version of OI. I'll post some screenshots of the options I get when trying to create the pool, so hopefully that helps.


[edit] Here are the screenshots:

Creation of the RAID - I've unticked the "use max space" option:

pool_creation.png


This is the pool after creation:

created_pool.png


And this is what is shown in the ZFS folders:

zfs_share_size.png


[/edit]
 
Hi Gea,

I'm hoping you or someone could answer this question for me!

I've created a raidz1 array with 6 x 2 TB hard drives, which gives me a 10.9T pool, but when I try to make a ZFS folder I notice there is already a folder in there called "Storage (pool)" where the available size is only 8.89T [100%].

Am I doing something wrong here? Shouldn't I be seeing a folder with a size of 10.9T? It seems (with my very limited ZFS knowledge) that I'm using two drives for redundancy??

Several reasons.
When you create a pool from a first vdev/raidset, the pool is a ZFS dataset itself, acting as the parent
container for your folders/datasets. While it may be possible to use the pool dataset to store data,
it is not possible to set some properties on the pool itself (e.g. the pool itself is always case sensitive)
and sharing it is not supported with napp-it. Think of it as a place to set default properties.

The pool capacity shown in the Pools menu (from zpool list) is gross capacity (the sum of all disks, without caring
about redundancy), so 6 x 2 TB = 12 TB pool capacity.

Next problem: if a disk manufacturer sells 1 TB, the usable space reported by the OS is smaller.
Read about it here: http://hardforum.com/showthread.php?t=1347640


The folder/dataset menu shows you the available space that does take redundancy into account.
In the case of a raid-z1, one 2 TB disk is used for redundancy, so your usable space is about 9 TB
of the pool = the space available for your datasets.

The overflow protection has nothing to do with this. It is just a reservation on the pool that prevents
you from filling your datasets to more than 90% (which you should avoid anyway, for performance reasons).
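
To put rough numbers on it (assuming each 2 TB drive is 2,000,000,000,000 bytes, while the OS reports sizes in binary TiB):

  per disk:      2 TB ≈ 1.82 T as reported by the OS
  pool (gross):  6 x 1.82 ≈ 10.9T  -> what zpool list / the Pools menu shows
  usable (net):  5 x 1.82 ≈ 9.1T   -> minus some ZFS overhead, roughly the 8.89T shown for the datasets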
 
Has anyone been able to get sendmail with a smart host and AuthInfo (SMTP-Auth) working on SE11?
I get errors like authinfo.m4a missing, as if SE11 does not package the AuthInfo sendmail macro.
Any ideas, anyone?
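
For context, the smart-host + AuthInfo setup being attempted usually looks roughly like this (hostname and credentials are placeholders; it relies on the standard authinfo feature file, which seems to be the piece missing here):

  dnl in sendmail.mc:
  define(`SMART_HOST', `smtp.example.com')dnl
  FEATURE(`authinfo', `hash -o /etc/mail/authinfo.db')dnl

  # /etc/mail/authinfo, built into a map with: makemap hash /etc/mail/authinfo.db < /etc/mail/authinfo
  AuthInfo:smtp.example.com "U:user" "I:user" "P:secret" "M:PLAIN LOGIN"

After changing sendmail.mc you still have to regenerate sendmail.cf with m4 and restart sendmail.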
 
Hi,

Completely new to ZFS. At first I was going to go with FreeNAS, but then I started reading about the performance differences between ZFS on FreeBSD and on Solaris, so I decided it should be Solaris - and napp-it looks pretty easy too :).

I have been using unRAID up until now, so I will reuse the hardware from that.
I am really just after hints on how best to use my drives.

I have 5 x 1 TB drives (a combo of WD EADS and a couple of Samsungs)
and 7 x 2 TB drives (one is a WD Black FAEX, the rest are WD Green EARX or EARS).

I have a Supermicro MBD-X9SCL-F-O with an i3 in it,
8 GB of DDR ECC memory,
and 2 AOC-USASLP-L8I cards.

This is only for home use, and mainly for media, so I'm not too worried about speed etc.; speed on unRAID was fine.
I don't want to lose too much space, so I was thinking of 2 raidz1s?
But I'm very new, so I'm open to suggestions.
Is it worth getting 8 more GB of memory, or is this fine for my small family use?
Likewise, should I get a Xeon?

Also... OpenIndiana or Solaris Express? :)
 
SE11: newest ZFS build, including ZFS-Crypto

So I am using SE11 until the ZFS-Crypto *Source* is released by Oracle and integrated into Illumos/OpenIndiana...
 
Just came across an issue with my new build.

Hardware:
X3440
Supermicro X8SIL-F-O
16 GB Kingston DDR3 ECC
LSI 1068E
10x Hitachi 2 TB drives
1x 640 GB system drive
1x 64 GB Crucial SSD

I have installed ESXi 5 on a thumb drive, enabled VT-d in the BIOS, and installed OpenSolaris 11. Napp-it installed fine, and I passed the two storage controllers through. The LSI is working perfectly - eight 2 TB drives and all. The local Intel controller is not showing anything.



(screenshot)

Any clues?
Running cfgadm -avl gives a ton of empty drives, each with their own pcieXXX address.


(screenshot)

The only thing I can think of is that there are 6 SATA ports on this board. In ESXi these show up as a two-port and a four-port controller. I only passed the one with the 4 drives through (and am using the other 2 ports for ESXi's mirrored data pool drives).
I do not want to build my pools until I have all the drives.
 
VMDirectPath doesn't work with SATA controllers that are integrated into the southbridge/northbridge, so that's a problem you cannot fix.
 