My All-In-One was built a year ago and I am in need of additional disk space.
I have a RAIDZ2 vdev with six Hitachi GST Deskstar 5K3000 HDS5C3020ALA632 (0F12117) 2TB SATA 6.0Gb/s drives attached to an M1015. I have ordered another M1015 for the next set of drives.
ESXi 4.1.1
OI 151a5 - nappit 0.8
SM X9SCM-F-O
E3-1230
32GB RAM
Norco 4220
Q1: I am not sure if I am going to use six or eight 2TB or 3TB drives in the new vdev. Does the new raidz2 vdev have to be the same size? (I don't believe so, since this is not a mirror but an expansion of the zpool.)
Q2: The current disks in my vdev have 512-byte sectors. Is it still recommended to use drives with 512-byte sectors?
Q3: Are there certain formatting commands that need to be run on 4K drives in ZFS?
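For reference, I assume the expansion itself would just be a single zpool add along these lines (pool and device names are placeholders, not my actual ones):
zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
and that a quick "zdb | grep ashift" would show whether a vdev was created for 512-byte sectors (ashift=9) or 4K sectors (ashift=12).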
NappITGotcha/FolderTest off on unavail on 3.48T [100%] 1.91G none none none none standard off off n.a. o=full, g=full, e=full - 777+ off
drwxrwxrwx+ 6 root root 8 Nov 2 02:55 (777)
ACL User/ Group acl acl-set details inheritance type option
0 everyone@ rwxpdDaARWcCos full_set rd(acl,att,xatt) wr(acl,att,xatt,own) add(fi,sdir) del(yes,child) x, s file,dir allow delete
1 group@ rwxpdDaARWcCos full_set rd(acl,att,xatt) wr(acl,att,xatt,own) add(fi,sdir) del(yes,child) x, s file,dir allow delete
2 owner@ rwxpdDaARWcCos full_set rd(acl,att,xatt) wr(acl,att,xatt,own) add(fi,sdir) del(yes,child) x, s file,dir allow delete
3 user:root rwxpdDaARWcCos full_set rd(acl,att,xatt) wr(acl,att,xatt,own) add(fi,sdir) del(yes,child) x, s
root@server:/mnt/pve/Nappit# chown root:root test
chown: changing ownership of `test': Operation not permitted
10.1.1.103:/NappITGotcha/FolderTest on /mnt/pve/Nappit type nfs (rw,vers=3,addr=10.1.1.103)
We just got a new Dell r720xd server that came with a built-in Dell Perc h310 RAID controller [based on the LSI SAS 2008 controller] (it looks like a daughter card on the motherboard as opposed to a PCIe add-on card). We also purchased an LSI 9211-8i (which we have used with great success on an older HP server). The Perc h310 has SFF-8087 cabling to a SAS backplane with 12 hot-swap 3.5in SAS disks (actually these are near-line SAS drives). We have two Corsair Force 60GB SSDs for boot drives. Since the cables that came with our LSI 9211-8i are SFF-8087 at the controller end with 4 fan-out SAS/SATA connectors at the drive end, we decided to use the 9211 in standard LSI IR mode [hardware RAID] and make a RAID 1 mirror of the two SSDs as a boot drive for ESXi 5.1. We booted ESXi 5.1 and configured an OpenIndiana VM with the original Dell Perc h310 passed through (after setting each drive to what Dell calls "non-raid", which is Dell speak for JBOD).
Our problem: after OI boots and comes up with the Gnome desktop, an error appears about unrecognized hardware. The Dell Perc h310 shows up as:
LSI Logic / Symbios Logic MegaRAID SAS 2008 [Falcon]
and OI lists the driver as "UNK". Before bringing up OI I tried to flash the Dell to the LSI 9211 IT-mode firmware, but the only controller that showed up was the actual 9211 add-in card -- not the Perc h310.
At this point our options appear to be either:
1. Find a suitable Solaris driver for the Dell Perc h310 which so far has eluded us and a Google search.
2. Buy two SFF-8087 multi-lane cables and connect the SAS backplane to the 9211-8i and then somehow use the fan-out cables from the Dell Perc h310 and set up a raid 1 volume of the SSDs on that. Then use PCI passthrough of the 9211-8i instead of the Dell Perc.
3. Find different LSI firmware that can flash the Dell Perc h310 to something that OI can recognize.
Any and all help gratefully requested!!
--peter
Can anyone recommend a good mini-ITX board to use with OpenIndiana and napp-it?
I will use a SAS controller, so it doesn't need to have many onboard SATA ports.
Will the new Z77 Ivy Bridge board from Asus work?
I would go with option 2.
ESXi may be happy with that, as well as OI.
Is there an easy way to determine NIC saturation in OI? Right now I am running on the 2 gigabit ports on my Rackable setup and I am thinking of adding some SSD storage somewhere for our databases. I just wanted to make sure that the current NICs can handle the load, but I need to have some metrics to go by.
I love the setup at http://www.zfsbuild.com/ and I think that's a nice long term goal for our storage. OI and Napp-It have been working very well so far for all of our backup storage and this is on some very low end hardware. Once I can prove some performance numbers and reliability to management then I think a big SuperMicro based storage server will be an easy sell.
EDIT:
nicstat 1.92 worked. I pulled the latest version from SourceForge and while the shell script didn't know about OI, I was able to run it with:
./.nicstat.Solaris_11_i386 1
After that I was able to see nge0 and nge1. Other versions of nicstat just saw lo0 and nothing else. This will help a lot in working through our current networking needs.
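In case it helps anyone else, the invocation I expect to use going forward (the interface names and 5-second interval are just my setup) is something like:
./.nicstat.Solaris_11_i386 -i nge0,nge1 5
watching the rKB/s, wKB/s and %Util columns; a gigabit port is saturated when %Util sits near 100.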
ESXi seems to have no problem with the Dell Perc H310 -- it is OpenIndiana that has the problem. The issue with #2 is that the embedded Dell Perc controller seems to use a custom SAS connector at the controller end that looks like two SFF-8087s joined together. I haven't removed it yet and will have to investigate. Right now I am flashing the H310 to LSI MegaRAID firmware 20.10.1-0061. My first attempt to do this failed because the Dell firmware had version 20.11.0-002 and the flash program (MegaCLI) gave an error that the firmware version was older than what was on the controller. I did discover that there is a "-NoVerChk" option to the flash command and I will try that shortly.
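As a sanity check before flashing, the same DOS MegaCLI binary should be able to report the firmware package currently on the adapter with something like:
MegaCLI -AdpAllInfo -a0
(adapter 0 being the H310 in our case).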
This has been quite frustrating!
--peter
Hey there!
I have a share and I want to assign a user with certain permissions to it. I log in to the share as root and try to add the user, but I get an error saying: Unable to lookup user names for display.
I can add the Power Users and Administrators groups, but can't add the SMB user I created in napp-it... What am I doing wrong?
lp, Matej
We are up and running. Napp-it sees the 8 2TB drives on the SAS backplane. If anyone else is trying this, here are the steps taken:
1. Install and configure OpenIndiana as a VM under ESXi.
2. Download the following Solaris/OI driver:
http://www.lsi.com/downloads/Public/MegaRAID Common Files/3.03_2008_Solaris_Driver.zip
3. Follow instructions in the README to install the driver (imr_sas) in your OI host.
4. Power down your OI VM.
5. Put ESXi host into maintenance mode and power down the Dell server (ours is an r720xd, but probably any Dell server with Perc H310 will work the same).
6. Make a bootable DOS pen drive.
7. Flash the Dell Perc H310 to LSI MegaRAID firmware 20.10.1-0107 from:
http://www.lsi.com/channel/products/storagecomponents/Pages/MegaRAIDSAS9240-8i.aspx#Driver (click on the Firmware choice).
You need LSI's MegaCLI DOS exe for this (put on pen drive).
Command syntax: MegaCLI -adpfwflash -f imr_fw.rom -NoVerChk -a0
Without the "-noVerChk" you will get an error that the firmware on the device is newer than the one you are trying to flash.
8. Power on the server and boot ESXi.
9. Set ESXi to use PCI passthrough of the controller.
10. Add controller to OI VM configuration.
11. Bring up the OI VM. It should recognize the controller and use the imr_sas driver (a quick check is sketched after this list).
12. Install napp-it and proceed to Valhalla :=)
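A quick way to confirm the driver actually bound and the disks are visible (not part of the original steps, just a sanity check) is, from an OI shell:
prtconf -D | grep -i imr
echo | format
The first should show the controller attached to the imr_sas driver, and format should list the disks behind it.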
--peter
I have the desktop edition of OI 151a5 on ESXi 5.1 and have been trying to log into the desktop after I changed the password of the user in napp-it. It allows me to log in, but it asks me to redirect my shares, and regardless of whether I say yes or no it fails and the desktop does not open. Anyone have any ideas short of reinstalling the operating system? I can access napp-it with zero issues. Thanks.
I am afraid I was a bit premature about everything working. Although napp-it could see the 8 2TB disks, when I tried to create a pool, it failed. The disks were not accessible. At this point we decided to order another LSI 9211 and disconnect the Dell Perc H310 from the SAS backplane. Hope I didn't mislead anyone.
--peter
As far as I know, you should flash with the Dell 6Gbps SAS HBA firmware; this is the easy way, without messing around flashing LSI firmware, since the Dell SAS card is kind of picky (not like the M1015).
Mates,
I'm currently using OI + napp-it installed on an external USB disk for testing and it's working great. I'm looking forward to putting this server into production, so I need to get rid of this USB disk ASAP.
Which option would be the best: a cheap 500GB HDD for $100 or a 64GB SSD for $200?
Yes, I'm outside the US, so the prices aren't too attractive.
If I install OI on this Vertex 4, can the spare space be partitioned and used as cache? Is it worth it?
The server is no big deal: a Dell T420 with 8GB ECC memory and a 3x2TB raidz1 on an M1015.
Most of the files are office documents (10+ users) with some sporadic writes of 2GB+ media files.
Thanks in advance!
We did purchase another LSI 9211-8i and had a dog of a time cabling it to the Dell r720xd SAS backplane. Unfortunately after getting OI powered on with the backplane cabled in, we found the same problem -- the disks were recognized but could not be accessed. The errors in /var/adm/messages were like this:
Nov 12 08:54:30 san genunix: [ID 353554 kern.warning] WARNING: Device /scsi_vhci/disk@g5000c500426eaebf failed to power up.
Nov 12 08:57:27 san last message repeated 10 times
Nov 12 08:57:27 san genunix: [ID 353554 kern.warning] WARNING: Device /scsi_vhci/disk@g5000c500426eb25b failed to power up.
Nov 12 08:57:27 san last message repeated 3 times
Nov 12 08:57:27 san genunix: [ID 353554 kern.warning] WARNING: Device /scsi_vhci/disk@g5000c500426eb737 failed to power up.
Nov 12 08:57:27 san last message repeated 3 times
Nov 12 08:57:27 san genunix: [ID 353554 kern.warning] WARNING: Device /scsi_vhci/disk@g5000c500426eb95b failed to power up.
Nov 12 08:57:27 san last message repeated 3 times
Nov 12 08:57:27 san genunix: [ID 353554 kern.warning] WARNING: Device /scsi_vhci/disk@g5000c500426ee097 failed to power up.
Nov 12 08:57:27 san last message repeated 3 times
Nov 12 08:57:27 san genunix: [ID 353554 kern.warning] WARNING: Device /scsi_vhci/disk@g5000c500426efb47 failed to power up.
Nov 12 08:57:27 san last message repeated 3 times
Nov 12 08:57:27 san genunix: [ID 353554 kern.warning] WARNING: Device /scsi_vhci/disk@g5000c500426efe27 failed to power up.
Nov 12 08:57:27 san last message repeated 3 times
Nov 12 08:57:27 san genunix: [ID 353554 kern.warning] WARNING: Device /scsi_vhci/disk@g5000c500427454a7 failed to power up.
Nov 12 08:57:27 san last message repeated 3 times
Nov 12 08:57:29 san genunix: [ID 353554 kern.warning] WARNING: Device /scsi_vhci/disk@g5000c500426eaebf failed to power up.
Nov 12 08:57:31 san last message repeated 2 times
Doing some Google research I found a fix to this problem in Illumos Bug #2091 with a workaround. Like the ashift fix, I had to add a stanza to /kernel/drv/sd.conf:
sd-config-list =
"SEAGATE ST32000645SS", "power-condition:false";
Most of what I read said that after making changes to sd.conf, you need to run the following command to restart the sd driver:
# update_drv -vf sd
Well, unfortunately that did not make the drives power on. Rebooting OI, however, worked.
We are now getting really FANTASTIC disk I/O performance as measured via the napp-it dd benchmark:
562.99 MB/s Write
864.80 MB/s Read
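For the curious: as far as I can tell the napp-it dd bench is just a large sequential dd, so something along these lines (the pool path and sizes are examples only, and compression needs to be off on the target filesystem so the zeros aren't compressed away) should give comparable numbers:
dd if=/dev/zero of=/tank/dd.tst bs=1024000 count=12500
dd if=/tank/dd.tst of=/dev/null bs=1024000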
Needless to say we are now very happy. We had decided to abandon OI/Solaris and use Oracle Linux 6.3 with the UEK2 which has official support for BTRFS. But now we remain in the OI/napp-it camp.
--peter
You should ask your Dell reps for a better ZFS/Solaris support as well.
it is coming. the 60 drive JBOD of theirs is already confirmed working, and working well. i know of a pretty large installation that just occurred with a boat load of the 12 drive dell JBODs.
beware of the intel i350 nics though. there is currently a strange bug that causes the NICs to fall asleep. not just a dell problem either.
i'm still hoping LSI comes out with a 16 port x16 pci-e 3.0 card. they had a pci-e 2 x16 with 16 ports but it was hard to get.
24 port SAS12 ... that is more bandwidth than even pci-e 3 x32 can deliver. 16 port sas12 seems more reasonable at about 20% over what x16 can deliver.
although idk, would be nice for some scenarios. should keep the interrupts lower and you don't 'have' to use all the ports if you don't want to. many of the quad socket boards are shipping with 5 or more x16 slots too which is nice.
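rough arithmetic behind that, my ballpark numbers: a 12Gb/s SAS lane carries about 1.2 GB/s of payload after encoding, so 16 ports is ~19 GB/s against the ~15.8 GB/s usable from a pci-e 3 x16 slot -- about 20% over, as said. 24 ports lands around 29-36 GB/s depending on how you count encoding overhead, i.e. at or beyond what x32 moves.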
Hmmm,
Just made a new pool (raidz) with 4x3TB drives and created a ZFS folder with only 7.8TB usable space. Isn't that a bit low?
It's normal. raidz [or good ol' HW RAID5] of 4x3TB gives about 9 TB of usable space in the decimal terms drives are sold in. You know, HDD manufacturers count capacity differently from most operating systems: storage media are specified in decimal units (1 TB = 10^12 bytes), while the OS reports in binary units (1 TiB = 2^40 bytes). Add to this the "overflow protection (use max 90% of current space)" in napp-it -> Pools -> Create Pool, on by default [which is good], and you have the number.
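Rough math for this case: raidz over 4x3TB leaves 3 data drives, i.e. 3 x 3x10^12 bytes = 9x10^12 bytes, which is about 8.2 TiB in the binary units the OS reports. Take off a little for ZFS metadata, plus the 10% overflow-protection reservation if it is active, and you land right around the figure reported.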
What says "zfs get all * ZFSfolder i.e. tank1/testfolder*" ?
NAME PROPERTY VALUE SOURCE
Qmedia2/Media2 type filesystem -
Qmedia2/Media2 creation Wed Nov 14 11:34 2012 -
Qmedia2/Media2 used 4.94T -
Qmedia2/Media2 available 2.84T -
Qmedia2/Media2 referenced 4.94T -
Qmedia2/Media2 compressratio 1.00x -
Qmedia2/Media2 mounted yes -
Qmedia2/Media2 quota none default
Qmedia2/Media2 reservation none default
Qmedia2/Media2 recordsize 128K default
Qmedia2/Media2 mountpoint /Qmedia2/Media2 default
Qmedia2/Media2 sharenfs on local
Qmedia2/Media2 checksum on default
Qmedia2/Media2 compression off local
Qmedia2/Media2 atime off local
Qmedia2/Media2 devices on default
Qmedia2/Media2 exec on default
Qmedia2/Media2 setuid on default
Qmedia2/Media2 readonly off default
Qmedia2/Media2 zoned off default
Qmedia2/Media2 snapdir hidden local
Qmedia2/Media2 aclmode passthrough local
Qmedia2/Media2 aclinherit passthrough local
Qmedia2/Media2 canmount on default
Qmedia2/Media2 xattr on default
Qmedia2/Media2 copies 1 default
Qmedia2/Media2 version 5 -
Qmedia2/Media2 utf8only on -
Qmedia2/Media2 normalization formD -
Qmedia2/Media2 casesensitivity insensitive -
Qmedia2/Media2 vscan off default
Qmedia2/Media2 nbmand on local
Qmedia2/Media2 sharesmb name=Media2,guestok=true local
Qmedia2/Media2 refquota none default
Qmedia2/Media2 refreservation none default
Qmedia2/Media2 primarycache all default
Qmedia2/Media2 secondarycache all default
Qmedia2/Media2 usedbysnapshots 0 -
Qmedia2/Media2 usedbydataset 4.94T -
Qmedia2/Media2 usedbychildren 0 -
Qmedia2/Media2 usedbyrefreservation 0 -
Qmedia2/Media2 logbias latency default
Qmedia2/Media2 dedup off default
Qmedia2/Media2 mlslabel none default
Qmedia2/Media2 sync standard default
Qmedia2/Media2 refcompressratio 1.00x -
Qmedia2/Media2 written 4.94T -