OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Notes from watching the video: http://smartos.org/2011/08/24/video-smartos-the-modern-operating-system/

- ALPHA! :( Not ready for production use yet, but they're moving fast; no ETA given. Not meant to be installed, but there are instructions for trying it.

- QEMU/KVM runs inside a sparse zone/container, so the recently found security holes are not an issue: an attacker just gets stuck in the container/zone.

- Like the free version of VMware, there's no GUI for managing VMs (unless Solaris and its derivatives have something?). The management software, SmartDataCenter, is their vSphere equivalent, I suppose.

- Sometimes a VM runs faster than bare metal thanks to ZFS's ARC.

- Xen is for the history books.

- They like OS-level virtualization as the default; use KVM hardware virtualization only for legacy stuff that requires it.

- No fan of SANs / "centralized storage"; they like storage kept local with ZFS, based on bad experience.

- Wants to play well with others: the illumos, KVM, BSD, GNU, etc. communities. Open and willing to help get other projects running on there.

- Projects they'd help with but aren't doing themselves: AMD CPUs (??), SPICE VDI, ??installer for SmartOS.

- Saw a visualization app for DTrace: a graph of a Linux VM writing to an ext3 FS. Cool!

- VM live migration is for KVM, not for OS-level VMs.

- Local zpool for VM storage; zfs send/receive for VM migration (rough sketch after these notes).

- 6.5 release in mid-October.

- Major in-house upgrade for Joyent in a couple of months. ??will have a close match between the in-house and public repositories.

- An installer for SmartOS will be tough: they don't install, they run from DRAM, booting off USB for the head node.

- ext3 over ZFS does have a penalty. Use OS virtualization if performance matters.

- There's a chance of a free version of SmartDataCenter. Not decided.

- Metering for resource-based billing.

- Regular podcasts in the future, stay tuned.
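
A rough sketch of that send/receive migration, assuming a VM dataset named zones/vm1 and a destination host called dest (both names made up):

# snapshot the VM's dataset and stream it to the target host
zfs snapshot zones/vm1@migrate
zfs send zones/vm1@migrate | ssh root@dest zfs receive zones/vm1
# after stopping the VM, send only the delta since the first snapshot
zfs snapshot zones/vm1@final
zfs send -i @migrate zones/vm1@final | ssh root@dest zfs receive zones/vm1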
 
If anything, listening to the webcast made me think that doing a bare-metal OI/illumos install is a good idea, especially now with the KVM port. (Disclosure: a noobish, not expert, opinion.)

Tweets from OI:

OpenIndiana (@openindiana):
"@vv111y We're definitely very pro smartos and are very supportive of Joyent's contributions"
Aug 25, 12:42 PM via Nambu

OpenIndiana (@openindiana):
"@vv111y OI is a general purpose server/workstation OS and smartos is designed for cloud hosting so they solve different problems"
Aug 25, 12:41 PM via Nambu
 
Not ZFS; that would be up to the host OS. OI certainly can.

Sorry, I meant the OS... hee... but different people here use different OSes like OI, NX or SE, so I just generalise it as ZFS :)

Oh, does anyone do any extra disk checks for the data disks running on OI?
 
Oh, does anyone do any extra disk checks for the data disks running on OI?

Some people run the main OS disk in RAID1 (a mirrored rpool).
It's not really needed if you are only using the OS to access the ZFS data; it only takes half an hour to set it up again.
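
For what it's worth, turning a single-disk rpool into a mirror is mostly one command (disk names below are placeholders):

# attach a second disk to the root pool; it resilvers into a mirror
zpool attach rpool c0t0d0s0 c0t1d0s0
# on x86 OpenSolaris/OI, also put GRUB on the new disk so it can boot
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0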
 
Is ZFS capable of detecting SMART errors?
It is not necessary with SMART. ZFS is far more sensitive than SMART. You should trust ZFS rather than SMART: ZFS will raise an alarm before SMART does.

People have had disks crash without SMART ever reporting errors. And they have had disks where SMART reported errors, so they used them as scratch disks, storing nothing important on them, and the disks continued to work perfectly for years!

Don't trust SMART. Trust ZFS; it is the safest solution on the market right now.
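
To be concrete, the "alarm" here is ZFS's checksum error counters; a minimal check, assuming a pool named tank:

# read and verify every block in the pool against its checksums
zpool scrub tank
# READ/WRITE/CKSUM counters show errors SMART may never report
zpool status -v tank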
 
Has anyone had a chance to try SmartOS yet? Considering a rebuild of my home server, might be able to integrate my separate PFsense router box into the same hardware as my file server.

Proposed build:
- 6*1TB disks + 6*2TB disks + 6*3TB disks (already have)
- Supermicro AOC-USAS2-L8i (already have)
- Norco RPC-4220 case (already have)
- HP SAS expander (already have)
- OCZ Vertex 2 60GB (L2ARC) (already have)

Any comments on this build? I know it seems KVM isn't working on AMD yet, but that might happen at some point, and I already have hardware that works, so there's no loss of functionality.
It is strongly suggested to use raidz2 with large drives.

A question: how do you connect the Norco chassis to your PC? Or do you have a motherboard in your Norco, which means you don't have a separate PC?
http://hardforum.com/showthread.php?t=1632189
 
Sorry, I meant the OS... hee... but different people here use different OSes like OI, NX or SE, so I just generalise it as ZFS :)

Oh, does anyone do any extra disk checks for the data disks running on OI?

I run my rpool on a mirror and I scrub it weekly for errors.
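
If anyone wants to automate that, a weekly scrub fits in root's crontab (day, time and pool name are just examples):

# 'crontab -e' as root, then e.g. scrub every Sunday at 03:00
0 3 * * 0 /usr/sbin/zpool scrub rpool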
 
I currently have a RAID-Z configuration with 5 disks. I just got a spare disk in, and I seem to be having problems in napp-it changing it to RAID-Z2. What exactly do I have to do? I don't want to lose any data, so I figured I should ask here before I do anything stupid. I was looking in 'add vdev' and I managed to configure the new drive as a spare, but is that the same as RAID-Z2? (I don't think it is.)
 
You cannot migrate from a raidz to a raidz2 without moving your data off the pool to some other storage and then remaking your vdev as a raidz2 vdev.

You can, however, add the drive as a spare, but as you're aware, it is NOT the same; far from it, in fact.

If you can, find temporary storage somewhere, maybe at a friend's or something, and then remake your zpool.
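
A rough sketch of that move using zfs send/receive, assuming the pool is named tank, the temporary storage is a pool named backup, and the disk names are placeholders:

# 1. snapshot everything and copy it to the temporary pool
zfs snapshot -r tank@move
zfs send -R tank@move | zfs receive backup/tank
# 2. destroy the pool and recreate it as a 6-disk raidz2
zpool destroy tank
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# 3. copy the data back
zfs send -R backup/tank@move | zfs receive -F tank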
 
You cannot migrate from a raidz to a raidz2 without moving your data off the pool to some other storage and then remaking your vdev as a raidz2 vdev.

You can, however, add the drive as a spare, but as you're aware, it is NOT the same; far from it, in fact.

If you can, find temporary storage somewhere, maybe at a friend's or something, and then remake your zpool.

Thank you. That's what I was afraid of. I have the data backed up elsewhere currently, so it won't be too big of an issue. I'll just need some time to sit down and do it.
 
Actually, if running OI with disks only in mirrors... does it matter what CPU you are using? Since mirroring has no need for parity calculation? :confused:
 
Mirroring AFAIK just involves the two writes.

So Xeon or Atom, no difference?


I'm thinking 8 disks of 2TB in RAID10... lol... so as long as no 2 disks from the same mirror go down, I will still be able to rebuild, yet with RAID10 performance.
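
For reference, ZFS's RAID10 is a stripe of mirror vdevs, and a mirror rebuild is a plain copy with no parity math; an eight-disk sketch (device names made up):

# four two-way mirrors striped together = RAID10
zpool create tank \
  mirror c0t0d0 c0t1d0 \
  mirror c0t2d0 c0t3d0 \
  mirror c0t4d0 c0t5d0 \
  mirror c0t6d0 c0t7d0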
 
Last edited:
How does this look for reads/writes? Comparable to what it should be?

I'm happy with it, as it's more than I'll need to saturate a few GigE lines; it looks to be 10 GigE ready :)

Pool: Media (21.8T), Bonnie run 2011.08.29, 32G test file
Seq-Wr-Chr: 113 MB/s (98% CPU)
Seq-Write: 1025 MB/s (83% CPU)
Seq-Rewrite: 330 MB/s (38% CPU)
Seq-Rd-Chr: 119 MB/s (91% CPU)
Seq-Read: 759 MB/s (32% CPU)
Rnd Seeks: 1084.6/s (3% CPU)
Files: 16, Seq-Create: +++++/s, Rnd-Create: +++++/s

A bit odd to me, though, that it writes at 1025 MB/s but only reads at 759 MB/s; seems off, but it will work for me. It could have to do with me accessing the GUI while it was benchmarking.
 
_Gea, since 0.600a napp-it seems to have lost some functionality.

It's all on the "Pools" tab.

'zpool list' doesn't work anymore.
screenshot: http://k003.kiwi6.com/hotlink/he33ejkjse/1.jpg

and the sub-tab Poolinfo doesn't seem to do anything either
screenshot: http://k003.kiwi6.com/hotlink/9z8b40lcqn/2.jpg

bscrx

0.6a buffers ZFS and disk info to give better performance with lots
of ZFS filesystems or disks. Try the ZFS or disk menu's 'reload' prior to the other items.

(0.6a is a very early build with lots of internal changes; use it only to evaluate
the background agents.)
 
I'm having problems setting up NFS and was hoping someone could help. From the napp-it interface I created a pool called NFS using raidz. I then went to the 'ZFS folders' tab and created a ZFS dataset, also called NFS, and enabled SMB.

Then on my ESXi server I try to add an NFS filesystem; I point it to the server and the NFS folder, but it fails to create the datastore every time. This is the error I keep getting:
Call "HostDatastoreSystem.CreateNasDatastore" for object "ha-datastoresystem" on ESXi "10.19.136.51" failed.
Operation failed, diagnostics report: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details.
 
That's the problem: I don't see an option for NFS, just SMB, and I could have sworn the napp-it all-in-one PDF said to enable both.
 
I'm confused. Are you looking at the zfs folders page? If so, every folder should have sharesmb and nfs links you can click on. If not that page, what are you looking at?
 
This is what I see on the 'create zfs folder' page:
(screenshot: zfs.PNG)
 
That's not the right page. Don't click 'create'.

Just click on 'ZFS Folder'. It shows your datasets/folders. Click the red entry under the NFS column for your folder and set it to 'on'.
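
That napp-it link just flips the dataset's share property; the CLI equivalent, assuming the pool/dataset from the post is NFS/NFS (and noting ESXi mounts as root, so the root= option may be needed):

# share the dataset over NFS
zfs set sharenfs=on NFS/NFS
# or, if ESXi is refused, grant its host root access
zfs set sharenfs=rw,root=10.19.136.51 NFS/NFS
zfs get sharenfs NFS/NFS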
 
Hi Gea, long time no speak. Just dropping by to say keep up the great work and glad to see your project continuing. The OP also looks great, haven't seen it in a while and it's really expanded.
 
0.6a buffers ZFS and disk info to give better performance with lots
of ZFS filesystems or disks. Try the ZFS or disk menu's 'reload' prior to the other items.

(0.6a is a very early build with lots of internal changes; use it only to evaluate
the background agents.)

It worked after a reload in the zfs menu, thanks.
 
Hi Gea, long time no speak. Just dropping by to say keep up the great work and glad to see your project continuing. The OP also looks great, haven't seen it in a while and it's really expanded.

Thanks for the comment.
Not only napp-it but the whole ZFS ecosystem is growing rapidly with Illumos, Nexenta* and OpenIndiana.
The Oracle dilemma that some expected seems to be ending in a free enterprise OS that is much better supported and developed than the former Sun OpenSolaris.
 
It is strongly suggested to use raidz2 with large drives.

A question: how do you connect the Norco chassis to your PC? Or do you have a motherboard in your Norco, which means you don't have a separate PC?
http://hardforum.com/showthread.php?t=1632189

Yep, I'm running raidz2 vdevs, 6-wide.

I have a motherboard in the Norco. It runs OpenSolaris, and it's got an LSI controller with an SFF-8087 cable running from it to the HP SAS expander.
 
Hi Gea, long time no speak. Just dropping by to say keep up the great work and glad to see your project continuing. The OP also looks great, haven't seen it in a while and it's really expanded.

I second this. Gea, I've only been using Napp-It for a few weeks but I'm very impressed with it. I plan on making a donation soon.

Any plans for including link aggregation and jumbo frame support?
 
After digging through this thread I have 2 questions:

1) Can you run the Napp-It web GUI side by side with the Nexenta Community Edition web GUI? I couldn't get it working tonight, but Napp-It on the Nexenta core was a breeze to set up and configure.

2) This is probably not a Napp-It issue, but is there a way to change the LUN label presented over iSCSI? We are working with Win2k8 servers and we see the awful device names that we can't differentiate. It would be nice to be able to see a label that is easier to track back to Napp-It. Again, not sure if this is even something on the Napp-It end of the equation or not.
 
Any plans for including link aggregation and jumbo frame support?

You can use them from the CLI, but I currently do not intend to include them in napp-it.
The speed advantage is mostly minimal and it complicates things a lot.

It's time for 10 GbE if you really need more speed.
My own server infrastructure is completely 10 GbE. NICs start at about 300 Euro, and even
switches like the HP ProCurve 2910 are at about 450 Euro per 10 GbE port.
In 2012 I expect a massive breakthrough of 10 GbE over twisted pair.
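
For anyone who wants the CLI route, on OpenIndiana/illumos it looks roughly like this (NIC names are placeholders; Solaris 10 uses an older -d syntax):

# bundle two NICs into one aggregation
dladm create-aggr -l e1000g0 -l e1000g1 aggr0
# enable jumbo frames on the aggregation
dladm set-linkprop -p mtu=9000 aggr0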
 
After digging through this thread I have 2 questions:

1) Can you run the Napp-It web GUI side by side with the Nexenta Community Edition web GUI? I couldn't get it working tonight, but Napp-It on the Nexenta core was a breeze to set up and configure.

2) This is probably not a Napp-It issue, but is there a way to change the LUN label presented over iSCSI? We are working with Win2k8 servers and we see the awful device names that we can't differentiate. It would be nice to be able to see a label that is easier to track back to Napp-It. Again, not sure if this is even something on the Napp-It end of the equation or not.

1.
Only in theory. You can install it, but it won't run.
Reason: the root account is disabled by a config file in the /root folder.
If you enable it, you may use it as a napp-it replication source.

The next problem is the mountpoints.
While NexentaStor mounts pools under /volumes, napp-it and all other Solaris
distros mount them under /. napp-it currently cannot handle this in share and ACL settings,
and I do not want to offer a concurrent GUI for NexentaStor.

Also, the repositories are different.
The Stor one does not include some tools that are required by the installer.

Conclusion: use either NexentaStor CE with the Nexenta GUI, or NexentaCore/OI/SE11 + the napp-it GUI.

2.
You may use a separate target for each LUN.
You can set the names in napp-it.
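
In COMSTAR terms, "a separate target for each LUN" looks roughly like this; the IQN suffix becomes the readable label Windows sees (names and the LU GUID are placeholders):

# a dedicated target whose IQN carries a readable label
itadm create-target -n iqn.2010-09.org.illumos:vm-datastore1
# give it its own target group (the target must be offline while added)
stmfadm create-tg tg-datastore1
stmfadm add-tg-member -g tg-datastore1 iqn.2010-09.org.illumos:vm-datastore1
# expose only the one LU through that target group
stmfadm add-view -t tg-datastore1 <lu-guid>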
 
I did notice one other possible issue today and I didn't find it in this thread:

Using vSphere 4, I added a new disk on the fly. I could not get Napp-It to display the disk until after a reboot of the VM. Will we have to reboot each time we add storage? I'm hoping we can just add storage as necessary and create our shares live.

I am using Nexenta Core as the base OS if that makes any difference.
 
I don't think you can hot-add hardware via vSphere itself (with very few exceptions). On the other hand, if you are doing passthrough, it will work, since vSphere is not involved...
 
Hmmmm, I could have sworn I read you couldn't, but looking back, I think that refers to other devices. I assume there is some OpenSolaris command that will scan for disks, but I don't know what offhand...
 
Just as a sanity check, I did a side-by-side test with a Nexenta Community Edition VM we are running. I created a new 8GB thin VMDK and hot-added it to the VM. Nexenta CE did not see the disk until I selected the Refresh -> Refresh Device Links option in the Settings -> Disks tab.

So it appears there is a command to force the SCSI bus to refresh, and I just need to figure out what it is, OR where that same command is within Napp-It.
 
devfsadm? I can't confirm this, since my SAN VM has a PCI passthrough device (the HBA), so hot-add of devices is disabled.
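
devfsadm is indeed the usual answer; a quick sketch of the rescan after hot-adding a disk:

# rebuild /dev links and pick up newly attached devices
devfsadm -v
# sanity checks: list attachment points and visible disks
cfgadm -al
format </dev/null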
 