OpenSolaris derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Long story short: I had a physical server running napp-it which unfortunately got wiped out by an electrical fault. It was set up as follows:

Physical napp-it server: 3 NICs, 1 for management and 2 for iSCSI; the iSCSI was on a separate subnet and was part of an iSCSI VLAN.

ESXi server 1: 2 NICs, 1 for management and 1 for iSCSI in the same subnet and VLAN as the 2 iSCSI ports on the napp-it server.

ESXi server 2: identical setup to ESXi server 1.

I have now installed the AIO (all-in-one) solution on one of the ESXi servers.

I have tried to set up the iSCSI the same as the physical network, but I can't seem to do it. Am I overcomplicating the build?

What I was hoping to achieve on the napp-it server: iSCSI on a separate subnet and VLAN using a dedicated NIC, with another NIC for management. To achieve this I set it up by adding a vmkernel switch on its own subnet and VLAN on my physical switch.

ESXi 2 (not hosting napp-it): 1 NIC for management and VMs, and a separate vmkernel on a separate subnet and VLAN, tied to the same VLAN as the napp-it iSCSI on the physical switch.

Result: the ESXi hosts can't see the targets.

I have also tried adding a Virtual Machine port group on the separate subnet, first with both NICs assigned to the vSwitch as a whole, then with the NICs assigned via NIC teaming so that a single NIC was exclusive to the iSCSI subnet, mirrored on the ESXi side.

Result: ESXi can't see the targets.

Currently I have removed the NICs from the ESXi hosts and have a single NIC in napp-it; both ESXi hosts access the targets without issue.


I may be overcomplicating it, but I feel the network performance on the 2nd ESXi host could be better if it accessed the iSCSI over a dedicated NIC; obviously the ESXi host running napp-it does not have this issue.

A guide or help would be greatly appreciated

I have had a nightmare with this :( When the original physical machine died it took the OS hard drive with it, and along with it all the info: LUNs, share names and so on. I have managed to get them back and recreate the LUNs with the old share names from the CLI, but it was a nightmare. I have only just started playing with iSCSI and VLANs, so be gentle; my introduction to this new world could have been easier.


Thanks, and apologies for the long first post. Any questions and I will do my best to answer.
 
If this is an AIO setup you don't even need to assign a NIC to the iSCSI vSwitch. Just make sure you have a vmkernel with an IP on the separate LAN and it should work. I had that same setup just this last week doing some testing with my new AIO setup, and it had no problems connecting via iSCSI or NFS.
 
Just to clarify: I add another vmkernel to my existing NIC with a separate LAN subnet? Without assigning an additional spare card, it will all be routed through the primary NIC. I was looking to use an additional NIC so that the ESXi host not hosting napp-it would benefit from increased network throughput.
 
If it's an AIO setup you don't need to assign any NIC to the vSwitch. You will need to assign a NIC if it is to be used by machines external to the AIO box, though.
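In case it helps, a minimal sketch of that internal-only case from the ESXi shell (the vSwitch/portgroup names and the address are assumptions, not anyone's actual config):

# iSCSI vSwitch with no physical uplink - traffic never leaves the host
esxcli network vswitch standard add --vswitch-name=vSwitchSAN
esxcli network vswitch standard portgroup add --portgroup-name=SAN-kernel --vswitch-name=vSwitchSAN
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=SAN-kernel
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.1.10 --netmask=255.255.255.0 --type=static
# deliberately no "uplink add": the vmkernel and the SAN vm just share the vSwitch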
 
Real quick one, to save me the headache -- I installed the most recent stable of OmniOS on ESXi, then got napp-it up, etc... all fine.

But when I try to install AFP it doesn't seem to register as a service, or to be startable from within the napp-it frontend. It always stays at disabled there, and I can't even see it as an option available to svcadm.

Is AFP only supported on bloody, or is there something I may be missing?
 
Real quick one, to save me the headache -- I installed the most recent stable of OmniOS on ESXi, then got napp-it up, etc... all fine.

But when I try to install AFP it doesn't seem to register as a service, or to be startable from within the napp-it frontend. It always stays at disabled there, and I can't even see it as an option available to svcadm.

Is AFP only supported on bloody, or is there something I may be missing?

The current napp-it netatalk 3.01 installer runs only on bloody.
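If you want to check from the shell whether the installer registered anything with SMF, something like this will show it (the FMRI below is a guess - use whatever name svcs actually prints):

# list all SMF services, enabled or not, and look for netatalk
svcs -a | grep -i netatalk
# if it shows up as disabled, enable it by the FMRI that svcs printed
svcadm enable network/netatalk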
 
Long story short: I had a physical server running napp-it which unfortunately got wiped out by an electrical fault. It was set up as follows:



Thanks, and apologies for the long first post. Any questions and I will do my best to answer.

On the AIO machine create a vSwitch and add a VM network for the SAN VLAN to it. Add a kernel port and assign an IP in a private range of your choosing. Add a network card to this vSwitch. Set the MTU to 9000 everywhere you can in this vSwitch. Repeat these steps on the other box. Note you won't use the VM network on this machine, but it's good to add it anyway, as you can link a VM here and test-ping the AIO box if you have problems.

Now on the SAN vm, assign a VMXNET3 network card to the SAN VM network. Once booted, assign an IP in the same range as the kernel port and set it to 9000 MTU. Should be good to go now.
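For anyone who prefers the command line, the ESXi side of these steps roughly translates to the following esxcli calls (a sketch - the vSwitch/portgroup names, VLAN ID and IP are assumptions):

# create the SAN vSwitch and enable jumbo frames on it
esxcli network vswitch standard add --vswitch-name=vSwitchSAN
esxcli network vswitch standard set --vswitch-name=vSwitchSAN --mtu=9000
# the VM network for the SAN VLAN (this is what the SAN vm's NIC connects to)
esxcli network vswitch standard portgroup add --portgroup-name=SAN-vm --vswitch-name=vSwitchSAN
esxcli network vswitch standard portgroup set --portgroup-name=SAN-vm --vlan-id=10
# the kernel port, with a private-range IP and 9000 MTU
esxcli network vswitch standard portgroup add --portgroup-name=SAN-kernel --vswitch-name=vSwitchSAN
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=SAN-kernel --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.1.11 --netmask=255.255.255.0 --type=static
# attach the physical NIC so the other box can reach this vSwitch
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitchSAN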

Michael
 
Can anyone advise if these would be/are supported in OI151a5 to enable iSCSI between 2 OI + napp-it servers (or any other suggestions on how best to achieve this)?

The plan is to join the 2 servers together as such, or is there a better way of going beyond 20 drives? I currently have 16 drives in one chassis (maxed out) and 12 in the other, primary chassis, which has space for a further 8 drives.

I am on a very limited budget at the moment, but can obtain 2 x cards and a cable for around the equivalent of $50 here in the UK, if they are any use that is.

Advice gratefully accepted please, guys

Doug

I guess it is a waste of time then
 
......Now on the SAN vm, assign a VMXNET3 network card to the SAN VM network. Once booted, assign an IP in the same range as the kernel port and set it to 9000 MTU. Should be good to go now.

Michael

"On the SAN vm assign a network card to the SAN network" - by this do you mean allocate the IP in the same range, as you then go on to say? In my SAN I have 2 VMXNET3 NICs: one on the management subnet so I can hit the napp-it GUI, the other assigned to the subnet of the newly created vSwitches on the ESXi hosts. Is this correct?

Thanks
 
Ok I have taken some screenshots of my setup

http://imgur.com/a/SfUs5#0

Please can you advise if this is correct? The 2 NICs on subnet 10.0.1.x are plugged into VLAN 10 on the physical switch.

Also, I'm getting the licence error shown in the last pic. I never got this on my previous install on the physical box.

Thanks
 
Ok I have taken some screenshots of my setup

http://imgur.com/a/SfUs5#0

Please can you advise if this is correct? The 2 NICs on subnet 10.0.1.x are plugged into VLAN 10 on the physical switch.

Also, I'm getting the licence error shown in the last pic. I never got this on my previous install on the physical box.

Thanks

Sorry, my instructions were not well written, as I'm too lazy to get my laptop out and am instead typing on my phone.

Your second vSwitch needs a Virtual Machine port group added. Then add a second virtual NIC to the napp vm and set it to this network. Also, it shows a red cross on the NIC connected to this vSwitch; connect a cable between the two machines' NICs, or to the same switch.

Now in the napp vm, configure this new network with the right IP address etc., and you can do test pings from the vm to the IPs of the two iSCSI kernel ports.
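For the test pings themselves (the IPs here are just examples):

# from the ESXi shell, force the ping out of the iSCSI kernel port
vmkping -I vmk1 10.0.1.5
# and from the napp vm, ping back to each host's kernel port
ping 10.0.1.11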
 
Thanks I have made the changes as in the pics

aFcks.png


and

oXCzE.png



However, adding the host IP, either the 10.0.0.x or the 10.0.1.x, the LUNs are not detected on ESXi, as in the screenshot below

naYss.png



Have I set it up correctly? napp-it has 2 NICs
 
So, just following up on some preliminary testing - these are the initial results of exporting a 4x 1TB F3 RAID 10 array via 4Gbps FC to a Windows 7 test vm:

screenshot20130112at336.png


There's no ZIL/L2ARC SSD set up on here (yet?) - do these results look reasonable? What could I do to improve the 4K reads/writes? I'm assuming a decent L2ARC could help with the 4K reads - would adding a ZIL slog help any more than simply running async?
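One cheap experiment before buying anything: compare sync=disabled against a slog, since sync=disabled is the upper bound any slog can approach (a sketch - "tank" and the device names are placeholders):

# benchmark once with sync disabled, then restore the default
zfs set sync=disabled tank
zfs set sync=standard tank
# if the gap justifies it, dedicate the SSD (or a slice of it) as ZIL slog
zpool add tank log c4t1d0
# and/or add it as L2ARC for the 4k random reads
zpool add tank cache c4t2d0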
 
Thanks I have made the changes as in the pics

Have I set it up correctly? napp-it has 2 NICs

In your top network pic I can't see your vm called napp connected to the second vSwitch's VM port group called "VM Network2". Edit this vm's settings, go to its network adapters, and you can change one of them to point to this second VM port group. This is how you control where in your virtual network each of your vm's NICs talks out.

There has to be a path for the iSCSI traffic to get from ESXi's kernel ports to the SAN vm for iSCSI to work. Also remember, when thinking about this networking, don't assume that the word virtual means not real. In VMware the vSwitches are REAL switches that are just implemented in software and not hardware. They act the same in most ways as physical switches, other than the fact they can be expanded more easily. Virtual Machine port groups are just like plug-in modules for your vSwitch that have some network ports for connecting into VMs' NICs. And kernel port groups are a NIC attached to the kernel of the ESXi box, plugged into the vSwitch as well. When you assign a physical NIC to a vSwitch you are making that NIC the main external trunk port for that switch, so it can connect to other physical switches/computers. Once you understand that things work the same as physical networking, you can put all the knowledge/skill you have with networking to good use.

Also learn how to use multiple VM port groups, and rename them to names that work for you. Names of VM port groups are very important, as they have to be consistent across all your ESXi machines so you can move VMs around without problems. Note that when you rename one, you may have to fix all the VMs using it to point at the new name, as I've had it cause problems before. This is one of the first things you should get right in your ESXi config. I normally set up extra VM port groups for each of the subnets I might be using, like an internet subnet which firewall VMs will use to access your internet router/modem. You can also have secure management subnets, and I like to make a testing one as well, which is useful for connecting new or cloned VMs to when you don't want them to see the real network. To have multiple port groups work you need to assign a VLAN tag to each, and have a smart layer-2 physical switch that supports VLAN tagging etc. This lets you use one physical NIC (or a pair for failover) to connect several of these network subnets between your two or more hosts.
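As a concrete example, tagged port groups on a shared vSwitch look like this from the CLI (the names and VLAN IDs are made up - pick your own scheme):

# several subnets sharing one vSwitch/uplink, separated by VLAN tag
esxcli network vswitch standard portgroup add --portgroup-name=Internet --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=Internet --vlan-id=20
esxcli network vswitch standard portgroup add --portgroup-name=Testing --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=Testing --vlan-id=99
# the physical switch port the uplink plugs into must trunk VLANs 20 and 99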

Michael
 
Quick question, since I find the current info somewhat confusing and cannot find my way around the options:

I do want/need the native full-disk encryption with SOL11. What combinations of versions of SOL11 and napp-it will allow me to configure my pools with encryption and pool version V31+ via the GUI?

Background (long story)

I am going to refurbish my box, adding more disks and such.
The current setup runs SOL-EX11 and napp-it 0.8g; I originally started it with napp-it v0.5k a while ago.

During set-up of an intermediate box with SOL11 and napp-it 0.8, I noticed that creating pools with native encryption (pool version V31+) did not work via the napp-it GUI... only pools up to V28 were selectable / were created.
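From the CLI, creating such a pool does work on Solaris 11, so it looks like a GUI limitation - a sketch, with hypothetical disk names:

# pool at the current version with an encrypted root dataset (prompts for a passphrase)
zpool create -O encryption=on tank mirror c7t2d0 c7t3d0
# or an encrypted filesystem inside an existing V31+ pool
zfs create -o encryption=on tank/secure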


many thanks in advance!

regards,
Hominidae
 
Any showstoppers on bloody I should know about? I don't mind running slightly unstable, heh.

I found my own showstopper for my environment -- bloody doesn't seem to build DHX2 modules for AFP (I tried hacking them together, but it wasn't pretty), which means my Mountain Lion hosts couldn't talk to the netatalk server :(

OI worked fine though, so I'll start there and transition over to OmniOS if the next stable release has that part working.
 
Is it possible to define a "degraded" raidz2 at creation time?

What I want to do is the following:

Working setup: raidz2 with 4+2 disks.
Define the raidz2 with 4+1 disks, with one disk temporarily holding the migration data.

After copying the data, complete the raidz2.

greetings schleicher
 
Has anyone done some benchmark to compare Solaris 11.1 / OmniOS / OI lately?

I am interested mostly in the SMB / NFS performance, as well as vmware esxi support (vmxnet3).

ZPool version 33 seem to add some nice improvements to SMB support.
 
Is it possible to define a "degraded" raidz2 at creation time?

What I want to do is the following:

Working setup: raidz2 with 4+2 disks.
Define the raidz2 with 4+1 disks, with one disk temporarily holding the migration data.

After copying the data, complete the raidz2.

greetings schleicher

You "should" be able to do this using a sparse file the same virtual size as the 5 disks (or larger). You'd have to do all this from the OS command line though!


Create the pool using the 5 disks plus the sparse file, then delete the sparse file itself before you copy any data to the pool (you don't want to actually write any data to the sparse file - you are just using it as a placeholder). Then copy your data to the degraded pool, and finally replace the failed sparse-file "device" with the now-available real disk device.

On Solaris/OI, you can use the truncate command to create the sparse file (if it's not available, you can use dd)
 
BTW - you may have to export and then re-import the pool after deleting the sparse file (but before you copy any data), to get the pool properly recognised as degraded.
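Putting the whole trick together, something like this (a sketch assuming 1TB disks c2t0d0-c2t5d0 and a pool called "tank"):

# sparse placeholder, same virtual size as the real disks or larger
truncate -s 1T /var/tmp/fakedisk   # or: mkfile -n 1024g /var/tmp/fakedisk
# build the raidz2 from 5 real disks plus the placeholder (-f allows mixing disks and a file)
zpool create -f tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 /var/tmp/fakedisk
# remove the placeholder BEFORE writing any data, then export/import so the
# pool is properly recognised as degraded
rm /var/tmp/fakedisk
zpool export tank
zpool import tank
# ...copy the data from the 6th disk into the degraded pool...
# finally swap the "failed" placeholder for the now-free real disk
zpool replace tank /var/tmp/fakedisk c2t5d0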
 
In your top network pic I can't see your vm called napp connected to the second vSwitch's VM port group called "VM Network2". Edit this vm's settings, go to its network adapters, and you can change one of them to point to this second VM port group. This is how you control where in your virtual network each of your vm's NICs talks out.

There has to be a path for the iSCSI traffic to get from ESXi's kernel ports to the SAN vm for iSCSI to work. Also remember, when thinking about this networking, don't assume that the word virtual means not real. In VMware the vSwitches are REAL switches that are just implemented in software and not hardware. They act the same in most ways as physical switches, other than the fact they can be expanded more easily. Virtual Machine port groups are just like plug-in modules for your vSwitch that have some network ports for connecting into VMs' NICs. And kernel port groups are a NIC attached to the kernel of the ESXi box, plugged into the vSwitch as well. When you assign a physical NIC to a vSwitch you are making that NIC the main external trunk port for that switch, so it can connect to other physical switches/computers. Once you understand that things work the same as physical networking, you can put all the knowledge/skill you have with networking to good use.

Also learn how to use multiple VM port groups, and rename them to names that work for you. Names of VM port groups are very important, as they have to be consistent across all your ESXi machines so you can move VMs around without problems. Note that when you rename one, you may have to fix all the VMs using it to point at the new name, as I've had it cause problems before. This is one of the first things you should get right in your ESXi config. I normally set up extra VM port groups for each of the subnets I might be using, like an internet subnet which firewall VMs will use to access your internet router/modem. You can also have secure management subnets, and I like to make a testing one as well, which is useful for connecting new or cloned VMs to when you don't want them to see the real network. To have multiple port groups work you need to assign a VLAN tag to each, and have a smart layer-2 physical switch that supports VLAN tagging etc. This lets you use one physical NIC (or a pair for failover) to connect several of these network subnets between your two or more hosts.

Michael

Thanks, I set this up exactly as you said and it worked perfectly. However, there seems to be an anomaly: when I let the iSCSI flow over the management card, the transfer rates to and from the SAN are significantly better than when using a dedicated route and card. These are identical NICs as well, so I am at a loss as to why this would be. So for all my faffing, I am currently letting the iSCSI flow through the management card and achieving better throughput.

Thanks again for taking the time to reply and point me in the right direction
 
Having a strange problem!

When starting my VM in ESXi I try to log in via the console but get a message from OI to configure my first-time login; I fill in what is required, but then get a message that OI can't create the folders due to permission problems. Result: the OI desktop doesn't start and I only get the blue screen with the OI logo, nothing else!
Any ideas?

Ty
 
Can anyone tell me how to fix this error? I have had it since install; it comes up when I click the system menu and when I first log in

1I3oN.png
 
Can anyone tell me how to fix this error? I have had it since install; it comes up when I click the system menu and when I first log in

This happens if you rename the host with (eval) extension keys installed.
Go to menu Extension - Register - Edit and delete the key.
 
Thanks, that sorted it. I wasn't aware that I had changed the host name, although I did import the pool from a former install?
 
Since your guide is superb and has been available for a long time, I would rather see them do something we are all waiting for: an OmniOS 100% tested & working all-in-one guide :D Any ETA on that perhaps?

OmniOS stable: ok
napp-it support: ok
netatalk3 support: ok (not fully tested; available since today)
manual to update ESXi 5.1 and install VMware tools: ok

missing: some testing
but I am switching all my new installations to Omni

current state and insights:
http://www.napp-it.org/downloads/omnios_en.html
 
Has anyone been able to pass through a Sil3124-based controller card successfully? Bootup errors out with these messages. I'm able to pass through the onboard P55 SATA controller just fine.

capture1de.jpg


I've flashed the controller card to the non-RAID/JBOD image. I read somewhere that that might help, but I still get the same errors. I'm not too well versed in Linux but can poke around with directions. Any ideas?
 
So I finally managed to purchase all of the gear I needed and build my fileserver.

As expected, it's not behaving as expected. ;)

Pertinent info:
ESXi 5
Gigabyte Z77N-Wifi
Solaris 11
Intel 3770
7x Hitachi 7k4000
Currently 8GB of RAM but will upgrade to 16 shortly (though the napp-it box will still only get 6-8).
Direct passthrough of a M1015
ESXi running on Intel X25

Currently assigned to the napp-it box are 6GB of memory and 2 vCPUs. RAIDZ2 with all 7 disks. Samsung Force GT 64GB as L2ARC.

So there are a couple of issues - first and foremost, what kind of performance should I expect from this configuration? Currently seeing approximately 40MB/s sequential reads and 70MB/s sequential writes in CrystalDiskMark. This seems abysmally low.

In my initial testing tonight, I tried downloading a torrent (legally), which resulted in "Disk Overload at 100%"? This was over SMB. I increased my uTorrent cache but that only helped temporarily; subsequently, download speeds dropped to 7kB/s. What am I missing here?
 
Hm, Bonnie never appears to run? I hit start and continuously reload but I'm not getting results.

Just saw that Bonnie isn't working in Solaris 11. Will run dd bench momentarily.

Edit:

Memory size: 8192 Megabytes

write 16.777216 GB via dd, please wait...
time dd if=/dev/zero of=/vdev1/dd.tst bs=2048000 count=8192

8192+0 records in
8192+0 records out

real 1:21.8
user 0.0
sys 5.3

16.777216 GB in 81.8s = 205.10 MB/s Write

wait 40 s
read 16.777216 GB via dd, please wait...
time dd if=/vdev1/dd.tst of=/dev/null bs=2048000

8192+0 records in
8192+0 records out

real 22.6
user 0.0
sys 4.3

16.777216 GB in 22.6s = 742.35 MB/s Read
 
Unfortunately, Realtek on the ESXi side w/ VMXNET3.

Other NICs are Realtek, Broadcom and Intel.
 
I had similar slow performance and it was due to the NIC; make sure you're not using the generic Windows drivers. Also, there is a trick to disable the 20% NIC reservation in Windows (Google it); it helped me.

As for the Realtek on the ESXi side, I'm not too sure... maybe try to get an Intel enterprise NIC (maybe a dual-port one) on eBay; they are fairly cheap (~$50).

If nothing does it, try OmniOS or OI.
 
Gea or company,

In the new napp-it 0.9a7, why can't you click the IP and change it like you could in previous versions? I click DHCP and try to unbind it, but it stays DHCP no matter what.
This is OpenIndiana + napp-it
 
While there are sometimes problems keeping the former DHCP address after switching to static (with the need to set up the static IP locally), I have seen such a problem only with more than one NIC, where the first NIC is not used. (This is a general OI problem.)

Use the first NIC or delete unused NICs.
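If you do need to set the static IP locally, something along these lines works on OI/OmniOS (a sketch - the e1000g0 interface name and the addresses are assumptions):

# if NWAM is managing the interfaces, switch to a fixed profile first
#   (netadm enable -p ncp DefaultFixed, or nwamadm on older builds)
# check the existing address objects, then drop the DHCP one
ipadm show-addr
ipadm delete-addr e1000g0/dhcp      # use the addrobj name that show-addr printed
# create the static address and a persistent default route
ipadm create-addr -T static -a 192.168.1.10/24 e1000g0/v4
route -p add default 192.168.1.1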
 