OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

You have to make a virtual switch in ESX and then assign VLANs to that switch... something in that direction...

Use search, it's been discussed before.
 
What I did: make the perms 777 on the NFS share folder, and use an IP restriction on the share so only ESXi can access it. Like this:
tank/esxi-datastore sharenfs [email protected] local
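For anyone following along, a fuller version of that setup might look like this; the dataset name and the ESXi host's IP below are placeholders, not values taken from the post above:

```shell
# Hypothetical sketch: share a dataset over NFS, but only to the
# ESXi host, and give that host root access (the Solaris-side
# counterpart of Linux's no_root_squash).
zfs create tank/esxi-datastore
chmod 777 /tank/esxi-datastore
zfs set sharenfs=rw=@192.168.1.144,root=@192.168.1.144 tank/esxi-datastore
zfs get sharenfs tank/esxi-datastore   # verify the property stuck
```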
 
If I search for the error here I get nothing. I found the following article online, but having added the VMkernel port I still get the same behaviour.

http://blog.jeffcosta.com/2011/01/1...-error-when-setting-up-vmware-nfs-data-store/

dan, I tried your suggestion too with no luck.

Will try some more tomorrow. Thanks.
 
Are there any tools that will rebalance a pool between two vdevs? One vdev was nearly full when I added the second. Copying and then deleting the old copy seems to do it, but then the files' last-modified times change to the current date.
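One workaround for the timestamp problem is to copy with a tool that preserves mtimes. A rough sketch, assuming a dataset mounted at /tank/data (the path is made up):

```shell
# Rewriting the data spreads the new blocks across both vdevs.
# cp -p preserves the modification time; a plain cp does not.
cp -pR /tank/data /tank/data.new
# spot-check a few timestamps before deleting the originals, then swap:
rm -rf /tank/data
mv /tank/data.new /tank/data
```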
 
Assuming drive speed isn't the limit, how much CPU do you need to keep up with a 10GbE connection with ZFS? I am trying to decide if a G620 is enough or whether I need to go with something more powerful. I plan on keeping this for a while, so I want to be able to grow.
 
I was able to get good 10GbE speeds on an older Core2Quad Q9200 machine (Asus P5Q-E motherboard). Just be sure that the slot you put it in can handle true x8 PCIe speeds if the 10GbE card only supports PCIe 1.1 (like my Dell XR997).

You will likely need to do the recommended Solaris tweaks for improved 10GbE throughput.

For starters (kind of old):

http://www.solarisinternals.com/wiki/index.php/Networks

EDIT: using benchmark programs I could easily saturate the 10GbE connection between the OpenIndiana server and my Windows box.
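For reference, the classic Solaris-era knobs from that page look roughly like this. The values below are common starting points I'm assuming, not ones given in this thread, so verify them against current docs:

```shell
# Raise TCP buffer limits for 10GbE (run as root; values are examples):
ndd -set /dev/tcp tcp_max_buf 4194304     # maximum socket buffer size
ndd -set /dev/tcp tcp_cwnd_max 4194304    # maximum congestion window
ndd -set /dev/tcp tcp_xmit_hiwat 1048576  # default send buffer
ndd -set /dev/tcp tcp_recv_hiwat 1048576  # default receive buffer
```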
 
Thanks, I will look into the tweaks.

 
The current NexentaStor 3 is based on OpenSolaris b134, which is end of life.
The next Nexenta 4 will be based on Illumos, just like OpenIndiana, and should come out soon.
So you must rethink your boot environment in any case.

You said that you use a hardware RAID-1.
Why? Nexenta and Solaris support software ZFS mirroring (RAID-1) on standard mainboard SATA.

There are also driverless SATA RAID-1 enclosures if you want an all-in-one based on ESXi.

Well ... indeed I need to rethink the system ... I'll need to find some SATA drives on the cheap somewhere, which will be a challenge. I used the RAID-1 because I was too lazy to change it :D

If I configure it as a RAID-0 it should work, no? Then make ZFS out of it.

Or ... just use some SATA drives, which would free up the slot ... hmmm

Thanks for the info.
 
Rectal Prolapse

You were spot on the money. :D
I am running iSCSI with spindown working, no problem. It only started working, though, after the FMD configuration was changed:

http://www.nexenta.org/boards/1/topics/1414

Look for the post about testing by disabling FMD service and also the conf edits.

It was the FMD configuration that fixed it. I set the 'interval' property to 24 hours in the 'disk-transport.conf' file and sure enough, the disks now spin down based on the device threshold set in power.conf. This is great. My server idles at ~ 55W after the disks spin down.

Now if only there were a way to 'spin down' the case fans (I have 3 140mm fans that run all the time); maybe I'd be able to knock off another 10-15 watts (blue sky :p ?).
 
Great work groove! I wasn't sure if changing FMD conf would make a difference, because at the time I wasn't using iSCSI and it seemed to work fine without it - but I did the change anyways and later added an iSCSI drive to my system.

So I'm happy you confirmed that it makes a difference! :)
 
Still can't get ESXi to mount a datastore via NFS. I've managed to turn on the console and have got the following out of the vmkernel.log - does it help anyone identify what might be wrong?

Code:
2011-12-07T23:46:49.088Z cpu6:2854)NFS: 157: Command: (mount) Server: (192.168.1.113) IP: (192.168.1.113) Path: (/tank/nfs) Label: (nfs) Options: (None)
2011-12-07T23:47:09.909Z cpu7:2854)WARNING: NFS: 1367: Server 192.168.1.113 supports read transfer of 1048576 truncating to 65536
2011-12-07T23:47:09.909Z cpu3:2854)NFS: 292: Restored connection to the server 192.168.1.113 mount point /tank/nfs, mounted as 16a11c43-d0a1f87b-0000-000000000000 ("nfs")
2011-12-07T23:47:09.909Z cpu3:2854)NFS: 217: NFS mount 192.168.1.113:/tank/nfs status: Success
2011-12-07T23:47:40.183Z cpu0:2854)WARNING: NFS: 4608: Failed to get port, RPC error 13 (RPC was aborted due to timeout) for program (100005) version (3) protocol (tcp) on Server (192.168.1.113)
2011-12-07T23:47:40.183Z cpu0:2854)WARNING: NFS: 4616: Unable to get port for program 100005 version 3 protocol tcp on 192.168.1.113
2011-12-07T23:47:40.183Z cpu0:2854)WARNING: NFS: 1242: RPC unable to create socket to send umount for /tank/nfs on host 192.168.1.113 (192.168.1.113)
2011-12-07T23:47:40.183Z cpu0:2854)VC: 770: Unmount VMFS volume b00f 40 16a11c43 d0a1f87b f5d68c54 c9e2d408 4000a 0 57f0 4000a 57f000000000 2d00000000 100000000 c987d7800000000: Not found
 
No idea why your ESX is not mounting the NFS share. FWIW, if you have that Ubuntu install still available, maybe export a directory on Ubuntu and see if ESXi will mount it. That would probably rule out either ESXi or Solaris as the problem.
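A minimal sketch of that test on the Ubuntu side; the directory and subnet here are placeholders:

```shell
# Export a scratch directory over NFS and check it is visible,
# then try mounting it from ESXi as a datastore.
sudo mkdir -p /srv/nfstest
sudo chmod 777 /srv/nfstest
echo '/srv/nfstest 192.168.1.0/24(rw,no_root_squash,no_subtree_check)' \
  | sudo tee -a /etc/exports
sudo exportfs -ra
showmount -e localhost   # the export should be listed here
```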
 
Hey ... It might be the firewall ... Check out this article: http://kb.vmware.com/selfservice/mi..._1_1&dialogID=257602035&stateId=0 0 257600249
 
Very good suggestion!

I'm no expert (or even a capable noob) with NFS, so I followed the guide on the Ubuntu help pages and, once I had changed the permissions on my export, I could mount it inside ESXi. So this suggests a problem with my OI installation.

Next steps?

[EDIT: actually, I can mount the NFS export on OI from both Linux and Mac so that now points to something within ESXi preventing the connection. I have the NFS checkbox selected in the firewall settings. I can also use vmkping to ping the OI host. Anything else that springs to mind to check?]
 
So the default MTU in Windows 7 x64 for my Myricom 10GbE card was 1500; in OpenIndiana it was set to 9000 by default... I changed it to 9000 in Windows 7 and my iperf test went from 50% utilization to 93%...

[ ID] Interval Transfer Bandwidth
[308] 0.0-120.0 sec 128 GBytes 9.17 Gbits/sec

over 1 gigabyte per second :D

Edit: using a 1 MB TCP window size it's even better:

[288] 0.0-60.0 sec 68.5 GBytes 9.81 Gbits/sec
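For anyone wanting to reproduce this, the commands involved would be roughly the following. The link name myri10ge0 is a guess for a Myricom card, so substitute whatever dladm show-link reports:

```shell
# OpenIndiana side: set jumbo frames on the 10GbE link and confirm.
dladm set-linkprop -p mtu=9000 myri10ge0
dladm show-linkprop -p mtu myri10ge0
# Then rerun the benchmark with a large TCP window:
iperf -c 192.168.1.113 -w 1M -t 60
```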
 
Couple of questions about OpenIndiana -

How can I get OpenIndiana to start the VNC server when it boots up, without having to log in to an account?

When I have SMB enabled, how do I enter the credentials for a shared folder? I was trying server\username and the password that account has on the OpenIndiana box, and it would not work.
 
1.
I have not tried it, but read:
http://broken.net/openindiana/how-enable-or-install-xvnc-on-openindiana-148/

2. By default, only the owner/creator of the files (root) has access.
Log in as root and/or set ACL permissions on the shared folder for
other users, or for everyone@=modify or full access.

Or enable guest access to connect without a login.
 
Continuing my NFS pain I posted on the VMWare forum for some help. The first question that came back was to make sure I'm using NFSv3 and the no_root_squash option.

Having done some research it seems OI defaults to NFSv4 and I can't find the Solaris equivalent of no_root_squash.

Could these be problems?
 
Thanks Dan, I already have the permissions set up. However, I seem to get different error messages when I retry creating the datastore. This time I got:

Code:
Call "HostDatastoreSystem.CreateNasDatastore" for object "ha-datastoresystem" on ESXi "192.168.1.144" failed.
NFS mount 192.168.1.113:/tank/nfs failed: The NFS server does not support MOUNT version 3 over TCP.

which indeed suggests a version clash. But then I don't understand how others have done it if they're using all the defaults.
 
Here are my current share settings:

Code:
sjalloq@openindiana:~$ zfs get sharenfs tank/nfs
NAME      PROPERTY  VALUE                                   SOURCE
tank/nfs  sharenfs  [email protected]/24,[email protected]  local
sjalloq@openindiana:~$ ls -al /tank
total 13
drwxr-xr-x+  4 root root    5 2011-12-06 01:22 .
drwxr-xr-x  25 root staff  45 2011-12-10 15:41 ..
-rw-r--r--+  1 root root  780 2011-12-04 17:30 Bonnie.log
drwxrwxrwx+  3 root root    4 2011-12-08 00:07 datastore
drwxrwxrwx+ 10 root root   13 2011-12-06 00:25 nfs
 
OK, finally solved it. Thanks to the lovely people on the OI IRC channel.

This post details what is needed: http://mordtech.com/2008/12/11/leveraging-zfs-nfs-for-esxi/

I had to set the sharenfs as follows:

Code:
$ sudo zfs set [email protected]

This isn't detailed anywhere in the guides for napp-it, so I'm confused how others have it working.

EDIT: I just noticed I'd already tried this, but a lot of the problems I've had today have been due to the mountd service crashing. I need to file a bug against OI. What I could see was that when I got the error in post 2185 I could see mountd had gone down using 'rpcinfo -p'. I would then have to restart by doing:

Code:
sudo svcadm disable svc:/network/nfs/server:default
sudo svcadm enable -r svc:/network/nfs/server:default
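Those two steps can be wrapped in a small check so the restart only happens when mountd has actually disappeared from the RPC portmap (a sketch, not something from napp-it):

```shell
# If mountd (program 100005) is no longer registered, bounce the
# NFS server service and its dependencies.
if ! rpcinfo -p | grep -q mountd; then
  sudo svcadm disable svc:/network/nfs/server:default
  sudo svcadm enable -r svc:/network/nfs/server:default
fi
```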
 
You have two options to solve the permission problem with NFS3.
You can either use the NFS root option, or you can assign permissions to everyone,
e.g. 777 on the shared folder plus an ACL of everyone@=full or modify with file and folder inheritance=on,
so that these settings are inherited by newly created files.

For already created files you must set these permissions recursively to access them via NFS.
In this case it works with just sharenfs=on.
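On Solaris-family systems the second option can be sketched like this; the dataset path is assumed:

```shell
# Open up classic permissions, then set an inheriting NFSv4 ACL
# (fd = inherit to both files and directories):
chmod -R 777 /tank/nfs
/usr/bin/chmod -R A=everyone@:modify_set:fd:allow /tank/nfs
```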
 
Hmmm, I was about to argue that your steps don't work, but I've just tried again and they do. I created a ZFS folder using napp-it, turned on NFS sharing and left all the defaults. I tried that multiple times over the last week and nothing worked! Perhaps I've now restarted the relevant services enough times to change something. :mad:
 
Yo,

I'm just wondering what the 'enable list buffering' function in the top right corner of napp-it is?

gr33tz
 
When I install that Xvnc and connect, it just shows a black screen.

As for share access, guest always works, but not when guest access is off. I just need to give a user permission, and then it's server\username and password at the Windows login prompt?
 
Now that I can actually mount my NFS share I have another question regarding user names and permissions.

I'm accessing my shares from OS X and Linux hosts via NFS and AFP. I will have further clients, probably Linux, running under ESXi, and all will use the same username of 'sjalloq'. Obviously user 'sjalloq' on each client has a different user id, so when I list folders the owners of files are different. Is there any way to get all userids to display as 'sjalloq', or is it impossible? Does it even make a difference? Or should I be trying to restrict folder access to only one client and one UID?

Thanks.
 
The current napp-it is a pure web CGI application. On each access it must read the complete
disk and ZFS configuration. With a lot of ZFS filesystems and snaps this can slow down the web GUI.

If you enable list buffering, napp-it stores the ZFS state at startup and on actions that
modify something; otherwise it loads the config from a file to improve performance.

Problem: it may happen that the config is not up to date.
 
You cannot give a user a permission directly!
You set ACLs on shares, files and folders to allow someone access (based on local users).
Only these users can connect.

You can set these ACLs from the CLI, from some Windows versions when connected as root,
or via the napp-it ACL extension.

If you are in workgroup mode, do not set any user mappings via idmap.
 
You need a directory service with a centralized user database, like Microsoft Active Directory (or LDAP), to have host-independent users (at least for SMB).
If you simply want to access one share from different hosts via SMB, AFP or NFS4, the user id is irrelevant because you access the share via username/password.
I would use SMB with ACLs for access.

(With NFS3 there is no user management. Access is based on hosts.)
 
Quick question: I can't get SMART values to work through the Disks / smart info page.

So I am going to add a new page to do it - what's the correct code to execute and print a command?

Cheers
Paul
 
Create a new private menu item under 'disks' (for example 'specials');
this private menu item is then independent of napp-it updates.

You only need one action like:
print &exe("smartctl .....");

or, if you want to modify the result or create a nice table:
$r=&exe("..");
&print2table($r);

If you get a 'not found', add the path to your command, like:
print &exe("/path/command ..");

PS:
Do you have the newest napp-it?
More disks now show SMART info:
http://www.napp-it.org/downloads/changelog_en.html
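As a concrete (hypothetical) example of such an action, a command that pulls just the temperature out of smartctl's output could look like this. The device name is assumed, and the awk pattern matches the smartctl output format shown later in this thread:

```shell
# Print only the drive temperature value, e.g. "42 C".
smartctl -a -d scsi /dev/rdsk/c2t0d0s0 \
  | awk -F': *' '/Current Drive Temperature/ {print $2}'
```

In napp-it that pipeline would be wrapped as print &exe("...").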
 
Thanks gea

My issue with SMART is that in the napp-it disks menu they show up as c2t0d0, but in zpool status they show up as c2t0d0s0.

Manually running smartctl against the non-s0 label says the disk is not found, but it works when I include the s0.

My install is standard (whole disk) with a mirrored rpool.

Temperature monitoring is the only real reason I want SMART to work.

Cheers
Paul



 
s0 is only a partition or slice... So c1t1d1s0 is the same as c1t1d1. AFAIK!

Matej
 
I understand what you're saying, but:

paul@Indiana:~$ sudo smartctl -a /dev/rdsk/c2t0d0s0 -d scsi
smartctl 5.40 2010-10-16 r3189 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

Serial number: E3X31X63KEUGXN
Device type: disk
Local Time is: Sun Dec 11 21:25:42 2011 EST
Device supports SMART and is Enabled
Temperature Warning Disabled or Not Supported
SMART Health Status: OK

Current Drive Temperature: 42 C
Manufactured in week 00 of year 0000
Specified cycle count over device lifetime: 100
Accumulated start-stop cycles: 0

Error Counter logging not supported
No self-tests have been logged



paul@Indiana:~$ sudo smartctl -a /dev/rdsk/c2t0d0 -d scsi
smartctl 5.40 2010-10-16 r3189 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

Smartctl open device: /dev/rdsk/c2t0d0 failed: No such device
 
Correct - I also add s0 to the device ids within napp-it to display temperature.
With current napp-it I also check against SATA, ATA and SCSI, to display SMART values for more disks.
 