OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Have you flashed the 9211 IT firmware to your IBM M1015?
If not, you should.

http://www.servethehome.com/ibm-serveraid-m1015-part-4/

I'm going to give that a shot. The boot screen said IT firmware, but it could still be running the old IBM firmware. I'm getting the boot files together to do a mass flash of all 3 cards and I'll see how that works out.

EDIT

Reflashing the cards worked just fine. The IT firmware I was seeing must have been the M1015 firmware; flashing to the 9211 IT firmware got everything running inside the IO VM without any problems. The only issue I did have is that I had to initialize every disk manually because they were not showing in the vdev creation screen. I'm now running some dd benchmarks to see what I can get out of this with an SSD ZIL and L2ARC.
 
Hi all,

I am trying to set up a ZFS folder with full access to everyone, but I am struggling and would appreciate some assistance if anyone can help. I've read some posts that say that when guest=on it shouldn't ask for any username or password, but this isn't the case for me (I have other shares, but only one is SMB; the rest are NFS or AFP).

The ZFS folder is set up as "o=full, g=full, e=full full_set 777+", but when I try to delete something that was already on the share I get the message "You require permissions from server/root to make changes to this folder".

With this in mind I tried the ACL folder extension, gave @everyone full access and enabled inheritance for files and folders. When running the reset ACLs job it dumps errors that it can't find certain files - those with spaces in the name, such as ebooks and mp3s (I've chmodded -R 777 to check) - and the output of that reset job is lots of "chmod: WARNING: can't access /dpool/datatank/Apps/iBooks/A Peoples" before it just stops.

I have also tried the other option of logging into the SMB share as root and setting the permissions there, but on some files (maybe written by a different user) I see the error "No mapping between account names and security IDs was done", though only for some files. Edit: I have found this error occurs for all files created by user 501, and I'm certain it's the user used for NFS (it doesn't matter which client; all files written are owned by 501).

Would you have any ideas where I should go from here?

At the end of the day I need an SMB & NFS share which is open to everyone on my network (full access to old and new content) and multiple other AFP shares, just for Time Machine, which are secured with local user accounts.

Thank you
Paul
 
I'm going to give that a shot. The boot screen said IT firmware, but it could still be running the old IBM firmware. I'm getting the boot files together to do a mass flash of all 3 cards and I'll see how that works out.

EDIT

Reflashing the cards worked just fine. The IT firmware I was seeing must have been the M1015 firmware; flashing to the 9211 IT firmware got everything running inside the IO VM without any problems. The only issue I did have is that I had to initialize every disk manually because they were not showing in the vdev creation screen. I'm now running some dd benchmarks to see what I can get out of this with an SSD ZIL and L2ARC.

Good stuff..!

Very interested in your findings with the ZIL + L2ARC in use versus benchmarks with sync disabled (sync=disabled).
 
Good stuff..!

Very interested in your findings with the ZIL + L2ARC in use versus benchmarks with sync disabled (sync=disabled).

I'm running the zfsbuild.com IOMeter tests today and over the weekend. I forgot to disable sync on my first test, and my 4-drive 7200 RPM mirror was about half the performance of the 8-drive 15k RPM RAID 10 array on our MD3000i. Considering the IOPS difference between a 7200 RPM SATA drive and a 15k SAS drive, that seems to be expected. I haven't tweaked any settings at all at this point and am just going through the UI to create my pools. I will re-run my tests with sync off and see how that changes the performance from the first round.
 
Hi all,

I am trying to set up a ZFS folder with full access to everyone, but I am struggling and would appreciate some assistance if anyone can help. I've read some posts that say that when guest=on it shouldn't ask for any username or password, but this isn't the case for me (I have other shares, but only one is SMB; the rest are NFS or AFP).

The ZFS folder is set up as "o=full, g=full, e=full full_set 777+", but when I try to delete something that was already on the share I get the message "You require permissions from server/root to make changes to this folder".

With this in mind I tried the ACL folder extension, gave @everyone full access and enabled inheritance for files and folders. When running the reset ACLs job it dumps errors that it can't find certain files - those with spaces in the name, such as ebooks and mp3s (I've chmodded -R 777 to check) - and the output of that reset job is lots of "chmod: WARNING: can't access /dpool/datatank/Apps/iBooks/A Peoples" before it just stops.

I have also tried the other option of logging into the SMB share as root and setting the permissions there, but on some files (maybe written by a different user) I see the error "No mapping between account names and security IDs was done", though only for some files. Edit: I have found this error occurs for all files created by user 501, and I'm certain it's the user used for NFS (it doesn't matter which client; all files written are owned by 501).

Would you have any ideas where I should go from here?

At the end of the day I need an SMB & NFS share which is open to everyone on my network (full access to old and new content) and multiple other AFP shares, just for Time Machine, which are secured with local user accounts.

Thank you
Paul

Hi Paul-
I had a ton of difficulty getting NFS from Ubuntu and SMB from Win7 to work correctly. The keys that were useful to me:

Ubuntu-
  • Installed nfs-common
  • Hard mounted in /etc/fstab as nfs4 (see the example below)
  • Turned on NEED_IDMAPD=yes in /etc/default/nfs-common
  • Set Domain=<your domain here> in /etc/idmapd.conf, make sure this matches your Solaris box's configured domain
  • /etc/init.d/idmapd start
  • Made sure that users in Solaris/Ubuntu had the same username and uid/gids
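
For those fstab/idmapd.conf bullets, here is roughly what the entries can look like (server name, paths and domain are placeholders - adjust to your setup):

Code:
# /etc/fstab - hard NFSv4 mount (example values)
solaris:/tank/home   /mnt/home   nfs4   rw,hard,intr   0   0

# /etc/idmapd.conf - the Domain must match what the Solaris box uses
[General]
Domain = example.lan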

Here is an example ls -V of the parent folder that is shared to everyone:
drwxrwxrwx+ 6 root root 6 Dec 26 22:26 home
everyone@:rwxpdDaARWcCos:fd-----:allow
user:root:rwxpdDaARWcCos:fd-----:allow

Here is an example of restricting a subfolder of the home folder listed above, this shared folder I have is writable by writeuser or anyone in the group writeuser, and readable by everyone, including readuser or people in the group readuser:

drwxrwxr-x+ 4 writeuser writeuser 4 Nov 28 21:11 shared
user:writeuser:rw-pdDaARWcCos:f-i---I:allow
user:writeuser:rwxpdDaARWcCos:-d----I:allow
group:writeuser:rw-pdDaARWcCos:f-i---I:allow
group:writeuser:rwxpdDaARWcCos:-d----I:allow
user:readuser:r-----a-R-c--s:f-i---I:allow
user:readuser:r-x---a-R-c--s:-d----I:allow
group:readuser:r-----a-R-c--s:f-i---I:allow
group:readuser:r-x---a-R-c--s:-d----I:allow
owner@:rw-pdDaARWcCos:f-i---I:allow
owner@:rwxpdDaARWcCos:-d----I:allow
group@:rw-pdDaARWcCos:f-i---I:allow
group@:rwxpdDaARWcCos:-d----I:allow
everyone@:r-----a-R-c--s:f-i---I:allow
everyone@:r-x---a-R-c--s:-d----I:allow

And then from Windows you login using the credentials of one of the Solaris accounts for permissions.
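
If it helps, ACL entries like the ones above can also be set from the Solaris shell; a minimal sketch (the paths are just examples):

Code:
# prepend an inherited full-control entry for everyone@ (f = files, d = directories)
/usr/bin/chmod A+everyone@:full_set:fd:allow /tank/home
# prepend an inherited read-only entry for a specific user on a subfolder
/usr/bin/chmod A+user:readuser:read_set:fd:allow /tank/home/shared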

Hope that helps!
 
The zfsbuild IOMeter tests take a LOOONG time to run doing the full suite, so I will probably trim things down and try to run more tests over the weekend with my current build. One thing that strikes me as interesting is that the SSD pool I built pretty much sucks in the performance arena. I wonder if I am doing something wrong. Now, this is not a great test since I don't have many SSD drives, but I wanted to benchmark the worst-case performance to see how it stacked up.

Looking at the 4k random 67% read 33% write tests I am getting what I consider to be horrible performance.

My test pool is a set of mirrored Intel 330 120GB drives with an Intel 520 120GB L2ARC and mirrored 30GB slices for ZIL from 2 Intel 320 SSD drives.

Code:
pool: tank1
 state: ONLINE
  scan: none requested
config:

	NAME                         STATE     READ WRITE CKSUM     CAP            Product
	tank1                        ONLINE       0     0     0
	  mirror-0                   ONLINE       0     0     0
	    c5t5001517BB2A91588d0    ONLINE       0     0     0     120 GB         INTEL SSDSC2CT12
	    c5t5001517BB2ABE8D9d0    ONLINE       0     0     0     120 GB         INTEL SSDSC2CT12
	logs
	  mirror-1                   ONLINE       0     0     0
	    c5t5001517972E2F74Ed0p2  ONLINE       0     0     0     23.2 GB        INTEL SSDSA2CW08
	    c5t5001517972E30212d0p2  ONLINE       0     0     0     23.2 GB        INTEL SSDSA2CW08
	cache
	  c5t5001517BB29CD4F6d0      ONLINE       0     0     0     120 GB         INTEL SSDSC2CW12

errors: No known data errors
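
For reference, a pool with this layout is created roughly like this (device names taken from the status output above), with sync then disabled on it:

Code:
zpool create tank1 \
  mirror c5t5001517BB2A91588d0 c5t5001517BB2ABE8D9d0 \
  log mirror c5t5001517972E2F74Ed0p2 c5t5001517972E30212d0p2 \
  cache c5t5001517BB29CD4F6d0
zfs set sync=disabled tank1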

Sync is disabled on this pool and testing shows

IOPS: 1047
Read IOPS: 344
Write IOPS: 703

Whereas my 2-mirror SATA pool (4 Toshiba 2TB drives) with the same ZIL and L2ARC config and sync left at standard gave me

IOPS: 932
Read IOPS: 308
Write IOPS: 623

I would have expected the numbers to be a lot better for the SSD drives even with just 2 in a mirror.

Ultimately my config will be 2 pools: the first using 14 mirrored 450GB 15k SAS drives (same ZIL and L2ARC) and the second using 8 mirrored 2TB 7200 RPM SATA drives (same ZIL and an Intel 330 L2ARC), so this is not my final configuration. I need to do a migration from our current store before I can utilize all of the disks.

So, I know spindles matter, but this leads me to believe that I have something out of whack. I'm running this with 16GB on the AIO server and testing on a 4GB Win2k8 server.

I'm tempted to pick up another pair of SSD drives for additional testing, but I still thought I would see better performance than this starting out.

Hopefully I am just missing something obvious.
 
I upgraded napp-it from 0.8l to 0.9pre1. After the reboot, OI would not start and just hangs (Fig1). In the boot menu, the last bootable entry is Netatalk-3.0.1 (Fig2). When I boot into that entry and log into napp-it, v0.9pre1 is running.

How do I fix the OI boot menu?

Fig1: (screenshot of the boot hang)

Fig2: (screenshot of the boot menu)
 
Thanks for that; the NFS clients are running OS X, but I will see what I can do.

I've tried adding anonuid and anongid values to match my preferred user IDs and groups, but it doesn't seem to work (if I can make this work then that's the easiest).

Otherwise, would you know if it's possible to map the IDs on the server side to my user? (I've tried adding a new one in napp-it but it doesn't let you choose the user ID, not that I could see.)

At least I've found out that user 501 is the default first user on OS X and group 20 (games on OpenIndiana) is the staff group on OS X - I just need to align them somehow... or ditch NFS.

Paul

 
Otherwise, would you know if it's possible to map the IDs on the server side to my user? (I've tried adding a new one in napp-it but it doesn't let you choose the user ID, not that I could see.)

You can definitely specify both the uid and gid of users in Solaris, and edit them as needed - just look at the /etc/passwd and /etc/group files. Or delete the existing users in Solaris and recreate them, specifying the numbers you want.
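
For example, from the Solaris shell (a sketch; the user names are hypothetical, the 501 uid is taken from your post):

Code:
# see the current mappings
grep 501 /etc/passwd /etc/group
# create a user with an explicit uid, or change an existing user's uid
useradd -u 501 osxuser
usermod -u 501 paul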
 
Hi everyone,

I'm not sure if this is the right place to post this, but I'm hoping someone else here has experienced this.

I'm running an M1015 in IT mode in passthrough. I just upgraded from ESXi 5.0 to 5.1 7xxxxx, then updated to 914609. I have reinstalled VMware Tools on my OI VM. After installing 914609 and the new VMware Tools, I am seeing greatly reduced SMB speeds. I used to always get around 100 MB/s, but now it averages around 30.

Let me know if I should provide any other info.

Thanks!

EDIT: Well, it's not quite as fast, but I switched out the vmxnet3 adapter for an e1000e. I'm getting about 90MB/s now.
 
Does anyone know of a solution that would give me a web interface for my SMB share?

As in, being able to manage the files over HTTP/HTTPS using just a browser - clientless.
 
Does anyone know of a solution that would give me a web interface for my SMB share?

As in, being able to manage the files over HTTP/HTTPS using just a browser - clientless.

I would say this should be possible with any web or FTP server software. You can install Apache, for example, and set it up to share your folder. It won't be an SMB share, though; it will be an HTTP web site serving just the files from your shared folder. By default most web servers do not allow full folder browsing/listing, but this is just an option you can turn on. The end user will see a plain white web page with blue links to all the folders and files, plus a parent-folder option.
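
As a sketch of what that looks like with Apache (the alias and path are just examples):

Code:
# httpd.conf / conf.d snippet - expose a folder over HTTP with directory listing turned on
Alias /files "/tank/share"
<Directory "/tank/share">
    Options +Indexes
    AllowOverride None
    Require all granted      # Apache 2.4 syntax; on 2.2 use "Order allow,deny" + "Allow from all"
</Directory>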

Michael
 
I set up iSCSI on my OI and created a vSwitch, using a second card I have with dual NICs, for the iSCSI VMkernel. I noticed, however, that my OI VM shows 0 usage on the iSCSI vSwitch NICs.

It doesn't seem normal to me that I would create a specific switch for iSCSI traffic but the OI VM doesn't use it for traffic. Is that switch merely for ESXi iSCSI traffic to the non-OI VMs?
 
I set up iSCSI on my OI and created a vSwitch, using a second card I have with dual NICs, for the iSCSI VMkernel. I noticed, however, that my OI VM shows 0 usage on the iSCSI vSwitch NICs.

It doesn't seem normal to me that I would create a specific switch for iSCSI traffic but the OI VM doesn't use it for traffic. Is that switch merely for ESXi iSCSI traffic to the non-OI VMs?

Is this an all-in-one where OI is running as a VM and ESXi is connected via iSCSI to this local VM? If so, all iSCSI traffic will be internal virtual networking and will never go through a real NIC. In theory there are ways to configure it to go out over physical network cards if you wanted to, but if the OI VM and the VMkernel port are on the same vSwitch then it will all be ultra-high-speed virtual networking. This is the way it should be on an all-in-one setup; if your OI napp-it setup is on a different machine than your ESXi machine, then the traffic has to go through a physical network card of some kind.

Also, with basic setups there is really only a need for one vSwitch with the VMkernel port on the same switch. But there are many ways to do it.

On the current machine I'm working on, I created a second vSwitch and put the VMkernel port on that switch instead, with no physical NICs connected to it. Then I created a virtual machine port group for the SAN VM and connected it there.

My SAN VM has two network adapters with two different address ranges. The first one connects to the main vSwitch with physical NICs, for another external ESXi machine to use, but it does not have jumbo frames turned on. The second one connects to this second, all-virtual vSwitch, which has jumbo frames turned on and carries only internal NFS/iSCSI.
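
If you prefer the command line, the ESXi 5.x esxcli equivalent of that second, all-virtual switch is roughly this (names, addresses and the MTU are examples):

Code:
# second vSwitch with jumbo frames and no physical uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# port group for the SAN VM plus a VMkernel port for NFS/iSCSI on the same switch
esxcli network vswitch standard portgroup add --portgroup-name=SAN --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=SAN
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.1 --netmask=255.255.255.0 --type=static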

When thinking about vSwitches, think of them as smart, configurable, expandable layer 2 switches. Picturing them as real physical switches may help you understand how they work and how best to set them up for your use.

Michael
 
OK - So I just exported/imported my pools moving from Solaris 11 to Solaris 11.1 and I'm having some problems...

sharemgr doesn't seem to exist anymore. In napp-it 0.9, it says:
shared folders: sharemgr show -vp
sudo: sharemgr: command not found

...and I also can't change the CIFS workgroup.

If I try to change it manually on the command-line using "smbadm join -w newworkgroup" it says:
After joining newworkgroup the smb service will be restarted automatically.
Would you like to continue? [no]: yes
failed to join newworkgroup
failed to contact smbd - No such file or directory

Can somebody help me out?
 
Has anyone successfully used a HighPoint 2340 or other 23xx controller in OpenIndiana/OmniOS?

The controllers are on the wiki list of supported hardware using the marvell88sx driver, but I can't get them recognized. Trying add_drv marvell88sx returns "unrecognized device model".

I guess the marvell88sx driver is closed source, but is there anything else I can try to get this working?
 
Has anyone successfully used a HighPoint 2340 or other 23xx controller in OpenIndiana/OmniOS?

The controllers are on the wiki list of supported hardware using the marvell88sx driver, but I can't get them recognized. Trying add_drv marvell88sx returns "unrecognized device model".

I guess the marvell88sx driver is closed source, but is there anything else I can try to get this working?

You will not be happy with it at all.
Sell it or use it with Windows, and buy an IBM M1015.
 
Having a small issue...

I have a media player that I want to give read-only permissions, so I created a user/password and added that under the folder/SMB ACL with read_set, but when I test, my media player is still able to write data to my ZFS folder (it creates an nmj folder for its jukebox).

Thanks
 
Having a small issue...

I have a media player that I want to give read-only permissions, so I created a user/password and added that under the folder/SMB ACL with read_set, but when I test, my media player is still able to write data to my ZFS folder (it creates an nmj folder for its jukebox).

Thanks

Have you deleted the everyone@=modify default?
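
(From the shell, the entry can be listed and removed by its index; the path is a placeholder.)

Code:
# list the ACL with index numbers, then remove the everyone@ entry by its index
/usr/bin/ls -v /tank/media
/usr/bin/chmod A2- /tank/media     # "2" is whatever index the everyone@ entry shows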
 
Have you deleted the everyone@=modify default?

Yep, I did, but now I can't log in to my ZFS folder with my media player.

I've added the user/password both in the ACL on the folder and the ACL on the SMB share, but am unable to log in now!
Dunno what's wrong!
 
Dear Gea,

I use ESXi 5.1 with the latest patch. In it I run an OpenIndiana VM with your latest napp-it.

Unfortunately, I have no more PCIe slots available, so I tried to use RDM for 3 disks as described by http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/

The RDM mapping seems to be working; I use SCSI (1:x) as the virtual device node.
Napp-it does list the disks when clicking on Disks (see below).
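
(For anyone following the same guide: the local-disk RDM mapping it describes boils down to a vmkfstools call along these lines from the ESXi shell; the device identifier and datastore path here are placeholders, and -r creates a virtual-compatibility mapping while -z creates a physical one.)

Code:
vmkfstools -r /vmfs/devices/disks/t10.ATA_____SAMSUNG_HD204UI_____S2H7J1CB707464 \
  /vmfs/volumes/datastore1/rdm/HD204UI-rdm.vmdk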

Unfortunately, your SMART info tab does not list them. (I can do a smartctl on the console and that seems to be OK.) (Side question: how do I use the command line to find out which disks are attached?)
Also, I cannot create a pool of these disks or make a ZFS folder with the napp-it software. (Side remark: ramdisks are also not seen.)

Any idea on how to fix this?

Happy new year and thanks for your great software,

Nick


All known disks and partitions from iostat (includes removed):
Code:
id part identify stat diskcap partcap partcap2 error vendor product sn
c15t0d0 0 via dd ok 2000 GB 2 TB 1.8 TiB S:6 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1CZ403484
c15t3d0 0 via dd ok 2000 GB 2 TB 1.8 TiB S:6 H:0 T:0 ATA SAMSUNG HD203WI S1UYJ1CZ412199
c15t4d0 0 via dd ok 2000 GB 2 TB 1.8 TiB S:6 H:0 T:0 ATA SAMSUNG HD204UI S2H7J1CB707464
c3t50014EE204251903d0 1 via dd ok 2000 GB 2 TB 1.8 TiB S:1 H:0 T:0 ATA WDC WD20EARS-00S WDWCAVY2736347
c3t50014EE25C418DE1d0 1 via dd ok 2000 GB 2 TB 1.8 TiB S:1 H:0 T:0 ATA WDC WD20EURS-63S WDWMAZA8893228
c3t50014EE25C418F1Ad0 1 via dd ok 2000 GB 2 TB 1.8 TiB S:1 H:0 T:0 ATA WDC WD20EURS-63S WDWMAZA8881871
c3t50014EE25C47C37Ad0 1 via dd ok 2000 GB 2 TB 1.8 TiB S:1 H:0 T:0 ATA WDC WD20EURS-63S WDWMAZA9178019
c3t50014EE25C480C88d0 1 via dd ok 2000 GB 2 TB 1.8 TiB S:1 H:0 T:0 ATA WDC WD20EURS-63S WDWMAZA9057120
c3t50014EE2B0618343d0 1 via dd ok 2000 GB 2 TB 1.8 TiB S:1 H:0 T:0 ATA WDC WD20EARS-22M WDWCAZA6161380
c3t50014EE2B19D9A4Ed0 1 via dd ok 2000 GB 2 TB 1.8 TiB S:1 H:0 T:0 ATA WDC WD20EURS-63S WDWMAZA8931150
c3t50014EE2B19DB827d0 1 via dd ok 2000 GB 2 TB 1.8 TiB S:1 H:0 T:0 ATA WDC WD20EURS-63S WDWMAZA9035627
c5t0d0 1 via dd ok 34.4 GB 34.4 GB 32 GiB S:0 H:0 T:0 VMware Virtual disk 6000C294E5A7062
 
I've got a problem with my 15" retina MacBook Pro when connecting to NFS shares exported from a Solaris 11.1 server virtualised on an ESXi host, on which I've used napp-it to set up all the shares etc. The problem I'm seeing is that connecting to the shares takes a long time, browsing the directory structure takes forever and the network shares disconnect intermittently. File transfers never complete and the Finder frequently hangs with the annoying beach ball that I haven't been plagued with since the time pre-SSD!

I'm running OS X 10.8 with the latest updates from Apple and I've tried re-installing OS X 10.8 from scratch twice, erasing the disk beforehand and the problem still persists. I haven't installed any extra software apart from the base OS. I've also tried disabling the wireless card and using my Thunderbolt Display ethernet connection and the same problem still occurs.

My wife has a 13" retina MacBook Pro, and I also have a Mac Mini, neither of which suffer the same problem. Connecting to NFS shares on either is lightning quick, and I'm able to browse the directory structures quickly and transfer files without any problems.

If there was a problem with my Solaris server then I'd expect the same problem to happen on other clients, which is not happening. All the machines are set to acquire IP addresses via DHCP, and I have a local DNS server running. As far as I can tell, there are no differences in the network configurations on any of the machines.

Does anyone have any ideas what might be causing this problem? I'm starting to pull my hair out on this one!
 
I've got a problem with my 15" retina MacBook Pro when connecting to NFS shares exported from a Solaris 11.1 server virtualised on an ESXi host, on which I've used napp-it to set up all the shares etc. The problem I'm seeing is that connecting to the shares takes a long time, browsing the directory structure takes forever and the network shares disconnect intermittently. File transfers never complete and the Finder frequently hangs with the annoying beach ball that I haven't been plagued with since the time pre-SSD!

I'm running OS X 10.8 with the latest updates from Apple and I've tried re-installing OS X 10.8 from scratch twice, erasing the disk beforehand and the problem still persists. I haven't installed any extra software apart from the base OS. I've also tried disabling the wireless card and using my Thunderbolt Display ethernet connection and the same problem still occurs.

My wife has a 13" retina MacBook Pro, and I also have a Mac Mini, neither of which suffer the same problem. Connecting to NFS shares on either is lightning quick, and I'm able to browse the directory structures quickly and transfer files without any problems.

If there was a problem with my Solaris server then I'd expect the same problem to happen on other clients, which is not happening. All the machines are set to acquire IP addresses via DHCP, and I have a local DNS server running. As far as I can tell, there are no differences in the network configurations on any of the machines.

Does anyone have any ideas what might be causing this problem? I'm starting to pull my hair out on this one!

I would start by installing Wireshark on both the Solaris VM and your Mac and then examining packet captures between the two, looking for obvious errors. I have solved some pretty daunting network problems this way. You might also compare the packets from one of the working Macs to the ones on your failing Mac.
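
On the Solaris side you don't even need to install anything extra; snoop can write a capture that Wireshark will open (the interface name and addresses below are placeholders):

Code:
# on the Solaris/OI VM
snoop -d vmxnet3s0 -o /tmp/nfs.snoop host 192.168.1.50
# on the Mac
sudo tcpdump -i en0 -w ~/Desktop/nfs.pcap host 192.168.1.10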

Some things that come to mind are routing issues (is your Mac on a different subnet than the Solaris server?), firewall issues on the Mac, etc.

Good luck!
 
Dear Gea,

I use ESXi 5.1 with the latest patch. In it I run an OpenIndiana VM with your latest napp-it.

Unfortunately, I have no more PCIe slots available, so I tried to use RDM for 3 disks as described by http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/

The RDM mapping seems to be working; I use SCSI (1:x) as the virtual device node.
Napp-it does list the disks when clicking on Disks (see below).

about Smart
napp-it supports (S)ATA and SCSI disks
- check /var/web-gui/data/napp-it/zfsos/_lib/get-disk-smart.pl for details

napp-it 0.9 does not check for disks but for partitions, due to
- support of partitions
- faster detection of unplugged disks

If you insert a disk without a valid partition table you now need to call
-menu disk initialize
 
Thanks for the tips.

I've just used NFS Manager on OS X to manually mount one of the NFS shares on my MacBook Pro, specifying the nolocks & rdirplus options, and now I'm able to connect and browse the shares without any problems, just as I can on my wife's MacBook Pro and Mac Mini.
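
(The equivalent manual mount from Terminal would be roughly the following; the server name and paths are placeholders.)

Code:
sudo mkdir -p /Volumes/tank
sudo mount -t nfs -o nolocks,rdirplus server:/tank/data /Volumes/tank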

This raises the question: why do the other machines work fine with the default settings, without me having to specify the options above?
 
Looks like James Gosling (of Java fame) had the same issue that I was having: NFS on Snow Leopard

However, there is a huge problem with this: OS X does a phenomenal amount of file locking (some would say, needlessly so) and has always been really sensitive to the configuration of locking on the NFS servers. So much so that if you randomly pick an NFS server in a large enterprise, true success is pretty unlikely. It'll succeed, but you'll keep getting messages indicating that the lock server is down, followed quickly by another message that the lock server is back up again. Even if you do get the NFS server tuned precisely the way that OS X wants it, performance sucks because of all the lock/unlock protocol requests that fly across the network. They clearly did something in Snow Leopard to aggravate this problem: it's now nasty enough to make NFS almost useless for me.

Fortunately, there is a fix: just turn off network locking. You can do it by adding the "nolocks,locallocks" options in the advanced options field of the Disk Utility NFS mounting UI, but this is painful if you do a lot of them, and doesn't help at all with /net. You can edit /etc/auto_master to add these options to the /net entry, but it doesn't affect other mounts - however I do recommend deleting the hidefromfinder option in auto_master. If you want to fix every automount, edit /etc/autofs.conf and search for the line that starts with AUTOMOUNTD_MNTOPTS=. These options get applied on every mount. Add nolocks,locallocks and your world will be faster and happier after you reboot.


I added nolocks,locallocks to my NFS mount options and now the NFS shares work perfectly. I'm still not sure why there is such a difference between the different machines though.

Here are the steps I followed to resolve the issue (the resulting lines are shown after the list):

1. sudo nano /etc/auto_master
2. Add locallocks,nolocks to /net
3. sudo nano /etc/autofs.conf
4. Set AUTOMOUNTD_MNTOPTS=nosuid,nodev,locallocks,nolocks
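
The resulting lines look roughly like this (the default /net options can differ slightly between OS X versions, so treat this as a sketch):

Code:
# /etc/auto_master - lock options added to the /net entry, hidefromfinder removed
/net    -hosts    -nobrowse,nosuid,locallocks,nolocks

# /etc/autofs.conf - applied to every automount
AUTOMOUNTD_MNTOPTS=nosuid,nodev,locallocks,nolocks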
 
Thanks Gea,

The Initialize disks worked.

Another small question regarding OpenIndiana. I have my datastore on an SSD.
I would like to use the noatime option, just like in Linux, to save my SSD from premature death.
I cannot locate the / root mount point in /etc/vfstab. Any idea where I have to put that noatime option?

Is noatime possible for the ZFS folders or pools?

In Linux I also use a swappiness=10 option somewhere. Is such a thing also needed in OpenIndiana?

Thanks,

Nick
 
You don't specify mount options, because the options are part of the filesystem.

Use zfs get all to see your current settings.

zfs set atime=off pool/dataset

I haven't noticed it to be swap-happy; it seems to prefer not to use swap, much unlike Linux, which prefers to dump everything out of RAM to make more room for stuff.
 
Oh, I should have been more precise: I meant the OS filesystem. Every time something happens in the OS, it writes the access date/time of every file it touches; that will wear out the SSD quickly.

I also get constant log messages in the /var/adm/messages file: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no. It fills up the log; I googled this and someone else has reported it too, but no solution.
 
Oh, I should have been more precise: I meant the OS filesystem. Every time something happens in the OS, it writes the access date/time of every file it touches; that will wear out the SSD quickly.

I also get constant log messages in the /var/adm/messages file: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no. It fills up the log; I googled this and someone else has reported it too, but no solution.

The atime setting is the same for rpool (the boot disk on Solaris is also ZFS):
zfs set atime=off rpool

About the logs:
I have no solution, but a modern MLC SSD is better than you may think.
 
I'm having a problem with Solaris 11.1 and napp-it 0.9.

For some reason, every time I reboot the server, the SMB server service stays offline and I need to manually disable/enable it to restart it.

Here is what napp-it shows upon boot:
offline svc:/network/smb/server:default

...and
Current state of SMB/CIFS Server: offline
Current membership: workgroup Workgroup : devsrc

SMB/CIFS server service: [online]

As you can see, it contradicts itself above, saying it's both offline and online.

Can anybody help me out?
 
Maybe another Solaris 11.1 surprise, but my first thought is that you have forgotten to reboot after installation of napp-it (napp-it creates and activates a BE, so you must reboot).

Although if you use the default BE it should be OK on the second boot.
 
I have an M1015 in passthrough (ESXi 5.1). Now I want to create a raidz2 pool with 10 disks (8 data + 2 parity).
Can I use RDM ( http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/ ) for the remaining two drives? (These two drives are attached to the standard AHCI ports of the Intel Z77 / i7-3770 motherboard. Unfortunately, passing through this Panther Point AHCI controller breaks the ESXi system, even with the datastore (which contains OpenIndiana) on a simple, separate PCIe x1 controller - bummer.)

Will RDM disks combined with disks passed through on the M1015 impair the safety of the raidz2 data in any way?
 
Hi,

I just wanted to mass-delete some snapshots I was creating with time-sliderd on several rpool datasets, but it turns out napp-it only shows snapshots on non-rpool pools. Is this correct? Is there a way around it?


Kind regards,
JP
 
Maybe another Solaris 11.1 surprise, but my first thought is that you have forgotten to reboot after installation of napp-it (napp-it creates and activates a BE, so you must reboot).

Although if you use the default BE it should be OK on the second boot.
Yeah, it's booting into the napp-it 0.9 BE...

I really, really wish I could switch to OpenIndiana, but I simply can't re-create my pools by copying the data to offsite storage because it's 20 TB... :( (Can anybody think of any "reasonable" way I can do this? My ZFS pools are v31.)
 
Yeah, it's booting into the napp-it 0.9 BE...

I really, really wish I could switch to OpenIndiana, but I simply can't re-create my pools by copying the data to offsite storage because it's 20 TB... :( (Can anybody think of any "reasonable" way I can do this? My ZFS pools are v31.)

I don't have this problem but I never use napp-it to configure anything. Everything is done through Solaris native commands.

This may not be a Solaris 11.1 bug but a napp-it issue.
 