OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

WRT the C202/C204/C206 chipsets: has anyone had success running OpenIndiana (b148b or above) or Solaris (SE11) on any such motherboard? (I am not interested in running on top of ESXi or vSphere.) ref: http://hardforum.com/showpost.php?p=1037186247&postcount=46

Sorry if this is kind of a duplicate post; however, "unclerunkle" (http://hardforum.com/member.php?u=222332) did seem to indicate that he would do some testing to this effect. ref: http://hardforum.com/showpost.php?p=1037115252&postcount=34 - yet I was never quite able to see a definitive answer in this forum.

Jon, please see http://hardforum.com/showpost.php?p=1037190898&postcount=47

No need to fill up this thread with motherboard compatibility questions. Do be aware, however, that 8GB ECC UDIMMs are not available at this time. The largest currently available are 4GB sticks (4 slots x 4GB each = 16GB max RAM).
 
AFAIK, raidz2 vs. raidz3 should not affect CPU usage much - raidz3 mainly costs you an extra disk of capacity for the third parity. Unless you have a hugely compelling reason, raidz3 does not make a lot of sense here. Maybe consider two 5-disk raidz2 vdevs in one pool, giving you 6 disks of usable storage?
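For illustration, a minimal sketch of that two-vdev layout (the pool name and c#t#d# device IDs are placeholders; substitute your own):

# create a pool from two 5-disk raidz2 vdevs (2 parity disks per vdev,
# so 10 disks yield 6 disks of usable capacity)
zpool create tank \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    raidz2 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0

# verify the layout
zpool status tank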
 
edited parameter:

you may try the following:
edit the smtp mailtest menu script

look for the line:
$smtp = Net::SMTP->new($server) || &mess("could not connect smtp-server $server");

and change it to:

$smtp = Net::SMTP->new($server,Port => 465) || &mess("could not connect smtp-server $server");

see info about Perl Net::SMTP module
http://linoleum.leapster.org/archives/48-Using-Perls-NetSMTP-module.html

Gea


Gea - I'm having the same problem with Gmail. Where is the script located that should be changed?
 
Not quite the version of thin provisioning I was referring to...

You are describing starting with an empty disk, thin provisioned.

I am referring to starting with a completely configured "baseline" VM and then using thin provisioning such that only the changes to that model VM are actually stored on your virtual hard drive. This is supported in both ESXi 4.1 and Hyper-V R2 SP1. In this model, only the deltas against the fully provisioned "baseline" (OS+apps, in your example above) take up space. For your "50 VM" example, the total disk space used is limited to blocks that contain local changes and is just marginally more than using dedupe, but the processing/memory overhead to achieve it is almost zero (compared to the significant overhead of dedupe).

This is a very interesting idea I wasn't aware was available in ESXi 4.1. Can you tell me what terms to search on to get more information? Searching for thin provisioning in 4.1 only gets me the information I already know.

Thanks in advance for any pointers.
 
2.) ZFS_Best_Practices_Guide#RAIDZ_Configuration_Requirements_and_Recommendations
ZFS_Best_Practices_Guide#Should_I_Configure_a_RAIDZ.2C_RAIDZ-2.2C_RAIDZ-3.2C_or_a_Mirrored_Storage_Pool.3F
These links might give you some idea about the number of hard drives and performance...

1.) I see no reason why not...
3.) You can mix LSI and onboard SATA... I run my configuration like that: 8x from the LSI and 4x from the board, plus 2x on the board for the system.

Oh, and by the way, why the F2 1.5TB? If you are planning on buying the hard drives, go with the F4 2TB. Fewer drives and more space :)

Matej
 
Gea - I'm having the same problem with Gmail. Where is the script located that should be changed?

By default (see menu napp-it setup), the admin can edit/add/delete/rename napp-it menus.

Howto:

- Go to menu jobs, email, smtp-test
- Look at the upper right corner. There is a red link 'edit'.
If you click on it (or set editing in napp-it setup to 'always'), you can edit menu items.
- Click on the 'e' to the right of menu smtp-test to edit the menu script.

Edit the script, save it and test it.
You can go back to the last three versions.


Gea
 
Not quite the version of thin provisioning I was referring to...

You are describing starting with an empty disk, thin provisioned.

I am referring to starting with a completely configured "baseline" VM and then using thin provisioning such that only the changes to that model VM are actually stored on your virtual hard drive. This is supported in both ESXi 4.1 and Hyper-V R2 SP1. In this model, only the deltas against the fully provisioned "baseline" (OS+apps, in your example above) take up space. For your "50 VM" example, the total disk space used is limited to blocks that contain local changes and is just marginally more than using dedupe, but the processing/memory overhead to achieve it is almost zero (compared to the significant overhead of dedupe).

Even better, in Hyper-V and combined with dynamic memory, the actual in-memory footprint of these 50VMs is also limited to just the dirty pages (the locally unique data for each VM). Dedupe can't get you that...

Wow, this is news to me. I am currently running 4.0, and was going to move to 4.1 with the new build. I will read up on this feature. I have 10-15 XP VMs, that were at one time cloned from each other. I imagine I will need to re-build a baseline and re-build each XP box, but they don't have anything special on them anyway.

Does this method support doing shared updates too? If I want to apply an SP, is it possible to apply it to the baseline? I assume not, but it would be a nice feature.
 
This is a very interesting idea I wasn't aware was available in ESXi 4.1. Can you tell me what terms to search on to get more information? Searching for thin provisioning in 4.1 only gets me the information I already know.

Thanks in advance for any pointers.

Nevermind...that was for workstation.
 
Guys, can you take a look at the screenshot below? I am not sure why I have 167GB being used in TANK when I have no clue what's being stored there. My NFS store and CIFS share are the other two directories taking up space.

[screenshot: nexenta.png]
 
By default (see menu napp-it setup), the admin can edit/add/delete/rename napp-it menus.

Howto:

- Go to menu jobs, email, smtp-test
- Look at the upper right corner. There is a red link 'edit'.
If you click on it (or set editing in napp-it setup to 'always'), you can edit menu items.
- Click on the 'e' to the right of menu smtp-test to edit the menu script.

Edit the script, save it and test it.
You can go back to the last three versions.


Gea

Thanks! I've tried editing the smtp-test script but I still get the same error:

"could not connect smtp-server smtp.gmail.com"

Is there a simple/easy mail forwarder that I can install on Solaris Express 11 without any long configuration?
 
The main disadvantage is that disk I/O is slower. There was a thread in the virtualization forum on this. A VMware employee says it should not be much slower, but I and others have seen slowdowns of up to 50%. Unless you have vt-d support (requiring a compliant motherboard and CPU), so you can pass the disk controller(s) through to the NAS VM, I wouldn't bother.
 
_Gea said: I would avoid SAS2 controllers (e.g. LSI 2008) when not needed because of the WWN problem (an LSI 2008 reports a disk-related WWN instead of a controller-based ID). Read http://www.nexenta.org/boards/1/topics/823

Just to confirm, the only reason you suggest against the 2008s is that they use WWNs? I just bought some, and if that's the only "problem" I think I prefer WWNs long-term anyway. But if there are other issues, I want to be prepared.
 
Do you mean comparing those three to each other on ESXi, or comparing any of those with ESXi vs. without ESXi?
The latter. Or any ZFS OS on ESXi.

The main disadvantage is that disk I/O is slower. There was a thread in the virtualization forum on this. A VMware employee says it should not be much slower, but I and others have seen slowdowns of up to 50%. Unless you have vt-d support (requiring a compliant motherboard and CPU), so you can pass the disk controller(s) through to the NAS VM, I wouldn't bother.
I have a vt-d passthrough-capable mobo and CPU, so will there be any amount of slowdown, or is it 100% native?
 
The latter. Or any ZFS OS on ESXi.

I have a vt-d passthrough-capable mobo and CPU, so will there be any amount of slowdown, or is it 100% native?

I don't know yet, but I intend to go the ESXi route myself. The only drawback I am concerned with is whether there is any performance hit vs. physical. From all the posts I have seen, people report no noticeable slowdown. The key benefit I see is that I can put small lower-priority VMs on the same host and leverage the rack space better. I put in 48GB of RAM and dual 5645s, which is more than overkill, but the differential price compared to a dedicated NAS was a small portion of the whole project. And if the NAS gets resource-starved, I can keep giving it more until it maxes the server out.

If mine were a true production environment, I doubt I would do it this way, but for a home lab it seems a great compromise.
 
Just to confirm, the only reason you suggest against the 2008s is that they use WWN? I just bought some, and if that's the only "problem" I think I prefer the WWN long-term anyway. But if there are other issues I want to be prepared.

Identifying a disk by a simple port-id like c1t0d0 on a SAS1 controller was easier.
But now I also prefer 2008-based SAS2 controllers due to better performance,
and I write down the disk-related WWNs.

I have also added disk detection to napp-it via dd, which can identify disks in a lot of configurations, and I hope for a free SES backplane tool in a future OS version.

Gea
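For reference, one way to map those WWN-style device names to physical disks (a minimal sketch using stock Solaris tools; the device names shown are placeholders):

# list all disks; on a SAS2 HBA they appear with WWN-based names such as c0t5000C500...d0
format </dev/null

# print vendor, model and serial number per device, so each physical disk can be matched and labeled
iostat -En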
 
Identifying a disk by a simple port-id like c1t0d0 on a SAS1 controller was easier.
But now I also prefer 2008-based SAS2 controllers due to better performance,
and I write down the disk-related WWNs.

I have also added disk detection to napp-it via dd, which can identify disks in a lot of configurations, and I hope for a free SES backplane tool in a future OS version.

Gea

Great, I look forward to seeing it. I intend to just pop a label on each one right away, so for locating a disk there will be no question.
 
Thanks! I've tried editing the smtp-test script but I still get the same error:

"could not connect smtp-server smtp.gmail.com"

Is there a simple/easy mail forwarder that I can install on Solaris Express 11 without any long configuration?


A mail forwarder does not help.
You need complete SSL user authentication.

You may try another Perl SMTP module, or
look for another free mail account or mailing list and forward it to Gmail.

Gea
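As a quick check from the shell (my suggestion, assuming OpenSSL is installed): Gmail's port 465 expects an SSL handshake before any SMTP dialogue, which is presumably why the plain Net::SMTP connect fails.

# open an SSL connection to Gmail's SMTPS port; you should see the certificate chain and a 220 greeting
openssl s_client -connect smtp.gmail.com:465 -crlf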
 
I don't know yet, but I intend to go the ESXi route myself. The only drawback I am concerned with is whether there is any performance hit vs. physical. From all the posts I have seen, people report no noticeable slowdown. The key benefit I see is that I can put small lower-priority VMs on the same host and leverage the rack space better. I put in 48GB of RAM and dual 5645s, which is more than overkill, but the differential price compared to a dedicated NAS was a small portion of the whole project. And if the NAS gets resource-starved, I can keep giving it more until it maxes the server out.

If mine were a true production environment, I doubt I would do it this way, but for a home lab it seems a great compromise.

The main advantages of a virtualized SAN for ESXi are:
- best performance between SAN and VMs via the VMware vmxnet3 10Gb driver, at no extra cost
- less hardware needed (but you should have the same amount of RAM as with separate machines)
- fewer single points of failure compared with "one SAN for all" (each ESXi has its own SAN)

Things you have to take care of:
You can use ESXi hot-failover like with a dedicated SAN server, but in case of problems you mostly have an ESXi + corresponding SAN failure together.

You may have more SAN servers to manage.
You need a strategy for updates and crash scenarios.

For my own setup (computer center of a small university) and my 7 all-in-ones, I have done it the easiest way:
- I always have one spare/test all-in-one server with enough free disk slots available (server no. 7)
- The six other servers form pairs, each with enough free disk slots for the corresponding box
- The basic ESXi net/VLAN config is always identical

In case of updates (ESXi or SAN) or problems, I export and/or unplug the pool with the VMs, plug it into another server,
import the pool, share it via NFS, and import the VMs into the inventory. It's a clear and easy procedure and takes about
20 minutes per all-in-one with up to 10 VMs. All is done with free software and there is no need for a complicated "high-end"
crash and SAN failover scenario. I have backups, but I have never needed them since I do it this way together with ZFS snaps.

Optionally I have thought about separate disk boxes with expanders, so I could just plug the SAS cable into another box,
but the current way is easy enough.

Another option is async replication between SANs, but then there is a time delay like with backups.
Sync replication, on the other hand, is complicated and slow.

Gea
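For reference, the ZFS side of that move boils down to a couple of commands (a minimal sketch; "tank" and "tank/vmstore" are placeholder names, and the ESXi steps of mounting the NFS datastore and registering the VMs still happen in the vSphere client):

# on the old host: cleanly export the pool before unplugging the disks
zpool export tank

# on the new host: scan for and import the pool, then share the VM filesystem over NFS
zpool import tank
zfs set sharenfs=on tank/vmstore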
 
What are the advantages and disadvantages of running OI/OS/Nexenta on ESXi?

An advantage: if you are serving reads and writes to VMs on the same physical box, you can get really good network performance, since the traffic never leaves the physical box.
 
Originally posted by _Gea:
edited parameter:

you may try the following:
edit the smtp mailtest menu script

look for the line:
$smtp = Net::SMTP->new($server) || &mess("could not connect smtp-server $server");

and change it to:

$smtp = Net::SMTP->new($server,Port => 465) || &mess("could not connect smtp-server $server");

see info about Perl Net::SMTP module
http://linoleum.leapster.org/archives/48-Using-Perls-NetSMTP-module.html

Gea

Thx Gea. Your reference link had me search for more info on Net::SMTP and Gmail. It seems that we need the Net::SMTP::TLS module instead. It is not installed, so I'm trying to figure out where and how to add the module, but being new to this it is a tough slog.

If anyone can point me in the right direction I would appreciate it. Otherwise I'll try as best I can to install the module.
 
Originally posted by _Gea:
edited parameter:



Thx Gea. Your reference link had me search for more info on Net::SMTP and Gmail. It seems that we need the Net::SMTP::TLS module instead. It is not installed, so I'm trying to figure out where and how to add the module, but being new to this it is a tough slog.

If anyone can point me in the right direction I would appreciate it. Otherwise I'll try as best I can to install the module.

You have to look at the Perl path. Your libs have to be there.
e.g. /usr/perl5/5.8.4/lib is in Perl's path on OI.

If you use Net::SMTP you need the module SMTP.pm in
/usr/perl5/5.8.4/lib/Net

If you need Net::SMTP::TLS you need TLS.pm in
/usr/perl5/5.8.4/lib/Net/SMTP basically (or in a similar subfolder within your path, like /var/web-gui/data/napp-it/CGI/Net/SMTP/)

-> sometimes there are dependencies on other needed modules!
And: I have added /var/web-gui/data/napp-it/CGI to Perl's path and included some needed modules there
- like the auth modules for Net::SMTP (not part of a default Perl installation).

Modules there can be added automatically by a napp-it update.

Gea
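As a rough sketch of how you might check for and install the module yourself (my suggestion, not napp-it's official procedure; a CPAN install needs network access and may pull in dependencies such as IO::Socket::SSL):

# does perl already find Net::SMTP::TLS? (prints nothing and exits 0 if it does)
perl -MNet::SMTP::TLS -e 1

# show the directories perl searches for modules (@INC), i.e. where TLS.pm would have to live
perl -e 'print join("\n", @INC), "\n"'

# install the module from CPAN
perl -MCPAN -e 'install Net::SMTP::TLS'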
 
Gea: Are you using absolute URIs for the napp-it web interface?

I'm trying to add napp-it to my reverse proxy server but it isn't working because of the URIs...
 
I have NexentaStor CE v3.0.5 32-bit installed. I also installed Napp-IT 0.418 by using the Perl installer script. Everything appears to be working, however I cannot see any disks, volumes, iSCSI targets, or any configuration parameters that Napp-IT should be able to manage. All the web pages look to be rendered properly but there is no configuration data shown. What should I do to fix this?
 
Please help.
I am a Samba newbie and I can't get this straight. All I want to do is create 4 folders and set permissions for two users. I can only log into my shares as root. Also, I have added 2 users to the admin group through the GUI, but when I go to Computer Management I cannot get them to show up; I tried with root, admin, and chad. I did delete the user and redo the password after napp-it was installed.


May 2 20:15:49 Zfs-srv1 smbsrv: [ID 138215 kern.notice] NOTICE: smbd[ZFS-SRV1\guest]: movies access denied: guest disabled
May 2 20:15:49 Zfs-srv1 last message repeated 1 time
May 2 20:20:43 Zfs-srv1 smbsrv: [ID 138215 kern.notice] NOTICE: smbd[ZFS-SRV1\guest]: movies access denied: guest disabled
May 2 20:20:52 Zfs-srv1 last message repeated 8 times
May 2 20:21:10 Zfs-srv1 smbsrv: [ID 138215 kern.notice] NOTICE: smbd[ZFS-SRV1\chad]: movies access denied: share ACL
May 2 20:21:15 Zfs-srv1 last message repeated 1 time
May 2 20:24:04 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@ failed, error code -9961
May 2 20:24:04 Zfs-srv1 smbd[1607]: [ID 136767 daemon.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:24:04 Zfs-srv1 smbd[1607]: [ID 817528 daemon.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:24:04 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@ failed, error code -9961
May 2 20:24:04 Zfs-srv1 smbd[1607]: [ID 136767 daemon.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:24:04 Zfs-srv1 smbd[1607]: [ID 817528 daemon.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:24:15 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@ failed, error code -9961
May 2 20:24:15 Zfs-srv1 smbadm[1665]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:24:15 Zfs-srv1 smbadm[1665]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:24:16 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@ failed, error code -9961
May 2 20:24:16 Zfs-srv1 smbadm[1666]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:24:16 Zfs-srv1 smbadm[1666]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:24:16 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@ failed, error code -9961
May 2 20:24:16 Zfs-srv1 smbadm[1671]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:24:16 Zfs-srv1 smbadm[1671]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:24:16 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@ failed, error code -9961
May 2 20:24:16 Zfs-srv1 smbadm[1672]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:24:16 Zfs-srv1 smbadm[1672]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:24:16 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@ failed, error code -9961
May 2 20:24:16 Zfs-srv1 smbadm[1677]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:24:16 Zfs-srv1 smbadm[1677]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:24:16 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@ failed, error code -9961
May 2 20:24:16 Zfs-srv1 smbadm[1678]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:24:16 Zfs-srv1 smbadm[1678]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:24:16 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@ failed, error code -9961
May 2 20:24:16 Zfs-srv1 smbadm[1683]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:24:16 Zfs-srv1 smbadm[1683]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:25:43 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@zfs-srv1 failed, error code -9961
May 2 20:25:43 Zfs-srv1 smbd[1840]: [ID 136767 daemon.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:25:43 Zfs-srv1 smbd[1840]: [ID 817528 daemon.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:25:43 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@zfs-srv1 failed, error code -9961
May 2 20:25:43 Zfs-srv1 smbd[1840]: [ID 136767 daemon.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:25:43 Zfs-srv1 smbd[1840]: [ID 817528 daemon.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:25:48 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@zfs-srv1 failed, error code -9961
May 2 20:25:48 Zfs-srv1 smbadm[1882]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:25:48 Zfs-srv1 smbadm[1882]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:25:48 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@zfs-srv1 failed, error code -9961
May 2 20:25:48 Zfs-srv1 smbadm[1883]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:25:48 Zfs-srv1 smbadm[1883]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:25:48 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@zfs-srv1 failed, error code -9961
May 2 20:25:48 Zfs-srv1 smbadm[1888]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:25:48 Zfs-srv1 smbadm[1888]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:25:48 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@zfs-srv1 failed, error code -9961
May 2 20:25:48 Zfs-srv1 smbadm[1889]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:25:48 Zfs-srv1 smbadm[1889]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:25:48 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@zfs-srv1 failed, error code -9961
May 2 20:25:48 Zfs-srv1 smbadm[1894]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:25:48 Zfs-srv1 smbadm[1894]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:25:48 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@zfs-srv1 failed, error code -9961
May 2 20:25:48 Zfs-srv1 smbadm[1895]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:25:48 Zfs-srv1 smbadm[1895]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:25:48 Zfs-srv1 idmap[634]: [ID 523480 daemon.notice] AD lookup of winname chad@zfs-srv1 failed, error code -9961
May 2 20:25:48 Zfs-srv1 smbadm[1900]: [ID 136767 user.error] smb_lgrp_getsid: failed to get a SID for user id=103 (-9961)
May 2 20:25:48 Zfs-srv1 smbadm[1900]: [ID 817528 user.error] smb_lgrp_iterate: cannot obtain a SID
May 2 20:27:10 Zfs-srv1 smbsrv: [ID 138215 kern.notice] NOTICE: smbd[ZFS-SRV1\Admin]: movies access denied: share ACL
May 2 20:27:11 Zfs-srv1 last message repeated 1 time
May 2 20:27:17 Zfs-srv1 smbsrv: [ID 138215 kern.notice] NOTICE: smbd[ZFS-SRV1\chad]: movies access denied: share ACL
May 2 20:27:18 Zfs-srv1 last message repeated 1 time
May 2 20:28:42 Zfs-srv1 smbd[2045]: [ID 812811 daemon.notice] logon[KIMBERLY-PC\chad]: WRONG_PASSWORD
May 2 20:28:54 Zfs-srv1 smbd[2045]: [ID 812811 daemon.notice] logon[zfs-srv1\chad]: WRONG_PASSWORD
May 2 20:34:01 Zfs-srv1 smbsrv: [ID 138215 kern.notice] NOTICE: smbd[ZFS-SRV1\root]: movies share not found
May 2 20:34:02 Zfs-srv1 last message repeated 8 times
 
Here's a fresh install. I know what my password is, why doesn't Samba?

May 2 21:39:07 Zfs-01 smbd[1630]: [ID 812811 daemon.notice] logon[CHAD-PC\chad]: WRONG_PASSWORD
May 2 21:40:37 Zfs-01 smbd[1630]: [ID 812811 daemon.notice] logon[CHAD-PC\root]: WRONG_PASSWORD


I can connect to the share as both users. However, I cannot manage ACLs through Windows. And I did run passwd for each user and rebooted.
 
I have napp-it running great on my little S11E ZFS server; I have a couple of Linux clients and a couple of Windows clients. The Linux clients use NFS and everything works great. However, when a Windows client copies files via SMB (the user is smb, group staff, a member of the smb power user group) and I then look at the files on the server (they don't appear to the Linux boxes via NFS), the permissions are all ?????? instead of drwx etc. I can do chmod 777 and this fixes things, but it's annoying to do this all the time. Is there a good way around this?

Thanks!

p.s. gea napp-it is great!
 
gremlin, try setting your root password again using "passwd root" and reboot. It is in Gea's instructions and having done that I haven't had any issues.
 
I have napp-it running great on my little S11E ZFS server; I have a couple of Linux clients and a couple of Windows clients. The Linux clients use NFS and everything works great. However, when a Windows client copies files via SMB (the user is smb, group staff, a member of the smb power user group) and I then look at the files on the server (they don't appear to the Linux boxes via NFS), the permissions are all ?????? instead of drwx etc. I can do chmod 777 and this fixes things, but it's annoying to do this all the time. Is there a good way around this?

About Samba:
With Solaris/napp-it you are always using the Solaris kernel-based CIFS/SMB server - not Samba!

Samba is an alternative SMB server for SMB/Windows clients (it could be installed instead, manually, without napp-it support).
The Solaris CIFS server is not as flexible, but it is faster, has better ACL support and is easier to configure.


About SMB/NFS:
The Solaris CIFS server always uses ACLs (like a real Windows server), not Unix permissions.
NFS (at least v3) always and only uses Unix permissions.

ACLs and Unix permissions are not an either-or; they depend on each other
(both have to allow access; if you restrict one, the corresponding value on the other will automatically change if necessary).

Easiest way to handle your problem:
- use SMB on your Linux clients, or
- set the top-level ACL of your share to allow everyone@ (or the default NFS user) to modify (see the sketch below)


Gea
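A minimal sketch of that second option using Solaris NFSv4 ACL syntax (the path is a placeholder; full_set/modify_set and the f/d inheritance flags are standard chmod(1) ACL shorthand, adjust as needed):

# set the share's top-level ACL so the owner keeps full rights and everyone@ may modify,
# inherited by new files (f) and directories (d)
chmod A=owner@:full_set:fd:allow,everyone@:modify_set:fd:allow /tank/share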
 
I have NexentaStor CE v3.0.5 32-bit installed. I also installed Napp-IT 0.418 by using the Perl installer script. Everything appears to be working, however I cannot see any disks, volumes, iSCSI targets, or any configuration parameters that Napp-IT should be able to manage. All the web pages look to be rendered properly but there is no configuration data shown. What should I do to fix this?

napp-it should show disks and volumes.

BUT
you can use either NexentaStor CE/EE OR
NexentaCore/OI/SE11 + napp-it, not both.


They are not compatible; e.g.
NexentaStor mounts a pool as /volumes/pool,
whereas NexentaCore/OI/SE11 mounts it as /pool.

Gea
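If you want to see where your pools are actually mounted (a quick check from a shell on the box; pool and filesystem names will vary):

# list each ZFS filesystem with its mountpoint; NexentaStor pools show up under /volumes/<pool>
zfs list -o name,mountpoint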
 
The main advantages of a virtualized SAN for ESXi are:
- best performance between SAN and VMs via the VMware vmxnet3 10Gb driver, at no extra cost
- less hardware needed (but you should have the same amount of RAM as with separate machines)
- fewer single points of failure compared with "one SAN for all" (each ESXi has its own SAN)

Things you have to take care of:
You can use ESXi hot-failover like with a dedicated SAN server, but in case of problems you mostly have an ESXi + corresponding SAN failure together.

You may have more SAN servers to manage.
You need a strategy for updates and crash scenarios.

For my own setup (computer center of a small university) and my 7 all-in-ones, I have done it the easiest way:
- I always have one spare/test all-in-one server with enough free disk slots available (server no. 7)
- The six other servers form pairs, each with enough free disk slots for the corresponding box
- The basic ESXi net/VLAN config is always identical

In case of updates (ESXi or SAN) or problems, I export and/or unplug the pool with the VMs, plug it into another server,
import the pool, share it via NFS, and import the VMs into the inventory. It's a clear and easy procedure and takes about
20 minutes per all-in-one with up to 10 VMs. All is done with free software and there is no need for a complicated "high-end"
crash and SAN failover scenario. I have backups, but I have never needed them since I do it this way together with ZFS snaps.

Optionally I have thought about separate disk boxes with expanders, so I could just plug the SAS cable into another box,
but the current way is easy enough.

Another option is async replication between SANs, but then there is a time delay like with backups.
Sync replication, on the other hand, is complicated and slow.

Gea

Short question:
When you run your OS with ZFS on ESXi, do you assign the disks to the VM through RDM or through some other way made possible by vt-d support?

Or do you use VMFS at the bottom and then ZFS on top of that?

I'm currently running Solaris Express 11 with ZFS on ESXi, but I'm using RDMs to give Solaris direct disk access. This isn't working flawlessly, though, and it would be interesting to know how other people are doing it.

Thanks!
 
Short question:
When you run your OS with ZFS on ESXi, do you assign the disks to the VM through RDM or through some other way made possible by vt-d support?

Or do you use VMFS at the bottom and then ZFS on top of that?

I'm currently running Solaris Express 11 with ZFS on ESXi, but I'm using RDMs to give Solaris direct disk access. This isn't working flawlessly, though, and it would be interesting to know how other people are doing it.

Thanks!

1. RDM (raw device mapping)
- not officially supported by VMware, could cause problems
- slow:
Solaris -> virtual Solaris disk driver -> ESXi -> ESXi disk driver -> disk
instead of
Solaris -> Solaris disk driver -> disk

--> not suggested with ZFS

2. ESXi virtual disk / local datastore

- disks/pools are not movable (you have no ZFS disks)
- no real disk access (you lose advanced ZFS raid and error correction)
- slow

--> not suggested with ZFS

3. I/O virtualisation via Intel vt-d or AMD IOMMU
- real controller and disk access for Solaris
- fast
- like running on real hardware

--> I can only recommend this way.


Gea
 
Legen, what issues are you having? I have tried RDM and a single big VMDK on each physical disk, but was seeing performance hits as high as 2x over native. Until I can afford a vt-d CPU/mobo, I'm sticking with a native NAS/SAN.
 
When I click the check box to enable passthrough for the Intel 82574L I get a message:


Do I want passthrough for this device?

Right now ESXi doesn't have drivers for the 82579LM NIC, but once it does, can the "NIC Teaming" function combine the 2 NICs into a real 2Gbit link that OpenIndiana can use (instead of a 1Gbit link with a failover link)?
 
1. RDM (raw device mapping)
- not officially supported by VMware, could cause problems
- slow:
Solaris -> virtual Solaris disk driver -> ESXi -> ESXi disk driver -> disk
instead of
Solaris -> Solaris disk driver -> disk

--> not suggested with ZFS

2. ESXi virtual disk / local datastore

- disks/pools are not movable (you have no ZFS disks)
- no real disk access (you lose advanced ZFS raid and error correction)
- slow

--> not suggested with ZFS

3. I/O virtualisation via Intel vt-d or AMD IOMMU
- real controller and disk access for Solaris
- fast
- like running on real hardware

--> I can only recommend this way.


Gea
Thanks for the answer! It really clarified my options; vt-d/AMD IOMMU is the way to go then :D!

Legen, what issues are you having? I have tried RDM and a single big VMDK on each physical disk, but was seeing performance hits as high as 2x over native. Until I can afford a vt-d CPU/mobo, I'm sticking with a native NAS/SAN.

Mostly I'm having the performance issue. It's slow, but I haven't done any testing to measure how slow it is compared to native mode (you say 2x, I think mine is even slower :rolleyes:). The other problem, which I think is related to the RDM pass-through setup, is random freezes of the whole ESXi box; they happen at irregular intervals varying from days to weeks, but more often when the server is under heavy load.
 
I had RDM pass-through work almost as badly as VMFS filestore access. It is not supported, and at least one time I tried it I had constant I/O errors. If I were you, I'd go the route of creating a datastore on each disk and creating one huge VMDK on each datastore to add to the VM, until you can go the vt-d route (I can't afford it right now; if you can, good luck!).
 
I am a bit confused by this whole all-in-one concept. If everything is going to be local, why set up a virtual SAN? I mean, that would be fine for testing, but I don't see why it would ever be done in production, as I previously stated. There must be something I am easily missing.

The reason why I am asking is that I currently run a Dell MD3000i (iSCSI SAN) with 3 clustered XenSource-based servers running on CentOS, with dual dedicated Intel NICs and an iSCSI-"optimized" layer 2 switch. Anyway, its warranty expires in a couple of weeks, and I had 4-hour hardware failure response on it. Since I am not going to have that anymore, I want to build a secondary SAN that I can use as an option should the primary SAN fail. I have the Xen guests backed up to another server on the LAN, but if my SAN goes out at this point I'm SOL and won't have anything to restore to. I've always been interested in using iSCSI/ZFS together to make my own "enterprise"-quality SAN, so this post really got me excited to try some things out.

Also, I see that most people are using SATA drives. Are people needing that much storage that they don't use 15K SAS instead? I currently use 15K SAS drives, so I was thinking about throwing in 9 x 146GB 15K SAS drives (1 as a hot spare) to start, as I can get used ones for about $115 each. I am planning on running them in whatever the ZFS version of RAID 10 is called (see the sketch below).
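(The ZFS equivalent of RAID 10 is a pool of striped mirror vdevs. A minimal sketch with placeholder device names, using the ninth disk as a hot spare:)

# four 2-way mirrors striped together, plus one hot spare
zpool create tank \
    mirror c3t0d0 c3t1d0 \
    mirror c3t2d0 c3t3d0 \
    mirror c3t4d0 c3t5d0 \
    mirror c3t6d0 c3t7d0 \
    spare c3t8d0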

Oh yeah, I need to get another switch as well (for redundancy purposes). I had bought some cheap HP unmanaged switches just to have around in case of failure, and they were supposed to support jumbo frames, but their performance was horrible when jumbo frames were enabled. I'm thinking that was because the buffer memory was too low. If so, what amount should I be looking at, minimally?

Just a note: I do plan on using iSCSI block devices for all my guests' storage. I don't like to use file-based storage for VMs, so I don't see the benefits/need for NFS/CIFS. Maybe someone knows something that I don't? Second note: I don't consider myself a sysadmin, but I do have a decent amount of experience as the owner of a small web hosting company who has to do most of the work himself. Feel free to correct me whenever I say something that is wrong and/or from left field. =P All my current stuff is Linux-based, so Solaris will be a bit new to me.

Last question: I see that napp-it works on multiple OSes. Which one is preferred? I would think OpenIndiana would be the best, since it's probably the new core driving force behind *solaris development. Thoughts?

I think my questions are all pretty much on topic, so I apologize if any of them are not.
 
The point is that a NAS/SAN like OI + napp-it is easy to manage and has nice features. If you are running ESXi as the host, with guests living on the virtual SAN storage, it is all in one box and easy to manage. Additionally, you can get very good network performance to the virtual SAN, since the traffic never leaves the ESXi vswitch. As far as drives are concerned, I don't think the intended use is enterprise-level installs where you are going to shell out for 15K SAS drives...
 