OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

The point is that a NAS/SAN like OI+napp-it is easy to manage and has nice features. If you are running ESXi as the host, with guests living on the virtual SAN storage, it is all in one box and simple to administer. Additionally, you can get very good network performance to the VSAN, since the traffic never leaves the ESXi vswitch. As far as drives are concerned, I don't think the intended use is enterprise-level installs where you are going to shell out for 15K SAS drives...

When I say enterprise, I mean anything above hobby/home use. I understand that you get better performance when you host everything on one server and don't need external hardware like a switch, but that also becomes a single point of failure and, to me, defeats the purpose of having a SAN when you're doing everything locally. But again, if it's just for testing/home use, it's fine.
 
On further thought, I think the single-failure-point argument is backwards. If you only have one box in each role (e.g. one VM host and one SAN), you have a single PoF anyway - if either fails, you are down hard, and with two boxes you have more HW and more things that can fail.
 
When I say enterprise, I mean anything above hobby/home use. I understand that you get better performance when you host everything on one server and don't need external hardware like a switch, but that also becomes a single point of failure and, to me, defeats the purpose of having a SAN when you're doing everything locally. But again, if it's just for testing/home use, it's fine.

Traditionally, you often have several VMware servers plus one big SAN server holding all the VMs. A VMware server can fail and you can easily move virtual machines from one VMware server to the next with the correct (expensive) ESXi licence. All the machines are connected by a really expensive SAN network.

Because the SAN server is a single point of failure, you should have a second failover SAN server. Now it's going to become complicated. Active-active replication with hot failover would be nice, but that's really, really complicated, with a lot of potential problems, and it is slow and expensive.

So mostly you have only one active SAN server, with time-delayed backups to a second server or to backup storage.

My All-In-One concept is a decentralized one, mostly based on fast virtual inter-server network connections between the SAN and the VMware server. Each VMware server has its own dedicated virtualized SAN server. You do backups for sure, and a fast SAN network is nice to have at all. But in case of problems, you usually do not move VMs or restore backups. You move your ZFS disks/pool to another machine - in the simplest case by plugging the pool into another All-In-One, importing the pool, sharing/mounting the NFS storage in ESXi and starting up the VMs. Time needed for a complete move and restart of a dead All-In-One: about 20 minutes for 10 VMs, in a straightforward manner.
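
For illustration, a minimal console sketch of such a move; the pool name 'tank', the NFS filesystem 'tank/nfs', the IP and the datastore label are made-up examples:

  # on the old all-in-one (if it is still reachable), otherwise just pull the disks
  zpool export tank

  # on the replacement all-in-one, after plugging in the disks
  zpool import tank
  zfs set sharenfs=on tank/nfs             # re-share the VM filesystem over NFS

  # in the ESXi shell of the new host, mount the NFS share as a datastore
  esxcfg-nas -a -o 192.168.1.10 -s /tank/nfs nfs-datastore

After that you re-register the VMs from the datastore browser and power them on.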

It's just simple and easy - you do not need any costly software.
Everything is free or cheap - do not move terabytes over slow networks, move your ZFS pools/disks manually instead. (And do not tell NetApp or VMware...)

think KISS
Keep It Simple Stupid


Gea
 
I'm not sure where anyone got the idea that I suggested or use just two servers. I actually said I had 3 Xen servers. Using an actual SAN allows me to upgrade/maintain my Xen hosts with zero downtime, since I can just do a live migration. If a Xen host goes down before I can move them, I can simply restart the guests within seconds on another host. This also makes it easier for me to add new Xen hosts and move guests around that might need a bit more resources or are becoming troublesome. No idea where someone got the thinking that the software (what software?) was costly, and this All-In-One concept is nowhere near keeping things simple for a single server. When you mention moving things and not having to move TBs of data through the network, what are you talking about? If you move things to a different physical server and you're not using a SAN, of course that amount of data has to be moved in your scenario.

I swear I must be missing something if you guys think the All-In-One setup is KISS for a single-server environment, and if you're doing multiple servers, it just seems plain crazy to use such a method. More than likely I'm missing some key piece of information that is going to make me look like a moron, but so far I haven't seen that info.
 
_Gea, can you see my post #516? Simply put, what I'm asking is: can 2x 1gbit teamed NICs be presented as a virtual 2gbit link in OI and actually get 2gbit speeds?
 
I've been having quite a few issues installing openindiana on a number of P55 boxes (different motherboards, different CPUs) with hangs during the setup program, and just discovered that most can be traced to problems with the Realtek 8xxx embedded NICs that they all seem to have. I run into similar problems installing ESXi to put underneath openindiana.

Is it fair to say that if I scavenge a NIC from a server (say one of the Intel Gigabit ET dual-porters based on 82576) I should have zero problems whatsoever with either ESXi/openindiana or openindiana by itself?
 
gremlin, try setting your root password again using "passwd root" and reboot. It is in Gea's instructions and having done that I haven't had any issues.

Yeah, I tried that multiple times. I connected to a share as root and then went into Computer Management and did a search using both my chad and root logins and did not find any users. I mapped a share as chad and then went into Computer Management and tried both root and chad and did not find any of my unix users. I looked in both /var/smb/osmbpasswd and /var/smb/smbpasswd and both files have a hashed password for my users. I also restarted SMB while trying different commands. chad is in the adm group on the napp-it web page. I can see my users fine in the computer management tab. I'm really at a loss and very frustrated to have a 20lb paperweight at this point.


One quick question, after I get this figured out: should I be changing my permissions on the share permissions tab or the security tab?


Oh, and I do have a second question. Why, if I map an NFS share to my ESXi server and then try to add all the drive space (over 2TB) to a hard disk in ESXi, does it give me the error "file Imagename.vmdk is larger than the maximum size supported by the datastore"? It seems there's a 1.99TB limit, which Googling says is about block size, but apparently that does not apply to NFS shares. Also, can I grow this space after the fact?
 
Right now ESXi doesn't have drivers for the 82579LM NIC, but once it does, can the "NIC Teaming" function combine the 2 NICs into a real 2gbit link that OpenIndiana can use (instead of a 1gbit link with a failover link)?

You can pass them through, and OI can see the NICs and team them.
But I would always use only ESXi + the ESXi virtual switch for guest networking.
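
If you do pass two physical NICs through to OI, the teaming itself would be done with dladm on the OI side; a rough sketch, with assumed interface names (check yours with 'dladm show-phys') and an example IP:

  dladm show-phys                                  # confirm both NICs are visible to OI
  dladm create-aggr -l e1000g0 -l e1000g1 aggr0    # build the aggregation (switch/peer side must match)
  ifconfig aggr0 plumb 192.168.1.10 netmask 255.255.255.0 up

Keep in mind that a single TCP stream still tops out at about 1gbit; aggregation mainly helps with many parallel clients.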

Gea
 
I'm not sure where anyone got the idea that I suggested or use just two servers. I actually said I had 3 Xen servers. Using an actual SAN allows me to upgrade/maintain my Xen hosts with zero downtime, since I can just do a live migration. If a Xen host goes down before I can move them, I can simply restart the guests within seconds on another host. This also makes it easier for me to add new Xen hosts and move guests around that might need a bit more resources or are becoming troublesome. No idea where someone got the thinking that the software (what software?) was costly, and this All-In-One concept is nowhere near keeping things simple for a single server. When you mention moving things and not having to move TBs of data through the network, what are you talking about? If you move things to a different physical server and you're not using a SAN, of course that amount of data has to be moved in your scenario.

I swear I must be missing something if you guys think the All-In-One setup is KISS for a single-server environment, and if you're doing multiple servers, it just seems plain crazy to use such a method. More than likely I'm missing some key piece of information that is going to make me look like a moron, but so far I haven't seen that info.


How do you handle a SAN-server failure?

Gea
 
Yeah, I tried that multiple times. I connected to a share as root and then went into Computer Management and did a search using both my chad and root logins and did not find any users. I mapped a share as chad and then went into Computer Management and tried both root and chad and did not find any of my unix users. I looked in both /var/smb/osmbpasswd and /var/smb/smbpasswd and both files have a hashed password for my users. I also restarted SMB while trying different commands. chad is in the adm group on the napp-it web page. I can see my users fine in the computer management tab. I'm really at a loss and very frustrated to have a 20lb paperweight at this point.


One quick question, after I get this figured out: should I be changing my permissions on the share permissions tab or the security tab?


Oh, and I do have a second question. Why, if I map an NFS share to my ESXi server and then try to add all the drive space (over 2TB) to a hard disk in ESXi, does it give me the error "file Imagename.vmdk is larger than the maximum size supported by the datastore"? It seems there's a 1.99TB limit, which Googling says is about block size, but apparently that does not apply to NFS shares. Also, can I grow this space after the fact?


1. connect a share as a user who is a member of the Solaris SMB group administrators, not root, start Computer Management, connect to Solaris and set the share-level ACL

(root is only best if you want to set ACLs on shares directly)

2. Share or folder ACL?
A client has to respect both. Set them according to your needs.

3. your NFS datastore can be above 2TB, but not a single virtual disk
- Why do you need such a huge virtual disk?
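
If a really big virtual disk is wanted anyway, a hedged sketch from the ESXi shell that stays under the limit (datastore, folder and file names are examples):

  # create a thin 1.9TB disk on the NFS datastore
  vmkfstools -c 1900G -d thin /vmfs/volumes/nfs-datastore/whs2011/whs2011_data.vmdk

Growing it later should be possible with 'vmkfstools -X <newsize>' on that .vmdk, as long as you stay under the per-disk limit.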

Gea
 
I've been having quite a few issues installing openindiana on a number of P55 boxes (different motherboards, different CPUs) with hangs during the setup program, and just discovered that most can be traced to problems with the Realtek 8xxx embedded NICs that they all seem to have. I run into similar problems installing ESXi to put underneath openindiana.

Is it fair to say that if I scavenge a NIC from a server (say one of the Intel Gigabit ET dual-porters based on 82576) I should have zero problems whatsoever with either ESXi/openindiana or openindiana by itself?

I'm running OI on P55 chipset and it works like a charm. I have dual Intel Pro/1000 network cards, but I don't think I had to disable the onboard network card. On the other hand, I had to disable Legacy USB support for keyboard and hard drives, otherwise OI wouldn't boot.

Matej
 
Hey! Sorry I'm late, been without internet for a couple of days.

AFAIK, raidz2 vs raidz3 should not really affect CPU usage - it mainly just costs more space. Unless you have a hugely compelling reason, raidz3 does not make a lot of sense here. Maybe consider two 5-disk raidz2 vdevs added together, giving you 6 disks of usable storage?
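
A sketch of that layout, with placeholder device names:

  # one pool built from two 5-disk raidz2 vdevs (10 disks, 6 of them usable)
  zpool create tank \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    raidz2 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0
  zpool status tank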

Thanks. Let's say I have 8 disks: what would be the advantage of going with two 4-disk raidz vdevs vs an 8-disk raidz2? I guess it takes less time to resilver when a disk needs to be replaced, but is that all?


2.) ZFS_Best_Practices_Guide#RAIDZ_Configuration_Requirements_and_Recommendations
ZFS_Best_Practices_Guide#Should_I_Configure_a_RAIDZ.2C_RAIDZ-2.2C_RAIDZ-3.2C_or_a_Mirrored_Storage_Pool.3F
These links might give you some idea on the number of hard drives and performance...

1.) I see no reason why not...
3.) You can mix LSI and onboard SATA... I run my configuration like that... 8x from the LSI and 4x from the board, + 2x board for system.

Oh, and by the way, why the F2 1.5TB? If you are planning on buying the hard drives, go with the F4 2TB. Fewer drives and more space :)

Matej

Thanks! I already have 6 of them atm, that's why :)

I searched the web but couldn't find any graphs that show the CPU utilisation % on raidz, raidz2 and raidz3. I will probably go with an 8- or 9-disk raidz2, or two 4- or 5-disk raidz1 vdevs, and I'm wondering if, for example, a Pentium E6500 would be more than enough or too weak?
 
Reads from two striped 4-disk raidz vdevs should be faster than from an 8-disk raidz2, but you have less redundancy...
 
1. connect a share as a user who is a member of the Solaris SMB group administrators, not root, start Computer Management, connect to Solaris and set the share-level ACL

(root is only best if you want to set ACLs on shares directly)

So I do those steps, but when I click the Find button, it prompts me for a username and password. I then try my accounts and it never finds my Solaris users. I map \\computername\foldername as chad, who is a member of the Adm group according to my napp-it user page.


2. Share or folder ACL?
A client has to respect both. Set them according to your needs.

If I do it on the share, does it just not show up when I browse to it? I don't know what the difference is.


3. your NFS datastore can be above 2TB, but not a single virtual disk
- Why do you need such a huge virtual disk?

Gea

I was trying to get around the problems I was having by taking my pool and making it one large hard disk, because I'm going to be running WHS 2011 for a few reasons.

Thanks again for the direct response. I'm such a newb with this. I have it set up now so that anyone who is part of the Adm group can connect to the share. I just can't get it to assign individual users to a share.
 
Going to give S11E a shot on SNB hardware this Friday/Saturday, will report back and see how that goes (expecting it to panic on boot, but could be a pleasant surprise I guess).

Apparently the integrated NIC isn't supported which is unfortunate, but I've got PCIe NICs, so not a game-killer.
 
So I do those steps, but when I click the Find button, it prompts me for a username and password. I then try my accounts and it never finds my Solaris users. I map \\computername\foldername as chad, who is a member of the Adm group according to my napp-it user page.




If I do it on the share, does it just not show up when I browse to it? I don't know what the difference is.

1.
You have to connect as root and set the ACL to allow chad to read the files

2.
share-level ACLs are used to restrict access for users who are connected via SMB
ex.

If your files are set to full@all and your share ACL is set to read@all,
everyone connecting via SMB gets only read access, even root.
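
The same things can also be checked and set from the Solaris console; a sketch, assuming a filesystem tank/data shared as 'data' and a user 'chad':

  /usr/bin/ls -V /tank/data                                  # show the NFSv4 file/folder ACL
  /usr/bin/chmod A+user:chad:full_set:fd:allow /tank/data    # give chad full access, inherited to new files/folders
  /usr/bin/ls -V /tank/data/.zfs/shares/data                 # the share-level ACL should live here on the Solaris CIFS server

Use the full /usr/bin paths so you get the Solaris chmod/ls with ACL support rather than the GNU tools.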

Gea
 
1.
You have to connect as root and set the ACL to allow chad to read the files
Okay I got the rest, but I still don't have the first :(

Are you saying that under Computer Management, share folder, security, add, advanced, find now, I enter the root user and password?

Or I map the drive first using the servername, then do the above steps?

The second step is the way I'm trying.
 
I think I might have it???
I go Computer Management, share folder, security, add, advanced, find now, and I enter chad's username and password? All the users then show up. I can then assign the 2 users directly to the share?

Well, when I try that, it says access is denied when I try to change the permissions. However, when I use the root user, none of my users show up??? So confused.
 
Okay I got the rest, but I still don't have the first :(

Are you saying that under Computer Management, share folder, security, add, advanced, find now, I enter the root user and password?

Or I map the drive first using the servername, then do the above steps?

The second step is the way I'm trying.

setting share-level ACLs is more complicated than folder ACLs.
in most cases only folder and file ACLs are needed.

if you want to set share ACLs (see the console sketch below for adding a group member):
1. SMB connect as a user who is a member of the Solaris SMB group administrators (root is not a member per default!)
2. start Computer Management
3. connect Computer Management to Solaris
4. set the share ACL
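
For step 1, the group membership can be handled from the console; a sketch with 'chad' as the example account:

  smbadm add-member -m chad administrators    # make chad a member of the SMB group 'administrators'
  smbadm show -m administrators               # verify the membership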

Gea
 
setting share-level ACLs is more complicated than folder ACLs.
in most cases only folder and file ACLs are needed.

if you want to set share ACLs:
1. SMB connect as a user who is a member of the Solaris SMB group administrators (root is not a member!)
2. start Computer Management
3. connect Computer Management to Solaris
4. set the share ACL

Gea

Gea,
I know this seems elementary, but I can't get it to work.

I get a step 5 asking for a username and password. Does that make sense to you? Is that supposed to be happening?

I get this whether I try share- or folder-level ACLs.
 
Gea,
I know this seems elementary, but I can't get it to work.

I get a step 5 asking for a username and password. Does that make sense to you? Is that supposed to be happening?

I get this whether I try share- or folder-level ACLs.

if you are already connected as a user who is a member of administrators, you should not be asked again.

but start with folder ACLs
(SMB connect as root, right-click on the folder, Properties > Security)

Gea
 
if you are already connected as a user who is a member of administrators, you should not be asked again.

but start with folder ACLs
(SMB connect as root, right-click on the folder, Properties > Security)

Gea

1. mapped drive Z: to \\zfs-01\movies as root
2. right-click the Z: drive and go to Properties, then Security
3. click Edit, Add, Advanced, Find Now
4. get prompted for a username and password???
5. bang head on keyboard because it sounds so simple!

BTW, this is Win7 x64, if that makes a difference?
 
I think, from what Gea has said in other posts, you maybe want to try 'root' and the root password for the OI box?
 
I think, from what Gea has said in other posts, you maybe want to try 'root' and the root password for the OI box?

1. mapped drive Z: to \\zfs-01\movies as root

that's correct, right?

Trying it at the second prompt does nothing. In fact, entering any username or password does nothing there.
 
have you reset the root pw after first install of napp-it?
console: passwd root

(needed to create an SMB password beside the unix pw)


ps
you do not need a mapping
just enter \\zfs01 in the address field of a Windows file explorer
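
A minimal console sketch of that reset, plus a check that the CIFS service is running (service name as on OI/Solaris 11 Express):

  passwd root                            # re-set the password so an SMB hash is generated as well
  svcs smb/server                        # should report 'online'
  svcadm restart network/smb/server      # restart CIFS if in doubt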


Gea
 
have you reset the root pw after first install of napp-it?
console: passwd root

(needed to create an SMB password beside the unix pw)


ps
you do not need a mapping
just enter \\zfs01 in the address field of a Windows file explorer


Gea

Yes, I have done passwd root multiple times. Also tried sudo passwd root.

So how do you get it to prompt you for your username and password when you right-click Properties? Otherwise, when I click the security tab I get a permission denied.
 
Yes, I have done passwd root multiple times. Also tried sudo passwd root.

So how do you get it to prompt you for your username and password when you right-click Properties? Otherwise, when I click the security tab I get a permission denied.

it does prompt when you connect the SMB share, not when you click on the security tab.

check:
guest access must be disabled
(guests do not need a pw but are not allowed to change ACLs)

unix permissions (keep 777)
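
A quick way to check both points from the console, assuming the share lives at /tank/movies:

  sharemgr show -vp               # list the shares and their options (look at the SMB/guest settings)
  /usr/bin/ls -V /tank/movies     # show the ACLs; the plain unix bits should stay at 777
  chmod 777 /tank/movies          # reset the unix permissions if they were tightened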

Gea
 
it does prompt when you connect the SMB share, not when you click on the security tab.

check:
guest access must be disabled
(guests do not need a pw but are not allowed to change ACLs)

unix permissions (keep 777)

Gea

I was able to do the steps needed to change permissions, but not on Win7. I used Win 2003 and it worked perfectly the first time! So either 2 Win7 machines are messed up, or there may be some incompatibility? I'm going to try an XP machine tonight just to verify.
Thanks again for all your help!
 
5k3000s are not 4k drives. http://www.hitachigst.com/tech/techlib.nsf/techdocs/02D9197756A273D0862577D50024EC1D/$file/DS5K3000_ds.pdf

I am looking at SAS cards right now too, and was curious if anyone has used the AOC-USAS2-L8I? It is $100 cheaper than the other LSI SAS2008-based cards I looked at, which is worth having to rig up a replacement back plate (assuming I am correct that it can be done).

I'm using an AOC-USAS-L8I and it works like a champ under Nexenta 3.0.1 Core with napp-it. I have it in the x16 slot of a Gigabyte motherboard, with 4x1TB and 4x2TB drives attached and no issues. I went with it specifically because this is for a home setup and it was significantly cheaper than other cards. The only trick was modifying the bracket to get it to fit in a regular ATX case, because it is designed for some proprietary setup.
 
Hmmm, this poked a button in my memory. When I was playing around with nexentastor, I found a pdf describing changes you had to make to your win7 host to connect to nexentastor CIFS. Given this is the same CIFS as you are using here, it might be relevant?

http://www.nexenta.com/corp/static/docs-stable/Win7CIFS.pdf

Yep, I found that today too! I haven't had the chance to play with those settings just yet, but it does sound like it may help. I also found references to the Windows Live assistant causing problems with CIFS too. Honestly, if I can set up permissions with MiniXP, I won't care if Win7 doesn't work correctly in that regard.
 
Let us know if you try the win7 fix and it works. Gea could update his howto then...
 
When you map the drive, select 'Connect using different credentials' and enter the user 'root' with whatever password you set for 'root' in Solaris. Then you will be able to change the ACLs.
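
From a Windows command prompt the equivalent would be the following (server and share name taken from the earlier posts as examples; Windows only allows one set of credentials per server, so drop any old mapping first):

  net use Z: /delete
  net use Z: \\zfs-01\movies /user:root *

The trailing * makes Windows prompt for the root password.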
 
I got my first build up and running. It was relatively easy and I'm pretty happy with it and the napp-it interface (great work). Here is the link to my build thread (for the parts I used):

http://hardforum.com/showthread.php?t=1586873

I did run into a problem while testing. I am using it to present storage to an ESXi host. I went with creating a volume and using Comstar to create the iSCSI target. Afterwards my performance was bad and I realized I had used the wrong block size (incidentally, the ability to change the block size when creating a volume would be a great thing to add to the napp-it GUI). I then tried to delete the volume... of course it wouldn't let me because there was an iSCSI target attached to it. So then I tried to figure out what all I needed to detach to get the volume removed. Well, after going through and deleting most of the Comstar configuration (through the web GUI), I still couldn't delete the volume, and both the web GUI and the Solaris GUI became unusable. I left it overnight but it was still doing whatever it was doing. I hard-booted it and then it sat trying to boot forever. I then left it for another day and a half and finally it is now responsive.

This was just a test, so I don't mind wiping it out. However, could you tell me what the proper procedure is for destroying a volume that is being used through Comstar?
 
I have problems with setting up ACLs. (I have read the PDF setup file and tried as described.)

did after install:
passwd root

Opening the SMB share and logging in as root... so far so good.
Edit properties of a folder, then I searched for users, but no users appear!
Tried to add root to the SMB admin group -> no success

Tried a different user within the SMB admin group, nothing works

When I connect with Computer Management I can see the users in the normal user list.
Then I go to shares, properties, add, advanced, and hit the search button, and I can only see users like SYSTEM, SERVICE etc., but not my SMB users

:(
win7 x64

However, thanks for your work on napp-it. I have been using it for a couple of days now. When everything is set up I will donate some money for your work.
 
As far as I know, getting rid of the view should do it. Also, you said your performance was bad until you changed the block size. It would be nice if you shared what worked better...
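
As far as I know the teardown order matters: view first, then the logical unit, then the zvol. A hedged sketch with a placeholder volume tank/vol1 (the GUID is shortened here; take the real one from stmfadm):

  stmfadm list-lu -v                      # find the GUID of the LU backed by the zvol
  stmfadm remove-view -l 600144F0... -a   # remove all views on that LU
  stmfadm delete-lu 600144F0...           # delete the logical unit itself
  zfs destroy tank/vol1                   # now the volume can be destroyed
  zfs create -V 100G -o volblocksize=8k tank/vol1   # example re-creation with a different block size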
 