OpenSolaris derived ZFS NAS/ SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Do you guys think I can get by running an 8 drive vdev with only RAID-Z1? 95% of the time the disks will be sitting there spinning mostly inactive and once in a while will serve up a 1080p movie. That's it, nothing constantly intensive.
 
Two quick questions regarding CIFS/ACLs:

1) I'm in a workgroup environment logged on as "John" with password "1234". In ZFS, I've created an SMB user with the same name and password.

If the root user grants "John" access (using Windows) to a folder, should I receive the Windows authentication prompt? I thought it would automatically authenticate me if the user and passwords matched.

2) How can I grant and deny certain SMB users privileges? Do I keep my permissions at 777 and then login as root (via Windows) to remove the Everyone permission and limit access to that share?

EDIT: One more question...

3) Is it possible to have napp-it auto-scrub every other Sunday? This is a home file-server and every week is a bit much I think. I'd also prefer to have it done late on Sunday since nobody will be using the network at all...

Thanks!
 
I had a problem where I had to recover a ZFS filesystem created on another system...this info took me a while to find and digest so adding it here for posterity.

To import and view a ZFS pool created on another system, follow these instructions:

This example is specifically for the 3ware 9650SE raid card used in ...

Place the driver file (filename: tw) in /kernel/drv/amd64

Then you have to add the driver to the system with this command: add_drv -c scsi -i '"pci13c1,1004"' tw

zpool import

The above command will show the disk names (something like c3d0s0) and the pool name

zpool import -f "poolname"

This will import the dataset info into the current system's zfs database.

Create a mount point: mkdir /mnt/raid

Next issue this command: zfs set mountpoint=legacy [poolname]/ROOT/opensolaris

Now mount the pool with: mount -F zfs [poolname]/ROOT/opensolaris /mnt/raid

The last two commands came from the following article: Sun Blog Article by Avinash Joshi
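
For what it's worth, each step can be sanity-checked along the way (standard Solaris-type commands): after add_drv, running devfsadm -c disk and then format (just list the disks and quit) should show the array's disks; after the import, zpool status poolname and zfs list -r poolname should show the pool and its datasets.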
 
How would I do every other Sunday on that tab?

I see how I can do every Sunday, or pick certain days of the month, but not every other Sunday...
 
As far as I know, getting rid of the view should do it. Also, you said your performance was bad until you changed the block size. It would be nice if you shared what worked better...

For VMware, I found 64kb blocks to be much better... though I thought about testing a few other scenarios so I don't know if it is the absolute best one for this use.
 
By 'for vmware', do you mean as a datastore for vmware? How much of a difference did you see?
 
By 'for vmware', do you mean as a datastore for vmware? How much of a difference did you see?

Yes, for a datastore. I just did a quick test of cloning a 20GB VM. 8k blocksize took 99 minutes and 64k blocksize took 19 minutes. Which actually still sounds slow but I'm sure there are other things I need to tweak.
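
For reference, where that block size is set depends on the datastore type (the dataset names below are just placeholders):

zfs create -V 200G -o volblocksize=64K tank/vmstore (zvol-backed iSCSI LUN - volblocksize can only be set at creation)
zfs set recordsize=64K tank/nfsstore (NFS datastore - affects newly written blocks only)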
 
I wouldn't put that many drives on a single raidz. Either go raidz2 or raid10...

I've been thinking about what I want for my final config. I have up to 10 drives. Which would be better, two 5 disk raidz vdevs in one pool or one big 10 disk raidz2 (and why)?
 
Do you guys think I can get by running an 8 drive vdev with only RAID-Z1? 95% of the time the disks will be sitting there spinning mostly inactive and once in a while will serve up a 1080p movie. That's it, nothing constantly intensive.

I would not do that.

In a Raid-5/ Raid-Z1 with 8 disks, a single disk failure is fairly likely, especially after a few years of use. If that happens and you replace the failed disk with a new one, the remaining disks have to work hard to rebuild/ resilver the pool. This resilver can last one or two days with 8 disks. If a second disk fails in this time, all of your data is lost.
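(Rough arithmetic behind that estimate: a resilver has to read all surviving disks and rewrite the replacement. A 2 TB disk at, say, 100 MB/s sustained needs about 2,000,000 MB / 100 MB/s ≈ 5.5 hours even in the ideal sequential case; add seven disks being read at once, seek contention and normal pool traffic, and a day or two is realistic.)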

If your data is more valuable than a few hours of work, I would suggest:
up to about 5 disks: Raid-Z1
up to about ten disks: Raid-Z2 or multiple smaller vdevs
more disks: Raid-Z3 or multiple smaller vdevs

Gea
 
Yes, for a datastore. I just did a quick test of cloning a 20GB VM. 8k blocksize took 99 minutes and 64k blocksize took 19 minutes. Which actually still sounds slow but I'm sure there are other things I need to tweak.

Pretty big difference. In this case, you are reading from the datastore and writing right back to it, correct? I have found references claiming that the natural vmfs3 block size is 64k, so this would make sense, since you are aligning things...
 
I've been thinking about what I want for my final config. I have up to 10 drives. Which would be better, two 5 disk raidz vdevs in one pool or one big 10 disk raidz2 (and why)?

One big Raid-Z2: about the same usable capacity, but slower for random I/O, and you have a much longer resilvering time in case of problems.

I use Raid-Z3 (15 disks) for my backup/media machines and a Raid-10-like config (to be exact, multiple striped 3-way mirrors) for my high-performance/ high-reliability pools.
I would not use 2 x Raid-Z1 with 10 disks. Either optimize for capacity or for speed; do not try to have both and end up with neither.

Gea
 
Pretty big difference. In this case, you are reading from the datastore and writing right back to it, correct? I have found references claiming that the natural vmfs3 block size is 64k, so this would make sense, since you are aligning things...

My opinion:
That's always a question of whether you are dealing with big files or a lot of small files, and of sequential reads vs. IOPS. Large blocks are better for the first, small ones for the second.

The defaults are optimized for usual loads.
You should only change them if you know what you are doing and have a special workload.


Gea
 
I had a problem where I had to recover a ZFS filesystem created on another system...this info took me a while to find and digest so adding it here for posterity.

To import and view a ZFS pool created on another system, follow these instructions:

This example is specifically for the 3ware 9650SE raid card used in ...

Place the driver file (filename: tw) in /kernel/drv/amd64

Then you have to add the driver to the system with this command: add_drv -c scsi -i '"pci13c1,1004"' tw

zpool import

The above command will show the disk names (something like c3d0s0) and the pool name

zpool import -f "poolname"

This will import the dataset info into the current system's zfs database.

Create a mount point: mkdir /mnt/raid

Next issue this command: zfs set mountpoint=legacy [poolname]/ROOT/opensolaris

Now mount the pool with: mount -F zfs [poolname]/ROOT/opensolaris /mnt/raid

The last two commands came from the following article: Sun Blog Article by Avinash Joshi

If you had not created the pool from a basic vdev on top of a hardware raid (never do that), it would be as simple as: connect your disks to any controller and just import the pool, provided the new OS supports the pool's ZFS version. Otherwise use a newer ZFS OS like OI or SE11;
in case of problems, use the newest ZFS OS.

If you did build it on hardware raid, you have a problem.

Gea
 
Two quick questions regarding CIFS/ACLs:

1) I'm in a workgroup environment logged on as "John" with password "1234". In ZFS, I've created an SMB user with the same name and password.

If the root user grants "John" access (using Windows) to a folder, should I receive the Windows authentication prompt? I thought it would automatically authenticate me if the user and passwords matched.

2) How can I grant and deny certain SMB users privileges? Do I keep my permissions at 777 and then login as root (via Windows) to remove the Everyone permission and limit access to that share?

EDIT: One more question...

3) Is it possible to have napp-it auto-scrub every other Sunday? This is a home file-server and every week is a bit much I think. I'd also prefer to have it done late on Sunday since nobody will be using the network at all...

Thanks!

1.
A Windows domain has this feature. In a workgroup you always have to authenticate when you connect to a server.

2.
The Solaris CIFS server is ACL-only (like a real Windows server).
Start with permissions 777 and restrict via ACLs.
The Unix permissions will follow automatically - do not touch them manually for SMB use.

3.
Yes, but only if you modify the script.
But why is weekly too much? Weekly is the suggestion for desktop drives;
monthly is suggested for enterprise disks.
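
One rough, untested way to approximate it outside the napp-it job (assumes GNU date under /usr/gnu/bin, as on OpenIndiana, and a pool named tank - adjust both):

#!/bin/sh
# scrub only in even-numbered ISO weeks
week=`/usr/gnu/bin/date +%V | sed 's/^0//'`
if [ $((week % 2)) -eq 0 ]; then
  /usr/sbin/zpool scrub tank
fi

Called from root's crontab late on Sunday, e.g.: 0 23 * * 0 /root/scrub-biweekly.sh (the script name is just an example).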

Gea
 
I also remember people reporting that zvol iSCSI I/O is slower than file-based targets. Have you tried it that way too?
 
I've been thinking about what I want for my final config. I have up to 10 drives. Which would be better, two 5 disk raidz vdevs in one pool or one big 10 disk raidz2 (and why)?

See here for more info than you ever wanted: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Bottom line: RaidZ works best when the number of drives minus parity drives is either 2, 4 or 8. This means for RaidZ use 3, 5 or 9; for RaidZ2 use 4, 6 or 10, for RaidZ3 use 5, 7 or 11.

For your plan, 10 drives, performance of 2 x 5 drive RaidZ vs 1 x 10 drive RaidZ2 both comply with the recommended best practice and should have near equal performance. Personally I'd go with the 10 drive RaidZ2 because it gives the best resiliency.
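
(The arithmetic behind that rule, roughly: with the default 128K recordsize each block is split across the data disks, so you want it to divide evenly. A 5-disk RaidZ has 4 data disks -> 32K per disk; a 10-disk RaidZ2 has 8 data disks -> 16K per disk. An 8-disk RaidZ1 would have 7 data disks -> about 18.3K per disk, which does not align as nicely.)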
 
I have problems setting up ACLs (I have read the PDF setup file and tried as described).

What I did after install:
passwd root

Opening the samba share and logging in as root works, so far so good.
When I edit the properties of a folder and then search for users, no users appear!
Tried to add root to the SMB admin group -> no success.

Tried a different user within the SMB admin group, nothing works.

When I connect with Computer Management I can see the users in the normal user list.
But when I go to shares, properties, add, advanced, and hit the search button, I can only see users like SYSTEM, SERVICE etc., but not my SMB users.

:(
Win7 x64

However, thanks for your work on napp-it. I've been using it for a couple of days now. When everything is set up I will donate some money for your work.

I'm beginning to think it's related to the edition of Win7 you have. I just tried with Win7 Home Premium and it worked better than it does on my WHS box, which is based on 2003. However, I have 2 machines running Win7 Enterprise and I cannot get the ACL permissions to set for the life of me. I noticed that secpol.msc does not exist on the Home edition but does on the Enterprise edition. I am almost certain it has to do with the Windows edition.
 
See here for more info than you ever wanted: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Bottom line: RaidZ works best when the number of drives minus parity drives is either 2, 4 or 8. This means for RaidZ use 3, 5 or 9; for RaidZ2 use 4, 6 or 10, for RaidZ3 use 5, 7 or 11.

For your plan, 10 drives, performance of 2 x 5 drive RaidZ vs 1 x 10 drive RaidZ2 both comply with the recommended best practice and should have near equal performance. Personally I'd go with the 10 drive RaidZ2 because it gives the best resiliency.

I would add that if you wish to expand, consider that with 5-disk RAIDZ vdevs you can grow the pool 5 disks at a time instead of 10 at a time with a 10-disk RaidZ2. Speed is excellent with 5 disks per vdev, and with 2 of them you get striping across the vdevs.
 
wheelz said:
I've been thinking about what I want for my final config. I have up to 10 drives. Which would be better, two 5 disk raidz vdevs in one pool or one big 10 disk raidz2 (and why)?

10 drive raid-z2

2 x 5-drive raid-z1 might be slightly faster, but if you are accessing it over a 1Gb connection, both will be limited by the network. In that case go with the 10-drive raid-z2 so you can *always* recover from 2 drive failures instead of only recovering from them about 50% of the time.
 
I would add that if you wish to expand, consider that with 5-disk RAIDZ vdevs you can grow the pool 5 disks at a time instead of 10 at a time with a 10-disk RaidZ2. Speed is excellent with 5 disks per vdev, and with 2 of them you get striping across the vdevs.

Good point - if you want to expand your pool later on this is worth taking into account.
 
My friend is trying to setup an all-in-one solution on a Dell PE 2900. He is having trouble trying to pass through his HBA card. Is vt-d not supported on his motherboard? Would changing the processors help? What other options are there if in fact he cant get passthrough to work?
 
This is perhaps a noob question, but here it is anyway :)

I have a ZFS machine (running OI_148) with 4GB of RAM and 16 2TB drives (10-drive RaidZ2 + 6-drive RaidZ2). It works nicely; local operations against the array produce about 240-300MB/sec (2 x 8-port controllers, point-to-point links, PCI-X controllers).

But when accessing it across the network, I get ~30-50MB/sec. Any reason for this? In general, I always seem to get a huge drop in performance when going over the network from a Sun-type OS.

What can I tune, or how else can I help the issue?
 
I'm beginning to think it's related to the edition of Win7 you have. I just tried with Win7 Home Premium and it worked better than it does on my WHS box, which is based on 2003. However, I have 2 machines running Win7 Enterprise and I cannot get the ACL permissions to set for the life of me. I noticed that secpol.msc does not exist on the Home edition but does on the Enterprise edition. I am almost certain it has to do with the Windows edition.

Is it better to install Win7 Home Premium in VirtualBox and set up the ACLs from there?
I think I don't have other options left.

Can other people confirm that it's not possible to set up ACLs with Win7 x64 Ultimate?
 
This is perhaps a noob question, but here it is anyway :)

I have a ZFS machine (running OI_148) with 4GB of RAM and 16 2TB drives (10-drive RaidZ2 + 6-drive RaidZ2). It works nicely; local operations against the array produce about 240-300MB/sec (2 x 8-port controllers, point-to-point links, PCI-X controllers).

But when accessing it across the network, I get ~30-50MB/sec. Any reason for this? In general, I always seem to get a huge drop in performance when going over the network from a Sun-type OS.

What can I tune, or how else can I help the issue?
I have something similar, but worse. Copying from my Macbook's SSD (150MB/s reads and writes, normally a bit higher) to the pool (capable of 100ish MB/s) via a crossover cable directly connecting the two devices results in only a 10MB/s or so transfer. Both devices are gigabit, so shouldn't I be getting closer to 100ish MB/s?

I'm interested in learning what tuning can be done as well.

EDIT: Oops, I was using a Cat5e cable that my MacBook didn't like. Switching to a Cat-6 cable resulted in a speed boost - to 56MB/s or so. Still about half of what I expected!
 
1.
A Windows domain has this feature. In a workgroup you always have to authenticate when you connect to a server.
Are you sure about this?

I was previously running Windows Server 2008 R2 (in Workgroup mode) and had a bunch of folders shared.

As long as a user existed on the server with the exact same username and password as the client machine, it never prompted for authentication.

Also, what's the difference between share-level ACLs and folder-level ACLs? I'm guessing that share-level ACLs restrict access to the entire share and folder-level ACLs restrict access to certain folders?
 
Also, what's the difference between share-level ACLs and folder-level ACLs? I'm guessing that share-level ACLs restrict access to the entire share and folder-level ACLs restrict access to certain folders?

1.
You have to connect as root and set the ACLs to allow chad to read the files.

2.
Share-level ACLs are used to restrict access for users who are connected via SMB.
Example:

If your files are set to full@all and your share-ACL is set to read@all,
everyone connecting via SMB has only read access, even root.
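
(If I remember right, the share-level ACL is also visible from the Solaris side as a file under the filesystem's .zfs/shares directory, so something like ls -V /tank/data/.zfs/shares/data - with your own pool and share names - should display it.)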

Gea

Are you sure about this?

I was previously running Windows Server 2008 R2 (in Workgroup mode) and had a bunch of folders shared.

As long as a user existed on the server with the exact same username and password as the client machine, it never prompted for authentication.

Yeah,
my WHS box is like this. It never prompts me for a username or password. I was thinking a user mapping needed to happen.
 
I think I will also chime in here. I as well have been unsuccessful getting Windows 7 x64 Ultimate to work setting share-based ACLs for Solaris. Couldn't get it working for the life of me. Did the "root passwd" and even verified the user was in the SMB share administrators group. Still no go. I eventually ended up just using folder-based ACLs as that's all I really needed and left it at that.


When I find some time after exams this week, I plan on trying this which was posted earlier by danswartz:
http://www.nexenta.com/corp/static/docs-stable/Win7CIFS.pdf
 
I think I will also chime in here. I as well have been unsuccessful getting Windows 7 x64 Ultimate to work setting share-based ACLs for Solaris. Couldn't get it working for the life of me. Did the "root passwd" and even verified the user was in the SMB share administrators group. Still no go. I eventually ended up just using folder-based ACLs as that's all I really needed and left it at that.


When I find some time after exams this week, I plan on trying this which was posted earlier by danswartz:
http://www.nexenta.com/corp/static/docs-stable/Win7CIFS.pdf


Windows 7 really seems to be a mess.
Not all editions support easy access to the Local Security Policy settings;
see http://www.sevenforums.com/tutorials/7357-local-security-policy-editor-open.html

Or use XP/2003
(ACL management seems to work out of the box there).

Also:
if you connect Computer Management to your Solaris machine,
always use the hostname, not an IP.

PS:
I use Windows 7 together with Microsoft AD without problems.

One more point.
If you start with a Windows user that has the same name and password as a Solaris user, you are not prompted for a password. But by default, this user has no permissions on root-owned shared folders.

How would you change that?
Begin with different users, connect to the share as root, set the needed ACLs, optionally give your default user full permissions, and synchronise the passwords afterwards.

Now you have full permissions with your default account.

Other option:
use a user mapping winuser:xx=unixuser:root



Maybe someone has to write an ACL module to do the settings from Solaris
(doing it manually on Solaris is more user-unfriendly than doing it on Windows).
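
(For reference, the manual Solaris way looks roughly like this - a sketch with placeholder names, filesystem tank/data and user john:

chmod A+user:john:full_set:file_inherit/dir_inherit:allow /tank/data
ls -V /tank/data

The first command adds an inheritable allow entry for john; ls -V shows the resulting ACL entries.)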



Gea
 
I created a user in napp-it with the same name/password as my Windows 7 account and then I added him to the SMB Administrators group.

However, when I try to connect to a share it prompts me for a user name and password.

Do I need to change the share-level permissions to explicitly allow this username? I thought it would work like you described above, because the username/password match and I belong to the SMB Admin group.
 
When trying to boot from the live CD, it takes me to the grub command prompt instead of booting to the GUI or letting me choose a boot option. Does anyone have any tips for me? I have tried 2 different CD/DVD drives and still no go.
 
Nope. IDE.

My hardware:

AMD Athlon 64 X2 4600+ Windsor 2.4GHz Socket AM2 89W Dual-Core Processor ADA4600CUBOX

ASUS M2N AM2 NVIDIA nForce 430 MCP ATX AMD Motherboard

2 x A-DATA 2GB (2 x 1GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Dual Channel Kit Desktop Memory Model AD2U800B1G5-DRH

MSI NX7300LE-TD256EH GeForce 7300LE 256MB 64-bit GDDR2 PCI Express x16 Video Card

Syba PCI Express SATA II 4 x Ports RAID Controller Card SY-PEX40008

8x SAMSUNG Spinpoint F4 HD204UI 2TB 5400 RPM SATA 3.0Gb/s

1X Seagate 80 GB Ide hard drive (for OS)

1 x LG 22X DVD±R DVD Burner Black IDE Model GH22NP20
 
I created a user in napp-it with the same name/password as my Windows 7 account and then I added him to the SMB Administrators group.

However, when I try to connect to a share it prompts me for a user name and password.

Do I need to change the share-level permissions to explicitly allow this username? I thought it would work like you described above, because the username/password match and I belong to the SMB Admin group.

Share-level ACLs are full access per default.
Do not change them if not needed.

The owner of the shared folder is root.
Only root is allowed access per default;
you have to log in as root and allow access for other users,

or
you need a mapping winuser:xx=unixuser:root.
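
(On OpenSolaris/OI such a mapping can be created with the idmap command - a sketch with a placeholder Windows user name:

idmap add winuser:john unixuser:root
idmap list

idmap list shows the active mapping rules; adjust the name to your own account.)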

Gea
 
This is perhaps a noob question, but here it is anyway :)

I have a ZFS machine (running OI_148) with 4GB of RAM and 16 2TB drives (10-drive RaidZ2 + 6-drive RaidZ2). It works nicely; local operations against the array produce about 240-300MB/sec (2 x 8-port controllers, point-to-point links, PCI-X controllers).

But when accessing it across the network, I get ~30-50MB/sec. Any reason for this? In general, I always seem to get a huge drop in performance when going over the network from a Sun-type OS.

What can I tune, or how else can I help the issue?
I am seeing the same; here is my setup:

1x Supermicro SC846E26-R1200B Chassis
1x Supermicro X8DTH-6F Motherboard
1x Intel Xeon E5620 / 2.4 GHz processor
1x Kingston Memory (3x 4GB) 12 GB Total
10x Seagate Constellation 1 TB SATA Hard Drives

Running ESXi 4.1, with Nexenta running on an SSD storage device. Nexenta has PCI passthrough to the SAS backplane and has 10 Seagate SAS 1TB drives installed, configured as 4 striped mirrors that make up my main datastore. 2 of the 1TB drives are mirrored and shared as a separate datastore to ESXi via NFS for my VMs.

I then installed a Windows 7 Ultimate VM, added an iSCSI drive shared out by Nexenta from the 4-mirror datastore, and ran benchmarks against it using several programs. Crystal Mark is consistently around 50 MB/s. I feel like I should be much higher than this, and I can find other posts of people stating they are hitting around 200 MB/s.

Is this typical or am I missing a setting? Any tips or tweaks? This is all contained inside an all-in-one box, so I hoped for much higher speeds.
 
Is the iSCSI share set to sync=disabled? ZIL off? I'm not saying to run that way in production, but for benchmarking purposes it would answer some questions.
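
Something along these lines, assuming your ZFS version already has the sync property and with your own dataset name in place of tank/iscsivol:

zfs get sync tank/iscsivol
zfs set sync=disabled tank/iscsivol (for testing only)
zfs set sync=standard tank/iscsivol (revert afterwards)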
 