OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Built my very first NAS on an OI/napp-it all-in-one with ESXi 5. Everything has gone relatively smoothly. It seems that I can only get to the web UI of napp-it via OI and not from any other computer on the network; does that seem right?

I've not done any network/Linux/Unix stuff, but I am willing to learn and play around. FreeNAS seemed to provide an elegant and easy-to-use solution, but not as robust in performance, so I went with OI/napp-it.

I've spent the last couple of nights trying to figure out how to work with ACLs and assign folder-level permissions to specific users, and I haven't made any progress. I've read quite a few threads on this forum and I really don't want to give up on OI/napp-it for FreeNAS. Anyway, are there any tutorials on how to assign ACL permissions via the CLI?

I've created various ZFS folders and SMB groups trying to limit access for individual users, as I will have 3 or 4 people in my family accessing the server. I want one or two folders (media stuff: movies, music, etc.) accessible by everyone, and the remaining folders will be user-specific. Any help would be greatly appreciated.

One more question. I am accessing from Windows 7 Ultimate, and I read somewhere that I can use Windows to actually manage the ACLs, but not with Win 7 Ultimate. If I used Win 7 Professional, could I manage the ACL settings, and how well does it work? Are the settings retained, and will they then show up in napp-it as well?

Since I am asking questions anyway: what is the best way to access my napp-it NAS while I am away from home?

Thank you for any and all help in advance!
 
Hey liam137,
Thanks for checking back on this. Unfortunately, I could not make any progress, as I haven't gotten my mobo back yet. Apparently, they've sent it to the supplier in the Netherlands (I'm from Europe) to have it "repaired"... I don't know what to expect... maybe a plant in the middle of nowhere with lots of people re-soldering mobos... :) Well, I will be selling it on eBay anyway when it gets back, as I decided that I want to use SAS as well, and therefore I ordered a Supermicro X8SI6-F. That will give me 8 SAS ports in addition to the 8 ports on the LSI 9211-8i card. The X8SI6-F is ordered, but it will not be delivered before May 11th. :(
BTW, the LSI card was exchanged in 3 days and is now waiting on my desk to be flashed again with the IT firmware. Also, did you see there's a new version, P13?

I took every single drive out of my setup (7 Samsung 1 TB) and had them tested in another computer with the Samsung (Seagate, really) testing tool. I also ran the long tests and looked at the SMART data... NOTHING. Since it affected all my disks over time in a rather random sequence, I assume it has nothing to do with the disks.

I have not looked in "fmadm faulty" for PCIe errors. I have switched slots out of pure desperation, and it got a lot better (not perfect, though).

How did you measure the transfer rates? I have never managed to get Bonnie running... and the dd stats were very odd, but I don't remember them by heart. NEVER that high; I would be very satisfied with that. I just remember that copying from Windows 7 onto the NAS quickly dropped to 10 MB/s and stayed there or even below. It took me AGES to copy my 2+ TB of data onto it...

My setup will be:
Supermicro X8SI6-F mobo with onboard LSI 2008 SAS controller for SAS drives
Xeon L3426 (low power consumption)
12 GB of RAM, maybe upgrading to 24 once everything works as expected
LSI 9211-8i for SATA drives
7 x 1 TB Samsung SATA

I did not quite understand what you meant by "phase" 10, 11, 12. I also did not understand exactly what you meant by the multipathing. Can you please elaborate?

As for the support question, I would never ask Oracle for support. This entire thing is pretty specific to home NASes, and I doubt they would support you with it, especially if it does not run on Sun hardware. The best thing would be if _Gea offered some (paid) support, but I doubt he has time for this... _Gea?

About the power supply: I think you should check whether it has enough power to supply that many devices. Maybe try connecting fewer disk drives. If your power supply cannot "feed" all devices adequately, it can create very odd behaviour.

Best regards,

Cap'

Captain,

Sorry to hear you're still hanging in the ether. But I think you know exactly how unnerving it is not being able to just turn the server on and let it go. I'm glad I got my tape drive last week so I can at least back up my important data and rest a bit easier.

I didn't see that P13 is out (phase 13 - that's where I got the phase 10, 11, 12 from). At least it isn't available from Supermicro's site, which is a moot point as I won't be using the card much longer. I don't know about needing to flash it, to be honest. Everything that I've read says that, by default, the cards come in IT mode. Perhaps an email to LSI can confirm.

I, too, have tested every sodding drive with SeaTools' long test without finding any problems. I would then do a complete erase. I keep wondering if it's the drive timing out that causes ZFS to mark it unavailable to the pool, because the drive's time-out exceeds ZFS's time-out. cfgadm still shows the failed drive as configured. The odd thing is, I've spent countless hours searching on our problem and been through hundreds of pages of Google results, which, I found, are utterly useless. The only other occurrence of my situation is yours, and the only similarity between us seems to be the drives. Oh, I don't know what kind of errors you were seeing, but mine all seem to be 'Transport' errors.

It's just so strange. I can't help screaming to myself, "there has to be something simple I'm missing!" I wish it were consistent so I could narrow down the issue without throwing parts at it. As a former mechanic, I hate parts-swapping!

The power supply is an NZXT HALE90 1000 W. There are 7 drives in the storage pool, 3 drives in the rpool, 1 tape drive, 1 CD-ROM, and something like 7 fans. The power supply estimator says I'm at about 700 W. I will grab my multimeter and test for power, ground, and reference voltages.

Once I get everything up and running with the 9211-8i, I'll check back. I'll also take a peek and see if it really does need to be flashed.

As for testing transfer rates, here's what I did.

This will create a 5 GB file and display the time to write and the average MB/s:
dd if=/dev/zero of=speedtest bs=1M count=5k

This will read the file back and display the time and average read speed:
dd if=speedtest of=/dev/null bs=1M

Be sure to rm speedtest when finished, and be sure your working directory is in the pool you're testing. (One caveat: if compression is enabled on the dataset, the zeros compress to almost nothing and the numbers will be overstated.)

I don't know what I did differently the first time. I reran it on my storage pool and got 372 MB/s write, 180 MB/s read. But there's a proxy and streaming pulling from it, so it's not truly accurate. Nautilus shows 100-110 MB/s when moving files; I've seen it hit 180 MB/s at times. So, who knows. I read from an SMB share as fast as the local disks can write - typically 60-70 MB/s. When moving data to the server, I was able to hit 90 MB/s write on large files and around 60 MB/s on small files. It does take a while to move it all over initially, though.

I forgot to mention: I found that splash screen rather annoying, so I disabled it and enabled verbose boot. It helps me see what's going on at that moment instead of staring at an Oracle logo wondering what's hanging behind it.

Be sure you have a backup plan of some sort. It sure makes me sleep better at night!

Cheers,
Liam
 
Liam,
Here's the link to the new firmware. It says it's P13.

Yes, you definitely need to flash it with the IT firmware. The IR firmware includes RAID functionality, and with ZFS that's usually not what you want, since ZFS's advanced features become more or less useless in that case. The IT firmware removes the RAID functionality and basically turns the adapter into an HBA, which ZFS can use to build its software RAID with all its advanced features.

Here's a link on how to reflash. I have used sas2flsh -o -e 6 to erase it, and then sas2flsh -f <firmware> -b <bios> to reprogram it. It has always worked flawlessly; however, never, ever turn off the computer after erasing and before reprogramming - that would render the adapter useless. Erase, reprogram, then turn off.
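For reference, the full sequence looks like this (a sketch only; the file names 2118it.bin and mptsas2.rom are the usual names in a 9211-8i IT firmware package and may differ in your download):

erase the flash (do NOT power off or reboot after this step):
sas2flsh -o -e 6

program the IT firmware and boot BIOS:
sas2flsh -o -f 2118it.bin -b mptsas2.rom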

Here's the official article for reprogramming.

I'll update you once I have received all my hardware and put it back together. I hope you'll find out more on your side - please share any news here; I check this forum regularly.

Cheers,
Cap'
 
Built my very first NAS on an OI/napp-it all-in-one with ESXi 5... It seems that I can only get to the web UI of napp-it via OI and not from any other computer on the network; does that seem right?

For the first issue: there is a field in the napp-it setup screen that can restrict access to certain IPs. You likely entered the IP of the VM itself. I am away from home, but I think the field is called NAS Access or something along those lines.
 
I'm having a problem with SMB + AD (OI + napp-it, obviously). It works great for a while (a few days or weeks), and then it suddenly just stops accepting logins from Windows workstations. The logs (/var/adm/messages) start saying "guest access is disabled" when I try to log in from a Windows workstation, no matter what (AD) username/password I use. Connecting from Linux workstations with AD usernames works totally fine.

One other point I find interesting: I can't even get the top-level listing as an AD user, but I can get it to list the shares (from a Windows box) if I use the OI login information - and then it does the same thing (timeout/invalid user "guest") once I try to go into a specific share.

I see others complaining about the same issue on mailing lists, but I have yet to come across a solution. Has anyone else seen this and found a solution besides restarting SMB regularly?
 
I've spent the last couple of nights trying to figure out how to work with ACLs and assign folder-level permissions to specific users... are there any tutorials on how to assign ACL permissions via the CLI? I want one or two folders (media stuff: movies, music, etc.) accessible by everyone, and the remaining folders will be user-specific.

Solaris ACLs are quite similar to Windows ACLs (in contrast to the POSIX ACLs usually used on Linux/Unix/FreeBSD).
You can set them via the CLI and the Solaris chmod command, but that's only useful if you want to script something.
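For reference, a minimal CLI sketch (the dataset path /tank/media and the user tom are assumptions; adjust to your setup):

show the current ACL in NFSv4 form:
/usr/bin/ls -V /tank/media

grant tom read access, inherited by new files and folders:
/usr/bin/chmod A+user:tom:read_set:file_inherit/dir_inherit:allow /tank/media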

You have three other options:
From (some versions of) Windows: connect via SMB as root and set ACLs like you would on any Windows server
From napp-it, menu ZFS folder: set default basic ACLs like everybody=modify
From the napp-it ACL extension (menu ZFS folder - ACL extension): setting trivial ACLs (the Linux-compatible ones) and user ACLs on folders is free:
- start the extension
- use ACL on folders
- select a folder
- add OI users with (read/modify/full) permissions

About access to napp-it:
If you have not restricted it in the napp-it settings, you can reach the Web-UI from your LAN (typically at http://serverip:81), provided your OI VM is patched through to your LAN via ESXi's virtual switch.
 
I'm having a problem with SMB + AD (OI + napp-it, obviously). It works great for a while, and then it suddenly just stops accepting logins from Windows workstations...

I have had the same problem twice in the last few months, but found no other solution.
 
I'm running an all-in-one ESXi/OI/napp-it setup. After rebooting the ESXi host this morning and adding an Intel 40 GB SSD, I get the following:

[screenshot: napp-it disk listing showing a hard error on the 40 GB virtual disk]

Looking at that, I have a hard error on the 40 GB virtual disk that OI is installed on? What would be the best/easiest way to recover from this? Just make another OI VM and import the pools? Is that a likely indicator that the physical disk OI is installed on has errors?
 
I'm thinking of trying this over FreeNAS; just curious whether anyone is having a trouble-free setup?
All the questions posted here make it seem like everyone is having problems?
Or maybe for most people it's working perfectly and they aren't posting?

I've run it on non-approved hardware with no issues. I have moved pools between hardware; the latest is an X58 board + 24 GB RAM (for now) and an E5504. CPU idle time is normally 100%, lol. MUCH better than FreeNAS.
 
I'm running an all-in-one ESXi/OI/napp-it setup... I have a hard error on the 40 GB virtual disk that OI is installed on? What would be the best/easiest way to recover from this? Is that a likely indicator that the physical disk OI is installed on has errors?

Soft/hard/transfer errors are only messages. They may or may not indicate a future failure of the device. If they suddenly increase, I would expect problems.

Besides that, a disk can fail at any time; be prepared with redundancy.
When that occurs, ZFS will inform you.

Also, your error is on the virtual CD drive, not the SSD.
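If you want to watch those counters yourself from the CLI, this standard Solaris command prints the soft/hard/transport error totals per device:

iostat -En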
 
Thanks for that. I do have redundancy in place, but I'm very new to ZFS, so I panicked a bit, I think :)
 
A hard error can mean your disk will die in 5 seconds or in 5 years. These errors can indicate a problem, but sometimes it's only a sector marked as bad, with no further consequences.

You can replace a 2 TB disk with a 3 TB disk if your controller supports 3 TB disks.
Sometimes you have problems replacing older 512-byte-sector disks with 4K ones.

About virtual -> physical:
I would just reinstall OI and import the pool. Most relevant settings (shares etc.) are stored in the pool. You must re-create users, jobs and iSCSI settings.
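The import itself is a two-step sketch ("tank" stands for your pool name):

list the pools found on the attached disks:
zpool import

import the pool (add -f if it complains the pool was last used by another system):
zpool import tank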

Everything went well moving my disks to a new motherboard... except the iSCSI portion :(

In the all-in-one I had 4 drives in a striped-mirror configuration that I used for TV recording. I configured the pool and then shared it via iSCSI to a virtual workstation, which connected using the Microsoft iSCSI initiator.

After importing the pool, I went into the COMSTAR configuration and reconfigured the targets, host groups, etc. The problem is creating the LU. It doesn't see an LU to let me import, and when I configure a new thin LU, I can see it from the workstation, but it is listed as an unformatted disk :(

Granted, it is just TV recordings and the world won't come to an end if I lose them (I never back them up), but am I missing something to be able to recreate the iSCSI share so that the workstation can see all the data?

Edit - Never mind... I think I got it working now. I hadn't typed the name of the LU exactly the same. After I deleted the LU and recreated it using the exact same name, it now shows up on the workstation.
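For anyone hitting the same wall: COMSTAR can also re-register an existing volume without recreating it, which avoids the name-matching issue entirely. A sketch, with a made-up zvol path:

re-register the LU stored on that zvol, keeping its GUID and data:
sbdadm import-lu /dev/zvol/rdsk/tank/tvrec

verify it shows up:
stmfadm list-lu -v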
 
I have a feature request (or maybe you've got a better solution for me?).

I have a couple of SSDs that I'm using for ZIL (300 GB Intel 320s, mirrored). I made 15 GB partitions and am using those. I had to do it all manually, and it looks funny in the napp-it GUI. Looking funny isn't a big deal, but napp-it still considers the drives unused and would let someone re-configure the (whole) drives, wiping out my configuration. I know I could probably set an HPA on the drives, but partitioning was far easier at the time.

Basically, how much trouble is it to make napp-it partition-aware? I'm afraid another admin might mistakenly "use" the drives for another purpose, thinking I just have two unused drives in the system!

Also, before someone complains to me about using 320s for ZIL: it looks like, under-provisioned by 95%, I should get 1.3+ petabytes of writes before I run out of Media Wearout Indicator on the SSDs. Pretty solid, IMO.
 

In my understanding, slicing disks is a no-go with Solaris.
"Always use the whole disk" is the rule. While it is a waste, there is no reason not to use the whole SSD. (Only up to half of RAM, and at most about 5 s worth of incoming write requests, is needed; on a 1 Gbit/s link, for example, that is roughly 120 MB/s x 5 s = 600 MB.) I would also not expect very high write rates from the Intel 320; they may only help a little with sync writes compared to regular non-sync writes.

So, do not expect such a feature in napp-it.
If you really need high sync-write performance, you should look at ACard, ZeusRAM or DDRdrive. They are more expensive than a pair of 320 GB SSDs but much, much faster - at least the Zeus and the DDRdrive, the pro options; otherwise a 40 GB Intel SSD is more than enough.

The problem with a separate ZIL is that you need very high write rates and very low latency; otherwise it's quite useless. (There are no really good cheap SSDs; this includes the Intel 320.)
 
Those will suffer the same problem other SSDs suffer from: sustained writes degrade performance quickly.
 
Why would eMLC with 5.5K random-write IOPS and 110 MB/s (100 GB model) be better than an SLC drive with 50K and 270 MB/s (it's just a spec; I don't know the actual performance)?
 
Hi Gea, I was wondering: would napp-it be able to function on Linux with the ZFS kernel implementation (http://zfsonlinux.org)?

Thanks!

napp-it as a tool to build a Web-UI will work.
You do not need more than a CGI-capable webserver like Apache or others.

The ZFS server suite from Solaris will not work unless you modify libraries and menus to account for:
- the basic installer
- service management
- disk naming
- some system tools being different
- the use of Samba + Posix ACLs instead of the Solaris kernel SMB server with Windows-compatible NFS4 ACLs
- a lot of other minor problems to expect

So the answer is: yes, possible, with a lot of work to do.
Can you expect a napp-it for Linux some time? No, not planned.

The same goes for FreeBSD and OSX, the other ZFS platforms.
 
Gea,

I found a minor bug in the latest napp-it. Create a folder with a single-quote mark in its name, like:

test ' directory

Then go to Extensions -> acl-settings -> ACL on folders and "reset ACL's" (I chose folders, files, and recursive). Proceed, and napp-it will print errors for every file in that directory with the single quote in its path. In acllib.pl, a single quote is used to quote the filename before passing it to chmod, and that's what causes the problem. I tried just changing the single quote in the code to a double quote, and that fixes the issue, although it would of course then fail for filenames containing double quotes (not sure if those are allowed or not?).

The same issue may exist in functions other than "reset ACL's"; that's all I've tried so far. Thanks,

Nick
 

If you manipulate file properties on Linux/Unix via CLI commands, the command options are separated by spaces.
So if you allow spaces in filenames, you must quote the filenames.

e.g.:
chmod 777 /folder/this file

will only work when written as
chmod 777 /folder/'this file'

When using Unix as a NAS for Linux/Unix/OSX/Windows, you should know:

1. Best is plain 7-bit ASCII in filenames, without spaces
2. If you allow spaces in filenames, quote characters are forbidden
3. Characters like /\?: are always forbidden in filenames
4. Non-ASCII characters like German umlauts may cause problems
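For what it's worth, a more robust fix than swapping quote characters is to escape any embedded single quotes before wrapping the name. A minimal Perl sketch (not the actual acllib.pl code; the variable names are made up):

my $safe = $filename;
$safe =~ s/'/'\\''/g;                  # turn each ' into '\'' (close quote, escaped quote, reopen)
system("chmod 777 '/folder/$safe'");   # now safe for names containing single quotes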
 
I'm having a problem with SMB + AD (OI + napp-it, obviously). It works great for a while, and then it suddenly just stops accepting logins from Windows workstations...

I just ran into this myself during my "upgrade" from an all-in-one to a separate SAN storage box. I figured that while I was upgrading to the latest OI + napp-it, I might as well join it to my AD server to start building restricted folders. Everything seemed to work fine the first day, and then that night I lost access to my media shares from all my Windows systems and media extenders. It took a while to figure out it was the SMB shares not working, but resetting the service brought them back. I'm beginning to think I should have left that part alone, or switched back to "workgroup mode".

I will say that I'm happy with my decision to move to a physical storage box. My virtual setup was 2 vCPUs and 10 GB RAM hosted by ESXi 4.1 on a Supermicro X8ST3-F motherboard. The physical setup is a Supermicro X8SIL-F with a Xeon X3430 and 8 GB RAM, using the same hard drives and HBAs (M1015 and BR10i). My media pool went from sporadic file transfers (with dropped connections) to a nice sustained 60+ MB/s with no dropped connections. This morning I copied 10 GB of data from storage to a workstation; that would normally crap out after 1-3 GB (about the time I would get warnings in VMware about high CPU usage).

It's also nice to be able to take my ESXi boxes down without taking the whole storage down - like when troubleshooting the IPMI issues on my primary host (which used to run the OI VM) that quit working months ago. I could use IPMIView to access it, but not the web interface. I ended up needing to upgrade the firmware and reset it to default settings.
 
OK, I've made another baby step of progress. I was able to enable the security/sharing menus in XP - you have to disable Simple File Sharing to get those settings to appear, for those who care. Windows XP SP3 is installed as a virtual machine on the same ESXi box.

Now when I try to set folder permissions from XP Pro SP3, it can't find the users. napp-it has User1 and User2; User1 is part of the administrator group, User2 is part of the Power Users group, but neither can be seen from Windows.

The first time after a restart that I try to add permissions, a login prompt appears; I enter the username and password, and then it proceeds to "no object found". When I do a search for available users, none are found. Any ideas?
 

- You may try connecting as user root
- Or set user ACLs with napp-it (ACL extension: reset ACL, set trivial ACL and user ACL are free)
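To connect as root from the Windows box before opening the security tab, something like this should work (server and share names are placeholders; Windows prompts for the root password and uses that session for the permissions dialog):

net use \\server\share /user:root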
 
Yo,

A small question: here in Belgium we don't get many power outages, but we did today, only briefly, < 1 sec! My server rebooted and I started a scrub, which had finished by noon today. During that very short power outage my server was streaming video from one pool, and that pool still seems to be active even 8 hours after the scrub. The 8 drives are constantly busy and I don't know why. Rebuilding data? Although napp-it says all pools are healthy? The other 2 pools were not accessed during the brief outage, are not active, and remain on standby.
Could anyone help figure this out?

thanks

EDIT: Seems it was still scrubbing; it stopped after a zpool scrub -s command! So why did I get the message: 1336556062 scrub finished 09.05.2012, 11:35 06 s???? It would be nice to see a progress bar or something! Anyway, it seems it was still scrubbing although the system said otherwise!
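For what it's worth, a running scrub does report progress on the CLI; this prints percent complete and an estimated time remaining while one is active ("tank" stands for your pool name):

zpool status tank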
 
In my understanding, slicing disks is a no-go with Solaris... So, do not expect such a feature in napp-it.

The 300 GB Intel 320s have MUCH higher IOPS than the smaller ones (40 GB, for example) - plus, if you slice, partition, HPA, or otherwise shrink the disk down a lot (like the 95% in my example), the SSD will last a LOT longer and perform better through its life for the task at hand (SLOG). I'm well aware a DRAM-based storage device would perform a lot better again, but I get vastly higher write IOPS for VMware than I do with just the disk pool.

My exact setup in this particular use case:
Quad Xeon (2.6 GHz), 48 GB RAM, 72-bay 2.5" 4U case, LSI 9211-8i.
I've got 70x 500 GB 5400 rpm disks in 7x 10-disk RAIDZ2 vdevs. The throughput is solid; the IOPS suck pretty badly. I've actually got 2 of these servers now, because I have tons of the 5400 rpm disks new and "free", and they work excellently as storage for our backup server.

Due to a major issue with a SAN, I was asked to temporarily retrofit one of them to stand in for the dead SAN while we work out a new option. I had the 300 GB Intel 320s, and partitioning them was by far the best bet to get life out of them as SLOGs (it looks like I should get years out of them while writing nearly as fast as they can) - I am getting somewhere around 1000% of the lifespan I would get without partitioning them. They can do about 12,000-16,000 write IOPS (about 40-50x as many as the disk pool can in a random fashion). In any case, I have them working just fine, partitioned and manually added with ZFS; the system sees and uses them fine, but napp-it refuses to accept it.

In my experience so far (a pretty heavy IO load from ~35-40 VMware virtual servers across a cluster, which write about 2 TB/week to them), the Intel 320s are an excellent SLOG option if you can't afford a DRAM SLOG device. They meet all the requirements (such as capacitors), other than limited IOPS and lifespan - but again, pretty damn good for the price if you only make a small fraction available to ZFS for the SLOG.

To reiterate: you get a LOT out of partitioning MLC SSDs smaller, and you definitely *can* do it with Solaris.
 
Any that are not specifically RAID drives :)

Actually, you can put in just about anything you like. I use WD Green series, Blue series, Samsung F3 and F4. Others use WD Blacks, Seagate Barracudas,...

Matej
 
@_GEA

After upgrading to 0.8h, my disks no longer spin down. It worked fine with 0.8g.

Running OI; I tried "svcadm disable fmd" and "setprop interval 24h" in /usr/lib/fm/fmd/plugins/disk-transport.conf.

Still no luck; any ideas?

/Jim
 
A bit of help, if you guys don't mind.

Q: Can I expand/shrink a pool?
A: You cannot shrink (who wants to?)
A: A pool is built from one or more vdevs/RAID-Zs
A: You can expand a pool by adding more RAID-Z vdevs, but you cannot expand a single RAID-Z within a pool

So, for example, if I build my ESXi box with 4x 1 TB drives, what would my usable storage be with RAID-Z2?

I only have (and only ever plan to have) 4 SATA ports. Could I therefore swap the 1 TB drives for 3 TB drives sequentially as I need to?
 

Use ESXi from a USB key (4 GB is enough). Consider using a small, cheap SSD for VM storage.

ZFS is not really designed for such small pools, but if you want to run with only 4 disks, I suggest using RAID-Z1 instead. Of course this is an individual choice, but I would never use Z2 with only 4 drives unless I was storing very important stuff where backups were minimal or monthly.

RAID-Z1 = 3 disks usable for data, 1 parity drive (so roughly 3 TB usable from your 4x 1 TB)
RAID-Z2 = 2 disks usable for data, 2 parity drives (roughly 2 TB usable)

If you wish to expand your pool later by swapping in bigger drives, that would require you to take out 1 drive at a time, wait for the rebuild, and rinse and repeat for all 4 drives.
A better solution is to keep the pool as is, then purchase a cheap disk controller and add the new drives onto that when the time comes.
 

But to create a new pool, I'd have to buy another 4 drives, right? I.e., I couldn't add one drive at a time.
 

You can't add drives to a vdev once the pool is created, but you can create another pool with however many drives you wish. So no, you can't grow your current pool's vdev and datasets that way.

With the (very low) number of drives you're considering, you might be better off with alternate methods. Have a look at something called "unRAID". It allows you to add one drive at a time, of any size, thereby expanding your share.
 
You can't add one drive at a time, but you can build a new RAID-Z vdev out of, let's say, 3 drives and add that to the existing pool, thus growing the pool.
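A sketch of the command (pool and device names are placeholders):

zpool add tank raidz c4t0d0 c4t1d0 c4t2d0

This adds a new 3-disk RAID-Z vdev to the pool "tank". Note it is permanent: a top-level vdev cannot be removed from the pool again.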

Matej
 
Hey there! I have a problem with a hidden share.

My share's name is nas$ and I can't write to it. Under SMB-share-all it says "none", and when I try to change it to full_set, I get an error:
chmod: WARNING: can't access /data/nas/.zfs/shares/nas

I'm guessing the $ is the problem. How do I solve that?

Also, I want to have a share that only 1 user has access to. Where do I set that user? I don't have Active Directory, just a simple workgroup. Do I have to create a user on Solaris and map it to a Windows user?

lp, Matej
 
Having a spot of bother...

I'm running the latest version but am now unable to delete or rename folders. Permissions are set to 777 and haven't been altered. I have a folder with movies on a pool, and even when logging in directly on the server the folder appears empty, while from Windows I can see several movies. I also tried to create a new folder, but it doesn't let me rename it. It always says permissions are not set for guest access, but they are set! I've never had this problem before and am out of ideas, so please, I need some help!

ty
 
Hello hardforumers,

I am using a Supermicro X8SIL-F mobo with two onboard Intel NICs for my Solaris 11 ZFS box. The adapters were working perfectly fine under Solaris 11 Express (no need for manual configuration; they worked immediately out of the box).

Following an HDD failure incident, I took the opportunity to upgrade from Solaris 11 Express to Solaris 11.

However, I am now facing huge network issues: my NICs randomly disappear at start-up (on the current installation or using a Live CD), and when they are present, the network only works for the first couple of seconds. For example, when I try to ping my gateway or another machine, I get an answer for the first two or three sets of commands issued, and then systematic failure, "no answer". Same thing for Firefox: I am able to reach Google during the first minute, and then "server not responding". I would like to point out that the active adapter always appears to be working in the network monitoring utility (no offline indication, still "active"), even when the network is in fact not responding.

Of course, I tested both network adapters in turn: same (absence of) result.

I don't think it is a hardware failure, since I would be very surprised if both adapters died simultaneously, just after an OS upgrade.

Also, migrating outside of an Oracle environment is not an option, because of pool versions.

I checked on the Internet and found some vaguely similar problems where people proposed modifying .conf files - however, my files seem OK.

Any ideas? Thanks.
 
My share's name is nas$ and I can't write to it... Do I have to create a user on Solaris and map it to a Windows user?

1. Setting share-level ACLs on hidden shares will work in the next release.
2. You must create a Solaris user. Go to the share-level ACL settings, remove everyone@ and add this user to give him exclusive access.

Never set a user mapping between a Solaris user and a Windows user in workgroup mode.
They are the same!!! Use user mappings only to map a Windows AD user to a Solaris user in domain mode.
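In domain mode, the CLI form of such a mapping would look like this (both names are made-up examples):

idmap add winuser:alice@example.local unixuser:alice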
 
I am using a Supermicro X8SIL-F mobo with two onboard Intel NICs for my Solaris 11 ZFS box... my NICs randomly disappear at start-up, and when they are present, the network only works for the first couple of seconds.

Check the BIOS setting "Active State Power Management".
Disable it if it is enabled.
 