OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

When a Windows client is accessing a folder, it may also hold a lock on the thumbs.db file, so anything else trying to delete it would, of course, fail. I don't know whether this is a napp-it or netatalk problem.

Try having all Windows clients disconnected from the share, and check to be sure: bring up a cmd prompt on the Windows machines and run "net use". That will show whatever volumes the OS has active. Just closing an Explorer window doesn't release a connection, but it should disengage any lock on the thumbs.db file.
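For example, the check and a forced disconnect from the Windows side would look roughly like this (the server/share name is just an illustration):

Code:
rem list whatever connections Windows still has open
net use
rem drop a mapping explicitly (hypothetical server/share name)
net use \\nas\media /delete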

See if you can delete it from the Mac when no Windows clients are connected.
 
What I have built:
-Intel S1200BTS (actually S1200BTSR, supports v2 CPUs)
-Xeon E3-1225v2
-16GB Kingston ECC
-LSI 9211-8i SAS/SATA controller, flashed to newest IT firmware
-Hitachi 7K4000 4TB HDDs x8
-Intel 160GB X25-M SSD (had it lying around)
-SuperMicro CSE-M35T-1B SATA hot swap chassis x2
-Nexus Prominent 9 case

The CPU was originally in a board that needed a CPU with built-in video, hence the E3-xxx5 choice. The integrated graphics do me no good in this mobo, but at least the CPU still works.

Installed OI 151a5 (desktop) in a 30GB partition on the SSD. Maybe that's way too big, I don't know. My thought is that I can put the rest of the SSD to good use for something else later (ZIL, cache, I don't know). Suggestions welcome on the use of the SSD, like how big the OI boot partition should be and what I should use the rest for.

Made a raidz2 zpool out of the eight 7K4000 drives.
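(For anyone following along, the commands would look roughly like this - device names are placeholders, and the SSD would need slices carved out first before its leftover space could be reused:)

Code:
# raidz2 over the eight 4TB drives (device names are examples)
zpool create tank raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0
# later, hand leftover SSD slices to the pool as log and cache (slice numbers are examples)
zpool add tank log c2t0d0s1
zpool add tank cache c2t0d0s2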

This is very interesting news for me. I have the older LSI 1068-based SAS3081E-R flashed to "older" IT firmware, and my new 7K4000 4TB disk shows up as 2.2TB.

Can you tell me what phase your firmware is from? Perhaps even links?

Thanks in advance.
 
I've got a 24 bay box with two drive controllers talking to the bays. 8 on an M1015 and 16 on an areca 1260. I'd like to rejigger the device target numbers so they more or less match up with the bay numbers.

Right now I've got the m1015 showing up as c4 with targets 9 through 16 (c4t9d1...c4t16d1). The ports on the card are connected to bays 1 through 8. I have nothing on the drives yet, so rearranging data isn't an issue.

How can I easily reconfigure them so they show up as c4t0 through 7? I can use either 0-start or 1, doesn't matter.

Then I've got two drives on the Areca showing up as c2t5d0 and c2t6d0, but they're actually connected to ports 0 and 1 on the 1260. I'd like to have the 1260 targets start at 9 (or 8 if we're zero-starting) and increment up from there.

I figure it's better to do this now rather than later, before there's a lot of data on the drives. That, and to save myself a lot of headaches down the road when trying to figure out which drive is connected to which port.

So what's the simplest way to approach this?

And when adding drives, what kind of partition table should I be putting on them? msdos, sun, or what? I'd like the drives to be portable between the different drive bays. This so I can move them should needs dictate (like heat, rearranging the hot ones away from backing ones on top of them). Or whatever. I'm thinking OI is smart enough to be able to re-map the drives if they get moved? Provided they're labeled right?
 
This is very interesting news for me. I have the older LSI 1068-based SAS3081E-R flashed to "older" IT firmware, and my new 7K4000 4TB disk shows up as 2.2TB.

Can you tell me what phase your firmware is from? Perhaps even links?

Thanks in advance.

LSI 1068: max 2 TB disks
LSI 2008 (LSI 9211): supports disks > 2 TB
 
(attached screenshot: nappit-1.jpg)


Don't know why it's missing.

If I go into PuTTY and run bonnie++, it says it's version 1.03c.

how can I add it back to napp-it?

thanks

Click on Pools - Benchmarks
dd is a submenu below
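(If you'd rather run an equivalent test by hand over SSH, a rough sketch - the pool path below is just an example, and note that /dev/zero compresses away to nothing if compression is on:)

Code:
# sequential write test, then read it back (path is an example)
dd if=/dev/zero of=/tank/test/dd.tst bs=1024k count=8192
dd if=/tank/test/dd.tst of=/dev/null bs=1024k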
 
This is very interesting news for me. I have the older LSI 1068-based SAS3081E-R flashed to "older" IT firmware, and my new 7K4000 4TB disk shows up as 2.2TB.

Can you tell me what phase your firmware is from? Perhaps even links?

Thanks in advance.

Sorry, at work and can't look. But I just downloaded the newest from their site and flashed it. I do have the files here; the 2118it.bin file, which I think is the "main" one, has a file date of 4/11/2012 on it. But I think Gea already answered your question... seems like your board is not capable of larger drives. Big bummer.
 
Sorry, at work and can't look. But I just downloaded the newest from their site and flashed it. I do have the files here; the 2118it.bin file, which I think is the "main" one, has a file date of 4/11/2012 on it. But I think Gea already answered your question... seems like your board is not capable of larger drives. Big bummer.

Bummer, but the new card will allow me to migrate off the cursed WD Green drives. That alone is worth the price of entry. Thanks for replying!
 
Click on Pools - Benchmarks
dd is a submenu below

THANK YOU VERY MUCH

god I'm SUCH A NOOB

hmm wonder how hard it is to update bonnie++ :/

Is it possible to set up link aggregation in napp-it or OI? Thanks.
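(On the aggregation question: as far as I know napp-it doesn't configure this itself, it's done at the OS level with dladm. A minimal sketch, assuming placeholder NIC names and a switch that speaks LACP:)

Code:
# bundle two NICs into aggr0 (LACP active), then put an address on the aggregation
dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0
ipadm create-if aggr0
ipadm create-addr -T static -a 192.168.1.20/24 aggr0/v4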
 
I've got a simple all in one - OpenIndiana 151a5, ESXi 5.0, NappIt - latest version.
Hardware:
4x WD Caviar Black 1tb - Configured as 2x Mirrors
1x Crucial M4 128gb SSD - ZIL
The above are all connected to an IBM M1015 in IT mode
CPU = E3-1220
Memory for OI VM: 8gb

I'm getting seriously high benchmark numbers that don't seem within reason, and I would like to know why. Anyone got any ideas? Benchmark numbers below.
 
I've got a simple all in one - OpenIndiana 151a5, ESXi 5.0, NappIt - latest version.
Hardware:
4x WD Caviar Black 1tb - Configured as 2x Mirrors
1x Crucial M4 128gb SSD - ZIL
The above are all connected to an IBM M1015 in IT mode
CPU = E3-1220
Memory for OI VM: 8gb

I'm getting seriously high benchmark numbers that don't seem within reason, and I would like to know why. Anyone got any ideas? Benchmark numbers below.
No HBA?
I'm really curious what speeds you are getting when copying stuff from Windows 7 or 2008 R2 to one of your mirrors (if you are using napp-it SMB/CIFS shares for your network, that is).
 
Is de-dupe and/or compression turned on for that pool? Might it be ZFS being smart about repetitive data? Heh, I wonder what writing /dev/urandom would do...
 
I've got a 24 bay box with two drive controllers talking to the bays. 8 on an M1015 and 16 on an areca 1260. I'd like to rejigger the device target numbers so they more or less match up with the bay numbers.

Right now I've got the m1015 showing up as c4 with targets 9 through 16 (c4t9d1...c4t16d1). The ports on the card are connected to bays 1 through 8. I have nothing on the drives yet, so rearranging data isn't an issue.

How can I easily reconfigure them so they show up as c4t0 through 7? I can use either 0-start or 1, doesn't matter.

Then I've got two drives on the Areca showing up as c2t5d0 and c2t6d0, but they're actually connected to ports 0 and 1 on the 1260. I'd like to have the 1260 targets start at 9 (or 8 if we're zero-starting) and increment up from there.

I figure it's better to do this now rather than later, before there's a lot of data on the drives. That, and to save myself a lot of headaches down the road when trying to figure out which drive is connected to which port.

So what's the simplest way to approach this?

And when adding drives, what kind of partition table should I be putting on them? msdos, sun, or what? I'd like the drives to be portable between the different drive bays. This so I can move them should needs dictate (like heat, rearranging the hot ones away from backing ones on top of them). Or whatever. I'm thinking OI is smart enough to be able to re-map the drives if they get moved? Provided they're labeled right?

Does this help you?
From Solaris Express 11:
http://hardforum.com/showpost.php?p=1038783015&postcount=207
 

Not really, as croinfo only exists on Solaris, not in OI. But it does discuss some useful stuff; the trick will be finding tools that do the same thing for my setup. One thing that's interesting is the /etc/path_to_inst file. I'm wondering if editing that might allow getting the desired numbers on the M1015. Or I may just give up and flash it to IT mode so the built-in stuff will see it.
 
I've got a 24 bay box with two drive controllers talking to the bays. 8 on an M1015 and 16 on an areca 1260. I'd like to rejigger the device target numbers so they more or less match up with the bay numbers.

Right now I've got the m1015 showing up as c4 with targets 9 through 16 (c4t9d1...c4t16d1). The ports on the card are connected to bays 1 through 8. I have nothing on the drives yet, so rearranging data isn't an issue.

How can I easily reconfigure them so they show up as c4t0 through 7? I can use either 0-start or 1, doesn't matter.


This is a matter of cabling.
If you want targets c[n]t0..t7 connected to the first row of your backplane, you must physically connect the SATA/SAS cables accordingly. This is not a settable "label" but the physical port of the controller.

The other option is to use IT firmware on the 1015.
It will display port-independent WWN numbers like c3t600039300001EA56d0.
This number is unique to the disk and stays the same when you change the bay.

(You can use the SAS2 extension to detect the slot, or you need to write down the WWN number or the serial and the bay.)
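(A low-tech way to match those WWN-style names to bays is to note down each disk's serial number per slot; iostat can list them:)

Code:
# dump vendor/product/serial per device so you can note which WWN sits in which bay
iostat -En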
 
Finally getting around to building my RAID box. I have 7 Hitachi 3TB green drives and am trying to decide what ZFS setup to use. It seems like my options are:

-Raid Z2 (15TB usable)
-Raid Z3 (12TB usable)
-3 mirrors striped together in one pool (9TB usable) + 1 hot spare (can you have a spare that gets automatically used for whichever mirror needs it first? see the sketch at the end of this post)

The striped mirror would have the fastest reads/writes, but I sacrifice a lot of capacity. For the Z2/Z3 configurations, does ZFS read the parity from all parity drives for every single read? Or can it schedule them alternately, like when reading from a mirror? I.e., in any Z* configuration, am I ultimately getting the read/write performance of a single drive?

I also have questions about ashift. I just read about this today, and I believe these disks are 512B-sector, but it sounds like I need to set ashift to 12, otherwise I can't ever swap them out for a 4K-sector drive in the future?

My workload shouldn't be too terribly intense: I have a Media Center PC recording up to 4 streams and watching at most 1-2 at a time, then 2-3 home PCs that are going to store everything on the file server other than the OS/app executables, we do some light Adobe Premiere video editing, etc.

Also, another random question: I have an M1015 and I've never used hot swap, but can I just plug/unplug disks while the machine is on (I have cables, not a backplane)? Do I need to ensure the disks get plugged into power before/after SATA, or does it not matter?

Thanks!
David
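(On the spare question: a hot spare belongs to the pool, not to a single vdev, so with three mirrors striped in one pool it covers whichever mirror fails first. A rough sketch with placeholder device names:)

Code:
# one pool, three 2-way mirrors striped, plus a pool-wide hot spare (device names are placeholders)
zpool create tank mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0 mirror c3t4d0 c3t5d0 spare c3t6d0
zpool status tank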
 
This is a matter of cabling.
If you want targets c[n]t0..t7 connected to the first row of your backplane, you must physically connect the SATA/SAS cables accordingly. This is not a settable "label" but the physical port of the controller.
It's not a SAS backplane; it uses a breakout cable to go from SAS to SATA ports. For some reason it starts the numbering at 9. The box also has an Areca 1260, and that numbers its ports properly.
 
No HBA?
I'm really curious what speeds you are getting when copying stuff from Windows 7 or 2008 R2 to one of your mirrors (if you are using napp-it SMB/CIFS shares for your network, that is).

I get the expected performance over SMB/CIFS shares - anywhere from 79-100 MB/s transfer rates. The HBA is the IBM M1015 card in IT mode mentioned above.
 
Is de-dupe and/or compression turned on for that pool? Might it be ZFS being smart about repetitive data? Heh, I wonder what writing /dev/urandom would do...

Compression = Yes
De-dupe = No

How would I go about testing performance using the method mentioned above?
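(One way to take compression out of the picture, sketched with an example path: build an incompressible file from /dev/urandom and re-run the read/copy tests against that.)

Code:
# ~4GB of incompressible data (path is an example); /dev/urandom itself is slow,
# so time the read-back or a network copy of this file, not its creation
dd if=/dev/urandom of=/tank/test/random.bin bs=1024k count=4096
dd if=/tank/test/random.bin of=/dev/null bs=1024k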
 
I tried to update to 0.8k from 0.8h today and it seems to have killed napp-it; browsing to ip:81 now just gives:

Software error:
Can't locate UUID/Tiny.pm in @INC (@INC contains: /var/web-gui/data/napp-it/CGI /usr/perl5/site_perl/5.10.0/i86pc-solaris-64int /usr/perl5/site_perl/5.10.0 /usr/perl5/vendor_perl/5.10.0/i86pc-solaris-64int /usr/perl5/vendor_perl/5.10.0 /usr/perl5/vendor_perl /usr/perl5/5.10.0/lib/i86pc-solaris-64int /usr/perl5/5.10.0/lib .) at admin.pl line 713.
BEGIN failed--compilation aborted at admin.pl line 713.



I'll reboot later on today once the VMs have finished doing their jobs... anyone else had issues updating?


I'm also getting the same error.

I've tried reinstalling but I'm still getting the error message.
 
I'm also getting the same error.

I've tried reinstalling but I'm still getting the error message.

I cannot reproduce the error.
Do you have the file /var/web-gui/data/napp-it/CGI/UUID/Tiny.pm?
(needed for netatalk 2)

PS:
on errors during online GUI updates, you can reinstall via the base wget installer
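(For reference, the base installer one-liner is run as root over SSH; a sketch - double-check napp-it.org for the current form of the line:)

Code:
# re-run the base installer over the existing setup
wget -O - www.napp-it.org/nappit | perl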
 
I had a problem during the Netatalk 3.0 installation... Even though the napp-it script was telling me that the netatalk installation was successful, the package was not even built.

So I tried to build Netatalk manually and realized I was missing the math.h file, so the compilation failed, and so did the make install afterwards.

I just installed the "header-math" package and everything was fine afterwards.

Hope it can help someone.
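(For reference, on OI that's a one-line pkg install, using the package name given above:)

Code:
# pull in math.h and friends before building netatalk
pkg install header-math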
 
I had a problem during the Netatalk 3.0 installation... Even though the napp-it script was telling me that the netatalk installation was successful, the package was not even built.

So I tried to build Netatalk manually and realized I was missing the math.h file, so the compilation failed, and so did the make install afterwards.

I just installed the "header-math" package and everything was fine afterwards.

Hope it can help someone.

Thanks,
added to the regular napp-it AFP installer
 
I can't seem to get SMTP TLS working.

napp-it shows:

Code:
Software error:
invalid SSL_version specified at /usr/perl5/site_perl/5.10.0/IO/Socket/SSL.pm line 332
For help, please send mail to this site's webmaster, giving this error message and the time and date of the error.

[Tue Aug 7 01:32:24 2012] admin.pl: invalid SSL_version specified at /usr/perl5/site_perl/5.10.0/IO/Socket/SSL.pm line 332

Code:
login as: root
Using keyboard-interactive authentication.
Password:
Last login: Tue Aug  7 01:31:04 2012
OpenIndiana (powered by illumos)    SunOS 5.11    oi_151a5    June 2012
You have new mail.
root@openindiana:~# sudo perl -MCPAN -e shell
Terminal does not support AddHistory.

cpan shell -- CPAN exploration and modules installation (v1.9800)
Enter 'h' for help.

cpan[1]> install Net::SMTP::TLS
CPAN: Storable loaded ok (v2.18)
Reading '/root/.cpan/Metadata'
  Database was generated on Tue, 07 Aug 2012 04:47:04 GMT
CPAN: Module::CoreList loaded ok (v2.13)
Net::SMTP::TLS is up to date (0.12).

"You have new mail in /var/mail/root" - what does this mean

"'YAML' not installed"

I have already installed Net::SMTP::TLS using the instructions provided in the napp-it TLS help readme.

EDIT: found the fix.
OpenIndiana 151a5 and the newer SSL don't play well together?

Code:
# Install an older version of IO::Socket::SSL (run from within the cpan shell) so TLS works
install http://search.cpan.org/CPAN/authors/id/S/SU/SULLR/IO-Socket-SSL-1.68.tar.gz
(from Zackreed)
 
I get the expected performance over SMB/CIFS shares - anywhere from 79-100 MB/s transfer rates. The HBA is the IBM M1015 card in IT mode mentioned above.

coolrunnings, some questions about your setup:

- Have you done ANY tweaking on the OI / ESXi / Windows / or switch side?
- The ZIL/L2ARC SSD - is it attached to your M1015? (I thought an MLC SSD as ZIL was a no-no?)
- Do you use the OI VM as your fileserver for your network?


I was only able to achieve such speeds when using iSCSI; over SMB I get 50/60 MB/s. (I don't have a ZIL or L2ARC yet, but I'm unsure if they would boost SMB write speeds.)
 
"You have new mail in /var/mail/root" - what does this mean

This file is used for mail and script errors.
You can display the content via cat /var/mail/root
and delete it, if not important, via rm /var/mail/root

Thanks for this info about TLS and OI 151a5
 
Also, I have pool version 5000. As long as you don't change the OS or need to port it to another system, it's OK.
 
Ugh, finally had my storage back online after flashing an M1015 to the IT firmware, then the expander stopped working, so I've got an Intel one on the way! I was freaking out because it said the pool was unavailable. Hope this finally fixes it once and for all!
 
coolrunnings, some questions about your setup:

- Have you done ANY tweaking on the OI / ESXi / Windows / or switch side?
- The ZIL/L2ARC SSD - is it attached to your M1015? (I thought an MLC SSD as ZIL was a no-no?)
- Do you use the OI VM as your fileserver for your network?


I was only able to achieve such speeds when using iSCSI; over SMB I get 50/60 MB/s. (I don't have a ZIL or L2ARC yet, but I'm unsure if they would boost SMB write speeds.)

1. It's pretty well bone stock. No jumbo frames, no partition alignment, nothing tweaked in ESXi... The VM has 8GB of RAM assigned to it in ESXi. It's running OpenIndiana 151a5 with all the latest updates applied and VMware tools installed.

2. The ZIL SSD is attached to the M1015, as are the other 4 drives. I had heard that a ZIL can wear out an SSD pretty quickly, but I hadn't heard not to use an MLC model. This is just for a lab anyway. That said, I'd really like to know how to under-provision the drive to increase its lifespan. It's a 128GB model and I'm told you only really need about 20GB...

3. Yes, the OI VM is used as the network fileserver as well as the datastore for the VMs that I use for testing. I read that the ZIL doesn't seem to help SMB performance, but it does seem to make a nice difference in VM performance. I haven't been able to test this scientifically, however.
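(One common way to under-provision for log duty, sketched with placeholder device names: partition the SSD so only a small slice is handed to the pool and the rest of the flash is never written. Whether the leftover space really helps wear leveling depends on the drive, ideally after a secure erase.)

Code:
# after carving a ~20GB slice (s0 here is an example) on the SSD with format -> partition,
# give the pool only that slice as the log device; the rest stays unallocated
zpool add tank log c4t5d0s0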
 
1. It's pretty well bone stock. No jumbo frames, no partition alignment, nothing tweaked in ESXi... The VM has 8GB of RAM assigned to it in ESXi. It's running OpenIndiana 151a5 with all the latest updates applied and VMware tools installed.

2. The ZIL SSD is attached to the M1015, as are the other 4 drives. I had heard that a ZIL can wear out an SSD pretty quickly, but I hadn't heard not to use an MLC model. This is just for a lab anyway. That said, I'd really like to know how to under-provision the drive to increase its lifespan. It's a 128GB model and I'm told you only really need about 20GB...

3. Yes, the OI VM is used as the network fileserver as well as the datastore for the VMs that I use for testing. I read that the ZIL doesn't seem to help SMB performance, but it does seem to make a nice difference in VM performance. I haven't been able to test this scientifically, however.

Two more questions popped up in my head (sorry for the shakedown):
- When you create a ZFS folder, do you keep sync=standard?
- Any settings you've enabled on your M1015 after you flashed it to IT, or did you just disable the BIOS? (Enable write cache, if that is even possible after flashing it to IT?)


Thanks for the feedback on the other questions. I'm currently on holiday, but the problem is still racking my brain: why am I unable to get better performance with state-of-the-art hardware (MSI Z77 mobo, Core i5 3550, 32GB, M1015, 2x 7200rpm Seagate 2TB disks; the same problems occurred with Nexenta as with the OI all-in-one)? You have this speed, so it has to be possible, I guess.
I'll keep thinking of ways to exclude stuff; if that doesn't help I'll open a new topic here :)
(any tips welcome of course)

I read about the MLC SSD here:
http://constantin.glez.de/blog/2011...estions-about-flash-memory-ssds-and-zfs#types

This whole page has a lot of info, maybe also what you were looking for on under-provisioning your SSD?
 
Weird Windows ACLs with napp-it + OI

setup:
napp-it + OI 151a5 all-in-one on esxi 5
supermicro x9scm-iif
intel xeon e3-1230v2
32GB 1600mhz ECC unregistered
2x m1015 w/ it firmware
10x hitachi 3TB drive storage
1x 128GB crucial m4 zil
1x 128GB crucial m4 l2arc
1x 80GB intel x25-m boot drive


I'm seeing this on the command line when I do ls -l:
??????????? ? ? ? ? ? 8-8 新作19連發

when it should look something like this?
drwxrwxrwx+ 2 squall staff 49 2012-01-06 03:39 2007년 무대영상 By KaiRaKu

I have the "squall" user listed under SMB-User;
it is also the username that I log in with on my Windows 7 box.

Under the ZFS folder, I have this set on the folder
user:squall:full_set:fd-----:allow

Under Extensions=>ACL on folders, i have this set
2 user:squall rwxpdDaARWcCos full_set rd(acl,att,xatt) wr(acl,att,xatt,own) add(fi,sdir) del(yes,child) x, s file,dir allow delete


I'm not sure what I'm missing at this point.

I'm also seeing fairly bad read speed from the Windows box; not sure if it's the NIC though, as the NIC on the Windows box is onboard Realtek. Going to try with an Intel NIC and see if read speed improves.

But I'm really stuck on the permission issues, any ideas?

thanks
 
Two more questions popped up in my head (sorry for the shakedown):
- When you create a ZFS folder, do you keep sync=standard?
- Any settings you've enabled on your M1015 after you flashed it to IT, or did you just disable the BIOS? (Enable write cache, if that is even possible after flashing it to IT?)


Thanks for the feedback on the other questions. I'm currently on holiday, but the problem is still racking my brain: why am I unable to get better performance with state-of-the-art hardware (MSI Z77 mobo, Core i5 3550, 32GB, M1015, 2x 7200rpm Seagate 2TB disks; the same problems occurred with Nexenta as with the OI all-in-one)? You have this speed, so it has to be possible, I guess.
I'll keep thinking of ways to exclude stuff; if that doesn't help I'll open a new topic here :)
(any tips welcome of course)

I read about the MLC SSD here:
http://constantin.glez.de/blog/2011...estions-about-flash-memory-ssds-and-zfs#types

This whole page has a lot of info, maybe also what you were looking for on under-provisioning your SSD?

Thanks for the info. So you got me thinking a little bit there. I had a spare server sitting around here, so I loaded up OpenIndiana 151a5 and the latest version of napp-it. I put 4x 500GB RE4 drives in it and set it up as 2 sets of mirrors. I tested from my Windows 7 box (with an SSD) to it, and Windows reads around 90 MB/s. I tried with Unstoppable Copier and it reads closer to 67 MB/s until it gets to the last little bit, then drops to 20 MB/s. Not exactly sure what to make of that. This server is running an old Xeon 2.4GHz socket 775 and has 2GB of RAM. It's running off the Intel ports in AHCI and has no ZIL. Not sure if those numbers help or not...

Regarding tweaks on the original setup we were talking about: no, I didn't do anything special to the M1015. It has been flashed to IT mode as per the instructions on ServeTheHome.com and its BIOS is disabled. Sync is kept at standard, as disabling it removes much of the reason I use ZFS. PM me if you want me to run any tests for you for comparison purposes.
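(If you ever want to see how much sync writes cost on that box, the property can be checked and toggled per dataset; the dataset name below is just an example:)

Code:
# per-dataset check/toggle of the sync behaviour
zfs get sync tank/vmstore
zfs set sync=standard tank/vmstore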
 
Thanks mate,
I'm currently still on holiday but I'll be back next week, so I might take you up on that.
(My current "production" OS is Nexenta with the passed-through M1015 and 8GB of RAM, which is currently holding my production VMs over NFS.) I built the OI VM on my local SSD on pure VMDKs, expecting at least 80/90 MB/s.
I was hoping Nexenta v4 would be out, but I've heard rumors saying before the end of the year :( .

Next week I'll first go over my setup and try some different tools to test copy speed.
I'll do most of the copy tests from a VM inside my ESXi machine to exclude my GbE switch (HP 1810, 24 ports).
I also found this tool which might help in comparing results (http://808.dk/?code-csharp-nas-performance).

EDIT: I'm also curious to test and see if I get different results between SMB and NFS (using this also as a post-it reminder for later :) )
 
Weird Windows ACLs with napp-it + OI


I'm seeing this on the command line when I do ls -l:
??????????? ? ? ? ? ? 8-8 新作19連發

when it should look something like this?
drwxrwxrwx+ 2 squall staff 49 2012-01-06 03:39 2007년 무대영상 By KaiRaKu


I'm not sure what I'm missing at this point.

I'm also seeing fairly bad read speed from the Windows box; not sure if it's the NIC though, as the NIC on the Windows box is onboard Realtek. Going to try with an Intel NIC and see if read speed improves.

But I'm really stuck on the permission issues, any ideas?

thanks

I can't see your command, but:

Use the -v option to display ACLs: ls -dv

-v   The same as -l, except that verbose ACL information is displayed as well as the -l output. ACL information is displayed even if the file or directory doesn't have an ACL.

-V   The same as -l, except that compact ACL information is displayed after the -l output.

(With CIFS, only look at the ACLs, because it works like Windows: ACLs only.)

To set ACLs in a workgroup:
- do not create any idmappings
- create a user (napp-it menu User)
- share a dataset with guest=disabled
- keep the share-ACL defaults (everyone@=full)
- set the folder ACL of the dataset to user or everyone@=full/modify

If you want to connect from Windows without a login:
- create a user with the same pw as the one you use locally on Windows
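(As a concrete sketch of that last ACL step from the shell, using the Solaris NFSv4-ACL chmod and the user name from the post above; the path is an example, and note that A= replaces the whole ACL:)

Code:
# replace the folder ACL: full access for squall, modify for everyone, inherited to new files/dirs
/usr/bin/chmod A=user:squall:full_set:fd:allow,everyone@:modify_set:fd:allow /tank/data
# check the result with the verbose listing
/usr/bin/ls -dv /tank/data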

About performance:
- If you have a Realtek: it's slower than Intel; use the newest driver or it may be even worse
- do not use any copy tools on Windows, they may slow down with ZFS
- http://808.dk/?code-csharp-nas-performance is good for tests
 
@coolrunnings, @Gea,
NICs are from Intel (82571EB). (The MSI mobo has a Realtek onboard which I use for the connection to my modem.)

These are the dd bench results of my test OI napp-it VM where (for now) everything is on a VMDK (on a local SSD).


EDIT: Added NasTest and VM Network stats

The VM network shows a top of around 70 MB/s, then drops, then spikes up again. (Does this explain the average write of 50 MB/s? Why the delay then?)

(Before I started the test I added a 5GB VMDK write cache, located on the same SSD, to the pool.)

Drivin' me nuts that I have worse performance than an HP N40 microserver :(
 
@coolrunnings, @Gea,

The VM network shows a top of around 70 MB/s, then drops, then spikes up again. (Does this explain the average write of 50 MB/s? Why the delay then?)

(Before I started the test I added a 5GB VMDK write cache, located on the same SSD, to the pool.)

Drivin' me nuts that I have worse performance than an HP N40 microserver :(

If you want to compare the performance of a virtualized NAS with a barebone NAS,
you must:

on ESXi
- use a separate controller that you pass through to OI
- optionally use the vmxnet3 driver for best network performance over ESXi
- assign as much RAM to your OI VM as you have used with your barebone NAS

Storage on a vmdk is not as fast, and an extra ZIL on the same disk as your storage is also not helpful.
For a pure fileserver a ZIL is not used at all. A ZIL is only used for sync write requests
(e.g. databases, or ESXi, which demands sync writes on NFS datastores for best data security).


About the "write spikes" with ZFS:
On ZFS, all non-sync write requests go to RAM, where they are collected.
After 5s they are written out as one large sequential write. This is done for better performance,
especially with small random writes.

With sync writes this is the same, but each write must additionally be committed to the ZIL (a separate high-speed ZIL,
otherwise a default ZIL on the same pool is used) before the next can occur. This is the reason why you need a
DRAM ZIL if you want to keep non-sync performance with the security of sync writes.
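(You can actually watch that batching from the console; the pool name below is an example:)

Code:
# 1-second samples; async writes show up as periodic bursts instead of a steady stream
zpool iostat -v tank 1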
 
I've been toying with napp-it running inside of VBox to become more familiar with it, and now we are at a point where we need to pull the trigger in the next week on some hardware for the server running our PHD Backup appliances. Can someone point me to a good link describing how I would put the admin GUI on one network and the NFS shares on a totally different network? All of our shared storage runs on a 10. network and normal traffic is on a 192. network. I don't want to have the GUI on the 10. network because I want to stay totally isolated as we currently sit.

I've found posts about setting static IPs on OI, but the split between GUI and NFS has me a bit stumped. Any suggestions?
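(Not a full answer, but the OI side of giving the box a leg on each network is just two interfaces/addresses via ipadm; interface names and addresses below are placeholders. As far as I know the napp-it GUI on port 81 will answer on any configured address, so keeping it off the 10. network would still need a firewall rule on that side, e.g. via ipfilter.)

Code:
# management leg and storage-only leg (hypothetical names/addresses)
ipadm create-if e1000g0
ipadm create-addr -T static -a 192.168.1.50/24 e1000g0/v4
ipadm create-if e1000g1
ipadm create-addr -T static -a 10.0.0.50/24 e1000g1/v4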
 