OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Sorry, I misread the original post. The autoreplace property is for something else; zpool spare drives should automatically come into service to replace failing drives. I think the key point here is that the drive in the pasted output is flagged as removed by the administrator, which does not trigger a spare. Why that happened is another question...
 
Probably because it just "died" rather than developing bad sectors or the like, so there was nothing for the system to "react" to. The drive simply vanished, and the system assumes it was removed physically.
 
I'm using OpenIndiana Build 148 (Desktop) + napp-it and loving it so far.
But I'm having trouble getting a static IP to work properly. After configuring it through the GUI it works, but after a reboot I lose my internet connection; locally it still works, though. Does anyone know what might cause this?
It's easily fixed by just going to the settings and clicking OK, but it's still a little annoying.
 
Code:
NAME  PROPERTY     VALUE    SOURCE
fs1   autoreplace  off      default

It seems that autoreplace only affects disks reinserted in the same port, not the spares.
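For reference, flipping the property looks like this (pool name fs1 taken from the output above; as noted, it only covers a replacement disk in the same slot, not the spares):

Code:
zpool set autoreplace=on fs1
zpool get autoreplace fs1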

That's correct for autoreplace;
see http://download.oracle.com/docs/cd/E19253-01/819-5461/gfifk/index.html

In case of a disk failure an attached spare is used automatically - only in some cases
(if the spare is different from the failed disk) I have had problems with the spares.

If you are uncertain, try a manual disk replace: failed -> new.
You will get an error if the spare is "incompatible".


If you want to see optional (critical) user actions, read the pool history.
If there is no entry, the disk has simply died.
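Both steps as a sketch (pool and device names here are only placeholders):

Code:
# replace the failed disk with the new one by hand
zpool replace tank c2t3d0 c2t5d0

# check the pool history for administrative actions (remove/replace/etc.)
zpool history -l tank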

Gea
 
That's correct for autoreplace;
see http://download.oracle.com/docs/cd/E19253-01/819-5461/gfifk/index.html

In case of a disk failure an attached spare is used automatically - only in some cases
(if the spare is different from the failed disk) I have had problems with the spares.

If you are uncertain, try a manual disk replace: failed -> new.
You will get an error if the spare is "incompatible".


If you want to see optional (critical) user actions, read the pool history.
If there is no entry, the disk has simply died.

Gea

I did check the pool history; nothing there except the pool creation and the scrubs that were run.

Now I am kind of worried. This time I was around and it happened in a controlled environment, but the whole point of having spares is that they cover failed disks on their own. It is quite worrisome if that does not work as intended. Is there any way to shore up reliability in this department?
 
I'm using OpenIndiana Build 148 (Desktop) + napp-it and loving it so far.
But I'm having trouble getting a static IP to work properly. After configuring it through the GUI it works, but after a reboot I lose my internet connection; locally it still works, though. Does anyone know what might cause this?
It's easily fixed by just going to the settings and clicking OK, but it's still a little annoying.

I have sometimes asked myself how it is possible to make something as simple as assigning a static IP as complicated as
it is in Solaris. But it is as it is. It is one of the worst things I know (besides setting ACLs in Solaris and/or Windows).

Problems via CLI:
- You have two services, network:default (manual, the traditional way) and network:nwam (auto-magic, the newer one).
- Traditionally a persistent manual IP was set with ifconfig and a file like /etc/hostname.e1000g0 (still used by Nexenta)
- OI and SE11 use a tool named ipadm to set persistent manual IP settings
- there are several files involved

Doing it via the Solaris UI:
- the settings in the Solaris UI are chaotic.

What works best for me:
- currently do not use napp-it for this on OI (it only supports the traditional way, not ipadm, and that does not survive a reboot on OI)

- use the OI live version, use nwam, and set nwam to a manual IP (unless you have more than one adapter, then network:default may be better; see the sketch at the end of this post)
- in the OpenIndiana menus (Applications, Places, System...) click on the network icon (left of the language indicator)
- set the IP to manually assigned and set your gateway
(set the value and press Enter to confirm, then recheck; it often does not take because Enter was not pressed, or was pressed at the wrong moment)

- click on the network icon again
- click on Locations, select Automatic and edit it
- set DNS to manual, set a domain and server (it does not matter which) and under Search enter a DNS server (you may use Google DNS: 8.8.8.8)

- optionally reboot
- and hope the guys at Oracle/Illumos will think about usability!
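If you prefer the CLI route with network:default, a minimal sketch (the interface name e1000g0 and all addresses are only examples):

Code:
# switch from nwam to the classic manual network service
svcadm disable network/physical:nwam
svcadm enable network/physical:default

# persistent static address via ipadm
# (if the interface does not exist yet, create it first, e.g. ipadm create-if e1000g0)
ipadm create-addr -T static -a 192.168.1.10/24 e1000g0/v4

# persistent default route and a DNS server
route -p add default 192.168.1.1
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
# make sure /etc/nsswitch.conf has "hosts: files dns"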


Gea
 
netcat / replication speed

Gea..
I'm finding the speed to replicate a large ZFS folder very slow. Transferring and replicating 5 TB of data between systems is going to take a week or more. Is there a faster protocol than netcat that could speed things up? At this rate I'm better off syncing via Windows... twice as fast.
Thanks!
 
I have read quite a bit about this all-in-one ESXi server solution and after getting my first file server up and running with OpenIndiana, I'm ready to try out an ESXi 4.1U1 (free license) based all-in-one server. I got a Supermicro X8DTL-3 with 16 GB RAM and 2x XEON L5518 as a test box and ran into the following issues so far:
  1. Gea's all-in-one.pdf guide lists adding the vmxnet3 adapter in two locations: chapter 7.8 (before installing the VMware tools) and then again in ch. 7.10: "That‘s all, you can now add a VMXnet3 highspeed network driver to OpenIndiana"
    Is there any difference, if I add that driver in 7.10?
  2. After installing the VMware tools for the first time (and rebooting), I didn't notice any improvements - I still have to press CTRL+ALT to leave the console window in ESXi. As I wasn't sure whether the install really worked (since I run into the issue with the OI upgrade manager telling me that I am on a Live CD, which I am not) I tried to install again, but got the message that the tools are already installed. So what benefit do I get from the VMware tools?
  3. Now, where I am stuck is the network setup:
    I got napp-it installed, created two SMB shares (777) and enabled SMB and NFS server services (and rebooted). So far, so good. I am in a network with a DHCP server (Win 2008) and ESXi as well as OI automatically got an IP address assigned (10.0.47.something) and automatically found the local DNS server (also good). I can access the shares I created in napp-it via \\10.0.47.142\share from my Windows box.
    Now, when I created the vmxnet3 adapter, I assigned it a static, private IP address (192.168.0.1), but how would I configure OI/napp-it to use this extra network adapter and publish its shares there as well, so that ESXi can find them and use them as a datastore for the VMs? When I try to create a datastore via the address that works from 'the outside' (Windows), I get this error (but since I run ESXi, I cannot configure the firewall settings as suggested in the link). Also, since this IP is not guaranteed, this would only work for so long...

    What am I missing here?
    My network:
    networking.png

Any ideas to get me going again are highly appreciated!
-TLB

EDIT: I guess, the main question behind #3 is: How do I configure the vmxnet3 adapter in OI to use a local, static IP address that ESXi can 'see' while it still uses the DHCP for the external (physical network, E1000)?
 
I think the idea was to remove the e1000 adapter in the OI VM in esxi and add a vmxnet3 one, then reboot the OI VM.
 
I think the idea was to remove the e1000 adapter in the OI VM in esxi and add a vmxnet3 one, then reboot the OI VM.
But I need an IP address from the DHCP server, so that I can get to the OI SAN VM 'from the outside' (e.g. to perform backups or to control napp-it), right? If the IP address of the vmxnet3 adapter is not a static one (i.e. assigned by the DHCP), wouldn't I lose my datastore if the DHCP 'decides' to re-shuffle all IPs in my subnet?
That's why I'm thinking that I need both adapters - the vmxnet3 one for the connection to ESXi, the e1000 one to communicate with the real physical network.

-TLB
 
I have read quite a bit about this all-in-one ESXi server solution and after getting my first file server up and running with OpenIndiana, I'm ready to try out an ESXi 4.1U1 (free license) based all-in-one server. I got a Supermicro X8DTL-3 with 16 GB RAM and 2x XEON L5518 as a test box and ran into the following issues so far:
  1. Gea's all-in-one.pdf guide lists adding the vmxnet3 adapter in two locations: chapter 7.8 (before installing the VMware tools) and then again in ch. 7.10: "That‘s all, you can now add a VMXnet3 highspeed network driver to OpenIndiana"
    Is there any difference, if I add that driver in 7.10?
  2. After installing the VMware tools for the first time (and rebooting), I didn't notice any improvements - I still have to press CTRL+ALT to leave the console window in ESXi. As I wasn't sure whether the install really worked (since I run into the issue with the OI upgrade manager telling me that I am on a Live CD, which I am not) I tried to install again, but got the message that the tools are already installed. So what benefit do I get from the VMware tools?
  3. Now, where I am stuck is the network setup:
    I got napp-it installed, created two SMB shares (777) and enabled SMB and NFS server services (and rebooted). So far, so good. I am in a network with a DHCP server (Win 2008) and ESXi as well as OI automatically got an IP address assigned (10.0.47.something) and automatically found the local DNS server (also good). I can access the shares I created in napp-it via \\10.0.47.142\share from my Windows box.
    Now, when I created the vmxnet3 adapter, I assigned it a static, private IP address (192.168.0.1), but how would I configure OI/napp-it to use this extra network adapter and publish its shares there as well, so that ESXi can find them and use them as a datastore for the VMs? When I try to create a datastore via the address that works from 'the outside' (Windows), I get this error (but since I run ESXi, I cannot configure the firewall settings as suggested in the link). Also, since this IP is not guaranteed, this would only work for so long...

    What am I missing here?
    My network:
    networking.png

Any ideas to get me going again are highly appreciated!
-TLB

EDIT: I guess, the main question behind #3 is: How do I configure the vmxnet3 adapter in OI to use a local, static IP address that ESXi can 'see' while it still uses the DHCP for the external (physical network, E1000)?

Some thoughts about this:
In the ESXi VM settings you can add NICs to your VM, either of the e1000 or the VMXNET3 type.
The second is much faster but needs a special driver in your guest system.

If you add a VMXNET3 adapter and have not installed the VMware tools, OI will discover
an unknown network device. After installing the tools and a reboot, it will use it.

In general, the VMware tools add optimized drivers for your guest OS plus management
capabilities like remote shutdown.

If you have more than one NIC, you can connect both to the same virtual switch (like you have done)
or to different virtual switches (like you could with physical switches) to have physically separated networks.
(You can do the same with VLANs on one switch.)

This switch cabling is a completely different question from how to set the IP settings in OI.

What I would do:
use one VMXNET3 adapter only, give it a fixed IP and use it for everything.
If you need separate networks, use VLANs.

Or:
you can try to set the IP for the second adapter manually in OI with the network service nwam;
read thread 767 about it.

Or:
set the network service to default and set persistent settings via ipadm (see the sketch below).


If you use more than one IP or have more than one NIC, all server services are reachable on all of them automatically,
unless you restrict that, e.g. with a firewall setting. IP settings are one of the rare things that are really annoying in Solaris*.
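As a rough sketch for the two-adapter case with ipadm (the interface names e1000g0/vmxnet3s0 and the addresses are assumptions; the VMXNET3 interface only shows up after the VMware tools are installed):

Code:
# external adapter keeps getting its address from the Windows DHCP server
ipadm create-addr -T dhcp e1000g0/dhcp

# internal adapter gets a fixed address on the ESXi-only storage network
ipadm create-addr -T static -a 192.168.0.1/24 vmxnet3s0/v4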


Gea
 
netcat / replication speed

Gea..
I'm finding the speed to replicate a large ZFS folder very slow. Transferring and replicating 5 TB of data between systems is going to take a week or more. Is there a faster protocol than netcat that could speed things up? At this rate I'm better off syncing via Windows... twice as fast.
Thanks!

You cannot really compare the two.

Disk or filesystem duplication/replication is always slower than a simple file copy with rsync or robocopy
- especially with only a few large files. But replication creates a real 1:1 copy with all volumes, snaps, shares
and other ZFS properties like dedup, compression and ACLs, and it must copy more data for that reason.
With rsync, for example, you lose even simple file attributes like ACLs.

Besides the exact 1:1 copy, you have two more advantages with replication:

1. It is based on snaps, so you can do it under load (no open-files problem)
2. Only the initial sync is a time problem. The next sync replicates only the changed data blocks, based on the snap delta.

That is a huge difference. If you have a 100 GB VM and just start it, a file-based replication must
copy the complete 100 GB next time, while ZFS replication copies just a few KB within seconds.
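The mechanism in commands, as a sketch (dataset, snapshot and host names are placeholders; plain ssh transport shown for readability):

Code:
# initial full replication of a snapshot
zfs snapshot tank/vm@rep-1
zfs send tank/vm@rep-1 | ssh backuphost zfs receive -F backup/vm

# later runs only transfer the blocks changed since the previous snapshot
zfs snapshot tank/vm@rep-2
zfs send -i tank/vm@rep-1 tank/vm@rep-2 | ssh backuphost zfs receive backup/vm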


About ZFS transport methods,
I see three options:

1. via mbuffer, I suppose the fastest way due to extensive buffering;
I tried it first but ran into problems/interrupts with large ZFS datasets

2. via netcat:
no buffering, but also no protocol overhead

3. via ssh:
encrypted, a lot of overhead, the slowest way
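Option 1 as a sketch (ports, buffer sizes and names are only examples; start the receiving side first):

Code:
# receiving box: listen on a TCP port, buffer, feed zfs receive
mbuffer -s 128k -m 1G -I 8023 | zfs receive backup/vm

# sending box: stream the snapshot through mbuffer to the receiver
zfs send tank/vm@rep-1 | mbuffer -s 128k -m 1G -O backuphost:8023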

Gea
 
Napp-it can display serial numbers for SATA drives connected via ICH10, but not via a SAS controller such as the 9211-8i. Is this something that can be overcome?

napp-it can't display the SN from a Supermicro AOC-USAS-L8i; however, with smartmontools I can view the SN!

execute:
Code:
smartctl -a /dev/rdsk/DISKID

Get the DISKID (c3t..., etc.) from zpool status:

NAME          STATE     READ WRITE CKSUM
WAREHOUSE     ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    c3t2d0p0  ONLINE       0     0     0
    c3t0d0p0  ONLINE       0     0     0
    c3t1d0p0  ONLINE       0     0     0
    c3t3d0s2  ONLINE       0     0     0
 
Gea,

I ask you directly because I think you are using the same hardware.

When I boot up my all-in-one box I get an error at the LSI initialization:

SAS Discovery Error 0x000040000 on Adapter 0, Port 3
SAS Discovery Error 0x000040000 on Adapter 0, Port 2
SAS Discovery Error 0x000040000 on Adapter 0, Port 1
SAS Discovery Error 0x000040000 on Adapter 0, Port 0
SAS Discovery Error 0x000040000 on Adapter 0, Expander ....... 3
SAS Discovery Error 0x000040000 on Adapter 0, Expander ....... 2
SAS Discovery Error 0x000040000 on Adapter 0, Expander ....... 1
SAS Discovery Error 0x000040000 on Adapter 0, Expander ....... 0

This only started happening after I flashed the controller to IT mode. I am using an X8DTH-6F-0 motherboard with the onboard SAS2008 controller, and a Supermicro SC846E26-R1200B chassis. I am only using one of the SAS HBAs to connect to the backplane.

I was curious whether this is something you have experienced as well. It still recognizes all the drives and I was able to get Nexenta working with it, but it makes me a little uneasy to see it every time I boot.

Also, while I have your attention: do you use MPxIO on any of your all-in-ones, or are you aware of any issues with it on the Solaris platform?

Thank You for any response.

Have a great day!
 
I have sometimes asked myself how it is possible to make something as simple as assigning a static IP as complicated as
it is in Solaris. But it is as it is. It is one of the worst things I know (besides setting ACLs in Solaris and/or Windows).

Problems via CLI:
- You have two services, network:default (manual, the traditional way) and network:nwam (auto-magic, the newer one).
- Traditionally a persistent manual IP was set with ifconfig and a file like /etc/hostname.e1000g0 (still used by Nexenta)
- OI and SE11 use a tool named ipadm to set persistent manual IP settings
- there are several files involved

Doing it via the Solaris UI:
- the settings in the Solaris UI are chaotic.

What works best for me:
- currently do not use napp-it for this on OI (it only supports the traditional way, not ipadm, and that does not survive a reboot on OI)

- use the OI live version, use nwam, and set nwam to a manual IP (unless you have more than one adapter, then network:default may be better)
- in the OpenIndiana menus (Applications, Places, System...) click on the network icon (left of the language indicator)
- set the IP to manually assigned and set your gateway
(set the value and press Enter to confirm, then recheck; it often does not take because Enter was not pressed, or was pressed at the wrong moment)

- click on the network icon again
- click on Locations, select Automatic and edit it
- set DNS to manual, set a domain and server (it does not matter which) and under Search enter a DNS server (you may use Google DNS: 8.8.8.8)

- optionally reboot
- and hope the guys at Oracle/Illumos will think about usability!


Gea

Thank you, got it right now!
 
napp-it can't display the SN from a Supermicro AOC-USAS-L8i; however, with smartmontools I can view the SN!

execute:
Code:
smartctl -a /dev/rdsk/DISKID

Get the DISKID (c3t..., etc.) from zpool status:

NAME          STATE     READ WRITE CKSUM
WAREHOUSE     ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    c3t2d0p0  ONLINE       0     0     0
    c3t0d0p0  ONLINE       0     0     0
    c3t1d0p0  ONLINE       0     0     0
    c3t3d0s2  ONLINE       0     0     0

I can't seem to get smartctl to work (I'm using Solaris 11 Express):

lyle@prometheus:~$ sudo smartctl -a /dev/rdsk/c9t0d0
smartctl 5.40 2010-10-16 r3189 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

/dev/rdsk/c9t0d0: Unable to detect device type
Smartctl: please specify device type with the -d option.

Use smartctl -h to get a usage summary

lyle@prometheus:~$ sudo smartctl -a /dev/rdsk/c9t0d0 -d ata
smartctl 5.40 2010-10-16 r3189 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net


#######################################################################
ATA command routine ata_command_interface() NOT IMPLEMENTED under Solaris.
Please contact [email protected] if
you want to help in porting smartmontools to Solaris.
#######################################################################

Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)

A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

lyle@prometheus:~$ sudo smartctl -a /dev/rdsk/c9t0d0 -d scsi
smartctl 5.40 2010-10-16 r3189 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

Serial number: ML0221F304WXPD
Device type: disk
Local Time is: Thu May 26 11:13:44 2011 CDT
Device supports SMART and is Disabled
Temperature Warning Disabled or Not Supported
Log Sense failed, IE page [aborted command]
scsiGetStartStopData Failed [aborted command]

Error Counter logging not supported
No self-tests have been logged

Does anyone know how I can get SMART info? (The drive is a Hitachi 2TB 5K3000 attached directly to a mobo SATA port.)
 
napp-it can't display the SN from a Supermicro AOC-USAS-L8i; however, with smartmontools I can view the SN!

execute:
Code:
smartctl -a /dev/rdsk/DISKID

Get the DISKID (c3t..., etc.) from zpool status:

NAME          STATE     READ WRITE CKSUM
WAREHOUSE     ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    c3t2d0p0  ONLINE       0     0     0
    c3t0d0p0  ONLINE       0     0     0
    c3t1d0p0  ONLINE       0     0     0
    c3t3d0s2  ONLINE       0     0     0


I get this result when running that command:

Code:
/dev/rdsk/c0t5000C5002624873Fd0: Unable to detect device type
Smartctl: please specify device type with the -d option.
 
I get this result when running that command:

Code:
/dev/rdsk/c0t5000C5002624873Fd0: Unable to detect device type
Smartctl: please specify device type with the -d option.

I got that too and tried various -d options, ata, scsi, auto, etc.
 
Here is what works on my OI install with sata drives:

smartctl -a -d scsi -T permissive /dev/rdsk/c3t0d0
 
Here is what works on my OI install with sata drives:

smartctl -a -d scsi -T permissive /dev/rdsk/c3t0d0

This works for me as well, through a 9211-8i to SATA and SAS drives. What's stopping this from being part of the Disks tab in napp-it?
 
This works for me as well, through a 9211-8i to SATA and SAS drives. What's stopping this from being part of the Disks tab in napp-it?

Quite simple:
more developers are needed.

-> look at disklib.pl (creates the list) and get-disk.pl (collects the disk information);
these two libraries are involved :))

I am currently working on
- replication
- real-time two-way communication server <-> browser (websockets, HTML5)

Gea
 
Gea,

I ask you directly because I think you are using the same hardware.

When I boot up my all-in-one box I get an error at the LSI initialization:

SAS Discovery Error 0x000040000 on Adapter 0, Port 3
SAS Discovery Error 0x000040000 on Adapter 0, Port 2
SAS Discovery Error 0x000040000 on Adapter 0, Port 1
SAS Discovery Error 0x000040000 on Adapter 0, Port 0
SAS Discovery Error 0x000040000 on Adapter 0, Expander ....... 3
SAS Discovery Error 0x000040000 on Adapter 0, Expander ....... 2
SAS Discovery Error 0x000040000 on Adapter 0, Expander ....... 1
SAS Discovery Error 0x000040000 on Adapter 0, Expander ....... 0

This only started happening after I flashed the controller to IT mode. I am using an X8DTH-6F-0 motherboard with the onboard SAS2008 controller, and a Supermicro SC846E26-R1200B chassis. I am only using one of the SAS HBAs to connect to the backplane.

I was curious whether this is something you have experienced as well. It still recognizes all the drives and I was able to get Nexenta working with it, but it makes me a little uneasy to see it every time I boot.

Also, while I have your attention: do you use MPxIO on any of your all-in-ones, or are you aware of any issues with it on the Solaris platform?

Thank You for any response.

Have a great day!

I use that mainboard, but I do not have any expander (although I may buy one like
yours - with the new LSI SAS2 chipset).

What I would try:
connect the disks directly to the 2008 controller.

Otherwise ask at http://forums.servethehome.com/showthread.php?148-Intel-RES2SV240-24-port-SAS2-Expander-Wiki&

(that thread is about the Intel expander, but it seems to be the same chipset)

Gea
 
Running Solaris Express 2011 with napp-it 0.500g; I installed AFP with the command "wget -O - www.napp-it.org/afp | perl".

When stopping AFP this is the error I get:

stopping netatalk daemons:\c papd\c afpd\c cnid_metad\c atalkd\c .sudo: /etc/init.d/avahi-daemon: command not found

server overview:
afp-server : disabled
afp-avahi : online and advertise afp shares with zero-conf

Does anyone have a copy of the startup/shutdown /etc/init.d/avahi-daemon script?
 
Running Solaris Express 2011 with napp-it 0.500g; I installed AFP with the command "wget -O - www.napp-it.org/afp | perl".

When stopping AFP this is the error I get:

stopping netatalk daemons:\c papd\c afpd\c cnid_metad\c atalkd\c .sudo: /etc/init.d/avahi-daemon: command not found

server overview:
afp-server : disabled
afp-avahi : online and advertise afp shares with zero-conf

Does anyone have a copy of the startup/shutdown /etc/init.d/avahi-daemon script?

Ignore the error;
it is only relevant for Nexenta (I need to add an "if OS == Nexenta then ..." check).
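A defensive sketch for the stop script, which simply guards the call instead of detecting the OS:

Code:
# only invoke the avahi init script where it actually exists (Nexenta);
# on Solaris Express / OI it is not installed, so skip silently
if [ -x /etc/init.d/avahi-daemon ]; then
    /etc/init.d/avahi-daemon stop
fi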


Gea
 
Quite simple:
more developers are needed.

-> look at disklib.pl (creates the list) and get-disk.pl (collects the disk information);
these two libraries are involved :))

I am currently working on
- replication
- real-time two-way communication server <-> browser (websockets, HTML5)

Gea

My question may have come across a bit cranky, but that's not the case, I promise :) Your pace of development is actually quite impressive at the moment. I look forward to the progress of your current roadmap, and am quite satisfied in the knowledge that the missing disk stats aren't due to a technical roadblock.

Regarding dev contribution - I'm no real dev by any measure, so anything I'd try to do would just be smashing my face against code until I happened to luck upon some combination of copy/pasting and live trial/error to make something work, which wouldn't be much of a contribution anyway; so I'll try to stay on this side of the curtain for now.
 
Ignore the error;
it is only relevant for Nexenta (I need to add an "if OS == Nexenta then ..." check).


Gea

Then is there a way to shut down afp-avahi so I don't see my server being advertised on my iMac when there aren't any AFP shares?

Shouldn't it read:
afp-avahi : disabled

GREAT job btw! I really appreciate the UI you have put together and shared.
 
Hi.

I hope it is okay to ask here.

I have been thinking about using ZFS for my new NAS server. I already have a JBOD case and a server running ESXi that could run ZFS as a VM: http://www.supermicro.com/products/chassis/4U/847/SC847E16-RJBOD1.cfm

Will I be able to use this card with ZFS and my case?

http://www.supermicro.com/products/accessories/addon/AOC-USAS2-L8i.cfm?TYP=E

The case has 45 slots and no expander;
how would you connect 45 drives to an 8-channel controller?
-> you will need an expander and/or 6 controllers

Look for a case with an LSI SAS2 expander included.

Your Supermicro 2008 controller is the cheapest 2008 controller (I suppose it is quite
identical to the LSI 9211), but it is UIO (mounted on the wrong side), so you have to do some modifications on the brackets to mount it.

Gea
 
The case has 45 slots and no expander;
how would you connect 45 drives to an 8-channel controller?
-> you will need an expander and/or 6 controllers

Look for a case with an LSI SAS2 expander included.

Your Supermicro 2008 controller is the cheapest 2008 controller (I suppose it is quite
identical to the LSI 9211), but it is UIO (mounted on the wrong side), so you have to do some modifications on the brackets to mount it.

Gea

Actually, the e16 has a sas2 expander. One of these two cards should work fine with it, but you'll want to update their firmware:

http://www.lsi.com/storage_home/pro...pters/sas_hbas/external/sas9205-8e/index.html
http://www.lsi.com/storage_home/pro...pters/sas_hbas/external/sas9200-8e/index.html
 
You cannot really compare the two.

Disk or filesystem duplication/replication is always slower than a simple file copy with rsync or robocopy
- especially with only a few large files. But replication creates a real 1:1 copy with all volumes, snaps, shares
and other ZFS properties like dedup, compression and ACLs, and it must copy more data for that reason.
With rsync, for example, you lose even simple file attributes like ACLs.

Besides the exact 1:1 copy, you have two more advantages with replication:

1. It is based on snaps, so you can do it under load (no open-files problem)
2. Only the initial sync is a time problem. The next sync replicates only the changed data blocks, based on the snap delta.

That is a huge difference. If you have a 100 GB VM and just start it, a file-based replication must
copy the complete 100 GB next time, while ZFS replication copies just a few KB within seconds.


About ZFS transport methods,
I see three options:

1. via mbuffer, I suppose the fastest way due to extensive buffering;
I tried it first but ran into problems/interrupts with large ZFS datasets

2. via netcat:
no buffering, but also no protocol overhead

3. via ssh:
encrypted, a lot of overhead, the slowest way

Gea

Gea..
Thank you for your quick reply and valuable insight into this replication feature.
Is there no way to do a "reverse replication"? Since I can't set up a new job without creating a new folder (I can't target the original ZFS folder), is there a way to "time slide" from backup system B back to the original system A?

Thanks..
 
Just installed napp-it on top of OpenIndiana on my server. I would just like to thank you, Gea, for all the work you've put into this interface! Still in the midst of familiarizing myself with it, but it's pretty straightforward :)
 
Actually, the e16 has a sas2 expander. One of these two cards should work fine with it, but you'll want to update their firmware:

http://www.lsi.com/storage_home/pro...pters/sas_hbas/external/sas9205-8e/index.html
http://www.lsi.com/storage_home/pro...pters/sas_hbas/external/sas9200-8e/index.html

Yes, just like you said. The case has 2x SAS2 expanders, so it needs 2x SFF-8088, just like the 9205 you linked to.

The expanders are LSI SAS2X36 on the front, and I think it is the same for the back.

My disks are WD RE4-GP 2TB (WD2002FYPS).

I didn't see a compatibility report for the LSI card, so I'm not sure whether both the expanders and the drives are supported. I'm guessing the expanders are no problem since they are both LSI.
 
Yes, just like you said. The case has 2x SAS2 expanders, so it needs 2x SFF-8088, just like the 9205 you linked to.

The expanders are LSI SAS2X36 on the front, and I think it is the same for the back.

My disks are WD RE4-GP 2TB (WD2002FYPS).

I didn't see a compatibility report for the LSI card, so I'm not sure whether both the expanders and the drives are supported. I'm guessing the expanders are no problem since they are both LSI.

Shouldn't be a problem as long as you update the firmware on the adapter to P9.
 
Great, thanks a lot, Astronot. Good to have that sorted; I will try to get my hands on an LSI 9205.

Now that we have that sorted, can you or someone else suggest what OS to use for ZFS? I'm planning to make an all-in-one box with ESXi running one VM for ZFS and a few Windows VMs as well.

I'm not very into Linux and I would like something that is easy to set up and maintain. I see that ZFSguru is no longer being developed. So what should I use?

I don't know if it is important, but here is my setup:

1x SuperMicro SC846A-R1200B (changing the backplane to an expander version in the future)
1x 847E16-RJBOD1
1x MBD-X8DAH+-F-B
2x Intel Xeon E5606, 2.13 GHz - Quad Core/1066/8 MB
2x Heatsink SNK-P0038P (Rev. A & B) 2U+ DP Server
6x Kingston DDR3 ECC Reg, 1333 MHz, DR x4, 4 GB (24 GB total)
1x USB Stick 8GB (for ESXi)
2x Intel 510 Series SSD, 120 GB, 450/210 MB/sec (for VMs)
2x SAS EL2/EL1 Cascading Cable (External), 68cm CBL-0166L
1x LSI SAS 9211-8i SGL (for internal disks)
1x LSI SAS 9200-8e SGL (for disks in the JBOD case)
28x WD Caviar RE4-GP, 2 TB, 64 MB, SATA II
 
Great, thanks a lot, Astronot. Good to have that sorted; I will try to get my hands on an LSI 9205.

Now that we have that sorted, can you or someone else suggest what OS to use for ZFS? I'm planning to make an all-in-one box with ESXi running one VM for ZFS and a few Windows VMs as well.

I'm not very into Linux and I would like something that is easy to set up and maintain. I see that ZFSguru is no longer being developed. So what should I use?

I don't know if it is important, but here is my setup:

1x SuperMicro SC846A-R1200B (changing the backplane to an expander version in the future)
1x 847E16-RJBOD1
1x MBD-X8DAH+-F-B
2x Intel Xeon E5606, 2.13 GHz - Quad Core/1066/8 MB
2x Heatsink SNK-P0038P (Rev. A & B) 2U+ DP Server
6x Kingston DDR3 ECC Reg, 1333 MHz, DR x4, 4 GB (24 GB total)
1x USB Stick 8GB (for ESXi)
2x Intel 510 Series SSD, 120 GB, 450/210 MB/sec (for VMs)
2x SAS EL2/EL1 Cascading Cable (External), 68cm CBL-0166L
1x LSI SAS 9211-8i SGL (for internal disks)
1x LSI SAS 9200-8e SGL (for disks in the JBOD case)
28x WD Caviar RE4-GP, 2 TB, 64 MB, SATA II


Make sure the X8DAH+ will fit in that chassis, as it is a weird enhanced extended ATX form factor. The X8DTH-6F may be the closest properly sized board, and it actually lists your chassis as compatible.

I'd scrap the E5606 and go with an E5620 minimum for hyper-threading support, which benefits both ESXi and Solaris, depending on how you deploy.

Note that all of your hard drives except the 510 are SATA 3Gb/s, but your controllers/expanders support SATA 6Gb/s. If you have even a single SATA 3Gb/s drive connected to the expander, it will only maintain a 3Gb/s link to the HBA. Consider an alternative, such as Hitachi's 7K3000 (they make a 7K3000 'enterprise' model like the RE).

Also, where are you connecting your SSDs? If they're for VMs, I assume you're connecting them to the ICH10? If so, you can probably knock them down to the 320 series, as the ICH10 won't benefit from the SATA 3 (6Gb/s) interface of the 510 anyway.
 
Has anyone run ntpd in an OpenIndiana virtual machine? If so, do you ever see messages like this:

May 27 11:42:42 nas ntpd[282]: [ID 702911 daemon.notice] frequency error -512 PPM exceeds tolerance 500 PPM

These print every few minutes. The clock seems to be correct. I have 2-3 other VMs, none of which shows this behavior. VMware toolbox time sync is disabled in all cases. The only difference I can think of is that the OI VM has two vCPUs assigned; the others all have only one.
 
Make sure the X8DAH+ will fit in that chassis, as it is a weird enhanced extended ATX form factor. The X8DTH-6F may be the closest properly sized board, and it actually lists your chassis as compatible.

I'd scrap the E5606 and go with an E5620 minimum for hyper-threading support, which benefits both ESXi and Solaris, depending on how you deploy.

Note that all of your hard drives except the 510 are SATA 3Gb/s, but your controllers/expanders support SATA 6Gb/s. If you have even a single SATA 3Gb/s drive connected to the expander, it will only maintain a 3Gb/s link to the HBA. Consider an alternative, such as Hitachi's 7K3000 (they make a 7K3000 'enterprise' model like the RE).

Also, where are you connecting your SSDs? If they're for VMs, I assume you're connecting them to the ICH10? If so, you can probably knock them down to the 320 series, as the ICH10 won't benefit from the SATA 3 (6Gb/s) interface of the 510 anyway.

Thank you for your reply.

I already have the cases and the motherboard, and it fits fine, no problem there.

I also already have the CPUs, but I might upgrade them if they seem like a bottleneck in the future.

I also already have 35 of those 2TB disks from my current server, but I am considering getting some 3TB drives, then dividing things in two and gathering them in ZFS. Speed is not really a big concern, as I'm sure it will be faster than the NIC. Hitachi seems like a good bet; I just hope they work with my expanders, do you know of any experience with that? Also, what about 4K support and TLER? I read in the ZFSguru thread that it should be disabled. I think it is enabled on all enterprise drives, so do I have to flash all my drives, or what?

I also have the 510s; today I would have gone with the 320 for that reason. But now that I have them they will do, even on the ICH10.
 
Thank you for your reply.

I already have the cases and the motherboard, and it fits fine, no problem there.

I also already have the CPUs, but I might upgrade them if they seem like a bottleneck in the future.

I also already have 35 of those 2TB disks from my current server, but I am considering getting some 3TB drives, then dividing things in two and gathering them in ZFS. Speed is not really a big concern, as I'm sure it will be faster than the NIC. Hitachi seems like a good bet; I just hope they work with my expanders, do you know of any experience with that? Also, what about 4K support and TLER? I read in the ZFSguru thread that it should be disabled. I think it is enabled on all enterprise drives, so do I have to flash all my drives, or what?

I also have the 510s; today I would have gone with the 320 for that reason. But now that I have them they will do, even on the ICH10.


The Hitachi 7k2000/7k3000 drives are not 4k - even the 3TB ones. I have tested 2TB 7k2000 and 3TB 7k3000 drives with the expander you're using, and they work fine (though I'd stick with 7k3000 due to the bug I mentioned before). The hitachi drives also don't have TLER, but your RE drives will. ZFS doesn't require TLER, but I can't remember if it's detrimental or not to have it.
 