OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

The Hitachi 7k2000/7k3000 drives are not 4K - even the 3TB ones. I have tested 2TB 7k2000 and 3TB 7k3000 drives with the expander you're using, and they work fine (though I'd stick with the 7k3000 due to the bug I mentioned before). The Hitachi drives also don't have TLER, but your RE drives will. ZFS doesn't require TLER, but I can't remember whether having it is detrimental or not.

Okay, seems like Hitachi might be my next line of drives :D I will get 3TB ones; no real reason for 2TB drives.

I'd like to know if 4K drives are a problem with ZFS or not, though.

I also hope someone else in here can tell me if TLER is a problem or not.
 
You need TLER with a hardware RAID controller. With ZFS and software RAID I would say it's an unwanted feature. But I have more than 20 2TB WD RE4 drives and have used them for more than a year without problems in my NexentaCore filers.

I also have some Samsung 4K drives and use them without problems, beside a slightly reduced performance (I suppose up to 20%) compared to similar non-4K drives.


Gea
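For anyone who wants to check this on their own drives: TLER/ERC is exposed via SCT Error Recovery Control, which smartmontools can read and set. A sketch, assuming the drive supports SCT commands (the device path is an example, and many consumer drives reject the set command or forget it on power-cycle):

```shell
# show the current SCT error recovery control timeouts (read, write)
smartctl -l scterc /dev/rdsk/c2t0d0

# set read and write recovery timeouts to 7.0 seconds (values are in deciseconds)
smartctl -l scterc,70,70 /dev/rdsk/c2t0d0

# disable the timeout entirely, letting the drive retry as long as it needs -
# arguably the safer behavior under ZFS, which has its own redundancy
smartctl -l scterc,0,0 /dev/rdsk/c2t0d0
```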
 
Due to the current zfsguru "mess" I am looking at napp-it /OI and I have some questions ...
How easy is it to install sabnzbd, sickbeard, couchpotato... (and other apps?)
Can I install VirtualBox 4.0 plus a PHP admin interface so I can run two Linux VMs?
 
Yes, you can install VirtualBox right on top of OI - I've done that myself. Might be simplest to run those apps in a small Linux VM, unless you want to learn all about Solaris :)
 
Well, I could do that, but what would the performance be (write speed with NFS?) and the impact on the system of adding another layer? Wouldn't it add another client to NFS and negatively impact performance for the others?
I could always try, but really, once I have moved my terabytes of data, I am not sure I will want to change OS...
 
Actually, the NFS write performance should kick butt, given that it doesn't actually go out over a physical wire. Here is my iperf test between an Ubuntu VM and an OI VM on the same ESXi box:

[ 4] 0.0-10.0 sec 3.99 GBytes 3.43 Gbits/sec
[ 3] 0.0-10.0 sec 5.40 GBytes 4.64 Gbits/sec

(running in either direction)
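Numbers like those typically come from a plain iperf run between the two VMs; a sketch (the server IP is an example):

```shell
# on the OI VM: start an iperf server
iperf -s

# on the Ubuntu VM: run a 10-second test against it,
# then swap roles to measure the other direction
iperf -c 192.168.1.10 -t 10
```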
 
From various comparisons I've seen, it's no better or worse (depends on the scenario and such...)
 
I run all of these apps inside SE11, and they work fine - most Python apps should once you get Python installed properly. I also run Ubuntu inside VirtualBox, and it works fine as well. I couldn't find a decent way to get Squeezebox Server installed in SE11, so it runs in a VM.
 
I run SAB and Sickbeard on Solaris; they run great.

I have SMF manifests if you want them.

I was hoping maybe someone could work out an auto-update script for SAB...
 
SAB keeps its settings separate from the program folder, so updating is as easy as stop > rename old folder > place new folder > start.
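As a sketch in commands (the paths and the SMF service name are assumptions; adjust to your install):

```shell
# stop SABnzbd (assuming it is wrapped in an SMF service)
svcadm disable sabnzbd

# keep the old program folder around as a fallback
mv /opt/sabnzbd /opt/sabnzbd.old

# drop in the freshly unpacked new version
mv /tmp/SABnzbd-new /opt/sabnzbd

# start it again - settings live outside the program folder, so they survive
svcadm enable sabnzbd
```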
 
Yup, but I was hoping a script could know when the latest version came out on SourceForge, then grab it and replace the old version :)
 
That wouldn't be optimal with SAB because some updates (like the recent .6) require your queue to be clear, or other things, when they happen. Also, if you're running it in conjunction with Sickbeard/CouchPotato, their not knowing when SAB is updating could cause them to miss an auto-download.
 
Question:

Why are all the new ZFS file systems created by replication rdonly? What is the danger of changing rdonly to off? Did I make a fatal mistake?
I was just playing around and noticed that if I re-ran the replication job after changing the folder to read-write, I somehow lost the file system. The job ran for a few minutes and then I noticed a great deal of activity on the replicated file system without much network activity. I guess my 5TB were being deleted. 44 hours of backup disappeared. I did move some files around in sub-folders, so I'm not sure if this damaged the replicated backup or not. The only way for me is to torture test, so I know exactly what I'm dealing with.

I learn everything the hard way it seems.
"put the keyboard down and step away from the computer"

wf
 
ZFS replication works this way:

1. initial transfer
The source ZFS is snapped and the snap is transferred to the target;
this can take days with some terabytes. After replication, a target snap
is created.

2. incremental transfer
The source ZFS is snapped again; the snap contains only the data blocks that were modified.
Only these modified blocks are transferred. After replication, another target snap
is created.

3. Good to know

- during transfer, the target is not accessible.
- if the target is accessed by a write operation (opening is enough), the target
is reset to the last target snap prior to the next replication
- you always need these snap-pairs for the next replication, or you have
to delete the target and redo a basic transfer (do not delete these snaps manually)

For this reason, I set the target automatically to read-only.
If you want to use the target, you must first stop/delete the replication job.
Then you can set it to read/write and use it as usual.

4. remote replication job is beta1

There is still some development needed.

Gea
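On the command line, these two transfer types boil down to zfs send/receive without and with -i (pool, filesystem and host names here are examples, not napp-it's actual job internals):

```shell
# 1. initial transfer: snapshot the source and send the full stream
zfs snapshot tank/data@repli_1
zfs send tank/data@repli_1 | ssh backuphost zfs receive backup/data

# 2. incremental transfer: new snapshot, send only the blocks changed
#    since the previous snap-pair
zfs snapshot tank/data@repli_2
zfs send -i tank/data@repli_1 tank/data@repli_2 | ssh backuphost zfs receive backup/data

# keep the target untouched between runs, as described above
zfs set readonly=on backup/data
```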
 
Thank you for explaining the reason for needing the rdonly flag set "on". In the future I will not mess with the attributes, as read-only should be enough for a backup dataset.
 
Other question about OI (sorry):

Can I boot/install on a 2-disk ZFS mirror?
How do I make one pool 4K-aware and the other 512B-aware?
 
OI will not install in mirrored mode, but you can add a second disk later to the rpool (root) mirror. There are also then a couple of commands to update the boot blocks on the newly added disk (sorry, don't know what those are, as I am not doing that). No idea about 4K vs 512B...
 
Hi guys

I set up my all-in-one and I am in the process of setting up napp-it with NexentaCore - I installed NexentaCore but it did not bring up the network via DHCP.

I tried "ifconfig" but it won't show the interface status... I also tried ifconfig e1000g0 and it didn't work. Suggestions?

Thanks
 
sudo svcadm enable nwam (I think)
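For reference, the full service FMRI, plus a manual fallback if NWAM doesn't pick up a lease (a sketch; the interface name is taken from the post above):

```shell
# let NWAM manage interfaces automatically
sudo svcadm disable network/physical:default
sudo svcadm enable network/physical:nwam

# or bring the interface up by hand and ask for a DHCP lease
sudo ifconfig e1000g0 plumb
sudo ifconfig e1000g0 dhcp
```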
 
About a mirrored boot drive, read:
http://constantin.glez.de/blog/2011/03/how-set-zfs-root-pool-mirror-oracle-solaris-11-express

About 4K drives and ashift=12:
I would not do anything and accept a slightly reduced performance with 4K drives
(currently I would avoid 4K if possible, e.g. use Hitachi drives up to 3 TB without 4K).

If you want the maximum possible speed:
- create a pool (GEOM format) with ZFSguru and ashift=12 and import it, or
- use a modified zpool binary (experimental)
http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/

Gea
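The linked article boils down to roughly the following (disk names are examples; slice naming and the installgrub stage paths vary by release):

```shell
# attach a second disk to the root pool, turning it into a mirror
zpool attach rpool c0t0d0s0 c0t1d0s0

# put boot blocks on the new disk so either one can boot (x86/GRUB)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

# check which sector alignment a pool was created with:
# ashift 9 = 512B sectors, ashift 12 = 4K
zdb -C tank | grep ashift
```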
 
I made a test setup yesterday. Installed ESXi 4.1 on a USB stick, made a RAID 1 on 2 SSDs for VMs on my Adaptec controller, then passed through the ICH10, restarted, and ESXi won't start, just flashes a cursor. I'm guessing something else is on the same PCI port that ESXi needed. So I can't test with the ICH10 controller? Or did I do something wrong?
 
-> you have two divas now.
ESXi and Solaris are both picky about hardware.

I have had the same problem.
I tried a 3805 (had a lot of them around) and it won't work with pass-through enabled.
Another all-in-one with a 5805 works with ESXi and pass-through.

Best to use:
an Intel server chipset with Xeons, Intel NICs, and
LSI HBAs based on 1068 or 2008 chipsets (LSI 9211)

SuperMicro + LSI is my preferred combo.
-> works always

Gea
 
This is what I'm testing with:

1x SuperMicro SC846A-R1200B
1x MBD-X8DAH+-F -B
2x Intel Xeon E5606, 2.13 GHz - Quad Core/1066/8 MB
2x Heatsink SNK-P0038P (Rev. A & B) 2U+ DP Server
6x Kingston DDR3 ECC Reg, 1333 MHz, DR x4, 4 GB (24 GB)
1x Kingston USB Stick 8GB (For ESXi)
1x Adaptec 51245
2x Intel 510 Series SSD, 120 GB, 450/210 MB/sec (For VMs)
+Some SATA drives

So I have no controller except my Adaptec and the onboard ICH10.

When I make a RAID1 on the ICH10, the SSDs show up as 2 different disks, not a RAID. I googled that and found ICH10 RAID is fake RAID and not supported by ESXi - is this right?

When I moved the disks to the Adaptec, the Adaptec controller BIOS finds the RAID and it shows as 1 disk in ESXi, so I'm guessing my original plan to use the onboard controller for VMs is down the drain.

I will be getting an LSI controller, but I wanted to test a bit first; I guess not.

As far as controllers go, I'm considering either
1x LSI SAS 9211-4i + 1x LSI SAS 9202-16e
This way I have 1 controller for all internal disks and 4 ports for externals; I only need 2 external now, but it's always good to have extra.

Or just 1x LSI SAS 9201-16i, then take 3 ports and turn them external; this way I have 1 spare external 8808.

Are all these controllers a good choice for ESXi / napp-it OI?
 
They should all work according to http://wiki.openindiana.org/pages/viewpage.action?pageId=4885461
I prefer 2008-based ones.

Be aware with pass-through: you can only pass a complete controller, not parts of it.
In my own configs, I usually boot ESXi from a driverless 2 x 2.5" SATA hardware RAID-1 enclosure connected to
onboard SATA and use this disk also as the local datastore for the OpenIndiana VM used as the SAN OS.

All my other VMs are on an NFS SAN datastore provided by this virtualized OpenIndiana SAN (or Solaris Express/Nexenta).


Gea
 
I want to pass through the whole controller, so that's no problem.

Can you link to those enclosures? If not, I will probably get an LSI MegaRAID SAS 9261-8i for the VMs.

Freak1
 
These drive manufacturers piss me off. Why can't they have their drives report 4K sectors! Just to please the Windows XP crowd?
 
What is the performance loss when using 512-byte sectors on a 4K drive? Considering I will mainly write and read BIG files (videos).
 
I can't completely blame XP, as it seems to me it would be incredibly trivial to add a switch to the zpool create command to override the default behavior and just force ashift to whatever you want, and this whole issue would disappear. Instead, ZFS tries to be too clever about it and ends up outsmarting itself. People following the storage industry have known, or should have known, that these drives were coming for years now.

I bet we'd see a lot of examples of other OSes getting egg on their faces too if drives started reporting >512B sectors. That assumption has been ingrained in the industry for a long time.
 
How many people using 4K consumer drives with fake sector reporting will ever pay a cent to Sun/Oracle? We aren't their customers, so I doubt they care. Even if you could use cheap 4K drives with ZFS, it would just be one less reason to buy an expensive disk array from Oracle.
 
There are some driverless hardware RAID-1 enclosures around.
I use http://www.raidsonic.de/en/products/soho-raid.php?we_objectID=7534
(about 130 Euro; they survive a drive failure but have problems with semi-dead drives - then the second drive is also not usable - not at all like ZFS :)

Gea



announcement:
napp-it 0.500h is out,
strongly suggested for all who are using replication,
with a new feature: jobs every n minutes/hours, but optionally only in the afternoon and/or morning/night
 
Thanks again. Though I don't really have the space for that in my case. I also see semi-dead drives more often than totally dead ones.

Another thing I might consider is just using the ESXi duplication/cluster service (can't remember the name of it), as that will give me the same as RAID1 when it's only used for VMs?
 
I don't understand what RAID1 has to do with deduplication? Also, dedup is a cpu and ram hog - beware!
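A back-of-envelope way to see why dedup eats RAM: ZFS keeps a dedup-table entry for every unique block, and ~320 bytes per entry is a commonly cited rule of thumb. A sketch with example numbers (not measurements):

```shell
# rough DDT memory estimate for a dedup-enabled pool (rule-of-thumb numbers)
pool_bytes=$((2 * 1024 * 1024 * 1024 * 1024))   # 2 TB of unique data (example)
block_size=$((128 * 1024))                       # default 128K recordsize
ddt_entry=320                                    # ~bytes of RAM per unique block

blocks=$((pool_bytes / block_size))
ram_gib=$((blocks * ddt_entry / 1024 / 1024 / 1024))
echo "${ram_gib} GiB of RAM just for the dedup table"   # prints: 5 GiB ...
```

With smaller recordsizes (common for VM images) the estimate grows proportionally, which is why dedup gets expensive fast.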
 
Niiice, especially for the frequent auto-snap jobs.
 
Well, it's not the same. But if I have the VMs on 2 drives, then I guess I could lose a disk without losing my VMs, like RAID1 would do.

I prefer RAID1, but then I will need an extra controller. If it drains my CPU and RAM like you say, I can see it's not a good idea.
 
Replication question:

2 appliances - site-a and site-b

Set up both at site-a, do the initial replication

Move the 2nd to site-b; different subnet

My assumption on how it must go is:

After moving, remove the site-b server from the site-a server's replication group and re-add it with the new IP
The site-b server doesn't need settings modification, and will continue to run the replication schedule?
 
From a pure replication view, you only need a source and target snap-pair.
But napp-it replication needs the hostname-IP (group info) and a snapname with the job-id.
You may need to manually edit the job settings with the new name-IP info.

Gea
 
The job is set up at the site-b server, and the site-a server's IP isn't changing. Also, the snaps created appear to use only the appliance name in them, not the IP. If I understand correctly, that would mean the job actually doesn't need to change?

The only thing I see breaking it is the fact that since the site-b server's IP changes, the site-a server doesn't see it in the group. Because of this, the next time the job tries to run, the site-b server can't do it because site-a doesn't see it as a group member anymore. If this is the only issue, removing and re-adding the site-b server in the site-a server config should be the only thing that has to be fixed?

Man that sounds a lot more complex than it is. I guess I can test it locally by changing to another ip for site-b :)
 
You may:

1. create a new job with an initial replication
2. create a new job and manually edit it to use the old job-id and the old snap-pair
3. manually edit the old group info -> does not work; the group key is created from pw + host info

If the pool is not too big, use option 1.

Gea
 