OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Is it safe to use port multipliers when running a RAIDZ?

What is the optimal number of drives to have when creating a RAIDZ1?

I am still researching ZFS, but from what I have read so far, once created, a RAIDZ1 cannot be expanded; only the pool can be expanded. Is that correct? I am looking for something to replace my WHS, and I especially liked the drive extender feature in WHS. By adding to a storage pool in ZFS, does that mean that the total capacity of the pool will increase and I would not have a separate drive mount? Sorry if my question is not clear, but if any of you have run WHS or know about the drive extender feature, you might have an idea of what I am asking.

Thanks!
 
Is it safe to use port multipliers when running a RAIDZ?

What is the optimal number of drives to have when creating a RAIDZ1?

I am still researching ZFS, but from what I have read so far, once created, a RAIDZ1 cannot be expanded; only the pool can be expanded. Is that correct? I am looking for something to replace my WHS, and I especially liked the drive extender feature in WHS. By adding to a storage pool in ZFS, does that mean that the total capacity of the pool will increase and I would not have a separate drive mount? Sorry if my question is not clear, but if any of you have run WHS or know about the drive extender feature, you might have an idea of what I am asking.

Thanks!

RAIDZ1, probably 6 disks max
RAIDZ2, 10 disks

Expanding the pool increases the capacity of your existing storage, but it must be done in groups of drives (whole vdevs). If you want to expand the way WHS did, maybe take a look at Unraid.
 
RAIDZ1, probably 6 disks max
RAIDZ2, 10 disks

Expanding the pool increases the capacity of your existing storage, but it must be done in groups of drives (whole vdevs). If you want to expand the way WHS did, maybe take a look at Unraid.

I am thinking about doing the following:

Create a RAIDZ1 of 5x2TB drives
Create a storage pool with these drives
Create another RAIDZ1 of 5x3TB drives at a later date
Add the 5x3TB RAIDZ1 to the original storage pool

By doing this, will I see the storage pool as if it were just one large 20TB drive, partition, or whatever it is called? I think this is how it works, but I want some confirmation from people who have done it.
 
about port multipliers
http://en.wikipedia.org/wiki/Port_multiplier

I do not know if it is safe; I doubt it is supported, and
for me it is clearly not a recommended solution at all.

Best is to use one SAS/SATA port per disk,

or, if that is not possible:
use a SAS expander to connect more SAS/SATA disks
(for example, an HP SAS expander connects up to 32 disks to one controller).


about max disks in a Raid-Z
You could create a Raid-Z with many more disks.
The problems with very large vdevs are the time it takes to
resilver a disk after a failure and the fact that multiple
smaller vdevs are faster.

read also (about performance and pool design):
http://constantin.glez.de/blog/2010...ove-oracle-solaris-zfs-filesystem-performance
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide


about pools
With ZFS, you store your files on a data pool.
A pool is built from one or more vdevs (raid sets or single drives);
your pool capacity can be extended by adding more vdevs.

You could start by creating a pool from one Raid-Z1 vdev and add
one or more vdevs (any sort, any size) at any time.

On a pool, you can create ZFS folders. These are independent
file systems, mounted below your pool. They can have different
ZFS properties like shares, dedup, compress, encrypt... They are similar to
partitions on other file systems.
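
As a rough command-line illustration (the pool name "tank" and the cXtXd0 disk names are only placeholders; use whatever the format command shows on your system):

# create a pool from one Raid-Z1 vdev of five disks
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

# later: extend the same pool by adding a second Raid-Z1 vdev
zpool add tank raidz1 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

# create ZFS folders (independent file systems) below the pool
zfs create tank/media
zfs set compression=on tank/media

# zpool status/list now show both vdevs and the grown pool capacity
zpool status tank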

Gea
 
I am thinking about doing the following:

Create a RAIDZ1 of 5x2TB drives
Create a storage pool with these drives
Create another RAIDZ1 of 5x3TB drives at a later date
Add the 5x3TB RAIDZ1 to the original storage pool

By doing this, will I see the storage pool as if it were just one large 20TB drive, partition, or whatever it is called? I think this is how it works, but I want some confirmation from people who have done it.

Yes, you have the right idea, though you'll see less than 20TB due to formatting, probably closer to 18TB effective.
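
To put rough numbers on it: each Raid-Z1 vdev gives up one disk to parity, so 5x2TB yields about 4x2 = 8TB and 5x3TB about 4x3 = 12TB, 20TB total in decimal (marketing) terabytes. In the binary units most tools report, that is roughly 20 x 0.91 ≈ 18 TiB, before any further ZFS overhead.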
 
Now, can someone confirm whether port multipliers are safe to use in OpenSolaris? I don't have the capability of running a large server box due to space, so I need a modular solution.
 
Now, can someone confirm whether port multipliers are safe to use in OpenSolaris? I don't have the capability of running a large server box due to space, so I need a modular solution.

Instead of a port multiplier, why not go with something like

Addonics 4-bay 3.5" enclosure (get the InfiniBand CX4 version, plus SFF-8087/8088 -> CX4 cables)

or

Addonics 9-bay 5.25" enclosure, with some 4-in-3 / 5-in-3 drive cages, etc.

Use normal HBAs in your system (if you can find ones with SFF-8088 external plugs it's cleaner, but you can run a cable through an open slot in the back of the case to an SFF-8087 port regardless) and just plug them into the enclosure. Effectively it should give you the same form factor as a port multiplier, but with better performance/reliability.


Also, re: plans to expand the pool with 3TB hard drives later, realize that if you *ever* plan to add 4K-sector drives to your storage pool, you need to create the pool with 4K vdevs to begin with (on Solaris with the hacked zpool binary, on FreeBSD with gnop - to set an ashift of 12). You can still add 512-byte vdevs (ashift of 9) to a pool with an ashift of 12, but you don't want to go the other way.
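
For what it's worth, a sketch of the FreeBSD gnop route (pool and device names are made up, and I haven't verified this on every release):

# force a 4k sector size on one member so ZFS creates the vdev with ashift=12
gnop create -S 4096 /dev/ada0
zpool create tank raidz1 ada0.nop ada1 ada2 ada3 ada4

# verify which ashift the vdev got (9 = 512-byte, 12 = 4k)
zdb -C tank | grep ashift

# the .nop device disappears after a reboot; the pool then simply imports via the plain device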
 
Now, can someone confirm whether port multipliers are safe to use in OpenSolaris? I don't have the capability of running a large server box due to space, so I need a modular solution.

I wouldn't risk a multiplier. If you need something like it, use a SAS expander instead. That said, if you use a motherboard with ICH10 and a single 1068E or SAS2008 card, you can accommodate your drive needs without an expander.
 
I'm a little confused by the napp-it website. Is the latest nightly supported on Solaris 11 Express? I'm planning a new NAS with Solaris 11 Express (although I could go with OpenIndiana), and from the website it seems that I would need to go with the older napp-it. Is that correct?

Thanks!
 
Does the napp-it/afp script work on SE11?

I'd like to enable Time Machine for OS X clients.

thanks!
 
I'm building a headless ZFS storage server for my HD movies. I figured I would go with NexentaStor; great-looking GUI, what could beat that? That is, until I found out you have to register the server and get a key. My build is under their 18TB limit, but still, all that rubbed me the wrong way. In the end I would probably have gotten over the fact that I had to jump through their hoops and still gone with them, until I saw a 2010 benchmark on zfsbuild.

http://www.zfsbuild.com/2010/10/09/nexenta-core-platform-benchmarks/
http://www.zfsbuild.com/2010/09/10/freenas-vs-opensolaris-zfs-benchmarks/

NexentaStor was slower than the OpenSolaris b134 it was derived from, but the other benchmark shows that their "Nexenta Core Platform", their own in-house version of OS b134, was faster than everything else, even the newer OpenIndiana version; FreeNAS wasn't even a contender. I only care about read performance, so it looks like Nexenta Core Platform is the OS for me. So I will run napp-it, which isn't as pretty as NexentaStor, but I'm sure it will do. Does anyone know if napp-it hinders performance like NexentaStor does?

Now my main question: does anyone know if my build will support ECC and work well together after I buy all the parts? It's hard to find specific info on whether all these things will 100% work together, and in ECC mode. TIA



The main pieces of hardware:

NORCO SS-500 5 Bay SATA / SAS Hot Swap Rack Module
http://www.newegg.com/Product/Product.aspx?Item=N82E16816133030

ASUS M4A88T-M LE AM3 AMD 880G HDMI Micro ATX AMD Motherboard
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131673

Kingston 8GB (2 x 4GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 (PC3 10600) Server Memory Model KVR1333D3E9SK2/8G
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139262

AMD Athlon II X4 610e AM3 45W Quad-Core AD610EHDGMBOX
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103899

Western Digital Caviar Green WD20EARS 2TB 64MB
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136514

CORSAIR Builder Series CX430 CMPSU-430CX 430W
http://www.newegg.com/Product/Product.aspx?Item=N82E16817139017

Rosewill R101-P-BK 120mm Fan MicroATX Mid Tower Computer Case
http://www.newegg.com/Product/Product.aspx?Item=N82E16811147112

Thanks
-Flash
 
I'm running napp-it 0.415k on SE11 and I just tried to enable NFS, but it just reverts to disabled every time. Does anyone know what is wrong?

Thanks!

edit: just upgraded to 0.415l and the problem remains
 
Make sure NFS is online in the services tab

in the services tab it says: nfs-server : disabled

in the system tab, then services sub-tab it says:

disabled --- svc:/network/nfs/server:default
online --- svc:/network/nfs/cbd:default

changing nfs to enabled doesn't seem to have any effect
 
in the services tab it says: nfs-server : disabled

in the system tab, then services sub-tab it says:

disabled --- svc:/network/nfs/server:default
online --- svc:/network/nfs/cbd:default

changing nfs to enabled doesn't seem to have any effect

try to share a folder via NFS;
NFS is then enabled by default.

(You cannot enable the service without having a ZFS folder with sharenfs=on.)
If the problem remains, look at the menu System - Log.
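
The same can be done from the command line (folder name is only an example):

# sharing a ZFS folder via NFS also brings the nfs/server service online
zfs set sharenfs=on tank/data

# verify
zfs get sharenfs tank/data
svcs svc:/network/nfs/server:default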


For more info and manuals, see
Oracle Solaris 11 Express Information Library
http://download.oracle.com/docs/cd/E19963-01/index.html

See also the other links in the initial thread (regularly updated):
http://hardforum.com/showthread.php?t=1573272


Gea
 
How can I share ZFS storage under OpenIndiana + napp-it with another virtual machine, like Ubuntu, on the same host?
 
How can I share ZFS storage under OpenIndiana + napp-it with another virtual machine, like Ubuntu, on the same host?

you can share a ZFS folder via AFP, CIFS or NFS (file-based) or via iSCSI (block-based).
It does not matter whether you connect to these shares from another physical machine
or from a virtual machine on the same or another host.

If you want to share it only between OI and Ubuntu, you have to set up the ESXi virtual switch
according to your needs.

Gea
 
Hey there! Just want to say thanks Gea and keep up the good work...

On the other hand, I do have some questions:
What speeds do you get over Samba and iSCSI shares on a 1Gbps network? I did some benchmarking yesterday and got Samba transfers between 70 and 80MB/s, but iSCSI was a disaster... transfers around 25MB/s and the load average jumped to 8. Has anyone had similar problems?

Q number 2:
Currently, the hard drives are named c7t4d0, c6t4d0, c5t4d0 and put together as a RAIDZ. What happens if the device names get changed because of some system upgrade or something? Will I lose the pools? Will I have to import them, or will the system mount the pool according to the GUID or UUID of the hard drives so that I won't even notice something changed?

Q3:
How long does a scrub take? Is it hard on resources?

Q4:
I installed SE11... Is it possible to install it to a ZFS mirror, or to somehow create a mirrored rpool?

Q5:
Is it possible to hot-add hard drives? Let's say I just got 3 more hard drives and I put them in 3 hot-plug ports on my NAS/SAN, and everything is connected to an LSI 1068E HBA. Will the system detect the drives, and will I be able to use them without a restart?

Thanks, Matej
 
Q1 about performance:
it depends on your disks, your raid-config and your hardware.

read:
http://constantin.glez.de/blog/2010...-oracle-solaris-zfs-filesystem-performance (10 ways to improve performance)
http://www.solarisinternals.com/wiki...l_Tuning_Guide (best tuning guide)

via CIFS (the kernel CIFS server, not SAMBA) you could get > 100 MB/s on a Gbit network to Win7 clients


Q2 move disks to another slot:
- no problem
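
If a pool should ever show devices as missing after re-cabling, an export/import lets ZFS find them again by their on-disk labels (pool name is an example):

zpool export tank
zpool import tank
zpool status tank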

Q3 scrub:
runs at low priority; the time depends on hardware and raid config

Q4: boot Raid-1: only possible manually; google "solaris boot raid-1" (I have not tried it myself).
I would prefer a driverless hardware Raid-1 enclosure like a Raidsonic SR2760-2S-S2B if boot raid is needed.
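
The usual manual recipe looks roughly like this (untested here; device names are only examples, x86 install assumed):

# attach a second disk to the root pool; rpool becomes a mirror
zpool attach rpool c0t0d0s0 c0t1d0s0

# after the resilver has finished, make the second disk bootable
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0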

Q5: hotplug
yes, the 1068E is hot-pluggable (the hotplug service must be enabled; napp-it does this automatically in the Disks menu)
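
After inserting a disk you can check from the console whether it was picked up, no reboot needed (names are examples):

devfsadm -c disk      # rescan and create device nodes for new disks
cfgadm -al            # list attachment points / hot-plug slots
format                # the new disk should show up in the disk list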


Gea
 
Great news Gea.

I found out I had somehow done something wrong and got poor performance. I tried CIFS again yesterday and got a steady 70MB/s (the limit of my network).

About scrub: what does that do? As far as I know, ZFS doesn't have things like fsck or scandisk because it checks data on read against a checksum and does copy-on-write... Why then do I need to do a scrub?

I would like to install Solaris on a mirror in case one of the main drives fails. Is it possible to export settings with napp-it and then re-import them on a new installation? If not, then it's quite handy to have a mirror of the main hard drive...

One more question about snapshots... Let's say I take snapshots every day and today I find out that something went wrong with file FOO and I would like to get yesterday's copy. Do I have to roll back the whole snapshot, or can I get back only the file FOO?

Q2: In napp-it, if I delete a snapshot, will that mean changes are rolled back, or will the files stay current and I only lose yesterday's snapshot?

Thanks, Matej
 
about scrub:
scrub is an online file check / data refresh utility.
Use it regularly to find and fix silent data errors while they are still repairable
via the ZFS checksums and redundancy.

(Silent data errors are errors you get from radioactivity, weak sectors or magnetic fields;
you always get them, and the bigger your disks the more of them. On other filesystems you need to do
an unmount + filecheck to find them and eventually repair them from raid.)
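
To start one manually and watch it (pool name is an example; napp-it can also run this as a regular job):

zpool scrub tank
zpool status -v tank     # shows scrub progress and any repaired errors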

about config files and mirrors:
napp-it does not store any important data (besides some Comstar settings and jobs)
- nothing you cannot recreate within minutes

about snaps:
if you delete a snap, you cannot go back to that state, but it does not modify current data

about restoring files from a snap:
The easiest way is to use the Nautilus time slider (OI and SE11) or
to connect a share via Windows and restore files from the Windows
"Previous Versions" feature.

Restoring a complete snap is only needed with iSCSI.
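
From the console, single files can also be copied back out of the hidden .zfs directory (paths and snap name are examples):

ls /tank/data/.zfs/snapshot/
cp /tank/data/.zfs/snapshot/daily-2011-04-01/FOO /tank/data/FOO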

Gea
 
I'm running Solaris 11 Express w/ napp-it; currently I just have 1 SSD and Asterisk, SABnzbd and Sick Beard running (not doing much). I don't have my 4th HDD yet, so I haven't set up my data zpool. However, if I log in intermittently and check the system (top), the free memory is slowly going down. I have 8GB total in the system and when I booted I had around 6-7GB free; now it's down below 3. Does this indicate a memory leak in a running program? Is this a problem?

Thanks very much. I'm looking forward to getting my last drive so I can set up my zpool and test it out on the new Zacate board!
 
If you are writing to the pool, the ARC will be caching data for later reads. If other clients start exerting back pressure for memory, ZFS should give it back.
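
You can watch the ARC size directly; most of the "missing" memory usually shows up there (a quick check, not an exact accounting):

kstat -p zfs:0:arcstats:size     # current ARC size in bytes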
 
_Gea,

Another feature request. Can you make buttons under the disks menu to identify individual hard drives by continuously flashing the access light? That would rock.
 
_Gea,

Another feature request. Can you make buttons under the disks menu to identify individual hard drives by continuously flashing the access light? That would rock.

not possible without OS support.
Oracle and NexentaStor EE have this feature via closed-source add-ons.

but there are efforts to support SES backplanes in OpenIndiana/Illumos.
If you want more info, google "ses blinking openindiana".


Gea
 
not possible without OS support.
Oracle and NexentaStor EE have this feature via closed-source add-ons.

but there are efforts to support SES backplanes in OpenIndiana/Illumos.
If you want more info, google "ses blinking openindiana".


Gea

Well, what about using dd to read from the requested disk to get it to light up? Something like dd if=blah of=/dev/null with a large read size to defeat the cache, so that it actually reads from the drive. Can you make some buttons do that?
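
Something roughly like this is what I have in mind (device path is only an example; I'm not sure of the exact raw-device naming):

# read a couple of GB from the raw device so its access LED keeps blinking
dd if=/dev/rdsk/c7t4d0p0 of=/dev/null bs=1024k count=2048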
 
Gea: I'm working with napp-it under OpenIndiana 148. I'm new to Solaris in general, but I have a good background in other unix environments.

I'm impressed with napp-it for its thoroughness, but I believe the interface could be improved. For example, when changing a password, there is no option to have a confirmation dialog. I was under the impression that sensitive dialogs with hidden text should always allow a confirmation dialog (or the option to show the text if a confirmation dialog is not possible).

On a related note, I noticed that changing passwords via napp-it truncates them to 8 characters. It took me a few hours of attempting different solutions to discover this problem.
 
Hello!

Scenario: I create a thin-provisioned LU in, let's say, the iscsi-luns ZFS folder, copy some data over and then create a snapshot of the ZFS folder. Then I copy some more data to the iSCSI LU. What will the snapshot look like? Will the system copy the whole file to .zfs/snapshot, or will it remember only the changed blocks as with zvols? I ask because on the Rollback LU page I can see the note: Restore a snapshot for a LU (only works with volume LU)

How do I grow thin-provisioned LUs? Do I just go to modify size and change it? What if a LU is 10GB big and 100% used, and on the modify_size page I change the size to 5GB? Will it allow me to do it or come up with an error?

How can I change the IQN of an iSCSI target? Right now the default IQN is iqn.19xx.com.sun....

Does napp-it reserve some space when creating a new pool? If one of my hard drives fails and I replace the drive with one that is 100kB smaller, will it resilver?

Thanks, Matej
 
I changed some settings due to your suggestions:

napp-it 0.414o changelog:
- pw length now max 16 chars (does not work on Nexenta without modifications; Nexenta uses only the first 8 chars)
- if you change/set a user pw, you have to enter it twice
- menu Disks: identify drive via dd (does not work on all systems)

Update via the menu napp-it > Update (or via wget from older versions without the integrated update option)

@levak
about comstar, see http://download.oracle.com/docs/cd/E19963-01/html/821-1459/fncoz.html
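
As a minimal sketch of the file-based route discussed below (folder, file name and size are only examples):

# create a file-based, thin-provisioned LU inside a ZFS folder
zfs create tank/iscsi-luns
sbdadm create-lu -s 100g /tank/iscsi-luns/lu1
stmfadm list-lu -v      # shows the new LU and its GUID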

about snaps:
With ZFS and copy-on-write, files are never overwritten or modified in place, but written newly.
Unlike other systems such as Apple's Time Machine, the data for snapshots is never created by a copy action.
ZFS only preserves the file information when you create a snapshot; therefore snaps are done nearly without
delay and without initial space consumption. Whether your LU is volume- or file-based, you can restore it from snaps.
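
You can see this yourself: a fresh snapshot appears immediately and starts with (nearly) zero space used (names are examples):

zfs snapshot tank/iscsi-luns@before-copy
zfs list -t snapshot -o name,used,referenced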

about space reservation when creating pools:
napp-it does not modify system defaults or reserve space for slightly smaller replacement disks.
But I have never had problems replacing disks with the same type, unlike I sometimes had with my former hardware raids.
(But I do not know if and how much ZFS cares about this.)


Gea
 
As far as I know, snaps on a ZFS folder work like this:
- I create a snap and a folder in .zfs/snapshot is created
- I then modify a file inside the ZFS folder
- The new file is written to the ZFS folder and the old one is moved to the snapshot folder

Is that correct?

If it is, what happens when I have a file-based LU in one of the folders? I create a snap, and when something changes in the LU, the whole LU is written back to the hard drive and the old one is moved to the snap folder. Does that sound correct?

Matej
 
As far as I know, snaps on a ZFS folder work like this:
- I create a snap and a folder in .zfs/snapshot is created
- I then modify a file inside the ZFS folder
- The new file is written to the ZFS folder and the old one is moved to the snapshot folder

Is that correct?

Matej

not correct.
If you make a snapshot, the content of the .zfs/snapshot folder represents
the state of the filesystem at that time. If you modify a file afterwards, this file, or the changed parts of the file
(copy-on-write works at the data-block level), is written newly. There is no move or copy to the snapshot folder in any way.

(Remember: no time delay and no initial space consumption for a snap; only modifications afterwards need space.)

see also http://en.wikipedia.org/wiki/ZFS
"Copy-on-write transactional model

ZFS uses a copy-on-write transactional object model. All block pointers within the filesystem contain a 32-bit checksum or 256-bit hash (currently a choice between Fletcher-2, Fletcher-4, or SHA-256)[28] of the target block which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and an intent log is used when synchronous write semantics are required. The blocks are arranged in a tree, as are their checksums (see Merkle signature scheme).
Snapshots and clones

An advantage of copy-on-write is that when ZFS writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. ZFS snapshots are created very quickly, since all the data composing the snapshot is already stored; they are also space efficient, since any unchanged data is shared among the file system and its snapshots."..



Gea
 
Aaaaaa ok:)

I got confused when looking in the .zfs/snapshot folder: the size of the snapshot was the same as the size of the used LU, so I thought it copied it over...

Ok, in that case file-based LUs are the way to go :) I have to do some benchmarking, but I think it should be as fast as a zvol LU.

In case my currently used LU gets corrupted, I can simply copy the file back from .zfs/snapshot, reconnect the iSCSI, and I will have a rolled-back LU?

Matej
 
Aaaaaa ok:)

In case my currently used LU gets corrupted, I can simply copy the file back from .zfs/snapshot, reconnect the iSCSI, and I will have a rolled-back LU?

Matej

Should work, it's just a file.


Gea
 