OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

omniscence

[H]ard|Gawd
Joined
Jun 27, 2010
Messages
1,311
Did anybody try to build a ZFS pool on multiple iSCSI LUNs? I'd like to use ZFS, but I'm not going to install the hard disks into the ZFS server, nor can I use PCI-passthrough-based solutions. Can Solaris properly handle the failure of a hard disk backing an iSCSI LUN?
 

adi

Limp Gawd
Joined
Aug 1, 2002
Messages
399
Did anybody try to build a ZFS pool on multiple iSCSI LUNs? I'd like to use ZFS, but I'm not going to install the hard disks into the ZFS server, nor can I use PCI-passthrough-based solutions. Can Solaris properly handle the failure of a hard disk backing an iSCSI LUN?
From 2006
http://www.cuddletech.com/blog/pivot/entry.php?id=566

Still looking for larger pools with more iSCSI targets, but ZFS definitely can use them (ZFS can also use files for pools).
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
@ cymon
I don't personally use napp-it, but it's an impressive project. I should give it a whirl some time. I'm unclear: do you use time-slider for the snapshots, or your own scripts/cron jobs?
napp-it has its own menu-driven auto-job management with autosnap, autoscrub, automail (status and/or alert), autoshutdown and high-speed auto-replicate (nearly ready, based on mbuffer + zfs send between two hosts).
It's really simple to set up.

You can try it at http://www.napp-it.org/pop11.html in menu Jobs


Did anybody try to build a ZFS pool on multiple iSCSI LUNs? I'd like to use ZFS, but I'm not going to install the hard disks into the ZFS server, nor can I use PCI-passthrough-based solutions. Can Solaris properly handle the failure of a hard disk backing an iSCSI LUN?
I have not tried it, but from ZFS's view there is no difference between block devices; it does not matter whether it's a real disk or an iSCSI LUN. So using, for example, two iSCSI targets from two other hosts as a ZFS mirror should work. But this is a rather special question.

If nobody here has tried this already, you may ask in the ZFS forum at http://www.opensolaris.org/jive/forum.jspa?forumID=80 or Google a search string like "zfs mirror iSCSI"
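As a sketch of how this could look on Solaris (all portal addresses and device names here are invented; the iSCSI initiator presents remote LUNs as ordinary cXtYdZ disks, which ZFS can then mirror like any other devices):

```shell
# Point the initiator at the two hosts exporting the LUNs (example portals)
iscsiadm add discovery-address 192.168.1.11:3260
iscsiadm add discovery-address 192.168.1.12:3260
iscsiadm modify discovery --sendtargets enable

# The remote LUNs now appear as local disks (check with `format`),
# so a mirrored pool over them is just:
zpool create tank mirror c2t1d0 c3t1d0
zpool status tank
```

If one target host goes away, ZFS should treat it like a pulled disk and run the pool degraded on the surviving side.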

gea
 

omniscence

[H]ard|Gawd
Joined
Jun 27, 2010
Messages
1,311
From 2006
http://www.cuddletech.com/blog/pivot/entry.php?id=566

Still looking for larger pools with more iSCSI targets, but ZFS definitely can use them (ZFS can also use files for pools).
Yes, I already knew that blog entry; it is actually the only source I could find about this. You can hardly call that experience, however; it is just a quick test.


I have not tried it, but from ZFS's view there is no difference between block devices; it does not matter whether it's a real disk or an iSCSI LUN. So using, for example, two iSCSI targets from two other hosts as a ZFS mirror should work. But this is a rather special question.

If nobody here has tried this already, you may ask in the ZFS forum at http://www.opensolaris.org/jive/forum.jspa?forumID=80 or Google a search string like "zfs mirror iSCSI"
I mostly fear that the iSCSI subsystem will crash or something if the remote block device becomes unavailable.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
I mostly fear that the iSCSI subsystem will crash or something if the remote block device becomes unavailable.
it should have the same effect as hot-unplugging a disk: the pool will be in a degraded state.
You should just try it with a recent ZFS OS, ideally Solaris Express 11 in the first place.
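You can also test that behaviour deliberately (pool and device names invented) by offlining the backing device and watching the pool state:

```shell
zpool offline tank c3t1d0   # simulate losing the remote LUN
zpool status tank           # pool should show DEGRADED, data still readable
zpool online tank c3t1d0    # once the target is reachable again
zpool clear tank            # clear error counters; ZFS resilvers the difference
```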
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
There is currently only one Solaris-derived distribution based on Illumos.
It's called SchilliX, from Joerg Schilling; see http://schillix.berlios.de/

EON will be based on Illumos with a future release (just like next Nexenta or OpenIndiana)
see http://eonstorage.blogspot.com/2010/09/eon-zfs-takes-road-to-illumos.html

current EON is based on Sun's SXCE Build 130 (rather old)
see http://eonstorage.blogspot.com/2010/04/eon-zfs-storage-0600-based-on-snv-130.html

napp-it will run on EON if you use an EON version with the Apache webserver,
but I have not tested it with the current 0.4 version.
http://eonstorage.blogspot.com/2009/11/using-napp-it-webadmin-with-your-eon.html

BUT
I would not suggest setting up a new NAS based on the current EON because
it's end of life. I hope to see a new EON release based on Illumos soon.


ps
EON is the smallest ZFS NAS distribution available. It lacks ZFS boot support with system snapshots, but is intended
to run from a minimal boot medium purely in RAM. System modifications are not saved on shutdown, so you have to save them manually.

So it's good for minimal NAS installations booting from a USB stick or similar

Gea
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
I have uploaded a new version of my online-installer + free web-gui for NexentaCore,
OpenIndiana and Solaris Express 11

changelog:
* speed improvements in menu Folder when you have a lot of ZFS filesystems or snaps
* high-speed replication (via mbuffer + zfs send/receive) between hosts or ZFS filesystems is nearly feature-complete
* replication creates and keeps hosts or ZFS filesystems in sync as exact copies - with all ACLs, snaps and iSCSI volumes - based on snapshots (no open-file problem)
* this is the fastest transfer method of all, but it's not encrypted; use it only on SANs, intranets or management networks

* replication is not yet completely ready or bug-free and is only tested on NexentaCore, but from this version on you may test it



How to:
* Build a group of your servers with menu Group
* Create a replication job. You can select the source host, source ZFS and target ZFS on localhost (pull only)
* All source share properties are removed from the target, but you can re-share via SMB with or without guest access
* The target ZFS is set to read-only
* The second host (sender) is remotely managed by this job. You can monitor ZFS filesystems, snaps and running processes on remote hosts.

After the initial sync all further syncs are incremental, so you could keep two servers in sync with all snaps and
iSCSI volumes at a 15-minute interval. You can cancel jobs, and when you delete them, all involved snaps are deleted.
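Mechanically, this kind of pull replication boils down to zfs send/receive over an mbuffer pipe. A hand-rolled sketch, not napp-it's actual job code (hostnames, port and dataset names are invented, and the stream is unencrypted):

```shell
# On the target (receiver), listen first:
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup

# On the source (sender), full stream of the first snapshot:
zfs snapshot tank/data@repl-1
zfs send tank/data@repl-1 | mbuffer -s 128k -m 1G -O target-host:9090

# Every further run sends only the delta between two snapshots:
zfs snapshot tank/data@repl-2
zfs send -i tank/data@repl-1 tank/data@repl-2 | mbuffer -s 128k -m 1G -O target-host:9090
```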

If you like it, try it and report problems.

You may try the menus on my test installation at http://www.napp-it.org/pop_en.html
but you cannot run the jobs there, because the try-it version is running on different ports.


Installation as usual:
Install NexentaCore, OpenIndiana or Solaris Express 11 from the boot ISO

Configure your NAS and install or upgrade napp-it web-gui:
Login at console as root (or user and su to get root permissions) and enter
wget -O - www.napp-it.org | perl

Gea
 

rx7boy

n00b
Joined
Jan 13, 2011
Messages
5
Hi Gea

I love your napp-it. I appreciate all your efforts in making this a great install.

I am having one issue that I can't seem to get around. SMB share work perfectly fine but when I install AFP and access the shares from OS X on AFP I have no problem writing to the shares but I cannot delete anything from the share, it says it does not have permission to. I have tried to look everywhere but could not find an answer. It is probably something simple but I cannot seem to find it. Do you know what is causing this?

Also do you accept donations for all your hard work?

Thank you in advance
John
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
hello John
have you done the following:

1. create a new user, e.g. john,
who is a member of group staff (default)

2. create a ZFS folder, e.g. mediacut, with SMB defaults;
this will set permissions and ACL defaults to "usable" for SMB and AFP
(the Nexenta/OpenIndiana/Solaris defaults are safe but unusable!)

(check whether the folder permission is set to 755 or 777 and whether Folder-All is set to modify_set;
if not, click on the ZFS folder and set defaults to SMB)
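On the console, those SMB defaults correspond roughly to commands like the following (a sketch only; the folder name is an example and the exact ACL set napp-it applies may differ). Solaris chmod can write NFSv4 ACLs directly:

```shell
chmod 777 /tank/mediacut
# owner (root): full access; everyone: modify; both inherited
# by new files (f) and directories (d)
/usr/bin/chmod A=owner@:full_set:fd:allow,everyone@:modify_set:fd:allow /tank/mediacut
/usr/bin/ls -V /tank/mediacut    # -V lists the ACL entries for verification
```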

Share this folder for AFP.



I also suggest installing or updating to the newest netatalk 2.1.5;
it contains some fixes, especially for current ZFS versions.
see http://netatalk.sourceforge.net/2.1/ReleaseNotes2.1.5.html


ps
you are the first to ask about donating :))
I can't offer anything but the old letter style.

gea
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
important news

First stable release of OpenIndiana will be 02.2011

Alasdair Lumsden wrote "...
Prior to the Oracle takeover, Solaris 10 was free to use in production, and for a long time, security updates were provided free of charge. OpenSolaris was also free to use, and updates were available by living on the bleeding /dev edge. People were (mostly) happy.

Then Sun hit financial difficulties and discontinued free security updates for Solaris 10. Then Oracle happened, ending the free use of Solaris in production.

This has left people wishing to use Solaris technologies on their production servers in a difficult position. They have to pay Oracle, or use distributions that don't provide security updates. Or switch to Linux.

There are a great many people who would jump at the chance to use Solaris if there were a production ready version with security and bug fixes provided for free.

Indeed, this is what people have come to expect from mainstream UNIX platforms - Linux distributions such as Debian, CentOS, Ubuntu, etc, provide updates free of charge - and this is one of the reasons they have become so popular.

We have a real opportunity to capitalise on the situation left by Oracle, to capture server market share away from OpenSolaris, Solaris 10, and give users a migration path other than switching to Linux (which a lot of people are doing).

There are a lot of people out there who really really want a stable build of OpenIndiana - myself included, and I believe OpenIndiana's best chance of gaining acceptance, market share, and building a thriving development community is by capturing the server market.

There is also a risk that if we don't do this, we'll become an obscure fringe distribution, like DragonflyBSD.

The goal here is to be the mainstream accepted de-facto Solaris distribution. Something people talk about and seriously consider using.

Solaris contains killer technologies not seen on other platforms; technologies like ZFS, Zones, SMF, DTrace, COMSTAR, Crossbow - I couldn't live without any one of these, and we should capitalise on this while we can.

It's also worth keeping in mind that despite warning users that oi_147 and oi_148 were development releases, people are already using it in production environments, myself included, due to a lack of alternatives. The great news is that it has proven to be exceedingly reliable, and I have no hesitation in recommending it for busy workloads. All we need to do is add security updates and critical bug fixes on top and we'll be in a great position. No small feat I grant you, but we can start off small and work our way up.

Now is also an opportune time to do this - our next release will be based on Illumos, which has seen rapid development and will involve some integration pain. Some have called for a stable branch after Illumos is integrated, but it could be many months until we have an Illumos dev build suitable for respinning as a stable branch. That's months of lost opportunity.

So I say we do it now.
/dev builds will continue as normal, the next one will be Illumos based - Desktop users can continue to use our /dev builds, and internet facing servers can use the stable branch.

...."

see http://wiki.openindiana.org/oi/2011.Q1+-+Foreverware


napp-it 0.414k nightly from today

this is a bugfix release:
-autosnap: delete > keep
-new installation: error about creating a file
-menue folder-create: hide other change options
-not fixed already: replication problem of large ZFS

Gea
 

rx7boy

n00b
Joined
Jan 13, 2011
Messages
5
Hi Gea

I had everything setup correctly but still no go. I then remembered kind of a similar situation I had with freenas...

For AFP you can create a folder within the pool, but the share will not have delete or change capabilities. You can only share the root of the pool. So I just edited the line in the Volumes file under AFP to share the root of the pool, and then it was all good. I don't know what causes this, but I have run into it before; if you have any ideas why this might be, it would be great to hear.

Thank you for your help.
Keep up the good work

John
rx7boy
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
You should be able to share any ZFS folder without problems
if your base folder permissions are correctly set to 777.

please tell:
OS-version
netatalk-version
napp-it Version

there are so many combinations possible.

ps
you cannot share a pool itself and access the ZFS filesystems inside it
- there is often a misunderstanding because of the terms

vdev (a single drive or a raid set of disks)
pool (your data pool, built of vdevs)
ZFS folder (an independent filesystem, comparable to a conventional partition, mounted as a folder within your pool)
folder (just a folder, created by you)


Gea
 

jahamala

n00b
Joined
Jun 5, 2007
Messages
9
I've been playing with napp-it and solaris 11 express on real hardware. So far I'm impressed but I've ran across problems when trying to get SMB shares set up correctly. I'm not sure the best way to go about it. I have a windows user named PowerUser that is the only user in my network that I want to have read/write access to all of my shares. Should I just create smb-user PowerUser on the solaris box, or should I just usermap it to root? Seems like both ways it still asks for my credentials when I try to connect to the share from Windows while logged in as PowerUser. I would like for it to read my credentials automatically without asking for login/pass.

Unix/Linux permissions and ACLs have always been the hardest thing for me to configure for some reason.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
hello jahamala

do it similarly to how you would have done it on a real Windows box

1. use napp-it to create a user named poweruser on your Solaris box
2. use napp-it to add this user to the SMB group administrators on your Solaris box
3. use napp-it to create an SMB share with SMB defaults and guest access disabled

-> napp-it's SMB default settings will modify the Unix permissions and ACL settings from
"nobody is allowed to do anything" to "root is allowed to do everything and everybody is allowed to modify".


4. user mapping Windows-user -> unix-user
at this point you have to do one unix-specific setting:

The Unix user root is the owner of your shared folder.
If you want to log in as poweruser with admin permissions,
you need to set a mapping winuser:poweruser = unixuser:root

This is always the minimal needed user mapping (win:admin = unix:root).
(I usually use a user named administrator at this point;
then there is nearly no difference to a real Windows server.)
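On the Solaris console, that minimal mapping is one idmap rule per identity (a sketch; run as root, using the example names from this post):

```shell
# Map the Windows admin account/group onto the Unix root account/group
idmap add winuser:administrator unixuser:root
idmap add wingroup:administrators unixgroup:root
idmap list    # show the active mapping rules
```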

That's all. Now you have full access when you log in as poweruser. You may also create other users and set all ACL settings from your Windows computer.
All ACL settings are Windows-compatible with the Solaris kernel-based CIFS server.


user settings with napp-it on Solaris*:



Be aware of the following:

- If you create a user on Solaris, you can use this account for Windows access as well
- Solaris Unix groups are not used for Windows sharing; instead, Solaris uses additional Windows-compatible groups
- Solaris ACLs with the kernel-based SMB server are much more Windows-compatible than they are with Linux or Samba
- You always need a minimal mapping from Windows users/groups to Unix users/groups

example:
winuser:administrator=unixuser:root
wingroup:administrators=unixgroup:root


- You do not need any other mapping or special setting. You can now log in from your Windows computer to set all needed permissions.

!! Do not try to set ACLs from Solaris. It's not as easy as doing it from Windows. !!
The same goes for snapshot access and restore: do it from Windows with Previous Versions

or use
Solaris Nautilus with its Time Slider
-> select a folder and go back in time with the help of the slider; see http://java.dzone.com/news/killer-feature-opensolaris-200
 

jahamala

n00b
Joined
Jun 5, 2007
Messages
9
Well, I never could get it to recognize my credentials automatically. I think it was probably a case-sensitivity issue and the fact that Solaris wouldn't let me make a user called PowerUser. It said it was too long. Maybe because of the two uppercase letters? Anyway, I just created an administrator user like you suggested. When I connect to the share I can log in and change permissions, so I should be able to figure out how to create the users I want and give them access. I'm really not good at permissions from Windows either.

A tip for anyone else running Solaris 11, if you want smartmon to work you need to install gcc, otherwise it doesn't compile.

"pkg install gcc-3"
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
Well, I never could get it to recognize my credentials automatically. I think it was probably a case-sensitivity issue and the fact that Solaris wouldn't let me make a user called PowerUser. It said it was too long. Maybe because of the two uppercase letters? Anyway, I just created an administrator user like you suggested. When I connect to the share I can log in and change permissions, so I should be able to figure out how to create the users I want and give them access. I'm really not good at permissions from Windows either.

A tip for anyone else running Solaris 11, if you want smartmon to work you need to install gcc, otherwise it doesn't compile.

"pkg install gcc-3"
hello jahamala

Sun did a lot not only to build a good Unix server with the option to share via SMB, but also to be as Windows-compatible as possible.

OK,
Now you have the regular Windows administrator with admin permissions. You only need to add more users, like your desired poweruser (lowercase!), with napp-it.

Log in as administrator, create a new folder, right-click on it and set permissions like
administrator=full access
poweruser=modify

After this only administrator and poweruser can access the folder

The other possibility is:
you can set permissions not only on files and folders but also on a share itself,
just like you can on a real Windows server.

From your Windows box, open Computer Management, select
"Connect to another computer" and connect to your Solaris box.
Now you are able to set share permissions.


about smartmontools:
I have added gcc to the napp-it online installer; no need to install it separately
 

jahamala

n00b
Joined
Jun 5, 2007
Messages
9
Just another heads-up: when trying to create an iSCSI target, I was getting "itadm: command not found". It took some digging, but it looks like the iSCSI port provider isn't installed by default. "pkg install network/iscsi/target" and then restarting the service took care of it. Again, this is Solaris 11 Express.
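For reference, once that package is installed, the basic COMSTAR sequence looks something like this (a sketch with invented names; itadm and stmfadm are the standard Solaris Express tools):

```shell
# Make sure the target service (and its dependencies) run
svcadm enable -r svc:/network/iscsi/target:default

# Back a logical unit with a ZFS volume and publish it
zfs create -V 100G tank/lun0
stmfadm create-lu /dev/zvol/rdsk/tank/lun0
stmfadm add-view <GUID-printed-by-create-lu>

# Create the target that initiators connect to
itadm create-target
```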

Some of these missing package issues may be because I did a text install instead of booting the live cd and installing that way. I don't really know.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
thanks for the info

The Solaris installer (text version and Live version) does not install the iSCSI target service.
I have added it to the current napp-it online installer.

see
http://www.napp-it.org/downloads/changelog_en.html


ps
On OpenIndiana and Solaris Express I would prefer to install the Live version with the GUI. This version is not only more user-friendly but also has the Time Slider feature.
For me, a must-have on a NAS.

With Time Slider you can select a folder and go back in time, much better than Apple's Time Machine or Windows Previous Versions.

see http://java.dzone.com/news/killer-feature-opensolaris-200

Gea
 

jahamala

n00b
Joined
Jun 5, 2007
Messages
9
Thanks for the info on the slider, but it's not something I'm really interested in. My data is all mostly archival. And it's great to see that you're so quick and active with the development.

I must still be missing something with my user accounts or smb permissions config. I created administrator, added it to the administrators group, and an idmap is created to map it as root. I have a share created and shared over smb just as you suggested. When I browse to the share, it asks for user/pass and I can log in successfully as administrator and read/write/change data no problem. However if I right click the share and try to add a user and give them permissions I get the following:

An error occurred while applying security information to:
\\ZFSserver\Temp
Access is denied.
If I shift-right click Computer Management and run as administrator, I can then connect to ZFSserver. But, when I click on Shared Folders and Shares, I get this message:

You do not have permissions to see the list of shared folders for Windows clients.
When I go to Users and try to create a user, I get another error message:

The following error occurred while attempting to create the user testuser on computer ZFSSERVER
A remote procedure call (RPC) error has occurred.
Any ideas?
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
Have you tried to create a user on your Solaris server remotely from your Windows system?

That is not supported. You can only set ACLs at folder, file and share level remotely.
You have to create additional users with napp-it on your Solaris server and add them
to the SMB groups there.

After you have done this, you can use these users and groups from your Windows system
to restrict access to shares, files and folders. Right-click on your share/folder and select
Properties - Security to view or set permissions.

Gea
 

jahamala

n00b
Joined
Jun 5, 2007
Messages
9
Yeah, I tried creating a user remotely; I wasn't sure if that'd work or not. So I tried creating the users through napp-it, and they then show up under the permissions menu. However, when I try to add a user at the share level such as \\zfsserver\temp, I get permission denied. Though, when I create a folder under the temp folder I CAN give users permissions to that. I don't see what good that does if I can't get into the share in the first place. I'm still missing something.

I know I sound like a broken record by now!
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
ACLs at share level and at file/folder level are a feature of a Microsoft Windows Server.
It's fine for large installations, but you have two settings to deal with.

The default share permission is: no restriction,
so it's easier to do all access restrictions at folder level.

The Solaris kernel-based CIFS server supports all these advanced Microsoft features.
With napp-it, you can set functional defaults for shares and folders in
menu Folder, when you click on the ZFS name.

Set folder defaults to SMB (root=full, everybody=modify) and
share=full, and everything should work.

Gea
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
I recently had a discussion about folders and volumes,
but there was a lot of confusion about terminology.

I will try to list the most important ZFS terminology:

1. ) pool

First you have a pool, where you store your data. Pools can grow nearly without limit by adding vdevs.
Depending on the pool version you have different options available, like dedup or encryption. On a pool you create your filesystems.

On NexentaCore, OpenSolaris, OpenIndiana and Solaris Express, a pool, e.g. tank, is mounted as /tank;
NexentaStor mounts it as /volumes/tank


2.) vdev

pools are built from vdevs. A vdev can be built from a single disk or a raid set, e.g. a raidz1.
vdevs cannot grow or shrink, apart from the special case where you replace disk by disk, followed by a resilver.
In the case of a mirror you can add or remove a mirrored disk.


3.) block devices

vdevs are built from block devices,
usually disks, but there are other block devices like iSCSI LUNs or volumes


4.) ZFS filesystems

often called a ZFS folder, or just folder or volume (bad terminology).
These are independent filesystems, something like a partition on conventional systems.
Each filesystem has its own ZFS properties like dedup, compression, quota...
They are mounted below a pool just like a simple folder, but they do not inherit properties
automatically; e.g. if you share a pool via SMB, you will see these "folders" but you cannot access them.


5.) a simple folder in a filesystem

just a folder, nothing special. You cannot assign ZFS properties to it.


6.) ZFS-volumes

These are block devices, created on a ZFS pool. A volume is something like a virtual disk.
see http://dlc.sun.com/osol/docs/content/ZFSADMIN/gaypf.html
Volumes are usually used to build logical units for Comstar iSCSI.


7.) Logical units (LU)

These are used for Comstar iSCSI. They can be created from a file, a volume or a disk.
You can assign a LU to a SCSI target (the thing you connect to from your computer with an iSCSI initiator).
Logical units are only visible in a target if you have set a view to all targets, or to a target group of which the target is a member.


8.) LUN

If you have connected your computer to a target, you will see the defined logical units.
You can mount them just like local disks and use them as LUNs.


9.) shares

Their behaviour differs between NFS and SMB, and between Windows, *nix+Samba and the Solaris kernel-based CIFS server.

Windows: shares are convenient names for exported folders. Shares within other shares are allowed.
Solaris kernel-based SMB: a share is a property of a ZFS filesystem. Shares within a share are not possible.
If a ZFS filesystem is mounted below another pool or ZFS filesystem, you cannot switch to this folder after mounting the share.
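As a quick end-to-end illustration of how the terms above fit together (all names invented):

```shell
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0   # pool from one raidz1 vdev of three disks
zfs create tank/media                           # ZFS filesystem, mounted at /tank/media
zfs set compression=on tank/media               # properties live on the filesystem
mkdir /tank/media/movies                        # plain folder, no ZFS properties of its own
zfs set sharesmb=on tank/media                  # an SMB share is a filesystem property
zfs create -V 50G tank/vol0                     # ZFS volume: a block device on the pool
stmfadm create-lu /dev/zvol/rdsk/tank/vol0      # Comstar logical unit backed by the volume
```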


anything important missing or wrong?

Gea
 

rx7boy

n00b
Joined
Jan 13, 2011
Messages
5
Hi Gea

I guess you're right; I am not sharing a root pool but a ZFS filesystem, which for me is "Storage".

I see you say you like the time slider feature which I also like for obvious reasons. So do you use the time slider feature in Solaris Express or the snaps feature in your napp-it, or do they do the same thing? I thought they did but maybe I am wrong.

Also, if I want to create an AFP share on a folder that has a space in it, like "Johns Stuff", how does one list this in the Volumes file under AFP? I tried an underscore and a hyphen but no luck; I was only able to get it to work by taking the space out.

Thanks
John
rx7boy
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
about TimeSlider in Solaris Express 11 and OpenIndiana Live versions:

Time Slider is a function of the Nautilus file browser to browse already created snapshots.
You can create these snaps with napp-it's job management, e.g. daily, keep 30, or hourly, keep 24.

With Time Slider you can select a folder in the Nautilus file browser and browse into the past with
the slider. If you deleted a file at some point, it will reappear as you slide over older snaps.

This is much more comfortable than the time-based selection of Windows Previous Versions
or Apple's Time Machine. There you select a date and can browse the files within that snap; if the file isn't there, you have to search a snap from another date.
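Under the hood, such an auto-snap job is just `zfs snapshot` plus a keep-N cleanup. A minimal hand-rolled sketch (dataset name and naming scheme invented; this is not napp-it's actual job code):

```shell
#!/bin/sh
# Hourly snapshot of tank/data, keeping only the newest 24 "auto-" snaps
zfs snapshot tank/data@auto-$(date +%Y%m%d-%H%M)

# zfs list -s creation sorts oldest first, so head yields the oldest snaps
snaps=$(zfs list -H -t snapshot -o name -s creation -r tank/data | grep '@auto-')
total=$(printf '%s\n' "$snaps" | wc -l)
if [ "$total" -gt 24 ]; then
  printf '%s\n' "$snaps" | head -n $((total - 24)) | xargs -n1 zfs destroy
fi
```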


about spaces in share and usernames

This is always problematic on Unix systems. I suppose it's not possible with AFP shares.
But as of today you can use uppercase letters and spaces in SMB usernames with the current
napp-it nightly.


about setting up the kernel-based SMB server - the simplest way

After installing napp-it, reset the root password at the console with passwd root to create an SMB password. Add other SMB users when needed.
Connect the share from Windows via \\servername (not the IP) and log in as root. No mapping is needed (see Services - SMB). Set all user ACLs from Windows.
It is recommended to use lowercase usernames without spaces. The newest ZFS OS versions like SE 11 and OpenIndiana support uppercase letters and spaces in usernames,
although it's usually better to avoid them on Unix systems.
Gea
 

jahamala

n00b
Joined
Jun 5, 2007
Messages
9
OK, enabling napp-it to accept uppercase letters in usernames seems to have fixed all of my problems. I started from scratch and removed all the ID mappings, then recreated the users and I can admin the folders from Windows now.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
hello jahamala

ID user mapping between Unix and Windows isn't fun at all -
better to avoid it if possible, as with current Solaris-based systems

ps
i have updated my mini-howto doc, see
napp-it.pdf

Gea
 

jahamala

n00b
Joined
Jun 5, 2007
Messages
9
Yes. Previously I was giving it a username "Joe" and the webgui was creating user "joe" instead. I was in the middle of setting up my usernames manually from the CLI when I saw your note above about now being able to use uppercase and spaces in names and shares. Once I updated napp-it, this solved all of my previous problems with share permissions as it was now creating and recognizing user accounts the way I intended!

Now after working through a couple bugs and the missing pkg issues we went through earlier, functionally, napp-it on solaris 11 express is perfect for my needs. All that is missing is a few GUI enhancements (SMART, link agg, etc) and I'll be 100% happy. I'll definitely be suggesting this to others with similar needs as mine.
 

rx7boy

n00b
Joined
Jan 13, 2011
Messages
5
Hi Gea

I noticed SE 11 is not automatically deleting snaps. It looks like you corrected this as of 0.412. I have an hourly snapshot job set up to keep a maximum of 50, but it is not getting rid of the old ones. Could it be something I have done wrong?

Thanks
John
rx7boy
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
hello rx7boy

about autojob: autosnap
I'm working heavily on the new autojob "high-speed sync/replicate between two hosts".
With the 0.412 nightly I changed one of its behaviours, with a side effect on auto-deleting snaps.

see my changelog

please update to newest nightly
wget -O - www.napp-it.org/nappit | perl

then manually delete snaps with the new delete menu


then delete old snap-jobs and recreate them


Gea
 

rx7boy

n00b
Joined
Jan 13, 2011
Messages
5
Hi Gea

Ahh yes, I will give this a go and let you know how it goes.

Thanks
John
rx7boy
 

wingfat

Weaksauce
Joined
May 13, 2010
Messages
115
Gea..
Thank you for your hard work so that people like me can use these unix based systems. I am having a problem with your new replicate system. Could you give a step by step for this? I either get snapshots on the master or nothing on the slave. It all seems to work backwards for me ..ok stop laughing...
I can't get anything to work if I use the localhost in the first field. If I use the remote master it will work but the snapshots end up on the master, not the backup system from which I'm setting this all up.
I need this feature as badly as you... My backups via CIFS take 3 days (@ 5 TB). I have tried to learn ZFS replication and snapshots, but all I'm good at is using the time slider... and I can't even use that to duplicate the snapshots to another system (unless I set up iSCSI, which causes MAJOR shutdown problems). I would also like to see a way to automatically restore/clone my data from these snapshots if I were to totally lose my master server. Thank you!

NB. I'm using 2 Solaris 11 systems (neither virtual)
jeff
 

Firebug24k

Weaksauce
Joined
Aug 31, 2006
Messages
106
Thanks a lot for your excellent NAPP-IT - I've got it up and running on Solaris 11. I've got a small 3-drive (2TBx3) Raidz1 array right now, and I'd like to add six more 2TB drives. Am I correct that I can take my existing 3 drive array and add six more drives, so that I end up with a 9 drive Raidz3 (without losing my current data)? Thanks!
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
Gea..
Thank you for your hard work so that people like me can use these Unix-based systems. I am having a problem with your new replicate system. Could you give a step-by-step for this? I either get snapshots on the master or nothing on the slave. It all seems to work backwards for me ..ok stop laughing...
I can't get anything to work if I use localhost in the first field. If I use the remote master it will work, but the snapshots end up on the master, not on the backup system from which I'm setting this all up.
I need this feature as badly as you... My backups via CIFS take 3 days (@ 5 TB). I have tried to learn ZFS replication and snapshots, but all I'm good at is using the Time Slider, and I can't even use that to duplicate the snapshots to another system (unless I set up iSCSI, which causes MAJOR shutdown problems). I would also like to see a way to automatically restore/clone my data from these snapshots if I were to totally lose my master server. Thank you!

NB. I'm using 2 Solaris 11 systems (neither virtual)
jeff
about high-speed replication (full/incremental with zfs send over netcat or mbuffer)
- replication is still work in progress, currently developed on NexentaCore
- napp-it's high-speed replication always pulls data, so you have to set up the job on the target machine
- there are still some problems with OpenIndiana/SE11 and remote control
- we moved from the fastest option, mbuffer, to netcat due to problems with large ZFS datasets

I suppose we will need a few more nightlies to have it bug-free on all supported platforms.

If you want to test replication, you have to:
- menu Groups - add member: add the host you want to pull data from
- menu Jobs - replicate - zfs send: create a replication job from remote ZFS -> local ZFS
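The pull-style replication described above can be sketched by hand with zfs send/receive piped through mbuffer. This is only an illustration of the mechanism, not napp-it's actual internals: pool, dataset and snapshot names, the port, and the buffer sizes below are all placeholders/assumptions.

```shell
# Target host (the one that pulls): listen on a TCP port and
# receive the incoming stream into a local dataset.
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F backup/data &

# Source host (started remotely by the target, e.g. via ssh):
# send the increment between two snapshots to the listener.
zfs send -i tank/data@snap1 tank/data@snap2 | mbuffer -s 128k -m 1G -O target:9090
```

With netcat the idea is the same: replace the mbuffer pair with `nc -l 9090 | zfs receive ...` on the target and `zfs send ... | nc target 9090` on the source; you lose mbuffer's in-memory buffering but avoid the issues mentioned above with large streams.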

ps
clone/restore from snaps
The easiest way to access snaps for clone or restore is via Windows "Previous Versions".

Keep CIFS in sync with a backup server
I do this with my 18 TB filer via Windows robocopy \\server1\folder \\server2\folder /b /mir

This is an ultra-fast file-based sync that copies only changed files with full ACL support (all ACLs but owner).
ZFS replication, on the other hand, creates full replicas of a pool with all ZFS properties, snaps and volumes. It's mostly needed with virtualization.



Gea
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
Thanks a lot for your excellent NAPP-IT - I've got it up and running on Solaris 11. I've got a small 3-drive (2TBx3) Raidz1 array right now, and I'd like to add six more 2TB drives. Am I correct that I can take my existing 3 drive array and add six more drives, so that I end up with a 9 drive Raidz3 (without losing my current data)? Thanks!
No, that is not possible.

about pools:
- ZFS stores data on pools. Pools are built from vdevs (raid sets).
- Pools can grow nearly without limit by adding more vdevs.
- Pools also get faster by adding new vdevs.

about vdevs:
vdevs (raid sets) are built from one (basic) or more disks (mirror, raid-z).
You cannot shrink or expand a vdev. Only a basic vdev (one disk) can be
extended by attaching a mirror disk. You may also replace the disks of a
raid-z with larger ones, each replacement followed by a resilver, to
increase the size of the raid-z.

Other systems (Windows, Linux) store data directly on raid sets; they don't
have the pool-of-raidsets concept.

Keep in mind:
ZFS was designed for large installations with high performance needs.
With every new vdev added to a pool, the pool becomes larger and faster.

In your case,
I would add a raid-z with 5 disks plus a spare disk to the pool.

If you do not need a performance increase but maximal space, you have to
copy your files to a backup disk and create a new 9-disk raid-z3.
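
The vdev rules above can be sketched with zpool commands; the pool name and device names (c1t0d0 etc.) are placeholders for whatever your system reports.

```shell
# A pool built from one 3-disk raidz1 vdev (the current layout):
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0

# The pool grows (and gets faster) by ADDING a second vdev --
# here a 5-disk raidz1 -- plus a hot spare. The existing vdev
# is left untouched:
zpool add tank raidz1 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
zpool add tank spare c1t8d0

# What is NOT possible: widening the existing raidz1 or changing
# its parity level. 'zpool attach' only works on single-disk and
# mirror vdevs, so this fails on a raidz vdev:
#   zpool attach tank c1t0d0 c1t9d0
```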

see also my updated mini HowTo Setup a ZFS Server

Gea
 

axan

[H]ard|Gawd
Joined
Nov 5, 2005
Messages
1,935
Playing around with Solaris Express 11, but running into a weird issue: every time I reboot, it changes my /etc/resolv.conf.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
3,938
Do you use DHCP?
Do you always get the same entries in /etc/resolv.conf?
Have you selected a boot environment?

Gea
 

adi

Limp Gawd
Joined
Aug 1, 2002
Messages
399
If you are using a static IP address, you have to disable the network manager and use the default network service.
http://forums.oracle.com/forums/thread.jspa?threadID=2139833 is what I followed.
Basically:
svcadm disable network/physical:nwam
svcadm enable network/physical:default

Or you can try and make the changes to /etc/default/dhcpagent per that page.

And from there set up your IP information manually, using directions from http://wiki.sun-rays.org/index.php/SRS_5.1_on_Solaris_11_Express (not the packages, just the legacy network part)

Basically, even if you have a static IP address set, in some cases Solaris 11 still does stupid things with DHCP and overwrites some files (like resolv.conf).
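
For reference, the "legacy network part" from that page amounts to something like the following sketch. The interface name (e1000g0) and all addresses are placeholders; use whatever your NIC and network actually are.

```shell
# switch from automatic configuration (nwam) to manual
svcadm disable network/physical:nwam
svcadm enable network/physical:default

# static address for the interface (legacy files, read at boot)
echo "192.168.1.10/24" > /etc/hostname.e1000g0
echo "192.168.1.1" > /etc/defaultrouter

# DNS: resolver config plus enabling DNS in the name-service switch
echo "nameserver 192.168.1.1" > /etc/resolv.conf
cp /etc/nsswitch.dns /etc/nsswitch.conf
```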
 

grausch

Weaksauce
Joined
Jan 31, 2011
Messages
112
Hi Gea,

Thank you for creating this great interface.

I installed it yesterday morning, played around with it a bit and then did a fresh install of OpenSolaris b134 to set up my file server.

However, when I run the wget instruction, this is the output I get now:

rauserv@RauschServer:~$ su
Password:
rauserv@RauschServer:~# wget -O www.napp-it.org/nappit | perl
Backticks found where operator expected at - line 4, at end of line
(Missing semicolon on previous line?)
syntax error at - line 2, near "Usage:"
Can't find string terminator "`" anywhere before EOF at - line 4.
rauserv@RauschServer:~#

Do you have any idea what is wrong?

Any advice will be much appreciated.

P.S. First time round, I followed exactly the same installation procedure and both times napp-it was the first thing I installed (or tried to).
 