OpenSolaris derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

Astronot

Weaksauce
Joined
Feb 1, 2008
Messages
83
You may:

- create a new job with initial replication
- create a new job and manually edit to use the old job-id and the old snap-pair
- manually edit the old groups-info -> does not work, the group key is created from pw + host-info

If the pool is not too big, use option 1.

Gea


The pool will be > 3TB on its initial replication, so waiting a week or two for it to run over the WAN isn't an option. What file would have to be edited for option #2? Also, would it be possible to remove the IP as a unique identifier and use the DNS name and/or a GUID or something instead?

Edit: Also, just to be clear: when creating a replication job, I create it on the receiving host. Does this store any information on the sending host? If the receiving host configures the job, how does it describe its own (receiving) host-info in the job file?
 
Last edited:

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
You may:

- create a new job with initial replication
- create a new job and manually edit to use the old job-id and the old snap-pair
- manually edit the old groups-info -> does not work, the group key is created from pw + host-info

If the pool is not too big, use option 1.

Gea

no warranty, no support, try first with a test-ZFS

- create a new job with the same settings

- look at the folder /var/web-gui/data/napp-it/_log/jobs and
replace the old job-id (in filename and content) with the new one,
or put the old job-id into the new job

Only the job-id -> snap-pair name is relevant.

Maybe it works.

Gea
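The file edit Gea describes can be sketched in shell. A hypothetical example, run against a throwaway directory standing in for the live /var/web-gui/data/napp-it/_log/jobs; both job-ids are made-up placeholders:

```shell
# Work in a scratch directory standing in for .../napp-it/_log/jobs
cd "$(mktemp -d)"
OLD=1306722125                       # old job-id (placeholder)
NEW=1307000000                       # new job-id (placeholder)
printf 'id=%s\n' "$NEW" > "$NEW.pl"  # fake file of the newly created job

# Rename every file of the new job to the old id and
# replace the id inside the file content as well.
for f in "$NEW"*; do
  new_name=$(printf '%s' "$f" | sed "s/$NEW/$OLD/")
  sed "s/$NEW/$OLD/g" "$f" > "$new_name" && rm "$f"
done
ls
```

On a real system, do this only with the job stopped, and on a test-ZFS first, as Gea says.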
 
Last edited:

Astronot

Weaksauce
Joined
Feb 1, 2008
Messages
83
Here's another question, beta-style and all:

I create a replication job for a pool with recursive=on and no info in the sharesmb fields.
In this pool I have multiple ZFS folders with smb=on.
On the replication destination, the ZFS folders are shared as they were on the source, but I would like it not to do so.

1. Is there a setting to turn off all the smb shares when replicating?
2. If I disable smb on the resulting destination zfs folders, will it break the replication job?
 

intel

n00b
Joined
Apr 5, 2009
Messages
35
I just migrated from my Ubuntu FUSE ZFS setup to Gea's NexentaCore + napp-it.

I backed up my filesystem on a spare 2TB external drive, but just FYI, there is a bug when importing ZFS pools: it will not create a symlink under /nameofpool, so you cannot benchmark using napp-it.

To ensure a trouble-free install I destroyed my pool and will restore my files from backup, but FYI on that bug.
 

danswartz

2[H]4U
Joined
Feb 25, 2011
Messages
3,704
Either is fine. OI is a newer code base. Also, OI is (AFAIK) an open-source project - with NC, you are at the mercy of Nexenta updates...
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
Here's another question, beta-style and all:

I create a replication job for a pool with recursive=on and no info in the sharesmb fields.
In this pool I have multiple ZFS folders with smb=on.
On the replication destination, the ZFS folders are shared as they were on the source, but I would like it not to do so.

1. Is there a setting to turn off all the smb shares when replicating?
2. If I disable smb on the resulting destination zfs folders, will it break the replication job?


Currently, the script disables SMB sharing only for the replicated ZFS, not for embedded ones
(will be included in one of the next versions).

You may disable sharing, but it is reset on the next replication.


Gea
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
Currently, the script disables SMB sharing only for the replicated ZFS, not for embedded ones
(will be included in one of the next versions).

You may disable sharing, but it is reset on the next replication.


Gea

30.5.2011 0.500i nightly
replication: disable sharing recursively on destination (not working on Nexenta)
new feature: menu disk-smartinfo with basic smartinfos (like sn, type, status and temp)
 
Last edited:

s0rce

Limp Gawd
Joined
Jan 17, 2011
Messages
495
30.5.2011 0.500i nightly
replication: disable sharing recursively on destination (not working on Nexenta)
new feature: menu disk-smartinfo with basic smartinfos (like sn, type, status and temp)

my napp-it isn't reporting 0.500i when I check for the latest update. Still on g.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
my napp-it isn't reporting 0.500i when I check for the latest update. Still on g.

I removed the 0.500i nightly due to a replication bug on Nexenta
(Nexenta does not support recursive setting of ZFS properties).

Wait until tomorrow.

Gea
 
Last edited:

Astronot

Weaksauce
Joined
Feb 1, 2008
Messages
83
It would be sweet if eventually there existed a napp-it replication FAQ that described things like what job status messages mean (in testing, I see things like local-partly and remote-missing during a job that eventually ends without error), what a proper job looks like through its run, and maybe common error explanations.
 

Astronot

Weaksauce
Joined
Feb 1, 2008
Messages
83
Regarding the new job-schedule additions: I set up some jobs with xBC times, like an every-15BC minute job, an every-1BC and an every-2BC, but none of the jobs I created with a BC time have run since their creation a day ago.
 

Freak1

Limp Gawd
Joined
Sep 9, 2009
Messages
191
I just ordered the SAS 9201-16i. Can't wait to get it so I can test some more.
 

Astronot

Weaksauce
Joined
Feb 1, 2008
Messages
83
bugfix: autojob, set timer with ABCD

Impressive. Do prior jobs using ABCD need to be recreated?

Edit: I also notice that Disks > Diskinfo now shows serial numbers for drives connected to SAS2008 controllers (under smart_sn, not sn). This is excellent! They're not yet showing under Disks or Disks > Smartinfo, but as it is, it makes things easier :)
 
Last edited:

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
It would be sweet if eventually there existed a napp-it replication FAQ that described things like what job status messages mean (in testing, I see things like local-partly and remote-missing during a job that eventually ends without error), what a proper job looks like through its run, and maybe common error explanations.

ZFS replication is simple at its core, but it takes a lot to catch all the
possible problems that may occur.

What happens during a replication job:

Phase 1:
- start a zfs receive piped from netcat as the net-transport application
(the job overview displays locally-running if ok, or init/end at the begin or end of a transmission)

Phase 2:
- remotely create a snap
- remotely start zfs send piped to netcat as the net-transport application
(the job overview displays two bullets in the remote field while running)

Phase 3:
- end of transmission
- on the sender side: zfs send quits; after a few seconds, netcat terminates
- on the receiver side: netcat terminates; after up to 60 s, zfs receive terminates
(the job overview displays init/end locally and one or two bullets or - on the sender side)

Phase 4:
- create a local destination snap
- the next replication is based on this source-destination snap-pair

ps
0.500j nightly from today is out (hope it works on Nexenta/OI/SE)
http://napp-it.org/downloads/changelog_en.html

Gea
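The four phases above boil down to a zfs send piped through netcat on each side. An illustrative manual equivalent, not napp-it's actual script; hostnames, dataset names, the port, and the snapshot name are all placeholders:

```shell
# On the destination host: netcat listens and feeds zfs receive (Phase 1).
nc -l -p 8023 | zfs receive tank/backup &

# On the source host: create the snap, then stream it through netcat (Phase 2).
zfs snapshot tank/data@job-1306722125
zfs send tank/data@job-1306722125 | nc destination-host 8023

# Phase 3 happens by itself: zfs send finishes, both netcats terminate,
# and zfs receive exits once the stream is complete.
# Phase 4: the received snapshot on the destination forms the snap-pair
# that the next incremental send (zfs send -i) is based on.
```

Netcat option syntax (`-l -p` vs `-l`) varies between netcat builds, so check your platform's nc first.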
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
Impressive. Do prior jobs using ABCD need to be recreated?

Edit: I also notice that Disks > Diskinfo now shows serial numbers for drives connected to SAS2008 controllers (under smart_sn, not sn). This is excellent! They're not yet showing under Disks or Disks > Smartinfo, but as it is, it makes things easier :)

Menu disk-smartinfo should work.
You may also manually edit jobs; the timer is part of the jobname.
See /var/web-gui/data/napp-it/_log/jobs

Just edit the name and set the hour field, for example from every to every-2BC.
..will be editable via the GUI some time...


Gea
 

Astronot

Weaksauce
Joined
Feb 1, 2008
Messages
83
Menu disk-smartinfo should work.
You may also manually edit jobs; the timer is part of the jobname.
See /var/web-gui/data/napp-it/_log/jobs

Just edit the name and set the hour field, for example from every to every-2BC.
..will be editable via the GUI some time...


Gea

You're correct re: disk-smartinfo; it's there, I just missed it because of the two different fields.

I didn't even have to edit the ABCD jobs; they're running now :)
 

jwinsor566

n00b
Joined
Feb 26, 2011
Messages
13
I use that mainboard, but I do not have any expander (although I may buy one like
you have - with the new LSI SAS2 chipset)

What I would try:
Connect disks directly to 2008 controller

otherwise ask at http://forums.servethehome.com/showthread.php?148-Intel-RES2SV240-24-port-SAS2-Expander-Wiki&

(thread about Intel, but it seems the same chipset)

Gea
FYI... I am not 100% positive on this, but I think I am getting these errors because I do not have the 2nd controller hooked up to the backplane. I am ordering the second cable and will see if this resolves it. I would rather add the extra cable and maybe make use of MPxIO than disable the port to get rid of the errors.
 

Astronot

Weaksauce
Joined
Feb 1, 2008
Messages
83
no warranty, no support, try first with a test-ZFS

- create a new job with the same settings

- look at the folder /var/web-gui/data/napp-it/_log/jobs and
replace the old job-id (in filename and content) with the new one,
or put the old job-id into the new job

Only the job-id -> snap-pair name is relevant.

Maybe it works.

Gea

Here is what I have done, which *appears* to work:

I have job 1306722125, which replicates a ZFS folder from host-a to host-b.
host-b has an ip address change.

1. host-b appears offline on host-a under extensions-appliance-group. Remove host-b and re-add with the new ip address.
2. Test replication job on host-b. Job hangs because zfs-send is attempting to send to host-b's old ip address. Cancel job.
3. Edit 1306722125.pl and 1306722125~inforemoved.job, replacing 2 instances of host-b's old ip with the new ip.
4. Test job. Job completes successfully, and a changed test file is updated in host-b as expected.

Is there anything that I may have missed or broken? I can see how it may be complex to resolve this as a feature after seeing how the information is used in the files.
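Step 3 above can be done with sed instead of a hand edit. A sketch against a throwaway directory; the IPs are placeholders, and on a real box the files live under /var/web-gui/data/napp-it/_log/jobs:

```shell
# Sketch of step 3: swap host-b's old IP for the new one in the job files.
# The IPs and the scratch directory are placeholders for illustration.
cd "$(mktemp -d)"
printf 'target=192.168.1.10\nhost=192.168.1.10\n' > 1306722125.pl  # fake job file
OLDIP=192.168.1.10
NEWIP=192.168.1.20
for f in 1306722125*; do          # in the real case: the .pl and ~inforemoved.job files
  sed "s/$OLDIP/$NEWIP/g" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
grep "$NEWIP" 1306722125.pl
```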
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
Here is what I have done, which *appears* to work:

I have job 1306722125, which replicates a ZFS folder from host-a to host-b.
host-b has an ip address change.

1. host-b appears offline on host-a under extensions-appliance-group. Remove host-b and re-add with the new ip address.
2. Test replication job on host-b. Job hangs because zfs-send is attempting to send to host-b's old ip address. Cancel job.
3. Edit 1306722125.pl and 1306722125~inforemoved.job, replacing 2 instances of host-b's old ip with the new ip.
4. Test job. Job completes successfully, and a changed test file is updated in host-b as expected.

Is there anything that I may have missed or broken? I can see how it may be complex to resolve this as a feature after seeing how the information is used in the files.

if it works, it's ok

Gea
 

Obscurax

n00b
Joined
Mar 19, 2011
Messages
20
What do these illegal requests mean? Google didn't help me much.
Should I worry? I'm worried something is wrong, but I have no clue what to do...
I am using a flashed Intel SASUC8I as the controller.

Code:
  pool: rpool
 state: ONLINE
 scan: resilvered 8.12G in 0h3m with 0 errors on Mon May 30 23:18:48 2011
config:

	NAME          STATE     READ WRITE CKSUM
	rpool         ONLINE       0     0     0
	  mirror-0    ONLINE       0     0     0
	    c3t0d0s0  ONLINE       0     0     0
	    c3t1d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
 scan: scrub repaired 0 in 1h48m with 0 errors on Tue May 31 04:48:49 2011
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    c1t0d0  ONLINE       0     0     0
	    c1t1d0  ONLINE       0     0     0
	    c1t2d0  ONLINE       0     0     0
	    c1t3d0  ONLINE       0     0     0
	    c1t4d0  ONLINE       0     0     0
	    c1t5d0  ONLINE       0     0     0

errors: No known data errors


Code:
c3t0d0 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	  	 
  	Vendor: ATA 	Product: WDC WD1600BEVT-0 	Revision: 1A01 	Serial No: 	 
  	Size: 160.04GB <160041885696 bytes> 	  	  	  	 
  	  	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0
  	Illegal Request: 12 	Predictive Failure Analysis: 0
c3t1d0 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	  	 
  	Vendor: ATA 	Product: WDC WD1600BEVT-0 	Revision: 1A01 	Serial No: 	 
  	Size: 160.04GB <160041885696 bytes> 	  	  	  	 
  	  	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0
  	Illegal Request: 11 	Predictive Failure Analysis: 0
c1t0d0 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	  	 
  	Vendor: ATA 	Product: Hitachi HDS5C302 	Revision: A580 	Serial No: 	 
  	Size: 2000.40GB <2000398934016 bytes> 	  	  	  	 
  	  	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0
  	Illegal Request: 14 	Predictive Failure Analysis: 0
c1t1d0 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	  	 
  	Vendor: ATA 	Product: Hitachi HDS5C302 	Revision: A580 	Serial No: 	 
  	Size: 2000.40GB <2000398934016 bytes> 	  	  	  	 
  	  	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0
  	Illegal Request: 14 	Predictive Failure Analysis: 0
c1t2d0 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	  	 
  	Vendor: ATA 	Product: Hitachi HDS5C302 	Revision: A580 	Serial No: 	 
  	Size: 2000.40GB <2000398934016 bytes> 	  	  	  	 
  	  	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0
  	Illegal Request: 14 	Predictive Failure Analysis: 0
c1t3d0 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	  	 
  	Vendor: ATA 	Product: Hitachi HDS5C302 	Revision: A580 	Serial No: 	 
  	Size: 2000.40GB <2000398934016 bytes> 	  	  	  	 
  	  	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0
  	Illegal Request: 14 	Predictive Failure Analysis: 0
c1t4d0 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	  	 
  	Vendor: ATA 	Product: Hitachi HDS5C302 	Revision: A580 	Serial No: 	 
  	Size: 2000.40GB <2000398934016 bytes> 	  	  	  	 
  	  	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0
  	Illegal Request: 14 	Predictive Failure Analysis: 0
c1t5d0 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	  	 
  	Vendor: ATA 	Product: Hitachi HDS5C302 	Revision: A580 	Serial No: 	 
  	Size: 2000.40GB <2000398934016 bytes> 	  	  	  	 
  	  	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0
  	Illegal Request: 14 	Predictive Failure Analysis: 0
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
OpenIndiana 151 developer release available
This is the first available release based on Illumos (independent from Oracle)

info: http://wiki.openindiana.org/oi/oi_151
download: http://dlc-int.openindiana.org/151/oi-dev-151-text-x86-20110526-1.iso

this release is beta, with a text-only installer, and for testing only,
but I could install it without problems. The ZFS pool is already on version 28 (without encryption).

see also the discussions on the OpenIndiana chats if you want to stay informed or help develop
http://openhatch.org/meeting-irc-logs/oi-meeting/2011-05-31.log.html
http://echelog.matzon.dk/logs/browse/openindiana
http://echelog.matzon.dk/logs/browse/oi-dev


Gea


@Obscurax
about the illegal-request messages:
I would ignore them unless something else happens.

Gea
 
Last edited:

Astronot

Weaksauce
Joined
Feb 1, 2008
Messages
83
On a 1.4TB ZFS folder, I scheduled a replication job. On its first run, it proceeded to somewhere around 64%, then the progress reversed itself, counting down the percentage, and the job finished without presenting an error. The target ZFS folder showed no data.

Immediately running the job again errored out and reported a lack of a target snapshot for the pair, so I deleted the target and initiated the job manually again. This time it finished properly.
 

ChrisBenn

Limp Gawd
Joined
Feb 21, 2011
Messages
440
Re: Illegal Request: 14

If they keep going up constantly, it probably indicates some issue - if the numbers are always 14 after a reboot, then something (likely the drivers) is throwing the errors during the bootup cycle.

But as long as the number isn't constantly increasing while you have disk access, it's not really anything to be concerned about (imo).
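One way to check whether the counters are still climbing, rather than fixed at their boot-time value, is to snapshot the error statistics twice and compare. A Solaris-side sketch using `iostat -En`, which is where these per-device counters come from:

```shell
# Capture the per-device error counters, wait a while under normal
# disk load, capture again, and compare.
iostat -En > /tmp/errors.before
sleep 600
iostat -En > /tmp/errors.after
diff /tmp/errors.before /tmp/errors.after || echo "counters changed"
```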
 

forumator

Weaksauce
Joined
Nov 6, 2009
Messages
93
Total newbie here, just now setting up my new napp-it ZFS server. I installed the OpenIndiana 151 build to a flash drive, but as Gea said it's the text-only version... what is the command to install GNOME 3 from the CLI?
 

Obscurax

n00b
Joined
Mar 19, 2011
Messages
20
@Obscurax
about illegal requests messages
I would ignore unless some other things happens.

Gea

Re: Illegal Request: 14

If they keep going up constantly, it probably indicates some issue - if the numbers are always 14 after a reboot, then something (likely the drivers) is throwing the errors during the bootup cycle.

But as long as the number isn't constantly increasing while you have disk access, it's not really anything to be concerned about (imo).

The illegal requests keep increasing; on the OS drives the count got to 816!
On the raidz2 pool it got to 30. Any way to figure out what is causing this?
No problems accessing the disks, though.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
Total newbie here, just now setting up my new napp-it ZFS server. I installed the OpenIndiana 151 build to a flash drive, but as Gea said it's the text-only version... what is the command to install GNOME 3 from the CLI?

use OI 148 live or wait some days until OI 151 live is ready.


about the illegal requests:
I suppose it's hard to find the reason (try another controller/SATA port to verify that the
controller or driver is the reason; if it's the controller/driver, try reflashing)

But due to checksums, you will get informed if there are data problems.

Gea
 

wheelz

Weaksauce
Joined
Feb 4, 2011
Messages
100
Gea - I think I might have found a bug. I am on napp-it v0.500k, and for some reason my iSCSI target's authentication got set back to none. I tried to use the napp-it web interface to correct it (by setting it to CHAP). With or without all fields filled in, it would return as if the command was successful, but when I went back to the Target screen it still showed none for the auth. I then used the "itadm modify-target -a chap <target name>" command and that worked.
 

Garuda888

n00b
Joined
Mar 11, 2011
Messages
29
Does installing to a USB drive make sense? And does the read/write speed of the key make a difference? The 8GB stick I was using seemed to run out of space on my Solaris 11 Express install when I installed GDM for the GUI. So I bought a Kingston 16GB key, but it's only 10MB/s read / 5MB/s write, whilst the Patriot Rage was something like 25MB/s.

Someone previously recommended installing the OS onto one of the drive pools/arrays I'm putting in the server, but it's spec'd as follows:
4x 300GB 15K SAS
6x 2TB Hitachi 5K3000

Perhaps a couple of SSDs for ZIL and L2ARC at a later date.

1. Is a USB key ok? It's not redundant, so possibly susceptible to failure? It will be running non-stop for approx 2 years+.
2. If I were to install Solaris Express 11 on the disks instead of the USB key, which disk set, and how do I size the OS partitions, since I discovered I can't shrink a ZFS pool?
3. Can I add the SSDs later, or must it be now?
4. Does the SSD speed matter significantly? E.g. only the latest SSDs have ~250MB/s R/W, approx 130 bucks for a 60GB - figuring an Agility 3 or something.

Thanks much!
 

wheelz

Weaksauce
Joined
Feb 4, 2011
Messages
100
Does installing to a USB drive make sense? And does the read/write speed of the key make a difference? The 8GB stick I was using seemed to run out of space on my Solaris 11 Express install when I installed GDM for the GUI. So I bought a Kingston 16GB key, but it's only 10MB/s read / 5MB/s write, whilst the Patriot Rage was something like 25MB/s.

Someone previously recommended installing the OS onto one of the drive pools/arrays I'm putting in the server, but it's spec'd as follows:
4x 300GB 15K SAS
6x 2TB Hitachi 5K3000

Perhaps a couple of SSDs for ZIL and L2ARC at a later date.

1. Is a USB key ok? It's not redundant, so possibly susceptible to failure? It will be running non-stop for approx 2 years+.
2. If I were to install Solaris Express 11 on the disks instead of the USB key, which disk set, and how do I size the OS partitions, since I discovered I can't shrink a ZFS pool?
3. Can I add the SSDs later, or must it be now?
4. Does the SSD speed matter significantly? E.g. only the latest SSDs have ~250MB/s R/W, approx 130 bucks for a 60GB - figuring an Agility 3 or something.

Thanks much!

For my OS drive I bought 2x Super Talent 16GB MLC 7-pin SATA DOMs (mirrored). This saved my HD slots for data drives. It is not noticeably slow for me.
 

Garuda888

n00b
Joined
Mar 11, 2011
Messages
29
For my OS drive I bought 2x Super Talent 16GB MLC 7-pin SATA DOMs (mirrored). This saved my HD slots for data drives. It is not noticeably slow for me.

Quite expensive, no? I looked; those are ~$100 or so. Wouldn't you have been better off getting 2x 60GB SSDs and carving up a portion for the OS, then using the rest for ZIL/L2ARC? *Is that even possible?

I'm thinking maybe two USB keys mirrored... (is that possible?) Does the OS in a dedupe scenario require significant read/write performance? I read somewhere about the dedupe table needing to be moved to quicker disk for better performance.

I have extra 10k/15k 3.5" drives, but I'm trying to save the drive bays and reduce heat/noise in the SAN. At worst I could probably buy a couple of SSDs for the OS... but that's far from ideal given the cost.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
..Wouldn't you have been better off getting 2x 60GB SSDs and carving up a portion for the OS, then using the rest for ZIL/L2ARC? *Is that even possible?

I'm thinking maybe two USB keys mirrored... (is that possible?) Does the OS in a dedupe scenario require significant read/write performance? I read somewhere about the dedupe table needing to be moved to quicker disk for better performance..

DO NOT!

It's possible to slice a disk, but it is not supported and not suggested.
Solaris is designed to have a separate boot disk. If you need read- or write-cache
drives, also use extra disks, not only for performance but also for pool security.
E.g. up to pool v19 you may lose the pool if you lose your write cache.
With current pool versions it is not as critical: the pool keeps working, but you cannot
import such a pool. If your write cache is on your boot drive and this drive fails, you are lost.

The same goes for a USB boot disk. That is ok with a minimal OS like ESXi, which runs
completely from RAM and where it does not matter if it boots in a few minutes.
Solaris is a heavy-weight OS, just like a Windows server OS. If you put it on a USB stick
(not supported), the storage features may work without speed problems, but all management is a pain.

Use a cheap SSD or a 24x7 2.5" SATA boot disk instead - nearly the same price!


Gea
 
Last edited:

astrapak

n00b
Joined
Jun 6, 2011
Messages
3
Hello,

First of all, thank you _Gea for your nice work, it's a good webgui :)

Everything is fine for me except one thing -> the ACLs.

So I installed Solaris Express 11 with napp-it and I built a raidz with 4 disks for testing. Next I created a folder and shared it (without guest access // 777). Then I created several users within the admin group. I can access/write in my shared folder with root and with these users, but I'm not able to modify the rights per user.

I have something like this:
14Anonyme-20110606-221631.png

xp pro sp3

I would like to add my users created via napp-it, but I can't; no user is found :(
I followed your howto http://napp-it.org/doc/downloads/napp-it.pdf

Can you help me? My searches on the web have been unsuccessful :confused:

Thanks
 

Garuda888

n00b
Joined
Mar 11, 2011
Messages
29
DO NOT!

It's possible to slice a disk, but it is not supported and not suggested.
Solaris is designed to have a separate boot disk. If you need read- or write-cache
drives, also use extra disks, not only for performance but also for pool security.
E.g. up to pool v19 you may lose the pool if you lose your write cache.
With current pool versions it is not as critical: the pool keeps working, but you cannot
import such a pool. If your write cache is on your boot drive and this drive fails, you are lost.

The same goes for a USB boot disk. That is ok with a minimal OS like ESXi, which runs
completely from RAM and where it does not matter if it boots in a few minutes.
Solaris is a heavy-weight OS, just like a Windows server OS. If you put it on a USB stick
(not supported), the storage features may work without speed problems, but all management is a pain.

Use a cheap SSD or a 24x7 2.5" SATA boot disk instead - nearly the same price!


Gea

I got the USB idea from our production ESXi boxes; I thought it would be ok given Oracle has Live USB images for SE11. But I am not sure if Live USB and installing to USB are the same thing from a support perspective. Are the Live USB images only for testing?

Can Solaris software-mirror the OS disks? Or do I need to put them on a RAID controller?

I plan on having two disk sets: 4x 300GB 15K SAS disks and 6x 2TB 5400rpm drives.
Can one set of SSDs for ZIL/L2ARC serve both pools? Or do they have to be dedicated to a pool/disk set?
 
Last edited:

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
Hello,

Everything is fine for me except one thing -> the ACLs.

So I installed Solaris Express 11 with napp-it and I built a raidz with 4 disks for testing. Next I created a folder and shared it (without guest access // 777). Then I created several users within the admin group. I can access/write in my shared folder with root and with these users, but I'm not able to modify the rights per user.

Thanks

First you have to decide whether you want to modify ACLs on files/folders or on the share itself.

1. share-level
- SMB connect like \\server\folder and login as a user who is a member of administrators
- open Computer Management
- connect Computer Management to the server
- change the share ACL with Computer Management

2. or ACLs on files/folders (much easier)
- SMB connect like \\server\folder and login as user root
(or as another user if you have a mapping winuser: xxx -> unixuser: root)
- right-click on a file or folder
- select Properties - Security - Add
- you will see your users and groups from your Solaris server

Gea
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,051
I got the USB idea from our production ESXi boxes; I thought it would be ok given Oracle has Live USB images for SE11. But I am not sure if Live USB and installing to USB are the same thing from a support perspective. Are the Live USB images only for testing?

Can Solaris software-mirror the OS disks? Or do I need to put them on a RAID controller?

I plan on having two disk sets: 4x 300GB 15K SAS disks and 6x 2TB 5400rpm drives.
Can one set of SSDs for ZIL/L2ARC serve both pools? Or do they have to be dedicated to a pool/disk set?

You may install from and to USB, but I think it's not a good idea for an OS with lots
of reads and writes (low performance of the interface and the stick, bad reliability,
limited write cycles). This will change with USB3 and SSD-like sticks.

You can mirror your disks after installation;
see the links in the first post: http://hardforum.com/showthread.php?t=1573272

Read- and write-cache drives are members of a pool. They cannot be shared between pools.

Gea
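Mirroring the boot disks after installation comes down to a zpool attach on the root pool. A sketch with placeholder device names; on Solaris Express/OpenIndiana with GRUB, the second half also needs the boot blocks installed:

```shell
# Attach a second slice to the root pool, turning it into a mirror
# (slice 0 on the new disk must already be partitioned to match).
zpool attach rpool c3t0d0s0 c3t1d0s0

# Wait for the resilver to finish, then put GRUB on the new half
# so the box can still boot if the first disk dies.
zpool status rpool
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0
```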
 

DJ_Datte

Weaksauce
Joined
Oct 11, 2010
Messages
89
Code:
	NAME        STATE     READ WRITE CKSUM
	fs1         DEGRADED     0     0     0
	  raidz2-0  ONLINE       0     0     0
	    c4t0d0  ONLINE       0     0     0
	    c4t1d0  ONLINE       0     0     0
	    c4t2d0  ONLINE       0     0     0
	    c4t3d0  ONLINE       0     0     0
	    c4t4d0  ONLINE       0     0     0
	    c4t5d0  ONLINE       0     0     0
	    c4t6d0  ONLINE       0     0     0
	    c4t7d0  ONLINE       0     0     0
	    c5t3d0  ONLINE       0     0     0
	    c5t6d0  ONLINE       0     0     0
	  raidz2-1  DEGRADED     0     0     0
	    c5t0d0  ONLINE       0     0     0
	    c5t1d0  ONLINE       0     0     0
	    c5t2d0  ONLINE       0     0     0
	    c5t4d0  ONLINE       0     0     0
	    c5t5d0  ONLINE       0     0     0
	    c5t7d0  REMOVED      0     0     0
	spares
	  c2t1d0    AVAIL

If I have the above scenario, what are the commands I need to execute for it to use the spare in place of the c5t7d0 disk? Obviously it won't do it automatically, as it thinks the drive is removed, not failed. But how do I "unspare" a drive so I can just insert it into the array as a replacement?

Thanks!
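Not an answer from the thread, but the usual zpool commands for this situation look like the following; verify against `man zpool` on your release before running anything:

```shell
# Option 1: resilver onto the spare in place of the removed disk.
# The spare then shows as INUSE until the old disk is detached.
zpool replace fs1 c5t7d0 c2t1d0

# Option 2: "unspare" the disk first, so it can be used as a
# permanent plain replacement instead of a hot spare.
zpool remove fs1 c2t1d0
zpool replace fs1 c5t7d0 c2t1d0
```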
 

astrapak

n00b
Joined
Jun 6, 2011
Messages
3
1. share-level
- SMB connect like \\server\folder and login as a user who is a member of administrators
- open Computer Management
- connect Computer Management to the server
- change the share ACL with Computer Management

2. or ACLs on files/folders (much easier)
- SMB connect like \\server\folder and login as user root
(or as another user if you have a mapping winuser: xxx -> unixuser: root)
- right-click on a file or folder
- select Properties - Security - Add
- you will see your users and groups from your Solaris server

Gea



Thanks for your reply

Unfortunately, I've already done that. I tried again with no luck.

If I connect with a user of the admin group, in Computer Management I only have the "Everyone" and "Root" users. Despite that, I can see my users in Computer Management (Solaris) -> System Tools -> Local Users and Groups -> Users.

If I use the root username, then folder -> Properties -> Security, there is only the "Everyone" user name.

I don't know what I missed :(
 