OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

@ Gea_:

Is there any documentation on how to use the TFTPD service in napp-it, or can you explain how it works? I would like to use it for iPXE boot.

Edit: Nevermind. Found the "help" button :)
 
It does sound similar to the problem I was having on OpenIndiana. A quick check on whether this is a HW (cables, etc.) or SW problem would be to temporarily install another known-good OS. Install Windows as the main OS, install the LSI drivers and see if the eight drives show up correctly.

I tried this and all the drives are correctly displayed in Windows.
 
I've bought two 128 GB SanDisk SSDs during a sale. I'd like to have them as a mirrored rpool (each bootable) instead of my current very old HDD. Does napp-it help with the process, or is it just a link to a blog explaining the process?
 
On OI/OmniOS, you can mirror your rpool with menu Disks >> mirror bootdisk.
This does not work with Solaris 11.1.

The help menu links to info about this:
http://constantin.glez.de/blog/2011/03/how-set-zfs-root-pool-mirror-oracle-solaris-11-express
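
For reference, the manual steps behind that menu on OI look roughly like this (a sketch; c0t0d0s0 is the existing boot disk and c0t1d0s0 the new SSD, adjust to your device names):

# copy the partition table of the current boot disk to the new disk
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
# attach (not add!) the new disk to the root pool to form a mirror
zpool attach -f rpool c0t0d0s0 c0t1d0s0
# wait for the resilver to finish
zpool status rpool
# install GRUB on the new disk so it is bootable too
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0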
 
Thanks for your answer, that's good to know. I'm on OI. I want to put one SSD in, have it as a mirror of the HDD, then remove the HDD and replace it with the other SSD. Is that possible, especially regarding the copy of the boot sector?
 
No, you must insert the disk into the SATA port where it was during mirroring (the SATA port is hard-coded in GRUB) and select this disk as the second boot option in your BIOS.
If you only want a backup of your system disk, you can either use Clonezilla or a driverless 3.5" hardware SATA RAID-1 enclosure for 2 x 2.5" disks with hotplug capability.
 
You lost me somewhere. Basically I've got a single-drive rpool on a HDD and want to replace the HDD with a pair of mirrored SSDs. Don't mirrored rpools provide failover? Are you saying that if one SSD failed the server would keep running until I tried to reboot? Changing BIOS settings is not a problem as this is not mission critical. I don't want to use a RAID-1 enclosure because it can fail too.
 
What I think gea is saying is that it isn't enough to mirror the root pool; you have to copy the GRUB sectors and such, *and* tell the BIOS "boot from either of these two disks".
 
I reflashed my LSI2308 to IT mode using the previous firmware PH15-IT (instead of PH16.0.1-IT).
I still don't see any disks from napp-it (but they are visible from Windows).
How can I solve this?
 
I tried to reflash the LSI2308 with the full firmware + BIOS this time.
In the LSI BIOS menu (via CTRL+C) I can see the eight HDDs! (And all the LEDs blink at least once.)
But in napp-it I can't see them. I tried again to disable partition support, initialize disks, delete the disk buffer and enter the format command, but I only see two HDDs.

My own config on my SuperMicro X9SRH-7TF runs without problems:
- 2308 flashed to IT mode (PH 16)
- running under ESXi 5.5 / pass-through
- OmniOS 151006 stable
- SanDisk 480 GB SSDs as data disks

As I do not know of others with problems, I would not expect a general problem but one with your config.

What you can try is sas2ircu (an LSI tool downloadable from LSI and SuperMicro). If you call menu Disks - SAS2 extension, it is downloaded to /var/web-gui/_my/tools/sas2ircu/sas2ircu.

When you execute this tool at the CLI, it gives you details about your SAS config and disks -
maybe helpful.
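
For example (controller number 0 is an assumption; LIST shows which numbers exist):

# enumerate the LSI controllers the tool can see
/var/web-gui/_my/tools/sas2ircu/sas2ircu LIST
# show controller status, volumes and attached physical drives
/var/web-gui/_my/tools/sas2ircu/sas2ircu 0 DISPLAY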
 
I've got this:
[attached screenshot of the sas2ircu output]
 
Is there something different with a barebone setup? (I didn't make an All-In-One with ESXi.)
Do you think I need to associate the mpt_sas driver like here?
 
All-In-One with pass-through acts identically to a barebone config regarding storage. Your link shows a method to get it working from when the 2308 was not yet included in Illumos (OI < 151.a8 or OmniOS < 1006). With current releases it should work by default.
(You may try it to be sure.)

Can you remove all disks and try only one disk (try at least two, optionally other/older disks) and check the controller behaviour again with sas2ircu (call it via CLI, or check the whole napp-it monitor log to see the full output)?
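
To see whether the OS has bound a driver to the controller at all, something like this may help (a sketch; the PCI id pciex1000,86 for the 2308 is an assumption, verify it against your own prtconf output):

# is an mpt_sas instance attached to anything?
prtconf -D | grep -i mpt
# is the 2308 PCI id known to the driver?
grep 'pciex1000,86' /etc/driver_aliases
# old workaround from the linked method, only if the id is missing:
update_drv -a -i '"pciex1000,86"' mpt_sas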
 
So I'm running OpenIndiana on one of these:

http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=1769151&SRCCODE

I've had this thing running for almost two years now. The problem I have is that only one of the onboard NICs is recognized out of the box by OI. Now, I like to think I'm decent with Linux, but I'm still getting the hang of OI.

Is it possible to get my second onboard NIC recognized and configured?
Thanks in advance.

Solaris is similar to OS X: you should always buy hardware that is supported out of the box. You can try to find a Solaris driver, but mostly the driver is either included or not available.
Have you tried newer releases like OI 151.a8, OmniOS 1008 or Solaris 11.1, where the driver may be included? Another option is to add a cheap Intel PCIe x1 NIC if you need a second one.
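
To check what OI already recognizes and to find the PCI id of the missing NIC for a driver search (a sketch; output varies per board):

dladm show-phys     # datalinks the OS already sees
prtconf -pv | less  # search for the NIC's vendor-id/device-id, then look it up in the illumos HCL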
 
Thanks, I'll try a PCIe NIC or upgrading to the newest version.
 
All-In-One with pass-through acts identically to a barebone config regarding storage. Your link shows a method to get it working from when the 2308 was not yet included in Illumos (OI < 151.a8 or OmniOS < 1006). With current releases it should work by default.
(You may try it to be sure.)

Can you remove all disks and try only one disk (try at least two, optionally other/older disks) and check the controller behaviour again with sas2ircu (call it via CLI, or check the whole napp-it monitor log to see the full output)?

I tried plugging 2 x Intel 520 SSDs (1 x 60 GB + 1 x 240 GB) and 1 x 5K4000 directly into the LSI2308 ports, but I can't see any of them from napp-it, and sas2ircu returns the same error (discovery error).
How do I call sas2ircu from the CLI and check the monitor log?
 
_Gea, is it safe (i.e. no reboot and no known issues) going from 0.9a5 nightly to 0.9d2 nightly? Since this is our production AIO server, I am scared silly of changing any configuration and especially of doing upgrades.

Can I spin up a VM of OI/napp-it and download 0.9a5 specifically? I can use that to test the upgrade, but mostly I want to make sure you haven't received any reports of issues with connectivity or with pools after the upgrade.

Thanks!
 
Okay, I tried another X9SRH-7TF, taken from my current Windows desktop PC, with a known-working LSI2308 since all my disks are plugged into it (2 x 5K1000, 1 x M500 960 GB, 1 x Intel 520 60 GB and 1 x CompactFlash SATA reader, all working correctly under Windows 7 x64).

The LSI2308 has been tried with IR and IT firmware and with three different drives plugged in (1 x Intel 520 240 GB, 1 x Crucial M500 960 GB and 1 x Hitachi 5K4000).

All the drives are displayed in the LSI BIOS but remain invisible under napp-it.
I tried all the advised tricks (partition disable, sas2ircu, the format command, initialize; disable partition support and delete disk buffer are not in the new napp-it) without luck.
I also updated napp-it to the newest free version without improvement.

What can I do?
 
So this has nothing to do with napp-it, and you are just confusing yourself (and any readers) by continuing to mention napp-it in this thread. If the format command doesn't show them, it's something related to drivers and/or the OS.
 
The OS is OmniOS and I use the napp-it web interface, which is why I say the drives don't show up in napp-it; I don't say the problem comes from napp-it.
The problem is precisely that I don't know where the problem is.
 
If it's not related to napp-it, then it is pointless to try different versions, etc. Anyway, if you run sas2ircu from the command line and do 'sas2ircu list', does the HBA show up? If not, maybe it's hosed? Or needs to be tried in a different PCIe slot?
 
Yeah, but I'm running short on ideas, so I try everything I can. :/

The LSI2308 is integrated on the motherboard, but as it's working fine under Windows (on the two mobos I tried), I guess it's not a hardware problem.
As _Gea said, the current OmniOS release should work with the LSI2308 by default, so the OS/driver should be OK as well.

sas2ircu returns "discovery error".
 
Unfortunately, I don't know enough about the OpenSolaris-type commands to tell you what to look for as far as drivers and such.
 
Just wondering whether this is normal: when transferring large files, e.g. 4-8 GB ISOs, from the OmniOS server to my Windows 8 machine, I get around 90-100 MB/s read; however, when the file is cached (e.g. I initiate the copy again) I get 110-113 MB/s, which is the max I'll get on a Gbit network.

Why is it that dd and bonnie benchmarks show much higher read speeds, yet network transfer is slower unless cached?

This is using CIFS.

I have 10 x Samsung 2 TB (HD204UI) in raidz2
2 x LSI 9211-8i
Intel 980X
24 GB RAM
Intel PCIe Gbit NIC
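
For reference, here is how I'd time an uncached local read of the same file to separate disk speed from network speed (the path is an example; the file should be bigger than RAM, or read after a fresh reboot, so the ARC can't serve it):

# local sequential read, no network involved
dd if=/tank/isos/big.iso of=/dev/null bs=1M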
 
Yeah, but i'm running short ideas, so i try everything i can. :/

The LSI2308 is integrated to the motherboard, but as it's working fine under windows (on the two mobo i tried) i guess it's not an hardware problem.
As _Gea said current OmniOS release should work with the LSI2308 per default the OS/driver should be ok aswell.

sas2ircu return me "discovery error".

If you tried two mobos, I would not expect a hardware problem.
As I have a working config with 1006 without any special settings (I can post details on Jan 7th if needed), I can think of problems due to:

- BIOS settings
- BIOS release

- driver problems (an OmniOS 1008 problem or a damaged driver file)
>> download 1008 again, or try bloody or SmartOS/OI/Solaris 11.1
It should be enough to boot the live editions/installer and call format to check for disks (cancel with ctrl-c).
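
From the live/installer shell, the disk check can also be done non-interactively, e.g.:

# prints the detected disk list, then exits at the disk prompt
echo | format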
 
Just wondering whether this is normal: when transferring large files, e.g. 4-8 GB ISOs, from the OmniOS server to my Windows 8 machine, I get around 90-100 MB/s read; however, when the file is cached (e.g. I initiate the copy again) I get 110-113 MB/s, which is the max I'll get on a Gbit network.

Why is it that dd and bonnie benchmarks show much higher read speeds, yet network transfer is slower unless cached?

This is using CIFS.

I have 10 x Samsung 2 TB (HD204UI) in raidz2
2 x LSI 9211-8i
Intel 980X
24 GB RAM
Intel PCIe Gbit NIC

ZFS doesn't cache large files.
 
Well, whatever it does, if I cancel the copy and start it again, it goes full speed up to the point it had copied, then drops back down. If I copy the file again after it has completed, it goes full speed.

zpool iostat -v tank 3 shows no activity on the disks during the second copy, so where is it coming from?

If I do a zfs scrub, any file I copy goes full speed until I reboot; then it's back to 90-100.

Writing files goes at ~111 MB/s regardless, so I don't know why reads are slower. I'm copying onto SSDs, RAID-0 840 Pros.
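
Is there a way to confirm where the second copy is served from? Something like watching the ARC (the ZFS read cache in RAM) counters, a sketch using the Illumos kstat interface:

kstat -m zfs -n arcstats | grep -E 'hits|misses|size'
# rising hit counters with idle disks during the second copy means the reads come from the ARC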
 
I tried reinstalling OmniOS using the latest version (OmniOS_Text_r151008f.usb-dd).

And then,

[attached screenshot]


:D
 
Can I export a pool from OpenIndiana w/ napp-it, install OmniOS w/ napp-it and import the pool back, keeping all the ACLs/shares the same?

Thanks.

And just curious whether people here prefer OpenIndiana or OmniOS in general.
 
As far as I know, yes... ZFS on OI is older than on OmniOS, so there shouldn't be any problems.

Shares and ACLs will stay the same; only the iSCSI config you have to export, as sketched below.
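
The move itself is just a pool export/import; the COMSTAR/iSCSI config lives in SMF rather than in the pool, which is why it needs a separate backup. A sketch (pool name and backup path are examples):

zpool export tank                                  # on the OI box
svccfg export -a stmf > /var/tmp/stmf.config.bak   # save the COMSTAR/iSCSI config
# on the fresh OmniOS install:
zpool import tank                                  # shares and ACLs travel with the datasets
svccfg import /var/tmp/stmf.config.bak             # restore COMSTAR
svcadm restart stmf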

I think people here choose OmniOS, since napp-it officially supports OmniOS and it's better maintained than OI.

Matej
 
Thanks for the advice.

For the iSCSI config, do I simply save the COMSTAR config and then restore it back once I have OmniOS w/ napp-it configured?
 
napp-it zfs free appliance v. 0.9a9 nightly Mar.04.2013
OpenIndiana

I recently noticed degraded performance and decided to check the SMART status of my disks:
c3t5000CCA37EC1CC3Ad0 3001 GB important mirror ONLINE S:0 H:0 T:0 Hitachi HDS723030BLE640 sat,12 PASSED 31 °C
c3t5000CCA37EC227BAd0 3001 GB important mirror ONLINE S:0 H:7 T:19 Hitachi HDS723030BLE640 sat,12 FAILED! 33 °C

I mirrored these to have redundancy. What should my next steps be? Should I get a replacement disk of the same model, if available? Should I be trying to shut off that disk, or trying to get data off it at this point?

Please help!
 
If they're mirrored, then I probably wouldn't be too worried about getting the data off the array. Order yourself a new drive, and just swap it out ASAP.

In the meantime, run a scrub on the pool and see what it reports.
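
In case it helps, the CLI commands would be something like this (pool and disk names taken from your listing; the replacement disk name is a placeholder):

zpool scrub important        # start the scrub
zpool status -v important    # watch progress and any errors found
# once the replacement disk is in (omit the new name if it goes into the same slot):
zpool replace important c3t5000CCA37EC227BAd0 <new-disk>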
 
Napp-It Question/Feature Request:

Would it be possible for the scrub/snap time periods to be adjusted? Specifically, I don't want to scrub my pool every week; I'd like to do it once a month on a Sunday, say the 4th Sunday of every month. From what I can tell, that's not possible now. Nexenta has something similar, but they include "week" as a valid time period. So basically, you can say "Every Week", "Every 2 Weeks", "Every 3 Weeks" or "Every 4 Weeks", _then_ you can set a day. So, in my example, you'd set "Every 4 Weeks" on a "Sunday".

While we're on that topic, would it be possible to change how you choose which days you want a job to run? For example, I want snapshots to happen every hour over the weekend, but during the weekdays I only want them every 2, 3 or 6 hours. To do that now, I have to create 2 jobs for the weekends and 5 jobs for the weekdays. It could be done in 2 jobs if I could select which days of the week I wanted things to run: for each job, you'd have checkboxes for the days of the week the job should run. Job 1 would have Sunday and Saturday checked; Job 2 would have Monday, Tuesday, Wednesday, Thursday and Friday checked.

Is this at all possible? Has it been done and I'm not seeing it? Is it part of an external package? Thanks for your help, _Gea!
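
In the meantime, I could presumably work around it with a guarded cron entry like this (a sketch; 'tank' is an example pool name, and the day-of-month test picks the 4th Sunday, which always falls on the 22nd-28th):

# crontab entry: fire Sundays at 03:00, but only between the 22nd and 28th
0 3 * * 0 [ $(date +\%d) -ge 22 ] && [ $(date +\%d) -le 28 ] && zpool scrub tank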
 
If they're mirrored, then I probably wouldn't be too worried about getting the data off the array. Order yourself a new drive, and just swap it out ASAP.

In the meantime, run a scrub on the pool and see what it reports.

Thanks for the advice. Sorry, how do you perform a scrub?
 
Napp-It Question/Feature Request:

Would it be possible for the scrub/snap time periods to be adjusted? Specifically, I don't want to scrub my pool every week; I'd like to do it once a month on a Sunday, say the 4th Sunday of every month. From what I can tell, that's not possible now. Nexenta has something similar, but they include "week" as a valid time period. So basically, you can say "Every Week", "Every 2 Weeks", "Every 3 Weeks" or "Every 4 Weeks", _then_ you can set a day. So, in my example, you'd set "Every 4 Weeks" on a "Sunday".

While we're on that topic, would it be possible to change how you choose which days you want a job to run? For example, I want snapshots to happen every hour over the weekend, but during the weekdays I only want them every 2, 3 or 6 hours. To do that now, I have to create 2 jobs for the weekends and 5 jobs for the weekdays. It could be done in 2 jobs if I could select which days of the week I wanted things to run: for each job, you'd have checkboxes for the days of the week the job should run. Job 1 would have Sunday and Saturday checked; Job 2 would have Monday, Tuesday, Wednesday, Thursday and Friday checked.

Is this at all possible? Has it been done and I'm not seeing it? Is it part of an external package? Thanks for your help, _Gea!

The next 0.9e contains 5 new day triggers:
- mon-fri
- mon-sat
- sat-sun
- first-sat
- first-sun
 