OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

about the missing manuals for Illumian, OpenIndiana and napp-it

I'm aware that my English is not perfect, but I suppose it would not help if I wrote the manuals in perfect German (my mother tongue). I invite everyone to help build a better manual - for napp-it as well as for Illumian and OpenIndiana - regarding both language and content.

Recently I started to move my manuals to booki.cc (see http://booki.cc/illumian-openindiana-napp-it-the-missing-manual/_info/), where it is possible to collaborate on manuals under an open licence. I invite everyone to help write manuals, not only about napp-it but also about OpenIndiana and its differences from Illumian and Solaris 11 (this is the annoying part). You can also start new books there. Linking between them may one day give us a manual that replaces the Oracle docs (the differences are growing) or the old OpenSolaris Bible (too outdated).

If you or others would like editing access, please send a mail to [email protected]

Just a thought, but a wiki site would make more sense for multi-user updated help and info. Note that because of wiki spam I would recommend making it a more locked-down wiki that requires a login etc. You don't get a printable book out of it, but it is much more dynamic and can be linked into napp-it directly a lot more easily, I would think.

Also, I was wondering about napp-it itself. Is there any way we can help improve the English menus and descriptions displayed by the web UI? Have you written it in German and then added English localization text? If so, you may be able to let others update just this localization text for you without having to hand out access to your main source. Having perfect English in the interface won't make it work any better, but it will make it seem a lot more professional and finished.
 
Hello Latent
I first thought about a wiki too, but the booki.cc approach has several advantages:
- it does not depend on a single website and its owner
- it allows independent books/groups, as additions or as translations into other languages
- it is easier to use than most wikis
- it is easier to rearrange content
- you can print it out as a manual
- it can be linked to as well (with the extra step of creating a PDF and uploading it to a website)

about language
I write all messages in English and optionally add a German translation to the language files in /var/web-gui/data/napp-it/zfsos/_lib/lang (text files).

Napp-it is intended to become multi-language at some point (> 1.0). Currently there are too many changes in code and structure to finish the translation (not all messages are in the language files yet; some are still in the scripts). But because I have many more users outside Germany, I always use English first. If someone would like to edit these few language files, or send me an email about the 'funniest or worst' GUI messages, I will update them at once (use the current 0.7).

[email protected]
 
Gea,

I have read you are using SSDs to store your VMs.

Which SSDs are you running?

What configuration?
 
My first pool was built from 1st-gen 120 GB SandForce drives.
Because they were 'quite' cheap MLC and consumer grade,
I started with 3-way mirrors (currently 4 x 3 disks) + a hot spare.

In the first year the failure rate was up to 10% (complete failure or too many errors).
The disks replaced under warranty seem better; the failure rate is lower now.

On my newest machine I have used Intel 320 (300 GB) since the middle of last year.
No problems since. Currently I will stay with 3-way mirrors; these VMs are critical
(AD server etc.) and the rebuild time is extremely short (30 min for 120 GB).
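A layout like the one described above can be sketched as a single `zpool create`. This is a hedged example: the pool name `vmpool` and the `cXtYd0` device names are placeholders, so the script only prints the command for review rather than running it against a live system.

```shell
#!/bin/sh
# Sketch of a pool of 3-way mirrors plus a hot spare (placeholder names).
# Each "mirror a b c" group becomes one vdev; ZFS stripes across the vdevs.
cmd="zpool create vmpool \
mirror c2t0d0 c2t1d0 c2t2d0 \
mirror c2t3d0 c2t4d0 c2t5d0 \
spare c2t6d0"
# Print for review; run it by hand on the real system once it looks right.
echo "$cmd"
```

Adding more 3-way mirror vdevs later is done the same way with `zpool add vmpool mirror ...`.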
 
Is anybody using STEC MACH16IOPS 50GB SLC drives for ZIL?

If so, how do they perform? I think the specs said 190MB/s and 25,000 IOPS sustained for writes..?

I just bought some brand new for a stupidly low price. :D
 
I'm curious about those drives as well.

I have about $800 to spend on ZIL device(s)

Looking for the best bang for my buck.
 
Hi there,

I've been trying to practice pulling a disk from my array to make sure I know how to add a new drive and have it resilver. In the process I've been looking at my SMART data and noticed that one of the drives is marked as failed. I'm using Samsung HD204UIs and the raw read error rate is through the roof on all of them. The one drive that isn't affected is my only WD drive.

These 5 drives were all purchased at the same time last July and have hardly been used, as I'm still trying to find the time to bring my server online.

Here are the values from 5 out of 6 of my drives, the 6th one is the drive I pulled and haven't put back yet.

Code:
all.c4t0d0:  1 Raw_Read_Error_Rate     0x002f   001   001   051    Pre-fail  Always   FAILING_NOW 58686
all.c4t1d0:  1 Raw_Read_Error_Rate     0x002f   090   090   051    Pre-fail  Always       -       28856
all.c4t2d0:  1 Raw_Read_Error_Rate     0x002f   092   092   051    Pre-fail  Always       -       24024
all.c4t3d0:  1 Raw_Read_Error_Rate     0x002f   088   088   051    Pre-fail  Always       -       25915
all.c4t5d0:  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0

What's happening here? Is it something to worry about, or just something that these drives do? Can all 5 drives really be failing together?

Thanks.
 
I would say it's more likely that you have another component causing errors - maybe cabling, RAM, PSU?

How is your server set up?
 
Gea, thanks for the response. I do have an LSI 2008 system, so I tried the extension and it was able to find the slots; however, the identification did not work. There are a number of pins on my backplane, so I'm thinking it supports it, but they didn't give me much in the way of a manual for it. :( I'll have to see if I can figure that out.

The scrub completed and repaired 88 checksum errors. It has been error-free since yesterday evening, so I guess I'll just keep an eye on it. Perhaps it was cleaning up after the failed disk?

Any idea why the auto-replace didn't work? Thanks!

Well, my weekly scrub ran and I have another 5 checksum errors on that same new drive. I'm thinking that when I get my warranty replacement for the original failed drive, I'll swap those out and run some vendor tests on it... Anyone have any better ideas? It is only this one drive, and I never had any drive errors for nearly a year until now. I would think that if it was a system problem I would have seen it before now, and it would happen on more drives. The cabling hasn't changed (hot-swap).
 
You have to know how to read SMART variables. The first set of numbers is the most important. Your drives have a threshold of 051, which means that if the value drops below 51 the drive has a big problem and should probably be replaced ASAP. Note your top drive is down to 001, which means its error rate is way too high and this drive will need replacing. The other drives are at 90/92/88; they probably started at 100 and count down as the error rate increases. The number at the end is the raw data value, and only the drive maker would have a clue how to read that number or what it really means.

If all the drives are having problems, then it may be a controller/cabling/power fault causing the higher error rates during operation.

My advice would be to replace the drive that has passed its threshold. For the other drives, keep an eye on the number; if it drops too fast from the current 88-92 values, then you may have an issue that needs more investigation. You can try swapping controllers or cables. Another option is to pull one of the drives and torture-test it in a different machine to see if the numbers change.

Note that this raw read error rate may be based on the number of read/write errors over time, so if you fix the underlying problem the numbers may slowly climb back closer to 100, but it's hard to know how this is implemented.

Also look closely at the other SMART variables: if there is a real problem it will often affect other variables too, as many of them are indirectly related.
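The value-vs-threshold rule above is easy to automate. A minimal sketch, run here over lines copied from the post above (on a live system you would pipe `smartctl -A /dev/...` output in instead):

```shell
#!/bin/sh
# Columns in smartctl-style attribute lines:
#   DEVICE ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
# Flag any drive whose normalized VALUE has fallen to or below THRESH.
cat <<'EOF' |
all.c4t0d0:  1 Raw_Read_Error_Rate  0x002f  001  001  051  Pre-fail  Always  FAILING_NOW  58686
all.c4t1d0:  1 Raw_Read_Error_Rate  0x002f  090  090  051  Pre-fail  Always  -  28856
all.c4t5d0:  1 Raw_Read_Error_Rate  0x002f  200  200  051  Pre-fail  Always  -  0
EOF
awk '($5 + 0) <= ($7 + 0) { print $1, "value", $5 + 0, "<= threshold", $7 + 0 }'
```

Run against the posted values, only the first drive trips the rule, printing `all.c4t0d0: value 1 <= threshold 51`.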
 
Thanks a lot for this, and you are correct, I don't know much about SMART data, mostly because I've never been able to find any decent documentation. Is there any?

My current plan of attack is to use Samsung's ES-Tool to investigate each drive using the built-in controller on the motherboard. I did run Seagate's SeaTools from within a VM using the SASUC8i, and 2 drives failed the SMART tests but all seemed to pass the short generic test. However, I decided to take the SASUC8i out and use different cables for my testing with the built-in SATA controller.

One thing that stands out, though, is that the WD drive connected to the same controller with the same SFF-SATA cable is showing no problems.
 
Oh, one other thing that may be the problem: vibration of the drives. If the case is not well designed, the disks can vibrate each other. Some disk models handle vibration well and adjust for it, while some cheaper drives do not. Most enterprise SAS disks are designed for this, since they are used in multi-disk RAID, but consumer drives are a mixed bag. Vibration will cause a disk's heads to misread a track and have to keep retrying, which will produce a high raw read error rate and some performance problems. To test this, just run the drive outside of your case somehow and see if it helps. Testing the disk system throughput in napp-it to see whether reducing vibration increases performance will tell you if it was a major problem or not. The more disks you have together, the worse the vibration will be.

If this is the cause of the problem, then unless you can find a way to mod your case to reduce the vibration, you may have to replace the drives or the case...
 
OK, thanks. Something else to try. The case is a Fractal Design Define R3 and the drives are all mounted on the default trays with rubber grommets. The drives are all upside down for cabling purposes, though - HDDs can cope with any orientation, can't they?

What about SFF-SATA cables? Can someone recommend a brand? I'm currently using an Adaptec one with a plastic woven shield.
 
Last night I was transferring a large number (100,000+) of small files on my RAID-Z2 OpenIndiana SAN in a few different sessions. Some of the files failed to transfer for permission reasons (maybe 10-20).

Looking at napp-it today I noticed these errors that were not there before.

Code:
c3::dsk/c3t10d1	connected	configured	unknown	ATA SAMSUNG HD204UI disk n /devices/pci@0,0/pci15ad,7a0@15/pci1014,3b1@0:scsi::dsk/c3t10d1
c3::dsk/c3t11d1	connected	configured	unknown	ATA SAMSUNG HD204UI disk n /devices/pci@0,0/pci15ad,7a0@15/pci1014,3b1@0:scsi::dsk/c3t11d1
c3::dsk/c3t12d1	connected	configured	unknown	ATA SAMSUNG HD204UI disk n /devices/pci@0,0/pci15ad,7a0@15/pci1014,3b1@0:scsi::dsk/c3t12d1
c3::dsk/c3t13d1	connected	configured	unknown	ATA SAMSUNG HD204UI disk n /devices/pci@0,0/pci15ad,7a0@15/pci1014,3b1@0:scsi::dsk/c3t13d1
c3::dsk/c3t14d1	connected	configured	unknown	ATA SAMSUNG HD204UI disk n /devices/pci@0,0/pci15ad,7a0@15/pci1014,3b1@0:scsi::dsk/c3t14d1
c3::dsk/c3t15d1	connected	configured	unknown	ATA SAMSUNG HD204UI disk n /devices/pci@0,0/pci15ad,7a0@15/pci1014,3b1@0:scsi::dsk/c3t15d1
pcie160	connected	configured	ok	Location: pcie160



Diskinfo: iostat -Ensr c2t0d0 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	 	 
 	Vendor: VMware 	Product: Virtual disk 	Revision: 1.0 	Serial No: 6000c298953a6db 	 
 	Size: 21.47GB <21474836480 bytes>	 	 	 	 
 	 	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0 
 	Illegal Request: 2 	Predictive Failure Analysis: 0 	 	 	 
c1t0d0 	Soft Errors: 0 	Hard Errors: 5 	Transport Errors: 0 	 	 
 	Vendor: NECVMWar 	Product: VMware IDE CDR10 	Revision: 1.00 	Serial No: 	 
 	Size: 0.00GB <0 bytes>	 	 	 	 
 	 	Media Error: 0 	Device Not Ready: 5 	No Device: 0 	Recoverable: 0 
 	Illegal Request: 2 	Predictive Failure Analysis: 0 	 	 	 
c3t15d1 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	 	 
 	Vendor: ATA 	Product: SAMSUNG HD204UI 	Revision: 0001 	Serial No: S2H7JD1ZC01647 	 
 	Size: 2000.40GB <2000398934016 bytes>	 	 	 	 
 	 	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0 
 	Illegal Request: 0 	Predictive Failure Analysis: 0 	 	 	 
c3t14d1 	Soft Errors: 0 	Hard Errors: 6 	Transport Errors: 4 	 	 
 	Vendor: ATA 	Product: SAMSUNG HD204UI 	Revision: 0001 	Serial No: S2H7JD2ZA11281 	 
 	Size: 2000.40GB <2000398934016 bytes>	 	 	 	 
 	 	Media Error: 0 	Device Not Ready: 0 	No Device: 2 	Recoverable: 0 
 	Illegal Request: 0 	Predictive Failure Analysis: 0 	 	 	 
c3t11d1 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	 	 
 	Vendor: ATA 	Product: SAMSUNG HD204UI 	Revision: 0001 	Serial No: S2H7JD2ZB12840 	 
 	Size: 2000.40GB <2000398934016 bytes>	 	 	 	 
 	 	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0 
 	Illegal Request: 0 	Predictive Failure Analysis: 0 	 	 	 
c3t12d1 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	 	 
 	Vendor: ATA 	Product: SAMSUNG HD204UI 	Revision: 0001 	Serial No: S2H7JD1ZC01649 	 
 	Size: 2000.40GB <2000398934016 bytes>	 	 	 	 
 	 	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0 
 	Illegal Request: 0 	Predictive Failure Analysis: 0 	 	 	 
c3t13d1 	Soft Errors: 0 	Hard Errors: 0 	Transport Errors: 0 	 	 
 	Vendor: ATA 	Product: SAMSUNG HD204UI 	Revision: 0001 	Serial No: S2H7JD2ZA11282 	 
 	Size: 2000.40GB <2000398934016 bytes>	 	 	 	 
 	 	Media Error: 0 	Device Not Ready: 0 	No Device: 0 	Recoverable: 0 
 	Illegal Request: 0 	Predictive Failure Analysis: 0 	 	 	 
c3t10d1 	Soft Errors: 0 	Hard Errors: 6 	Transport Errors: 4 	 	 
 	Vendor: ATA 	Product: SAMSUNG HD204UI 	Revision: 0001 	Serial No: S2H7J9KB704127 	 
 	Size: 2000.40GB <2000398934016 bytes>	 	 	 	 
 	 	Media Error: 0 	Device Not Ready: 0 	No Device: 2 	Recoverable: 0 
 	Illegal Request: 0 	Predictive Failure Analysis: 0

I'm not really worried, but I just want to confirm that I'm not actually having major disk issues and that my disks are healthy. What has me concerned are the hard errors. To me it looks like something timed out, causing the errors.
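With a listing that long, it helps to filter on the error counters so healthy disks drop out of the noise. A hedged sketch, fed here with summary lines copied from the `iostat` output above (on a live system you would pipe `iostat -En` in instead):

```shell
#!/bin/sh
# In these summary lines, field 7 is the hard error count and field 10
# the transport error count; print only devices where either is non-zero.
cat <<'EOF' |
c3t15d1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
c3t14d1 Soft Errors: 0 Hard Errors: 6 Transport Errors: 4
c3t10d1 Soft Errors: 0 Hard Errors: 6 Transport Errors: 4
EOF
awk '/Hard Errors/ && (($7 + 0) > 0 || ($10 + 0) > 0) {
    print $1 ": hard=" $7 " transport=" $10
}'
```

On the posted values this reports only c3t14d1 and c3t10d1, the two disks worth investigating.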
 
What's the downside of installing Ubuntu instead of OpenIndiana?

OpenIndiana has better ZFS compatibility, but if I want to host small servers like Minecraft down the road, is that OK?

I've played with Ubuntu before - lots of support and lots of things to do.

Thanks
 
Hi gea

any guide on performance tuning?

Currently using an i3-2100T, Supermicro X9SCM, 4 GB ECC RAM, 2x2TB mirror, 2x1TB mirror.

Read/write is about 60MB/s.

Thanks
 
Napp-it only works with OpenSolaris-based systems with built-in ZFS. Ubuntu does not support ZFS by default; it must either be compiled into the kernel by you (because of licensing issues) or used via FUSE, which runs at user level and may be slower. Also, ZFS on Linux is not mature yet and is not recommended for production. FreeBSD has a better ZFS implementation, but some benchmarks show it is not as fast as Solaris-based systems.

If you don't want a simple web admin console like napp-it gives you, and you want to set up all the elements of a NAS yourself, then you could use Ubuntu.

Note that you may also want to look at NexentaCore, which is being rebased on illumos right now; it has the same core as OpenIndiana but Ubuntu-style package management. But illumos is not finished yet.
 
ic, thanks Latent

I have a Kingston 64 GB I would like to use as a cache. I read the napp-it manual, but which option should I pick: SSD read vs SSD write vs SSD write cache mirror?

When I build my pool, do I have to include the SSD in the pool with my other HDDs, or is it separate?

Do I need an HDD for the Solaris Express 11 install, or can I use something like a USB stick?

How much performance do you lose if you decide to encrypt your data? Do you guys encrypt your data?

Thanks - downloading a copy of Solaris Express 11 to play with in VMware, lol
 
Adding a read cache speeds up reads (google L2ARC). A write cache speeds up sync writes (google ZFS LOG), and the mirrored option is for 2 or more write cache devices, to cover one of them failing. Most cheap consumer SSDs are only good for L2ARC read caches, as they don't handle power cuts well without some form of power-loss protection circuitry (write caches need this!).

When building pools, build them from devices of the same speed (all SSDs or all similar hard drives), and then you can add SSDs as read and/or write cache (google ZFS BEST PRACTICES).

Solaris needs a hard drive/SSD to boot off (2 in a mirror is best, e.g. 2x 2.5-inch hard drives or 2x small SSDs). You can also do an all-in-one, where you boot VMware from 1 or 2 drives in a mirror, run Solaris off this storage, and store your main VMs on NFS/iSCSI from your NAS VM.

Encryption is only available with Oracle Solaris/Express, which is only free for personal use, so most people don't use it. I would say that with a fast enough CPU encryption wouldn't be a big problem, but I have no experience, sorry.
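The three cache options above map onto plain `zpool add` variants on the CLI. A hedged sketch - the pool name `tank` and the device names are placeholders, so the script only prints the commands for review rather than running them:

```shell
#!/bin/sh
# SSD read cache (L2ARC): a single device is fine; losing it costs
# nothing but cached data.
echo "zpool add tank cache c4t0d0"
# SSD write cache (log/ZIL): a single log device.
echo "zpool add tank log c4t1d0"
# Mirrored write cache: two log devices, covering the failure of one.
echo "zpool add tank log mirror c4t1d0 c4t2d0"
```

Note the SSDs are not part of the data vdevs; cache and log devices are attached to the pool separately, which answers the "include the SSD in the pool?" question above.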
 
Folks:
Quick question: when it comes to power management (spin-down, sleep etc.), which version of Solaris is the best (free of course, < 15TB)?
 
What's the word on 4k sectors and Solaris/OI/Illumian etc.? I have some test projects coming up for some big storage, and as far as I can tell, 3TB drives are all 4K-sector.
 
The only current Solaris is Solaris 11, and spin-down is currently not working there.
(It is OK with OpenIndiana.)
 
ZFS supports 4k disks that announce themselves as 4k.
You only have a problem with 4k disks lying about being 512b ones.

But all of them are lying today.
You currently have the option to create ashift=12 vdevs via FreeBSD or a modified,
unofficial zpool binary on OpenIndiana. There are also discussions at OpenIndiana about
integrating this modification.
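Whether an existing vdev ended up 4k-aligned can be checked after the fact by looking at its ashift value. A hedged sketch (the pool name is a placeholder; the live command is shown in the comment, and the parse below runs on an illustrative output line):

```shell
#!/bin/sh
# On a live system:   zdb -C tank | grep ashift
# ashift is the log2 of the vdev's alignment:
#   ashift=9  -> 512-byte sectors, ashift=12 -> 4k sectors.
# Illustrative parse of the expected output line:
printf 'ashift: 12\n' |
awk '$1 == "ashift:" { print (($2 == 12) ? "4k-aligned" : "512b-aligned") }'
```

A pool built on lying 4k drives without the modified zpool will typically show ashift=9, which is exactly the misalignment problem discussed above.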
 
Gea, is that true of the Hitachi drives as well? I just read their spec sheet and they claim 512-byte sectors.
 
thanks Latent

but I don't get what you mean here:

"Solaris needs a hard drive/SSD to boot off (2 in a mirror is best, e.g. 2x 2.5-inch hard drives or 2x small SSDs). You can also do an all-in-one, where you boot VMware from 1 or 2 drives in a mirror, run Solaris off this storage, and store your main VMs on NFS/iSCSI from your NAS VM."

I'll have to research a bit on what you wrote, lol
 
As far as I know it's best to boot Solaris off hard drives or SSDs, as it is not designed to boot off cheap USB sticks. EON is an alternative distribution that does support USB booting, though.

It is best to use two boot drives in a mirror (like RAID1), which means that if one dies, your system still boots.

There is also the option to use VMware to make an all-in-one, which changes how your boot drives work.
See http://www.napp-it.org/doc/downloads/all-in-one.pdf for details.
 
Current Hitachis are mostly 512b, but I have heard they will move to 4k with the next generation.
 
Is there a way to skip creating snapshots during the napp-it install process? I'm using some older PATA disk-on-modules that are only 8GB, and they're running out of space during the install.

*edit* Enabling compression on rpool allowed the install to complete. I can safely remove all the BEs and snapshots, right? This is a fresh install, after all.
 
These are the industrial ones, linked to below.
So only 20,000 IOPS read and 1,500 IOPS write.

The seller has lots of different ones, though.

I got 3 of the cheap industrial ones to play with.
 
http://www.stec-inc.com/product/mach16.php

Code:
INDUSTRIAL SLC SSD
INTERFACE	Serial ATA (SATA) 3.0Gb
CAPACITIES	up to 200GB
Form Factor	2.5-Inch, 1.8-Inch
READ & WRITE PERFORMANCE	Read:  250MB/s
Write:  100MB/s
TRANSACTIONAL READ & WRITE	Read:  up to 20,000
Write:  up to 1,500
INDUSTRIAL TEMPERATURE
-40 °C to +85°C
NORMAL POWER CONSUMPTION	 6.0W
Code:
IOPS SLC SSD
INTERFACE	Serial ATA (SATA) 3.0Gb
CAPACITIES	up to 200GB
Form Factor	2.5-Inch, 1.8-Inch
READ & WRITE PERFORMANCE	Read:  250MB/s
Write:  180MB/s
TRANSACTIONAL READ & WRITE	Read:  up to 26,000
Write:  up to 16,000
COMMERCIAL TEMPERATURE
0°C to 60°C
AVERAGE LATENCY
<50us
NORMAL POWER CONSUMPTION	 6.0W

http://www.boston.co.uk/products/m16isd2-50ucu.aspx
Code:
Average Latency	<5µs
Capacity	50GB
Category (sub)	Solid-State Drives (SSD)
Flash Memory Type	SLC
Form Factor	2.5"
Interface	SATA 3.0 Gb/s
Manufacturer	STEC
Operating Temperature	0°C ~ +70°C
Power Consumption	Operating Mode: 6.0W
Sleep/Idle Mode: 750mW
Read Speed	225MB/s
Write Speed	200MB/s

Based purely on the part numbers, somebody is not telling the truth........

But it does say industrial temp on the eBay auction.


Still way better than a few spinning-rust drives. Until you get to tens of drives.
 
1.
First of all, with your 8 GB modules I doubt you will be happy with the capacity and performance.

2.
If you need to use them, only NexentaCore is designed to work with 8 GB
(and maybe EON in the future).

3. If you want to try anyway:
- download the napp-it installer via wget to /root
- edit it and comment out the beadm create commands before and after installation
- comment out the installation of tools you don't need, like smartmontools, mc, iperf, bonnie
- run the installer via perl ./nappit from the root directory

There seems to be a bug with multiple pkg installs; you may need to run the installer twice and then reboot (without a final activated snapshot).

Keep in mind:
- snapshots do not need any initial space
- if you install all the tools like smartmontools etc., your disk is always too full

I would only use hard disks or SSDs > 16 GB, no USB sticks and no DOM modules,
with Solaris. Solaris is not designed to work on very small or very slow disks.
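On the earlier question of removing the installer's boot environments and snapshots afterwards: on a fresh install, the inactive BEs can normally be destroyed to reclaim space. A hedged sketch - `oldBE` is a placeholder name, and the commands are only printed for review rather than run:

```shell
#!/bin/sh
# List boot environments; keep the active one (flagged NR in the output).
echo "beadm list"
# See which snapshots exist on rpool and how much space they hold.
echo "zfs list -t snapshot -r rpool"
# Destroy an inactive BE (placeholder name) to reclaim its space.
echo "beadm destroy oldBE"
```

Never destroy the active BE; check the flags column of `beadm list` first.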
 
I have 2 mirrored vdevs inside a pool, and a drive has failed in one of them.

I marked it as removed, and now inside my pool I have one mirror and one device standalone. I cannot seem to add the new drive back into a mirror with the now-standalone drive that was previously mirrored.

How do I do this without losing the data on that drive?
 
Seems fine. The actual data is on 15 750GB UltraStars; a dd bench showed something like ~350MB/s write and ~750MB/s read for a 14-disk RAID 10 (7 mirror sets with a hot spare).

Can you expand further on why you dislike DOMs? Apart from this particular DOM, which is small and a bit old, I would think 16GB SATA DOMs for the root pool would be fine. If it is a log-write issue, can't local logging be disabled entirely in favor of a centralized system like, say, Splunk?

Also, and I apologize as I haven't spent more than 30 minutes with napp-it at this point, but how do I disable napp-it logging to the local console? Super annoying when trying to work in the console and the GUI at the same time :).
 
With napp-it, look at the menu 'disk add'.

Via CLI it's something like:
zpool attach pool id1 id2
id1: the single disk, or a disk in the mirror, that you want to attach a disk to
id2: your new disk

In case of a failure, do a disk replace, or better, use a hot spare.
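To make the CLI step concrete (a hedged sketch; the pool and disk names are placeholders, and the commands are only printed for review): the key point is that `zpool attach` turns the surviving single disk back into a mirror, whereas `zpool add` would create a new standalone vdev and make the situation worse.

```shell
#!/bin/sh
# c2t3d0 = the surviving disk of the broken mirror, c2t7d0 = the new disk.
echo "zpool attach tank c2t3d0 c2t7d0"
# Watch the resilver progress afterwards.
echo "zpool status tank"
```

Once the resilver completes, `zpool status` should show both disks under one mirror vdev again, with no data loss on the surviving disk.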
 