seems fine. the actual data is on 15 750GB UltraStars. a dd bench showed something like ~350MB/s write and ~750MB/s read for a 14-disk RAID 10 (7 mirror sets, with a hot spare)
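For reference, the kind of sequential dd bench quoted here can be reproduced with something like the following. The path /tank/ddtest is just an example; on ZFS you want a test file well beyond RAM size so the ARC doesn't inflate the read number, and note that /dev/zero input compresses to nothing if compression is enabled on the dataset.

```shell
# Sequential write test: stream 1 MiB blocks to a file on the pool
# (count should be sized so the file exceeds RAM; 1024 = 1 GiB here)
dd if=/dev/zero of=/tank/ddtest bs=1M count=1024

# Sequential read test of the same file
dd if=/tank/ddtest of=/dev/null bs=1M

# Clean up
rm /tank/ddtest
```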
can you expand further on why you dislike DOMs? apart from this particular DOM, which is small and a bit old, I would think 16GB SATA DOMs for the root pool would be fine. If it is a log-write issue, can't local logging be disabled entirely in favor of a centralized system like, say, Splunk?
also, and I apologize as I haven't spent more than 30 minutes with napp-it at this point, but how do I disable napp-it logging to the local console? it's super annoying when trying to work in the console and the GUI at the same time.
I'm somewhat back up and running, but this new disk is a Seagate ST2000DL003 4k-sector disk. My other 2TBs are 512-byte ST32000542AS drives.
I cannot add it back in because of sector alignment...
cannot attach c2t5000C500370C382Bd0 to c2t5000C50029ECDE52d0: devices have different sector alignment
well, you're out of luck replacing with that drive. That drive reports 4k sectors, and it seems you can't mix sector sizes in the same vdev. This makes a lot of sense when you think about it, because each drive has to mirror the same data, and trying to do this when the base unit of storage is different would be very hard! You will have to either copy all the data off your pool and set up a new pool, or source a drive that reports 512-byte sectors. Note that most drives of this size are now 4k-sector drives that report as being 512-byte sectors for compatibility, which makes them very bad for ZFS as it doesn't handle the alignment without hacks right now.
The great news is that that new Seagate drive is a TRUE 4k-sector drive, so it is a great drive for ZFS, but you need more than one of them so you can combine them together into a 4k-sector vdev. as I touched on before, though, you can't transition to a 4k-sector vdev from a 512-byte one as they are not compatible, so you have to create a brand new pool, manually copy your data over, and then blow away your old pool; after that you can reuse the old 512-byte drives.
I'm not 100% sure if the 4k/512-byte matching just relates to individual vdevs in a pool or the whole pool. you may be able to have one vdev of each type in the same pool to reuse the old drives.
For new builds these 4k drives may be a good option, but you would have to keep a hot/cold spare drive, as you won't be able to use any other brand's 2TB drives since they all report as 512-byte sectors.
Edit: online I found others having problems with this drive reporting 512-byte physical sectors, so unless Seagate has changed the firmware it should not be complaining like it did for you.
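A quick way to see what ZFS actually chose for a vdev is to look at its ashift value (9 = 512-byte sectors, 12 = 4k). This is a minimal sketch on Solaris/OpenIndiana; the pool name tank and the device name are examples only:

```shell
# Dump the cached pool config and look for the ashift of each vdev
# ashift: 9  -> vdev built on 512-byte sectors
# ashift: 12 -> vdev built on 4k sectors
zdb -C tank | grep ashift

# See what sector size the drive itself reports to the OS
# (substitute your own device path)
prtvtoc /dev/rdsk/c2t5000C500370C382Bd0s0 | head
```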
why wouldn't it be the case? NAND has a limited number of write cycles, the ZIL is write-heavy, and mirroring means writing to two different devices at the same time, which means the two SSDs are both wearing out at the same exact rate.
Because write cycles are an approximation? Not to mention that there are other failure conditions where having a mirrored ZIL is beneficial. You said that mirroring the ZIL was pointless and gained you nothing; I disagree.
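For context, a mirrored log vdev is set up (or an existing single log device converted) with commands like these; the pool and device names are examples, not anyone's actual config:

```shell
# Add a mirrored ZIL (slog) to the pool from the start
zpool add tank log mirror c0t4d0 c0t5d0

# Or turn an existing single log device c0t4d0 into a mirror
zpool attach tank c0t4d0 c0t5d0

# If one SSD of the log mirror wears out, swap it without losing the ZIL
zpool detach tank c0t5d0
zpool attach tank c0t4d0 c0t6d0
```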
... The drives are all upside down though for cabling purposes - HDD's can cope with any orientation though can't they?...
I'm running the all-in-one version and resized my OI system disk from 12GB to 16GB in ESXi, but I still need to expand the system drive inside OI.
Can anyone help with this? GParted does not allow me to expand the system pool/drive.
Does anyone know the two commands I need to run to map a samba user/group to root in a workgroup environment? I connect to a samba share that is also an NFS share (to a Debian client), and of course permissions are getting all screwed up, but I'm content with mapping the samba users to root if that is the easiest solution. Right now any file that is created or modified by a samba user is completely broken for NFS; I've searched and tried different idmap combos without any luck. Thanks!
Just getting back to testing my drives after noticing SMART errors. Does anyone have any suggestions for tools to use to test HDDs? I'm using Samsung's ES-Tool to perform a low-level format, but it isn't finding any problems.
In workgroup mode, SMB users are local Unix users.
You cannot map a Unix user to another Unix user;
you can only map local SMB groups or AD users/groups.
What you can do:
You can assign an NFS client to root in the NFS share settings.
You can set the ACL of your share to 777 and everyone@=modify with inheritance=on
to access all newly created files from SMB and NFS.
For already created files, you must reset ACLs or permissions recursively
(via the napp-it ACL extension, the CLI, or remotely from Windows when connected as root).
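From the CLI, the steps above translate to roughly the following. The pool/share path tank/share and the client hostname debianclient are examples; substitute your own. The ACL syntax is the Solaris NFSv4 form of chmod, where fd sets file and directory inheritance:

```shell
# Map a specific NFS client to root in the share settings
zfs set sharenfs='rw,root=debianclient' tank/share

# Open up the Unix permissions and add an inheriting everyone@ modify ACL
# so newly created files are reachable from both SMB and NFS
chmod 777 /tank/share
chmod A=everyone@:modify_set:fd:allow /tank/share

# Reset the ACL recursively for files that already exist
chmod -R A=everyone@:modify_set:fd:allow /tank/share
```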
I really like that concept of pre-clearing drives they use over at the unRAID forum, and I use it for stress-testing new or moved-around drives as well.
Here's the link: http://lime-technology.com/forum/index.php?topic=2817
Not sure if that script will work from OI/Solaris... For using it with LSI2008-based HBAs, you'll need one of the newer unRAID beta versions, I think.
pool overview:
  pool: tank
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 0 in 4h19m with 0 errors on Sun Feb 26 07:20:20 2012
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          raidz2-0  DEGRADED     0     0     0
            c1t0d0  FAULTED      1   597     0  too many errors
            c1t1d0  ONLINE       0     0     0
            c1t2d0  FAULTED      0 1.65K     0  too many errors
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
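The usual recovery path for a status like that is a replace, or a clear plus scrub if the errors turn out to be transient (bad cable, loose backplane connector). Device names here are taken from that output:

```shell
# Replace a faulted disk in place (new drive in the same slot)
zpool replace tank c1t0d0

# Or, if the errors were caused by cabling and the disks are actually fine,
# clear the error counters and verify with a scrub
zpool clear tank
zpool scrub tank
zpool status -v tank
```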
maybe try adding a second 16GB virtual hard drive in VMware, then inside napp-it use menu Disks -> Add to add it into the rpool, which will make it a mirror with 2 disks. Wait for it to resilver, then remove the 12GB drive from the mirror (and then from VMware later). You should then have a 16GB single-disk pool. I haven't tried this myself; this is just how I think it should work.
Note that you can do it danswartz's way as well, which is to add the second disk as a new vdev to the rpool, done under the menu Pools -> Add instead. That method just adds the extra storage into the pool, but you have to leave both disks in your VMware config forever.
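The mirror-and-detach approach sketched above looks like this from the CLI. Device names are examples (check yours with `format`), and on a root pool the attach must target the s0 slice and the new disk needs a boot loader installed before the old one is detached:

```shell
# Attach the new 16GB virtual disk to the root pool, forming a mirror
zpool attach rpool c1t0d0s0 c1t1d0s0

# Make the new disk bootable (grub-based Solaris/OI)
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

# Watch the resilver; only detach the old 12GB disk once it completes
zpool status rpool
zpool detach rpool c1t0d0s0

# Let the pool grow into the larger remaining device
zpool set autoexpand=on rpool
```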
Thank you for the help. I've read this once before and unfortunately I just don't understand it. I've tried setting permissions recursively from a Windows client to full permissions for everyone, but the setting doesn't seem to be persistent. I just now found the ACL settings in napp-it, but it appears I need to register/pay for that functionality. Is there a good guide on managing these types of ACL properties from the CLI in Solaris? All I'm trying to do is set it so that all existing and any new files on the NFS/samba share are available to every user via NFS and samba. I've logged into Solaris and set permissions on everything with chmod -R 777, but that doesn't stick for newly created files.