Yes, technically I do agree.
Maybe this is a psych/mind-at-rest thing...like you *did* switch off the coffee-maker, you *know* it, you well *remember* that you did, but when you leave home, you double check (or even pull the plug;)
This is IMHO not ZFS related, but before deploying a disk into an array, I'd like to know whether it is up to spec and standards.
I run a "burn-in" on new and used/migrated disks, just like memtest for RAM.
For new disks I run the tool at least 3x, and 1x for used ones...if...
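A minimal sketch of such a burn-in loop, assuming a Linux box with badblocks available (the device path and pass count are placeholders, and the destructive command is commented out for safety):

```shell
# Hedged burn-in sketch: destructive badblocks passes over one disk.
# WARNING: the real command WIPES the disk. DISK is a hypothetical placeholder.
DISK=/dev/sdx
PASSES=3              # 3 passes for new disks, 1 for used/migrated ones
count=0
for i in $(seq 1 "$PASSES"); do
    echo "burn-in pass $i of $PASSES on $DISK"
    # badblocks -wsv "$DISK"    # uncomment to actually run the write test
    count=$((count + 1))
done
```

Checking the SMART counters before and after the passes tells you whether the drive degraded under load.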
thanks for your answer.
It's the TXT install CD and it installs just fine on another system (tested in a VM).
I also installed Solaris 11 Express and then ran the pkg update procedure.
The upgrade itself ran fine, but upon reboot the system hangs after the first kernel prompt.
I built another box with an AMD Opteron 3350HE (AM3+) and 8GB of ECC memory.
The CPU features AES-NI but Solaris 11-Express would not recognise it.
However, Solaris 11.1 refuses to install.
The installer would hang forever "transferring" data to disk (using a S-ATA SSD).
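For what it's worth, whether the OS actually recognises the instruction set can be checked with isainfo on the Solaris box itself (a hedged sketch; the grep pattern is an assumption about how the flag is listed):

```shell
# Hedged sketch: check for AES-NI from a Solaris shell.
# isainfo -v lists the instruction-set extensions the OS recognises;
# no "aes" in the output would mean AES-NI is not seen by the kernel.
# isainfo -v | grep -w aes      # uncomment on the Solaris box itself
FEATURE=aes
echo "looking for CPU feature: $FEATURE"
```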
Oh, right...you're using 2 vdevs in a single pool...sorry, I misread your info.
Good that you have another pool with enough space to hold your data.
IMHO a zfs send/receive from your old pool to a new folder in your backup pool, and then back to your newly created pool, should do the trick.
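A rough sketch of that round trip (pool, folder, and snapshot names are placeholders, and the zfs commands are commented out so nothing runs by accident):

```shell
# Hedged sketch: migrate a pool via zfs send/receive and back.
# "oldpool", "backup" and "newpool" are placeholder names.
SNAP="oldpool@migrate-$(date +%Y%m%d)"
echo "using snapshot $SNAP"
# zfs snapshot -r "$SNAP"
# zfs send -R "$SNAP" | zfs receive -du backup/oldpool-copy
# ...destroy and recreate the pool with the new disks, then send it back:
# zfs send -R backup/oldpool-copy@migrate-... | zfs receive -du newpool
```

The -R on the send side preserves the whole dataset tree and its properties, so the pool comes back as it was.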
Yes, the new Seagate disks will not fit in your old pool.
Also, replacing the old ones is a one-at-a-time operation...time consuming and potentially dangerous (it puts unnecessary stress on the old disks during the resilver of each and every new one).
If you can mount all five new disks in...
need your help here, I am afraid.
My all-in-one gave me a PSOD due to a RDIMM gone bad.
Now one of my pools shows:
root@tank:~# zpool status -v tank
status: One or more devices has experienced an error resulting in data
...honestly, I'd rather go and buy a small box with disks, build a ZFS array (with or w/o redundancy) and perform a zfs send / zfs receive job.
This will definitely get everything back and forth ... complete and simple.
maybe a HP Microserver N40L is a good option...
Edit: oops...missed the...
This is a concept which looks tempting for home use, for sure.
Sorry, but I cannot offer any experience with snapraid ATM.
...some questions though:
How does solaris CIFS Server fit into the equation with 1pool=1vdev=1disk?
Assuming that I want to provide a single share to hold my media, which...
Because I wanted to stay on SOL11.0 (Powermangement still working), that's exactly what I did.
@_Gea...with v0.8l3 I am unable to create an encrypted ZFS-Folder via GUI/napp-it.
The option remains unavailable, although the pools are created with ZFS V31.
(...it for sure did work in the "old...
Yes, I understand. Maybe I should apologise, as English is not my native tongue.
I have been looking for an alternative to ESXi ever since.
And of course, I looked into smartOS as soon as kvm support was announced.
I decided to stay away from it, so I cannot offer real life experience which...
Well thank you, but I don't think so.
For an Intel based system, vt-d implies vt-x, not the other way around.
Your quote does not prove that vt-d is supported with smartOS.
Can you point me in the right direction where this is confirmed, as you state?
The vt-d feature (or...
The kvm implementation of/for SmartOS lacks support for vt-d, which makes building an all-in-one with it an inside-out affair, doesn't it?
I simply don't like the idea of your SAN to be your hypervisor.
If you want to overcome the vendor lock-in and license costs, I'd suggest trying Proxmox-VE...
Quick question, since I find the current info somewhat confusing and cannot find my path around the options:
I do want/need the native full disk encryption with SOL11.
What combinations of versions of SOL11 and napp-it will allow me
to configure my pools with encryption and pools V31+ via...
Hmmm...ok, so how do I find out about the physical sector size that the disks report?
I expect there are many more drives out there that "lie" about their inner setup.
Just created a test pool with my WD EARX as a basic vdev...ashift is set to 12 automagically.
So running a small test for...
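For reference, a sketch of how I'd check: zdb dumps the cached pool config including the ashift, and the sector size the pool assumes is simply 2^ashift bytes (the zdb call is commented out since it only makes sense on the box itself):

```shell
# Hedged sketch: read the ashift from the pool config and convert it to bytes.
# zdb | grep ashift         # uncomment on the box; prints e.g. "ashift: 12"
ashift_to_bytes() {
    echo $((1 << $1))
}
ashift_to_bytes 12    # 4096-byte ("4K") sectors
ashift_to_bytes 9     # classic 512-byte sectors
```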
One of my disks in a pool is failing but the configured spare does not kick in.:o
When I try to replace the failed disk with the spare manually, I receive:
cannot replace c0t5000CCA369C9AAACd0 with c0t50014EE206E0EA31d0: devices have different sector alignment
This is SolEx-11 with...
...the X9SRL is SandyBridge on socket 2011, whereas the X9SCM is SandyBridge (upgradeable to IvyBridge via BIOS) on socket 1155. So, although both are in the set of recommended X9xxx-F boards, the X9SRL is definitely a different kind, with the 602 chipset.
The current reports with the X9SCM might be...
...there are reports of the X9SCM-F with IvyBridge CPUs where passthrough of 3 M1015s would not work.
With SandyBridge CPUs all is fine. Also reported are effects/problems with other card types when using V2.0a (IvyBridge) BIOS, even when on SB-CPUs.
...yes...there is one more thing: for my M1015 / LSI HBA, the flash only becomes effective after rebooting the host, not just the VM....this is not a problem imposed by virtualization...you need to reboot even when flashing on bare metal
...so what you gain when flashing from inside...
I've done this several times with my HBA (M1015 / LSI 9221-8i) and the solaris sas2flash utility from LSI.
This is from within my napp-it NAS (Solaris-Express 11 where the HBA is passed to).
...using ESXi5.0 though.
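For the record, the procedure looks roughly like this (the firmware file names are placeholders for whatever LSI package you downloaded; the flash commands are commented out for safety):

```shell
# Hedged sketch of flashing the M1015 to LSI IT firmware with sas2flash.
# The .bin/.rom file names are placeholders from a hypothetical firmware package.
# sas2flash -listall                          # identify the controller first
# sas2flash -o -f 2118it.bin -b mptsas2.rom   # write IT firmware + boot ROM
NOTE="reboot the host (not just the VM) for the flash to take effect"
echo "$NOTE"
```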
Thanks for sharing!
I am looking into the X9SRL-F with a single E5-2620 as a future upgrade.
My current setup is in no way comparable to yours. However, I never managed to
use all the headroom on my SOLex-11 VM...even with encryption enabled.
I have vmware tools installed and use the vmxnet3...
Sorry, can't help you with that...besides that I recall that 2 cores is the recommended
value for OI/Solaris in a ESXi VM.
But seeing that you passed through 3 pcs. M1015 cards, may I ask you to share your
hardware specs? ..motherboard make & model, especially.
Hmm, sorry maybe my post was a bit confusing.
ESXi does not use software emulation for CPUs in guests, that is why every
guest on the same host should see the native CPU model (and all of its features).
But your Solaris and OI guests report quite different features.
So maybe ESXi is...
...the CPU features reported by the guests are so different, which AFAIK is not "normal" for ESXi because it should not use soft-emulation in normal operation mode.
However, I seem to recall that you CAN set CPU features or rather CPU models....that feature is called EVC and AFAIK only applies...
Hmm...ok, do you mean that this action will get me back the option to create V31 encrypted pools under S11-Ex with napp-it 0.8k?
Yes I know about that feature but this is not an option for me, I am afraid.
Thanks _Gea for your reply.
I kept S11-Express because of working PM.
Next upgrade of that box is due at YE...will have to cope with CLI until that time.
Maybe I am lucky and OI will introduce their own zfs encryption feature meanwhile.
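Until then, the CLI path works (a sketch; the dataset name is a placeholder): Solaris 11 Express lets you create an encrypted filesystem directly with zfs create:

```shell
# Hedged sketch: create an encrypted ZFS folder from the CLI on Solaris 11
# Express while the napp-it GUI option is unavailable. Names are placeholders.
DATASET=tank/secure
# zfs create -o encryption=on "$DATASET"    # prompts for a passphrase
# zfs get encryption "$DATASET"             # should report encryption=on
echo "would create encrypted dataset: $DATASET"
```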
...I just wanted to create a new ZFS Folder and noticed that the option for encryption in napp-it shows always as "not available" :confused:
I am running napp-it 0.8k on Solaris-11 Express
and ZFS Version shows as V31 in all pools.
Existing encrypted Folders are running fine....
For my embedded media players, I am not aware of such an option in their firmwares.
Most devices will silently give up and you can re-initiate the access to the share.
One model does need a reboot in order to get to the shares.:eek:
I also use XBMC in windoze...when the underlying win does...
Thanks _Gea for your response.
Unfortunately the "important" data is the larger set (the movies and music collection).
It is not in use on a regular basis, that is why spin-down makes sense in terms of energy savings.
Which makes both aspects, energy efficiency and easy access to the data, a...
I am running a rather small "all-in-one" at home, where I spin down disks, which in itself works great.
But some clients (mostly connecting via CIFS) have problems "seeing"/mounting the shares right away when the array is in a spun-down state.
The result is that clients...
I really like that concept of pre-clearing drives that they use over at the unRAID forum, and I use it for stress-testing new or moved-around drives as well.
Here's the link: http://lime-technology.com/forum/index.php?topic=2817
Not sure if that script will work from OI/Solaris...For using it with...