The components of my All-in-One are arriving. I will be creating a 6x2TB raidz2 pool using 512-byte-sector Hitachi drives. In the meantime, I have created a VM running OI 151a and napp-it to begin learning.
Question 1: What is the most stable release of OI that I should use?
Question 2: If Solaris Express 11 is more stable than OI, would you recommend SE11? (I have no need for encryption.)
Question 3: In my test VM, I created a 6-drive raidz2 pool; however, I can't tell whether the pool that was created is actually a raidz2 or raidz pool (the pool status differs from the pool details).
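For what it's worth, the vdev type can be confirmed from the command line: zpool status names the vdev explicitly (the pool name "tank" below is only an example):

zpool status tank
# a raidz2 pool lists its vdev like this:
#   NAME        STATE
#   tank        ONLINE
#     raidz2-0  ONLINE
#       c3t0d0  ONLINE
#       ...
# a plain raidz pool would show "raidz1-0" instead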
# /etc/power.conf excerpt (changes take effect after running pmconfig)
device-dependency-property removable-media /dev/fb
autopm default
autoS3 default
cpu-threshold 1s
# Auto-Shutdown   Idle(min)   Start/Finish(hh:mm)   Behavior
autoshutdown      30          9:00 9:00             noshutdown
cpupm enable
# spin the data disk down after 1800 s (30 min) of idle
device-thresholds /dev/dsk/c3t0d0 1800s
You could always run OpenIndiana, unless you need ZFS encryption?
Can somebody explain how I can import (or transfer) a pool?
Right now I'm using a dedicated file-server machine with onboard SATA plus an LSI 9211-8i SAS card.
I'm moving everything into a brand-new ESXi "all-in-one" box using 2x LSI 9211-8i cards and won't be using onboard SATA.
How can I transfer my pool from the old system to the new (and virtualized) one with slightly different hardware?
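For reference, napp-it's buttons wrap plain zpool commands; a sketch of the manual steps (the pool name "tank" is an assumption):

# on the old server: cleanly export the pool, releasing all disks
zpool export tank
# move the disks to the new HBAs, then on the new (virtualized) system:
zpool import          # scans attached disks and lists importable pools
zpool import tank     # imports by name; add -f only if the pool was never exported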
Thanks! I saw the import button in napp-it but somehow missed the export button...
I'll be using the same operating system (with napp-it), so hopefully it goes smoothly. Thanks again for all of your help and advice!
I am about to dump Solaris Express 11 for OI 151a. Is that a good idea? I have no need for encryption...
Hmmm... I tried to export my pool (so I can move the disks to a new server), but it says "cannot export 'tank': pool is busy".
How can I fix that?
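The usual cause is an active share or a process holding files open on the pool; a sketch of the standard checks (pool name "tank" from your error message, mountpoint assumed to be /tank):

zfs unshare -a        # drop NFS/SMB shares that can keep the pool busy
fuser -c /tank        # list processes with open files under the mountpoint
cd /                  # make sure your own shell isn't sitting inside the pool
zpool export tank     # retry; 'zpool export -f tank' forces it as a last resort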
Partly thanks to this thread, I'm up and running with oi_151a and napp-it: 5 drives in RAID-Z2, and joined to the AD domain.
Does anyone know of a walkthrough somewhere for setting the ACLs (through Windows) on my ZFS folders?
I am getting slow READ performance under ESXi (5).
VM-to-VM is fine, so I know it's not the ZFS server that is slow. But going from a VM to a physical PC I am only getting 25 MB/s reads (80 MB/s writes).
The physical NIC is an Intel 1000MT and the vNIC is VMXNET3; I tried the E1000 but got the same results. Clearly, if the writes are capable of 80 MB/s, the reads should be as fast or faster.
On bare metal it's fine; the problem only occurs under ESXi. Any suggestions?
- VMware Tools are installed in the guest OS (Solaris Express 11)
- Jumbo frames are enabled in both the OS and the ESXi vSwitch settings
- FreeNAS has the same problem
- The array reads at over 1400 MB/s
- Bare metal is fine, but I really need the setup to be an All-In-One
- jPerf is fine, ~100 MB/s both ways
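One thing that is sometimes worth trying for asymmetric SMB/NFS throughput is raising the guest's TCP window sizes; this is only a guess for your case, not a confirmed fix (values are examples):

ndd -set /dev/tcp tcp_max_buf 4194304       # raise the ceiling first
ndd -set /dev/tcp tcp_recv_hiwat 1048576    # larger receive window
ndd -set /dev/tcp tcp_xmit_hiwat 1048576    # larger send window (server-to-client reads)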
This may be a noob question, so please don't hate:
I'm ready to build my RAID-Z2 with 6 drives. What is the best way to label them ahead of time so I know which one to pull when one dies? Do I plug them in one by one and check the ID, or will that change?
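One common approach: record each disk's serial number against its cXtYdZ device ID before you build the pool, and write the serial on each drive tray. The cXtYdZ IDs can change if disks move between ports, but the serials never do. On OI/Solaris:

iostat -En        # prints model, firmware and "Serial No:" for every disk
echo | format     # cross-checks the cXtYdZ list the OS sees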
Gea, I'm wondering if you know whether OpenIndiana supports Sun's StorageTek Availability Suite.
What are the least expensive disks recommended for an ESXi datastore (hosted by ZFS on an "All-In-One" box)?
I'm thinking about using RAID 10 (4 disks) with something like 250 GB disks, but 250 GB disks aren't much cheaper than 2 TB disks!
So the only reliable form of replication is running send/receive snaps via cron over the wire, then?

This was already dead at Sun with the latest builds (at least as a free project).
The current free HA project is https://www.illumos.org/projects/ihac
(but there is less activity there).
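A minimal sketch of such a cron-driven send/receive, assuming a dataset tank/data, a receiving host backuphost with a pool backup, and an initial full send already done (all names are placeholders):

#!/bin/sh
# replicate.sh -- incremental ZFS replication, run nightly from cron:
#   0 2 * * * /root/replicate.sh
# find the most recent existing snapshot of the dataset
PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 tank/data | tail -1)
NOW="tank/data@repl-$(date +%Y%m%d%H%M)"
zfs snapshot "$NOW"
zfs send -i "$PREV" "$NOW" | ssh backuphost zfs receive -F backup/data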
In domain mode:
SMB-connect your share from another computer (also a member of the domain) as root
and set the desired ACLs (right-click, Properties, Security).
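The same can also be done from the Solaris side with the ACL-aware chmod; a sketch, with the share path and AD group as placeholders:

# grant the domain group full control, inherited by new files and folders
/usr/bin/chmod A+group:staff:full_set:file_inherit/dir_inherit:allow /tank/share
# inspect the resulting ACL
/usr/bin/ls -V /tank/share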
How would a RAID 10 of 4x Hitachi 5k3000 2 TB disks work?

With ESXi, NFS and sync writes you should look at I/O performance.
All large, cheap disks are quite bad at this.
"Least expensive" could mean a lot of things.
Depending on your money and capacity needs, I would prefer:
- Use as many vdevs as possible; prefer mirrors (see the sketch below)
- Prefer 10k disks like WD Raptors, or at least 7200 rpm drives; avoid green ones
- Avoid 4k disks; add SSD write caches (preferred as a mirror; used size is 50% of RAM)
- I would look at WD Raptors, or a lot of Hitachi disks (7200 rpm, non-4k)
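In zpool terms that "RAID 10" is just striped mirrors; a sketch with example pool and device names:

# two mirror vdevs striped together
zpool create vmpool mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0
# performance scales later by adding more mirror vdevs:
zpool add vmpool mirror c3t4d0 c3t5d0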
How would a RAID 10 of 4x Hitachi 5k3000 2 TB disks work?
...or is there another combination of 4x Hitachi 5k3000 2 TB disks that would result in better performance? I don't even really need a total of 2 TB of storage for VMs (but I already have two of those disks and was thinking about ordering two more).