Origin_Unknown
_Gea, I've been using Solaris Express 11 with napp-it for a few weeks now and I'm very impressed so far - thank you for the work you've put into this.
How important is ECC in a ZFS pass-through / ESXi build? I'm deciding between a Xeon build and an i7-2600 w/ Intel Q67 motherboard. Is the ECC worth the extra $200+?
Intel 82579LM for the NIC - is that officially supported by ESXi, or do you still have to do unsupported driver hacks?
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182262
(note the +F, has 2 82574L controllers)
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115083
E3-1230 is cheaper than the 2600
You can use the price difference between the 2600 and E3-1230 to bump the RAM up to ECC.
I think the price will be no more than 40-50 bucks more (mobo price difference), and you are getting dual nics and probably a more reliable overall setup. It's certainly going to be one that more people have used.
In a pure mirroring setup, do RAM size and ECC (or the lack of it) matter?
I was trying to add some extra complexity to the build by using the board as a desktop pc for a little while until the Intel X79 launches..
Gea, I notice in the napp-it instructions on the all-in-one setup, you say not to use ESXi 5.0 because of the 8 GB RAM limitation, but I thought VMware changed the limit to 32 GB.
32 GB seems to be adequate for lots of builds, though I agree 4.1 was better in this aspect.
He changed the guide before VMware announced the change from the 8 GB limit to 32 GB. I have had an ESXi 5 install running for months now and it works beautifully; I would recommend you start with that.
for the home? not really. for a business production environment? yes.
maybe for you, but i care about my data so i use ecc
when have you ever seen ECC 'save' data? i will argue the home environment never stresses a fileserver hard enough that ECC will ever 'save' anything. with the drive sizes these days you're going to run into bit rot before ECC is going to 'save' the data from transferring or streaming that video file.
Obviously a loaded question - when you have an error due to non-ecc memory it's not really going to be diagnosable - and likewise if you have ecc there's not really any sort of performance counter that i'm aware of that lists errors corrected.
Really, the price difference is so minuscule, why not go for it? Are you going to crash and burn without it? Probably not.
As for bit rot, that's why you are in a solaris thread. Problem solved.
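To make the bit-rot point concrete: ZFS checksums every block end to end, and a scrub re-reads the whole pool to verify those checksums. A minimal sketch, assuming a pool named "tank" (substitute your own pool name):

```shell
# Start a scrub: reads every block in the pool and verifies its checksum.
# On a redundant vdev (mirror/raidz), detected corruption is repaired
# automatically from a good copy.
zpool scrub tank

# Check progress and any checksum errors (CKSUM column, per device):
zpool status -v tank
```

Running this periodically is what actually catches silent corruption before it spreads to your only copy.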
I'm just doing a fresh Napp-It VM build, going through a shakedown on the hardware and noticed that Bonnie++ in Pools/Benchmarks won't launch. dd bench works.
Use of uninitialized value in subroutine entry at /usr/perl5/5.10.0/lib/i86pc-solaris-64int/DynaLoader.pm line 226.
OK, here it is in case anyone wants it, modify as you see fit.
usage:
zpool-spindown.sh poolname
==============================
#!/usr/local/bin/bash
#
# zpool-spindown.sh
#
ZPOOL="$1"
if [ -z "$ZPOOL" ]
then
echo "zpool name required"
exit 2
fi
PATH=/usr/local/bin:/bin:/usr/bin:/usr/sbin:/usr/local/sbin:/sbin
export PATH
# Cleanup any tmp file if present
if [ -f /tmp/zpool.iostat ]
then
rm -f /tmp/zpool.iostat
fi
# Name of samba share to check if mounted
SMBSHARE="media"
# Get drives for pool
drives=`zpool status $ZPOOL | egrep "da[0123456789]" | awk '{print $1}' | tr '\n' ' '`
firstdrive=`echo "$drives" | awk '{print $1}'`
# Activity checks
smbactive=`smbstatus -S | grep -A 6 "Connected at" | grep $SMBSHARE | wc -l | awk '{print $NF}'`
scrubrunning=`zpool status $ZPOOL | egrep "scrub in progress|resilver in progress" | wc -l | awk '{print $NF}'`
spundown=`smartctl -n standby -H /dev/$firstdrive | tail -1 | grep "STANDBY" | wc -l | awk '{print $NF}'`
if [ -f /tmp/locate.running ]
then
echo "Locate running...Aborting spindown!"
exit 3
elif [ $smbactive -gt 0 ]
then
echo "Samba share is mounted...Aborting spindown"
exit 3
elif [ $scrubrunning -eq 1 ]
then
echo "Scrub/resilver is running...Aborting spindown"
exit 3
elif [ $spundown -eq 1 ]
then
echo "Spundown already...Aborting spindown"
exit 3
fi
# Longer IO Activity check - only perform if got past above
zpool iostat $ZPOOL 30 2 | tail -1 > /tmp/zpool.iostat
reading=`cat /tmp/zpool.iostat | awk '{print $(NF-1)}' | awk -F\. '{print $1}' | sed -e 's/K//g' | sed -e 's/M//g'`
writing=`cat /tmp/zpool.iostat | awk '{print $NF}' | awk -F\. '{print $1}' | sed -e 's/K//g' | sed -e 's/M//g'`
rm -f /tmp/zpool.iostat
if [ $reading -gt 0 ]
then
echo "Pool shows IO activity...Aborting spindown"
exit 3
elif [ $writing -gt 0 ]
then
echo "Pool shows IO activity...Aborting spindown"
exit 3
fi
drives=($drives)
type=""
driveop () {
drive=$1
# Need to issue a different command to ada vs da devices!
type=`echo $drive | cut -c 1`
if [ "$type" = "d" ]
then
camcontrol stop $drive
elif [ "$type" = "a" ]
then
camcontrol standby $drive
fi
return
}
drives_count=${#drives[@]}
index=0
while [ "$index" -lt "$drives_count" ]
do
driveop ${drives[$index]}
printf "Spindown Drive %s\n" ${drives[$index]}
let "index = $index + 1"
done
===============================
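To run the check unattended, a cron entry could invoke the script periodically. This is a sketch; the pool name "tank", the install path, and the 15-minute interval are my assumptions:

```shell
# Hypothetical crontab entry (edit with `crontab -e`): check pool "tank"
# every 15 minutes and log the outcome. The 0,15,30,45 form works with
# both FreeBSD and classic Solaris cron (the latter has no */15 syntax).
0,15,30,45 * * * * /usr/local/sbin/zpool-spindown.sh tank >> /var/log/zpool-spindown.log 2>&1
```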
I'm having an issue with the auto script. I have the OI server plugged into a monitor and the screen is telling me to check the auto_error.log. It has a bunch of entries that say this. The server is running Barebones install of OI 151a and Napp-it 0.500s
Code:
Use of uninitialized value in subroutine entry at /usr/perl5/5.10.0/lib/i86pc-solaris-64int/DynaLoader.pm line 226.
I have a couple last issues I've been trying to figure out before my server build is finished, maybe some Solaris/OpenIndiana gurus can help...they are both power related.
1) How to get drive spindown working? I've set the device threshold in power.conf via napp-it, and disabled fmd because I read that it could be preventing spindown, but alas no luck. I did see on page 2 of this thread a script that someone created for FreeBSD:
Can this be adapted for Solaris? The script aborts at the command 'smbstatus'.
2) Is it possible to get a Cyberpower UPS interfacing w/ Solaris via USB? I've tried various things with apcupsd but I'm a bit lost on that since I'm a Unix newbie.
ps. This thread rocks, so much good info, thanks Gea!!
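On the smbstatus abort: smbstatus is a Samba tool, and the Solaris/OpenIndiana kernel CIFS server does not ship it. One portable substitute is to count established TCP sessions on the SMB port instead; this is a sketch, and the helper name and port-445 heuristic are my assumptions:

```shell
# Hypothetical replacement for the smbstatus activity check: count
# established TCP sessions on the SMB port (445). The helper reads
# `netstat -an` output on stdin, so the parsing is easy to test.
# Solaris netstat prints local addresses as host.port (e.g. 192.168.1.5.445).
count_smb_established() {
    awk '$0 ~ /\.445 / && /ESTABLISHED/ { n++ } END { print n+0 }'
}

# In the spindown script, the smbstatus line would become something like:
#   smbactive=`netstat -an | count_smb_established`
```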
#
# Copyright 1996-2002 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#pragma ident "@(#)power.conf 2.1 02/03/04 SMI"
#
# Power Management Configuration File
#
device-dependency-property removable-media /dev/fb
autopm enable
autoS3 default
# cpu-threshold 1s
# Auto-Shutdown Idle(min) Start/Finish(hh:mm) Behavior
autoshutdown 30 9:00 9:00 noshutdown
cpupm enable
cpu-threshold 300s
#device-thresholds /dev/dsk/c5t3d0 2m
#device-thresholds /dev/dsk/c5t4d0 2m
device-thresholds /pci@0,0/pci8086,3b42@1c/pci1000,3140@0/sd@0,0 10m
device-thresholds /pci@0,0/pci8086,3b42@1c/pci1000,3140@0/sd@1,0 10m
device-thresholds /pci@0,0/pci8086,3b42@1c/pci1000,3140@0/sd@2,0 10m
device-thresholds /pci@0,0/pci8086,3b42@1c/pci1000,3140@0/sd@3,0 10m
device-thresholds /pci@0,0/pci8086,3b42@1c/pci1000,3140@0/sd@4,0 10m
device-thresholds /pci@0,0/pci8086,3b42@1c/pci1000,3140@0/sd@5,0 10m
device-thresholds /pci@0,0/pci8086,3b42@1c/pci1000,3140@0/sd@6,0 10m
device-thresholds /pci@0,0/pci8086,3b42@1c/pci1000,3140@0/sd@7,0 10m
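After saving /etc/power.conf, the new thresholds still have to be loaded. A sketch of applying and verifying them as root (the device name below is just an example):

```shell
# pmconfig validates /etc/power.conf and pushes the settings into the kernel.
pmconfig

# To find the /pci@... physical path for a disk, follow the /dev/dsk
# symlink into /devices; c5t3d0s0 here is an example device name.
ls -l /dev/dsk/c5t3d0s0
```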
Never use a vdev larger than, say, ~12ish disks. You will get extremely bad IOPS performance. And when you resilver it will take days.
So if you already had some 10+2 2TB (20TB usable) Raid-Z2 vdevs, would it be better to add 7+2 3TB (21TB usable) arrays or 20+2 3TB (30TB usable) arrays?
I didn't see a way to do it in the napp-it GUI, so I decided to hardcode an IP in Nexenta because every time I reboot the server it pulls a new IP from DHCP. I searched around and found some instructions. I edited the file ("vim /etc/nwam/llp") and changed the entry to "xnf0 static xx.xxx.xxx.xxx" from dhcp. If I restart services ("svcadm restart svc:/network/physical:nwam") I get the hard-coded IP, but if I reboot I get a DHCP IP again.
To fix the static changing on reboot, on the openindiana irc board someone suggested:
svcadm disable network/physical:nwam
svcadm enable network/physical:default
ipadm create-addr -T static -a local=xx.xxx.xxx.xxx xnf0/v4
The last command failed because ipadm is not available on Nexenta Core. Now I have no IP when I boot.
Can anyone suggest a way for me to get myself out of this hole that I've dug?
-Flash
One vdev will give you the same IOPS as one disk. Say you have 20 disks in raidz3 - then you have 20 disks acting as a single disk - which means very bad IOPS.
Never use a vdev larger than, say ~12ish disks. You will get extremely bad IOPS performance. And when you resilver it will take days.
I am going to buy a Norco 4224. I am going to use two 12-disk vdevs in raidz3. Better would be three 8-disk vdevs in raidz2 - but I don't care too much about IOPS.
One vdev will give you the same IOPS as one disk. Say you have 20 disks in raidz3 - then you have 20 disks acting as a single disk - which means very bad IOPS.
Sorry, that was a typo. The 20+2 was meant to be 10+2 (same as the original vdev). I was just curious if there was any unusual penalty from mixing vdev arrangements (other than the usual performance/bottleneck issues).
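The multiple-smaller-vdev layout recommended above is expressed at pool creation time, and the pool is later grown one whole vdev at a time. A sketch with hypothetical device names:

```shell
# Two 8-disk raidz2 vdevs in one pool: ZFS stripes across vdevs, so
# random IOPS scale with the number of vdevs, not the number of disks.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

# Growing the pool later: add another whole vdev of the same shape.
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
```

Mixing vdev widths (e.g. a 10+2 next to a 7+2) works; writes are just balanced across them by free space, which is the usual performance caveat mentioned above.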
Use ifconfig instead of ipadm (non-persistent, works only until reboot).
Current settings: ifconfig -a
New setting, like:
ifconfig hme0 xx.xxx.xxx.xxx netmask 255.255.255.0 broadcast + up
Then set the IP via napp-it menu System > Network to have persistent settings,
or
reboot to the last system snapshot,
or
disable network/physical, enable nwam.
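On pre-ipadm systems such as Nexenta Core, network/physical:default reads /etc/hostname.&lt;interface&gt; at boot, so a persistent static address can be sketched like this (the addresses are placeholders; the xnf0 interface name is taken from the post above):

```shell
# Persistent static IP the pre-ipadm way: physical:default passes the
# contents of /etc/hostname.xnf0 to ifconfig at boot.
echo "192.168.1.50 netmask 255.255.255.0" > /etc/hostname.xnf0
echo "192.168.1.1" > /etc/defaultrouter   # default gateway

svcadm disable network/physical:nwam
svcadm enable network/physical:default
```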