Post your ZFS setup

erebus720

Title says it all...

Very curious as to what everyone is running their ZFS setup(s) for, and what hardware/disks they are running them on.

Include disks, motherboards, HBAs, etc.


I have a Norco 4224:

X9SCM-F-O
Xeon E3-1230
16GB
8x WD 5000AAKS drives
Mirrored 320GB laptop drives

Currently used for running VMs for testing.
 
X9SCL+-F
E3-1230 Xeon
16GB
8x Hitachi 5K3000 3TB Drives
Mirrored 500GB drives
LSI 9201-16i
 
(Still in the process of setting this up, but here it is:)

Head node x2:
X8DTH-6F (uses 825TQ-R720LPB case)
2x E5620 quad cores
48GB DDR3-1333 ECC Registered
AOC-STG-12 10GbE dual NIC (Intel 82598EB based)
LSI 9205-8e HBA

JBOD:
847E26-RJBOD1

Disks:
ZIL - 1x STEC ZeusRAM 8GB
Pools - 17x 1TB Seagate Constellation ES (ST31000424SS)
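The ZeusRAM will just be attached as a dedicated log device, either at pool creation or afterwards. Roughly something like this (pool name, c*t*d* device names, and the mirror layout are placeholders, since the pool isn't built yet):

  # pool of mirrors across some of the Constellations (placeholder device names)
  zpool create tank \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0
  # dedicate the ZeusRAM as a separate log device (ZIL)
  zpool add tank log c2t0d0
  # confirm the log shows up as its own vdev
  zpool status tank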
 

What are you using this for?
What's the model # of your Zeus drive?
How much did you pay for it?
My work is considering a similar build.
 

Primary storage for a bunch of ESX servers

Zeus Drive: Z4RZF3D-8UC
http://www.stec-inc.com/product/zeusram.php
They go for around $3k (most expensive drive I have ever bought, lol).

Racked everything up, just haven't installed the OS, etc. yet. Will probably start a thread here in the next day or so detailing the build.
 
Intel Xeon E3-1230
SuperMicro X9SCM-F
16GB (4x4GB) SuperTalent DDR3-1333
Norco RPC-4224 24-bay
Corsair Professional Series Gold AX850
8GB Patriot Xporter XT Boost flash drive [hypervisor storage]
320GB Samsung Spinpoint F4 HDD [OS datastore]
24TB (12x 2TB) Hitachi Deskstar 5K3000 HDDs in RAID-Z2
2x IBM ServeRAID BR10i SAS Controllers w/ LSI IT firmware
VMware ESXi 5.0 Hypervisor
OpenIndiana b151
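The 12-disk RAIDZ2 is a single vdev with two disks' worth of parity; creation looks roughly like this (pool name and device names are placeholders, not the actual controller/target IDs in this build):

  # single 12-disk RAIDZ2 vdev (placeholder device names)
  zpool create tank raidz2 \
    c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0
  # usable space is roughly 10x 2TB before filesystem overhead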
 
AMD Athlon II X2-250
Asus M4A785T-M
16GB ECC RAM (4x 4GB)
Crappy Cooler Master case
4x 2TB Hitachi 5K3000 RAIDZ
60GB OCZ Vertex 2 (L2ARC)
IBM BR10i
2x Kingston 16GB SSD - boot
Solaris 11
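The Vertex 2 just gets bolted on as a cache device; roughly (pool name and device names are placeholders):

  # 4-disk RAIDZ1 plus the SSD as L2ARC (placeholder device names)
  zpool create tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0
  zpool add tank cache c4t0d0
  # cache devices aren't mirrored; losing an L2ARC device never risks pool data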
 
Intel Q9450
P45 Gigabyte EP45-DS3P
AOC SATA PCI-X card, eight ports. In a PCI slot.
8x Samsung 204i 1TB SATA disks
Antec P182
8GB RAM
 
I'm going for the worst specs award:

Dell Dimension 8400:
3GHz(?) Pentium 4 w/HT
2GB DDR2-533 non-ECC (not ideal)
4 SATA ports on the Intel 925X Express
3x Samsung HD103SM (RAIDZ) and a crap old PATA system drive (OI-151a).
 
HP MicroServer N36L
8GB RAM
FreeNAS 8 running on a 16GB SSD
RAIDZ1: 4x 2TB Hitachi 7200RPM

I spent $600 on the entire rig. I wouldn't go for 7200RPM drives if I did it again; even this small, fairly low-power setup can do sequential reads/writes at more than double the speed of a 1GbE link.
 
Supermicro X9SCL+-F
Core i3-2100
16GB Kingston ECC RAM
2x 9220-8is/M1015 (not flashed to IT firmware, using LSI imr_sas driver)
8x Seagate 500GB drives (in RAIDZ2)
4x 300GB IBM-badged WD VelociRaptors (in striped mirror)
2x 32GB OCZ Vertex SSDs (mirrored rpool)
2x 16GB Samsung SSDs (not arrived yet, will be ZIL)
1x Intel 320 120GB SSD (possible L2ARC, not decided yet)

OpenIndiana 151 with Napp-it.


The 8x 500GB drives are storage for files; the 4x 300GB are primarily a datastore for ESX, but I might dump some other files on there given I don't need 600GB for ESX.

Very happy with performance so far and the features of napp-it.
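When the ZIL SSDs arrive they'll go in as a mirrored log vdev (and the Intel 320 as cache, if it ends up as L2ARC); roughly (pool name and device names are placeholders, not the actual ones in this box):

  # mirrored SLOG from the two 16GB SSDs
  zpool add tank log mirror c5t0d0 c5t1d0
  # optional L2ARC from the Intel 320
  zpool add tank cache c5t2d0
  # log vdevs can be removed later if the plan changes
  # (the mirror-N name to remove comes from 'zpool status')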
 

In the end, the dual-head build above will be running NexentaStor (mainly for HA, and so I have someone to call at 2am when something goes wrong).

However, I will be testing illumos and illumian with napp-it as well.
 
Intel S5000PSL
Xeon 5148 x 2
16 GB DDR2 FB-DIMM
AOC-USAS-L8I x 2
10x mixed-model 3Gb/s Seagate 750GB
5x Samsung F4 HD204UI
1x ADATA S599 60GB SATA
1x Supermicro 5-disk hot-swap
Norco RPC-450

Seagates are in a striped mirror array
Samsungs are in a RAIDZ1 array
 
SUPERMICRO MBD-X9SCA-F-O
Intel i3-2100T
Kingston 8GB ECC
6x 2TB Hitachi 5K3000 in RAIDZ2
IBM ServeRAID BR10i SAS
1x $12 eBay Hitachi 40GB OS drive :D
Antec NEO ECO 400C
NORCO RPC-250 2U case

OpenIndiana 151a + napp-it

65W idle power consumption with 4 fans blasting; 85W under load

371.69 MB/s Write
474.07 MB/s Read
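Numbers like these usually come from a simple sequential test (napp-it's built-in bench is essentially a big dd); a rough sketch, with the test file sized larger than RAM so the read pass isn't served from ARC, and assuming compression is off so /dev/zero doesn't just compress away (path and size are placeholders):

  # sequential write: ~16GB into the pool
  dd if=/dev/zero of=/tank/ddtest bs=1024k count=16384
  # sequential read of the same file
  dd if=/tank/ddtest of=/dev/null bs=1024k
  rm /tank/ddtest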
 
I have 3 in production right now, all small; building a 4th/5th next month.

1)
Dell 1950, 16GB, 2x 1.86GHz quads, with a PCI-E U320 SCSI card and 4x GigE
2x U320 14-disk PowerVaults filled with Maxtor/Compaq 10k 300GB SCSI drives. One is 2x triple parity, one is 7x mirrored.

2)
Supermicro X7DVLE mobo
2x 1.86GHz quads
12GB RAM
6x GigE
2x AOC-SAT2-MV8
11x 1TB RE4, 5 mirrors + HS
4x 80GB X25 SSDs for L2ARC
2x Wintec SLC 8GB SSDs for SLOG

All crammed in an old SC933 (formerly a Coraid system).

3) Same as #2 but no SSDs, just 15x 850GB Ultrastars in 7 mirrors + 1 HS, but going to reconfigure to 2x 7-disk triple parity + HS.

Numbers 4 and 5, which I'm still researching a bit, will be based on SC835 chassis as head units so I can use two ACARD 9010s and still have the option to use DDRdrives if I need to.

I'm leaning towards the Socket C32 H8DCL-6F motherboard and the Opteron 4228 HE because they're inexpensive, low power, have AES-NI and a ton of PCI-E x8 or better slots, and the onboard SAS2008 can drive 8 SATA3 SSDs in the head unit.

Networking will be 40Gb InfiniBand.

Undecided as yet on which JBOD chassis, but they'll be connecting via LSI 1068 controllers. Spinners aren't limited by 3Gbit at all, and the 1068s work better with Solaris.

I also have 54x 2.5" 73GB 15K SAS drives hanging around that I need to make use of. Probably going to use them for database-specific storage and/or my own management-type storage for backend VMs, etc. Probably will forgo L2ARC here and just use a ZIL + 12x 4-drive Z2 arrays.
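The triple-parity reconfigure is just raidz3 vdevs at creation time; roughly, for the planned 2x 7-disk triple parity + HS layout (pool name and device names are placeholders):

  # two 7-disk raidz3 vdevs (4 data + 3 parity each) plus a hot spare
  zpool create vault \
    raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz3 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0
  zpool add vault spare c1t14d0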
 
Norco 4216 case
Supermicro X7DBN
Xeon E5205
12GB RAM
2x br10i
6x Hitachi 2TB Green (5400ish RPM) - RAIDZ2
4x Hitachi 750GB - dual mirror
2x 500GB - mirror
2x 320GB 2.5" rpool - mirror
1x 64GB Crucial M4 - L2ARC
1x ACARD ANS-9010B 6GB - ZIL
Solaris 11 Express
 
Supermicro H8SMi-2 mobo
AMD Athlon 3800+ X2 (AM2)
8GB Kingston unbuffered ECC RAM
2x nondescript (used) 5400RPM 2.5" 60GB SATA drives (mirrored rpool)
DELL SAS6i HBA (LSI 1068e flashed to IT firmware) passed to external 4 lane SAS connectors
all crammed in an ATX 1.0 era rackmount enclosure

Rackable Systems SE3016 (16-disk SAS JBOD expander array) with 8x 750GB 7200RPM 'cudas (assorted models) configured as a 6-disk RAIDZ2 with 2 hot spares

Running OpenIndiana
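The hot spares just sit in the pool as their own vdev class; roughly (pool name and device names are placeholders, not the actual SE3016 targets):

  # 6-disk RAIDZ2 plus two hot spares
  zpool create tank raidz2 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0
  zpool add tank spare c6t6d0 c6t7d0
  # on OpenIndiana the fault-management agent pulls a spare in automatically when a disk faults
  zpool status tank   # spares are listed in their own section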
 
Asus P5Q Pro
Q6600 OC'd to 3.4GHz (getting my money's worth from that old Tuniq HSF)
CoolerMaster Centurion 5 "HAF" with Icy Dock hot-swap bays
8GB Crucial RAM
Dual-port Intel server GigE NIC
Intel SASUC8I
6x 2TB Hitachi 7200RPM in RAIDZ2
2x 40GB SSDs in RAID-0 as L2ARC
3x 80GB SATA-1 as mirrored rpool (3-way because the disks are old)
OpenIndiana 151 with napp-it

Exports iSCSI, NFS, and SMB for my home network
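The shares themselves are just dataset properties, plus a COMSTAR zvol for the iSCSI side; roughly, on OpenIndiana (pool/dataset names and sizes below are placeholders, not my actual layout):

  # NFS and SMB shares are per-filesystem properties
  zfs create tank/media
  zfs set sharenfs=on tank/media
  zfs set sharesmb=name=media tank/media
  # iSCSI: carve out a zvol and expose it through COMSTAR
  svcadm enable -r svc:/network/iscsi/target:default
  zfs create -V 200G tank/esx-lun
  sbdadm create-lu /dev/zvol/rdsk/tank/esx-lun
  stmfadm add-view <lu-guid-printed-by-sbdadm>
  itadm create-target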

iSCSI bench numbers (MB/s):
Seq: 108.8 read / 50.76 write
512K: 106.7 read / 49.10 write
4K: 14.25 read / 8.5 write
4K QD32: 100.1 read / 9.9 write

(Anyone know why QD32 is so much faster than QD1 for 4K reads? Is that my L2ARC in action?)
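Part of that gap may just be 32 requests in flight over the wire instead of one, but the ARC kstats will show whether the L2ARC is actually serving those reads; a quick check on OpenIndiana while the benchmark runs:

  # cumulative L2ARC hit/miss counters (module:instance:name:statistic)
  kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses
  # or dump every L2ARC-related counter
  kstat -p zfs:0:arcstats | grep l2_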
 
Supermicro X8SI6-F
HP SAS Expander
16GB ECC RAM
10x 3TB WD Greens in RAIDZ2 (planning to add another vdev later)
OI 151 + napp-it
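Adding the second vdev later will just be a zpool add, though new writes get striped across both vdevs while existing data stays where it is; roughly (pool name and device names are placeholders):

  # grow the pool with a second 10-disk RAIDZ2 vdev
  zpool add tank raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 \
    c7t5d0 c7t6d0 c7t7d0 c7t8d0 c7t9d0
  zpool iostat -v tank   # shows per-vdev capacity and how full each one is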
 