The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

My system is in the sig; I can go back and add it to the post as well if that helps. I was just following the example of others on this very page who also have less than 20TB, and I assumed that since no one bit their heads off, I would be OK. Excuse me, fXK and neb. :rolleyes: (Added the picture of the HDDs too, enjoy.)
 
Better safe than sorry ;)
Today the first hardware arrived ;)

next parts arrived:
 
I've made an upgrade to the setup posted here and now have a 20+ TB system.

Specs:
----------------
MB: Gigabyte GA-B75-D3V
CPU: Intel Xeon E3 1230V2
RAM: 2x 8 GB DDR3 @ 1600 MHz
RAID: Intel SRCSASRB with 256 MB cache and BBU
HBA: 1x AOC-SASLP-MV8
HDD 3.5": 8x 2 TB + 4x 1.5 TB
HDD 2.5": 4x 160 GB 2,5" drives
OS: Windows Server 2012
Case: Flamingo Coral (it's a generic case with enough room for 9 internal drives and the rest in hdd enclosures 3-in-one and 4-in-one).
PSU: Chieftec 450 W (the entire setup draws less that 390W per hour if it had full load but it's idle being a home media server).
 
After using two HP MicroServer N36L boxes for storage and an ESXi lab, I decided to upgrade a little.
I've got all the parts and am just waiting for some time to assemble it.

Specs:
MB: MSI X79A-GD65
CPU: Intel Core i7 3930K
RAM: 32GB DDR3
RAID: Adaptec ASR-5805
HBA: IBM M1015
HDD 3.5": 5x 2 TB + 2x 4 TB
SSD 2.5": 1x 128 GB (maybe going for 2 or 3 for some cache)
Hypervisor: ESXi 5.1
OS: Windows Server 2012
Case: Cooler Master 590
PSU: Corsair HX650
 
@MAFRI

If Storage1 and Storage2 are the filers for the iSCSI storage, what's the shared DAS for, and is it really necessary? Couldn't the iSCSI cluster handle that job?
 
Total storage single chassis = 60TB

Specs:
MB: Supermicro X10SLM-F
CPU: Intel Xeon E3-1270v3 Haswell, 3.5 GHz
RAM: 32GB DDR3
HBA: (2) IBM M1015
RAID: Linux mdadm RAID 6
HDD 3.5": (15) Seagate 4TB
SSD 2.5": 1x M4 128 GB
Hypervisor: KVM
OS: Fedora 19 - Linux
Case: Supermicro 933T
PSU: 13.8 volt DC powered
USE: Personal system, backups etc.

Code:
[root@localhost log]# fdisk -l|grep Disk
Disk /dev/sda: 128.0 GB, 128035676160 bytes, 250069680 sectors
Disk /dev/sdk: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdm: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdh: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdl: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdp: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdo: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdn: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdg: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sde: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdf: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdj: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdi: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/md127: 52008.5 GB, 52008481980416 bytes, 101579066368 sectors
[root@localhost log]#
 
PSU: 13.8 volt DC powered
Very interesting. Do you have more details about what this is, where you got it from, that kind of thing? Are you using it with lead-acid batteries to provide backup? If so, what are you using to charge/float the batteries?
 
Total storage single chassis = 60TB
...
Hypervisor: KVM
OS: Fedora 19 - Linux

I'm thinking of trying KVM out. What sort of write/read speeds are you seeing?
 
Very interesting. Do you have more details about what this is, where you got it from, that kind of thing? Are you using it with lead-acid batteries to provide backup? If so, what are you using to charge/float the batteries?

I'm currently powering the (15) drive bays with (2) of these and the MB with this, with one of the drive bay units set to 5 V and the other to 12 V. The negatives of each supply are bonded to ensure the same reference across each rail.

The system is powered by a 13.8 V, 40 A supply right now, but in the future it will all be solar powered off a large battery bank. The batteries will be lead acid, probably Surrette. It will also be backed up with grid power, controlled by an Arduino.

The entire system pulls ~22 amps at 13.8 V (roughly 300 W) spinning up the drives, then idles down to ~6 amps (~83 W), and pulls ~11 amps (~152 W) while building the array. I'll do further consumption testing with all cores hot and the drives loaded soon.

I'm thinking of trying KVM out. What sort of write/read speeds are you seeing?

Give me some time and I'll post some. I'll do a few dd tests with a couple of different chunk sizes, unless you have something specific in mind?
 
No, that would suffice. You're running software RAID, right? I look forward to what you find.
 
No, that would suffice. You're running software RAID, right? I look forward to what you find.

Raid = mdadm (software) RAID 6

I used a test size of 3x my RAM, but you should still take the total file size of 105 GB and divide it by the total elapsed time for a true cache-less figure.

Code:
[root@localhost main]# time su -c "dd if=/dev/zero of=/main/ddfile1 bs=512k count=200000; sync;"
Last login: Wed Jun 19 13:30:15 EDT 2013 on pts/3
200000+0 records in
200000+0 records out
104857600000 bytes (105 GB) copied, 106.055 s, 989 MB/s

real	1m50.840s
user	0m0.046s
sys	0m34.593s
[root@localhost main]# time su -c "dd if=/dev/zero of=/main/ddfile1 bs=256k count=400000; sync;"
Last login: Wed Jun 19 13:44:16 EDT 2013 on pts/4
400000+0 records in
400000+0 records out
104857600000 bytes (105 GB) copied, 105.409 s, 995 MB/s

real	1m50.259s
user	0m0.085s
sys	0m33.684s
[root@localhost main]# time su -c "dd if=/dev/zero of=/main/ddfile1 bs=128k count=800000; sync;"
Last login: Wed Jun 19 13:44:30 EDT 2013 on pts/3
800000+0 records in
800000+0 records out
104857600000 bytes (105 GB) copied, 104.857 s, 1.0 GB/s

real	1m49.491s
user	0m0.105s
sys	0m33.333s
[root@localhost main]#
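
To make that concrete with the 512k run above: dd reports 989 MB/s, but the true cache-less figure is the 105 GB divided by the full wall-clock time including the sync. That's just arithmetic on the numbers already shown, e.g.:

Code:
# 104857600000 bytes / 110.840 s (the "real" time) ~= 946 MB/s sustained
echo "scale=1; 104857600000 / 110.840 / 1000000" | bc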


Didn't really see any reason to go lower on chunk size. I can do more if you'd like.

EDIT:

Note this is not VMed, tests were done on the base OS, Fedora 19, kernel 3.10rc6.
 
FWIW, that's faster than my hardware RAID 50 setup tested in Windows Server 2012 with ReFS. I'm using an Intel expander connected to an IBM M5015...

I don't know how to do it, but how would you test random reads/writes and different queue depths?

Thanks for running these tests.

As for VMs, I've given up on putting my file server in a VM, so doing something similar to what you are doing is what I'd like to do.
 
I don't know how to do it, but how would you test random reads/writes and different queue depths?

Not quite at different queue depths, but better than straight dd:

Code:
[root@localhost main]# bonnie++ -d /main/ -s 64G -u root
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.local 64G  1350  89 898407  33 550403  28  3763  89 1494025  32 339.6   6
Latency              8997us   12114us     145ms   23414us   52945us     109ms
Version  1.96       ------Sequential Create------ --------Random Create--------
localhost.localdoma -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4870   5 +++++ +++  7594   7  7048   7 +++++ +++ 12582  14
Latency             16880us     156us     324us      82us      15us      51us
1.96,1.96,localhost.localdomain,1,1371664221,64G,,1350,89,898407,33,550403,28,3763,89,1494025,32,339.6,6,16,,,,,4870,5,+++++,+++,7594,7,7048,7,+++++,+++,12582,14,8997us,12114us,145ms,23414us,52945us,109ms,16880us,156us,324us,82us,15us,51us
[root@localhost main]#
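
If you want proper random I/O at a chosen queue depth, fio would be the usual way to do it; a rough sketch (the test file path, size and run time below are placeholders, not something I've run on this box):

Code:
# 4k random read/write mix (70/30) at queue depth 32, O_DIRECT to keep the page cache out of it
fio --name=randrw-test --filename=/main/fio.test --size=16G \
    --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting

Repeat with different --iodepth values (1, 8, 32, 64) to see how the array behaves as the number of outstanding requests scales up.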

Happy to do more; either PM me or create a thread in Storage so we don't derail this one too much :)
 
What made you give up?

ESXi was giving me all sorts of fits with MSI-X and MSI interrupts, which I tried to fix with FreeBSD and then OpenIndiana to no avail, but meh, I'll give it another crack.

I think the ReFS on Windows Server 2012 is slowing me down; I'm doing more investigative work.
 
My modest 18TB ZFS Linux box:

  • Sharkoon Rebel 12 Value
  • MS-Tech Vertigo V-Go 350W
  • Asus P8H67-V
  • Intel Celeron G530 w/ stock cooler
  • 8GB RAM G.Skill DDR3 1600
  • Crucial M4 64GB - for Linux host
  • Samsung SpinPoint F1 DT (HD103UJ) - dedicated to Windows VM
  • IBM ServeRaid M1015 flashed to LSI IT mode
  • 6x WD Green 3TB (WD30EZRX) - for ZFS pool
  • 7x Sharkoon HDD Vibe Fixer 3
The system has been up and running for almost 2 years now. I started with an ESXi 5.0 + OpenIndiana VM + Windows VM + Linux VM setup (I had also tested VirtualBox under OI, with terrible results).
Recently I moved to Ubuntu Server 12.04.2 LTS with ZFS on Linux 0.6.1 (dropping ESXi and OI), with just one Windows VM under KVM. The ZFS pool is configured as RAIDZ2 with a net capacity of 12TB.
The system acts as a home server. The data is organized into 9 ZFS datasets. The important ones are backed up by sending incremental snapshots to another box, built from old parts with 4TB of net storage, that is only ever turned on for that purpose. Performance is more than enough for my needs: I routinely see speeds >100MB/s from untuned Win7 SMB clients and up to 120MB/s with MTU and TCP window size tuning. The system is quiet enough for the living room but not for the bedroom.
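
For anyone curious, that incremental snapshot backup is essentially a zfs send/receive pipe; a minimal sketch with made-up pool, dataset and snapshot names (not my actual layout):

Code:
# take a fresh snapshot, then ship only the delta since the previous one to the backup box
zfs snapshot tank/media@2013-08-01
zfs send -i tank/media@2013-07-01 tank/media@2013-08-01 | ssh backupbox zfs receive -F backup/media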
 
I have a modest setup as well: a single 12-bay Synology NAS.

12x 3TB WD Red drives
SHR-2 protection (survives 2 drive failures)
27TB usable space

i-g6GFS8k-M.jpg


i-3T2CqNP-M.jpg
 
Case: Norco 4224 (green shitty backplanes I have repaired)
PSU: Antec 650 Benji modded for external 3-pin fan to board for control and monitoring
Motherboard: GA-X38-DS5 with modified PCI-E x1 slots to hold VGA card
CPU: Q9550
RAM: DDR-II 800
GPU: nVidia GT430
Controller Cards:
  • 2x M1015's flashed with LSI IT firmware
  • Multiple 1GbE NIC's for multi-Gbit network flooding at LAN parties
  • USB-3.0 PCI-E card
Optical Drives: None
Hard Drives:
  • 8x 3TB ST3000DM001's Parity Storage Spaces pool
  • 8x 2TB ST2000DL003's Parity Storage Spaces pool
  • 4x 500GB ST500's in Intel Matrix RAID-10 (ICH9R)
  • 4x 500GB WD black Enterprise in Storage Spaces Double-Mirror
  • 1x 1TB ST3100034NS Enterprise (AHCI - OS disk)
  • 1x 150GB WD RaptorX for P2P torrent work
Battery Backup Units: N/A
Operating System: Windows 8 Pro with MCE

Pictures soon, although I am sure you have all seen a Norco before.

Roles are media server and MCE playback to the main TV/sound system in the next room. Write speeds to the large arrays exceed 75 MB/sec, and reads can flood 4Gbit while still writing, so who cares? At some LANs a few games are hosted off the system as well. The OS is patched for Terminal Server to allow multiple concurrent logins.
 
My setup doesn't really compare to most people in this thread (if only I had the living situation for a rack...), but I'm proud of it all the same :)

Home Server
38.75TB Physical / 24.75TB Logical


Case: Sharkoon Rebel 12
PSU: Corsair HX850
Motherboard: ASRock X79 Extreme6
CPU: Intel Core i7-3930K
RAM: 48GB DDR3-1600MHz
GPU: AMD Radeon 5400-something
Controller Cards: 2x IBM M1015
Hard Drives:
1x 256GB Samsung 830 SSD (System)
1x 1TB Hitachi Deskstar 7K1000.C HDS721010CLA332 (Scratch)
6x 3TB Hitachi Deskstar 5K3000 HDS5C3030ALA630 (RAID6-1)
6x 3TB Toshiba PH3300U-1I72 (Hitachi HDS723030BLE640) (RAID6-2)
1x 1.5TB Western Digital WD15EARX in External USB3.0 enclosure (Backups)
UPS: Some Cyberpower 1500VA
OS: Ubuntu 13.04

Each set of six 3TB drives is in a RAID6 array (md126 and md127), and those two arrays are striped together in a RAID0 array (md125) for the total of ~24TB. md125 is encrypted with LUKS, and LVM sits on top of LUKS. All of the LVM volumes are formatted ext4, although I'm contemplating moving to XFS. It's primarily a Plex server for a whole slew of devices, but it also hosts a bunch of random services as needed (LAMP stack, ownCloud, Crashplan destination for friends and other computers, etc.).
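
For reference, a stack like that goes together roughly as follows; this is a generic sketch with illustrative device names and sizes, not the exact commands I ran:

Code:
# two 6-drive RAID6 arrays, striped into one RAID0, then LUKS -> LVM -> ext4 on top
mdadm --create /dev/md126 --level=6 --raid-devices=6 /dev/sd[c-h]
mdadm --create /dev/md127 --level=6 --raid-devices=6 /dev/sd[i-n]
mdadm --create /dev/md125 --level=0 --raid-devices=2 /dev/md126 /dev/md127
cryptsetup luksFormat /dev/md125
cryptsetup luksOpen /dev/md125 storage_crypt
pvcreate /dev/mapper/storage_crypt
vgcreate storage /dev/mapper/storage_crypt
lvcreate -L 16T -n shared storage
mkfs.ext4 /dev/mapper/storage-shared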

Crashplan backs up the system drive (not the content) to the Crashplan cloud, the internal RAID array, the external drive, my desktop, and the offsite backup server.

Some random copy and pastes of the storage details:
Code:
cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md125 : active raid0 md127[0] md126[1]
      23441081856 blocks super 1.2 256k chunks
      
md126 : active raid6 sdk[2] sdn[5] sdm[4] sdi[0] sdj[1] sdl[3]
      11720541184 blocks super 1.2 level 6, 128k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md127 : active raid6 sdg[4] sdf[3] sdd[1] sde[2] sdh[5] sdc[0]
      11720541184 blocks super 1.2 level 6, 128k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
Code:
vgs storage
  VG      #PV #LV #SN Attr   VSize  VFree
  storage   1   4   1 wz--n- 21.83t 3.34t
Code:
lvs storage
  LV                VG      Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  backups           storage -wi-ao--   1.00t                                           
  shared            storage owi-aos-  16.00t                                           
  shared_2013-06-22 storage swi-a-s-   1.00t      shared  12.37                        
  temp              storage -wi-ao-- 500.00g
Code:
df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/system-root            50G   25G   22G  53% /
/dev/sda2                         2.0G  132M  1.7G   8% /boot
/dev/mapper/system-downloads      148G   25G  116G  18% /downloads
/dev/mapper/storage-backups      1008G  286G  672G  30% /backups
/dev/mapper/backup-cloud          689G  287G  371G  44% /backups/cloud
/dev/mapper/backup-system         493G   69G  400G  15% /backups/system
/dev/mapper/backup-downloads      148G   25G  116G  18% /backups/downloads
/dev/mapper/scratch-crashplan     788G  286G  462G  39% /scratch/crashplan
/dev/mapper/storage-shared         16T   14T  1.7T  90% /storage
/dev/mapper/storage-temp          493G  210G  258G  45% /storage/temp

Pictures:
LERyJavl.jpg

igrDDbwl.jpg




Backup Server (no pictures, unfortunately)
20TB Physical / 16TB Logical


Case: Antec NSK2480 (system... yeah, I know) and SansDigital TR8X+B (most of the drives)
PSU: Corsair HX430 (I think)
Motherboard: Asus Z77 something-or-other
CPU: Intel Core i5-3470S
RAM: 16GB DDR3-1600MHz
GPU: Onboard Intel
Controller Cards: LSI SAS3801E
Optical Drives: LG Slim DVD
Hard Drives:
1x 500GB Seagate Momentus XT ST95005620AS (System)
1x 1TB HGST HTS721010A9E630 (Scratch)
7x 2TB Western Digital WDC WD20EFRX (RAID6)
1x 2TB Samsung HD204UI (RAID6)
2x 2TB Seagate Barracuda ST32000542AS (RAID6)
UPS: APC 750VA
OS: Ubuntu 13.04

All of the 2TB drives are in one big RAID6 array, with encryption on top of the array, LVM on top of the encryption, and ext4 on top of that. It's primarily an offsite mirror for my home server, but it also serves as an offsite Crashplan destination for my machines. It's ever so slowly backing up my data to Crashplan's cloud, too, one gig at a time (it'll finish someday...). It also backs up to the home server. I use a 1TB external drive and some udev scripts on both it and my home server to keep the two in sync (using rsync batch files).
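
In case the rsync batch files are unfamiliar: rsync can record the changes from one sync as a batch file and replay them later against another copy, which is what makes the sneakernet approach work. A rough sketch, assuming the external drive carries a mirror to diff against (paths and batch names are made up):

Code:
# on the home server: update the external drive's copy and record the delta as a batch file
rsync -a --write-batch=/mnt/external/batch-2013-07 /storage/shared/ /mnt/external/mirror/
# at the offsite server: replay the recorded changes against its own copy
rsync -a --read-batch=/mnt/external/batch-2013-07 /storage/shared/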

Code:
cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md127 : active raid6 sdj[1] sde[11] sdf[10] sdg[4] sdi[5] sdh[3] sdl[13] sdk[12] sdc[8] sdd[7]
      15627066368 blocks super 1.2 level 6, 128k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>
Code:
lvs storage
  LV     VG      Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  shared storage -wi-ao-- 14.50t
Code:
df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/system-root       394G   82G  294G  22% /
/dev/sda2                     1.5G  142M  1.2G  11% /boot
/dev/sda1                     511M  2.2M  509M   1% /boot/efi
/dev/mapper/storage-shared     15T   13T  1.1T  93% /storage
 
While not as amazing as most of these, I'm quite happy with my computer as it stands. I also took it as a wee challenge to see just how much I could cram into the case without resorting to heavy modifications. Today I added a new 500 GB Samsung 840 SSD as a system drive, in addition to the 256 GB 830 I already had, and it puts the whole thing over 14 TB now.

Case: Corsair C70 with Cooler Master 4-in-3 bay
PSU: Corsair TX750W on a Cyberpower 1500VA UPS
Motherboard: ASUS Z77 Sabertooth
CPU: Intel Core i7 2600K
RAM: 16 GB DDR3-1600 Corsair Vengeance
GPU: Sapphire 7970
Drives:
256 GB Samsung 830 (Windows 7)
500 GB Samsung 840 (Windows 8)
2 x 1 TB WD Black
750 GB Seagate FreeAgent Pro
2 x Seagate 2 TB Barracuda
500 GB Hitachi (oldest drive and the first to be replaced, it's going on 7 years now)
1.5 TB Samsung
3 TB Seagate
1 TB Seagate
500 GB Seagate Momentus XT

I pretty much just add drives as needed. What I'd like to do, though, is have a separate server from this machine, but at the moment it's a fair sight cheaper to just add to this one.

drives-8-jul-2013.png

pc-8-jul-2013b.jpg
 
Update to my living room media server and torrent box. Using FlexRAID for pooling and parity: 9 drives for data, 3 for parity.

Server
Lian-Li PC-Q25A
Asus P8H77-I
Core i5 2400s
8GB Corsair XMS
LSI SAS HBA SAS3801EL-S
Corsair CX430 PSU
1 Intel 60GB SSD
1 Seagate 3TB (scratch disk)
4 Seagate 2TB

External Enclosure
Proavio Editbox 8MS
8 WD 2TB Green

1004759_10200241708350037_1894469084_n.jpg


1176246_10200433964516321_1829638995_n.jpg


1003869_10200434006437369_1068004286_n.jpg


1045130_10200241708710046_2045430440_n.jpg
 
YUbq4FR.png

Of course, there isn't actually a floppy drive ;)

Backed up using 'hillbilly RAID', i.e. a cupboard full of external disks. Having seen many a RAID array go bad after things like environmental failures (a bad PSU, a surge, etc.), I decided to take the approach of leaving my backups disconnected from any power unless they're being updated through external docks.

This approach is starting to show its age now, as the vast majority of external docks can't read disks larger than 2TB, or, if they can, will only read them if they partitioned the disks themselves (and will remove the partition header if they didn't), which has caused quite a few wasted days redoing 3TB backups at USB 2.0 speeds. Still, it's a backup of sorts, and better than getting a second NAS this large.

Live set is currently: (46TB pre-format, 42.8TB usable)
4x WD20EARS Mk. I: TV1-TV4
1x WD20EARS Mk. II: TV5
3x WD20EARX: TV6, Olympics 1, Overflow
3x WD30EZRX Mk. I: HD Films 1/2, Olympics 2
4x WD30EZRX Mk. II: TV7, Games, VGLP1/2
3x WD30EFRX: TV8, Misc, VGLP3

Backup set is currently:
2x HD753LJ: VGLP3C
4x WD10EACS Mk. I: TV3A/B, TV4A/B
1x WD10EACS Mk. II: TV1A
1x WD10EAVS: TV1B
5x WD10EADS: TV2A/B, TV5A/B, VGLP3A
1x WD15EADS: VGLP3B
4x WD20EARS Mk. I: (VGLP1B/VGLP2A), HDF1A, (HDF1B/HDF2A), HDF2B
1x WD20EARS Mk. II: VGLP1A
5x WD20EARX: TV6, VGLP2B, Olympics 1, Olympics 2, Overflow
1x WD30EZRX Mk. I: Misc
3x WD30EZRX Mk. II: TV7/8, Games

There's some older 250/500GB drives lying around but I tend not to use those any more.
 
Total Storage: 36.75TB

System 1: 36.75TB
Case: X-Case RM424 Pro
PSU: Seasonic X-Series 650w
MB: Supermicro X9SCM-iiF
CPU: Xeon E3-1230 V2
HSF: Noctua NH-U9B SE2 (2x Noctua NF-B9-PWM)
RAM: 32GB Kingston ValueRAM PC3-12800 ECC (4x 8GB KVR16E11/8)
GPU: Matrox G200 (Onboard)
HBA: 3x IBM M1015 (Flashed to LSI 9211-8i IT (P16))
NIC: 2x Intel 82574L (onboard), 1x Intel PRO/1000 PT Quad Port
BOOT: 1x 8GB Kingston DataTraveler SE9 (internal X9SCM USB port)
HDD: 1x WD Scorpio Black 2.5" 750GB (WD7500BPKT)
HDD: 12x WD Caviar Red 3TB (WD30EFRX),
UPS: None (To be purchased in near future)
OS: ESXi 5.1 U1
VMs: FreeBSD (FreeNAS-8.3.1 p2 x64)

New build to replace the 2x hardware RAID5 arrays (Adaptec 3805) in my PC, which were running 4x 500GB and 3x 2TB and getting very low on space.

8GB USB stick for the ESXi boot, 1x WD Scorpio 750GB 2.5" drive for the ESXi datastore, 11x 3TB WD Red in RAIDZ3 + 1x spare.

Storage is on a FreeNAS virtual machine with 16GB RAM and the 3x M1015 passed through via VT-d. Used for storing and serving various media, Blu-ray images, music, backups, personal docs, etc. 21TB usable space, with 12x spare drive bays and HBA capacity for future expansion.
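
For reference, an 11-disk RAIDZ3 pool with a hot spare is created along these lines (device names are placeholders; in practice FreeNAS builds the pool from its GUI):

Code:
# eleven disks in a single raidz3 vdev plus one hot spare
zpool create tank raidz3 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 spare da12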

camcontrol devlist

Photos:
http://i.imgur.com/5STtfqe.jpg
http://i.imgur.com/IiX0xVi.jpg
http://i.imgur.com/K3RCP3X.jpg
http://i.imgur.com/p3rI4dw.jpg
http://i.imgur.com/WfIp0d5.jpg
http://i.imgur.com/6UgyA08.jpg
http://i.imgur.com/DU2MffD.jpg
http://i.imgur.com/NI1tdu3.jpg
http://i.imgur.com/tm7rpQp.jpg


Future virtual machines planned include pfSense and 2-3 Windows servers for building and testing work-related configurations, etc.
 
Did you run into timeout issues with ESXi and FreeBSD, the fabled mpt wait issue?
 
Don't believe so. I only really encountered 2 issues during the build, though I was researching for weeks before I started, so I knew about these ahead of time:

1. With 3x M1015 passed through to the VM, the system would get stuck at boot with the following errors:

run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config mps_startup
run_interrupt_driven_hooks: still waiting after 120 seconds for xpt_config mps_startup

Found a solution to this on these very forums:
http://hardforum.com/showthread.php?p=1038483037

2. Running more than 1 CPU and 1 core in the VM causes IRQ storms. It seems this is not uncommon; some have managed to resolve it by disabling unneeded hardware on the VM, such as the floppy drive, serial and parallel ports, etc. Sadly this didn't work for me.

My solution so far is to run 1 CPU and 1 core, which for my use seems more than sufficient; I'm getting good upload, download, and scrub speeds.


At some point I need to boot a bare metal FreeNAS install and compare speeds to see if my performance is CPU bound at all. This should be as easy as exporting my FreeNAS config, pulling the ESXi USB stick out, putting a spare USB stick in, install FreeNAS and import the config and see what happens :)
 
Mine, currently running DSM 4.2. I'm planning to get ZFS under OpenIndiana onto it; the thing is, I haven't managed to get the USB stick set up correctly...

System 1
HP Proliant Microserver N54L
Patriot 2x8GB (16GB)
Seagate 3TB x4 running raid5

Code:
	NAME                                    STATE     READ WRITE CKSUM
	storage                                 ONLINE       0     0     0
	  raidz1-0                              ONLINE       0     0     0
	    scsi-SATA_ST3000DM001-1CH_Z1F2WN2F  ONLINE       0     0     0
	    scsi-SATA_ST3000DM001-1CH_Z1F2WM37  ONLINE       0     0     0
	    scsi-SATA_ST3000DM001-1CH_Z1F2WMRG  ONLINE       0     0     0
	    scsi-SATA_ST3000DM001-1CH_Z1F2WGR6  ONLINE       0     0     0

System 2
HP Proliant Microserver N36L
Patriot 2x8GB
Samsung 1.5TB x4 running raid5

Code:
	NAME                                                    STATE     READ WRITE CKSUM
	storage                                                 ONLINE       0     0     0
	  raidz1-0                                              ONLINE       0     0     0
	    disk/by-id/scsi-SATA_SAMSUNG_HD155UIS2KRJD2B401495  ONLINE       0     0     0
	    disk/by-id/scsi-SATA_SAMSUNG_HD154UIS1XWJ1LS814950  ONLINE       0     0     0
	    disk/by-id/scsi-SATA_SAMSUNG_HD154UIS1XWJ1LS814962  ONLINE       0     0     0
	    disk/by-id/scsi-SATA_SAMSUNG_HD154UIS1XWJ1LS814951  ONLINE       0     0     0

2013-07-28%2019.12.01.jpg
 
Where did you get that bracket for the controller card fan?

Having spent ages looking for a retail solution and deciding that the 2-3 I could find were either really rubbish or crazy expensive, I crafted my own from some aluminium (angle and strip) bought from B&Q, some nuts and bolts from eBay, and some basic hand tools :)

Total cost was probably about £8-10, and it leaves me with a load of the raw materials for future use.
 
2013-07-31-20.01.50.jpg


93TB raw, so close to the 100TB club, so close
 
Dude, those Xserves are so hot. I've been looking into finding one populated with 750s as a backup device.
 
Specs plz FLECOM :D

Apple xServe RAID 14x 750GB
Apple xServe RAID 14x 750GB
Apple xServe RAID 14x 250GB
Apple xServe G5 (1x 120GB SSD & 2x 2TB)
Dell R610 (5x 80GB 10k, 1x 120GB SSD)
Infortrend EonStor 16x 2TB
Infortrend EonStor 16x 1TB
Infortrend EonStor 16x 1TB

The top two xServe RAIDs are set up as 4x 7-drive RAID5 arrays, striped on the host side into one container across the 4 RAID5 arrays.

The bottom xServe RAID is 2x 7-drive RAID5 arrays, striped on the host side into one container across the 2 RAID5 arrays.

The xServe has an OCZ Vertex something for boot and 2x 2TB drives mirrored.

The R610 has 5x 80GB VelociRaptors in a RAID5 and a 120GB OCZ Vertex something.

The top EonStor disk array has 16x 2TB drives in a RAID6.

The middle EonStor disk array has 16x 1TB drives in a RAID5 + hot spare.

The bottom EonStor disk array has 16x 1TB drives in 2x 8-disk RAID5 arrays.

Everything is FC-attached to the R610 via a Cisco MDS 9020 4Gbps Fibre Channel switch (yes, even the xServe RAIDs are attached to the Dell, lol).
 