The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Status
Not open for further replies.
Don't believe so; I only really encountered two issues during the build, though I was researching for weeks before I started so I knew about these ahead of time:

1. With 3x M1015 passed through to the VM the system would get stuck at boot with the following errors:

run_interrupt_driven_hooks: still waiting after 60 seconds for xpt_config mps_startup
run_interrupt_driven_hooks: still waiting after 120 seconds for xpt_config mps_startup

Found a solution to this on these very forums:
http://hardforum.com/showthread.php?p=1038483037

2. Running more than 1 CPU & 1 core in the VM causes IRQ storms. This seems to be not uncommon; some have managed to resolve it by disabling unneeded hardware on the VM such as the floppy drive, serial and parallel ports, etc. Sadly this didn't work for me.

My workaround so far is to run 1 CPU and 1 core, which for my use seems more than sufficient; I'm getting good upload, download and scrub speeds.


At some point I need to boot a bare metal FreeNAS install and compare speeds to see if my performance is CPU bound at all. This should be as easy as exporting my FreeNAS config, pulling the ESXi USB stick out, putting a spare USB stick in, installing FreeNAS, importing the config and seeing what happens :)

I suspected that and ended up passing my 1015s to a Solaris install and had no issues. I'm curious what your bare metal speeds are.
 
How much did you pay for the 10.5 xserves? I'm in the market, seeing them go for ~$800 on ebay.
 
How much did you pay for the 10.5 xserves? I'm in the market, seeing them go for ~$800 on ebay.

Not sure what you are asking; do you mean the xServe RAIDs?

I got both units with the 750GB drives for $500/ea shipped...
 
Where did you get your cables? Esp. molex connectors for the backplanes, looks very neat and tidy. The Norco 1 to 7 cable seems to be unavailable now and I need a replacement.

All of the power cables (except motherboard power) were hand made using the following:
18AWG stranded cable
Mod/Smart 90° 4-pin Molex plug (with relevant inline or end caps)
Mod/Smart 90° 16-pin SATA plug (with relevant inline or end caps)
6-pin male VGA power connector (to fit modular PSU)
8mm braided sleeve
12mm adhesive-lined 3:1 heatshrink

Also had to get a Molex crimping tool to do the 6-pin PSU connectors.

The SFF-8087 to SFF-8087 cables are 60cm 3ware cables.
 
48.25 TB

Supermicro SC846BE16-R920B Enclosure (4U)
Enclosure-supplied dual 920W PSUs
Intel S1200BTL Motherboard
Intel Xeon E3-1220 CPU
16GB Crucial ECC RAM
3ware/LSI 9750-4i RAID controller
QLogic QLE2464 quad-port 4Gbps FC adapter
2x Crucial M4-CT128M4SSD2 128GB SSDs
16x Seagate Constellation CS 3TB ("Enterprise Value") HDDs (ST3000NC002)
Debian GNU/Linux Wheezy (7.x)

The SSDs are installed in a rear-mounting 2x 2.5" hot-swap carrier and are set up with software RAID on the motherboard. These are used only for the server's OS and a couple of extra things that benefit from being on SSD. The 16x drives are arranged in 2x RAID-6 arrays for a total of 32.74 TiB of usable storage in the system. LVM keeps things organised for me on the disks.
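For anyone wanting to replicate this sort of layout, here's a rough sketch of the md mirror for the OS plus one RAID-6 data array with LVM on top. The device names and sizes below are placeholders, not my actual ones:

```shell
# OS mirror across the two SSDs (placeholder device names throughout)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# One 8-disk RAID-6 data array
mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[c-j]1

# LVM on top of the array keeps the space easy to carve up and resize later
pvcreate /dev/md1
vgcreate data /dev/md1
lvcreate -L 4T -n media data
mkfs.ext4 /dev/data/media
```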

Most of the storage is used for backups and media storage (DVD rips, CD rips, etc.), so most of the data on there is not backed up to other locations. The server runs BackupPC to back up the other machines that I run, for example. It hosts my media collection over NFS to my media centre machine running XBMC, which itself has no hard disk and boots off a small mSATA SSD.

I use fibre channel on the server in target mode. This allows me to boot my workstation in the living room over fibre, and also all the other servers I have in my back room for testing/playing with. This avoids the hassle of extra disks in the house - all my machines other than my server either have SSDs or FC or both: no hard disks in the house anywhere else.

The system has been built up over a matter of years, with various parts replaced as and when I could afford to. I just added the final 2x 3TB disks today and built my 2nd array, hence writing this up for the top 20! :)

The pictures below were taken a little time ago, sorry I don't have any newer ones at the moment.

Photo-10-02-2013-16-14-39-e1375458268936-764x1024.jpg

It's the bottom 4U box in the rack. The machine above is my old server that is no longer in use. The rack is home-made using 2x4 and rack strip, inspired by many people in this forum.

IMG_0694-764x1024.jpg

There it is in the rack before I fully populated it.

Photo-21-12-2012-16-34-39-1024x764.jpg

Photo-21-12-2012-16-33-47-1024x764.jpg

Photo-21-12-2012-16-33-24-1024x764.jpg

Photo-21-12-2012-16-33-02-1024x764.jpg

IMG_0673-1024x764.jpg
 
I use fibre channel on the server in target mode. This allows me to boot my workstation in the living room over fibre, and also all the other servers I have in my back room for testing/playing with. This avoids the hassle of extra disks in the house - all my machines other than my server either have SSDs or FC or both: no hard disks in the house anywhere else.

I apologize in advance if this is a completely retarded question, but do you boot some of your machines from the server - meaning that they don't even have an SSD with the OS? If yes, please explain how this is achieved.
 
Apple xServe RAID 14x 750GB
Apple xServe RAID 14x 750GB
Apple xServe RAID 14x 250GB
Apple xServe G5 (1x 120GB SSD & 2x 2TB)
Dell R610 (5x 80GB 10k, 1x 120GB SSD)
Infortrend EonStor 16x 2TB
Infortrend EonStor 16x 1TB
Infortrend EonStor 16x 1TB

the top two xServe RAIDs are set up as 4x 7-drive RAID5 arrays, striped on the host side into one container across the 4 RAID5 arrays

the bottom xServe RAID is 2x 7-drive RAID5 arrays, striped on the host side into one container across the 2 RAID5 arrays

the xServe has an OCZ Vertex something for boot and 2x 2TB drives mirrored

the R610 has 5x 80GB VelociRaptors in a RAID5 and a 120GB OCZ Vertex something

the top EonStor disk array has 16x 2TB drives in a RAID6

the middle EonStor disk array has 16x 1TB drives in a RAID5 + hot spare

the bottom EonStor disk array has 16x 1TB drives in 2x 8-disk RAID5 arrays

everything is FC attached via a Cisco MDS9020 4Gbps Fibre Channel switch to the R610 (yes, even the xServe RAIDs are attached to the Dell lol)



Is this in your basement or a hosting center?
 
I apologize in advance if this is a completely retarded question, but do you boot some of your machines from the server - meaning that they don't even have an SSD with the OS? If yes, please explain how this is achieved.

Yes, I do exactly that. The server is in target mode, so it presents itself as an FC disk to initiators. Initiators are 'normal' FC machines; I use mostly QLE2460 cards for those because you can get them cheap on eBay.

The software on the server is built into Linux kernels 3.5 and above, and can be configured with targetcli (your distribution should be able to provide this). targetcli can be used to configure iSCSI, Fibre Channel, SRP (InfiniBand), iSER (InfiniBand), SBP (FireWire, I wrote this bit myself), and a host of other SCSI target types. It's most excellent software.
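To give a flavour of it, exporting a block device over an FC port with targetcli goes something like this. The WWNs, volume group, and LUN name here are made up for illustration, not my real ones:

```shell
# Back the LUN with a block device (an LVM volume, say)
targetcli /backstores/block create name=ws_disk dev=/dev/vg0/workstation

# Create a target on the local QLogic port (identified by its WWN)
targetcli /qla2xxx create naa.21000024ff000001

# Map the backstore as a LUN on that target
targetcli /qla2xxx/naa.21000024ff000001/luns create /backstores/block/ws_disk

# Allow the initiator's WWN to see the LUN
targetcli /qla2xxx/naa.21000024ff000001/acls create naa.21000024ff000002
```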
 
Is this in your basement or a hosting center?

This is at the back of my house, in a sort of extension. It's a bit like a garage but there isn't a hope in hell of fitting a car in it (it's about 6m above road level for one).

Edit: Bah. Wrong post. Thought you were asking me but I've got this Friday feeling.
 
Bootc I like that you wrote some of the code. Nice setup too
 
Yes, I do exactly that. The server is in target mode, so it presents itself as an FC disk to initiators. Initiators are 'normal' FC machines; I use mostly QLE2460 cards for those because you can get them cheap on eBay.

The software on the server is built into Linux kernels 3.5 and above, and can be configured with targetcli (your distribution should be able to provide this). targetcli can be used to configure iSCSI, Fibre Channel, SRP (InfiniBand), iSER (InfiniBand), SBP (FireWire, I wrote this bit myself), and a host of other SCSI target types. It's most excellent software.

Very cool and inspirational indeed. I have replaced the HDDs in all computers in the household with small SSDs, meaning that all data except operating systems and applications is stored on our NAS. A setup like yours, however, would be much smarter. The only problem is that I have no way of fitting a PCIe FC card into my mini-ITX HTPC (the slot is occupied by a GFX card) or my mini-ITX PC (the slot is occupied by my Xonar sound card). I suppose that you have micro-ATX or ATX motherboards in the other computers in your household.
 
For things that don't need lots of bandwidth, just do iSCSI.

For my HTPC front-end machines, I have all of mine diskless with iPXE/iSCSI boot from the host.
For things that need more bandwidth, sure, it makes sense to use FC or 10Gbit networking.
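The iPXE script for a diskless iSCSI boot is only a few lines; something like this (the server IP and IQNs are placeholders):

```shell
#!ipxe
# Get an address, then boot straight off the iSCSI target
dhcp
set initiator-iqn iqn.2013-01.local.lan:htpc1
sanboot iscsi:192.168.1.10::::iqn.2013-01.local.lan:storage.htpc1
```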
 
So, your basement, then, got it.

LOL no it's actually in a colo... I'd post pics of the rest of the facility but I don't think they would like that very much :)

I have a bunch of clients I've referred to them so they give me a pretty decent rate... using the bottom arrays to do off site backups for clients, top arrays have my media backups :D
 
Fractal Design XL R2
Asus F2A85-M / CSM
AMD A8 6500K (65W)
8 GB 1600 Kingston DDR3
Corsair AX650 PSU
1 x WD 500GB RE4
5 x Seagate 3TB Barracuda
Coolermaster 4 in 3 device bay adapter
Windows Home Server 2011
StableBit Drive Pool


tkjr.jpg

kfba.jpg
 
This is a really basic machine I set up for archiving backups at work. It is our old domain controller with some extra disks stuffed in.

2x 2TB ST3000 drives in ZFS RAID1 for current ShadowProtect images
3x 3TB ST3000 drives in ZFS RAIDZ for continuous incrementals via ShadowProtect

The machine has dual E5405s with 8GB of ECC RAM running FreeNAS 9.1.

I have the machine physically segregated in a separate data closet from the rest of the servers, and of course have a good off site backup plan as well.

20130814_152257_zps85eac3d5.jpg


20130815_122158_zps378aff80.jpg
 
This is a really basic machine I set up for archiving backups at work. It is our old domain controller with some extra disks stuffed in.

2x 2TB ST3000 drives in ZFS RAID1 for current ShadowProtect images
3x 3TB ST3000 drives in ZFS RAID6 for continuous incrementals via ShadowProtect
...

Typo? Minimum is 4 drives for a RAID-6/Z2, no?
 
Cool. Out of interest, does the chipset heatsink get really hot? The one on my HP server motherboard does :( Not sure if it's common for that generation?
 
Massive update to a previous post :D

Total Advertised: 50TB
Total Available: 40TB

Supermicro SC743TQ-865-SQ with CSE-M35T-1:
CPU: Intel Xeon E3-1230 v2
Motherboard: Supermicro MBD-X9SCM-F-O
Memory: 4 x 8GB Kingston ECC Unbuffered DDR3 1600
Power supply: Supermicro 865W
SAS HBA: IBM ServeRAID M1015
SAS Expander: Intel RES2SV240
SCSI HBA: LSI Logic LSI20320IE
2 x SSD: Samsung 840 pro 256GB and Crucial M4 128GB
OS: Gentoo Linux running software RAID -> dm-crypt -> LVM -> ext4.
(Eventually I'm going to use bcache between the RAID and the encryption).
26TB RAID 6:
7 x 2TB WD Green
4 x 2TB Hitachi 5K3000
2 x 2TB Samsung HD204UI​

SC847E1-R1400LPB (JBOD):
Power supply: Supermicro 1400W redundant controlled with CSE-PTJBOD-CB1
24TB RAID 6:
5 x 3TB WD Green
3 x 3TB Seagate Barracuda​

iStarUSA D-300:
Tape Drive: HP StorageWorks Ultrium 960 LTO 3
Power supply: iStarUSA TC-2U 500W Power controlled with CSE-PTJBOD-CB1​
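The software RAID -> dm-crypt -> LVM -> ext4 stack on the main box gets assembled roughly like this. The disk names and counts below are placeholders, just to show the layering:

```shell
# RAID 6 across the data disks
mdadm --create /dev/md0 --level=6 --raid-devices=13 /dev/sd[b-n]

# Encrypt the whole array, then open the decrypted mapping
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptarray

# LVM on top of the decrypted device, ext4 on a logical volume
pvcreate /dev/mapper/cryptarray
vgcreate storage /dev/mapper/cryptarray
lvcreate -l 100%FREE -n data storage
mkfs.ext4 /dev/storage/data
```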

The rack is an open-frame 31.5"-deep 20U rack from eBay. I believe the manufacturer is a Taiwanese company called CLM. The casters are Steelex D2614 3-inch 150-pound threaded swivel double-lock polyurethane plate casters that I found on Amazon, along with M12 nuts and lock washers from Home Depot.

Album for more photos and more descriptions about rack components. Also, a look at my PWM fan controller that makes it all silent :cool:

FKuPjXt.jpg


Edit: Added info about the rack and casters as I was getting lots of questions about them.
 
Cool. Out of interest, does the chipset heatsink get really hot? The one on my HP server motherboard does :( Not sure if it's common for that generation?
I just checked and yes it's definitely pretty warm. Not nForce4 warm but warmer than most for sure.
 
Purpose: Small, elegant, quiet, low heat/power consumption. Media storage, photo storage.

Components:

Motherboard/CPU: SuperMicro X7SPE-HF-D525-O
- integrated Intel ATOM CPU. Heatsink is passive
- IPMI v2.0
- Gigabit Intel NIC
- Internal full sized USB port for thumbdrive
- 6 SATA ports
Case: Fractal Node 304
- 6 3.5" HDD bays
- Attractive front, subtle power and HDD LEDs
- Two expansion slots, required for the 'over sized' mITX SuperMicro board
- Quiet case fans on a 3-position controller. The controller requires one 4-pin Molex power cable. My SuperMicro motherboard has a 4-pin Molex on board that is able to power the fans on low. Medium is hit or miss, and high causes the motherboard to beep angrily
HDDS:
- 4TB parity
- 5x2TB storage
Cache: Mercury AccelsiorM mSATA PCIe Controller w/ 64GB Plextor M5M mSATA SSD
- I tried two cheaper PCIe cards from Newegg, a SYBA and a Koutech. Both were based on the same Marvell controller with 'HyperDuo' SSD caching technology. I couldn't get either card to work in my unRAID server or my general Windows desktop (the computers would not POST). I finally found this Accelsior card from Other World Computing, based on the ASMedia 106x controller.
PSU: SeaSonic M12II 620 Bronze 620W
- Modular cables
- Two 6-pin power cables are permanently attached and not necessary for my build
- Only two SATA power cables are needed besides the permanently connected cables
RAM: 4GB DDR3-1333
USB Drive: Lexar FireFly 2GB

unRAID: Plus license. Using all available disks, cache, and flash. Motherboard is completely full on SATA ports and case is completely full of 3.5" drives. Perfect.

Additional information:
The motherboard and initial hard drives started off in a much larger rackmount case that could accommodate 14 HDDs. I had plans of buying the SuperMicro 8-port PCIe card, but I found that after a few years I still had not filled up my initial 1 and 2TB hard drives. I moved out of my house in the country into a small apartment in the city, where the rackmount/server case was taking up too much space. I then bought a 4TB parity drive and the Fractal case. The way I look at it, HDDs will be increasing in capacity faster than I can fill them up!

This is a very low performance server, but so far I have had no troubles serving media to my WMC and XBMC HTPCs. I only recently upgraded from unRAID 4.7 to the 5.0 RCs and everything is running smoothly. Transferring files to the server used to run around 20MB/sec, but with the new cache drive it is up to 50MB/sec. I think the SSD should be running faster, so I am looking into that.

The future:
The next step is to get a UPS/battery backup. Once unRAID 5.0 final is released I will upgrade and begin installing plugins. Right now I am looking at SimpleFeatures, unMENU, and torrents. I have a powerful gaming/photo editing computer that runs torrents 24/7. Now that it is summertime, it is making my apartment noticeably warmer. If possible I would like my low power server to take over all of the torrent duties.

Photos!

i-H4mwdKq-XL.jpg


i-TNpfM4h-XL.jpg


i-RMfBf4f-XL.jpg


i-c8tgQGS-XL.jpg


i-g5KWbSj-XL.jpg


i-CVsqp7m-XL.jpg


i-wrJBdj6-XL.png
 
Amount of total storage: 20.9TB

Case: Norco RPC-230; Rackable System SE3016
PSU: Antec EA380D
Motherboard: Supermicro X9SCM-F
CPU: Xeon E3-1230 v2
RAM: Kingston 16GB ECC UDIMM
GPU: Onboard BMC
Controller Cards: IBM M5014 (LSI 9260-8i fw)
Battery Backup Units: LSI iBBU08
Optical Drives: None
Hard Drives: 2x WD10EAVS, 2x WD10EADS, 2x WD20EARS, 5x ST3000DM001, Kingston HyperX 128GB SSD (boot drive)
RAID: 4x 1TB RAID10, 2x 2TB RAID1, 5x 3TB RAID5 w/HSP
Operating System: Win7 Pro

Screenshot.png

fr_597_size880.jpg

fr_598_size880.jpg
 
Any issues with the ST3000DM's on that RAID card?

7 ST3000... hard drives on an HP P410 RAID card, running since December 2012...
Those drives run smoothly without hiccups. BTW, I flashed those drives with the latest firmware at that time, before moving them to the RAID card.
 
7 ST3000... hard drives on an HP P410 RAID card, running since December 2012...
Those drives run smoothly without hiccups. BTW, I flashed those drives with the latest firmware at that time, before moving them to the RAID card.

Cheers for that, fellas. I would have expected less love from an older card like the P4XX HPs.

Getting itchy to look at H/W RAID for my server that will be converted to DAS (the host server will be outboard). Will look at putting my drives in the rear of the Norco 4224.
 
LOL no it's actually in a colo... I'd post pics of the rest of the facility but I don't think they would like that very much :)

I have a bunch of clients I've referred to them so they give me a pretty decent rate... using the bottom arrays to do off site backups for clients, top arrays have my media backups :D

When I got a tour of the colo facility we use for work I asked the account manager if I could take pictures. He didn't care. You might be able to get away with more than you think.

Then again, he did name several other clients, so they may not be big on security. No big deal for us since we have asset stickers on our gear.
 
16 bays.
6x 4TB RaidZ2 + spare + a 256GB Samsung 840 Pro for cache.
Not sure what I'm going to do with the other 8 bays. I made a two-way mirror from 4 old drives and VMs seem to run fine off that pool, but I should put in some SSDs instead. I have ZFS sync disabled on the little VM pool.

Booting ESXi from the internal USB port. OmniOS is on motherboard SATA. 2x M1015s passed through to OmniOS.

32GB RAM, but only 16GB given to OmniOS so far.

E3-1230 v2 or something like that
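For reference, the pool layout above comes down to something like this in OmniOS (the disk names are placeholders):

```shell
# 6-disk raidz2 with a hot spare and an SSD cache (L2ARC) device
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    spare c1t6d0 cache c2t0d0

# Small pool from the old drives for VMs, with sync disabled
zpool create vmpool mirror c3t0d0 c3t1d0
zfs set sync=disabled vmpool
```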





 
Cheers for that, fellas. I would have expected less love from an older card like the P4XX HPs.

Getting itchy to look at H/W RAID for my server that will be converted to DAS (the host server will be outboard). Will look at putting my drives in the rear of the Norco 4224.

Actually, love and hate combined together :D with the HP P41X :p
 
0af11476_IMG_20130915_001916.jpeg


File Server;

Case: Norco 4020
PSU: OCZ ZT 750
Motherboard: Sapphire Pure Black P67 Hydra
CPU: Core i3 2120
RAM: 8GB Patriot Black Mamba
GPU: Some XFX card, I think
Controller Cards: LSI 9260-4i, Intel RES2SV240, Voltaire HCA 410-Ex InfiniBand card
Hard Drives: 12x Seagate ST2000DM001, RAID 6
Boot Drive: Corsair Force Series GT 120GB SSD
Operating System: Windows 7 Ultimate

Home Server;

Case: Norco RPC-450
PSU: Coolermaster 625 Watt...
Motherboard: Gigabyte GA-Z68XP-UD5
CPU: Core i7 2600K
RAM: 8GB Mushkin Frostbyte
GPU: Can't remember.
Controller Cards: LSI 8888ELP, Voltaire HCA 410-Ex InfiniBand card
Hard Drives: 8x Toshiba HDKPC09 or DT01ACA200 2TB drives, RAID 5
Operating System: Windows Home Server 2011

For a combined 40TB of advertised space
 
0af11476_IMG_20130915_001916.jpeg


File Server;

Case: Norco 4020
PSU: OCZ ZT 750
Motherboard: Sapphire Pure Black P67 Hydra
CPU: Core i3 2120
RAM: 8GB Patriot Black Mamba
GPU: Some XFX card, I think
Controller Cards: LSI 9260-4i, Intel RES2SV240, Voltaire HCA 410-Ex InfiniBand card
Hard Drives: 12x Seagate ST2000DM001, RAID 6
Boot Drive: Corsair Force Series GT 120GB SSD
Operating System: Windows 7 Ultimate

Home Server;

Case: Norco RPC-450
PSU: Coolermaster 625 Watt...
Motherboard: Gigabyte GA-Z68XP-UD5
CPU: Core i7 2600K
RAM: 8GB Mushkin Frostbyte
GPU: Can't remember.
Controller Cards: LSI 8888ELP, Voltaire HCA 410-Ex InfiniBand card
Hard Drives: 8x Toshiba HDKPC09 or DT01ACA200 2TB drives, RAID 5
Operating System: Windows Home Server 2011

For a combined 40TB of advertised space





So what are all the other boxes doing?

The 1U below your home server, I have an identical one. I'm sure they can come with different hardware, but mine is running pfSense. It works quite well, although it's probably overpowered for what I use it for.
 
So what are all the other boxes doing?

The 1U below your home server, I have an identical one. I'm sure they can come with different hardware, but mine is running pfSense. It works quite well, although it's probably overpowered for what I use it for.

The top 2U is my pfSense box: MSI G31T-P21, Intel E2200 dual core

The 1U has a DVR card, and when I buy some cameras it will record surveillance footage for my house: ASRock N68-S, Athlon 240

And the second 2U is for torrents, VMware, OSX and any programs I want to try: Big Bang XPower and a Core i7 950

Oh, and there is a 4U case behind all that on a box that houses my PXE server... gonna move it to a 2U someday

Also need a taller rack...

The 15U is nice but it's too short.
 
Here's my Lian Li D8000 black case, which took the place of my old Norco 4020 (terrible airflow, but a beast of a case, solid too; I just wanted something more manageable).

zfs.jpg

Inside before the Asus board... but you get the idea. Also, the panel that separates the chambers with the two 140mm fans is not shown, and it's got the old heatsinks, but the 1Us are boss.
IMAG0105.jpg

IMAG0104.jpg


Raw space: 60TB (20x 3TB drives: 19 ST3000DM001s and one Hitachi 3TB)
Usable space: 40TB (drives set up in striped raidz, akin to RAID 50)
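"Striped raidz" just means multiple raidz vdevs in one pool; ZFS stripes writes across the vdevs automatically, much like RAID 50. One possible arrangement for 20 disks (placeholder device names, not necessarily my exact vdev split):

```shell
# Four 5-disk raidz vdevs in a single pool; ZFS stripes across all four
zpool create tank \
    raidz da0 da1 da2 da3 da4 \
    raidz da5 da6 da7 da8 da9 \
    raidz da10 da11 da12 da13 da14 \
    raidz da15 da16 da17 da18 da19
```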

Other hardware:

Asus Z8PE-18
3 IBM 1015s in IT mode
72GB DDR3 1333 ECC
2x Intel L5639 (2.13 Ghz, 12MB, 6-cores)
Supermicro 500W psu - main components
Corsair CX650W psu - drives, fans
FreeBSD 9.1-P7
ZFS

Cooling:

Cooljag all-copper 1U LGA1366 heatsink/fans
6x 120mm Cougar black fans for the drives
2x 140mm fans blowing air onto the 1015s and RAM
1x 120mm Cougar exhaust fan

I'm currently performing baseline testing on the drives to find the sweet spot. Then I'll move this to ESXi and virtualize Windows Server 2012 and SQL Server 2012.
 
Here's my Lian Li PC9000 black case, which took the place of my old Norco 4020 (terrible airflow, but a beast of a case, solid too; I just wanted something more manageable).

zfs.jpg


Raw space: 60TB (20x 3TB drives: 19 ST3000DM001s and one Hitachi 3TB)
Usable space: 40TB (drives set up in striped raidz, akin to RAID 50)

Other hardware:

Asus Z8PE-18
3 IBM 1015s in IT mode
72GB DDR3 1333 ECC
2x Intel L5639 (2.13 Ghz, 12MB, 6-cores)
Supermicro 500W psu - main components
Corsair CX650W psu - drives, fans
FreeBSD 9.1-P7
ZFS

Cooling:

Cooljag all-copper 1U LGA1366 heatsink/fans
6x 120mm Cougar black fans for the drives
2x 140mm fans blowing air onto the 1015s and RAM
1x 120mm Cougar exhaust fan

I'm currently performing baseline testing on the drives to find the sweet spot. Then I'll move this to ESXi and virtualize Windows Server 2012 and SQL 2012.

I loooove LL's! :D
wish I had an excuse (plus the $$$) to buy that case! :p
pics of the inside, pleeeeease?
btw, with all that space, why the 1U heatsinks? you hate your CPUs? :confused: :p :D
 