The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Thermaltake Armor case
3x StarTech 4-in-1 2.5" HDD bays
2x Cooler Master 4-in-3 3.5" HDD bays
2x Western Digital Black 2.5" 7,200RPM drives, mirrored for the operating system
Intel i3-4130T 35W CPU
4x 8GB G.Skill DDR3-1333 RAM
ASRock Z87 Extreme4 motherboard
2x IBM M1015 SAS cards flashed with IT firmware (flash sketch at the end of this post)
Intel PRO/1000 VT quad port Gb NIC
Corsair HX850 PSU

Operating System: Windows Server 2012 R2

Virtual Machine Pool
- 4x 512GB Toshiba SSDs
- 8x 600GB WD VelociRaptor 10,000RPM SATA drives
- Windows automatically tiers hot data into the SSD tier each night
- PrimoCache v0.9.2 uses 29GB of system RAM as a read cache for the VM pool

Shares Pool
- 5x 3TB Seagate 7,200RPM SATA drives

21.8TB raw capacity total
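
The M1015s above are cross-flashed to LSI IT firmware. For anyone wanting to do the same, this is roughly the procedure that gets posted around; the exact file names depend on which firmware package you grab and the SAS address comes off the sticker on the card, so treat this as a sketch rather than gospel:

Code:
REM from a DOS boot stick, wipe the IBM firmware first:
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
REM reboot, then flash the 9211-8i IT firmware (boot ROM optional):
sas2flsh -o -f 2118it.bin -b mptsas2.rom
REM restore the SAS address printed on the card's sticker:
sas2flsh -o -sasadd 500605bxxxxxxxxx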
 
Here's an update to my all-in-one build.

[build photos]
 
On the right side I'm using a SilverStone SST-CP06 4x SATA power adapter, and on the rest my PSU's 4x SATA cables, but I'm looking on moddiy.com for 5x SATA power cables that are compatible with my HX750 PSU.
I still need to add one more SSD for a ZIL and two more 3TB WD Reds so I can do raidz2 with six 3TB WD Reds; right now I have 5x 1TB and 4x 3TB in raidz1.

Would an OCZ Deneva 2 C SLC 60GB be OK for a ZIL?

 
SLC is good. Usually people say to mirror the ZIL, but you should be fine with SLC.
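
For what it's worth, here's roughly what the commands look like once the extra drives show up; device names are placeholders, and the existing raidz1 can't be converted to raidz2 in place, so the pool has to be rebuilt (or a second pool created and the data copied over):

Code:
# the planned 6x 3TB raidz2 pool (device names are placeholders):
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# add the 60GB SLC SSD as a log device:
zpool add tank log da6
# (a mirrored log would be: zpool add tank log mirror da6 da7)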
 
Not as cool as the majority of the setups in here, but it's slowly growing, and I'm realizing that once you start you can't stop (expanding your storage, that is).

[photo: CaseLabs Magnum MH10 file server build]


The system on the left is my File Server. This stays at home all the time & is now using my old setup I had a few years ago.

Current File Server Specs..

Intel i7 2700K (@ 4GHz)
ASUS Maximus V Extreme
Corsair 16GB (4x4) DDR3 1600 (8-8-8-24 1N)
45.12TB - 15x 3TB (WD Reds & Toshibas) & 120GB Corsair Force 3 SSD
LSI 9261-8i (+BBU) & Intel 24-Port Expander (2x RAID5 arrays)
Intel PRO/1000 PT Dual Port NIC + Connected to Netgear GS724Tv3
Corsair AX1200
Magnum MH10

Current Main Rig Specs..

Intel 3930K (@ 4GHz - Loop)
ASUS Rampage IV Extreme
G.Skill 32GB (4x8GB) DDR3 2400 Trident-X C10
2-Way SLI EVGA GTX 580 3GB
ASUS Essence STX
60.75TB - 18x 3TB / 1x 4TB / 1x 2TB / 3x 250GB SSDs (Toshiba/Seagate/Samsung/Intel) - Only 28TB Shared/Hashed on Network
IBM M1115 & Intel 24-Port Expander (Single Disk/Software Pooled)
Corsair AX1200
Magnum SMH10
Dell U3011

Spare/Backup Drive Storage Tub..

83.**TB - 22x 3TB / 4x 2TB / 6x 1TB / 8x less than 1TB (approx. 3TB all up). All dated from 2009 to now. WD/Hitachi/Toshiba/Seagate/Samsung

Eventually I would like to get 2x Norco 4224s. I want to make sure that I don't lose any data even if I have a spare drive for my array. I've lost a chunk before from 'trusting' a RAID controller, and I won't let it happen again. I'll also have to invest in a 10GbE NIC for both systems pretty soon, since I just had to transfer approx. 20-22TB over a 1GbE NIC and that took nearly 2 days :(
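
The 2-day figure checks out as a rough estimate, assuming gigabit manages around 110 MB/s effective:

Code:
# ~21TB at ~110 MB/s over 1GbE:
echo "$((21000000 / 110 / 3600)) hours"   # prints 53, i.e. a bit over two days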

When I first got this setup in the main rig I tried using the LSI 9261-8i controller in it, and the board absolutely hated it. The card's firmware screen wouldn't even come up, and it basically wiped the storage configuration I already had set on it. I've also experienced many other issues with different cards on this board. I absolutely hate this Rampage IV Extreme, but I've found the sweet spot of no issues and I plan on leaving it alone until ASUS releases a newer version of this board. The Maximus V Extreme, on the other hand, is flawless and has never had any of the issues the Rampage came up with...

What model are your hot-swap drive bays? I want to buy a few of them.
 
Just switched my ZFS pool to mirrors, less two drives, so I'm now rocking about 25TB of space.
 
Well, I think I can finally join this club; I'm only barely past 10TB, but I'm past it. :D

Code:
[root@isengard ~]# df -hl
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_isengard-lv_root
                       50G  4.3G   43G  10% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/sde1             485M   38M  422M   9% /boot
/dev/mapper/vg_isengard-lv_home
                       53G  180M   50G   1% /home
/dev/md0              5.4T  3.0T  2.2T  59% /volumes/raid1
/dev/md1              6.3T  2.7T  3.4T  45% /volumes/raid2
[root@isengard ~]#


raid1 is a RAID 10 with 4x 3TB drives
raid2 is a RAID 5 with 8x 1TB drives

So 11.7TB total of usable space. Just made it in this thread. :p
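
Roughly how a layout like that gets built with mdadm; the device names here are placeholders rather than the actual ones in this box:

Code:
# RAID 10 across the 4x 3TB drives:
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# RAID 5 across the 8x 1TB drives:
mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm
mkfs.ext4 /dev/md0 && mkfs.ext4 /dev/md1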

I recently moved the 1TB drives from my main server to this box, which is dedicated to storage. My goal is to have all storage centralized so any server I add only needs some small flash storage, whether it's an SSD or a USB stick. When I build a VM server I'm probably going to run everything off a USB stick. Maybe a single drive for stuff like /var and whatnot so I don't kill the stick.
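
As a minimal sketch of that idea, assuming the shares go out over NFS (the hostname and export path are from the df output above; the subnet is made up):

Code:
# on isengard: export the big array over NFS (subnet is hypothetical)
echo '/volumes/raid1  192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# on a VM host that boots off a USB stick:
mount -t nfs isengard:/volumes/raid1 /mnt/storage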


The server:

[photos]
Behind:

[photo]
It has an FC card connected to 4 IBM enclosures, though I don't have enough UPS capacity for those so I don't leave them on. Once in a while I turn them on to do some backups or just to play around with if I need a lot of drives for something. The drives show up individually on the server, which is nice.

And this is what keeps it going if the power goes out; it gives me about 4 hours:

[photos]
Full server room view:

[photo]
Not that impressive compared to some of the stuff posted here, but it's enough for now and I have plenty of room for expansion especially if I start replacing those 1TB drives with bigger ones.
 
this is in your house?
 
5x hot swap chassis and 15' of cabinets for 11TB? You forget a couple of 0s or something?

Or are you running RAID 10000000? :D

Not all filled, and most of the drives are 1TB. The 4 IBM enclosures don't count as they aren't live/prod; I get a couple more TB out of them (RAID 6). The top two are 400GB drives and the bottom two are 240GB drives. Old tech. :D
 
[photo]


My storage/ESXi host

i7-4770
32GB RAM
LSI 9201-16i

X-Case 20-bay (Norco clone)

Storage

ESXi datastores:

120GB Corsair Force GT
2x 300GB VelociRaptors
1x 500GB for VM backups and ISO store

Storage (LSI card in passthrough to a Server 2012 VM)

3x 4TB Seagates
2x 3TB Seagates
4x 2TB Samsungs
1x 1TB (can't remember the brand, it was an old drive)

I did have a lot of 2TB drives, but I sold up to bigger drives after I realised I didn't need the storage.

TOTAL Storage - 27TB (About 24TB usable)
 
Dude, please move that switch/router off the carpet and up under the server. Not worth the damage to the carpet or the fire risk.
 
Dude, please move that switch/router off the carpet and up under the server. Not worth the damage to the carpet or the fire risk.

And watch craigslist to get a real rack. You can pick up an openframe on there for under $100 pretty easily.
 
And watch craigslist to get a real rack. You can pick up an openframe on there for under $100 pretty easily.

It's not on... it's something I haven't got round to playing with yet.

I'm looking for a rack as we speak, as it's going into my cinema room. My switch is set up slightly differently...

I'm in the UK so we don't get such amazing deals as you guys do :)

[photo]
 
What model are your hot-swap drive bays? I want to buy a few of them.

Sorry for the late reply. The left ones are the first/original Norco SS-500s & the ones on the right are the third (latest) version of the Norco SS-500s. The second & third versions have better HDD trays & an improved mounting system for the drives, but the rear 80mm fans are noticeably louder compared to the first version. Temps are slightly warmer at full speed on the first version, but the sound difference is huge. Now I have all fans on a fan controller for that perfect balance ;)
 
I'm guessing you are either moving the cables or leaving a pocket between the door frame and the overhead plate/stud :)

Just an FYI, although I'm sure you've got it covered!

Yeah, the door is installed now; it leaves a bit over an inch on top. When I drywall I'll leave a small area open and put some fire blocking or w/e to seal it up. Really I should have had the wire go over the top, but I only realized after and said screw it. :D

are those deep cycle marine batteries?

Yep, 4 of them. Eventually I want to try to get 4 of them per shelf for better density, but I'd need to either get a custom drip tray built (very expensive, I checked) or build up some kind of tray/coating. These things are designed for boats and RVs and get bounced around, so the odds of a leak are slim, but better safe than sorry. Currently they're in a plastic container that has a box of baking soda in the back. If a battery were to leak, it would eat through the box and release the baking soda. At least, in theory. :D

I could go with AGM or gel, but those are like 3x the price and don't last as long, since you can't add water. They DO vent too, just not as much. When you see a bulged-up gel cell, it's because the vent port failed.
 
A trick we used to use on wet or damp (AGM) batteries was to put them on a piece of MDF. When they leak, the wood soaks up the liquid and swells, but it holds it. Cheap and easy to replace.
 
A trick we used to use on wet or damp (AGM) batteries was to put them on a piece of MDF. When they leak, the wood soaks up the liquid and swells, but it holds it. Cheap and easy to replace.

Hmmm, never thought of that. I have vapor barrier plastic on the shelves, so I could buy a "sacrificial" MDF board and place it on top. The odds of a leak are fairly slim, and if there does happen to be a leak it's probably only going to be one cell, so it's not like I'm looking at a bucket's worth of acid. Something to look into once I decide to add more batteries. I think Home Depot sells unfaced MDF, so it would work well for this. Ditching the containers (which only come in a limited number of sizes) would let me put more batteries per shelf.
 
I have been reading this forum (and especially this thread) for some time now and I just love some of the systems you guys build.

I just bought new H/W for my NAS and decided to share my setup with you:

Previous setup:

Intel i5-2400
ASUS P8H67-V
16GB 1333MHz RAM
1x IBM ServeRAID M1015 (IT-Mode cross-flashed)
1x 320GB WDC WD3200AAKS (System)
6x 2TB WDC WD20EADS
5x 2TB WDC WD20EFRX
1x 2TB WDC WD20EURS
Sharkoon Rebel12 Economy
2x Supermicro CSE-M35QTB

OS: Debian 7

Total size: 24TB
Usable size: 20TB (Linux MD RAID6)

New setup:

Intel Xeon E3-1230 v2
Supermicro X9SCM-F
2x 8GB Kingston ValueRAM 1600MHz ECC CL11
3x IBM ServeRAID M1015 (IT-Mode cross-flashed)
6x 2TB WDC WD20EADS
5x 2TB WDC WD20EFRX
1x 2TB WDC WD20EURS
2x 4TB HGST DeskStar NAS (for a new ZFS test pool)
Sandisk Cruzer Fit 8GB (ESXi Boot)
Inter-Tech 4U-4324L

I have built a custom rack for this new case (the lower part, with casters). The upper part is an earlier attempt and now only holds the switch.


I'll probably buy some more 4TB disks in the near future and migrate all storage to ZFS.
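
A rough sketch of what that could look like, with made-up device IDs for the two 4TB HGSTs and a made-up mount point for the old MD array:

Code:
# mirrored test pool on the two HGST DeskStar NAS drives (IDs are placeholders):
zpool create testpool mirror /dev/disk/by-id/ata-HGST_DeskstarNAS_AAAA /dev/disk/by-id/ata-HGST_DeskstarNAS_BBBB
zfs create testpool/storage
# later, pull the data over from the existing md RAID6 array:
rsync -aH --progress /mnt/md-raid6/ /testpool/storage/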

Some pictures of the build:

Custom cooling, quiet and cool at ~40°C: 4x 80mm, 1x 92mm and 1x 120mm fans strapped together as a cooling wall (the 2 disks are not there anymore, they were just temporary for data migration).
 
Initially this PC was used as a server hosted in a datacenter. The M14T enclosures were specced for easy access in case a drive needed to be replaced by datacenter personnel. The server has since been retired. My original WHS 2K3 box needed a refresh, so the following is the result. It’s not blazing fast, but for storage it absolutely does the job.

AMD Phenom 9500 Quad Core 2.2GHz
8GB DDR2

Windows Server 2012 Standard
Drivebender 1.9.5.0

[photo: cabling]


2x Supermicro M14T 4x 2.5" enclosures
PERC 5/i
4x 250GB 2.5" 7200RPM Seagate ST9250421AS – RAID 5, 696GB used for the OS
4x 500GB 2.5" 5400RPM Western Digital WD5000BEVT – RAID 5, 1.36TB available – DriveBender pool member

[photo: the two M14T enclosures]


Onboard controller
1x 500GB – Hitachi HDP725050GLA360
1x 1TB – Hitachi HDS721010CLA332
1x 1TB – Hitachi HDT721010SLA360
1x 2TB – Seagate ST32000542AS
2x 3TB – Seagate ST3000DM001

[photo: drives]


Available Controller
Silicon Image 3114 (4 ports, currently empty)

Total advertised: 13.5TB
Total available space: 10.91TB
 
Here are a few pictures of the drives I have been collecting. Still waiting on a few more parts before I can start building the systems these drives will go in. There will be four systems total.

Here are the boards, minus one that didn't make it into frame:
[photo]


The drives. Total capacity 58.5TB

[photo]
 
Hi!

I switched from 2 HP MicroServers to a homemade system.
  • HDD: RAID5: 4x 1.5TB Western Digital Caviar Green WD15EARS + 2x 1.5TB Western Digital Caviar Green WD15EADS
    RAID5: 5x 2TB Western Digital Caviar Green WD20EARS + 1x 2TB Western Digital Caviar Green WD20EARX
    RAID1: 2x 1TB Western Digital Scorpio Blue WD10JPVT + WD IcePack
  • Logical volumes (see the RDM sketch below): NoRAID 2x 120GB (VMware datastores)
    RAID5 Write Back 6.82TB (RDM on XPEnology VM)
    RAID5 Write Back 9.09TB (RDM on XPEnology VM)
    RAID1 1TB (RDM on XPEnology VM)
  • VMs: XPEnology DSM 4.3 (DLNA, CIFS, Cloud Station, IP cameras), Windows XP Pro (SoftEther VPN, rtl1090, FR24 feeder), Fully Automated Nagios (CentOS 5), Zentyal
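
The RDM pointer files for the XPEnology VM would typically be created on the ESXi host with vmkfstools, something like the following; the device identifier and datastore path are placeholders:

Code:
# physical-mode RDM pointer for one of the RAID5 volumes:
vmkfstools -z /vmfs/devices/disks/naa.600508b1001c0123456789abcdef01 /vmfs/volumes/datastore1/XPEnology/raid5-rdm.vmdk
# use -r instead of -z for a virtual-mode RDM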

Accessories:
  • SSD Mounting Case: ********** PCI25-2S
  • Additional network adapter (but not supported by vSphere): HP NC1020 PCI Gigabit Network Adapter

Some pictures:

[photos]
 
With the new 6TB Seagate drives, this thread should be renamed to "Post your 100TB+ systems" :D
 
I need to update my post... It doesn't include several upgrades (12x 3TB drives added to one machine, two 24x 1TB backup machines), and I just ordered 24x 4TB drives to upgrade my main machine.
 