[H]ard Forum Storage Showoff Thread

EnderW

[H]F Junkie
Joined
Sep 25, 2003
Messages
11,009
Welcome to the new, simplified [H]ard Forum Storage Showoff Thread

Anyone can post here regardless of system specs, but only standout systems will be shown in the "featured" systems section. For example:
  • 50TB+
  • 10+ Drives
  • Unique setups
  • Clean systems

To be considered for the featured section, you must have:
  • Pictures of your actual hardware (not a screenshot of disk management)
  • Full system specs
  • Photos hosted on http://imgur.com/, unless you have your own hosting and are an established forum member


Your post should include the following information (make sure the storage amount is in bold text at the front of your post):
Amount of total storage

Case
PSU
Motherboard
CPU
RAM
Controller Cards (if any)
Hard Drives (include full model number)
Operating System
(A short paragraph here describing what you use your storage for and how you handle backups and organizing)


FAQ
Is the "advertised" space before or after raid? Suppose I have 10 1TB drives in raid6 for 8TB of "advertised" space after raid, does this count? I think the intended meaning in the thread is "sum advertised space of drives in your system" (e.g., 10 * 1TB => 10TB), but it could mean "compensate for advertising overhead on the amount of usable space you have" (e.g., 7.6TB because drives are smaller than 1TB each => 8TB).
The sum of the manufacturer stated capacity for all your drives. Ignore RAID, formatting, etc.
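The counting rule can be sketched as a quick shell calculation. The 10x 1TB RAID6 numbers below are the hypothetical from the question above, not anyone's actual system:

```shell
#!/bin/sh
# Thread rule: post the sum of manufacturer-advertised capacities, ignoring RAID.
drives=10      # hypothetical: ten drives...
size_tb=1      # ...advertised at 1 TB each
advertised=$((drives * size_tb))
echo "post this: ${advertised} TB"

# For contrast only: RAID6 spends two drives' worth of space on parity,
# so the usable space (still in advertised-size TB) would be (10 - 2) x 1 TB.
raid6=$(( (drives - 2) * size_tb ))
echo "raid6 usable (advertised sizes): ${raid6} TB"
```

So for the thread you'd post 10 TB, even though the RAID6 volume only offers 8 TB of advertised-size space.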

Clarification:
At home or at work?
And paid for personally or by work?
You can post anything, but specify if it's work equipment.

Old threads
http://www.hardforum.com/showthread.php?t=1146317
http://www.hardforum.com/showthread.php?t=1393939
 
Joined
Sep 14, 2008
Messages
1,622
Total multiple system storage 458 TB
Total single/internal system storage 108 TB

Picture is a bit old:



I will be taking some new pictures when I get a chance. Some of the storage is off-site (the myth machine at my dad's house, and the colo box)...

Local machines, in order of appearance in the picture:

Router box/zeroshell (labeled 'zeroshell' top machine in rack - 1u):

Total Storage 3 TB
Code:
Supermicro 1u 4 hot-swap
Supermicro X9SCL with i3-2120T (2.6 Ghz)
4GB RAM
2x 1.5TB Seagate disks (no RAID)

df output:
Code:
admin@zeroshell: 06:27 AM :~# df -H
Filesystem             Size   Used  Avail Use% Mounted on
rootfs                  32G    25G   7.0G  79% /
/dev/root               32G    25G   7.0G  79% /
shm                    2.1G      0   2.1G   0% /dev/shm
/dev/sdd1              1.6T   745G   756G  50% /drive2
/dev/sda2              1.5T   519G   950G  36% /data

Misc usage server (labeled 'kaizoku' in rack 2nd from top - 1u):

Total Storage: 3TB
Code:
Supermicro CSE-811T-260B Case (260watt/1U)
Another ASUS server board I don't know the model of
8GB DDR2-800 Memory
Core 2 duo E6600 2.4 Ghz
2x 1.5TB Seagate disks (no RAID)

Second backup box (labeled 'chikan' in 3rd from top -2u)

Total Storage 12 TB
Code:
Supermicro CSE-811T-260B Case (260watt/1U)
Another ASUS server board I don't know the model of
8GB DDR2-800 Memory
Core 2 duo E6600 2.4 Ghz
2x 1.5TB Seagate disks (no RAID)

Backup box (4th from top -4u):

Total Storage: 48 TB
Code:
Supermicro SC846 Case (6gbps built in expander)
Supermicro X9DR3-LN4F+
1x Xeon E5-2603 v2 (Ivy Bridge, 1.8 GHz)
48GB DDR3-1333 Ram
24x2TB Hitachi Deskstar 7K2000's
ARC-1880i

Main rig [dekabutsu] (5th from top -4u):

Total Storage: 96 TB
Code:
Supermicro SC846 Case
Supermicro X9DR3-LN4F+
Dual Intel Xeon E5-4650L (8 core, 2.6 GHz, 3.1 GHz Turbo)
EVGA Geforce gtx 980
64GB DDR3 PC-1333 ECC Memory
ARC-1880ix-24 raid controller
24x4TB Hitachi SATA  5K4000 coolspin (raid6)
ARC-1880x raid controller (hooked up to nothing ATM)
Current df output:
Code:
root@dekabutsu: 06:29 AM :~# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sde1              500G   219G   281G  44% /
udev                    34G   300k    34G   1% /dev
none                   1.1M   300k   750k  29% /lib/rcscripts/init.d
/dev/sda1              129G    99G    31G  77% /winxp
/dev/sdd1               88T    82T   6.4T  93% /data
/dev/sdf1               96G    91G   5.9G  94% /ssd

External DAS #1 (6th from top -3u):

Total Storage: 49TB
Code:
Supermicro SC933 Chassis
HP SAS expander (no cpu/motherboard/ram)
15x3TB Hitachi 5K3000 (coolspin) disks
1x4TB Hitachi 5K4000 (coolspin) disks.
Hooked up to pi calculation box.

External DAS #2 (7th from top -3u):

Total Storage: 49TB
Code:
Supermicro SC933 Chassis
HP SAS expander (no cpu/motherboard/ram)
15x3TB Hitachi 5K3000 (coolspin) disks
1x4TB Hitachi 5K4000 (coolspin) disks.
Hooked up to pi calculation box.

Windows box (8th from top -2u):

Total Storage: 8TB:
Code:
Old core 2 quad based Xeon system
32GB DDR2 FB-DIMM
Geforce gtx 760
ARC-1222
8x1TB Seagate SATA.

Pi Calculation box (9th from top/ bottom most -4u):

Total Storage: 108 TB
Code:
Supermicro SC846 Case (HP SAS expander)
Supermicro X9DR3-LN4F+
2x Xeon E5-2660 v2 (Ivy Bridge, 10 core, 2.2 GHz, 3 GHz Turbo)
192GB DDR3-1333 Ram
27x4TB Hitachi Deskstar 5k4000's
(only 24 are hot-swap)
3x IBM M1015 raid controllers
Current DF output:
Code:
root@pi: 07:30 AM :~# df -H
Filesystem               Size  Used Avail Use% Mounted on
tmpfs                    193G  120M  193G   1% /
/dev/md0                  13T  812G   12T   7% /a
/dev/md1                  13T  812G   12T   7% /b
/dev/md2                  13T  812G   12T   7% /c
/dev/md3                  13T  812G   12T   7% /d
/dev/sdj                  12T  812G   12T   7% /e
/dev/sdk                  12T  812G   12T   7% /f
/dev/sdl                  12T  812G   12T   7% /g
/dev/sdm                  12T  812G   12T   7% /h
/dev/sdn                  12T  812G   12T   7% /i
/dev/sdo                  12T  812G   12T   7% /j
/dev/sdp                  12T  812G   12T   7% /k
/dev/sdq                  12T  812G   12T   7% /l
/dev/sdr                  12T  812G   12T   7% /m
/dev/sds                  12T  812G   12T   7% /n
/dev/sdt                  12T  812G   12T   7% /o
/dev/sdu                  12T  812G   12T   7% /p
/dev/sdv                  12T  812G   12T   7% /q


Remote Machines:

Myth box

Total storage: 52TB
Code:
Norco RPC-4020 Case
MSI P55A-G55 motherboard
Xeon X3440 (2.53 Ghz, 4 cores) CPU
8GB DDR3 Memory
2x PCI OTA HDTV capture cards
1x PCI DVB-S capture card
2x USB 8PSK DVB-S capture card
8x2TB Hitachi 7K2000
12x3TB Hitachi 5K3000 (coolspin)
Areca ARC-1280-ML Raid controller.
Current DF output:
Code:
myth ~ # df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sda2               78G    50G    28G  65% /
udev                    11M    78k    11M   1% /dev
none                   1.1M   332k   717k  32% /lib/rcscripts/init.d
/dev/sda1              197M    50M   137M  27% /boot
/dev/sdc1               30T    22T   9.0T  71% /tv
/dev/sdb1               12T    12T   892G  93% /tv2

Colo box:

Total Storage: 30TB
Code:
2U supermicro (SC825TQ-R720UB)
This has dual 720 watt 90%+ efficiency PSUs
2x Xeon E5530 2.4 Ghz
48GB DDR3-1333 Memory
ARC-1880i:
8x3 TB Hitachi 5K3000 (coolspin)
ARC-1231ML:
6x 1TB Samsung 840 EVO SSDs
Supermicro/Intel Quad Gig UIO NIC (AOC-UG-I4)
Onboard graphics

Total Internet connectivity:
2000 megabits.
Current DF output:
Code:
root@方向音痴: 06:30 AM :~# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sda1               50G    46G   4.5G  92% /
udev                    11M    62k    11M   1% /dev
/dev/shm                26G   4.1k    26G   1% /dev/shm
/dev/sda2               11G   8.5G   1.9G  83% /mail
/dev/sda3              2.1G   540M   1.6G  27% /usr/portage
/dev/sdb1               30G   4.4G    26G  15% /var/lib/mysql
/dev/sda4              2.1G   930M   984M  49% /tmp
/dev/sdc1               18T    11T   7.8T  57% /data
/dev/sdd               6.0T   2.5T   3.6T  41% /ssd
 
Joined
Sep 14, 2008
Messages
1,622
WOW. Please include system wattage if possible. A TB/watt would be an interesting stat to include here.

A lot of the machines do not run anywhere near 24/7. Here is the power usage of my rack currently:



So about 1200 watts.

Although quite a bit of extra power is being drawn by one machine with 2x 10-core CPUs that hammers the CPU most of the day (at least 150 watts), and another box with 2x 8-core CPUs (although it's much more idle).

This is with 85 disks powered on in 5 chassis. The disks are:
53x 4TB 5K4000 Hitachi/HGST coolspin disks.
30x 3TB 5K3000 Hitachi/HGST coolspin disks.
2x 1.5 TB Seagate 7200 RPM disks.

So that's 305 TB of disks powered on using 1.2 kilowatts, which works out to about 4 watts per TB.
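Re-deriving that figure from the drive counts above (sizes in advertised TB):

```shell
#!/bin/sh
# 53x 4TB + 30x 3TB + 2x 1.5TB = 305 TB advertised, at ~1200 W measured.
tb=$(( 53 * 4 + 30 * 3 + 3 ))        # the two 1.5 TB drives total 3 TB
watts=1200
# sh arithmetic is integer-only, so use awk for the division
awk -v w="$watts" -v t="$tb" 'BEGIN { printf "%.1f W per TB\n", w / t }'
```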
 

xtream1101

n00b
Joined
Dec 17, 2014
Messages
8
Total: 87T
One Case: 63T

Code:
Supermicro SC846E16 & 2 x SE3016 SAS expanders
2 x SuperMicro 900 Watt
Supermicro X8DTE-F motherboard
2 x Xeon 5570 @ 2.93GHz
24GB DDR3 1333 Ram
Adaptec 5805 HW Raid card
6 x Samsung Spinpoint F4EG 2T (HD204UI)
9 x Seagate Barracuda 3T (ST3000DM001)
12 x WD Red 4T (WD40EFRX)
Tripp Lite SMART1500LCDT 1500VA 900W UPS
Windows Server 2012

This system runs my Plex server and my downloads (usenet).
The OS is on a Samsung 850 250GB SSD
The hdd's are set up in 3 main storage arrays and a temp/download array:
Array #1 (Raid6): 6 x 2T Samsung (HD204UI)
Array #2 (Raid6): 8 x 3T Seagate (ST3000DM001)
Array #3 (Raid6): 12 x 4T WD Red (WD40EFRX)
downloads: 1 x 3T Seagate (ST3000DM001)

I keep an unused 2T, 3T, and 4T HDD on hand to replace any failed drives in the arrays.

I am using Drive Bender to combine Arrays #1-3 into a single pool, this way I just add any new arrays to the pool to expand.

Inside the Supermicro case


Array #1 & #3 and the download disk are stored in the supermicro case (SC846E16)
Array #2 is in the middle case (SE3016)
The bottom case is currently empty (SE3016)
The 2u Norco case is currently my ESXI Server.


This is all in my 26U rack:


Here is a video I took so you can hear the noise difference https://vid.me/eWol
 

Ryokurin

[H]F Junkie
Joined
Aug 14, 2001
Messages
10,560


30 TB storage, expanding to 44 TB Soon

Nanoxia Deep Silence 1 case with a Coolermaster 4 in 3 drive holder for the external bays. Due to the locks the door can't close all the way but it does close enough to keep the noise down.

Corsair TX 750w power supply. It had enough sata cables to power each drive, and it was cheap. Definitely overkill for this system.

ASRock FM2A88M-HD+ Micro ATX board, with support for 8 SATA devices

A6-7400K set to a 45W TDP

8GB DDR3 PC1600

Vantec 6 port Sata 6gb/s PCIe Raid Host card. 4 ports internal, 2 esata out. Needs PCI express x2 slot or better.

Various hard drives ranging from 2TB (older Samsung), 3TB (Toshiba DT01ACA300) to 4TB (HGST Deskstar)
Windows 8.1 with Flexraid Transparent raid

This machine's main purpose is to run SABnzbd and the usual apps (CouchPotato, Sonarr, Headphones). The majority of the drives currently are not in a RAID, as they hold things I won't miss if a drive dies. Currently the only part that's in a RAID is 8TB of music, SD movies, and various files.

My future plan is to use the eSATA ports and buy two external drive enclosures (more than likely the Vantec NexStar HX4) and fill them with 4 or 8TB drives, if they are cheap enough. The vast majority of my data is video and will never change, so the cheaper archive drives will be adequate. If I go with 8TB drives I'll RAID all of it.
 

Blue Fox

[H]F Junkie
Joined
Jun 9, 2004
Messages
11,811
Current total is about 120TB.

Primary server consists of the following:
Supermicro 846E16
Supermicro A1SRi-2558F and 8GB ECC RAM
Intel 320 40GB SSD
Areca 1882i
24 x 2TB Hitachi disks

Secondary server consists of the following:
Supermicro 847E1
Supermicro X8DT3-F with Xeon L5630 and 8GB ECC RAM
Imation 32GB SSD
Areca 1880i
26 x 2TB Hitachi disks
10 x 1TB Hitachi disks

File servers #3 and #4 only have a couple of disks each, so I won't bother mentioning them. The third (and fourth, not pictured) 4U chassis aren't in use at the moment, but should be soon seeing as I'm completely out of space. I have only ~3TB total free.

Secondary server has been moved to colo, but it just functions as an offsite backup.

 

Machupo

Gravity Tester
Joined
Nov 14, 2004
Messages
5,225
My SFF addition :)

36TiB total, 32.7TiB usable for storage

iStarUSA S-915 case (modded) - 17L of volume
ASRock C2550D4I
32GB Unregistered ECC memory
IBM M1015
20x Seagate M9T 2TB HDDs (one big Z2 set) for storage
Silverstone 450W Bronze PSU to push all of the 5VDC needed :)



 

FLECOM

Modder(ator) & [H]ardest Folder Evar
Staff member
Joined
Jun 27, 2001
Messages
15,708


Amount of total storage in picture: about 147TB

Systems from top to bottom:

HyperV #1
Case: Sun X4170
PSU: Dual 760W Proprietary
Motherboard: Proprietary
CPU: Dual Xeon L5520
RAM: 24GB DDR3 ECC
Controller Cards: Sun OEM Adaptec ASR-5805 w/Battery
Hard Drives: 2x 146GB 10k SAS RAID1 (Hitachi HUC10141CSS300), 2x 120GB SSD RAID0 (Intel 330)
Operating System: Hyper-V Server

HyperV #2
Case: Sun X4170
PSU: Dual 760W Proprietary
Motherboard: Proprietary
CPU: Dual Xeon X5570
RAM: 24GB DDR3 ECC
Controller Cards: Sun OEM Adaptec ASR-5805 w/Battery
Hard Drives: 6x 146GB 10k SAS RAID5 (Hitachi HUC10141CSS300)
Operating System: Hyper-V Server

HyperV #3
Case: Sun X4170
PSU: Dual 760W Proprietary
Motherboard: Proprietary
CPU: Dual Xeon X5570
RAM: 24GB DDR3 ECC
Controller Cards: Sun OEM Adaptec ASR-5805 w/Battery
Hard Drives: 6x 146GB 10k SAS RAID5 (Hitachi HUC10141CSS300)
Operating System: Hyper-V Server

HyperV #4
Case: Sun X4600 M2
PSU: Quad 950W Proprietary
Motherboard: Proprietary
CPU: 8x Opteron 8389
RAM: 128GB DDR2 ECC
Controller Cards: Onboard Sun LSI SAS 106x-based RAID
Hard Drives: 2x Intel SSD RAID0 (Intel 330), 2x 146GB 10k SAS RAID1 (Hitachi HUC10141CSS300)
Operating System: Hyper-V Server

DAS1
Case: EonStor Fibre Channel Attached Array
PSU: Dual 400W
Motherboard: Proprietary
CPU: Embedded
RAM: 512MB DDR
Controller Cards: Proprietary
Hard Drives: 16x 1TB Seagate 7200.11/7200.12 drives, RAID6
Operating System: Embedded

DAS2
Case: EonStor Fibre Channel Attached Array
PSU: Triple 405W
Motherboard: Proprietary
CPU: Embedded
RAM: 1024MB DDR
Controller Cards: Proprietary
Hard Drives: 16x 2TB Western Digital WD20EADS/FALS Green drives (I know), RAID6
Operating System: Embedded

DAS3
Case: EonStor Fibre Channel Attached Array
PSU: Triple 405W
Motherboard: Proprietary
CPU: Embedded
RAM: 1024MB DDR
Controller Cards: Proprietary
Hard Drives: 24x Hitachi 2TB UltraStar drives, 2x 12-drive RAID6
Operating System: Embedded

DAS4
Case: EonStor Fibre Channel Attached Array
PSU: Triple 405W
Motherboard: Proprietary
CPU: Embedded
RAM: 1024MB DDR
Controller Cards: Proprietary
Hard Drives: 24x Hitachi 2TB UltraStar drives, 2x 12-drive RAID6
Operating System: Embedded

This equipment is in a colocation facility but I paid for it myself and own it
 

Blue Fox

[H]F Junkie
Joined
Jun 9, 2004
Messages
11,811
I think he's more alluding that most of the people on here with a lot of storage have colocated servers and don't need to rely on third party hosting.
 
Joined
Sep 14, 2008
Messages
1,622
I think he's more alluding that most of the people on here with a lot of storage have colocated servers and don't need to rely on third party hosting.

I think the fact that it was listed as a requirement to be considered as a 'featured system' is what probably prompted the negative connotation. I have, and always will, use my own hosting, which is on hardware owned by me (colo'd) that I have control over and which is backed up to my home machine every day. I have way more faith in my own hosting than in a free third-party one that could potentially go 'poof' some day.

I get that the issue is probably that we don't want broken links. I do have to say, BlueFox, that you should at least have some hostname associated with your image links, as going straight to an IP like that guarantees it's going to stop working some day in the future.
 

TeeJayHoward

Limpness Supreme
Joined
Feb 8, 2005
Messages
10,443
I think the fact that it was listed as a requirement to be considered as a 'featured system' is what probably prompted the negative connotation. I have, and always will, use my own hosting, which is on hardware owned by me (colo'd) that I have control over and which is backed up to my home machine every day. I have way more faith in my own hosting than in a free third-party one that could potentially go 'poof' some day.

I get that the issue is probably that we don't want broken links. I do have to say, BlueFox, that you should at least have some hostname associated with your image links, as going straight to an IP like that guarantees it's going to stop working some day in the future.
You could always upload it to imgur AND host it yourself - something like this?
Code:
[url =YOUR_HOSTING][img]IMGUR_HOSTING[/img][/url]

(Honestly, just saving myself a spot on the front page. I did some alterations since the last thread. Got yelled at for having my UPS on the carpet, so it's sitting at the top of the stack. I'll update this post when I get some new photos up.)

Please excuse the mess:


~96TB total storage

Code:
Supermicro SC822T
-E3-1230
-X9SCL-F
-4x8GB DDR3
-1x320GB WD RE2
-ESXi 5.5 Host

Supermicro SC822T
-E3-1230v2
-X9SCL-F
-4x8GB DDR3
-1x320GB WD RE2
-ESXi 5.5 Host

Supermicro SC846
-E3-1220v3
-X10SL7-F
-4x8GB DDR3
-AOC-S2308L-L8e
-24x2TB Seagate SAS
-ESXi 5.5 Host (NAS and Infrastructure)

Supermicro SC846
-E3-1220v3
-X10SL7-F
-4x8GB DDR3
-AOC-S2308L-L8e
-24x2TB Seagate SAS
-ESXi 5.5 Host (Backup)

The NAS VM has a 2308 passed through with all 24 drives on it. It's the datastore for the rest of the setup. It's rsync'd nightly to the Backup VM on the other SC846, which also has a 2308 passed through. Snapshots are taken every night, and retained for 30 days. The two SC822Ts are clustered together, and contain the majority of my "real" VMs:

Code:
centos                      A   192.168.0.35 # VM for Linux desktop stuffs
chan                        A   192.168.0.22 # Imageboard
forum                       A   192.168.0.26 # Old vBulletin forum
ftp                         A   192.168.0.19 # FTP server (obviously)
irc                         A   192.168.0.25 # IRC server (obviously)
mail                        A   192.168.0.16 # Mail server (obviously, but inactive right now)
minecraft                   A   192.168.0.34 # Minecraft server (obviously)
nis                         A   192.168.0.20 # NIS and DNS server (On the top SC846)
ntp                         A   192.168.0.27 # NTP server (On the top SC846)
pxe                         A   192.168.0.18 # PXE and DHCP server
tim-and-sarah               A   192.168.0.28 # Web site 1
vcsa                        A   192.168.0.13 # vCenter Server Appliance (on the top SC846)
voice                       A   192.168.0.23 # Teamspeak, Ventrilo, etc
vpn                         A   192.168.0.17 # OpenVPN
webproxy                    A   192.168.0.24 # Proxy (Apache) for my two sites	
wiki                        A   192.168.0.21 # Wiki page (not really used)
win10                       A   192.168.0.36 # Win10 testing VM
win81                       A   192.168.0.37 # Win 8.1 VM for RDP (My home away from home)
www                         A   192.168.0.29 # My main web site
 

EnderW

[H]F Junkie
Joined
Sep 25, 2003
Messages
11,009
I get that the issue is probably we don't want broken links. I do have to say BlueFox that you should at least have some hostname associated with your image links as going strait to an IP like that guarantees its going to stop working some day in the future.
That's it. It seems like a lot of Photobucket or other crap hosting gets used, and then the pics are gone a few months later.

I've modified that requirement for you guys who are established and have your own hosting.
 

olol

n00b
Joined
Jul 22, 2013
Messages
12
There are some impressive systems out there, I must say. As soon as I get my new chassis, I will take a few pictures of my rack and show it off; it won't make the featured list though :(
 

m0po

n00b
Joined
May 14, 2014
Messages
17
Wow, impressive systems. I have parts on order that should arrive in about two weeks :)
 

asgards

Limp Gawd
Joined
May 8, 2008
Messages
204
Total storage: 192TB



Drive nodes:

Towers:
#36TB
Case: Chieftec Bigtower BA-01B-B-SL-OP + 3x CoolerMasters STB-3T4-E3-GP
PSU: Chieftec CFT-750-14CS
Motherboard: GigaByte GA-MA74GM-S2
CPU: 5050e
RAM: half of KVR800D2N5K2/4G
Controller Cards: 1x AOC-SAT2-MV8 + 1x Sil3114
Hard Drives: 18x 2TB
Operating System: gentoo

#37TB
Case: Chieftec Bigtower BA-01B-B-SL-OP + 3x CoolerMasters STB-3T4-E3-GP
PSU: Chieftec CFT-750-14CS
Motherboard: GigaByte GA-MA74GM-S2
CPU: 5050e
RAM: half of KVR800D2N5K2/4G
Controller Cards: 1x Sil3114 + 1x AOC-SAT2-MV8
Hard Drives: 17x 2TB + 1x 3TB
System drive: DataTraveler Micro 8gb 2.0
Operating System: debian

4Us:

#15TB
Case: Codegen IPC-4U-500 + 1x SK33502
PSU: Chieftec CFT-750-14CS
Motherboard: GigaByte GA-MA790GP-UD4H
CPU: Phenom II 940
RAM: 2x KVR800D2N5K2/4G
Controller Cards: Sil3114
Hard Drives: 5x 3TB
Operating System: debian

#34TB
Case: XCase RM400B + 1x hdha170
PSU: Fortron 550w
Motherboard: GigaByte GA-MA74GM-S2
CPU: 4200+
RAM: half of KVR800D2N5K2/4G
Controller Cards: 2x Sil3114
Hard Drives: 8x 2TB + 6x 3TB
System drive: DataTraveler Micro 8gb 2.0
Operating System: debian


Head nodes:
#58TB (old primary box, keeping it as primary till I've moved everything to VMs)
Case: Norco 4220
PSU: Antec 650w
Motherboard: GigaByte GA-H67M-UD2H-B3
CPU: i5-2500
RAM: KVR1333D3N9K2/8G
Controller Cards: 2x m1015
Hard Drives: 3x 2TB + 16x 3TB + 1x 4TB
System drive: SSDSA2CT040G3K5
Operating System: debian

#12TB (partly operational, don't have a picture with it yet)
Case: Norco 2208
PSU: Zippy (P2G-6510P) 2U 510W
Motherboard: Supermicro X10SLA-F
CPU: E3-1220
RAM: 16G
Controller Cards: 1x Sil3114
Hard Drives: 6x 2TB
System drive: Kingston V300 240GB
Network: 2x EXPI9402PT
Operating System: ESXi 5.5.0

drives used:
2TB: HD203WI, WD20EARS, HD204UI, HDS723020BLA642, HDS722020ALA330
3TB: DT01ACA300, WD30EURS, ST3000DM001
4TB: HDS724040ALE640


I'm using this to store media for myself and friends.
Currently all the drives are singles, iscsi-mounted to primary node.
Media is shared via samba over vpn.
I've started moving off consumer-grade hardware and onto virtualized systems, but all this takes time ;/
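For anyone curious how single disks on the drive nodes end up iSCSI-mounted on the primary node, here is a rough sketch using open-iscsi on the initiator side. The IP and IQN are made-up placeholders, not asgards' actual setup, and the drive nodes are assumed to already export their disks via a target such as tgt or LIO:

```shell
#!/bin/sh
# On the head node: discover a drive node's iSCSI targets, then log in.
# 192.168.1.21 and the IQN below are placeholder values.
iscsiadm -m discovery -t sendtargets -p 192.168.1.21
iscsiadm -m node -T iqn.2015-01.local.node1:disk0 -p 192.168.1.21 --login

# Make the login persistent across reboots of the head node.
iscsiadm -m node -T iqn.2015-01.local.node1:disk0 -p 192.168.1.21 \
  --op update -n node.startup -v automatic
```

After login the disk shows up as a local block device (e.g. /dev/sdX) and can be shared out over Samba like any other drive.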
 

SuperChicken

Weaksauce
Joined
Dec 11, 2006
Messages
99
I'll reserve a spot for my build as I'm currently moving everything around upgrading from my 39 TB server.
I should update my sig too.

Edit: Finally adding pictures

Box 2.0
AMD proc, Gigabyte Mobo
Areca 1880i
HP Sas Expander

14x 2TB RAID 6 = 24TB
7x 3TB RAID 6 = 15TB

Box 1.0 was running with 99.9% uptime (an ice storm last winter ruined 99.99%). I recently replaced the 4-year-old AMD guts with new AMD guts. I use it as just a home file server, nothing fancy. Normally a laser printer sits on top of the Norco case, but since replacing the guts and reformatting I couldn't get the drivers to work again on Windows Server 2008 R2. Lastly, for those who can count: I do have 22 drives in the case. The OS SSD is mounted where the internal DVD drive goes, and next to it inside is the 21st 3TB drive, which does make hot swapping a chore.



Tower
I bought this server off a friend who had to move out of the country and didn't want to bring it with him or sell it off.

The guts are unknown, as are all the random PCI SATA cards and the 500W PSU. It also has Sans Digital 4-HD enclosures. It's running unRAID 6 b12 with a whole bunch of mix-and-match drives:
1x 500GB (WD 5000AALS Black Edition, Cache drive)
2x 4TB (WD40EZRX, 1 is Parity drive)
3x 3TB (WD30EZRX)
15x 2TB (13x WD20EARS, 2x WD20EADS)
1x 1.5TB (Seagate ST31500341AS)

44.5TB array, with 500GB cache.




The plan is to combine both of these; I still don't know if I'm going to keep the H/W RAID or software RAID. They each have their +/-'s, and I've got way more storage than I need :p
 

Lost-Benji

Limp Gawd
Joined
Jun 23, 2013
Messages
321
Holy Shit, new thread, about time!

One suggestion/request:

Please include info on what is actually controlling the RAID/JBOD, as I keep seeing the same bad habit from the old thread: a parts list only, with no details for others to see.

List the OS, RAID card and config, or note if you're doing soft-RAID, please.
 

vFX

n00b
Joined
Sep 28, 2013
Messages
55
You are right, sorry.

Specs (from top):

Switch

- Cisco SG300-10 (front)
- Cisco SG100-16 (back)


SAN / Storage (always on)

- SuperMicro SuperChassis SC733TQ-465B
- Supermicro X8DTE-f + 1xE5620 2.40@2.00 + 24GB ECC Reg DDR3 1333
- LSI 9240 --> 2x Toshiba 2.5" 7200rpm 500GB RAID1 --> OS (Ubuntu 14.04) + SSD data backup
- LSI 9240 --> 2x intel 730 480GB (400GB RAID1 volume) --> iSCSI (LIO) --> SAN/esxi datastore (multipath active/active + VAAI / hw acceleration)
- adaptec 5405 (256MB+BBU) --> 4x WD RE4 1TB RAID5 (3TB) --> samba+nfs --> NAS
- WD RED 3TB --> NAS backup
- 2x intel Pro 1000 Server

At the moment the storage is provided at 2Gbit (thanks to multipath), but I have done all the tests and I'm ready to move to InfiniBand (IPoIB, 40 Gigabit) with Mellanox ConnectX-2 VPI/IB cards.



- Workstation (it's connected to the monitor on the desk)
- Chieftec CH-09B
- ASUS P8B-X + Xeon 1230V2 +16GB DDR3 1333 ECC
- GTX Titan
- 2x Samsung 840 EVO 240GB


ESXi-1 (always on)

- SuperMicro SuperChassis SC822T-400LPB
- Supermicro X8DTE-f + 1xE5620 2.40@2.00 + 24GB ECC Reg DDR3 1333
- WD RE4 500GB (local datastore)
- HP 331T Quad Gigabit


ESXi-2 (always on)

- SuperMicro SuperChassis SC822T-400LPB
- Supermicro X8DTE-f + 1xE5620 2.40@2.00 + 24GB ECC Reg DDR3 1333
- WD RE4 500GB (local datastore)
- HP 331T Quad Gigabit


Undefined (powered on when needed, usually as temp shared storage)

- Supermicro 1U case
- Gigabyte GA-C1037UN-EU + Celeron 1037U + 2GB RAM
- 3x WD RE4 500GB


FreeNAS (in test)

- SuperMicro SuperChassis SC822T-400LPB
- SuperMicro X7SBi + Xeon X3350 + 8GB RAM
- 6x WD RE4 500GB


APC SMT750RMI2U (connected only to "always on" servers + switches)

- Smart UPS, 19" Rackmount, 2U
- 500W, 750VA
 

RabbiX

n00b
Joined
Nov 10, 2014
Messages
19
Total RAW storage in rack: 112TB

The SAN:
Total RAW Disk: 96TB

(Both nodes are identical)
RAID Raw/Usable - 44TB/20TB
1x RAID-10 - 22 disks each + 2 hot spares
Linux
Case: Norco RPC-4224
MB: Supermicro MBD-X9SCM-F-O
CPU: Intel Xeon E3-1270v2
RAM: 8GB ECC
Raid: ARC-1882IX-24-4GNC
Raid Battery: ARC-6120-T121
NIC1: Intel X540-T2 (iSCSI-10GE)
NIC2: Intel X540-T1 (Replication-10GE)
Powersupply: SeaSonic 1200w
Raid HDs: 24x2TB (WD20EFRX)
System HD: Intel 520 60GB SSD


The 2 SAN nodes present shared datastores to the 3 ESXi hosts via iSCSI. Both nodes also serve as a NAS for regular file sharing.
Synology:
Total RAW Disk: 15TB

Synology DS1511+
RAID Raw/Usable: 12TB/10.91TB
Raid-5 (5x3TB disks)
Disks: WD30EZRX
Linux

This system is used solely for backups from vSphere and Desktops. Backup storage is presented to vSphere hosts via NFS. Desktops (Windows/Macs) use CIFS/AFP.

vSphere 6 Hosts:

VH01/VH02/VH03
Total RAW Disk: ~1TB

VH01
Chassis: Dell R620
CPU: 2x Intel Xeon E5-2620
MEM: 64GB ECC
NIC: Intel X540-T2
Power supply: 2x 750W (redundant)
RAID: PERC H710
Raid HDs: 8x146GB 10K SAS (ST9146852SS)
RAID-5, 7 disks + 1 hot spare
ESXi 6

This storage is used by one of my Exchange servers at all times, and sometimes to move critical VMs over to if I need to take the SAN offline for some reason.


VH02 and VH03 (identical) - Storage is only used for booting ESXi
Chassis: Supermicro SYS-5017R-WRF
CPU: 1x Intel Xeon E5-2670v2
MEM: 64GB ECC
System HD: Intel 520 60GB SSD
NIC: Intel X540-T2
Power supply: 2x 750W (redundant)
ESXi 6

So, 3 physical hosts. The total VM count fluctuates a lot depending on what I am working on. There are about 30 VMs currently that I would consider my core servers.





Drawing of the ESXi SAN setup I created a while back, which provides a design overview:



Additional Details if curious:

Networking:
1x Netgear GS752TXS (All switches connect here via SFP+)
1x Netgear GSX712T
2x Netgear GS708E (iSCSI/ESX)
1x Netgear R8000 (WAP)
1x Cisco ASA 5515X (premise router / Firewall)
1x Watchguard XTMv (Firewall)
1x F5 BIG-IP VE-LAB (Load Balancer/Reverse Proxy/Web Server Firewall)
(There are probably around 15 VLANs/subnets on the network...)

Backups:
VM backups are performed via Veeam Backup and Replication.
Desktop backups are performed via Veeam Endpoint Backup or Apple Time Machine.

Cabinet (Rack):
APC Netshelter CX 38U
http://www.apc.com/resource/include/techspec_index.cfm?base_sku=AR4038
(Copy link location and paste into new tab - site doesn't allow links from third party)
 

spankit

Limp Gawd
Joined
Oct 18, 2010
Messages
262
That cabinet looks awesome. Can you provide additional information on that build as well?
 

RabbiX

n00b
Joined
Nov 10, 2014
Messages
19
You can own one for a measly $8K !

Not far off, but not quite that bad either.

http://www.newegg.com/Product/Product.aspx?Item=N82E16816225171

Spent a full year saving up for it... and I would still buy it again given the choice. It has ended up being one of the best decisions I've ever made for my aural sanity. Granted, if you don't have to share the same space with your ridiculously loud equipment there would be no point, but the cabinet is about 3 ft from my head in the office (which my wife and I share). The previous few years of that 80 dB fan whine were starting to drive me batshit crazy. You basically couldn't have a conversation in the room without yelling at all times, it was so loud (the networking equipment is the worst).

If it helps put it in perspective, each storage node cost more to build at the time than the cabinet itself. Given the topic of this thread, and all the equipment porn being shown off here, I'm honestly surprised the price of a mere cabinet is even a consideration. For instance, one of the systems on display easily looks to have $20k+ in just hard drives :). Hell, I've installed many SANs at work over the years, and a single one of the smaller 24-drive units costs more money than everything in my entire rack combined (including the cabinet).

All that said...I freaking love my cabinet :D
 

spankit

Limp Gawd
Joined
Oct 18, 2010
Messages
262
I'm Canadian so after the exchange and all that jazz it would probably be damn close to 8k for me. ;) If I could afford it, I would probably buy one too. It looks pretty rad.
 

RabbiX

n00b
Joined
Nov 10, 2014
Messages
19
I'm Canadian so after the exchange and all that jazz it would probably be damn close to 8k for me. ;) If I could afford it, I would probably buy one too. It looks pretty rad.

I can certainly understand the frustration of obtaining anything large from the States when you don't actually live there. The cabinet is a fairly recent purchase, and honestly it only became possible once my family and I moved to California a few months ago. Before that, we had been living in Japan for the last 10 years.

While I do miss Japan (I loved living there and hope to go back), I certainly don't miss some of the ridiculous prices I had to pay to ship certain things. Attempting to ship something this size was basically out of the question, and I don't even want to guess what it would have cost me. The damn thing probably wouldn't have even fit in our tiny box of a house.

I do miss my ridiculously fast and low-cost internet from Japan, though. Internet in the States is so bad it's not even funny. I paid about $40 a month for 100Mbps speeds 10 years ago in Japan. 10 years ago... it seems like the internet here (USA) has barely gotten better in all that time (sorry for going a bit off topic, but damn, the internet sucks here :)).
 