[H]ard Forum Storage Showoff Thread

This is the best I can do since I am poor. :(

On the bottom: a Rackable SE3016. Replaced all the fans, including the PSU's. 12x 2TB drives and 4x 4TB drives; the 2TB drives will be upgraded to 4TB once they bite the dust.

Above that: a Rackable 2U Quanta S44. Removed one of the Xeons and replaced the fans. Dedicated torrent box.

That plain-looking box on the top is an old industrial DVR case. It houses my Plex server with a 4790K and an LSI 9201-16e connected to the SE3016, running FlexRAID.

After all the BS and parity, 30TB usable.

 
Don't be ashamed, wirerogue; that is a well-cabled, well-organized, beautiful setup. Look at some of the other posters, with wires run between things making it look like a spaghetti monster vomited over Skittles.

You have a clean setup for what it is. Be proud.
 
Yeah, that's a very nice setup, wirerogue. What little cabinet is that?

That is the mystery cabinet. I picked it up at a garage sale and used it as a nightstand.

After a while I noticed the opening was exactly 17" wide and 10U.

It's only 16" deep, so short-depth cases only, which is good since it sits in my living room.
 
Look at some of the other posters with wires run between things making it look like a spaghetti monster vomited over skittles.

Kind of like mine :p. In my defense, everything is front-cabled, which makes things a little bit harder.

First, the rack:


Top to bottom:
  • Brocade SilkWorm 200e 16 Port FC Switch
  • Quanta LB4M
  • 4x Rackable ESXi nodes
  • Rackable SE3016 SAS expander
  • Rackable 2U ESXi AIO (storage server + routers)
  • Rackable SE3016 SAS expander

Now for some specifics on the storage server:


First SE3016:
  • 13x Hitachi 500GB UltraStar drives
  • 2x RAIDZ2 vdevs + hot spare
  • 2x STEC Mach16IOPS 50GB SSDs for SLOG

Storage Server:
  • ESXi 5.5 U2
  • Tyan S7012 motherboard
  • 2x Xeon X5560
  • 48GB DDR3 ECC REG
  • LSI 9200-8e (passthrough to OmniOS + napp-it)
  • QLogic QLE2462 FC HBA (passthrough to OmniOS + napp-it for FC target)
  • Adaptec ASR-5805, 256MB cache + BBU (4x 72GB 10,000RPM drives in RAID 0+1 for ESXi boot, OmniOS + napp-it boot, and pfSense VM storage)
  • ConnectX-2 10Gb adapter for routing VLANs and serving CIFS shares from the storage server

Second SE3016:
  • 5x 2TB drives in RAIDZ1 for media storage

Nothing spectacular like some of the others, but a little bit different. The main reason I chose smaller drives is shorter rebuild times, faster sequential writes, and redundancy. I don't need a lot of storage for my VMs, so it works well. The storage server provides shared storage for the ESXi servers over 2x 4Gb FC connections. General media storage is delivered from the 5x 2TB RAIDZ1 pool via CIFS/SMB over the 10Gb network.
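The shorter-rebuild argument above can be sketched with quick arithmetic (the throughput figures below are illustrative assumptions, not benchmarks of these particular drives):

```python
# Best-case time to rewrite one failed drive's worth of data sequentially.
# Throughput numbers are assumed for illustration; real resilvers run slower
# due to pool fragmentation and concurrent I/O.

def resilver_hours(drive_tb: float, mb_per_s: float) -> float:
    """Hours to stream drive_tb terabytes at mb_per_s megabytes/second."""
    total_mb = drive_tb * 1_000_000  # decimal: 1 TB = 1,000,000 MB
    return total_mb / mb_per_s / 3600

print(round(resilver_hours(0.5, 100), 2))  # 500 GB drive at ~100 MB/s -> 1.39 h
print(round(resilver_hours(4.0, 150), 2))  # 4 TB drive at ~150 MB/s -> 7.41 h
```

Even granting the bigger drive a higher sequential rate, the rebuild window grows several-fold, which is exactly the trade-off described above.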
 
That's a lot of Rackable. I'm pretty happy with mine; took a little work to bring down the noise levels. Looks like you've got that in your basement, so probably not an issue for you.
 
Yep, they are in the basement. The 1Us are loud, but the 2U is actually quieter than the SE3016s. I was surprised how quiet it was the first time I powered it up. I didn't plan on all Rackable, but a good deal is a good deal.

I got the 2U for $325 recently with 24GB of RAM and the RAID card. I got the 1Us two years ago for $100 apiece, shipped.
 

Looks like one of the IKEA hacks.
I recently bought some IKEA Rast nightstands to make a DIY server rack.
For my rack build I'm going to recess the rails 1.5" so I can add a door, put some sound deadener on the inside of the door to cut down on noise, and add a couple of 120mm inlet fans.

That's a clean IKEA-hack build though, props :cool:
 
Space before formatting (across three systems): 76TB.

Primary file server:
Case: NORCO RPC-2008 2U
PSU: generic 300W-ish from eBay
Motherboard: Supermicro X8DTi-F
CPU: 2x Xeon 5640s
RAM: 72GB ECC memory
Controller Cards: Supermicro AOC-SAS2LP-MV8
Hard Drives: 8x 3TB Seagate Barracuda ST3000DM001
Operating System: Ubuntu Server 14.04 LTS w/ ZoL (RAIDZ2)
Network: 10Gbps fiber
Raw Storage: 24TB


Backup file server:
Case: NORCO RPC-4020
PSU: Corsair 850W
Motherboard: Intel S1200BTLR (LGA1155)
CPU: Intel Celeron G540
RAM: 16GB ECC memory
Controller Cards: Areca 1280ML w/ 2GB cache in JBOD mode
Hard Drives: 20x 2TB Hitachi GST Deskstar 7K2000
Operating System: Ubuntu Server 14.04 LTS w/ ZoL (RAIDZ3)
Network: 10Gbps fiber
Raw Storage: 40TB


Testbed:
Case: Rosewill R4000 4U
PSU: Corsair 850W
Motherboard: No idea!
CPU: Intel Xeon X3450
RAM: 32GB memory (non-ECC, so ZFS is out of the question for now)
Controller Cards: LSI MegaRAID SAS 84016E
Hard Drives: 12x 1TB hard drives (assorted brands, mostly Seagate)
Operating System: Probably Ubuntu, but that's a project for another weekend
Network: 4x 1Gbps copper
Raw Storage: 12TB


The other systems on the rack include an IPFire router, two XenServer 6.5 hypervisors, and a Ubiquiti EdgeSwitch 48-port 500W as my core switch.
The primary file server handles about 5-15 VMs at any given point in time, in addition to being an NFS NAS.
The backup file server handles rsync backups about four times a month (it's turned off the rest of the time).
The testbed server... is for giggles, and I don't mind accidentally breaking it and reinstalling everything.
 

The other systems on the rack include an IPFire router, two XenServer hypervisors, and a temporary rig that I'm screwing around with.
(X3450 Xeon, 32GB Memory, 12*1TB HD's, OS to be determined...)

How do you like IPFire? I have been using IPCop since 2000.
Could you share your findings in this thread or by PM?


Thanks!
 
I love IPFire. It works fairly well and has a LOT of bells and whistles.
I've been playing around with Squid / Update Accelerator recently, and it's going just fine.
The IPFire hardware is an Intel 3330S, 16GB of memory, an Intel quad-gigabit NIC, and a 1TB Seagate SSHD. Works wonderfully thus far.

I'll probably be virtualizing it pretty soon though. No sense in having extra computers online when I don't need them.
 
Should be good. I have been running IPCop virtualized since 2008, when my VIA C800 died on me.
Thanks for your reply.
 
Everyone should include their 24 hour average power draw.
The smaller it is per TB, the more efficient, cheaper, and cooler your setup is!
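The suggested metric boils down to a single division; a minimal sketch (the wattage and capacity below are made-up examples, not anyone's measured numbers):

```python
# Watts per usable terabyte: lower is more efficient, cheaper, and cooler.

def watts_per_tb(avg_watts: float, usable_tb: float) -> float:
    """24-hour average draw divided by usable capacity."""
    return avg_watts / usable_tb

print(watts_per_tb(300.0, 30.0))  # e.g. 300 W for 30 TB usable -> 10.0 W/TB
```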
 
Oh dear. Power efficiency? For the X8 series of Supermicro boards? Can I just disqualify myself now? xD I'm running three of them, fully loaded with CPUs and RAM...
 
This goes along with the request for power consumption: what sort of UPS are you guys running for your personal storage servers?
I'm picking up a pair of 1000W rackmount UPSes to complement the rack I'm setting up.
I envision one UPS will go to the media server (the plan is to migrate the media server to a 3U eventually) and the other will go to the network equipment, with a VM server added later.

Currently I run two 850W UPSes: one for my gaming rig and a single monitor (the gaming rig only gets 8 minutes of battery backup) and the other for the media server and all my network equipment (I don't have the RAID array set up yet on the media server, but as it sits now I get 20 minutes).
 
I've got my central server and my workstation on a 1500W Back-UPS. Everything else is just on a fancy surge protector. If anything goes down, I can bring it back up by remoting in to my workstation. After 30 minutes without juice, the main server goes into graceful shutdown, and the workstation just goes off when there's nothing left.

I've also got a 1000W Back-UPS for my Mac in the bedroom (mainly for APC's payout if lightning kills my gear) and a 750W UPS just for the router/modem. The 750W lasts long enough that the house would need to be out of power for a full day or more, and I'd be home by then. The only rooms in my house without a UPS are the kitchen and bathroom.
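For anyone sizing their own units, runtime can be ballparked from battery energy and load. A rough sketch (the battery capacity and efficiency are assumptions, and real discharge curves are non-linear, so treat it as an estimate only):

```python
# Rough UPS runtime estimate: battery watt-hours times inverter efficiency,
# divided by the load, converted to minutes. Real runtime is shorter at high
# loads because lead-acid capacity drops at high discharge rates.

def runtime_minutes(battery_wh: float, load_w: float,
                    inverter_eff: float = 0.9) -> float:
    return battery_wh * inverter_eff / load_w * 60

# e.g. ~216 Wh of battery (two assumed 12 V 9 Ah cells) under a 400 W load:
print(round(runtime_minutes(216.0, 400.0), 1))  # -> 29.2 minutes
```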
 
I'm only going to talk about the three 1U servers, since they are the only ones running at the moment. The other servers are a two-node 2U: one node will become a single Nutanix Community Edition node once the storage arrives for backup/replication, and the other node will be running experimental stuff. The top server has been gutted, but I love the chassis. It will eventually turn into an all-flash array once larger SSD prices fall into a comfortable range to take on that endeavor. In the meantime, my Nutanix cluster does everything I need right now.

Ah, the NUC: that's my primary domain/DNS server running Server Core 2012 R2. BTW, if you're wondering about storage usage, I only have one Windows VM and that's just a template; the majority are Linux, from CentOS to Ubuntu Server to CoreOS.

3x Supermicro SYS-1026T-6RFT+ with the exact same config:

CPU: 2x Xeon 5680 6-core CPUs
Memory: 96GB DDR3 ECC
Disk: 2x Samsung Data Center 843T 480GB
Disk: 4x Seagate 2.5" 1TB hybrid drives
Controllers: onboard LSI 2108
Network: 2x 1Gb Intel / 2x 10Gb Intel SFP+ / 1x 10/1000 IPMI (onboard)
Software: Nutanix Community Edition
Configuration: 3x nodes, RF2

 

Made by AIC. We've built multi-PB systems using these.

Case: AIC RSC-4H, with 60 SATA/SAS bays, takes up 4U
PSU: AIC triple redundant
Motherboard: Supermicro X10SRL-F
CPU: Intel Xeon E5-1620 v3 / 4x 3.50GHz
RAM: DDR4, 2133MHz, registered ECC
Controller Cards: LSI HBA or RAID
Hard Drives: 6TB Toshiba MD04ACA600
Operating System: Linux based

3x backplanes with SAS expanders. Very easy to cable.

We use these as ZFS nodes in a cluster.

We've sold a few of these and advised clients outside of the UK to purchase them.

Edit: quad power supplies, not triple.
 
Just curious: what kind of drive temps do you see on those (in a conditioned-space environment), and what fans are those?

What kind of networking/storage network is used in the ZFS cluster?
 
Drive temperature depends on airflow and ambient temp.

Usually less than 30°C.

The fans are "San Ace" (Sanyo Denki). I don't have the model number; I can pull a fan tomorrow when I'm back at work.

There is really good airflow between the drives, as the drive caddies do not have any tray above or below the drive, so there's more of a gap for air.

The storage network used to export the storage to a ZFS head controller is InfiniBand: 40Gbit QDR or 56Gbit FDR.

You can of course use these as standalone boxes.
 
That is a thing of beauty. I bet it costs an arm and a leg, though. Would love to have something like that.
 
Agreed, an arm and a leg for sure; probably my wife too if I tried to make the deal.

So what's the USD retail on one of those? (wink)
 
You can watch the same porn again and again and get something out of it? For me it used to be just once and done.
 
What kind of stuff do you guys have on those HUGE storage drives?

Customers use them for large internal systems: some live data, some archive.

Cloud companies also use them for their storage.

If using 6TB drives, total raw is 360TB.

That sounds huge now, but in the future your phone will hold more than that :cool:
 
Just got my last two drives in yesterday, and she is currently initializing the array.
RAID6: 8x 4TB WD Red Pro drives = 22TB
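For what it's worth, the math behind that figure (a sketch assuming the 22 is the OS-reported binary capacity):

```python
# RAID 6 keeps (n - 2) drives of data; the OS then reports capacity in
# binary TiB, which is why 8x 4 TB shows up as roughly 22 rather than 24.

def raid6_usable(drives: int, size_tb: float) -> tuple[float, float]:
    usable_tb = (drives - 2) * size_tb        # decimal TB after parity
    usable_tib = usable_tb * 1e12 / 2**40     # as most OSes report it
    return usable_tb, usable_tib

tb, tib = raid6_usable(8, 4.0)
print(f"{tb} TB decimal = {tib:.1f} TiB")  # 24.0 TB decimal = 21.8 TiB
```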

 