The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Asus P5MT w/ Pentium D CPU.
4GB RAM.
3ware 9550SX-8LP controller.
Openfiler 2.99.
8x 2TB Seagate drives, software RAID 6.
Total of 4 gigabit Ethernet ports in a bond.

Running iSCSI & CIFS.
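For the curious, Openfiler's software RAID 6 is Linux mdadm underneath; the equivalent command line looks roughly like this (device names are hypothetical - the web GUI normally drives it):

# Build an 8-drive RAID 6 array (survives any two drive failures)
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
# Watch the initial sync
cat /proc/mdstat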
 
An update of my server: moved to a different case, replaced a few hard drives, replaced the controller.

Setup :
Xeon E3 1235 cooled by Corator DS
ASUS P8B WS
2x8GB + 2x4GB G-Skill RipjawsX RAM
IBM ServerRAID M1015 used as HBA
Digital Devices CineCT v6 dual DVB-C/T card
Seasonic X-460FL
Nanoxia Deep Silence 1 case
Corsair Force 3 240GB as system drive
4xWD30EFRX, 2xWD20EARS, 6xWD20EARX in a Greyhole drive pool (no RAID), 26TB usable space

frontssq.jpg

backswf.jpg
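For reference, Greyhole pooling is driven by a plain config file rather than RAID. A rough sketch of the relevant part of /etc/greyhole.conf (mount points and the share name here are made up for illustration):

storage_pool_drive = /mnt/hdd1/gh, min_free: 10gb
storage_pool_drive = /mnt/hdd2/gh, min_free: 10gb
# Keep two copies of everything in the Media share, on different drives
num_copies[Media] = 2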
 
I wonder how you would see my contraption then. Sure, I use 3 cases, but they're internally connected and there's only one PSU and one wall socket connection.

IMG_0463.jpg

The 2 bottom boxes are just drive boxes with no logic at all. I consider it one case; I just built it in 3 boxes so I could take it to LAN parties without breaking my back.
IMG_0459.jpg

Note: old hardware inside it.

Setup :
CPU: Xeon E3 1265LV2
Motherboard: Supermicro X9SAE-V
Memory: 16 GB ECC RAM
Expansion Cards: 1x IBM ServerRAID M1015 IT Flashed and 2x HP SAS Expanders
PSU: Corsair HX1000
Case: 3 part custom modular case used as 1 system
SSD: 2x Intel Postville 160 GB
HDD: 8x WD Green EARX 3 TB & 8x Hitachi 5k4000 4 TB
OS: ZFSGuru
Usage:
1x 85 GB boot partition (SSD1)
6x 10 GB SLOG partitions (SSD1)
6x 25 GB L2ARC partitions (SSD2)
RAIDZ2: 8x WD Green EARX 3 TB
RAIDZ2: 8x Hitachi 5K4000 4 TB
Effective space is around 37TB according to Windows, but slowly expanding ;)
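For anyone wondering how the SSD partitions tie in: SLOG and L2ARC devices are attached per pool in ZFS, roughly like this (pool and device names are hypothetical; ZFSGuru normally does this through its web interface):

# Two 8-drive RAID-Z2 vdevs in one pool
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15
# Attach an SSD partition as SLOG (log) and another as L2ARC (cache)
zpool add tank log gpt/slog0
zpool add tank cache gpt/l2arc0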
 
Because I got the drives for free, and the machines will not be running 24/7 (just for backups).

Also, I got the chassis (with internals) for only $250; it just needed disk caddies, which I could get for free from work (with drives). All the drives were failed ones that work isn't even bothering to RMA anymore, so I was able to grab the failed ones, RMA them, and use the replacements. The internals were pretty weak (dual-core Opteron with 8 GB of RAM and PCI-X Supermicro controllers), but for that plus the chassis at $250 you can't go wrong. It's fast enough just for backups. Already seeing some pending and reallocated sectors on the drives I got back from Seagate:
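For anyone wanting to watch the same counters, smartctl exposes them as attributes 5 (Reallocated_Sector_Ct) and 197 (Current_Pending_Sector); a quick check, with a hypothetical device name:

# Pull just the reallocated and pending sector counts from SMART
smartctl -A /dev/sda | egrep 'Reallocated_Sector|Current_Pending'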

Holy crap, that's cheap! The backplanes alone are about $270 each. I use Norco chassis personally, but if I could get that sort of price on multiple SC846s, I would jump all over them.
 
IMG_2019_zps8967e019.jpg


Windows Storage Server 2012
X9SCM-F-O
E3-1230v2
32GB ECC
6x Seagate ST3000DM001 3TB
Corsair TX650
16.3TB available via simple Storage Space
APC Back-UPS XS1000

Oracle Solaris 11.1
X9SCM-F-O
E3-1230
16GB ECC
6x Seagate ST3000DM001 3TB
Seasonic SS-400FL2
13.2TB available via ZFS raidz
APC Back-UPS XS1500
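For context on the 13.2TB figure: a six-drive raidz holds five drives' worth of data, so 6x 3TB nets roughly 13.6TiB before overhead. Creating it on Solaris is a one-liner (pool and device names are hypothetical):

# 6x 3TB in raidz: one drive of parity, ~15TB raw data capacity
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0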
 
Odd. You've got 32GB in the Windows Storage Server 2012 machine but only 16GB in the Solaris/ZFS box. Yet Storage Spaces is almost cache-less while ZFS will use every bit of memory you give it to improve performance.

Seems backwards.
 
'Twas what I was thinking - maybe he's doing hardware RAID on the Solaris box.
 
Seems backwards.
'Twas what I was thinking

Actually, they're both VMs. The two boxes are the ESXi hosts, with the drives passed through to the VMs. Solaris is rarely hit - it's a backup of the data on the Windows VM. Since I'm limited to gigabit line speed on the Solaris box, I saw no real reason to give it a ton of RAM. If scrubs/resilvers/whatever take a little bit longer, it's no biggie. I'd like to build up a SAN and toss the drives in, but I've got to wait until I pay off a bit of debt first. For now, this serves me just fine.
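If I ever do want to cap or bump the ARC explicitly, Solaris takes a single tunable in /etc/system; a sketch (the 4GB value is just an example):

* Cap the ZFS ARC at 4GB (value in bytes); takes effect after a reboot
set zfs:zfs_arc_max = 0x100000000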
 
Makes sense, man. RAM is so cheap though, it couldn't hurt.

I'm all about people getting debt off their balance sheets.
 
oA8MVzC.jpg


Here is my home server. It started out as just an HTPC with 6 drives. As time went on I needed more space, so I got the Rosewill RSV-S8. I'm now on my second RSV-S8. Once I run out of space with my current setup, I'll build a proper rack-mounted server with a real RAID controller.

Internal
Case: nMEDIAPC Black HTPC 5000B Micro ATX Media Center
OS: Windows Server 2008
Controller: HighPoint RocketRAID 644 PCI-Express 2.0 x4 SATA III Ewwww

6 internal HDDs: Samsung 1.5TB EcoGreen F2 5400rpm
1.5TB x 1: boot / non-critical backup
1.5TB x 2: RAID 1 for personal backup
1.5TB x 3: RAID 0 scratch drive

External
Cases: 2x Rosewill RSV-S8
2TB x 8 RAID 5 (SAMSUNG EcoGreen F4 5400rpm)
3TB x 8 RAID 5 (Seagate Barracuda 7200rpm)

Total (raw): 49TB
Usable: ~39TB

mhkTt7j.jpg


LbRTsvM.jpg


For some reason it was $30 cheaper per drive to buy externals and rip out the internal drives than to buy bare internal drives in the first place. Here I am testing the drives before I put them in the RSV-S8. Now I have 8 extra external HDD cases, haha. This obviously voids the warranty, but I figure as long as none of the drives dies within a year it will be worth the risk.

xt1WF0J.jpg


dRnSmKI.jpg


Trying to keep the server running while painting the living room. Unfortunately one of the RAID cables came loose and I had to restart the server. The RocketRAID 644 is a POS.
 
moved all my storage to a colo

Xserve dual G5 2.3GHz, 120GB Intel SSD & 2TB RAID 1
Xserve RAID, 250GB x14, RAID 5+0
Dell PowerEdge R200 with a quad-core Core 2 something, 2x OCZ Vertex 4 120GB RAID 0 & 1TB Barracuda ES - running ESXi 5.1
Dell PowerEdge R610, L5639, 24GB RAM, 5x 160GB VelociRaptor 10k SATA datastore, 160GB OCZ Vertex 4 & 2x dual 4Gb FC cards passed through to a VM - also running ESXi 5.1
EonStor 16x 2TB RAID 6
EonStor 16x 1TB RAID 5+HS
EonStor 16x 1TB RAID 5+HS

2013-03-11-01.19.17.jpg

2013-03-11-01.20.20.jpg
 
That's awesome, love the colo pr0n. We should have a thread for that.
 
moved all my storage to a colo

Very nice.

Do you share this colo with anyone? That's a lot of rack space you're taking up there.
What are you using all that stuff for? VPS / Web hosting?

It also looks like you are moving from RAID5+HS to RAID6. Did those rebuild times scare you?
 
A friend of mine owns a small colo and I do some work for him from time to time... he told me if I ever needed colo space for personal stuff to let him know... I asked him if he was "absolutely sure" he wanted to make that offer and he said yes, so I showed up with that pile of stuff and he was like :eek: lol

I am bringing in some clients, though that stuff isn't online yet.

The ones doing 5+HS can't do RAID6, so that's as good as it's going to get... the rebuild time is not that terrible; I think it was <24 hours last time... I may try a rebuild on the bottom one, since I have not moved any data onto it yet, to see how long it takes... I could do 5+0, not sure, still thinking about it... might change one of the 5+HS arrays to a 5+0, not sure yet...

As far as what the servers are doing: the Xserve is running SHOUTcast for a couple of friends of mine, and the 2TB RAID 1 in it mirrors the 2TB RAID 1 in my desktop.

The Xserve RAID holds a backup of the same data, but with daily incremental backups.

The R200 is running VMware, hosting a database for that which shall not be named.

The R610 is running VMware, with a bunch of Windows XP VMs for... stuff... and a 2003 VM for my file server.

The EonStor arrays are FC-attached to the R610 and hold media of various types.
 
He only subscribes to Playboy for the articles. The dirty, filthy, and nasty stuff is on the hard drives.
 
Hi guys, I've been a long-time lurker but never put up my system until now.

Specs:

Lian-Li PC-P80B
4x X-Case 5-in-3 HDD caddies
Tyan S7025
2x quad-core Xeon E5530 (2.4GHz)
12GB RAM
Supermicro MV8
Modified, so now near-silent

120GB OCZ Solid 3 (Windows Server 2008 boot drive)
11x Samsung 2TB
2x Hitachi 2TB
1x Samsung 1TB
1x Hitachi 1TB

2x Seagate 3TB (backups, sitting on a shelf)

Each drive is currently configured as JBOD, as it seemed easier at the time to manage backups of my media files (i.e. lose a 10th of my data and I can see what I've lost and replace it).
Tempted to move to RAID but not sure which way to go. Any suggestions?

Currently it's in place as my file server/game server. I want to change to VMware but don't want to risk fudging up my current config!
Current capacity: 34TB - wow, never worked that out before :s

2013-03-04%2015.54.47.jpg
 
Tempted to move to RAID but not sure which way to go. Any suggestions?

Currently it's in place as my file server/game server. I want to change to VMware but don't want to risk fudging up my current config!

I too would like to move to a VM setup like ESXi, but I'm afraid something would go wrong in the migration process and my cheap little RAID card wouldn't support it.

One problem you might face is that you will need to format all of the identical drives that go into the array.

If you are unwilling to purchase extra drives, you would have to back up and format at least four of your 2TB Samsungs, build an initial four-disk RAID 6, move data from backup to the array, expand the array one or two disks at a time with the disks that were being used as temporary backup, and rinse and repeat until you have an 11-disk RAID 6 (see the sketch below). This would take a very long time.
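If you were doing the same thing with Linux software RAID rather than a hardware card, the grow steps would look roughly like this (hypothetical device names; each reshape runs for many hours):

# Start with a four-disk RAID 6, then fold in a freed-up disk and reshape
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5
# Repeat the add/grow cycle until all 11 disks are in the array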
 
SunStorage 7310 SAN providing storage for just about everything (user areas, shared files, application and deployment files/images, couple of databases, iSCSI target for the virtualised servers).
Around 2000 users total, but probably only about 500 logged in at any given time.

Quad core Opteron, 16GB RAM, 2x500GB SATA for Solaris, and 22x 1TB SATA for data (plus two 16GB SSDs for ZIL - mirror, I'm assuming).

Set up by a previous admin as one big 22-drive RAID-Z2 :eek: It performs surprisingly well, considering. According to the web interface it sees a fairly steady load of around 1500-2000 IOPS, with bursts to around 4000. I'll try and get a screenshot.

If I could start from scratch I'd prefer to replace the tiny SSDs with an extra two 1TB drives and make four 6-drive Z2 vdevs, but for one thing that wouldn't net quite as much capacity: four vdevs with four data drives each is only 16TB, and we're currently using about 14TB.
Oh, and more RAM :cool:
 
That doesn't exactly count unfortunately because it's your employers. I should post an update...been at 100TB+ for a while.
 
That doesn't exactly count unfortunately because it's your employers. I should post an update...been at 100TB+ for a while.

I checked the OP first and in the FAQ it says: "Home unless you own the business, but if not, feel free to post, it just won't count towards the rankings."

Who cares about the rankings or who owns the system? I just want to see other people's cool systems and have the opportunity to post mine.
 
SunStorage 7310 SAN providing storage for just about everything (user areas, shared files, application and deployment files/images, couple of databases, iSCSI target for the virtualised servers).
67pkes.jpg

Set up by a previous admin as one big 22-drive RAID-Z2 :eek:
Oh, and more RAM :cool:

Yeah, RAM should be boosted for sure. And a single 22-disk RAID-Z2?! Wow, in a production-grade setup? Wow.
 
Doesn't matter anyway; this thread is dead. The owners are too lazy to update the topic, and the ranking is no longer relevant.
 
Yeah, RAM should be boosted for sure. And a single 22-disk RAID-Z2?! Wow, in a production-grade setup? Wow.

Yeah, the problems being:
a) It's not really my area of responsibility, so I can't just jump in and start making changes;
b) It's on a support contract and I don't know whether we'd be allowed to make hardware changes without that being affected;
c) Theoretically we should be able to just destroy the pool knowing that the data can be pulled from backup, but in practice I'm not all that confident that our backup setup/strategy is robust enough for us to be able to easily do that (plus, see a) again)
d) The people making the decisions and paying for the upgrades would need to be convinced to "fix something that isn't broken", as it were.
 
Sit tight, guys - rankings will be updated soon and the original post cleaned up a bit.
 
Amount of storage in the following system: 36.8TB

Case: Norco 4220
PSU: Enermax ERV1050EWT Revolution85+ 1050W
Motherboard: Supermicro X9DRH
CPU: E5-2620
RAM: 96GB DDR3 ECC Reg
GPU (if discrete): Onboard
Controller Cards (if any): Dell H200, IBM M1015, HP SAS expander
Optical Drives: None
Hard Drives (include full model number): 6*Hitachi HDS5C303-A580-3TB, 6*HITACHI 0F10311 2.0TB, 6*HITACHI Deskstar 7K1000.B 0A38016 1TB, 1*80GB 2.5" boot disk brought over from old config so I can copy off files, 2*320 laptop drives for backup/boot pool, FusionIO 80GB for "zones" pool.
Battery Backup Units (if any): Liebert Gxt3 2000rt230, attached to Surgex SXN240
Operating System: SmartOS 20130111

Originally, everything was on the 6 1TB disks, which are arranged as a single raidz2 vdev; I got the 2TB disks in about a week ago and I'm in the process of verifying that they're not DOA and doing some stress tests. Then all the data will get moved off the 6*1 pool and onto the 6*2 pool, and then I'll destroy the 6*1 pool and add those disks as a second vdev to the new pool. This will give me 12TB of real capacity.
Well, what actually happened is I went out and bought 6*3TB disks and created a new pool. I sent everything over via zfs send | zfs recv, then used just the 3TB disks for a while. I converted the 1TB disks into a backup pool, in raidz3, and just recently ran low enough on space that I added the 2TB disks back into production. The main storage pool is now at 15.5TB allocated, 11.6TB free.

I have 96GB of RAM in this system; in addition to file-serving duties, it runs several virtual machines. Crashplan, Plex, and FreeIPA each have their own Linux VM, and the bittorrent client, web server, general-purpose login, and CFengine 3 server each have their own zone-based VM. 32GB was enough to scrape by when I was running ESXi, but when VMs are as convenient as they are on this system, I sorta blew past the limits pretty fast.
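For anyone who hasn't done a pool-to-pool move like that, the send/recv step is the easy part; a sketch with hypothetical pool and snapshot names:

# Snapshot everything recursively, then replicate the whole pool
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -F newpool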
 
Amount of storage in the following system: 36.8TB

Hard Drives (include full model number): 6*Hitachi HDS5C303-A580-3TB, 6*HITACHI 0F10311 2.0TB, 6*HITACHI Deskstar 7K1000.B 0A38016 1TB, 1*80GB 2.5" boot disk brought over from old config so I can copy off files, 2*320 laptop drives for backup/boot pool, FusionIO 80GB for "zones" pool.

How are you finding/liking the Hitachis? Any comments on reliability? :)
 