The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Well, somehow I knew it would be an Addonics SiI-based 3114 clone. That's probably the best "dumb" controller available for the PCI bus. I mean, 4 ports, PM support, per-drive LEDs... The only thing missing really is the dedicated parity engine (which isn't even necessary for WHS use...).

As for the RAID/non-RAID option, I believe cards based on this chip can be flashed with one of two firmware files, one with RAID support, the other one without.

Cheers.

Miguel
 

Yeah, they work great just for adding some extra SATA ports over PCI; I've never had an issue with them.
 
Simple, I leave the server on 24x7 (I turn my gaming rig off at night and restart it too frequently to run VMs for my liking), and when I am playing a game I know that 100% of the resources are available to the game. The server also draws less power, and I can remote connect to it from my gaming rig, so it works just as well for me.

Thanks for the information. What kind of activities are you doing with the virtual machines? I'm not sure what kind of computing power I'm going to need for the file server I'm thinking of building, and I'm not sure what benefits virtualization would offer the average Joe like me. Are you just testing images or software through virtualization, or something else?
 
If anyone is interested, I upgraded my entry. Only 1 TB of SSD storage was added to the total pool (I will be upgrading my main rig to 2TB drives soon, adding an additional 40 TB).

I updated the pics since upgrading my home rig (CPU/mobo, etc.) and also did a complete make-over on my server. It's basically a totally new machine, except for the RAID controller/drives, which were swapped over to the new machine. It kind of looks funny now as it's got a crazy number of RJ-45 cables going into the back of it =)

Updated post link:

http://hardforum.com/showpost.php?p=1033721843&postcount=7
 
Question: would these count combined toward the 10 TB?
One has 5x 1TB SATA drives; the second (a disk array) has 16x 400GB SATA drives.

I've got a server with an eSATA card connected to a MobileRAID MR5CT1: http://www.sansdigital.com/mobileraid/mr5ct1.html

I'm looking at a 2nd system with a similar amount of storage, 16x 400GB SATA drives via FC (5-6 TB); I haven't decided if it will connect to the same server or not. (This one is at work now.)

See pics of the two Openfiler disks (formatted space):




EDIT: total rated/labeled disk space would be 11,400 GB (5x 1,000 GB + 16x 400 GB).
 
If anyone is interested, I upgraded my entry. Only 1 TB of SSD storage was added to the total pool (I will be upgrading my main rig to 2TB drives soon, adding an additional 40 TB).

I updated the pics since upgrading my home rig (CPU/mobo, etc.) and also did a complete make-over on my server. It's basically a totally new machine, except for the RAID controller/drives, which were swapped over to the new machine. It kind of looks funny now as it's got a crazy number of RJ-45 cables going into the back of it =)

Updated post link:

http://hardforum.com/showpost.php?p=1033721843&postcount=7

Wow, nice, particularly the colo. Why does the colo need so many different network connections? And does that main machine really need 48GB of memory for file serving anime? :p
 
The colo box only has 24 GB of RAM (my home rig, which also has more storage, is the one with 48 GB).

Basically, I only use the gig connection for hosting some speedtest servers and my game servers, and I don't actually push that much traffic through it. Before, I had 4x 100M links to my server, with 3 of them bonded (802.3ad) and the other one dedicated to my game servers. The majority of traffic goes over the bonded interface.
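
For anyone curious what the bonding looks like on the Linux side, it's something along these lines with iproute2 (interface names and the address here are just placeholders, and the switch ports have to be configured for LACP as well):

Code:
# create an 802.3ad (LACP) bond and enslave the three 100M links
ip link add bond0 type bond mode 802.3ad
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth3 down && ip link set eth3 master bond0
ip link set bond0 up
ip addr add 203.0.113.10/24 dev bond0   # placeholder address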

Because I already had the 4x 100M links run to my server (before I upgraded my game-server port to gigabit), I decided when upgrading to just get the 4-port NIC (it wasn't that expensive) and use the existing cables, so I went from 4 ports to 6. The yellow cables are for the private network. I didn't have my server hooked up to it before, but decided to do it now since I had an extra link even after doing the 4x 100M. The other two private links are for IPMI and my Areca controller. I didn't really want those to be public, and the motherboard has a dedicated port for IPMI (instead of using one of the onboard NICs like my home machine does).

Because of this my server basically has 8 RJ-45 connections going into it, which makes it look a bit crazy =) If you count the serial port that gets converted to run over a Cat5e/6 cable, that makes it 9.

The color coding is:
Yellow (private network)
Green (public/WAN 100m)
Purple (in this case public/WAN 1000m)
Pink (goes to serial adapter, for serial console)

And I just realized in this pic:



You can actually see my old case/mobo/CPU/RAM from my old server (out of focus) on the right side of the pic, hehe. I took out the hot-swap fans so I could easily disconnect the SATA cables from the backplane. I did the same on the new server and then *forgot* to put them back in. Doh! Luckily I realized it before running the machine for more than 2-3 minutes.
 
I take it this is not hosted in your home.
 
The colo box only has 24 GB of RAM (my home rig, which also has more storage, is the one with 48 GB).
I understand that - I meant why does your home rig need the 48GB?

Ok so pretty complex with the networking :p When you say 'private network', is that between several machines of yours in the rack in the datacenter?
 
It doesn't *need* 48 GB, but it's nice to have everything fit in RAM while PAR-repairing and RAR-extracting a 20 GB download off Usenet.
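
(If you haven't played with Usenet downloads: the workflow is basically a PAR2 verify/repair pass followed by unRAR, roughly like this, with hypothetical filenames:)

Code:
# check the download and repair any missing/damaged blocks
par2 repair download.par2
# then extract the multi-part archive set
unrar x download.part01.rar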

The only machine that is mine in that rack is the 2U that is labeled houkouonchi/sandon. It's colo'd where I work (I get free colo).

The rack my server is in (and the one to the right of it) are the 'employee' racks. The private network is just a 10.0.0.0/8 network that is only accessible through a VPN to my work or an SSH tunnel through one of our servers that has both a private and a public network interface (almost all do).
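
Getting at it over SSH is just a local port forward through one of the dual-homed boxes, roughly like this (hostnames and the private address are placeholders):

Code:
# forward a local port to, say, the IPMI web UI on the private 10/8 network
ssh -L 8443:10.0.5.20:443 user@public-gateway.example.com
# then browse to https://localhost:8443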
 
Cool, thanks for bothering to explain ;) Needless to say I'm jealous.
 
I have updated my system; I removed a controller and some drives since I don't need that much space just yet. http://hardforum.com/showpost.php?p=1035294247&postcount=671
Sweet rig(s)! And very clean-looking to boot. Nice!

Now, a little nagging (you can't be perfect, right? :p Just kidding).

I think something is "off" in your description of the server: you say you have two arrays on your ICH10R. I have no problem whatsoever with the 4-drive RAID5 array, but it sounds really odd that you've been able to fit another RAID5 array on that controller (especially a 2-drive one), seeing as the ICH10R only has 6 ports... My guess is you meant to say it's a RAID0 (or RAID1, I couldn't figure that one out) array, right?

I know, I know, I'm being picky. Don't mind me, the rest is just impressive!

Cheers.

Miguel
 
It doesn't *need* 48 GB, but it's nice to have everything fit in RAM while PAR-repairing and RAR-extracting a 20 GB download off Usenet.

The only machine that is mine in that rack is the 2U that is labeled houkouonchi/sandon. It's colo'd where I work (I get free colo).

The rack my server is in (and the one to the right of it) are the 'employee' racks. The private network is just a 10.0.0.0/8 network that is only accessible through a VPN to my work or an SSH tunnel through one of our servers that has both a private and a public network interface (almost all do).

Cool, was wondering how much that colo space was costing, heh.
 
I don't have time to update right now. I'm going to see if someone else wants to handle the updates, and if not, I'm just going to get rid of the list and let everyone just post their systems.
 
I volunteer to maintain it, if it's possible to transfer the post to me?
 
longblock454

I just read your sig, and I almost died.
You, Sir, are awesome :)
 
Welp, here is my "10TB+ system".
9.3TB formatted, 12TB advertised.

Code:
/dev/md0:
        Version : 1.01
  Creation Time : Fri Feb 26 17:21:23 2010
     Raid Level : raid5
     Array Size : 9767569920 (9315.08 GiB 10001.99 GB)

AMD Phenom II X3 720
GIGABYTE GA-MA790XT-UD4P
4GB G.SKILL DDR3 1333
Corsair 550Watt
ATI 4550 HD Video card (fanless)
ZALMAN MS1000-HS2 (mmm 6 hotswap slots)
6x SAMSUNG Spinpoint F3EG HD203WI 2TB 5400 RPM
Arch Linux 64bit

Software raid, which is fine because the performance is still pretty good.
Code:
/dev/mapper/array-media: (raid)
 Timing cached reads:   7560 MB in  2.00 seconds = 3782.22 MB/sec
 Timing buffered disk reads:  674 MB in  3.00 seconds = 224.56 MB/sec

/dev/sdb: (SSD, for comparison)
 Timing cached reads:   6992 MB in  2.00 seconds = 3499.09 MB/sec
 Timing buffered disk reads:  484 MB in  3.00 seconds = 161.20 MB/sec
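
(Both readouts above are just the usual mdadm and hdparm invocations, e.g.:)

Code:
mdadm --detail /dev/md0                # array details
hdparm -tT /dev/mapper/array-media     # read benchmark on the array
hdparm -tT /dev/sdb                    # same benchmark on the SSD for comparison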



Mostly this is for my media center. The video card is so I can hook it up to my TV and play movies/TV/music. When the Popbox or some other media box comes out, I'll hook it up to that. It also serves NFS/FTP for backing up some other stuff.

Pictures:
jVA9A.jpg

p7ow8.jpg
 
That should be 12TB advertised (6x2TB) + the size of the system drive :)

Edit: I also hope you have a cold spare, or can get one fast, because RAID5 with 2TB drives is IMHO a bit like playing with fire...
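
If you do want the extra margin, RAID6 on the same six drives only costs one more drive's worth of capacity; a rough mdadm sketch (device names are placeholders):

Code:
# create a 6-drive RAID6 array (tolerates two simultaneous drive failures)
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]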
 
The List has been updated. Dead posts removed and all the modifications/additions I could find have been posted.

zydas, it appears you're actually 12TB; if true, please update your post and I'll get you on the list.
 
Hey Zydas,

How are the Samsung F3s? I'm debating whether to go with the Samsung F3EGs or the WD20EADS for my build, but there haven't been too many reviews of the F3EGs.
 
Specs:
P4 3.06 GHz on Asus mobo
1GB RAM
Server 2008
Intel PRO/1000 server NIC
Supermicro AOC-SAT2-MV8 card

Drives:
2x 250GB IDE (OS + torrent landing zone)
2x Hitachi 2TB
2x Seagate 1.5TB
1x WD Green 2TB
1x WD Green 750GB
1x Maxtor 160GB

10TB_4.png

10TB_5.JPG

10TB_2.JPG

10TB_6.JPG

10TB_3.JPG

UPS has server, router, modem, and laptop on it
10TB_1.JPG

10TB_7.JPG
 
Hey Zydas,

How are the Samsung F3s? I'm debating whether to go with the Samsung F3EGs or the WD20EADS for my build, but there haven't been too many reviews of the F3EGs.

I've got no complaints about them. They're pretty quick for 5400 RPM, and no errors on the ones I got. I've heard of people having some problems with the earlier 2TB WD 'green' drives, so I figured I'd steer clear. Though I have a 1TB WD somewhere that's been fine for a year.
 
12tb2.jpg


Not the largest in this thread for sure, but pretty decent for a 25-year-old grad student who should be spending his money on other things.

It's running in the original Cooler Master Stacker, which I've had since 2005, I want to say. Sturdy as hell case. To think, I started off with a dual-CPU P3 system and 8x 200GB drives in hardware RAID5 back then... oh, how times have changed. I still remember posting in the "post your 1TB+ system!" thread and feeling somewhat proud to be one of the few at the time. Running a Q6600 with 4GB of RAM now. Not super impressive, but it works fine.

2x WD 640GBs, 2x WD 1.5TBs, and 2x Samsung 1.5TBs running on the Intel SATA controller; 4x WD 1TBs running on a HighPoint RocketRAID 2310 (4x SATA II over PCI-E x1); and 2x WD 1TBs running on some Sil3112 2-port SATA I controller that's supposed to be used in Macs.

Probably done on expansion for a while. Every slot in the Cooler Master 4-in-3 bays is full.
 
Read the first post to get the format right, then we can get you on the list as well.

And, no pics of the actual server? tsk tsk :p
 
11.75TB
10.95TB

Motherboard: Intel DP55WG
CPU: Intel i7-860
Case: NORCO RPC-4020 4U Rackmount Server Case
PSU: Corsair CMPSU-750HX
Rack: StarTech 4x4
HBA: 2x Supermicro SASLP-MV8
OS: WHS
HDD: Samsung 750GB x 1, Seagate 1.5TB x 4, WD20EADS x 2, and WD10EADS x 1

So here is my attempt to throw my hat into the ring.

I started with this li'l 4-bay Chenbro case and a modified wireless switch:
mg2097.jpg

And turned it into the following:
mg2133.jpg

39686428.jpg

whsdisks.jpg

I haven't purchased all of the drives for it yet, as I don't have it all set up and am still playing with it before it replaces my Chenbro setup. The first thing I did was replace those stock screamers with some quieter fans. I'll be increasing its workload soon, once I decide whether I'm going to hold my breath for Vail to be everything I need or just go with RC2.
 
servers.jpg


Left Norco: Vault

Advertised space: 20TB
Usable space after format and with parity: 16.3TB

Motherboard: Supermicro C2SBA+II
CPU: Celeron 440
Ram: 2x 2GB DDR2 800
Case: NORCO RPC-4220
PSU: SeaSonic S12 430w
HBA: 2x Adaptec 1430SA, 1x SIL3112 x1 card. (A SASLP-MV8 will eventually replace a 1430SA for 20 ports; it's now supported in unRAID.)
OS: unRAID 4.5.3
HDD: WD20EADS x 10

Vault_unRAID_Server.png


Right Norco: Tower

Advertised space: 20.15TB
Usable space after format and with parity + cache: 17.2TB

Motherboard: Supermicro X7SBE
CPU: Core 2 E8400
Ram: 2x 2GB DDR2 800
Case: NORCO RPC-4220
PSU: SeaSonic S12 600w
HBA: 2x Supermicro AOC-SAT2-MV8s
OS: unRAID 4.5.3
HDD: WD10EADS x 20, WD1500HLFS x 1

Tower_unRAID_Server.png


Mini-ITX: Chenbro

Advertised space: 8.06TB
Usable space after format: 7.5TB

Motherboard: Intel DQ45EK
CPU: Core 2 E8400
Ram: 2x 2GB DDR2 800
Case: Chenbro ES34069-BK-180
PSU: 180w
HBA: eSATA-to-SATA cable routed inside for the 5th port
OS: Server 2008 R2 hosting Hyper-V VMs. (No VMware ESXi support for the mobo.)
HDD: WD20EADS x 4, OCZ Vertex 60GB x 1

I'm in the process of splitting the 14 2TB drives between the two Norcos to hold my duplicated data, and selling the 21 1TB drives. The Chenbro will get my unused 1.5TB Seagates as replacements for the lost 2TBs.

The backup Norco will eventually be off-site and connected through a dedicated 54g CPE. Right now it's woken on LAN and rsynced over SSH once a week.
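
The weekly cycle is nothing fancy, roughly a wake-on-LAN followed by rsync over SSH along these lines (the MAC address, hostname and paths are placeholders):

Code:
wakeonlan 00:11:22:33:44:55          # wake the backup Norco
sleep 120                            # give it time to boot
rsync -avz --delete -e ssh /mnt/vault/ backup-norco:/mnt/vault/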
 
Romir, I need total advertised space and the most in a single chassis at the top of your post. There is a good outline in the first post; then I'll get you on the list.
 
Hi Guys

This is my first post :) I have been on overclockers.com.au for a while now; if anyone posts over there you might have seen me.

Before I post my rig and total storage, I just have a quick question:

Does tape storage count towards total storage? I have an LTO3 tape drive and 40x 400GB tapes (personally).
 