The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

48 TB of disks.
35.2 TB usable.
Phase 4. Last week the Chenbro CK23601 finally arrived. Got 12x new Samsung 2TB HDDs, which became a new Z2 pool; the 1.5TB WDs went into other machines. Got 6x new WD Green 2TB HDDs and they, along with the existing WD 2TB drives, became a second Z2 pool. 48TB raw, 35.2TB usable.


I thought RAID-Z2 uses two drives' worth of parity, so 1.81TB * (24 - 4) = 36.2TB. How'd you lose another TB?
 
You guys are nuts. I just put together a 4TB ZFS NAS in a 4-drive RAID-Z1 config. That should last me a while. I want to move to 3TB drives in a year or so when they get cheaper. Some people here have 20TB of data; that's a lot of porn! :p
 
Sounds fishy :|

you gotta hide that pr0n somewhere!

And as for entrance requirements: the easiest way to avoid these 'workstations' that have been posted is to leave the 10TB requirement but add that you must be using an add-on card; onboard RAID doesn't count. That would take care of all the other questions about file system, drive count, space, etc. A real storage server doesn't use onboard stuff, regardless of how much space you have. Seems like a simple fix to me, but in all honesty I really don't care. If you have such an issue with something not being [H]ardc0re, then go make your own thread with [H]arder requirements for systems and you won't get these 'workstation' posts anymore.
 

Heh, your comment irked me, so I started grabbing some zfs/zpool/df outputs to post. I didn't think the private filesystem should affect the space visible for the total pool, but I couldn't get my numbers to add up. On top of that, the Windows drive mapping now says each pool is 17.5TB when it used to say 17.6TB. (It's this capacity number from Windows' mapped drives that I based my original post on.) Fishy indeed!

Investigation was warranted.

Windows first:
It seems Windows is somehow able to subtract the amount actually in use on the private filesystem from the total tank when it reports capacity. The number goes down as I add data to /tank/private but not when I add data to /tank/public. To be looked at further later ...
unleddq.jpg


Solaris:
Code:
root@storage:~# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  8.15T  9.58T   122K  /tank
root@storage:~#
Hmmm ... 2 x (8.15 + 9.58) = 2 x 17.73 = 35.46
Where's my other 0.8TB+??

Then I found this post. (Lots of good info in this blog!).

To summarise, the available space on a RAID-Z2 pool of vdevs with size > 2MB is not ((#hdds - 2) * hddsize); it's ((#hdds - 2) * hddsize * 63/64), because ZFS reserves 1/64 of the space for itself.
I should have 1.81TB * 20 * 63/64 = 35.63TB of space (two pools of ~17.82TB each).
[And yes, I realise that the available space on a drive is closer to 1.8167TB than 1.81TB.]

The conclusion: ~0.2TB is still unaccounted for.
Further enlightenment welcome! Anyone?
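In case anyone wants to poke at the same numbers, this is roughly what I plan to run next. The zpool/zfs commands are stock Solaris; the bc line just redoes the arithmetic above (snapshots or a refreservation on the private filesystem are my best guesses for where the rest is hiding):

Code:
root@storage:~# zpool list tank            # raw pool size, parity included
root@storage:~# zfs list -o space -r tank  # AVAIL/USED split out per dataset,
                                           # incl. snapshot and refreservation usage
root@storage:~# echo "scale=3; 1.81*20*63/64 - 2*(8.15+9.58)" | bc
.174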
 


You should start drinking.... more
 
Hello

Please find below my small server (compared to your insane computers).

10TB
Case : ANTEC Mini P180
PSU : QFan TR2 450W
Motherboard : Gigabyte mH55 UD2H
CPU : i3 530 (0.95v)
RAM : 8GB DDR3 (1.3v)
GPU : Intel HD
Controller Cards : SATA3 Asrock
Optical Drives : none
Hard Drives :
1*SSD ONYX 32GB
1* SEAGATE 2TB ST2000DL003
2* SAMSUNG F3 ECO2 2TB
2* WD EARS 2TB

Windows Server 2008 R2

Consumption :
IDLE: 35-40W
FULL: 65-70W

Connected to gigabit LAN and optical fiber





From server to PC

From PC to server
 


Nice setup! Is this your motherboard -> http://www.bit-tech.net/hardware/motherboards/2010/01/28/gigabyte-ga-h55m-ud2h-review/1 ? I assume you made a typo in the model number in your specs? ;)
 
Ender, this forum lacks an obvious button to post a new thread. Yes, I am a newbie on this forum, so I need directions. Thanks!
 
my ancient file server:

217063_10150165646789084_583649083_6621619_7940621_n.jpg


specs:
Asus A8N32-SLi Deluxe
4200+ X2
2x1GB ECC DDR400
nvidia 7300LE
3x SI3114
SI3124
SI3132
IBM M1015
Lian Li PC-201B modded to hold 32 HDDs :D

HDDs:
2x 2TB (Samsung F4EG)
4x 1TB (ES.2, Blacks)
2x 500GB (RE2 & RE3)
4x 150GB (4x RaptorX)
4x 160GB (WD RE)
36GB, 74GB, 80GB Raptors
 
Joining the 10TB+ club with my humble Home Server. Currently running WHS 2011; I haven't decided on what sort of pooling I'm going to use, so for now it's single drives duplicated using SyncToy (a rough script equivalent is sketched after the spec list below).

sli90y.jpg


5upkph.jpg


wcgsw7.jpg


Specs:
Gigabyte GA-880GA-UD3H Rev 3.0
AMD Phenom II x4 840
G-Skill 2x4GB DDR3-1333
SiI 3124 SATA Card (4 port)
Intel PRO Dual Gigabit PCI-X NIC
LG DVD-RW
OCZ Fata1ty 550W
Coolermaster Centurion 590
Supermicro 3x5 Hot Swap Bays (x2)

Hard drives:
1 x 640GB (WD Black - System)
1 x 1.5TB (Seagate Barracuda 7200)
3 x 1.5TB (WD Green EADS/EARS)
4 x 2.0TB (Samsung EcoGreen F4)

Total Storage: 13.28TB
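For reference, this is roughly what one SyncToy "Echo" pair boils down to as a scheduled command. robocopy ships with WHS 2011, and the drive letters, paths, and log location here are made up for illustration:

Code:
rem Mirror D:\Media onto the duplicate drive E: (like SyncToy's Echo mode).
rem /MIR also deletes files from E: that are gone from D:, so point it carefully.
robocopy D:\Media E:\Media /MIR /XJ /R:1 /W:5 /LOG:C:\Logs\mirror-media.txt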
 
Thanks, just rebuilt it today with the new motherboard/CPU/memory and I tried to make it as neat as possible without going crazy. It's always fun routing 12 SATA cables!
 
Amount of advertised storage: 35TB

Case: Norco 4020
PSU: Corsair HX1000
Motherboard: Supermicro X8SAX
CPU: i7 920
RAM: 12gb CORSAIR XMS3 1333
GPU: Geforce 7300
Controller Cards: 2 x AOC-SAT2-MV8
Hard Drives: 1x CTFDDAC064MAG, 10x ST31500341AS (two RAID 5 arrays), 10x HD204UI (9-drive RAID 5 with 1 spare)
Battery Backup Units: APC XS 1300
Operating System: Openfiler VM inside Windows 7

I originally built this system in March 2009 (with 5 drives) with the intention of running VMs from it. At the time, I didn't feel like doing a bunch of testing with VMware, and decided to install Openfiler on the bare hardware. A couple of months later, I decided to add more drives (5 more). Still not wanting to mess with migrating it to a VM, I left it on the bare hardware.

2011 comes along, I'm slowly running out of space on the two RAID arrays, and I figure I need to do something about it. I ordered 10 2TB Samsung drives and the C300 and began testing Openfiler inside a VM. Results were surprisingly comparable to running on the bare hardware, so I created a large array with the new drives and copied all the data over (as the Seagates are nearing their life expectancy).

Sorry about the mess, lol.
nas.jpg


nasscreen.jpg


Thanks for looking
-Erik
 
Joined the 10TB+ club a while ago, but have neglected to add myself to the list :(

Total drive space: 13.5TB
Available space: 2.3TB

Case: Chieftec DA-01BD Black Dragon
PSU: Coolermaster 380W (older than any other system I currently have, but it's still humming along like a champ!)
Motherboard: Gigabyte GA-M720-US3 ATX
CPU: AMD Athlon64 X2 4400+ (undervolted)
RAM: 2x 2GB OCZ Gold DDR2-800MHz
GPU: Sparkle PCI 1MB (VGA only!)
Controller Cards: 2x SiI3114 (PCI to 4-port SATA150)
Hard Drives: 2x Samsung HD501LJ 5400rpm (system + scratch), 4x Samsung HD154UI 1.5TB 5400rpm, 4x Hitachi 7200rpm, 2x HD203WI 5400rpm
NIC: Onboard Realtek (YUCK!) :eek:
Operating System: Microsoft Windows Home Server V1

Definitely no pictures, as the insides are a bomb site, especially since this case was NEVER designed to be run in this configuration :eek::eek:

It was primarily built to serve as a media server back when WHS v1 was first released, but then everything else started to come along: torrent/usenet download server, remote access server (via VPN and SSH), and individual VMs run occasionally under VirtualBox. Now it's really starting to show its age :( The controllers are absolutely pants, giving me a maximum of 20MB/s across a gigabit network, but that's also down to the drives being full/unbalanced, something WHS has struggled to do anything about, as most of the drives are pegged at 97% with the others around 70%. It was definitely built on a budget!!

However, in the next couple of months she's going to be decommissioned in favour of a shiny new Ubuntu-based RAID-6 server :D :D Everything will be served via NFS to other Linux and Windows 7 systems. Remote access and download functionality will be moved off onto a dedicated ITX-based system....
 
Storage is spread over two servers: one is the main box, the other really just does backups.

Server1 49TB
Dual L5630
Supermicro X8DTH-IF
Crucial 24GB buffered
Norco 20-bay case
Zalman ZM850-HP
2x Intel 320 40GB raid1 OS (Windows 2008r2, Hyper-V & WHS11)
2x Samsung 500GB 2.5" raid0 (Work/Temp drive)
Adaptec 31605
12x Samsung 2TB Eco (two RAID 5 arrays)
Adaptec 51245
8x Samsung 2TB Eco (RAID 5 with hot spare)
External SAS: 4x Samsung 2TB Eco (RAID 5; need to find an expander that works with this for more drives)

Server2 20TB
E3-1260L
Supermicro X9SCM-F
Crucial 16GB ECC
Supermicro SC835
Supermicro 920W 80 Plus Platinum
Crucial M4 64GB OS (Windows 2008r2)
2x Samsung 2TB
LSI 8708ELP
8x Samsung 2TB (RAID 0+1; OS backups, important files)

IMG_0385.JPG


IMG_0393.JPG


IMG_0395.JPG


IMG_0275.JPG


IMG_0276.JPG


IMG_0277.JPG
 
This looks like a backplane - but maybe the Scythe Slot Rafter is an option for you?
 
Hi,

Here is my µATX NAS:

17.6TB

CPU: AMD Fusion APU E-350 1.60GHz @ 1.65GHz
Motherboard: Asus E35M1-M Pro
RAM: 1x4096MB G.Skill Extreme Series RL PC10600 1333MHz
Case: Lian Li PC-A04B
Fans: 3x Be Quiet Silent Wings 120mm USC
PSU: Cooler Master Silent Pro M 500W
Storage controller: Dell PERC 6/i Integrated 256MB w/ BBU
Storage controller: SATA/IDE JMicron JMB363 PCIe x1
NIC: 1x Realtek Gigabit PCIe integrated + 1x Realtek 100Mbit PCI
Case accessory: Lian Li BZ-501B
Storage: 2x SSD OCZ Onyx 32GB
5x 2TB Western Digital Caviar Green 64MB cache
4x 1.5TB Western Digital Caviar Green 64MB cache
1x 1.5TB Western Digital Caviar Green 32MB cache
OS: Windows 7 Professional SP1 64-bit
Optical storage: HL-DT-ST GSA-T50N + Akasa AK-SDEN-01
UPS: iDowell iBox
Remote: SoundGraph iMon Knob
Monitors: Dell 30" 3008WFP + Dell 22" P2209WA
Speakers: Logitech Z5500
Keyboard/Mice: Logitech Illuminated Keyboard, Razer DeathAdder

I use this system as a NAS and HTPC (with XBMC).

Some pictures:








Screenshots:





Tips :) :

In case I lose remote access to the NAS when I'm away from home, I bought a Seagate DockStar and hacked it by installing Debian Squeeze on it.
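The fallback path is nothing fancy, just a second way into the LAN over SSH. The hostname and the NAS address below are placeholders:

Code:
# From outside: reach the DockStar first, then hop to the NAS over the LAN.
$ ssh user@dockstar.example.dyndns.org
dockstar:~$ ssh administrator@192.168.1.10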

In order to use the same keyboard/mouse on my 2 computers, I bought this 4-port USB switch from Connectland:


PS: Sorry for my English ;)
 
What's the use of SSDs in a data server? Where does the performance of those SSDs actually get leveraged? Just wondering!

 

Personally speaking, SSDs are attractive for a file server primarily due to two things: size and reliability. They're small drives, so you don't have to lose a hot-swap bay or two just for the OS, and they're solid state, so you don't have to worry as much about drive failure since mechanical failure isn't a concern.
 

Neither of these is an advantage over current-generation "spinny" 2.5" drives. 2.5" drives are exactly the same size, and current-generation SSDs are less reliable than current-generation 2.5" spinny drives, even with the mechanical factors. That will change as the technology improves, but the "limited writes" and other issues still make SSDs slightly less reliable overall.

Two advantages SSDs do have are speed and lack of heat/vibration. The "speed" part plays two roles for a file server: one is startup time, since you generally want your file server to boot up "first" from a cold start. The other is as a dedicated log or cache device (e.g., ZFS's ZIL) or as fast-access storage in a multi-level storage hierarchy.

The heat/vibration advantage is that you have more options for where to place them in the server case.
 
Even entry-level SSDs (writes up to ~40MB/s) offer way better performance than any HDD. During compilation the HDD is usually the bottleneck of everything; having an SSD takes access time out of the equation. That alone, plus the space saved by being able to put it almost anywhere in the case, is worth every penny.
 
He's not using a proper filesystem (NTFS is olddddd), but things like ZFS can put cache/logs/writes on an SSD to improve performance.
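For example, adding a dedicated intent log and a read cache to a pool is a one-liner each. These are the standard zpool commands; the pool and device names are placeholders:

Code:
# SSD as separate intent log (ZIL/SLOG) to speed up synchronous writes
zpool add tank log c2t0d0
# SSD as L2ARC read cache
zpool add tank cache c2t1d0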
 

True enough if you are talking about your laptop or your workstation, but the question was: why SSDs in a file server? When your access method is Samba over a 1GbE LAN, you won't see the advantages you mention above. When the server uses the SSD for its OS and only its OS, and all files are served from the HDD array, you really won't see any difference. Even if you serve files directly off the SSD, you won't see a measurable difference unless you are counting IOPS for a server with dozens or more clients. Most of the time, the OS drive in a file server is used to boot the box and then pretty much not touched again, except for infrequent, low-priority OS maintenance activity. And that is how almost every file server works.

Except in the case of a hierarchical storage architecture, or its functional equivalent like ZFS+ZIL, using SSDs as the OS drive for your file server provides little or no performance advantage. Being able to stuff them anywhere in the case might be interesting, but for the most part it is a complete waste of money.
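The back-of-envelope numbers behind that, with typical real-world SMB figures as my own rough estimates:

Code:
$ echo "scale=1; 1000/8" | bc     # 1Gb/s wire speed in MB/s, before overhead
125.0
# Real-world SMB tops out around 100-115MB/s, which one or two modern
# spinning disks can already saturate, let alone a whole array.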
 
I have to disagree that SSDs are less reliable than mechanical drives, particularly drives from Intel, which is widely regarded as the front-runner in that domain.
 
Amount of total storage (if posting multiple systems): 43.28TB
Amount of storage in the following system: 8.360TB

Case: Thermaltake ArmorPlus (Armor+) VH6000BWS
PSU: Corsair CMPSU-850TX (850W)
Motherboard: Gigabyte GA-880GA-UD3H
CPU: AMD FX-8150 @ 4.2GHz
CPU Cooler: Noctua NH-D14
RAM: G.Skill Ripjaws 16GB (4x 4GB)
GPU (if discrete): Radeon 7870 2GB
Controller Cards (if any): Syba 4-port SATA controller
Hard Drives (include full model number):
Crucial M4 128GB SSD (system drive)
Crucial M4 128GB SSD (gaming drive 1)
OCZ Vertex 2 120GB SSD (gaming drive 2)
Western Digital WD10FALS x2 (2TB)
Western Digital WD20EARS x3 (6TB)

Operating System: Windows 7 Ultimate

This is my main PC, used primarily for gaming. I also use it for ripping and encoding, using HandBrake to encode movie VOBs. The storage on this PC is where the initial ripping and encoding takes place before everything is transferred to my server.
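A rough command-line version of that encode step, with made-up paths; the preset name is one of HandBrake's stock presets from this era:

Code:
rem Encode a ripped DVD folder to MP4 with HandBrake's CLI build.
HandBrakeCLI -i D:\Rips\MOVIE\VIDEO_TS -o D:\Encodes\movie.mp4 --preset="High Profile"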

Amount of storage in the following system: 28.670TB

Case: Lian Li 343B
PSU: Corsair 850W
Motherboard: Gigabyte GA-MA790GPT
CPU: AMD Phenom II X6 @ 3.2GHz
CPU Cooler: Noctua NH-D14
RAM: G.Skill Ripjaws 16GB (4x 2GB)
GPU (if discrete): onboard
Controller Cards: 2x LSI 8708EM2 MegaRAID with BBUs
Hard Drives (include full model number):
Crucial M4 64GB SSD (boot drive)
Western Digital WD30EZRS (3TB)
Western Digital WD20EFRX (Red) x7 (14TB)
Western Digital WD20EARS x2 (4TB)
Western Digital WD20EARX x2 (4TB)
Western Digital WD20EADS x1 (2TB)
Western Digital WD10FALS x1 (1TB)
Western Digital WD3200AAKS x2 (640GB)

Operating System: Windows 7 Ultimate

This is my server. It hosts all my media to various PCs, and I use it for backing up all the PCs in the house and acting as a NAS.

The remainder of my storage is USB (6.250TB):
2x 2TB Western Digital WD20EARS
1TB Western Digital Passport drive
500GB Toshiba external drive
750GB external 2.5" drive

Pics of New Server Case Build (6/9/13)







 