The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Status
Not open for further replies.
Is there a PCI-E version?

It's just for mounting, and there is also a PCI-E mount built in (universal port). I don't think the data transfers through the PCI bus, but rather through 2x SATA ports, one for each drive.
 
What's the use of SSDs in a data server? What leverages the performance of these SSDs? Just wondering!

Less heat was my main goal, plus faster response for app startup times, and I'm trying to find a way to stick another two 500GB 2.5" drives under each SSD lol.

He's not using a proper filesystem (NTFS is olddddd), but things like ZFS can put cache/logs/writes on an SSD to improve performance.
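For anyone curious how that looks in practice: on ZFS the SSD roles are L2ARC (read cache) and a dedicated ZIL device (SLOG) for sync writes. A minimal sketch, assuming a pool named tank and made-up device paths:

```shell
# Pool name "tank" and device paths are examples, not from this thread.
zpool add tank cache /dev/disk/by-id/ata-ssd-cache  # SSD as L2ARC read cache
zpool add tank log   /dev/disk/by-id/ata-ssd-slog   # SSD as dedicated ZIL (SLOG)
zpool status tank   # the cache and log devices show up in their own sections
```

Both can be removed again with `zpool remove`, so it's a low-risk experiment if you have a spare SSD.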

ZFS is not needed for the task at hand and would not work well with the hardware RAID cards. Also, I haven't really messed about with it enough yet to use it for my production work.

True enough if you are talking about your laptop or your workstation, but the question was why SSD in a file server? When your access method is Samba over a 1GbE LAN you won't see the advantages you mention above. When the server uses the SSD for its OS and only its OS, and all files are served from the HDD array, you really won't see any difference. Even if you serve files directly off the SSD, you won't see a measurable difference unless you are counting IOPS for a server with dozens or more clients. Most of the time in a file server the OS drive is used to boot the box and then pretty much not touched again except for infrequent, low-priority OS maintenance activity. And that is how almost every file server works.

Except in the case of a hierarchical storage architecture - or its functional equivalent like ZFS+ZIL, using SSDs as the OS drive for your file server provides little or no performance advantage. Being able to stuff them anywhere in the case might be interesting, but for the most part it is a complete waste of money.

Never said it was a "Fileserver"; it runs about 5-50 VMs at a time for my deployment testing, and after moving over to SSDs I noticed much smoother network transfers and VM activity. Also, a few other servers I have use this server as primary data storage (like how you would use a DAS). It's all a bit messy at the moment.

I need a minimal amount of downtime, as I also use these servers for offsite storage for my company servers. Image restore times are cut by 9/10ths by having SSDs.

If you look at the servers, I wasn't trying to go for value for money, and to tell the truth I had them spare from my render box.

What are the temps on your northbridges for the NORCO case?

Just checked: about 39°C with the system under 85% load.
 
Never said it was a "Fileserver" ...
The question posed was "what's the use of SSDs in a data server". You'll notice I answered the question and made no commentary on your particular build. That yours happened to be the last system posted and might have triggered the question is just a coincidence. For a VM host there might indeed be advantages, as you pointed out, but that was not the question being asked or answered.
 
The question posed was "what's the use of SSDs in a data server". ...

All good mate, just noticed the OP of the question quoted my post/pic. Regarding SSDs in a file/data server, I guess it still cuts down on heat and power.

Come to think of it, I haven't seen many servers here running SSDs.
 
All good mate, just noticed the op of the question quoted my post/pic. ...

There have been a couple on here, but not many. I think the main issue is just cost. It's hard for a home user to justify spending 2-3x as much for SSDs when we really don't need what they have to offer. The heat is one thing, and it would be noticeable, but honestly it would take a while for the power bill difference in cooling to catch up with the cost of the SSDs. If I had an unlimited budget I'd absolutely use SSDs though, because they are faster, cooler, and usually smaller. Just an all-around good thing.

I have, however, replaced all my laptops and netbooks with SSDs, and there is a notable and worthwhile speed difference over the crappy 5400 and 7200 RPM drives that were in there before, with better battery life as well.
 
Just upgraded my NAS:

NAS:
Amount of total storage: 40.25 TB
Amount of usable storage: 32.25TB
Case: Norco 4220
PSU: Corsair HX650
Motherboard: Tyan S5207 (Intel I3100 Based)
CPU: Intel Mobile Celeron 410
RAM: 1*2GB ECC DDR2-400 Reg
GPU: Integrated ATI ES1000
Controller Cards: Adaptec 52445
Optical Drive: ESATA External
Hard Drives:
Qty 10 - WD WD20EADS (TLER enabled) 2TB Raid 6
Qty 10 - Hitachi 5K3000 2TB Raid 6
Qty 1 - WD WD2500BEVT 250GB
Battery Backup Units: APC Smart-UPS 700VA
Operating System: CentOS 5.6 x86
Software: MediaTomb
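A note on the "TLER enabled" WD20EADS drives above: on drives that still expose it, the timeout behind TLER can be inspected and set through smartctl's SCT Error Recovery Control interface. A sketch with an example device path:

```shell
# Device path is an example; run against each array member.
smartctl -l scterc /dev/sdb        # show current read/write recovery timeouts
smartctl -l scterc,70,70 /dev/sdb  # set both to 7.0 seconds (units are tenths)
```

Worth knowing: some desktop drives (later WD Greens in particular) refuse the command entirely, which is part of why TLER-capable drives are preferred behind hardware RAID.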
 
Hey, hey, hey, everybody! Where are the pics? C'mon, I want to see these servers...
 
I'm working on my new storage server / VM build. I have most of the parts; a few are "out for delivery" today.

99% of my data is DVD and BD rips, so I think I am going to use FlexRAID's snapshot RAID for simplicity. I also have a few dev VMs (SQL, IIS, dev workstation) that will run on the system too. Right now I am going to use Windows Server 2008 R2. I was going to use ESXi and OpenSolaris/napp-it, but my RAID card is not supported under Unix.

Not in the list is an OS drive; right now it is an older 400GB drive, but it will be replaced with a pair of SSDs when I find a good deal.

I am going to end up moving this to a Norco 4220 by the end of the year.

Total unformatted space: ~14TB
I am going to use 2 parity drives, so that should leave me with about 9TB of usable space.

Intel Core i7-2600 Sandy Bridge 3.4GHz
Intel BOXDQ67OWB3 LGA 1155 Intel Q67 SATA 6Gb/s Micro ATX Intel Motherboard
2x Kingston 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333
SUPERMICRO AOC-SASLP-MV8 PCI Express x4 Low Profile SAS RAID Controller
2x 3ware CBL-SFF8087OCF-05M 1 Unit of .5M Multi-lane Internal (SFF-8087) Serial ATA Breakout Cable
NZXT H2 H2-001-BK Black Steel / Plastic Classic Silent ATX Mid Tower Chassis
3x Western Digital Caviar Green WD20EARS 2TB 64MB Cache SATA 3.0Gb/s 3.5"
4x SAMSUNG EcoGreen F4 HD204UI 2TB 32MB Cache SATA 3.0Gb/s 3.5"
Intel EXPI9301CT Desktop Adapter Gigabit CT 10/ 100/ 1000Mbps PCI-Express 1 x RJ45

I will update with pictures as I build it this weekend.
 
Hi there! I am Alexander from Belgium and here is my little home server :D
Advertised storage: 38320 GB
Formatted and RAID config: 31974.04GB

Case: Chieftec Arena
PSU: Corsair HX850
Motherboard: Asus P5B Ai AP series
CPU: Intel Q6600
RAM: 4x 1GB Corsair DDR2 800
GPU: PCI VGA 2MB :D
Controller Cards: Dell PERC 5/i and Adaptec RAID 31205
Optical Drives: none
Hard Drives:
Boot Drive: 2.5in Hitachi 7200 RPM 320GB
Raid 5a: 7x 2.0TB WD EARS or EARX series
Raid 5b: 4x 2.0TB Samsung F4 and 1x 2.0TB WD EARX
JBOD: 4x WD 2.0TB EARX or EARS, 2x 1TB Samsung F2 and 2x 2.0TB Samsung F3
Battery Backup Units: APC Smart-UPS 1000VA
Operating System: Windows 7 Ultimate

Pictures:
 
For a VM host there might indeed be advantages, as you pointed out, but that was not the question being asked or answered.
Not really... ESXi doesn't even use the boot disk once the hypervisor is loaded. I boot a couple of my hosts at home from a USB stick. When I moved my hosts to a blade chassis, I used the built-in 2.5" drives just because they're there. When they fail, I probably won't bother replacing them, opting for SSD or USB/SD-to-SATA/SAS "drives".

I really should take some pictures of my rack. I used to use a Dell 42U, but I changed to an unknown 42U because it has a cooler front door. It houses a 10-slot Dell 1955 blade server chassis (with 10 blades) plus 2 x EqualLogic PS5000Es and 2 x PS5000XVs. I configured the E's as RAID5 (15 x 1TB SATA plus hot spare) and the XV's as RAID10 (15 x 450GB SAS plus hot spare), all connected via iSCSI through Cisco 3750 Gb switches.

Not nearly as much storage as some of these monsters on here, but it works for me. Although it's pretty [H]ard having 50+ drives in a single chassis, I prefer the 16-at-a-time upgrades. It allows me to get the newest size drives without forklifting my old stuff.
 
I'm working on my new storage server / vm build ...

Let me know how that SAS card works in Server 08
 
@J-Will: The AOC-SASLP-MV8 works just fine in Server 2008 R2. It also works equally well in WHS 2011 (since WHS 2011 is based on Server 2008 R2).
 
Total Advertised: 16TB
Total Available: 12TB

Case: Cooler Master Centurion 5
PSU: PC Power and Cooling S61EPS 610W
Motherboard: Asus P7H55-M Pro
CPU: Intel Core i5-660
RAM: 2 x 2GB Corsair XMS3 DDR3 1333
NIC: Intel Gigabit
SAS HBA: IBM ServeRAID M1015
SAS Expander: Intel RES2SV240
HDD Raid 6:
5 x 2TB WD20EARS
2 x 2TB Samsung HD204UI
1 x 2TB Hitachi 5K3000
HDD Boot: 8GB SanDisk Extreme CF 60MB/s
Hot Swap: 5 x Kingwin 3.5in SATA trayless
Battery Backup: APC Back-UPS Pro 650
OS: Gentoo Linux

Kill A Watt: about 110w idle, 140w raid check
(Includes UPS and Network Switch)

YYNg1.jpg
5YOLI.jpg
vES8G.jpg

I do plan on filling up that expander :D
 
Total Advertised: 16TB ...

Do you use it just as a file server? What manages the RAID: the cards, or some sort of software RAID? Seems nice :)
 
Do you use it just as a file server? What manages the RAID: the cards, or some sort of software RAID? Seems nice :)

Thanks, there are some really great machines here!

I primarily use it as a file server and for media (movies, music, and games). For games I boot Windows from an external eSATA enclosure and can still access my RAID from a VM using VirtualBox. I've set up a public share for my roommates so they can watch all the movies I have, and upload what they have as well.

I'm using Linux software raid managed by mdadm.
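For anyone wanting to replicate it, an 8-disk RAID6 like this one comes down to a few mdadm commands; the device names below are examples, not this machine's actual layout:

```shell
# Create an 8-disk RAID6 array (two-disk redundancy) from example devices
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
cat /proc/mdstat                            # watch the initial resync progress
mdadm --detail /dev/md0                     # verify level, state, and members
mdadm --detail --scan >> /etc/mdadm.conf    # persist so it assembles at boot
```

On 8 x 2TB that leaves 6 data disks, i.e. 12TB usable, matching the numbers at the top of the post.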
 
Finally found time and pictures to update my post.
Moved all the boxes behind some wardrobes and added some sound insulation: silence and a covert spot for my servers :D
 
Let me know how that SAS card works in Server 08

It works great using the newest version of the drivers. I had an issue with a drive dropping out until I upgraded the driver, and it's been fine since.
 
Amount of total storage: 25TB
Amount of storage in the following system: 25TB

Case: Fractal Define R3
PSU: Corsair 550Watt
Motherboard: Supermicro MBD-X9SCL+F
CPU: Xeon E3-1230
RAM: Kingston 16GB DDR3 Unbuffered ECC
Controller Cards: LSI 9201-16i (16 Port), passed through to OI 151_a7 VM
Hard Drives (datastore): 2x Seagate 500GB, software RAID1 (datastore)
Hard Drives (storage): 8x Hitachi 3TB 5K3000, in a single raidz2
Battery Backup: Tripplite Smart LCD 1500VA
Operating System: VMware vSphere Hypervisor 5.1, OpenIndiana 151_a7 (storage), Ubuntu 12.10 (Plex Media Server)

It is used as a small VMware lab and to serve media to my various devices running XBMC and Plex.
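For anyone copying the storage side of this all-in-one: the 8-disk raidz2 would be created along these lines inside the OpenIndiana VM (pool name and Solaris-style device names are illustrative only):

```shell
# 8 x 3TB drives in a single raidz2 vdev (survives two disk failures)
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
                         c2t4d0 c2t5d0 c2t6d0 c2t7d0
zpool status tank   # confirm the vdev layout
zfs list tank       # ~18TB raw data capacity (6 data disks x 3TB, decimal)
```

The 25TB "total storage" headline counts raw drives; raidz2 gives up two drives' worth to parity.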
 
I have been seeing a lot of Norco rack-mounted chassis here. How do you guys like them after using them a while?
 
Total Storage: 24TB

Specs:
Mobo: Supermicro MBD-X9SCL+F
CPU: Xeon E3-1230
RAM: Kingston 16GB DDR3 Unbuffered ECC
USB: Kingston 8GB Memory Stick, running ESXi 4.1u1 Hypervisor
HDD (datastore): 160GB 2.5inch 5400rpm
HBA: LSI 9201-16i (16 Port), passed through to OI 151 VM
HDD: 8x Hitachi 3TB 5K3000, in a single raidz2
CASE: Fractal Define R3

Just finished this build up tonight and loving the ESXi all in one setup.
pics?
 
shetu@

Supermicro X8DTH-6F
Dual E5645 Xeon CPUs (hexacore, 24 threads total)
48GB RAM
Intel 510 SSD (250GB, on the 6Gbps onboard SAS controller, ~500MiB/s)
3ware 9750-24i4e
 
I'll post pics when I get a chance to clean things up a bit in my case. It was done in a rush as I was headed out of town this morning and wanted to get it to a point where I could install all the services I wanted running on it.
 
Am I finally going to take the crown of first place? For anyone who didn't see my thread:

http://hardforum.com/showthread.php?t=1624422

I got some free Supermicro SC933 chassis. I am going to add an ARC-1880x to my current system, which has an ARC-1280ML, and hook up 30x 3TB disks to it in RAID6 for 84TB usable. I will be just under 200TB in total storage and should finally take 1st place in this thread =)
 
This is my first NAS build; before this I was using an HTPC as media storage. :)

Total Advertised: 12TB
Total Available: (Still don't know, RAIDZ or not??)

Case - NZXT Source 210 Elite
PSU - Silverstone ST50F-ES
CPU - AMD Athlon X2-555 Black Edition
CPU Cooler - Freezer64Pro
Fans - 2x Enermax Magma @ Front
Mobo - Biostar TA880GB+
RAM - 4x4GB (16GB)
HDD - 1x2TB WD Caviar Black
HDD - 5x2TB WD Caviar Green
NIC - Onboard Realtek :(
OS - FreeNAS v8.01 Beta4

 
This is my first NAS built, before I was using HTPC as media storage. ...

Dang, that cabling is neat! :)
There are two free HDD spaces; get on it, chop chop! :p
 
This is my first NAS built, before I was using HTPC as media storage. ...
Truly a very good job... trying to get pointers and redo mine.
 