The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

As far as I am aware, the issue with ext4 is that while the filesystem itself will go up to 1EiB, the tools will not go past 16TiB. So the question is, how do the two of you administer a volume that is greater than 16TiB?

Right now the largest we have is 45TB; it comes from a SAN volume and was formatted with the typical mkfs.ext4 options.
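
If you want to sanity-check what the tools actually produced, the superblock will tell you; a quick example (the device name is just a placeholder):

Code:
# Block count x Block size = the real formatted capacity
dumpe2fs -h /dev/sdX1 | grep -Ei 'block (count|size)'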
 
Finally upgraded my system and moved off WHS, now I'm using a simple zfs pool in FreeNAS.

Specs:
Lian-Li Q08B
Supermicro X7SPA-HF-O (Atom D510)
4GB G.Skill DDR2 667MHz
6x3TB WD GP drives in two raidz1 vdevs
[image: 6bs53.png]


While the file space is fairly similar to my old system, since I'm no longer using WHS I'm not wasting space on duplication, so it's actually a larger server. Plus it's all inside one case now, making it way smaller. :p
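
For the curious, a pool laid out like that - two three-disk raidz1 vdevs striped together - comes out to roughly this from the CLI (pool and disk names are placeholders; I built mine through the FreeNAS GUI):

Code:
# two raidz1 vdevs of three disks each in a single pool
zpool create tank raidz1 da0 da1 da2 raidz1 da3 da4 da5
# confirm the layout
zpool status tank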
 

What kind of transfer speeds are you seeing for disk-to-disk copies and server-to-desktop transfers?
 

I'm not sure how to measure disk-to-disk (I am very much a ZFS/FreeNAS noob :p), but from my old server (now running Server 2008 R2) I'm averaging about 50-60MB/s over Ethernet to the new box, while only using maybe 30% CPU utilization (the Intel NICs on the Supermicro board really help with the speed).
 
Just installed the 120x12mm fan on top of the six HDDs -- temps at load dropped from 41-44C to 25-27C... much better :)

Given the compact case and limited airflow, that's not too bad at all! High side of acceptable at 40+, but not too bad.

It'll be interesting to see how it holds up this summer when ambient is a bit higher (18C? I take it you like sweaters - or perhaps a Snuggie?).

Also installed the Ceton InfiniTV 4 :D
 

Those are some nice transfer speeds. I'm always curious how well low-power setups perform :)

I haven't worked with FreeNAS so I'm not sure if it's set up with an SSH server by default, but if you have one running you can SSH in and copy a file with:

Code:
cp /path/to/file/filename /path/to/file/newfilename
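
If you want an actual throughput number out of that, dd will print one when it finishes (paths are placeholders):

Code:
# copy in 1MiB chunks; FreeBSD's dd (which FreeNAS uses) reports bytes/sec
# on completion, and Ctrl+T shows progress mid-copy
dd if=/path/to/file/filename of=/path/to/file/newfilename bs=1m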
 

Getting back to this late, but I have another question. What distro are you running on your server(s)? I saw a post as recent as October 2010 where e2fsprogs gave this error: "Size of device /dev/sdf1 too big to be expressed in 32 bits". This was on a 24TiB volume.
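
That error lines up with the 32-bit limit: with the default 4KiB block size, 2^32 blocks x 4KiB = 16TiB, so a 24TiB device simply can't be expressed in a 32-bit block count.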

I'm not trying to belabour this issue, but it'll help me decide what filesystem to use for a future large-capacity server.
 
I guess I can finally post here. lol

Windows Home Server
5x Samsung F4s

[image: DSC_0013.jpg]


I'm digging these fans.... They're extremely quiet, move a good amount of air, and are at MC for $2.99.

[image: DSC_0267.jpg]
 
@delvryboy

Did you upgrade the firmware on the Samsungs? If not, then you definitely should. See here for more detail.
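
If you're not sure which firmware a drive is on, smartctl will show it (the device name is just an example):

Code:
# prints model, serial and firmware revision from the drive's identify data
smartctl -i /dev/sda | grep -i firmware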
 
Need to add a few more drives before I crack 10TB

Norco 4020 w/ 8x1TB 7200RPM drives (6 Seagate and 2 Hitachi) :eek:
 
My total is 13TB without OS drives. My apartment is too small to install a single storage server, so I have my storage distributed among 3 different computers. Would this disqualify me from posting my rig?
 
I have the same board. It's good, but network I/O is 50-70MiB/s with Samba - poor network speed.
 
Aerocool BayDream Miditower Case
Corsair HX 650W
Gigabyte GA-G33M-DS2R
Intel E2160
2GB DDR2 memory
Areca ARC-1880ix-16
8x Samsung F2 1.5TB @ RAID6
4x Samsung F4 2TB @ RAID5
4x Seagate 7200.11 1.5TB @ RAID5
Windows Server 2008 R2
 
My overkill WHS server upgrade...

[image: dsc0039cg.jpg]


AMD Athlon II X2 250
Asus M4A88T-M/USB3
4GB G.Skill DDR3 1333
1x WD 640GB Black (AALS)
4x WD 2TB Black (FASS)
Corsair 600T
Corsair HX620
 
What's that board like? I'm looking at one for my NAS.

I personally have been really happy with it. Performance isn't bad (it is an Atom, after all), and the built-in IPMI made setting everything up really easy; loading ISOs over the network is awesome compared to having to mess around with actual discs.
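
For anyone who hasn't used IPMI, a rough sketch of what the BMC lets you do remotely with ipmitool (the address and the factory ADMIN/ADMIN credentials here are assumptions - change them on a real box):

Code:
# read fan speeds, temps and voltages from the BMC
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sensor list
# check or control power without touching the machine
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN chassis power status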
 
Cheers - that 50-70MiB/s is 30-50MiB/s faster than my ReadyNAS Duo gets currently :rolleyes:

That's likely just because of Samba. Even on fast machines I have never seen more than 30-40 megabytes a second on Samba.

I use that same board on my router box:

Code:
admin@zeroshell: 07:12 PM :~# dmidecode | grep -i name
        Product Name: X7SPA-H
        Product Name: X7SPA-H
admin@zeroshell: 07:12 PM :~#

I use it for bonded VPN and it can handle about 90-100 megabits of VPN traffic per core. Via http and ftp I have no problems maxing out gigabit off this box:

Code:
root@dekabutsu: 07:13 PM :~# wget -O /dev/null http://1.1.1.1/2gb.bin
--19:13:23--  http://1.1.1.1/2gb.bin
           => `/dev/null'
Connecting to 1.1.1.1:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1,992,294,400 (1.9G) [application/octet-stream]

100%[============================================================>] 1,992,294,400  111.68M/s    ETA 00:00

19:13:40 (111.48 MB/s) - `/dev/null' saved [1992294400/1992294400]

root@dekabutsu: 07:13 PM :~#

Using a multi-threaded version of ssh/scp I am able to get around 40 megabytes/sec over scp; the only reason it's that slow is that the encryption uses a lot of CPU, just like VPN.
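
A common workaround when scp is CPU-bound, assuming OpenSSH, is to pick a cheaper cipher:

Code:
# arcfour is far lighter on the CPU than the default AES ciphers;
# only sensible on a trusted LAN since it weakens the encryption
scp -c arcfour /path/to/bigfile user@host:/dest/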
 
Never? Really?

I get 100-110 MB/s with Samba all the time. And I did not do anything special.

I think maybe you are the only one? I know many others who don't get good results on Samba as well. You get >100 megabytes/sec from what OS to what OS?
 
I'm also able to max out gigabit from my Solaris 10 filer to my Win 7 box with Samba.
I think the bottleneck is somewhere in the local filesystem / disks.

If I copy stuff to a Linux box on the same network I only get 30-40MB/s.
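
One way to pin down whether it's Samba, the network or the disks would be to measure raw TCP throughput first, e.g. with iperf (the address is a placeholder):

Code:
iperf -s                # on the server
iperf -c 192.168.1.10   # on the client; reports raw TCP throughput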
 
The goal of this project was to replace an aging WHS server based upon a Dell Precision 690 workstation conversion I did in 2009. I had increased the workstation's storage capacity with SansDigital TowerRaid cabinets through eSATA cards. There were several things dictating the change:
  • My expanding home media library had begun to approach the maximum capacity of WHS.
  • The workstation ran into network and drive bottlenecks when viewing HD content, especially when other tasks were active.
  • The eSATA connections were a little finicky and would come unseated from time to time.
  • While quite powerful when new, the 690 generated much more heat than processing power - consuming nearly 400 watts for the server alone.
  • While I have backups of all digital media on separate hard drives as well as the original media, I wanted better protection from a drive failure. WHS duplication would limit the maximum storage to about 30TB.
  • While WHS 2011 (a.k.a. Vail) is still in development, I wanted to prepare for the inevitable move in the future.
I remain a fan of WHS for a number of reasons and it is likely I will deploy WHS 2011 sometime this year. I decided the best solution was to move all of my media files, consisting of DVD and BluRay rips, to a separate storage system. Windows Server 2008 has proven to be very reliable in my office, so I decided to deploy it for my media storage. For the time being I will stay with WHS V1 for its ability to maintain backups of all of our home computers as well as for integration with MyMovies and WMC for our HTPCs. MyMovies for WHS is painless and automatic for rips of our new DVD, BluRay and CD purchases. Every few weeks I will move the new rips from the WHS server to the Windows Server 2008 storage.

I used two identical platforms to begin with.
  • Norco RPC-4224 case
  • Supermicro X8DT6 server board
  • Single Xeon E5506 processor
  • Intel BXSTS100C active/passive processor cooling solution
  • Crucial 2GB (x3) server memory
  • Corsair 750 ATX power supply
While these server boards will support dual processors, this application won't be very processor intensive. I can always add another and more RAM if needed. The WHS installation will only see 4GB of the installed 6GB of RAM, but I plan on replacing it with the 64-bit WHS 2011 at some point. I replaced the mid fan assemblies with the 120mm bracket and installed 3 Cooler Master Blade Master PWM case fans. The rear fans were replaced with Cooler Master Blade Master 80mm PWM case fans. This allows the server board to control the fan speed and keep the noise at a more tolerable level for home use. I used a WD Raptor 300GB as the system drive, drilling the case to mount it inside. The Corsair power supply was chosen because its modular design allowed me to tailor the power connections to this specific build, easing cable management. The Windows Server 2008 box has an Areca ARC-1880iX RAID controller. I also retained the SI3124 eSATA card for external drive connection. There are 12 Hitachi 7K3000 3TB drives in RAID6 (30TB) and 12 Hitachi 7K3000 2TB drives in RAID6 (20TB).

The WHS server utilizes 4 Hitachi 5K3000 2TB drives in the second case as straight SATA connections, to be used for the WHS Drive Extender pool. This group is mostly duplicated, so the 8TB capacity is about halved. A WD Raptor 300GB is used for the system drive. The HP SAS expander board is connected to the 5 remaining backplanes for the third RAID set of the Areca controller. There are 20 Hitachi 5K3000 drives in this array (36TB).

The total drive capacity of the Windows 2008 server is approximately 86TB and the WHS server is approximately 8TB.

Here are some pictures of the Windows Server 2008 package:








The slot numbering for the drives connected to the Areca with SFF-8087 cables was right to left.



Here is the WHS server build. You can see the top four drives are connected directly to the motherboard SATA ports.







The drive slots numbered left to right when connected to the HP SAS expander.



The whole installation in a temporary location on our back porch. Once the shakedown is completed, they will move to our equipment racks in the basement. The WHS server is on the left, with a BluRay and a DVD drive connected by USB for ripping new media. This is an automated process 99% of the time: simply insert a disc and walk away. It is ripped, stored and cataloged automatically. The Windows Server 2008 box is on the right, with one remaining SansDigital 5-drive tower attached. This is the fastest way to offload media for backup or to reload from backups.





The Toshiba portable is what I use to transport files between home and office, with SyncToy used to keep all the files synchronized. The single 2TB drive is in the SansDigital tower and is used for backups of all my media files. When a drive is full, it is stored as a backup. It is much faster to restore from a drive than to have to rip each title again. Picking up a 2TB drive for $70-80 is easy now and much less expensive than tape. I keep a catalog of each backup drive on a duplicated volume of the WHS server.



A backup of the individual PC backups is stored on the 2008 server.



This project went splendidly and has been operating without a hiccup for a couple of weeks. I initially tried to build the RAID arrays out of the Seagate ST32000542LP drives I was using in the original WHS, but they proved to be very flaky. I flashed them all with CC35 firmware and after that they were very stable, but I really didn't trust them. Twice they dropped several drives in an array at once; the first time I was able to rebuild the array, but data was corrupted, and the second time an array failed and couldn't be saved. The Hitachi 7K2000 drives have a good reputation, so I opted to go with the 7K3000 for my two primary arrays. The set of 5K3000 drives I used to build the third array have also proven to be stable so far. The third array is not used for any active data at this time. The drive temperatures are running at approximately 32C for the 5K3000 and 7K3000 2TB drives and about 38C for the 7K3000 3TB drives. This is with fan management set to "balanced", which keeps the fans at a reasonably slow speed. If I move fan management to "performance" the drive temperatures drop another two or three degrees; with the fans at full speed they drop to below 30C. This is in a room that is currently about 70F.

The only thing I would do differently, and may correct in the near future, is to put the three 120mm fans on the back side of the fan bracket to give a little more room for the SFF-8087 connectors and cables. For now I zip-tied them with a tight bend to prevent them from hitting the fans.

So far I couldn’t be happier with the stability, performance and function of this system. We stress tested it by serving up 3 simultaneous BluRay ISO files without a hiccup at any of the machines. The next step will be to move to WHS 2011 if it proves worthwhile, though this solution could be viable for quite some time.

I have not tested to see if the third array will come online if powered up after the Windows Server 2008 box, so for now I make sure to start it first. I have used the CyberPower UPS scheduling on the 2008 server to power it down at 11:00pm every night, restart it just before we get home in the evening each weekday, and leave it on all weekend. The WHS server is scheduled to run until the morning so as to be able to run scheduled PC backups, then is powered down. It wakes up 15 minutes before the 2008 server. Both of these servers together consume about the same energy as the Dell workstation and the attached drive arrays, and this scheduling has them powered down about half the time.
 

Very nice build! My only comment is that it scared me seeing that tight 180-degree bend zip-tied on. I agree that it would be best to ease up a bit on that by moving the fans to the other side.
 

Samba server v3.5.6-1 is running on a Linux box (Arch Linux with kernel 2.6.37).

The client is Windows 7, and I can get 100+ MB/s downloading or uploading.
 
That is a mapped share on my WHS server. It is a well-known flaw in WHS DE: it reports the space remaining for the total pool, but the capacity of only the data partition on the system drive, not that of the drive pool, when shares are mapped on another computer. I believe that issue is resolved in WHS 2011 - because they abandoned DE ;).

If you notice, the capacity of the Server 2008 system drive is 279GB. I used the identical drive for the WHS server, and by default the install creates two partitions: 20GB for the system, with the remainder (259GB) used as the data partition. DE pools all of the added drives as volumes with mount points in that first data partition, but only reports the data partition as the total capacity. It does calculate the remaining space by looking at the entire pool. We thought M$ would correct the issue in the first power pack, then it was rumored to be fixed in the second one, and so on... Not unusual for that company to have a big picture, but miss many little targets.
 
Beautiful build. Well done. Couple of questions - not criticizing - just curiosities:

1. With Server 2008, couldn't you have just run the WHS part as a guest in Hyper-V? Would have saved well over $800 on that second MB, CPU, memory, etc.
[OK, re-reading I think I get it. You wanted different power-up times for the WHS than you did for the main FS]

2. Why not go with a much lighter-weight MB/CPU for the WHS part? It's not like WHS is very CPU hungry. Could have used a 34xx Xeon on something like an X8SIA-F and saved a ton. You've got plenty of horsepower potential in the main server if you upgrade from the low-end Xeon and/or grow the 2nd CPU.
 
WHS standalone / Hyper-V - this was all part of the planning evolution. When I started this I was planning on multiple RAID6 arrays for media storage, using 2TB drives. My collection was over the 40TB that I could put on 24 drives, so I knew I would have to go with some sort of SAS expansion. It seemed to me, after looking at Chenbro (UEK or CEK), the Areca ARC-8026 and HP, that the HP offered the greatest likelihood of compatibility at the lowest cost. After all of that, I needed to figure out how to power the expansion card, control power to the second cabinet of drives, and deploy an instance of WHS - it seemed to me that using an actual motherboard and power supply would work best. The new DS-24E from Norco at its cheapest is $1349. Every other expansion option is much more. My build of cabinet, motherboard, CPU, RAM and HP card totaled $1258. And as you pointed out, I can now run my servers on different schedules.


Motherboard/processor: the short answer - I got a deal. Two identical motherboards, processors, cooling and RAM for ~$1100. While it is overkill for WHS, my future WHS plans are still fluid. If I abandon WHS V1, then I re-deploy that box for some other purpose. I am really not smart enough to know what processing and memory I would eventually need, so I opted to start small and leave room for growth. While I have a lot of experience with VMware and Server 2003, I am very new to Server 2008 and have zero Hyper-V experience. I do plan on learning - another reason to leave room for growth in processing/memory.
 
I've been waiting a while to be able to make this post:

Norco 4220 (thought I had the new one but I must have gotten one of the last ones)
Gigabyte GA-EP45-UD3R, Pentium E5200, 4GB RAM
4x750GB Samsungs
1x1TB Samsung (F1 I think)
4x1TB Hitachi
2x2TB Hitachi 5k3000
Plus a 250GB cache drive I scavenged

I'm running Unraid 4.7 and the total usable space comes out to 10TB. I've been adding to this system for a couple of years now, and with the new 2TB Hitachis that were on sale at Newegg last week I finally reached 10TB.
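
(For anyone checking the math: 4x750GB + 1TB + 4x1TB + 2x2TB comes to 12TB raw, and Unraid dedicates the largest drive, one of the 2TB Hitachis, to parity, leaving the 10TB usable. The 250GB cache drive sits outside the array.)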

Used for DVD, Blu-Ray, Photos and Music as well as some misc. stuff. I'm mostly pulling from this for SageTV and iTunes.

I ordered one of the new 4220 fan brackets and found it didn't fit. Oops. So, I modified the old fan bracket to hold the 120mm fans to reduce the noise. Works pretty well except that I need to open up some more holes to allow more airflow. It's managing as is but just barely.

Rack mounted in my cabinet:
[image: IMG_2194.JPG]


Hmm...kind of blurry:
[image: IMG_2195.JPG]


UnMenu for Unraid:
[image: Unraid 10TB.PNG]


Drive share from a Windows PC.
[image: 10tb.PNG]
 
Barely? 41-44C is a normal temperature for a hard drive. More like the ideal temperature. Plus, to all those Samsungs you should add 5-7C, as that is the number of degrees Samsung usually "cheats" with their sensors. In other words, your Samsung drives are around 37-38C too. And once again, 35-45C is the ideal temperature for a hard drive.
 
faugusztin, that's with a big floor fan blowing on the front of it. :) Without that fan the Hitachis get up to 50+C. I think I had a couple hit 53C before I put the big fan in front of it. I had the whole system running before with better flow on the 120mms thanks to jamming the "new" fan tray in there, and they were all in the 30s, low 40s. It was just really bad having that fan tray in there since it didn't really fit.

Some holes in the old fan tray to improve the 120s' airflow should get it right, I think. If that's not enough I can look at adding some quiet 80s at the back of the case.
 