The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

I am getting horrible performance with ZFS on FreeNAS. I can't get any more than 30MB/sec off the drives. When using a single drive in UFS, I can get 70-80MB/sec. I think I need to do some tweaking. Anyone have any tips?

Don't use one big vdev like that; split it into two or three or four groups. Or use mirrors.
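For instance, something along these lines (a rough sketch; "tank" and the da* device names are placeholders - use whatever your disks actually show up as):

Code:
# one pool made of two 6-disk raidz vdevs instead of a single 12-disk vdev
zpool create tank raidz da0 da1 da2 da3 da4 da5 raidz da6 da7 da8 da9 da10 da11

# or striped mirror pairs
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5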
 
@unhappy_mage

FreeNAS is a GNU/Linux distribution. ZFS isn't GPL compatible, so I'm assuming you're using ZFS in conjunction with FUSE, which will impact performance.

You could try using JFS or ext4 as the filesystem format in conjunction with Linux software RAID 5 or 6, and see what kind of performance you get. In fact create a RAID 0 first, just for testing - this will tell you if you have hardware issues.
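Something like this would do for the throwaway test (just a sketch - the device names and mount point are placeholders, so triple-check which disks you point mdadm at):

Code:
# temporary 4-disk RAID 0, purely as a throughput sanity check
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/test

# sequential write then read, bypassing the page cache
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 oflag=direct
dd if=/mnt/test/bigfile of=/dev/null bs=1M iflag=direct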
 
@BlueFox

oops, why did I think it was GNU/Linux-based? I probably thought it was based on Linux because OpenFiler is. My mistake.

@unhappy_mage

Obviously, you will disregard my evident n00bidity. However, I would still try things like testing speeds with RAID 0, just to see if you might have some kind of hardware error or mismatch.

It might be worth building 2 RAID 0 arrays first, one using the on-board controller ports and the other using the ports on the RAID card - each array should perform within expectations.
 
I am getting horrible performance with ZFS on FreeNAS. I can't get any more than 30MB/sec off the drives. When using a single drive in UFS, I can get 70-80MB/sec. I think I need to do some tweaking. Anyone have any tips?
I don't know how ZFS works, but if it needs any sort of parity calculation when writing, then moderate to large throughput hits should be expected, since the CPU is doing the parity calculations.

Otherwise, I'll let the *nix gurus crawling this thread provide answers; I don't really even know how to begin answering... lol

Also, anyone know of ways to improve Win2k8 RAID performance? Do I need a hardware RAID card to get good Win2k8 performance?
RAID0 and RAID1 in Windows 2008 shouldn't incur any noticeable performance hit, since little to no CPU muscle is needed.

RAID5, however, is another beast. That RAID level requires parity calculations on writes (and possibly even on reads, to ensure data integrity), and that puts a LOT of work on the CPU. Most current CPUs are able to handle it just fine, but the increased I/O load between the HDDs, memory and CPU usually causes throughput drops.

Most recent Intel ICHxR reviews I've seen haven't looked too bad (AMD not so much), but if you want massive RAID5 speeds (and RAID6 for that matter, though Windows doesn't offer that level in software) you'll need a dedicated RAID card with a parity engine, not a flimsy softRAID card, which does exactly what the RAID implementations on chipsets do: offload parity to the system CPU.

Now, if only someone decided to implement a parity engine on a Southbridge, THAT would be sweet.

Cheers.

Miguel
 
@unhappy_mage

Obviously, you will disregard my evident n00bidity. However, I would still try things like testing speeds with RAID 0, just to see if you might have some kind of hardware error or mismatch.
I think you meant to address this to "decryption", who's having speed troubles with his array. I'm currently running ZFS on OpenSolaris, and getting great speeds.
It might be worth building 2 RAID 0 arrays first, one using the on-board controller ports and the other using the ports on the RAID card - each array should perform within expectations.
ZFS likes whole disks. Building raid0 arrays is a bad idea for this reason.
 
@unhappy_mage

Sorry for not getting back to this earlier. Are you saying then that ZFS does not support RAID 0? What do you mean by, "ZFS likes whole disks. Building raid0 arrays is a bad idea for this reason."?
 
@unhappy_mage

Sorry for not getting back to this earlier. Are you saying then that ZFS does not support RAID 0? What do you mean by, "ZFS likes whole disks. Building raid0 arrays is a bad idea for this reason."?

ZFS "supports" raid 0, in that if you feed it raid 0 arrays it'll put your data on them. But if a disk fails in the array (and you have enough parity to fix it), you need to create a new array of the same size manually before feeding it to ZFS. If you use individual disks, ZFS can recreate the single disk (onto a hot spare, automatically, if you have one) with much less effort on your part.

And if you feed it whole disks instead of stripes, it stripes them inside ZFS anyways. So there's no performance benefit.

Make sure, whatever you do, you use enough parity to rebuild your ZFS pools. Backups aren't a bad idea, either.
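For example, a double-parity pool built from whole disks with a hot spare would look something like this (a sketch only; pool and device names are placeholders):

Code:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 spare da6
zpool status tank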
 
Can I please be placed at number 4 for most total storage: 31.5TB new server + 7TB old server.

More importantly, can I please be placed at position #2 on most storage in one chassis at 31.5TB

Here is the build log I constructed:
http://yabb.jriver.com/interact/index.php?topic=52549.0

And the posting that alerted me to this thread:
http://yabb.jriver.com/interact/index.php?topic=55059.new

The entire 31.5TB is running as a single RAID6 unit and gives me a massive shared drive which is called "Beryllium"

Complete Specs
SUPERMICRO MBD-X7SBL-LN2 LGA 775 Intel 3200 Micro ATX Intel Xeon/Core 2/Pentium/Celeron Server Motherboard
SUPERMICRO CSE-846TQ-R900B Black 4U Rackmount Server Case
areca ARC-1280ML-2G PCI Express SATA II Controller Card
Intel Xeon X3220 Kentsfield 2.4GHz LGA 775 105W Quad-Core Processor
30 x Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive
Crucial 4GB DDR2 800
APC 1500VA UPS RS

12.jpg
 
Read the rules for posting in the first post; you should also get some pics of the server itself :)

The rankings haven't been updated in a while either...
 
The thread I linked to has many pictures of the server in addition to screenshots. I did read that post. It isn't a big deal to me whether the rankings get updated, but someone in the other thread told me I should post here.
 
I am getting horrible performance with ZFS on FreeNAS. I can't get any more than 30MB/sec off the drives. When using a single drive in UFS, I can get 70-80MB/sec. I think I need to do some tweaking. Anyone have any tips?
This is typical for the FUSE implementation of ZFS.
You probably want to use Nexenta. It's basically Ubuntu with the OpenSolaris kernel. Don't mistake it for NexentaStor, which is the same thing except it costs thousands of dollars and includes a support package.
There's also EON, based on OpenSolaris.

RAID5, however, is another beast. That RAID level requires parity calculations on writes (and possibly even on reads, to ensure data integrity), and that puts a LOT of work on the CPU.
The problem (nowadays) is not CPU speed, even with "slow" CPUs. The reason for having a RAID card that calculates parity is for when your CPUs are bogged down with other work and still need to write data to the array. Because the CPUs are busy, parity calculation gets delayed, which delays writes, which increases the time spent waiting on data, which delays parity calculation further, and so on. This is obviously a Bad Thing™ and is why Real Servers have RAID cards that calculate parity and server network cards that take care of the data transmission.

Regarding slow RAID5 arrays, let us look at a 3-drive array. This is effectively two drives of data striped RAID0-style plus one drive of parity, except that for every stripe the data and parity are shifted around, somewhat like this:
Code:
          HD1 HD2 HD3
Stripe 1: D1  D2  P1
Stripe 2: D3  P2  D4
Stripe 3: P3  D5  D6
Stripe 4: D7  D8  P4
D=Data, P=Parity

The reason RAID5 is "slow" is that on every write it does the following:
1) Write out data to the drives.
2) Read data from the stripe just written in order to calculate parity.
3) Calculate parity.
4) Write parity.

Also, if you lose power between step 1 and step 4, there's a chance your array is fucked. At best, the data just written is corrupted. Might not be a problem. Might be a big problem. Use RAIDZ if it is.
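If you want to convince yourself the parity math works, it's all just XOR (toy byte values, nothing more):

Code:
$ printf '%02x\n' $(( 0xa3 ^ 0x6c ))   # P = D1 XOR D2
cf
$ printf '%02x\n' $(( 0xa3 ^ 0xcf ))   # HD2 dies: D2 = D1 XOR P
6c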

Most recent Intel ICHxR reviews I've seen haven't looked too bad (AMD not so much), but if you want massive RAID5 speeds (and RAID6 for that matter, though Windows doesn't offer that level in software) you'll need a dedicated RAID card with a parity engine, not a flimsy softRAID card, which does exactly what the RAID implementations on chipsets do: offload parity to the system CPU.
What you're saying is true of driver RAID, not of proper software RAID implementations like Linux's MD and Sun's ZFS. Again, it's not about CPU speed.

ZFS "supports" raid 0, in that if you feed it raid 0 arrays it'll put your data on them. But if a disk fails in the array (and you have enough parity to fix it), you need to create a new array of the same size manually before feeding it to ZFS. If you use individual disks, ZFS can recreate the single disk (onto a hot spare, automatically, if you have one) with much less effort on your part.
Also, ZFS will detect bad drives and try to fix the data (imagine having a bad sector; ZFS will work around it). When using RAID0 (some RAID cards do not present raw drives to the OS, forcing you to create one RAID0 "array" per drive), ZFS can only complain at you, and worst case eject the drive from the array.

And if you feed it whole disks instead of stripes, it stripes them inside ZFS anyways. So there's no performance benefit.
But please, use RAIDZ. Data loss ≠ nice.
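If you ever want to watch that self-healing in action, kick off a scrub and check the status afterwards (pool name is a placeholder):

Code:
zpool scrub tank
zpool status -v tank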

I did read that post.
Not well enough, because you didn't follow the guidelines :p

That said, less yappin and more systems please :D
 
The reason RAID5 is "slow" is that on every write it does the following:
1) Write out data to the drives.
2) Read data from the stripe just written in order to calculate parity.
3) Calculate parity.
4) Write parity.

Also, if you lose power between step 1 and step 4, there's a chance your array is fucked. At best, the data just written is corrupted. Might not be a problem. Might be a big problem. Use RAIDZ if it is.

People overstate the overhead involved in RAID-5.

If the entire stripe across all drives just got written to (massive amount of data being written without fragmentation concerns...), then the controller has all of the data already in its buffers... It doesn't need to read them again, it just needs to calculate the parity based on what it already has.

If it's a small write and it only needs to write to one drive in that stripe, it only needs to read the original data from that drive, write the new data to that drive, read the parity drive of that stripe, XOR the original data back out of the parity (so the parity is what it would be without that old data being there), XOR in the new data, and then write that parity.
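In XOR terms that read-modify-write is just new parity = old parity XOR old data XOR new data. A toy example (old data 0x6c, old parity 0xcf, new data 0x55 - made-up values):

Code:
$ printf '%02x\n' $(( 0xcf ^ 0x6c ^ 0x55 ))   # XOR the old data out, XOR the new data in
f6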

I'm sure not all controllers do this, but they would reduce their overhead if they did.

I've seen the same controller be painfully slow and blazingly fast, so I do believe some controllers have these optimizations in place.

But I've seen enough people (not you) make comments about RAID-5 to know that way too many people think all parity RAIDs have the same healing capabilities as RAID-Z... Not true... Just because you have parity does not mean that, if there's a mismatch somewhere, the controller knows who the liar is... You need metadata for that, which RAID-5 doesn't have...

And whether you lose power (buy a UPS!) or the OS just BSODs or something, you stand about the same chance of corruption with an array as you do with a single drive...
 
I am also very interested in a FlexRAID configuration with low-cost options for connecting the drives (SAS expanders, etc.), so a build log from you would be great :)
 
Complete Specs
SUPERMICRO MBD-X7SBL-LN2 LGA 775 Intel 3200 Micro ATX Intel Xeon/Core 2/Pentium/Celeron Server Motherboard
SUPERMICRO CSE-846TQ-R900B Black 4U Rackmount Server Case
areca ARC-1280ML-2G PCI Express SATA II Controller Card
Intel Xeon X3220 Kentsfield 2.4GHz LGA 775 105W Quad-Core Processor
30 x Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive
Crucial 4GB DDR2 800
APC 1500VA UPS RS

dang, how'd you cram 30 drives in there?
 
dang, how'd you cram 30 drives in there?
Having a uATX motherboard probably helps A LOT, since you suddenly have an extra 24.4cm x 6.1cm x 12cm of space full of just air to mess around with.

If you use the correct angled SATA cables, you might actually fit more than 6 extra HDDs on that space.

Also, there is usually some extra space right above the top HDD tray row, which usually houses slim ODDs and the odd 2.5'' HDD, but it might fit at least low profile 3.5'' HDDs. At least the Norco cases allow for this kind of setup. No hot-plug or hot-swap (without pulling the server from the rack, that is... but good luck trying to balance a beast like that without breaking something... lol), but it's doable. Case vibrations might be an issue, though.

Cheers.

Miguel
 
From his posted build log he's got 23 drives in RAID6, 1 hot spare, and two 1.5TB drives in RAID1 for the OS, with the rest in extra drive caddies as cold spares.
 
I think it's a typo. He's using 1.5TB drives, and claims a 31.5TB (i.e., 21-drive) volume.
After re-reading the link he provided, I have to agree, it seems to be a typo. He actually says he's using a 23+1+2 1.5TB setup (a 23-drive array with double parity and one hot spare, plus a mirrored 1.5TB system drive).

However, my point still stands: that case CAN fit several more HDDs in the spaces left between the motherboard and the left side, and between the motherboard and the rather odd-looking PSU. Those 30 drives, while not actually installed right now, are very possible...

Cheers.

Miguel
 
Can I please be placed at number 4 for most total storage: 31.5TB new server + 7TB old server.

More importantly, can I please be placed at position #2 on most storage in one chassis at 31.5TB

Here is the build log I constructed:
http://yabb.jriver.com/interact/index.php?topic=52549.0

And the posting that alerted me to this thread:
http://yabb.jriver.com/interact/index.php?topic=55059.new

The entire 31.5TB is running as a single RAID6 unit and gives me a massive shared drive which is called "Beryllium"

Complete Specs
SUPERMICRO MBD-X7SBL-LN2 LGA 775 Intel 3200 Micro ATX Intel Xeon/Core 2/Pentium/Celeron Server Motherboard
SUPERMICRO CSE-846TQ-R900B Black 4U Rackmount Server Case
areca ARC-1280ML-2G PCI Express SATA II Controller Card
Intel Xeon X3220 Kentsfield 2.4GHz LGA 775 105W Quad-Core Processor
30 x Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive
Crucial 4GB DDR2 800
APC 1500VA UPS RS

http://ppcpathways.com/public/server/12.jpg

Just finished reading the whole thing. Tell me exactly... how much did you spend on the whole system?
 
I just bought 24 x 1.5TB drives, so I'm over 50TB now. Case comes tomorrow and then I'll post some more pictures.

DSC00901s.jpg
 
Probably going to test one of the LSI cards and WHS if that doesn't work. The last 4 x 1.5TB drives aren't going to be part of the array (and aren't pictured). As for the case, I am getting another Norco (as you can see, I already have one at the bottom of my rack and it is going to go on top of it)
 
Just curious, what do you store with all that space?
Same thing I imagine everyone else does...video (primarily of the high-definition sort). Well, lots of other stuff too. That just takes up the most space.

Anyway, I got my second Norco 4020 in today:

DSC00902s.jpg
 
I just bought 24 x 1.5TB drives, so I'm over 50TB now. Case comes tomorrow and then I'll post some more pictures.

As soon as the 2TB drives come down a bit more in price I will be getting 20 x 2TB, bringing my total storage up to 85TB, so you won't stay ahead for long =) I am starting to get low on space already:

Code:
root@sabayonx86-64: 05:06 AM :~# df -H /data
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdc3               18T    13T   5.4T  71% /data
root@sabayonx86-64: 05:06 AM :~#
 
nice one guys
i envy you
don't have 'em available down here ;/
anyway
done with the second rack

bringing the total to 88
 
Good GOD, two guys alone are responsible for more than 100TB of storage capacity... That's... WEIRD!

Well, if I didn't know already, I'd say this is [H]ardcore stuff... :p (I know, lame).

Congrats to you guys on your builds.

Btw, asgards, I'm spotting a theme on your servers... lol It looks like a very interesting combo, though I can't help but wonder if a G31+E3200 wouldn't be a tad more energy efficient nowadays. I have one of those combos in my yet-to-enter-the-10TB-club WHS machine (1TB currently, 4TB in about a week - I've had the new disks ready for a couple of weeks now, I simply haven't had the time to upgrade it), and the CPU sucks up less than 15W under load, and about 5~6W idling...

Mind you, I don't mean to bash, and besides, those builds are not exactly brand new (the E3200 has only been out for about 4 months). I was just wondering - can you tell how much power the CPU eats up? I'd love to know if I made the right choice...

Cheers, and again, congrats to everyone.

Miguel
 
shhh
for some 100 is a plan for xmas ;)
about energy saving,
i've noticed many of today's intels can beat amd on this; got those setups last winter/spring, nothing could come close to those numbers
also, amd has cnq support under linux, haven't seen one for intel
for perspective, those two towers i have at home drain ~70w on average when used, each ... i think i even posted monthly results for those two in the power-saving thread; on absolute idle it's ~40w
so, swapping hw, trying to get rid of the old one ... those few watts atm aren't worth the hassle i'll get with selling
 
shhh
for some 100 is a plan for xmas ;)
:eek::eek::eek:

i've noticed many of today's intels can beat amd on this; got those setups last winter/spring, nothing could come close to those numbers
So it seems for now I made the right choice. Mind you, I was just referring to CPU load (eventually including the power phases - which, with the undervolted Celly, I believe to be just one, thanks to ASRock's IES), not full motherboard load, which I currently have NO way of checking (I haven't seen a Kill-A-Watt for sale here in Portugal... yet). However, if I had to guess, I'd say about 30~35W idling with only the system disk, since the PSU isn't 80+ certified and the single HDD (for now) is a 1TB Samsung F1, which isn't really that power friendly...

also, amd has cnq support under linux, haven't seen one for intel
I was under the impression there were (third-party) kernel modules that could handle EIST and such. I'm not sure about the names, and I do remember hearing rants about how badly some of them worked about two years ago, but I do believe they are available. I was looking into Linux on notebook hardware at the time. You might want to read up on that subject if/when the time comes to build an Intel Linux rig.
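Something along these lines should show whether frequency scaling is actually kicking in (standard sysfs paths; the exact driver module can vary by kernel and distro):

Code:
# load the scaling driver, pick a governor, then check the current clock
modprobe acpi-cpufreq
echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq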

for perspective, those two towers i have at home drain ~70w on average when used, each ... i think i even posted monthly results for those two in the power-saving thread; on absolute idle it's ~40w
That is actually VERY good. That much storage capacity in such a small power envelope is something to be proud of. Especially since you're using non-dumb HDD controllers. Sweet!

so, swapping hw, trying to get rid of the old one ... those few watts atm aren't worth the hassle i'll get with selling
Exactly. Not to mention the added hardware would incur costs that those few saved watts would take YEARS to offset.

Thanks for the time.

Cheers, and good luck.

Miguel
 
I just ordered 10 x Samsung HD103SJ F3 1TB 32MB cache SATA II drives. Only 10TB physically to start, but I'll get it to 20TB in no time. :)
 
Okay...now that the cabling is somewhat cleaned up, I finally broke down and wrote up my WHS box for submission...

21TB total storage

Case – Cooler Master Centurion 590
PSU – BFG Tech 550W
Motherboard – Gigabyte EG45M-UD2H
CPU – Intel C2D E8400
RAM – 2x1 GB (don’t recall the brand right now…)
GPU – on board Intel X4500HD
Controller Cards – 1 AOC-SASLP-MV8 (in the PCIe x16 slot)
1 cheapo PCIe 2xSATA Si3124 based card (don’t recall the brand)
Optical Drives – none (I have a Buffalo USB-powered DVD drive for installations if need be)

Hard Drives (include full model number)

Attached to the motherboard (ICH10R southbridge – configured as AHCI)
  • 2x 1.5TB Seagate Barracuda 7200.11 ST31500341AS
  • 2x 2TB Hitachi Deskstar HDS722020ALA300 (7200 rpm)
  • 1x 2TB Seagate Barracuda LP ST32000542AS (5900 rpm)

Attached to the cheap PCIe card
  • 1x 1TB WDC Green drive WD10EACS

Attached to the AOC-SASLP-MV8
  • 2x 2TB Seagate Barracuda LP ST32000542AS (5900 rpm)
  • 1x 2TB WDC Green drive WD20EADS
  • 5x 1TB WDC Green drive WD10EACS

The first 4 drives are in a 4-in-3 module that came with the case (120 mm intake fan on the front). That's at the bottom of the case. Drives stay around 31C.

The other 10 drives are in a pair of 5-in-3 Supermicro CSE-M35T-1B enclosures. The enclosures still have the stock 90mm fans on them, but I have them throttled down so that it’s not as obnoxious. Drives bounce between 35 and 40C, depending on how heavily they’re used and whether or not my wife has turned the heat up.

Battery Backup Units (if any) Right now, it’s sharing an APC BR1500 1500VA with the HTPC. Once I finish putting a vent on the closet door (the closet is only 8” deep, so the heat has nowhere to go), it’ll have its own smaller UPS to share with the cable modem and router (the primary purpose of that closet is for the cable and phone lines to enter the unit, with splitters going to cable/phone/ethernet jacks in the various rooms).

Operating System – Windows Home Server w/PP3

Primary purpose of the machine is to back up our home PCs (one for me, one for my wife, one for the TV, and one for the road), provide shared storage, and hold all of the large media files we’re accumulating (DVDs, Blu-Rays, recorded TV, music, legally purchased software, ISOs downloaded from TechNet, backup copies of game CDs/DVDs, etc.) For example, TV shows are converted to dvr-ms files from the PC hooked up to the TV, stripped of commercials by LifeExtender, and moved to the WHS box after a couple of weeks if it’s something we’d want to keep or rewatch at some point.

Redundancy is achieved through the folder duplication feature of WHS. Anything I would scream about losing is duplicated (right now, I’m duplicating almost everything because I have the space… if and when space becomes scarce, I’ll stop duplicating things like folders of DVDs and games for which I know where the original disks are).

Right now, backups of the most important stuff (i.e., important documents, photos, and other irreplaceable items) are handled by the other machines; I’m still working on an easier solution than having a USB hard drive in my desk at work. Maybe it'll go into the cloud some day...

Pretty pictures:

4195115022_46d168009f.jpg


4195114978_02e4a840dd.jpg


4194358219_ca8f5ea544.jpg
 
tidy??? :eek:

what was it like before? :D

i have the same case, and also two of the SM 5-into-3. fantastic case. cheap too. ;)

I said somewhat...I have stubby fingers, so the effort spent trying to really clean things up just leads to frustration, swearing, and bleeding. Previously, it was a rat's nest, though not as bad as the HTPC.

For this one, I swapped out the 24" SATA cables with 10" SATA cables for the 4-in-3 housed drives, and put in the Supermicro/Marvell card instead of using Addonics port multipliers for the other drives (it'd be nicer still if I could find shorter cables...but 18" is still better than a meter). I still need to fuss with the power cables to see if I can get a few more of them routed through the other side of the case (instead of being front and center in the photo).

It is a nice case. My requirements were "cheapest case that didn't look bizarre that could hold two of the 5-in-3's and at least 4 other drives." This one fit the bill perfectly. :)
 
It is a nice case. My requirements were "cheapest case that didn't look bizarre that could hold two of the 5-in-3's and at least 4 other drives." This one fit the bill perfectly. :)

did you have to file out the tabs from your case?

i replaced the stock fans in my two CSE-M35TQ modules
ah, peace and quiet
 
did you have to file out the tabs from your case?

i replaced the stock fans in my two CSE-M35TQ modules
ah, peace and quiet

Umm...filing takes too long. I just jammed a screwdriver in the gap and used that as a lever to bend the tabs out of the way. You can't easily tell because of the flash, but those areas around the clips aren't that straight anymore...

Right now, I have the fans connected to a couple of 5V plugs, which slows them down nicely. I need to find my little fan controller so I can fine tune it a bit more...but at the default speeds, they're crazy loud.
 