The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

My server.... =]
Nice system. Clean-looking outside, a consistent theme going on, and cheap: if you went with 12x2TB, the price on the HDDs would more than double right now (2TB = €200+ a pop, 1.5TB = ~€95 each), while the storage capacity would only go up by 33%.

I especially like the Samsung F2 drives, and I plan to use them on my NAS. They're not as low-power as WD GP drives, but you can actually count on staggered spin-up (not a standard feature on desktop WD drives), which can ultimately mean you can get by with a smaller PSU at boot time.
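Ballpark math on that: a typical 3.5" drive pulls roughly 2A on the 12V rail during spin-up versus well under 1A once spinning, so 12 drives all starting at once is on the order of 24A of 12V load just for the disks; staggered, the startup peak is only ever about one drive's worth above the running load.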

One question, though: what controller are you using, if I may ask? Supermicro? Or is it rather a standard SiI-based 4-port PCI SATA2 card, like Addonics sells?

Oh, and btw, does that case handle the 12 drives without adapters? Or are you using something special?

Also, to improve your airflow, you might want to redo your cabling. I know, nit-picking. Though I assume you'll get to it when you receive the rest of the hardware, right?

The systems posted in this topic inspire me and, at the same time, make me feel completely inadequate.

Not sure whether I should start building my own 10TB system... or go hide under a table, in the fetal position.
I hear you. 33TB on a single system is nuts, and that can actually be bumped rather easily to way over 50TB if you go with a Norco case and fill it up till it bleeds (including extra drives inside the case if you use a uATX mobo).

I personally went with the under-the-table approach. I did happen to get comfortable and fall asleep. While I was asleep I had a dream of a 10TB+ server, and it was awesome. So when I woke up, I was motivated to start working on my first one; it's not done yet, tho.
LOL for that one.

Can't wait to see the pics, they're always inspiring.

Cheers.

Miguel
 
My server.... =] I love being a geek. I have 11x 1.5TB Samsung 5400rpm drives and one 1.5TB WD Green; the Green is the system drive. I haven't gotten the power cables yet, so I couldn't hook up the last 4 drives. As soon as I get the stuff in from FrozenCPU I'll update the post.


12 1.5TB = 18TB advertised.
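(For the curious, that's 12 x 1.5 x 10^12 bytes, which works out to roughly 16.4TB once the OS counts it in binary units.)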

[Image: SKOL2.jpg]

What are those things plugged into the surge protector, before the power cables?
 
My guess is US --> EU plug adapters.
Unless those tiny things actually act as 110V-to-220V AC converters in the process, which they seem too small to be capable of (but hey, it might work), I don't think so. The EU power grid runs on 220V~230V AC and the US power grid on 110V~115V AC (some very rare cases still use DC, but that's beside the point), and bridging the two usually needs a rather large AC voltage-converter brick to make stuff work.

They could, however, be UK-to-EU or EU-to-UK plug adapters, since plugs are different in the UK and mainland Europe while sharing the same voltage range. Those actually look like UK power cables; EU ones usually end in either very slim 2-prong connectors (no ground) or large round 2-prong connectors (with two extra slits for the ground contacts) - check the Wikipedia article on AC power plugs, I'm talking about Europlugs and CEE 7/7 plugs.

Cheers.

Miguel
 
All computers take 110V and 220V. You don't need voltage converters for them (the same goes for frequency, which you forgot to mention). Due to the shape of the plugs, I still believe it is a US --> EU plug adapter. I know all about European plugs as well; I've lived in both the US and Europe for many years.
 
Out with the 500GB drives, in with 16x 1.5TB... then the darn thing won't POST, with or without drives. WTFBBQ!

Edit: don't use a chassis-intrusion switch as a power switch; ye gods, I must be extra stupid today :(

Edit2: my main post has been updated - 36-point-something total advertised TB. here

Edit3: a teaser:

[Image: DSC_8023.sized.jpg]
 
All computers take 110V and 220V.
IF the PSU handles both voltages. Most of the newer ones are full-range, somewhat older ones have a switch, and there are some rare ones that are single-voltage only.

Also, we can't know for sure that the power strip is only powering computers, which must be taken into account even if we assume the PSU is dual-voltage (a fairly good assumption, since it's a new build) and that the guy skipped a power cord with the correct ending for whatever reason (most "heavy" appliances use the 3-prong connector also present on the back of PSUs these days).

Due to the shape of the plugs, I still believe it is a US --> EU plug adapter.
I must admit, I'm curious about this one. Maybe it's best we wait for an explanation, no?

Edit: don't use a chassis-intrusion switch as a power switch; ye gods, I must be extra stupid today :(
I'd vote for extra tired instead of extra stupid. I mean, hooking up that many cables must be tiring, and you're prone to mess something up with that much stuff going on...

Congrats on the upgrade. One question, though: is there any way that all-black fan will "morph" to match the other two? Or will you be covering the front up? I'm assuming it will stay open for airflow reasons (if not, it wouldn't matter), and it would be a shame if it couldn't match (I'm a symmetry fanatic, I know... :p).

Cheers.

Miguel
 
I'm waiting for a few more of the normal Cooler Master 5.25" bay covers with dust filters; then they'll be covered up. That doesn't degrade airflow in any big way with those fans, and it makes things easier to keep clean :)

Trying to figure out the best settings for md now; not getting the best performance atm. 70-ish MB/s write, 500MB/s read...
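For reference, these are the usual knobs people suggest poking first - the values below are just starting points I'm experimenting with, not tested conclusions:

Code:
# raid5/6 stripe cache; costs RAM (entries x page size x number of disks)
echo 8192 > /sys/block/md0/md/stripe_cache_size

# larger readahead on the array device helps sequential reads
blockdev --setra 65536 /dev/md0

# let resyncs/rebuilds run faster when the array is otherwise idle
echo 200000 > /proc/sys/dev/raid/speed_limit_max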
 
Those actually look like UK power cables; EU ones usually end in either very slim 2-prong connectors (no ground) or large round 2-prong connectors (with two extra slits for the ground contacts) - check the Wikipedia article on AC power plugs, I'm talking about Europlugs and CEE 7/7 plugs.

Cheers.

Miguel

They don't look like UK plugs to me; our plugs are larger and flatter, with the cable coming out to the side (a much better design, might I add): http://www.jjeac.com/UploadFiles/JJA-16 UK plug.jpg
 
IF the PSU handles both voltages. Most of the newer ones are full-range, somewhat older ones have a switch, and there are some rare ones that are single-voltage only.
I have yet to come across one in at least 10 years that doesn't accept both.

I must admit, I'm curious about this one. Maybe it's best we wait for an explanation, no?
The shape is a giveaway. You can see the protrusion in the plastic where the US ground pin is. It's the wrong shape to be a UK plug, and it can't be a mainland-European one either, or else it wouldn't have that protrusion.
 
Nice system. Clean-looking outside, a consistent theme going on, and cheap: if you went with 12x2TB, the price on the HDDs would more than double right now (2TB = €200+ a pop, 1.5TB = ~€95 each), while the storage capacity would only go up by 33%.

I especially like the Samsung F2 drives, and I plan to use them on my NAS. They're not as low-power as WD GP drives, but you can actually count on staggered spin-up (not a standard feature on desktop WD drives), which can ultimately mean you can get by with a smaller PSU at boot time.

One question, though: what controller are you using, if I may ask? Supermicro? Or is it rather a standard SiI-based 4-port PCI SATA2 card, like Addonics sells?

Oh, and btw, does that case handle the 12 drives without adapters? Or are you using something special?

Also, to improve your airflow, you might want to redo your cabling. I know, nit-picking. Though I assume you'll get to it when you receive the rest of the hardware, right?

I'm liking the Samsung drives. They're actually running much cooler than the WD Green.

The controller is a standard Supermicro affair. It's the PCI-X one. I would have liked to get the PCI Express version, but Newegg wasn't selling it.

As for adapters, I'm using the Cooler Master 4-in-3 adapters. They were cheaper than the Lian Li ones, and I don't really need hot-swap, as the system is only used for streaming video and basic central storage.

Once I get the parts in from FrozenCPU I'm going to clean up the cabling. I have a bunch of much longer 90-to-180-degree SATA cables coming to make it cleaner. I'm really not satisfied with the cabling right now. And last but not least, thanks for the compliments =]
 
I'm liking the Samsung drives. They're actually running much cooler than the WD Green.
Interesting to know. That might actually mean I'm not giving up on temperatures (and/or power consumption?) by going with Samsung. Sweet.

The controller is a standard Supermicro affair. It's the PCI-X one
Glad to know my eyes are not that bad after all, since I botched up the "what's this plug" contest... :p

Sadly, stuff like that isn't usually available in Portugal... I have yet to find an easy (and cheap... and somewhat quick) way to get stuff from the US...

Oh, btw, do you know if the SuperMicro controller can handle Port Multipliers?

As for adapters, I'm using the Cooler Master 4-in-3 adapters.
Oh, nice. I forgot those things exist. I'm considering suspending the drives (I'm an avid SPCR reader :D), like 3x 5-in-3 (I'm using a Nox Coolbay case; I just need to get my hands on one more full-mesh 5.25" cover to make the thing look nice), but I'd be willing to consider drive bays if the noise/vibration is low enough. How does that work out with your setup?

Once I get the parts in from FrozenCPU I'm going to clean up the cabling. I have a bunch of much longer 90-to-180-degree SATA cables coming to make it cleaner. I'm really not satisfied with the cabling right now. And last but not least, thanks for the compliments =]
Those cables are probably going to do the trick. Can't wait to see the end result.

And you're welcome.

Cheers.

Miguel
 
I'm drunk and was just looking at my nice new 18TB box... it sure is pretty.
 
Decided that since it's basically a completely new build, it deserved a new post (this replaces my old server, a CM 590 build). Still working out a permanent fan solution, but happy to have it off the ground so far. The idea was to build something that's easy to add storage to as I need to grow, and that (ideally) is rock-solid stable.

ADVERTISED: 18.32TB

Case: Norco 4220
PS: Corsair 750TX
Mobo: Supermicro MBD-C2SBC-Q-O
CPU: Q9550
RAM: 8GB Corsair DDR2

Storage:
(2x) WD 320GB 7200rpm 2.5" (RAID 1)
(2x) Supermicro AOC-SASLP-MV8
(3x) WD Green 2.0 TB
(6x) WD Green 1.5TB
(1x) Seagate 1TB
(4x) Seagate 500GB

OS:
Windows Server 2008 R2 Datacenter
WHS (Hyper-V)
(Plan to add more VMs to play around with)

Could use a touch of cable tuckaway, but the SAS cables make it sooo easy :)
[Image: 4103856269_c20f6f03a6_o.jpg]

[Image: 4104606202_0439017530.jpg]

[Image: 4104606986_582315806e.jpg]

[Image: 4103843787_c9dc7eec3d.jpg]


Yes, the server is in my closet, and no, I don't pay for electricity :D (or rather, it's a flat rate included in the condo dues)
 
WHS as a guest was easy to set up? Hyper-V does direct disk passthrough, right?

99% easy. And yes on disk passthrough. I have two small issues (re: temp monitoring and hot-swapping) I still need to resolve, but it's very straightforward to set up in Hyper-V. Disk transfer speeds are plenty good.

Plus the hope is that I'm now fairly secure against the possibility of a server disk failing and having to worry about a reinstall.
 
[Image: 4103843787_c9dc7eec3d.jpg]

Yes, the server is in my closet, and no, I don't pay for electricity :D (or rather, it's a flat rate included in the condo dues)

I noticed in that last pic that the cover was off while it was running. Just a little warning: you should be very careful about doing that for more than a few minutes, as it can/will cause the drives to overheat.
 
99% easy. And yes on disk passthrough. I have two small issues (re: temp monitoring and hot-swapping) I still need to resolve, but it's very straightforward to set up in Hyper-V. Disk transfer speeds are plenty good.

Plus the hope is that I'm now fairly secure against the possibility of a server disk failing and having to worry about a reinstall.

I have the same problems.
What controller are you using?
 
I noticed in that last pic that the cover was off while it was running. Just a little warning: you should be very careful about doing that for more than a few minutes, as it can/will cause the drives to overheat.

No worries, it was just 2 minutes to snap the pic of the internals; I had forgotten to do it when I was building it.

I have the same problems.
What controller are you using?

The Supermicro AOC-SASLP-MV8 card handles 16 of the bays, and then a reverse breakout off the mobo covers the remaining 4. It appears to be a problem with all the bays, though.

EDIT: Found that I can get SpeedFan to work under the host OS (Server 2008), but no temp diodes can be detected within WHS. At least I can check disk temps now (and they are surprisingly low considering the low-speed fans I replaced the main fan wall with... actually a bit surprised). Not sure if there is a workaround or not.

Still need to figure out the hotswap issue at least!
 
No worries, it was just 2 minutes to snap the pic of the internals; I had forgotten to do it when I was building it.



The Supermicro AOC-SASLP-MV8 card handles 16 of the bays, and then a reverse breakout off the mobo covers the remaining 4. It appears to be a problem with all the bays, though.

EDIT: Found that I can get SpeedFan to work under the host OS (Server 2008), but no temp diodes can be detected within WHS. At least I can check disk temps now (and they are surprisingly low considering the low-speed fans I replaced the main fan wall with... actually a bit surprised). Not sure if there is a workaround or not.

Still need to figure out the hotswap issue at least!

Do those cards support port multipliers? I'm wanting to build a nice Norco case too, and I'm trying to find the cheapest/best way to easily expand up to 20 drives.

Shawn
 
Well, two Supermicro cards = 16 ports + 4 onboard ports = 20 :)

Do those cards support port multipliers? I'm wanting to build a nice Norco case too, and I'm trying to find the cheapest/best way to easily expand up to 20 drives.

Shawn

This :) The mobo came with 6 onboard SATA ports, and each of the cards does 8 SATA drives, so that's 22 SATA devices. Works perfectly for 20 data drives, an OS drive, and a disc drive (or, in my case, two OS drives in RAID 1).

The cards are ~$100 apiece plus the cost of the cables. Cheap and easy!
 
Hey man, any good links on how to do WHS under Hyper-V?

Not that I know of.
If you're familiar with virtualizing stuff, it's pretty straightforward IMO.

If you plan on doing a migration, you may run into issues depending on how many disks you have.

BTW: in order to do disk passthrough, you must take the disk offline on the host before mapping it to the VM.
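If it helps, the offline step is just diskpart on the host - something like the below, where the disk number is only an example (use whatever Disk Management shows for the drive you're passing through):

Code:
DISKPART> list disk
DISKPART> select disk 4
DISKPART> attributes disk clear readonly
DISKPART> offline disk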
 
I wonder why you cannot get hot-swapping to work. Does it work in the host OS?

With the VM offline, it works perfectly. I was able to quickly add 9 or 10 drives and then pull them offline in Disk Management. With the WHS VM running, it would actually screw up Disk Management in the host OS when trying to hotswap drives and pull them offline (THIS was the big problem, because it required a full restart of the host OS to get Disk Management working again).

I can't remember if I tried adding/removing drives to the WHS VM with it running, after having already added them physically prior to it starting (i.e. "virtual" hotswapping them). I know it's supposed to be supported by Hyper-V R2...

Luckily I don't plan on adding/removing drives often, and it's still relatively quick to power down the VM, add a drive, and power it back up... though still annoying.
 
Mobo: Gigabyte GA-MA790XT-UD4P
CPU: AMD Athlon II X2 240
RAM: 2GB Kingston DDR2-800 (2x 1GB)
Case: Norco 4220
Power: Corsair HX1000
Controllers: Intel SASUC8I HBA (2x)

Storage: (20x) Western Digital WD15EADS (16 attached to HBA - 4 attached to onboard mobo)

OS: Right now, FreeNAS. Want to use Windows 2008 R2, but RAID-5 performance sucks balls. ZFS rocks.

Code:
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
bafs                   27.3T    144G   27.1T     0%  ONLINE     -

 pool: bafs
 state: ONLINE
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	bafs        ONLINE       0     0     0
	  raidz2    ONLINE       0     0     0
	    ad12    ONLINE       0     0     0
	    ad14    ONLINE       0     0     0
	    ad16    ONLINE       0     0     0
	    ad18    ONLINE       0     0     0
	    da0     ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	    da3     ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	    da5     ONLINE       0     0     0
	    da6     ONLINE       0     0     0
	    da7     ONLINE       0     0     0
	    da8     ONLINE       0     0     0
	    da9     ONLINE       0     0     0
	    da10    ONLINE       0     0     0
	    da11    ONLINE       0     0     0
	    da12    ONLINE       0     0     0
	    da13    ONLINE       0     0     0
	    da14    ONLINE       0     0     0
	    da15    ONLINE       0     0     0
Pics: http://www.flickr.com/photos/aagius/sets/72157622182971220/

I am getting horrible performance with ZFS on FreeNAS. I can't get any more than 30MB/sec off the drives. When using a single drive with UFS, I can get 70-80MB/sec. I think I need to do some tweaking. Anyone have any tips?

Also, anyone know of ways to improve Win2k8 RAID performance? Do I need a hardware RAID card to get good Win2k8 performance?
 
Mobo: Gigabyte GA-MA790XT-UD4P
CPU: AMD Athlon II X2 240
RAM: 2GB Kingston DDR2-800 (2x 1GB)
Case: Norco 4220
Power: Corsair HX1000
Controllers: Intel SASUC8I HBA (2x)

Storage: (20x) Western Digital WD15EADS (16 attached to HBA - 4 attached to onboard mobo)

OS: Right now, FreeNAS. Want to use Windows 2008 R2, but RAID-5 performance sucks balls. ZFS rocks.

Pics: http://www.flickr.com/photos/aagius/sets/72157622182971220/

I am getting horrible performance with ZFS on FreeNAS. I can't get any more than 30MB/sec off the drives. When using a single drive with UFS, I can get 70-80MB/sec. I think I need to do some tweaking. Anyone have any tips?

Also, anyone know of ways to improve Win2k8 RAID performance? Do I need a hardware RAID card to get good Win2k8 performance?

Make sure you check drive temps with those Noctua fans; they might be a little underpowered, but you might be fine because the drives are all low-power.

As for software-based RAID, make sure you have a very fast CPU and a good network card to offload work from the CPU, and good luck with 20 drives (that's a lot of parity calculations).

Also, what was your CPU load when you benchmarked that? I had some problems getting a server to use more than 2-3% of the CPU for parity calculations, and the only thing anyone could come up with was a possible software conflict somewhere.
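On the ZFS side, the usual first stop on FreeBSD-based FreeNAS is the loader tunables - with only 2GB of RAM you may simply be ARC-starved. Something like the below in /boot/loader.conf is the commonly suggested starting point (the values are illustrative; size them to your RAM):

Code:
# /boot/loader.conf - illustrative starting points, not tested on this box
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="1024M"
# worth toggling if prefetch hurts more than it helps
vfs.zfs.prefetch_disable="1"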
 
Are you trying to kill your drives? Get those Noctua fans out of there; they're not suited for that usage at all.
 
Are you trying to kill your drives? Get those Noctua fans out of there; they're not suited for that usage at all.

Oh please, d00d, they are EADS drives; they put out zero heat.
I have Panaflos in my Norco that move hardly any air, and I do not have heat problems.

Seriously, at work the drive temps on our 15k SAS array are 54-58°C and they don't have problems.
Heat is not as big a deal as people make it out to be. Lower is better because it means less fan noise and potentially a more efficient box, but from a reliability standpoint there is nothing to worry about.
 
15k SAS drives are rated for higher temps than consumer drives. The problem with those old-style Noctua fans is that they don't handle high impedance well at all, and they'll probably move significantly less air than your Panaflos at the same RPM.

And 20 of those drives is still ~120W of heat while reading/writing and ~80W while idling; that's not insignificant, even though it's not as bad as it could be.
 