The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

Status
Not open for further replies.
Much of the reason for the cabling setup was both aesthetic and practical. This machine was getting shipped from one side of the country to the other, and I've had problems in the past with connectors coming loose that I had NO idea could possibly do so. I guess the vibrations can get pretty intense during shipping. So I clamped all cables like I used to when I was wiring aircraft (I wired much of the B-2 Stealth Bomber cockpit). I've long employed hot glue dabs at connection points -- and SATA connections are an especially good place for them. I've never had problems pulling them off with a moderate amount of hand force though. No biggie actually. This time, the shipping went through without a single wiring problem on arrival.

...Mind you, I've wanted to wire a case like that for a very long time. Many years ago, I experimented with custom harnesses like this:
0026.jpg

But then SATA came out, and I never had a machine to build that was worthy of the effort until this one...
Haha... I saw those clamps and I was like... "Those are like the clamps I use at work!"

I'm an F-15 avionics troop.
 
@ Dieta, Nice little setup, although I would consider physical location next time - I am not sure I would want to stick my setup that close to pipework joints, or what looks to be some kind of pressure tank!
 
I wouldn't recommend ext4 if you are using all the storage on a single file system (over 16 TiB). Support for >16 TiB is still not mature on ext4, and when I emailed their mailing list asking about it a few months ago I was told that it isn't considered stable and that I shouldn't use ext4 if I wanted stability.

The only file-systems that I know of that seem to handle >16 TiB correctly are JFS and XFS. I am not a big fan of XFS due to corruption I have seen in the past, and JFS has a bit of an annoying bug where an fsck is required on an unclean shutdown if you have more than 12 TB of data on the array, due to the log not being able to be replayed.

I chose JFS as an fsck on my 36 TB file-system only takes around 15 minutes and I almost never unmount uncleanly. It took around 8 minutes on my 18 TB file-system.

Thanks for the info, I already knew it; I was planning to use three filesystems anyway, but it's a pity for sure.

Creating an ext4 fs takes under 3 minutes for 12 TB,
and a forced check takes 2 min 30 s (on a clean fs).

As for the XFS/JFS choice, I don't know JFS well, but I've had performance trouble with XFS under load from multiple users (heavy fragmentation).
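For anyone wanting to reproduce those timings, the commands would look roughly like this. /dev/md0 is a placeholder for the array device, and the `run` wrapper only prints each command, so the sketch is harmless to paste as-is:

```shell
# /dev/md0 stands in for your 12 TB array device -- substitute your own.
# 'run' only echoes each command; drop the prefix to execute for real.
run() { echo "+ $*"; }

run time mkfs.ext4 -m 0 -T largefile4 /dev/md0   # ~3 min on 12 TB per the post above
run time e2fsck -f -n /dev/md0                   # forced, read-only check (~2.5 min)
run time mkfs.jfs -q /dev/md0                    # the JFS equivalents
run time fsck.jfs -n /dev/md0
```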
 
Fragmentation is another problem I saw with XFS. I saw much better results on JFS. Using a file-system that is 90-99% full most of the time and constantly writing new files (mythtv), I saw very few files that had over 10-15 fragments, and we are talking multiple-gigabyte files, so at that level performance was not affected. I saw excessive fragmentation with mythtv on XFS.

If you do want to test out JFS, you will need to make sure you use the CVS version for >32 TiB support in mkfs/fsck.
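For anyone wanting to count fragments themselves, the usual tools look like this (paths below are placeholders, not the poster's):

```text
$ filefrag /recordings/*.mpg    # prints the extent count per file (from e2fsprogs, works on most filesystems)
$ xfs_fsr -v /mnt/storage       # xfsprogs' online defragmenter, XFS only
```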
 
@ Dieta, Nice little setup, although I would consider physical location next time - I am not sure I would want to stick my setup that close to pipework joints, or what looks to be some kind of pressure tank!

Understandable argument - from your point of view :) The "pressure tank" is not for pressure but for small amounts of heating water (a few ml/year) that comes from the floor heating in the house. The pipes you see in the background are indeed for the heating itself, but as they are very well insulated, the temperature around them is very low. Nevertheless - a good point, and maybe I should move it back more into the middle of the room.

Speaking about the storage itself: I was running a backup script, written in bash using rsync and prowl (perl), for the last 3 days. Believe it or not, the FireWire 800 bandwidth on this Mac Mini is so low that I was only able to transfer 10-20MB/s. Backing up ~4TB took ages - and I cancelled it. I used a PCIe x1 eSATA card and 2 of these Onnto DataTale boxes in my Mac Pro before I sold it, and I found out that that was slow, too. Sure, having a native FS is nice, but given this poor speed, I just installed a Windows Server on an old PC (Athlon64/3700+, 2GB RAM) and built in this particular Sil3132 card. To my surprise, the benchmark and real-life results are more than respectable.

HD Tune Pro, for example, reports an average transfer rate of 97MB/s for the external RAID box.

Real-life testing - meaning copying a VM consisting of 2GB files from the internal HD (Samsung 1.5TB) to the external eSATA box - ends up around 82-85MB/s. That's MUCH more than 10MB/s ;)

So I have to admit that getting Mac Minis is a BAD idea if you deal with more data than just a few gigabytes of music, photos and so on. I also have to admit that this is the first Windows machine in a long time in our Mac environment.

Consider this a warning if anyone finds the idea of using a Mac Mini as a fileserver appealing ;)
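For what it's worth, the bones of such an rsync backup script might look like this. The host and paths are made up for illustration, and the prowl push-notification part is left out:

```shell
#!/bin/bash
# Minimal sketch of an rsync backup of the kind described above.
# Host/paths are illustrative placeholders, not the poster's setup.
backup_media() {
    local src="$1" dest="$2" log="$3"
    if rsync -aH --delete --stats "$src" "$dest" >>"$log" 2>&1; then
        echo "backup ok: $(date)" >>"$log"
    else
        echo "backup FAILED: $(date)" >>"$log"
        return 1
    fi
}

# e.g.: backup_media user@macmini.local:/Volumes/Media/ /backup/media/ /var/log/backup.log
```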
 
After having snapped a few shots of my new Norco case, I thought it was time to show off my current and future setups alongside each other :)

My current file-server (Fedora13):

MB: Asus M2N WS
Proc: Athlon64 3000+
Mem: 2x2 GB KingMax DDR2-800
SATA ctrl: Supermicro PCI-X card with Marvell MV88SX6081
Case: Chieftec tower
PSU: CoolerMaster 620W
HDDs (all in JBOD formatted as Ext3):
1x250GB WD system
1x400GB Samsung SATA
6x500GB Samsung SATA
1x500GB WD SATA
1x750GB Samsung SATA
2x1TB WD SATA
1x1,5TB Samsung SATA
1x1,5TB Seagate SATA
1x2TB WD SATA
-----------------------------
Total: 11,9TB raw
~10,66TB formatted

Backup (Fedora13):

MB: some refurbished Fujitsu S775 board
Proc: CeleronD 2,4GHz
Mem: 2x512 MB Corsair DDR-400
SATA ctrl: noname 2-port SiliconImage PCI Express x1 card
Case: small ATX case with an extra drive cage screwed on top
PSU: Asus 400W
HDDs (all in JBOD formatted as Ext3):
1x60GB Maxtor IDE
1x300GB Maxtor IDE
1x120GB WD 2,5" SATA
2x500GB Samsung SATA
3x500GB WD SATA
---------------------------------
Total: 2,98TB raw
~2,3TB formatted

New file-server to be (will probably have Fedora14 by the time everything gets here):

MB: Tyan S8005 (ordered, not sure which version will ship, probably the basic without SAS, this one will be coming from Germany)
Proc: probably a dual core Phenom II or maybe an AthlonII, I certainly won't go Opteron (not yet ordered)
Mem: a 4GB Kingston DDR3 kit (2x2GB modules, not yet ordered)
SATA ctrl: Supermicro AOC-SASLP-MV8 (already on hand, came in from the Netherlands)
Case: Norco RPC-4020 (which ended up costing me about $750 to get to Hungary, also got the rails for it)
PSU: CoolerMaster Real Power M620 (got it from a French company, who shipped it from Germany)
HDDs (will be in linux soft RAID 5):
1x60GB Seagate 2,5" SATA already built in :)
These are still to come: there will be 5 or 6 WD Green series 2TB HDDs, and I'll add the 1 I already have to the volume once the data is moved. So it will be at least 10TB of usable space at launch; later, more drives will come to fill up the bays.
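A rough sketch of how that array could be created and later grown with mdadm - the device names are placeholders, and the `run` wrapper only prints the commands rather than executing them:

```shell
# 'run' only echoes each command; drop the prefix to execute for real.
run() { echo "+ $*"; }

# create the initial 6-drive RAID 5 (sdb..sdg are placeholders)
run mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]
run "mdadm --detail --scan >> /etc/mdadm.conf"

# later: add the existing 2TB drive once its data is on the array
run mdadm /dev/md0 --add /dev/sdh
run mdadm --grow /dev/md0 --raid-devices=7
run resize2fs /dev/md0   # then grow the filesystem to match (if ext3/ext4)
```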

On to the pictures:






And here is the optical drive I've got out of an older notebook; the backplane almost fits :)


I've also got a Salgó shelf coming with rack-mount-sized shelves. It's basically 4 legs with lots of predrilled holes and a couple of 800x500mm shelves. This, counting the dimensions of the legs, should make it rack compatible in size, I hope. :) It only costs about ~$50-60 and it's available locally. I hope it will fit the cases without much hassle. :)

Here is a link, if someone is interested: http://www.ugp.hu/csavaros_salgo_polcok
 
Looking good Klart, what motherboard / CPU / RAM are you planning on putting into that beast?
 
Looks pretty cool, klart, but isn't it a bit risky to use this stuff as JBOD? I mean, if one drive fails, you will lose a lot of data. As far as I understand, JBOD (just a bunch of disks) spans a volume over X drives and writes files over the "edges" onto the next HD; if one HD fails, you will lose the data of that HD and the overlapping files, right?
 
Looking good Klart, what motherboard / CPU / RAM are you planning on putting into that beast?

I've got the whole thing described under the new file-server part.

Looks pretty cool, klart, but isn't it a bit risky to use this stuff as JBOD? I mean, if one drive fails, you will lose a lot of data. As far as I understand, JBOD (just a bunch of disks) spans a volume over X drives and writes files over the "edges" onto the next HD; if one HD fails, you will lose the data of that HD and the overlapping files, right?

We have a bit of confusion here; as Wikipedia states, JBOD can mean 2 things. I actually meant standalone HDDs. I know it isn't too secure, since if a drive that isn't backed up fails, I'll lose the data. But at least it wouldn't affect any other drives. As I'll be keeping the old server for backup, data security will get a lot better as soon as the new server is complete.

BTW, I've been using a file-server for over 8 years now, and I've never had a complete drive failure in it. Some drives developed a few bad sectors resulting in the loss of 1-2 files, but that usually wasn't that much of a pain. The SMART daemon and e-mail notification rock :)
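The smartd side of that is a one-liner in its config; here is an illustrative entry, not the actual one (address and device are placeholders):

```conf
# /etc/smartd.conf
# -a            monitor all SMART attributes and overall health
# -m <addr>     mail this address when a problem is found
# -M daily      repeat the warning daily until it's fixed
/dev/sda -a -m admin@example.com -M daily
```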
 
I've just got some bad news: the Tyan board I ordered can no longer be acquired from that retailer. I did want to go professional on this build, but more advanced Opteron boards would be just too expensive, and the CPUs fitting them would consume lots of energy without me being able to use their potential.

So I'll probably go with a widely available MSI 790FX-GD70 and a 45W Athlon II dual-core. This board has lots of PCI Express slots for future expansion, no integrated video to mess with the system RAM, and 2 NICs, which I kinda need. I'll just add an old 1MB PCI VGA card and it will do just fine :)
 
It would be interesting to read some info on cost, as well as what all this HDD space is being utilized for.
 
It would be interesting to read some info on cost, as well as what all this HDD space is being utilized for.

Well, the use is the simplest part: as for most home users, this server will mostly store my multimedia files, offering them over the network. I mostly use it from my desktop, which doubles as my media center.

Talking about cost, it may not be very relevant to most of you, since I live in Hungary (an EU member since 2004, located in Central Europe, on the western edge of the ex-Soviet bloc). The currency we use is the Hungarian Forint (HUF). As of October 26, 2010, it's 196 HUF for 1 US$ and 273 HUF for 1 EUR (the Euro is the common currency of most EU states).

The bottom line is, those of you living in the US or Canada, could probably get the same setup for much less money.

After all that, let's see the costs:

Norco RPC-4020 + RL-26 rails + slim adapter: US$ 606 (~340 for the cost, rest for shipping) + 31.000 HUF for Hungarian customs fee and handling.

Supermicro AOC-SASLP-MV8: 111EUR + 20EUR shipping

CoolerMaster Real Power M620: ~25.000 HUF from an EU wide online retailer, would have been more expensive locally

2x SATA ML -> SATA cables: a bit over 9.000 HUF (bought locally at about 2-3 times the price since shipping costs would have been even higher)

The rest is widely available locally since those are consumer products; because of their availability and my shortness of cash, I'm holding off on buying them :) :

MSI 790FX-GD70: 41.000 HUF
Athlon II X2 235e BOX: 16.875 HUF
6 x 2TB SATA-II 64MB WD20EARS: 6 x 24.375 HUF = 146.250 HUF


It all comes to 423.574 HUF ~ 2161 US$ ~ 1551 EUR. For Europeans this would probably cost about the same, but for those in the US it would come to about 2/3 of that or less. I've noticed that most prices on computer hardware are usually the same number in US$ as in Euros, which is strange, since 1 EUR is worth ~1,39 US$. I think it must have to do with customs or shipping costs. Or maybe Europeans just buy less stuff :)

As for a fully loaded server of the same config (20x 2TB WD Greens), it would be 764.824 HUF ~ 3902 US$ ~ 2801 EUR.
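The conversions check out against the quoted rates when truncated to whole units; a quick sketch:

```python
# Rates as quoted in the post: 196 HUF/USD, 273 HUF/EUR.
HUF_PER_USD = 196
HUF_PER_EUR = 273

def convert(huf):
    """Return (USD, EUR), truncated to whole units."""
    return huf // HUF_PER_USD, huf // HUF_PER_EUR

print(convert(423_574))  # current build -> (2161, 1551)
print(convert(764_824))  # fully loaded  -> (3902, 2801)
```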

Just as a reference: a guy like me, a relatively junior network/system admin in Hungary, earns about 150.000-200.000 HUF a month after taxes. This translates to ~765-1020 US$ or ~549-732 EUR. If it wasn't for the financial support of my family, I could never afford all this and maintain my current lifestyle. I think you can see that I'm a real maniac for spending this amount of money on these things :D

I actually did lots of research on trying to find the cheapest way to get it all together, so if someone else in Europe would want a similar kind of a setup, I could give them a few pointers.
 
WOW, and to think my plans to build an 8TB home server felt like overkill - now they seem weak. I think 99% of the guys in this thread scare the hell out of me, with the exception of the one guy with the Antec Mini 180; it's just right. LOL. What are you doing with that much data/porn? J/K. Great thread and very informative if you read all the posts. I learned a thing or two, so thanks guys. Now I want to see a 10+TB server built with SSDs. Ahhhhhhh
 
Morning guys,

I'm introducing my 22TB Storage server "Dockmaster FS 20000".

logo.png

Amount of storage in the following system: 22.08TB

Advertised Space: 22.08TB
Formatted Space: 16.44TB

Case: Yeong Yang YY-B0221
PSU: Corsair 650W
Motherboard: ASUS M4A89GTD PRO
CPU: AMD Athlon64 X4 605e
RAM: 12GB Kingston ECC DDR3/1333
GPU (if discrete): onboard ATI
Controller Cards (if any): 2x Intel SASUC8i
Optical Drives: external USB LG DVD-R/W/RAM
Hard Drives (include full model number):
System: 160GB IDE HDD WD Caviar Blue
Storage:
5x Samsung HD154UI 1.5TB (RAIDz1)
3x WD Green WD15EARS* 1.5TB (RAIDz1)
5x WD Green WD10EADS/EACS/EAVS 1TB (+ 1x HotSpare) +
2x Hitachi HDS721010CLA332 1TB (RAIDz1)
3x WD Blue 640GB (RAID5) in external Case via eSATA


Battery Backup Units (if any): APC UPS RS800
Operating System: Nexenta Core

The system is meant to be a fileserver for several Mac clients on the LAN. Every room in our house is wired with Gigabit networking (Cat6 & Cat7 cables, connected to a Cisco GBit switch in the basement and an HP ProCurve in the office).

I will use SMB for regular file sharing as it's much faster than AFP and doesn't have the annoying read & write permission problems AFP has. For video editing content, I will create a 2TB iSCSI target.

Backups will be made onto the smallest 4.5TB RAID5 (3TB usable), and only really important data will be backed up, like photos, documents, and work content. A huge part of the HDs will be used for copies of our own DVDs & BDs. We have 3 media boxes throughout the house that are able to stream different formats. Also, we have 2 Mac Minis in our environment; one runs OS X Server (an iSCSI Time Machine volume will be used on this one), and the other has an Elgato EyeTV DVB-S2 USB box to record TV in HD (or SD) to the server.

The fileserver's FS will be ZFS.
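Roughly how one of the raidz1 pools and the 2TB zvol backing the iSCSI target could be created on Nexenta - the device names match the zpool output further down, but "editvol" is a made-up name for illustration:

```text
$ zpool create mediapool raidz1 c0t0d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
$ zfs create -V 2T mediapool/editvol                 # zvol for the iSCSI target
$ sbdadm create-lu /dev/zvol/rdsk/mediapool/editvol  # register it as a COMSTAR logical unit
$ stmfadm add-view <GUID printed by sbdadm>
$ itadm create-target
```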

This server is energy efficient as it's based on a low-energy CPU (Athlon "e" series). Even so, I think it will still draw some power.

The system itself is planned to be set up today. Just got the last spare parts yesterday.

Photos:
dockmaster-1.jpg


dockmaster-2.jpg


dockmaster-3.jpg


dockmaster-4.jpg


and a few beauty photos of the hardware I've used:
luefter.jpg


yybox-single.jpg


yybox-double.jpg


satakabel.jpg


case-boxed.jpg


case-in-box.jpg


case.jpg


case-openfront.jpg


ssdhalter.jpg


nic.jpg


nic2.jpg


sff.jpg


ram.jpg


nt.jpg


ntopen.jpg


mbpcie.jpg


ports.jpg


sata3.jpg


mb.jpg


mediapool.jpg


hddin.jpg


view1.jpg


view2.jpg


backview.jpg


frontview.jpg


zahlen.jpg



Final photos:

board-open.jpg


hdds-open.jpg


front-open.jpg


ports-back.jpg


ports-back2.jpg


fans-back.jpg


case-front.jpg



The eSATA drive is not yet attached in the following console outputs. Will update that, once I've done the backup from that particular device and formatted it with ZFS :)

Code:
dieta@gigi:~$ zpool status
  pool: backup
 state: ONLINE
 scan: scrub repaired 0 in 3h27m with 0 errors on Tue Nov  9 21:16:06 2010
config:

        NAME        STATE     READ WRITE CKSUM
        backup      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0

errors: No known data errors

  pool: mediapool
 state: ONLINE
 scan: scrub repaired 0 in 5h43m with 0 errors on Tue Nov  9 23:31:44 2010
config:

        NAME        STATE     READ WRITE CKSUM
        mediapool   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0

errors: No known data errors

  pool: storagepool
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Tue Nov  9 21:47:42 2010
config:

        NAME        STATE     READ WRITE CKSUM
        storagepool  ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0
            c3t5d0  ONLINE       0     0     0
            c3t7d0  ONLINE       0     0     0

errors: No known data errors

  pool: syspool
 state: ONLINE
 scan: scrub repaired 0 in 0h3m with 0 errors on Tue Nov  9 21:17:48 2010
config:

        NAME        STATE     READ WRITE CKSUM
        syspool     ONLINE       0     0     0
          c1d0s0    ONLINE       0     0     0

errors: No known data errors

dieta@gigi:~$ zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
backup                   223G  2.45T   223G  /backup
mediapool               2.80T  2.56T  2.80T  /mediapool
storagepool              150K  5.29T  47.7K  /storagepool
syspool                 11.1G   135G  35.5K  legacy
syspool/dump            8.38G   135G  8.38G  -
syspool/rootfs-nmu-000  1.68G   135G  1.33G  legacy
syspool/rootfs-nmu-001  35.5K   135G  1.02G  legacy
syspool/swap            1.03G   136G    16K  -

dieta@gigi:~$ uname -a
SunOS gigi 5.11 NexentaOS_134f i86pc i386 i86pc Solaris

dieta@gigi:~$ df -h -F zfs
Filesystem             size   used  avail capacity  Mounted on
syspool/rootfs-nmu-000
                       146G   1.3G   135G     1%    /
backup                 2.7T   223G   2.4T     9%    /backup
mediapool              5.4T   2.8T   2.6T    53%    /mediapool
storagepool            5.3T    47K   5.3T     1%    /storagepool

Ciao
Dennis
 
Love those cubes, I still have one from 1998, they haven't changed too much.
 
Won't that get a bit warm?

You mean my server? No! It's pretty cool actually. I have a lot of fans inside, which makes it a bit noisier, but not screaming like 19" servers ;) In the end, it will be in the basement, so that's fine.
 
Fragmentation is another problem I saw with XFS. I saw much better results on JFS. Using a file-system that is 90-99% full most of the time and constantly writing new files (mythtv), I saw very few files that had over 10-15 fragments, and we are talking multiple-gigabyte files, so at that level performance was not affected. I saw excessive fragmentation with mythtv on XFS.

If you do want to test out JFS, you will need to make sure you use the CVS version for >32 TiB support in mkfs/fsck.

xfs_fsr ?

You give a compelling discussion for checking out JFS though, I will have to try it out myself. I've been using XFS for some time on my OpenFiler boxes just as NFS servers for backup of my web servers (I own a small hosting company) but have been testing various configurations at home for my 64TB media storage box, and have not found something where I am happy enough to move my data back onto it yet.

After trying Openfiler on it for a couple of weeks, I was too frustrated with drives timing out of the array, even after booting a CD to set scterc,70,70 first (Openfiler's 2.6.29 build comes with smartctl 5.38).
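For reference, the ERC tweak mentioned is done with smartctl (a newer build than the 5.38 shipped there; /dev/sdX is a placeholder):

```text
$ smartctl -l scterc,70,70 /dev/sdX   # set error recovery to 7.0 s for reads and writes
$ smartctl -l scterc /dev/sdX         # query the current setting
```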

I'm at the point now of tossing out all my WD20EADS and replacing them with Samsung HD204UIs; of all the drives in this box, those have never timed out. 1 Hitachi HDS722020ALA330 did, but I think it was a faulty drive; I'd still buy them. I am just at my wits' end with Western Digital. Thank god the Egg had the Samsung F4s at $80 over the weekend.

I spent the day at work today reading all 49 pages of this thread. A lot of very impressive ideas at work here. I'll have to find a camera to snap some shots and join the fray ;)
 
30.12TB in a single chassis (Edit: 6/6/2011)
18 disks - (6 - Hitachi 2TB 5k3000, 2 - WD 2TB Green, 1 - WD 2TB Black, 8 - Seagate 1.5TB 7200.11, 1 - corsair force 120gb)

I have officially bastardized my Cosmos case. It is the tower that wanted to be a rack-mounted server chassis. I added an additional 4 2TB disks (Hitachi 5K3000) to what I had below. I was rather amazed with my final update to fit in a few more disks. The Cooler Master Cosmos 4-into-3 fit quite nicely here. I did a good deal of fabrication to fit this in a way with "good" airflow, including cutting a big hole into the forward drive bay for the lower 120mm fan. In the pic I have only 2 of the 4 disks in the enclosure, but you get the idea. Actually, once it is all inside it looks like it was meant to be. The mobo power/CMOS buttons and LCD are plainly visible in about the only place they could be. I just hope my mobo never dies :). That 4-in-3 enclosure is actually resting on what was meant to be one of its side panels. I riveted some tabs onto the side panel, and then the side panel is riveted to the case. The 4-in-3 enclosure (actually 5 drives with the OS SSD attached) lifts above the tabs I mounted on the side panel, sets into place, and then gets screwed in. Works quite well, but it is pretty tight. Though I'm sure I could fit some more drives in there (look at all the space up top), I think I'm done with this one. Next step will be a Norco case, I think.

Temps are still quite tolerable. I've been doing a bulk transfer to the new disks for a couple of hours. Areca temps are 58 CPU and 54 controller. The hottest drive after 2 hours of full-speed transfer is 49 degrees; it's in the iStarUSA 5-in-3 enclosure and is an old Seagate 7200.11 1.5TB drive, so those run hotter anyway. The Hitachi 5K3000 2TB drives are under 40 degrees. My i7-930 CPU is mid 50s, but file transfer isn't too taxing. I may add the other fan back to the other side of the Hyper 212 CPU heatsink if I notice it ever hit 80+ at load. I'm quite comfortable with those numbers, as they are really good, especially for a tower like this.

Some other updates:
Sapphire ATI 6950 2gb (for eyefinity), Added a hp zr24w as a third monitor at the left side in vertical orientation. This is also on a 4 way adjustable arm with swivel. I had to move my logitech speakers all to the top of the tv so all the monitors are next to each other. I replaced my areca 1880i with an areca 1880ix-16. I just wanted to use a single slot and I could never get that hp expander working nicely before anyway. The bigger areca cards are a bit lacking in the heat sink dept imho so I added a slot cooler. Finally, I swapped the fan in the 5-3 front enclosure with a silenx with red led because the stock fan was too loud and didn't even have any better airflow.

IMG_4724.jpg

IMG_4828.jpg


21.12TB in a single chassis (as of summer 2010)

Case: COOLER MASTER COSMOS 1000
PSU: CORSAIR HX Series CMPSU-1000HX 1000W
Motherboard: ASRock X58 Extreme LGA 1366 Intel X58 ATX Intel Motherboard
CPU: Intel Core i7-930 Bloomfield 2.8GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1366 130W Quad-Core
Video Card: MSI N460GTX CYCLONE 1GD5/OC GeForce GTX 460 (Fermi) 1GB 256-bit GDDR5 PCI Express 2.0 x16 HDCP
Ram: 2 kits for 12GB - CORSAIR XMS3 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) (had to RMA one set, so one set of Patriot is in there now)
Raid (HBA) Card: areca ARC-1880i PCI-Express 2.0 x8 SATA / SAS (Operating in JBOD mode)
Expander: HP SAS Expander
5-3 Enclosure: iStarUSA BPU-350SATA 3x5.25" to 5x3.5" SATA2.0 Hot-Swap Backplane Raid Cage
Optical - 2
Monitors - 40" Sony KDL-40Z4100 LCD and 23" Dell U2311H
Sound: Logitech z5500 fed off optical from mobo
OS HD: Corsair Force CSSD-F120GB2-BRKT 2.5" 120GB SATA II MLC Internal Solid State Drive (SSD)
Other Hard Drives: 1 - WD Black 2tb, 2 - WD Green 2tb, 10 - Seagate 7200.11 1.5tb

Windows 7 64bit Ultimate
Raid 6 (t2+) - Flexraid!

14 drives in a case made for 6. Lots of extra fans and everything running very cool. Made brackets out of 1/4" plywood to house 2 more hard drives in space behind the other 6 in cages (not enough room for oem cages). There is a close up pic of this below and it shows how I cut out this plywood bracket to keep airflow between the drives. The bottom plywood bracket is screwed to the drive. The top one is placed in later as a wedge once the drive is lifted over the slots next to it and set into place. It cannot slide around at all since the lower bracket is cut to the exact width of the case and size of that space and the wedge at the top doesn't allow it to tip either. This is unconventional but totally secure and the only way I could think of to make it work. To get those 2 drives out you'd need to remove the tray next to it first. Not a big deal and it seems like wasted space in the design of the case but then I'm sure they didn't design it with 14 drives in mind either. SSD mounted in plain sight right behind cages.

Too bad that Ceton InfiniTV 4 (4 tuner cablecard) is selling for around $800 (twice retail) on ebay or I’d have one of those in here too.

I wanted a single pc solution for everything that was quiet enough to use in my bedroom. In a couple years I'll have structured wiring in place throughout my house and migrate to a rack setup with a dedicated media server like most of you are using. As you can see some of this build is new and some is stuff taken from my last one. I had all the 1.5tb drives before and it isn't worth it for me to upgrade them to 2tb. The hd dvd optical drives are obviously a little dated but I still use them and really like my dual format hd dvd/blu ray LG :). I bought the areca 1880i in case I needed to use hardware raid 6 but flexraid is working nicely for me so if I was doing it over I'd use the lsi 9211-8i and save $350. I use the 40" sony and 23" dell as my monitor/tv in my bedroom. Both on 4 way adj arms. Makes it nice to view the 40" from bed. Fabricated the supports for the speakers onto the tv mount so they stay in proper orientation depending on where the display is rotated and it is less clutter on the desk. Dell 23" was a recent addition and mounted at right height to pull out right under the speaker on the main lcd and fit under my les paul. Also stream to a 60" lcd in another room through ps3 using ps3 media server. I'm pretty happy with all this. Now to add a norco 4224 to make use of that expander!

IMG_2471.JPG

IMG_2480.JPG

IMG_2576.JPG

IMG_2562.JPG

IMG_2560.JPG

IMG_2505.JPG

IMG_2537.JPG

IMG_2550.JPG

IMG_2558.JPG
 
21.12TB in a single chassis...

Too bad that Ceton InfiniTV 4 (4 tuner cablecard) is selling for around $800 (twice retail) on ebay or I’d have one of those in here too.

$1,500+ on a monitor, somewhere tween $1-2k on disks depending on when you bought them, lots of other first class touches, and you can't spring $800 for a tuner? Go figure... :)

Kidding aside - nice rig. Well done.
 
$1,500+ on a monitor, somewhere tween $1-2k on disks depending on when you bought them, lots of other first class touches, and you can't spring $800 for a tuner? Go figure... :)

That does start to put it in a new perspective... :)
...but I am too cheap to pay retail on anything...let alone double. I have to have some restraint...

You are right on with costs. Displays cost $1500. I buy hard drives when they hit the $110 mark with the exception of the ssd which was $285 when I bought it at release (still $10 off retail ;)). I'm late by about 3-4 months posting this so prices have dropped some especially on these new sandforce ssds.
 
Yeah, this file server stores 1080p movies and Blu-rays only, so I just did JBOD.

So why the decision to use JBOD with the movies?

I am presently backing up all my DVDs onto 5 2TB hard drives in RAID 5.
I will get rid of the DVDs once I'm done, so I really wouldn't want to lose it.

I was facing this dilemma of either going RAID 5 or JBOD and chose RAID 5.
So did I make the right choice?

If you are using JBOD, how did you arrange the movies... spread them out over different hard drives?
 
Agreed, RAID is NOT a backup solution!

He is running RAID 5, which means if he loses a drive, replaces the bad drive with a new one, and starts the rebuild process, and during this time another drive dies, he will lose ALL of his data (movies, pictures, etc.)!

He could simply keep his DVDs; that WOULD be his backup! :)
 
Agreed... I would def NOT get rid of the originals!!!! A disaster waiting to happen.
 
Realized I hit the 10TB mark a while back, rough but able :)
Hardware shots in the morning when I have a decent source of light.
Screen%20shot%202010-11-11%20at%2010.24.54%20PM.png
 
Well I finally finished building it, been crazy busy lately. But my official 10.0TB+ entry has been completed.

10.5TB Advertised, 9.7TB formatted

Case: Antec 900
PSU: CORSAIR HX Series 540W
Motherboard: DFI X58-T3EH8 LGA 1366 Intel X58 ATX Intel Motherboard
CPU: Intel Core i7-920 Bloomfield @ 3.0GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1366 130W Quad-Core, and it folds 24/7
Video Card: Off-brand nvidia 6400 or 7200, I forget which. No room for a full-length card in this case, so it runs headless
Ram: 6GB - CORSAIR XMS3 Dominator; ripping is done on another machine so it doesn't need much RAM. It runs Server 2003 x64 Standard edition
Raid (HBA) Card: HighPoint RocketRAID 2320 PCI Express x4 SATA II (3.0Gb/s) RAID Card
Expander: none
Other items: Coolermaster 5 in 3 drive bay, non-HotSwap
OS HD: Western Digital 250GB SATA drive, nothing special
Other Hard Drives: 8 - Seagate 7200.11 1.5tb in RAID 5

While folding this server uses ~284Watts at the wall, pulls about 400 on startup.

Will try posting some pics and benches tonight. The case is a mess inside since I'm still transferring data off extra drives for consolidation. It's sitting at about 30% used; I will be dropping another TB onto it tonight.

Next project is the backup server for this. Which will consist of a hodge podge of JBOD drives, probably sitting off just the mobo + random SATA controller I have around. Not sure what I will do when I run out of space, will either upgrade to a NORCO case and add another RAID card or build another server...who knows.

Here are some benches.

This is with the server sitting idle and using the fully finished partition.
crystalmark97tbRAID5.jpg


This is one I ran during heavy load to see what it would take to kill the speeds. For this test there are three concurrent Unstoppable Copier sessions running from 3 different locally attached SATA drives and a 4th copy running over the network (currently only a 100Mbit connection). Still plenty of bandwidth left for a couple of HD streaming video sessions.
crystalmark97tbduringdblwrite.jpg


Here is the setup, in all its messiness. Cleanup comes next, haha. The blue SATA cables go to the controller, the green is the OS drive, and the red is a temp drive I was copying data off of.

P1010255.jpg

P1010254.jpg
 
@pyrodex

Nice build you got going there!

I actually considered the Lian Li case as well but it costs way too much here in local stores.
So I figured for that kind of money I could get the Norco case and wouldn't have to spend more money on hot swap backplanes etc. and even have a lot more disk slots available.

So I'm curious why you went for the Lian Li? :)

Also, and this is just a suggestion, I would mirror your OS drive ;-)
In three separate systems I had a disk fail, all within the last two months.
 
@pyrodex

Nice build you got going there!

I actually considered the Lian Li case as well but it costs way too much here in local stores.
So I figured for that kind of money I could get the Norco case and wouldn't have to spend more money on hot swap backplanes etc. and even have a lot more disk slots available.

So I'm curious why you went for the Lian Li? :)

Also, and this is just a suggestion, I would mirror your OS drive ;-)
In three separate systems I had a disk fail, all within the last two months.

I considered the case since I am living in my fiancée's house at the moment, and the room this system is going in is a small closet converted into an AV closet. I don't have a "man cave" yet with good enough ventilation to set up a nice rack. The Norcos don't support decent airflow in a restrictive environment but do well in an open area.

As for the OS drive, I went with an SSD to avoid mechanical failure, but I also use a process I created for various other Linux-based systems: checking all my changes into a repository so I am able to rebuild a system after simply laying down the operating system. I build a USB drive with my configuration, so I boot off the USB drive, walk away, and come back to a system with a fresh OS install, all packages updated, all additional packages/software installed, and all the configuration files restored.
 
Hello all, built this system back in March after research on this forum and others.

24TB Advertised, 16.31TB formatted

Case: Norco 4220
Motherboard: SUPERMICRO MBD-X8DTE-F-O
CPU: Intel Xeon E5506
Memory: 2 x Kingston ValueRAM 4GB 240-Pin DDR3 SDRAM DDR3 1333 ECC Unbuffered Server Memory
Power Supply: SILVERSTONE ST85F-P 850W
Raid Controller: areca ARC-1680IX-24 2GB
RAID6 Set (Data): 10x Hitachi Deskstar 2TB
RAID6 Set (ESX): 4x Seagate 1TB (Encrypted)
OS Drive: Seagate 100GB 2.5"
Operating System: Windows Server 2008 R2

5231383447_4066fd69b4_b.jpg


5231974956_32180e467b_b.jpg


5230805490_ae053b7909_b.jpg
 
Hello all, built this system back in March after research on this forum and others.

24TB Advertised, 16.31TB formatted

Case: Norco 4220
Motherboard: SUPERMICRO MBD-X8DTE-F-O
CPU: Intel Xeon E5506
Memory: 2 x Kingston ValueRAM 4GB 240-Pin DDR3 SDRAM DDR3 1333 ECC Unbuffered Server Memory
Power Supply: SILVERSTONE ST85F-P 850W
Raid Controller: areca ARC-1680IX-24 2GB
RAID6 Set (Data): 10x Hitachi Deskstar 2TB
RAID6 Set (ESX): 4x Seagate 1TB (Encrypted)
OS Drive: Seagate 100GB 2.5"
Operating System: Windows Server 2008 R2

5231383447_4066fd69b4_b.jpg


5231974956_32180e467b_b.jpg


5230805490_ae053b7909_b.jpg

Very impressive setup. This is similar to what I'm looking to get as well.

Couple questions:

What do you use this system for?

Where does everyone get their SAS fan-out cables from? (newegg?)

What rack are you using?

Why did you decide on that raid card? seems like a lot of cash to drop.
 