Project: Galaxy 5.0

So is this not going to be seen at WML in the next couple of weeks? :(

It's still sweet you're upping the capacity again :D
 
The time has commeth.
rackmount.jpg

All credit goes to d3vy for this picture.

Kudos, Ockie - great thread and great projects. I'm a storage whore myself, mostly video media, and for too long I've suffered with a disorganized situation that your Galaxy projects have inspired me to change. Right now I've got 5 or 6 econo Dell servers with 5 or 6 drives each, plus a myriad of USB2-based external drives (maybe 15?), so trying to FIND media has become a real headache (i.e. okay, let's turn on server #4 and look through its drive letters). I've been waiting for the right case to consolidate into.

When I first saw the SuperMicro case's price, I thought "holy crap - $1000+ for a case!" but then it slowly sank in that it's actually a GREAT DEAL: aside from rackmount cases commanding a premium anyway, if you buy 6 x Athena Power 4-drive backplane cages, that's $600, and if you buy 2 x 900W high-efficiency (85%) power supplies, they'd be $250 each minimum based on a search. SuperMicro taking the guesswork out and providing them ready to power 24 drives is also worth something, not to mention that they're long and suited to this case rather than big boxy standard PSUs. Now we're at $1,100 just for drive cages and power supplies, so as I see it you're getting everything else - case, fans, etc. - for free. Lastly, consider that it costs about the same (within $15) as their 16-bay case! Note they have a silver version of this case coming out too, part# CSE-846TQ-R900V, in addition to the black one (B instead of V at the end of the part#).
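Quick sanity check on my own math (a back-of-the-envelope sketch; the $100-per-cage split is just my assumption from the $600 total for six):

```python
# Rough DIY parts total vs. buying the CSE-846 complete (prices from above;
# $100/cage is an assumed split of the $600 for six Athena Power cages).
cages = 6 * 100    # 4-drive backplane cages
psus = 2 * 250     # 900W high-efficiency (85%) PSUs, minimum street price
diy_total = cages + psus
print(f"Cages + PSUs alone: ${diy_total}")  # $1100 - about the whole case's price
```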

I checked Ingram Micro (nationwide distributor), and according to the SuperMicro rep they will be in stock 1/8/08, so that's probably why all the e-tailer sites out there show "out of stock". List is $1,165 and cost is $992 on this beast. I already backordered mine!

Also, I noticed you were going with E5310 CPUs - I read there's a mod you can do to bring them from a 266MHz FSB to a 333MHz FSB just by covering Pad 30 with electrical tape, and they'll run at 2.0GHz only a couple of degrees hotter. I may try that myself, as I'll likely get the same motherboard you're getting.
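For the curious, the math behind that pad mod is simple - the multiplier stays locked and only the bus speed changes (a sketch; the 6x figure is the E5310's stock multiplier):

```python
# BSEL pad mod arithmetic for the E5310 (locked 6x multiplier).
multiplier = 6
stock_mhz = 266 * multiplier   # ~1.6GHz on a 266MHz (1066 quad-pumped) FSB
modded_mhz = 333 * multiplier  # ~2.0GHz on a 333MHz (1333 quad-pumped) FSB
print(f"{stock_mhz}MHz stock -> {modded_mhz}MHz with Pad 30 covered")
```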

Great choice of components, and I'm still debating going "drive-letters only" or going RAID5. I agree that losing 33% of your space in a 3-drive RAID5 is a tragedy; however, I'd be running 16-port RAID controllers, which would mean only losing 6% of my total storage potential per array - much easier to swallow (controller failure risks aside). The only downside to a single RAID5 volume is that all 16 drives have to be spinning and sucking power to access a file; with drive-letters only and individual volumes, all the drives can sit spun down until one needs to be accessed. Of course, with drive-letters only there's no protection, and it's Russian roulette waiting to find out which of your drives will die first. Since at this level of storage size backups aren't really feasible without doubling up on hard drive costs, as I see it RAID5 is the only safety net that *is* feasible in lieu of backups.
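Those overhead numbers come straight from single-parity math - one drive's worth of capacity lost per array, whatever the width (a quick sketch):

```python
# Fraction of raw capacity lost to parity in a single-parity (RAID5) array:
# always exactly one drive's worth, spread across the members.
def raid5_overhead(drive_count: int) -> float:
    return 1 / drive_count

print(f"{raid5_overhead(3):.0%}")   # 33% lost with 3 drives
print(f"{raid5_overhead(16):.0%}")  # ~6% lost with 16 drives
```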

The last thing I have to decide is whether to buy 1TB drives or 500GB drives. With 24 drive bays in this case it's hard to resist the price/performance sweet spot of $97 for 16MB-buffer 500GB drives. I also like the idea of being able to add a drive at a time as needed, and running dissimilar drives is one point in favor of drive-letter-per-drive (non-RAID); then again, if I end up on RAID5 I can do online array expansion.
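For anyone comparing, the break-even works out like this (a sketch using only the $97/500GB figure above - no guess at the 1TB price needed):

```python
# $/GB of the 500GB sweet spot, and the price a 1TB drive must beat.
per_gb = 97 / 500               # ~$0.194 per GB
breakeven_1tb = per_gb * 1000   # ~$194 - a 1TB drive above this loses on $/GB
print(f"${per_gb:.3f}/GB -> 1TB break-even at ~${breakeven_1tb:.0f}")
```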

Decisions, decisions..

anyway keep up the great work.

-Odditory
 
^^^ I would guess you could mod the CPU, but then again, I would never recommend overclocking a server.:cool:
 
By the way, I was just thinking - for a 16TB RAID5 array, I shudder to think how long a defrag would take! I guess you'd almost be forced to employ something like Diskeeper that defrags 24x7.
 
By the way, I was just thinking - for a 16TB RAID5 array, I shudder to think how long a defrag would take! <snip>

I have found that arrays don't really frag up that much if they contain non-system files.
 
I have found that arrays don't really frag up that much if they contain non-system files.

Well, in my case I'd have multiple computers separate from the one with all the hard drives, built around overclocked Q6600 quad-core CPUs and purpose-built for a high processing-performance-to-cost ratio, batch transcoding hundreds to thousands of DVDs to H.264 (MPEG-4). So if you have several computers each writing 1-2 gigabyte files to the array simultaneously, presumably that's going to fragment the array, unless I come up with some sort of strategy of first placing the files on an intermediary hard disk array and then, in a second step, xcopying them one at a time so they're written more sequentially across the array. Testing will have to determine whether it's worth the extra effort, or whether the very nature of striping data across many hard drives makes the issue of fragmentation moot.
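If I go that route, the second step is conceptually dead simple - a single mover process draining the staging disk to the array one file at a time (a minimal sketch; the paths and extension are hypothetical):

```python
# Serialize writes to the array: many transcoders dump into a staging disk,
# one mover copies finished files to the RAID volume one at a time so each
# file lands sequentially instead of interleaved with other writers.
import shutil
from pathlib import Path

STAGING = Path(r"D:\staging")  # hypothetical intermediary disk
ARRAY = Path(r"E:\media")      # hypothetical RAID5 volume

def drain_staging() -> None:
    for src in sorted(STAGING.glob("*.mp4")):
        shutil.copy2(src, ARRAY / src.name)  # one sequential write at a time
        src.unlink()                         # remove from staging on success

if __name__ == "__main__":
    drain_staging()
```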
 
Kudos, Ockie - great thread and great projects. I'm a storage whore myself, mostly video media... <snip>

Glad I could help! I'd keep the processors stock, since the performance gain isn't as beneficial as the stability.
 
It's a quad core, 8GB RAM, Raptor X's, dual 8800GTXs, 680i (now 780i when it comes), etc etc etc.

Right on - we've got similar main systems. I just got the 780i from EVGA a few days ago (step-up) and am waiting on my third 8800GTX to come in so I can run Triple-SLI'd Crysis at VERY HIGH on my 30" screen at native res. Also have a Q6600 on water @ 3.8.

This new 780i board overclocks quads effortlessly, in my own experience plus many forums' - so much easier than the 680i did, anyway. In January (or whenever, since there's a rumored delay) I'll grab a Q9450 and hopefully run in the 4.x GHz range on water, as many others already seem to be doing with ease on the slightly higher-end chips available now. These new Penryns are overclocking monsters!
 
<snip>
Supermicro X7DBE-O Dual 771 Intel 5000P Extended ATX Server Motherboard<snip>

Few questions:

1) Why the Supermicro board, when a "true blue" Intel board, such as the Intel S5000PALR, is available at the same price point? (Within $4 of each other from my source, anyway.)
2) What is it about that particular board and/or chipset that you like versus some of the other dual-socket server-class Xeon boards costing hundreds less? The number of PCIe slots?
3) Why the E5310 CPU instead of one of the new 45nm CPUs like the E5405 @ 2.0GHz / 1333FSB, which runs cooler (same 80W TDP as the E5310), has 12MB of L2 cache instead of the E5310's 8MB, and sits at practically the same price point (maybe $10 more)? If you're having to wait out the case due to availability, you may as well wait out an E5405 - that's probably what I'll be doing anyway.
 
Right on - we've got similar main systems. I just got the 780i from EVGA a few days ago (step-up)... <snip>

Did evga already contact you about a stepup??!? I've had no response yet.

Also, I'm also looking at tri sli for the 3007wfp :)

Few questions: 1) Why the Supermicro board... 2) What is it about that particular board... 3) Why the E5310 CPU... <snip>

1) Supermicro is a fine product; everything they produce is top notch. The S5000PALR is weak - no slots and very proprietary in design (not meant for this application). The PALR wasn't meant for high-volume storage systems, but rather for space-saving concepts such as 1U cases.

2) IPMI, PCIe slots, price, brand, past experience, and other smaller features which I desire for future usage.

3) This is why the CPU is waiting till last. If one keeps waiting on a processor, one will never stop waiting. I will buy whatever CPU is available at the time I make the purchase.
 
Did evga already contact you about a stepup??!? <snip>

I didn't wait for any email from EVGA - I hopped right on as soon as someone on another forum pointed out the 780i's availability on launch day (a week ago today) with a link, http://www.evga.com/680iUpgrade/ , where I registered and then sent my old 680i in. EVGA's office is within 10 miles of my house, so turnaround was pretty fast. They did not cross-ship though - I had to send mine in and wait for the 780i. Also, don't make the mistake of ordering the Triple-SLI bridge connector like I did, because it already comes with the motherboard, so now I have two (ugh).

Also, heads up: you can get a deal on your third 8800GTX - today Newegg started a $440-after-$50-rebate deal on it. http://www.newegg.com/Product/Product.aspx?Item=N82E16814150232

I went back and forth quite a few times about how (un)smart adding a third 8800GTX really would be, especially given they'll probably release a G92-based high-end part within 3-6 months. Then I thought "why not" and "I will because I can", since it will still be very future-proof for some time to come regardless of what new parts Nvidia releases, not to mention that I love Crysis (I practically built a whole new system just for it), and because various reviews have shown excellent Triple-SLI scaling for some of my favorite games like Company of Heroes 1 and 2 (2.8x - 2.9x the FPS of a single card), though Crysis performance with 3xSLI is still being optimized.

We only live once, right? Since all this is temporary, we may as well have the most fun with it.
 
I think I'm going to grab the SuperMicro X7DWN+ board for $520, since it's based on the 5400 (Seaburg) next-gen Xeon chipset to go with the 45nm Xeons (supports 1600FSB quad Xeons and 128GB of RAM) and is a bit more future-proof than the X7DBE for only about 15% more money. http://www.supermicro.com/products/motherboard/Xeon1333/5400/X7DWN+.cfm


From http://it-review.net/index.php?option=com_content&task=view&id=2226&Itemid=105

"...the 5400 series chipset-based platform with 1600 MHz Front Side Bus sets new world records** on key high-performance computing and bandwidth-intensive benchmarks including the SPECfp*_rate2006 benchmark that measures floating point throughput performance. World records were also achieved in key HPC benchmarks, including Fluent*, LS-Dyna*, SPECOMP2001* and Abaqus*. For detailed system and testing information on these and other performance benchmarks, visit www.intel.com/performance/server/xeon/summary.htm.

Intel's 45nm Hi-k Xeon processors also extend performance-per-watt leadership by delivering an improvement of 38 percent1 over its previous-generation Quad-Core Xeon 5300 Series processors.

The move from 65nm to 45nm involves more than just a shrink of current chip designs. The processors include such additional features as new IntelR Streaming SIMD Extensions 4 (SSE4), which are 47 new instructions that speed up workloads including video encoding for high-definition and photo manipulation, as well as key HPC and enterprise applications. Software vendors supporting the new SSE4 instruction set include Adobe*, Microsoft* and Symantec*.
"
 
Areca 1280ML 24-port controller just came in stock at Newegg, so I snatched it up :) Can't wait for it to get here... although I still don't have a case, mobo, procs, or memory :(


Processors are still on back order (the 12MB cache chips are out, so I want to snatch those up when they come in), and the case is also still on back order with no real ETA.
 
Areca 1280ML 24-port controller just came in stock at Newegg, so I snatched it up :) <snip>

As I mentioned in a post above, I called my Ingram rep and he said the SuperMicro rep confirmed 1/8/08 as a target ETA for the case. For the E5405 CPU they're still showing "No ETA" due to Intel's strange, tight-lipped nature (and to give themselves room to change release dates from week to week as they please); however, the rep is thinking 2nd or 3rd week in January based on a phone convo with an Intel rep.

I did go ahead and buy the SuperMicro X7DWN+ motherboard; it arrives today. A monster board!

Gotta hand it to you - the Areca 24-port card is sexy as hell. I'm throwing in a pair of lowly Adaptec 31605 16-port PCIe SAS controllers; they list for $995, and I snapped two up for about $150/ea from someone who didn't know/care what they were worth.

Still debating hard drive choice - Seagate's 5-year warranty suits a couple of long-term RAID5 arrays spanning 24 drives, but performance is below average in benches. What scares me about the WD10EACS GreenPower WD drives is WD's whole "5400-7200rpm" veil of secrecy, where they've basically stated the ones out now spin at 5400 but they 'reserve the right' to start releasing them as 7200rpm at any time without notice or a different part # (so my guess is they're still working on 7200). Mixing 5400 and 7200 rpm drives in a RAID array is a no-no, and I don't quite want to buy all 24 x 1TB drives at once.
 
By the way, Ockie, have you ever checked out unRAID? I was reading up on it this morning (I know everyone else probably already knows about it) and it seems great in terms of not striping data - basically a modified RAID4-without-striping scenario, given that it dedicates one drive to parity, so losing a drive or two doesn't mean losing all your data. What this lets you do is spin down all the drives, and when you access a file it spins up only the drive that contains the data rather than the whole array (again, because there's no striping). Unfortunately unRAID is Linux-based and hardware support is limited (good luck crowbarring in a couple of 16-port Adaptec RAID adapters or a 24-port Areca). Nice option to consider were it available for windoze.
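For anyone who hasn't looked at it: the dedicated-parity trick is just XOR across the data drives, which is why no striping is needed - any one lost drive is rebuildable from the survivors. A toy illustration with pretend two-byte "drives":

```python
# Toy demo of dedicated-parity (RAID4/unRAID-style) recovery via XOR.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

drives = [b"\x01\x02", b"\x0a\x0b", b"\xf0\x0f"]  # pretend data drives
parity = xor_blocks(drives)                        # stored on the parity drive

# "Lose" drive 1 and rebuild it from parity plus the surviving drives:
rebuilt = xor_blocks([parity, drives[0], drives[2]])
assert rebuilt == drives[1]
```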

I'm still leaning toward Win2003 or Win2008 + RAID5 on my galaxy5-also-ran box :)
 
Areca 1280ML 24-port controller just came in stock at Newegg, so I snatched it up :) <snip>

Very nice job on the 24-port card :) Good God, this server hardware is hard to track down....



Still debating hard drive choice - Seagate's 5-year warranty suits a couple of long-term RAID5 arrays... <snip>


I am also very nervous about the WD green drives in a RAID config, and if the speed ramp up and down isn't perfect, RAID arrays are going to go nuts when the drives enter a power-saving state... And with all the past bugs with WD consumer drives in RAID arrays, I think it's much better to go with Seagate - the 7200.11 series is benching much closer to the rest of the pack anyway....
 
Still debating hard drive choice - Seagate's 5-year warranty suits a couple of long-term RAID5 arrays... <snip>

The only unfortunate thing is that only the RE2 drives will run RAID successfully. The regular drives may drop from the array due to the good ol' TLER bug. :rolleyes:
 
I'm stuck with the green drives; no way I'd be able to sell these and do a direct drive-to-drive swap without a major loss. Also, I would rather save power and enjoy non-RAID than spend more on power.


Anyways, yeah, the hardware is a pain to track down; not all stores deal with high-end hardware, and the ones that do usually only have a couple in stock. :(
 
I'm stuck with the green drives; no way I'd be able to sell these and do a direct drive-to-drive swap without a major loss. <snip>

I wouldn't call it "stuck" - those are great drives based on all the benchmarks. Were you planning on still going JBOD, or are you starting to think RAID5/RAID6? It would be an awful shame to run an Areca 24-port just for JBOD (except for the caching). I thought I read there's a fix for the TLER thing.

FYI - I have a contact who deals in large quantities of hard drives from all the major manufacturers (another distributor), and he said today that most 1TB HD SKUs are slated to drop substantially in price after Jan 1. I'm going to wait to see if this happens, run my 12 x 500GB existing drives JBOD for now, and then migrate to 1TBs in January.
 
I wouldn't call it "stuck" - those are great drives based on all the benchmarks. <snip>

Well, I wouldn't call it "stuck" either; it's just the term I used to describe that I won't be making changes.

Yes, I am planning on running that 1280ML as JBOD :D I'm thinking that at least I have the future advantage, and I also have the option of getting the RE green drives and slowly introducing them.


I also expect to see a huge price drop in the coming month :)
 
I've decided that I'm going to go with the Supermicro MBD-X7DWN+ motherboard, more memory, different memory (cheaper), and the Intel Xeon Harpertown 45nm quads.


Right now the only thing I've got coming is the 24-port controller. I'm still waiting on case availability and for the processors to be restocked. So I'll be sitting idle for a bit while I wait for these things to become available and try to balance the entire budget.
 
I've decided that I'm going to go with the Supermicro MBD-X7DWN+ motherboard, more memory... <snip>

I like the specs on that board. Good God, 128GB of RAM supported via 16 slots....!!!!

You are to need a small loan to pull this off (LOL), way more expense gear then Galaxy 4.5!!!!
 
You are to need a small to pull this off (LOL), way more expense gear then Galaxy 4.5!!!!

That's a sentence if I ever heard one ;) Hope all goes well with the build; I would love to do something like that if I had the means (aka: $) to do so.
 
That's a sentence if I ever heard one ;) <snip>

That's what happens when you wake up with 2.5hrs of sleep and sit on [h] for a while, lol:)

I got Galaxy 4.5 today (I am stuck at work till 7 PM, but look for an update on my worklog, with pictures of everything...)
 
Just ordered the Supermicro MBD-X7DWN+ motherboard. I couldn't resist :D

According to Newegg tracking, my controller arrives on Wed.... so much for 3-day shipping (6 days)
 
Just ordered the Supermicro MBD-X7DWN+ motherboard. I couldn't resist :D <snip>
nice motherboard...
you going with dual Harpertowns?
 
Just ordered the Supermicro MBD-X7DWN+ motherboard. I couldn't resist :D <snip>

No doubt the extra holiday traffic is postponing your shipping. You'll get it soon enough. ;)
 
Just ordered the Supermicro MBD-X7DWN+ motherboard. I couldn't resist :D <snip>

Nice. I've been running that board for a few days now with Win2003 Server Enterprise R2 and am REAL happy with it. Sure, it's over $500, but it's SM's flagship 5400 board and will run forever like a Toyota (Lexus?). :D I got the IPMI board also. I'm running a single E5405 2GHz Harpertown Xeon with a second one backordered. That CPU runs so cool you don't even need to connect the stock fan power (but I do) - the heatsink barely gets warm without the fan on. Ahh, the joys of 45nm. Make sure you change the fan setting in the BIOS when you get the board, because the default is full speed regardless of CPU temp, which is unnecessarily loud.

I've got a ton of drives lying around here now and am getting anxious about the case taking so long - loose drives lying around powered on, albeit for testing, make me nervous! (one false bump and click, click, click)

Happy new year guys....
 
Nice. I've been running that board for a few days now with Win2003 Server Enterprise R2 and am REAL happy with it. <snip>


Which IPMI card did you get?

Oh, and the Toyota comment - it's a myth now (in today's production) ;)
 
That CPU runs so cool you don't even need to connect the stock fan power (but I do) - the heatsink barely gets warm without the fan on.

At what load? 100% or mostly idle? I can leave my 25W P3s fanless, but that chip is still an 80W part.
 