Project: Galaxy 5.0

PS: do you have a thread about building your server?

He actually used this very thread to post updates about his server when Ockie wasn't posting. Look through the thread, you'll see some of his posts here and there, some with pics of his server IIRC.
 
Holy crap, that Adaptec 52445 is sweet: a 1.2GHz dual-core IOP. I don't even have a dual-core PC, never mind a dual-core RAID card. Good luck migrating the array, it won't be easy, and you should back up what you need, but good luck finding something you can back it all onto.

PS: do you have a thread about building your server?

Well, backing up the existing array goes without saying. I just meant it's going to be a hassle switching RAID controllers, since it means having to copy the data twice (once to an intermediary volume, and then again once the old array's drives are attached to the new controller). It'll be worth it though.

I didn't really post many details of my build in this thread since I didn't want to steal Ockie's thunder with it being his thread, but I did post a build log on another forum I'm active on. http://www.mymovies.dk/forum.aspx?g=posts&t=6755
 
Wow bud, wow...
Great work on the project.
I hope you don't have the problem with those servers that I did with one of mine. The fans ran at 100% all the time, sounded like a damn train. Dell had no idea what was wrong; they replaced everything and it still did it. Finally they just replaced the whole server.
 
Why am I selling? Well it crashed my array and I lost all my porno :( Only kidding. :)

Actually it's because of a little card called the Adaptec 52445 that's come along in a new generation of Adaptec cards (a.k.a. the "5" series) and came into stock only a few days ago. Price was $1287. What I like is its ability to attach external storage in addition to the 6 x 4-port internal miniSAS connectors. With SAS expanders you can go bonkers and build an array up to 512TB (of course that wouldn't be feasible, since RAID controllers need to evolve a few more years before they'd be able to deal with the burden of arrays that big without performance sucking).

If I may ask, what retailer is selling the card? I can't seem to find it for sale. My google-fu is weak.
 
Wow, excellent worklogs [all of your Galaxy worklogs].

I was building a custom SGI O2 NAS with a big bad 5 x 750GB WD drives [lol] and an Addonics port multiplier + controller card combo... then all of a sudden it started to bluescreen Windows. So I decided to format and install XP, thinking maybe it was just Vista being a PITA. I booted XP up, plugged the Addonics card in, and it started to rebuild parity or whatever... it needed 16 hours total, and it failed every time at around 12-14 hours. So I lost my whole RAID [1.5TB of HD movies and crucial stored data] because I was stupid and tried cheap hardware. After learning that lesson and reading this worklog [and Plutonium 3 with your old case], I've decided the Areca 1231ML with the 2GB RAM upgrade, cube case, more HDDs, and some Athena Power backplanes is totally worth the near $2k expense. Once you've lost data you realize how much of a pain it is to replace.

Long story short, it's great that this turned into a RAID setup finally. Thanks for all of the knowledge I can strip from your worklogs. It's funny how you can take a very knowledgeable person when it comes to computers, then throw them into the network storage/server side of things and turn them into a total noob again.

[Oh, and my SGI O2 case will be a worklog for a media center for my bedroom, so at least it wasn't a total waste.]

I do have one major question that perhaps you guys here can answer: why go all out on the RAM and CPU power for a server/NAS? Is it being utilized just sitting there? I have zero server knowledge. I was wondering why such a massive amount of hardware went into a server that seems to not need it.
 
If I may ask, what retailer is selling the card? I can't seem to find it for sale. My google-fu is weak.

No retailer that I'm aware of. My card was sourced from a distributor (Ingram Micro) that I have an account with. If you want one, PM me.

I've run several Areca cards now, including a few at work and two at home in the last month, and one MAJOR difference between the Areca and the Adaptec cards (generations "3", "4" and "5", and possibly earlier) is that the Areca will sometimes just decide to kick a drive out of the array. In fact, a few days ago it kicked 2 drives out of my 24-drive RAID6 array at once - so it clearly couldn't have been a physical defect with the drives, since they were both kicked out at the exact same moment (all of a sudden I heard the alarm beeping from another room). As soon as I removed and re-inserted both drives the array began rebuilding and has been fine since. The Areca offers no way of "forcing" the drives back online, so had more than 2 drives failed at the same time I would've been screwed.

In the case of the Adaptec there's an option to "force" a drive or multiple drives back online (into the array). So let's say one of the power connectors to the backplane of the 24-bay case referenced in this thread got loose and 4 drives disappeared: after powering them back up you could simply force the failed drives back online and it's business as usual, without any need for the array to rebuild. With the Areca controller in the same scenario, the controller just marks the drives as failed and that's it; it won't let them back into the array even after powering the drives back up or rebooting. There's a "RESCUE ARRAY" command in the GUI which is supposed to help in situations like this, but in my experience it hasn't really worked - perhaps that was my own ignorance and I just didn't spend enough time with it; people on some forums said it should work.

Because of this seeming fragility of the Areca, based on a few bad episodes where I almost lost all my data, the cards make me too nervous to continue using - again, it could all be "due to end user", or maybe I should've used different hard drives like Seagate or something else (mine are Hitachi 1TB 0A35155). Hence I've gone Adaptec now that their cards actually have decent performance (not to mention a 24-port version) with this new 5-series.
 
Wow, excellent worklogs [all of your Galaxy worklogs].

Long story short, it's great that this turned into a RAID setup finally. Thanks for all of the knowledge I can strip from your worklogs. It's funny how you can take a very knowledgeable person when it comes to computers, then throw them into the network storage/server side of things and turn them into a total noob again.

[Oh, and my SGI O2 case will be a worklog for a media center for my bedroom, so at least it wasn't a total waste.]

I do have one major question that perhaps you guys here can answer: why go all out on the RAM and CPU power for a server/NAS? Is it being utilized just sitting there? I have zero server knowledge. I was wondering why such a massive amount of hardware went into a server that seems to not need it.

Addonics? *cringe* Areca or Adaptec are the way to go right now. I can't speak for what Ockie uses his dual quad-core Xeons and tons of RAM for, but in my case I do lots of video transcoding in batch fashion, so my RAM and CPU are in fact pegged at 100% for days to weeks on end when crunching large amounts of data. I *think* Ockie did his build just for the challenge of finding "best of breed" parts for every component category of the server (that's part of my motivation also), rather than finding a cheap mobo/CPU combo to host a NAS.

In my case I've already built a second system where I went "cheap" (or rather just "cheaper") on everything but the hard drives and RAID card, in the identical 24-bay Supermicro chassis: a $100 dual-core 45nm CPU, minimal RAM, a $100 open-box Intel D975XBX2 motherboard, etc. This system backs up the main system and does some other stuff.
 
Hey, thanks for the reply. Yeah, I'm a CGI designer [weird name for a 3D modeler], so trust me, I understand power =]

Just didn't realize the servers were doing something along the lines of a render farm.

Very nice setups, both of you guys.

But yeah, Addonics... ew. Before I talk trash, though, I should at least contact customer service first.
 
Hey, thanks for the reply. Yeah, I'm a CGI designer [weird name for a 3D modeler], so trust me, I understand power =]

Just didn't realize the servers were doing something along the lines of a render farm.

Very nice setups, both of you guys.

But yeah, Addonics... ew. Before I talk trash, though, I should at least contact customer service first.

No, it's okay - you are justified in talking trash. Their customer service probably doesn't speak English anyway.
 
No, actually, I got a hold of their C.S. last night and the guy was very knowledgeable and figured out my problem in under 30 seconds. Nice fellow too. I still think their hardware bites, but he was a smart/friendly C.S. person.

The problem, for anyone who's interested, is most likely caused by overheating. The reason I could not see the rebuilt RAID in Windows XP was that XP can only see single volumes up to 2TB in size [or something like that - over my head]. My RAID is closer to 3TB. So I installed Vista again and it showed right up. I copied files off, and sometime during the night it bluescreened again, most likely due to an overheating controller. The thing is, I have a 30" box fan blowing on the box at full speed to try and keep it cool long enough to copy my files off, and it still tanks. Oh well.
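(For anyone curious where that 2TB ceiling comes from: 32-bit XP sees disks through MBR partitioning, whose 32-bit sector addressing tops out at 2TiB with 512-byte sectors, and the 32-bit desktop edition of XP can't use GPT data disks to go bigger. A quick back-of-the-envelope check, assuming standard 512-byte sectors, follows.)

# Rough check of the MBR 2TB ceiling (assumes 512-byte logical sectors).
sector_size = 512          # bytes per logical sector
max_sectors = 2 ** 32      # MBR stores sector counts in 32-bit fields
limit = sector_size * max_sectors
print(f"MBR limit: {limit / 2**40:.1f} TiB ({limit / 10**12:.2f} TB)")  # -> 2.0 TiB (2.20 TB)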

Odditory, what OS do you use in your setup?
 
I successfully had a 6TB array with my Areca card in Windows XP before I switched over to Server 2003. All the Areca cards support it; there's a setting you can choose when creating the array so the volume shows up properly.
 
Odditory, what OS do you use in your setup?

I know you were asking Odditory, but here is mine: Windows Server 2003 R2 x64 Enterprise Edition. Works great with large GPT drives and 8GB of RAM.

I successfully had a 6TB array with my Areca card in Windows XP before I switched over to Server 2003. All the Areca cards support it; there's a setting you can choose when creating the array so the volume shows up properly.

My array on my 1280ML is at 13.6TB currently (with 2 hot spares, so I lose 1.4TB of drive space to the hot spares, or I would be over 15TB!!!)
 
Odditory, what OS do you use in your setup?

He was using:
Windows Server 2003 R2 x64 Enterprise Edition

Then he made the move to:
Windows Server 2008 x64

Any updates on the backup system (parallel Galaxy)?


Right now the project is stalled. My main objective at the moment is to get this basement finished and find 3 rack cabinets for home.
 
odditory, do you have any benchmarks comparing the performance of the Areca 1280ML with the Adaptec 52445? Also, does the fact that the Adaptec 52445 only has 512MB of onboard cache (compared to 2GB on Areca 1280ML and 4GB on Areca ARC-1680ix-24) affect the performance? Thanks.
 
odditory, do you have any benchmarks comparing the performance of the Areca 1280ML with the Adaptec 52445? Also, does the fact that the Adaptec 52445 only has 512MB of onboard cache (compared to 2GB on Areca 1280ML and 4GB on Areca ARC-1680ix-24) affect the performance? Thanks.

Funny you should ask - I just spent hours doing exactly that. I'm actually dumping my Adaptec 52445 cards, since I found out the hard way that they have a 16-drive limitation on RAID5/RAID6 arrays - DEALBREAKER. My ARC-1680ix-24 is on its way. Long live Areca!

Though I don't have time right now to compile the benchmarks and post them, I will share a few facts in the meantime: this new Intel IOP348 1200MHz dual-core chip found on the Adaptec 5-series and the Areca 1680ix series is an f-ing SCREAMER. Why?

It builds (and repairs - same thing) RAID5 and RAID6 arrays in around 1/3 the time it takes the previous-generation 800MHz chip. Since the 1200MHz IOP is dual-core, you're effectively getting 2400MHz of processing power, which is 3 x 800MHz, so it's pleasing to see build performance has scaled pretty much linearly with the chip. In practical terms it means you spend less overall time sweating and hoping nothing else goes wrong during a rebuild operation, since the rebuild window is smaller!

For example, using the same 12 x Hitachi 1TB 7200RPM drives, I built a RAID5 array first on the Adaptec 52445 and then the Areca 1280ML. The Adaptec did it in around 1/3 the time! I don't think the Adaptec has anything technologically superior to Areca besides that faster IOP chip, so I expect to see the same or better performance with the newer Areca ARC-1680ix-24.

Areca 1280ML: 12 x Hitachi 1TB 7200RPM drives @ RAID5 = 20 hours, 8 minutes (build time)
Adaptec 52445: 12 x Hitachi 1TB 7200RPM drives @ RAID5 = 6 hours, 54 minutes (build time)
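(As a sanity check on the "scaled linearly" claim, here's a quick calculation using only the two build times above, compared against the roughly 3x you'd naively expect from two 1200MHz cores vs. one 800MHz core.)

# Compare the measured RAID5 build-time speedup with the expected clock scaling.
areca_1280ml  = 20 * 60 + 8    # build time in minutes (800 MHz single-core IOP)
adaptec_52445 = 6 * 60 + 54    # build time in minutes (1200 MHz dual-core IOP348)

measured = areca_1280ml / adaptec_52445
expected = (2 * 1200) / 800    # naive "total MHz" ratio

print(f"measured speedup: {measured:.2f}x, expected from clocks alone: {expected:.1f}x")
# -> measured ~2.92x vs. expected 3.0x, i.e. build time really did scale almost linearly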

Here's another major selling point, and I've only been able to verify this with up to 16-drive arrays due to the Adaptec limitation, but so far RAID6 write performance = RAID5 write performance = RAID0 write performance. In other words, benchmarks show no write-performance slowdown going RAID6 over RAID5, or even RAID5 over RAID0. That's some parity-calculating power, since previous generations of IOP chips could not keep RAID6 write performance on par with RAID5.

Again, I've only been able to verify this with 16-drive arrays. 24-drive arrays may be a different story, but not by much. I will post RAID6 vs. RAID5 read and write performance when I get the Areca ARC-1680ix-24, to see if that still holds true with a larger number of drives. At the moment, it seems hard drives have become the bottleneck once again, and no longer the IOP (XOR) chip.
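(For anyone wondering why dual parity is so much extra work for the IOP: RAID5 needs a single XOR parity (P) across each stripe, while RAID6 adds a second, independent parity (Q) computed with Galois-field math. Below is a toy sketch of the P part only, purely to illustrate the idea, not how the controller actually implements it.)

# Toy RAID5-style parity: P is the byte-wise XOR of all data chunks in a stripe.
from functools import reduce

def p_parity(chunks):
    """XOR equally-sized data chunks together (RAID5's single parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]            # three data chunks of one stripe
p = p_parity(stripe)
# Losing any one chunk is recoverable: XOR the parity with the survivors.
assert p_parity([p, stripe[1], stripe[2]]) == stripe[0]
# RAID6 keeps P and also a Reed-Solomon-style Q parity; that second calculation
# per write is what used to drag RAID6 write speed below RAID5 on older IOPs.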

As to your question about cache size affecting performance: in my testing it doesn't make much difference in sequential read and write operations (i.e. copying files larger than the cache size). I benched the 256MB default cache module of the 1280ML against a 2GB one I installed, and sequential throughput saw no difference, which makes sense. Larger cache modules only appear to help RANDOM read/write performance, since "hotspots" (frequently accessed sectors) of an array can sit cached in the memory module.
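(To put rough numbers on that: a bigger cache only pays off once a meaningful share of the "hot" data fits in it, while a big sequential copy streams past the cache entirely. Below is a crude model with made-up latencies, just for illustration.)

# Crude model: effective random-read latency vs. controller cache size.
def effective_latency_us(cache_mb, hotspot_mb, disk_us=8000, cache_us=50):
    """Assume reads hit the cache in proportion to how much of the hot data fits."""
    hit_ratio = min(1.0, cache_mb / hotspot_mb)
    return hit_ratio * cache_us + (1 - hit_ratio) * disk_us

for cache in (256, 512, 2048, 4096):   # MB of controller cache
    print(cache, "MB ->", round(effective_latency_us(cache, hotspot_mb=2048)), "us")
# Sequential throughput, by contrast, is limited by the drives themselves once the
# transfer is larger than the cache - matching the benchmark result above.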

Benchmarks to come once I get the new Areca ARC-1680ix-24.

At this point, Areca regains its crown as THEE controller of choice for large arrays.
 
Thanks for keeping us informed; the new chipsets sound like a worthwhile upgrade.
Re the performance hit with RAID6 - I would have expected the extra drives' latency to have caused the slowdown, not the parity generation, so maybe there is something cleverer than I know going on under the hood of that thing.

Very impressive rig - thanks for all the info and help.
 
Guess you got the 1680ix-24 like I had suggested after all. :p
You're really tempting me to get one now instead of the 1280ML since the performance is that much higher.
Here's a comparison that Areca published: http://www.areca.us/products/pcietosas1680performance.htm

Well, you might want to wait before you leap, since I think "performance" is a very broad brush. We have to narrow performance down into a few categories to put the differences into perspective.

Throughput (transfer rate, MB/s): My guess is that sequential read performance, for example, will be very similar whether it's the 1280ML or the 1680ix-24. A faster IOP chip isn't going to make the hard drives faster!

Array operations (building, rebuilding, expansion of RAID5, RAID6, etc. arrays): This is where the biggest performance difference will probably be seen, since a 1200MHz dual-core chip can simply compute parity much faster than an 800MHz single-core chip.

So, if you have a 1280ML as I do, you have to ask yourself: if the throughput (transfer rate) ends up being very similar between the controllers, then is the extra cost of selling the 1280ML and buying a 1680ix REALLY worth the money?

In my case I'm basically just archiving audio/video onto my large arrays, so I likely won't see much practical difference between the 1280ML and 1680ix-24. However, if I were setting up an array for a multi-user server where people are working on audio/video or generally just large files, then I'd get a 1680ix-XX with a 4GB cache module.

Why am I getting a 1680ix-24? Because I'm a mental case that is more fascinated with the latest and greatest of tech than with stopping the bleeding of money from my wallet.

I fully expect buyer's remorse with the 1680ix-24 when I see the Transfer rate benchmark, and that remorse will only slightly be offset by the speed at which it builds/rebuilds/expands arrays.

-Odditory
 
Well, you might want to wait before you leap, since I think "performance" is a very broad brush. We have to narrow performance down into a few categories to put the differences into perspective.

Throughput (transfer rate, MB/s): My guess is that sequential read performance, for example, will be very similar whether it's the 1280ML or the 1680ix-24. A faster IOP chip isn't going to make the hard drives faster!

Array operations (building, rebuilding, expansion of RAID5, RAID6, etc. arrays): This is where the biggest performance difference will probably be seen, since a 1200MHz dual-core chip can simply compute parity much faster than an 800MHz single-core chip.

So, if you have a 1280ML as I do, you have to ask yourself: if the throughput (transfer rate) ends up being very similar between the controllers, then is the extra cost of selling the 1280ML and buying a 1680ix REALLY worth the money?

In my case I'm basically just archiving audio/video onto my large arrays, so I likely won't see much practical difference between the 1280ML and 1680ix-24. However, if I were setting up an array for a multi-user server where people are working on audio/video or generally just large files, then I'd get a 1680ix-XX with a 4GB cache module.

Why am I getting a 1680ix-24? Because I'm a mental case that is more fascinated with the latest and greatest of tech than with stopping the bleeding of money from my wallet.

I fully expect buyer's remorse with the 1680ix-24 when I see the Transfer rate benchmark, and that remorse will only slightly be offset by the speed at which it builds/rebuilds/expands arrays.

-Odditory
Well, I'm mainly using this for archiving data like you... I have something a lot slower though: an 1130ML with a 500MHz processor that's limited to my server because it's PCI-X. That's why I'm still interested in buying your 1280ML. :p
 
Thanks for keeping us informed; the new chipsets sound like a worthwhile upgrade.
Re the performance hit with RAID6 - I would have expected the extra drives' latency to have caused the slowdown, not the parity generation, so maybe there is something cleverer than I know going on under the hood of that thing.

Very impressive rig - thanks for all the info and help.

No, it's well known that RAID6 has tended to suffer in write performance compared to the same number of drives in a RAID5 array, simply because there's twice the parity to calculate per write operation. Depending on the combination of drive capacity and number of drives in the array, older IOP chips have trouble keeping up compared to the same write operation on a RAID5 array.
 
Well, I'm mainly using this for archiving data like you... I have something a lot slower though: an 1130ML with a 500MHz processor that's limited to my server because it's PCI-X. That's why I'm still interested in buying your 1280ML. :p

I see... well, the 1130ML is a great card, and PCI-X has a lot of bandwidth to saturate with large drive-array transfers, even though of course all anyone talks about is PCIe these days.

On a side note, I've got an experiment going where I bought 8 x WD10EACS drives (the Western Digital GreenPower drives that everyone frowns on being in RAID arrays). After enabling TLER (the critical ingredient) with WD's DOS-based TLER tool, I've had the 8 drives in a RAID5 array on an older Adaptec controller for 6 days now, getting thrashed / stress-tested 24 hours a day with read/write operations, and so far not a hiccup. They DO bench about 15%-20% slower in read/write than 7200RPM Hitachi drives in the same 8-drive RAID5 configuration on the same card, but that's to be expected from 5400RPM drives. The question now is whether to buy more WD10EACS or just wait for 7200RPM 1TB drives to keep dropping in price - it might be another 9-12 months before the Seagate 1TB is below $200.

I wanted to see if I could get a WD10EACS drive kicked out of an array even with TLER on, since those drives are so cheap now (under $200 and falling) and therefore really compelling for archival purposes.
 
I see... well, the 1130ML is a great card, and PCI-X has a lot of bandwidth to saturate with large drive-array transfers, even though of course all anyone talks about is PCIe these days.
I know that even the 1280ML couldn't saturate a 133MHz PCI-X bus; it's just that I'm kinda stuck with it in my server. I haven't found any workstation motherboards that I like with PCI-X (not to mention I'm out of space anyway).
 
I know that even the 1280ML couldn't saturate a 133MHz PCI-X bus; it's just that I'm kinda stuck with it in my server. I haven't found any workstation motherboards that I like with PCI-X (not to mention I'm out of space anyway).

Then get a Supermicro X7DWN+, problem solved :) Two PCI-X slots.
 
Then get a Supermicro X7DWN+, problem solved :) Two PCI-X slots.
I'm also out of space, so I need a new RAID card (12 x 500GB doesn't last long)... I have 6 x 1TB drives lying on my floor doing nothing at the moment.
 
Hey Ockie, saw your FS thread....what's the deal there?

Nothing new really. Just doing something different. I need more storage, hence deleting the SAS drive array.

Also, I ran into an interesting problem a couple of weeks ago which solidified my reason for selling the SAS array: the way the SAS array is mounted transfers impact to the board, which rattled my processor loose during transport. I'm worried it will cause more than the processor to rattle loose and damage something next time. I'm also not comfortable with the thing rattling on a $750 board. It is also a major PITA to work on the machine or open it up with that drive unit in there.

I'm also looking to sell my quad NIC and replace it with a 10GigE NIC. As for the SAS drive sale, I'm replacing those with a 1TB drive... and the remaining money will fund a mirroring solution or a storage upgrade.


I am basically thinking of two options:

Option 1: Buy another Supermicro case, and buy the same board and controller. Take one processor and 16 gigs of RAM out of the existing system and put them in that system... add drives. Then take this "clone" system and stick it in my data center and keep the primary one at home. Create a simple VPN for the systems to communicate and mirror files when changes occur.

Option 2: Build a Galaxy 3 again: use a Stacker with the 850W PSU and hot-swap 21 drives. Use cheap hardware such as an X2 4200, 1GB RAM, and a Highpoint RAID controller... cost effective, and it provides me with a complete mirrored backup (minus 3TB since it would be RAID5). Keep this system at home, put the rackmount in a rack at the data center, and do the same in terms of mirroring files (a rough sketch of that kind of mirror job is below).
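(For either option, the "mirror files when changes occur" part could be as simple as a scheduled one-way sync job running over the VPN. Below is a minimal sketch in Python with placeholder paths rather than Ockie's actual layout; a real setup would more likely just use robocopy/rsync or dedicated replication software.)

# Minimal one-way mirror: copy new/changed files from the primary share to the clone.
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    for file in src.rglob("*"):
        if not file.is_file():
            continue
        target = dst / file.relative_to(src)
        # Copy if the file is missing on the clone, or size/mtime say the primary changed.
        if (not target.exists()
                or target.stat().st_size != file.stat().st_size
                or target.stat().st_mtime < file.stat().st_mtime):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(file, target)   # copy2 preserves timestamps

# Example with hypothetical UNC paths reachable over the VPN:
# mirror(Path(r"\\galaxy\storage"), Path(r"\\galaxy-clone\storage"))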
 
The never ending project!!!!!!!!

Ockie just sell everything and start over =p
 
I haven't put any additional thought into this project. I would like to relocate the server to the data center, but the cabinet I have is nearly out of power (it would push me over the limit, which means provisioning an additional 30A 208V feed, increasing the cost).

Anyways, I'm still working on this; I will take some pics of the existing Galaxy now that it has an internal SATA drive instead of the SAS array.

I think the most likely scenario involves a cheaper Galaxy at home with the main Galaxy at the data center.
 
I am told that the IOP on the Areca 1680ix series is a single-core chip, NOT a dual-core like Adaptec uses. FYI!
 