Build Your Own Storage Server

HardOCP News

Modders Inc. has put together a guide to building your own storage server and saving a little money in the process.

First of all, the scope of the project: A storage array using eight 4tb Hard Drives, which would be spread out on to RAID 5 for redundancy. Sounds easier said than done right? By having eight drives in a RAID 5 array we would get a storage capacity of approximately 28TB, which in my opinion would be enough to hold around 5 years of data.
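The quoted capacity math is one thing that does check out: RAID 5 gives up one drive's worth of space to parity, so usable capacity is (number of drives - 1) x drive size. A quick sketch in raw TB (before TB/TiB conversion and filesystem overhead):

```python
def raid5_capacity_tb(num_drives, drive_tb):
    """Usable RAID 5 capacity: one drive's worth of space goes to parity."""
    return (num_drives - 1) * drive_tb

print(raid5_capacity_tb(8, 4))  # 28 (TB), the article's ~28TB figure
```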
 
I just redid my array with 5x 4TB drives but did it with RAID 6. Too much of a risk of another drive failing during the rebuild with such huge disks for me to trust RAID 5 anymore. I need to update my sig.
 
I did something similar, but last year with 7 x 3TB drives.
The spoiler? In an ITX case.
Arguably cheaper than a QNAP/Synology setup, with a lot more functionality (it serves as my HTPC, NAS, and everything else).
I'd be happy to post the specs/build/pics if anyone is interested.
 
Look at the 2nd page parts list... This is why enthusiasts do not always make good architects. The article is titled "Build You Own Storage Server For Less" [sic], so since when does $6,700 qualify as less? This guy built a damn gaming box with a RAID card and 8 x 4TB drives. Don't get me wrong: this is a great concept to get the DIY enthusiast thinking, and when written right it can help people along, but this article makes it hard NOT to nitpick.
 
8 4TB drives in a raid 5 array?
That's just asking for trouble.

And like Nate7311 said, $6,700 for this build? Well... that's just crazy. I did notice that the article was originally published a year ago, so that might be one of the reasons the cost looks so high, but I built a 24 x 3TB storage server for just about $300 more last year.
 
Heh, you could buy a pre-built Dell T320 with hard drives from Dell for less than $6,700. I priced out a pre-built for a friend a while back, with enough RAM and Windows Server, for I think $5,700.
 
What's the purpose of this PC? Why are they using a combination of server hardware (LSI RAID controller, SAS drives, etc.) with desktop hardware (MB, processor, etc.). At first, I thought they were building a high performance workstation. Then, I saw "Sapphire HD 7700 Video Card". WTF? Yeah, these people have no clue how to build a "storage server" and have way too much money...
 
Look at the 2nd page parts list... This is why enthusiasts do not always make good architects. The article is titled "Build You Own Storage Server For Less" [sic], so since when does $6,700 qualify as less? This guy built a damn gaming box with a RAID card and 8 x 4TB drives. Don't get me wrong: this is a great concept to get the DIY enthusiast thinking, and when written right it can help people along, but this article makes it hard NOT to nitpick.

Yup, it's like a 14-year-old built this box...

You got a window in your storage server? Wat?

Also, absolutely nothing in there about software...
 
Just built a NAS for ~$2400.
Chassis with hot swap bays
Supermicro board
Intel Xeon something something
32GB ram
2 x SAS HBA (non-RAID) controllers
10 x 4TB SATA
FreeNAS w/ RAIDZ3

22.8TB
 
Yet another article pushing RAID 5. :rolleyes: I wouldn't even use RAID 6 these days.
 
Where does the fail begin and end in this abortion?

Let's start with this:

EDITOR’S NOTE: This article was originally published at UMLan.com in March 2013 and has been re-published with permission from the original author who now writes for Modders-Inc.

2+ year old parts and methodology. Resurrecting the old as new for page views.

As I have mentioned before we are going to be using Antec 1200 v3 case. The reason why I picked this type of case is because the front bezels can be removed and that’s exactly what we need for Icy Dock DataCage Classic.

With little work I installed three Icy Dock cages and secured them in. The case can actually take up to 4 Icy Dock Cages, but since I am using 8 drives total for this project, 3 is more than sufficient.


Wait a second here. You buy a GIANT case with 9+ 5.25" bays, you pick up a 3-in-2 Icy Dock adapter, and then state you want to save space? Icy Dock makes a 5-in-3, so you could've bought two (or told Icy Dock to send you those) and saved yourself more space to hold 15 drives! I know, thinking outside the box and future-proofing, silly me.



List of Components (prices as of March 2013, the original publishing date of this article):
(1) Antec 1200 V3 Case – $158.99
(1) Gigabyte GA-X79-UD3 Motherboard – $239.99
(1) Intel i7-3960X CPU – $1,069.99
(1) Kingston HyperX Beast 32GB RAM Kit – $264.99
(1) Sapphire HD 7700 Video Card – $99.99
(1) LSI MegaRAID SAS 9271-8i RAID Controller Card – $689.99
(1) Cooler Master Seidon 120M Water Cooler for CPU – $64.99
(3) Icy Dock DataCage Classic MB453IPF-B Hard Drive Enclosures – $227.97
(1) Cooler Master Silent Pro Gold 1200W Power Supply – $229.99
(8) Seagate SAS 6Gb/s Constellation ES.3 Hard Drives – $3,453.60
(1) Kingston HyperX 3K 240GB SSD OS Hard Drive – $239.99
Total: $6,740.48
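Nitpicking aside, the quoted line items do at least add up to the stated total:

```python
# Prices from the article's March 2013 parts list (USD)
prices = [158.99, 239.99, 1069.99, 264.99, 99.99, 689.99,
          64.99, 227.97, 229.99, 3453.60, 239.99]
total = round(sum(prices), 2)
print(total)  # 6740.48
```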

EL OH EL OH EL OH EL OH EL.

32GB of RAM? For a storage server? Not even EMCs come with that much unless you are talking the big boys.

Single Power Supply? Cop out.

I wonder what the price for an i7 vs. low end Xeon would be.


Best Part:
$700 for a raid card? Are these people from Dell or HP?

An IBM M1015 is $200 on Amazon, buy it now, and it does the same DAMN THING for RAID and performance, just not SAS. Why the hell do you want SAS?!!?

First of all, the scope of the project: A storage array using eight 4tb Hard Drives, which would be spread out on to RAID 5 for redundancy.

and

What I wanted to do is to build a similar system, which could be brought into any small to mid size business and provide storage capabilities applied to many applications.

No sane business would buy something like this for any application outside of a backup server. I could see SAS being used for hot-swappability, which reduces downtime, so perhaps that's a good reason. I wouldn't use this for VMs or databases. File storage? Sure.



I am going to be using Windows 7 Ultimate with our setup, however, you are not limited to Windows OS here. There are multiple NAS solutions available, FreeNAS (freenas.org) and Caringo (caringo.com) are my two favorite operating systems for storage optimization.

You are using...windows7 ultimate ...for your storage server? :RAGEDEATH:

WHY THE F@%K WOULD YOU EVER USE A DESKTOP OS FOR A SERVER!?!?!?

Junior Engineer Class 12 mentions two possible other OSes, but I believe FreeNAS uses ZFS, which does away with the need for RAID 5, and I'm sure Caringo does something similar.

Now he goes on to testing, which blows everything out of the water, and I would assume so, because he's local to the box doing all these tests and he's tweaked and tuned all the drives for performance locally. Sadly this is NOT how you test a storage server since, correct me if I'm completely off base, every application is hitting the server over the network.

A better test would be to mount those drives on a Windows machine and run the same tests over the network using this as a virtual drive. If SlickRickAdmin did this in the instructions, I missed it as the red haze of fury overcame me, or it was my Monster PUNCH baller blend dripping from the screen.


Conclusion:
First I would like to thank LSI, Seagate and IcyDock for providing us with most important parts, which without them, we wouldn’t be able to build anything at all. These three vendors have multiple solutions for any budget

Yup, he was given his shit for free. FAK ALL THAT WAS WRITTEN ABOVE, it was an advert for the companies he had to gnobslobber to get the gear. MURDER MEATBAGS.

As far as CPU selection goes, I strongly recommend a quad core processor like the i7 3770k Ivy Bridge. You might be able to get away with a lower grade desktop processors such as i7 3820 Sandy Bridge. But please stay in the i7 category when choosing an affordable processor.

It's stupid overkill, and a 6-core Xeon is half the price, FFS. He nuthugs all over "OMG INTEL IS ENTRPRIZE ONLY NAO!" and then goes with a full desktop board. Who is this for? Home users? Business? Send-Me-Free-Review-Stuff hobbyists?

Our CPU is liquid cooled by Cooler Master but you don’t need to have liquid cooling at all.

Holy raging python, Batman, how in the name of the Greek god Hades did I miss that? NO ONE USES WATER COOLING IN A SERVER! WATER + HARD DRIVES = DEATH.

Unless you are doing that oil cooling shit in a bunker in Sweden with 10Teraquads of gigaram server running NSALOL coding. Someone get this kid a coupon to go take his A+ cert at Office depot.

Investment or an expense? Definitely an Investment. If you look from price point perspective, our complete build is just about $6700 USD (as of March 2013, the original publishing date of this article). If you factor this as an investment that would last 5 years, the price per year would be roughly 1350$ per year for a redundant solution of 26TB storage. You can also look at this from price per terabyte. If you take you’re yearly investment which is 1350$ and divide it by 26(TB) it comes up roughly 52$ per TB/year. Now what you have to ask yourself, can you afford to lose data and what it will cost you to recover it (if possible). Some online services require you to send your hardware to their designated locations for recovery and on top of that they slap you with a bill upwards of $5000. In my life time, I have heard horrible stories of data loss and never end well.
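For the record, the quoted article's arithmetic is close; running its own figures ($6,700 build, 5-year life, 26TB usable) gives:

```python
# Figures taken from the quoted article
build_cost = 6700
years = 5
usable_tb = 26

per_year = build_cost / years          # dollars per year
per_tb_year = per_year / usable_tb     # dollars per TB per year
print(per_year, round(per_tb_year, 2))  # 1340.0 51.54
```

So closer to $1,340/year and ~$52 per TB-year, roughly what the article rounds to.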

No FLAKSTICK NO. If you factor in all the shitty design implementations you just created and rolled into a clusterfLUK mess of water cooling, 32GB RAM, and desktop-class hardware mixed with server gear, you are looking at more downtime and headaches due to the kludgey mess you just created and have to support once you build it. If your end goal was to sucker a business with a yearly support contract, then YOU WIN THE INTERNETZ!

Enterprises that want 28TB on the cheap can get a Dell PowerVault, an Infortrend EonStor, or even whatever HP LeftHand is on the refurb market for the same damn price, with next-day support and enterprise parts.


I'm going to go take a few laps in my work parking lot in the cold, maybe stick my head in the ocean, or see if the planes taking off by the airport can blast the stupid out of my brain that I just read.
 
This article is terrible. Everything is wrong. Has he never heard of RAID 10/100? 5/6 is garbage.
 
Uhh..

RAID5? REALLY?

That is some REALLY bad advice.

RAID5 is dead due to large drive sizes and average hard drive error rates.

With RAID 5, if one drive fails, you have no redundancy during the rebuild. A single flipped bit can thus neither be detected nor corrected, and given how large modern drives are and their error rates, you are more or less guaranteed to hit AT LEAST one flipped bit during your rebuild.

I would consider RAID6 the new minimum.

Essentially, you need to make sure that while your array is degraded due to drive failure, you still have at least 1 drive worth of redundancy, that can correct errors.

The key to building a safe array is to have one more parity drive than the number of failures you expect, so you still have error correction left while the array rebuilds.
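The "more or less guaranteed" claim above is easy to sanity-check. A rough sketch, assuming the common consumer-drive spec of one unrecoverable read error (URE) per 1e14 bits read and a simple Poisson model (real-world rates vary):

```python
import math

def rebuild_ure_probability(surviving_drives, drive_tb, ure_rate=1e-14):
    """Chance of hitting at least one unrecoverable read error while a
    RAID 5 rebuild reads every surviving drive end to end.
    ure_rate is errors per bit read; 1e-14 is the usual consumer spec."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    return 1 - math.exp(-ure_rate * bits_read)  # Poisson approximation

# 8 x 4TB RAID 5: a rebuild reads the 7 surviving drives (~28TB)
print(round(rebuild_ure_probability(7, 4), 2))  # ~0.89
```

At an enterprise-rated 1e-15 the same rebuild drops to roughly a 20% chance of a URE, which is why big drives plus RAID 5 is the real problem.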
 
Look at the 2nd page parts list... This is why enthusiasts do not always make good architects. The article is titled "Build You Own Storage Server For Less" [sic], so since when does $6,700 qualify as less? This guy built a damn gaming box with a RAID card and 8 x 4TB drives. Don't get me wrong: this is a great concept to get the DIY enthusiast thinking, and when written right it can help people along, but this article makes it hard NOT to nitpick.

Yeah, not cost effective at all.


I built my dual xeon server for WAY less than that.

Specs:
Norco RPC-4216 case
2 x Xeon L5640 (6 cores each, 2.27Ghz)
96 GB DDR3 Registered ECC RAM at 1333Mhz
Three dual-port Intel Pro/1000 PT NICs
One Broadcom NetXtreme single-port NIC
Two IBM M1015 SAS controllers flashed to IT mode
Datastore/Boot: Samsung 840 Pro SSD, plus an old WD Blue hard drive for backups.
ZFS Array: 12x WD Red 4TB, 2x120GB Samsung 850 pro for cache, 2x Intel S3700 100GB for SLOG/ZIL
 
I reduced my build to the level of storage he is into, using (admittedly used, and older, but still effective) server grade parts in the comments on that article.

I spent ~30% of his total on hardware, and wound up with a much more robust setup, far less likely to lose all my data...
 
It's expensive because he went the SAS route. At least I think that's the reason.

A 5n3 is only $120 on newegg. SATA only but so what.

The Silverstone DS380 has 8 hotswap SAS for $150 and it includes the rest of the case.

FWIW, I use the StarTech 2.5" SAS cages in my workstation. They work great but for this build he could have bought a case with all of the necessary features integrated for much cheaper.
 
Zarathustra[H] said:
I know.

If you combine what he spent on hotswap enclosures and his case, he could have bought a REAL storage case, like a Norco 16 or 24 bay unit.

I suspect this guy would have asked LSI for another 2 cards if he would have been given a 24 bay Norco. He doesn't even seem to know what the BBU is for, so I doubt he has a clue about expanders.
 
What's the purpose of this PC? Why are they using a combination of server hardware (LSI RAID controller, SAS drives, etc.) with desktop hardware (MB, processor, etc.). At first, I thought they were building a high performance workstation. Then, I saw "Sapphire HD 7700 Video Card". WTF? Yeah, these people have no clue how to build a "storage server" and have way too much money...

Unless you need the added heat for your house, a home storage server should be as low power as possible, otherwise the electrical cost will end up costing more than the hardware.

My preference for home:
Small solid-state boot drive, and a large software or motherboard-based RAID using drives as large as affordable to keep the count/power down. I'd rather have 4 x 4TB instead of 8 x 2TB.
 
I suspect this guy would have asked LSI for another 2 cards if he would have been given a 24 bay Norco. He doesn't even seem to know what the BBU is for, so I doubt he has a clue about expanders.

Personally I prefer (if they are cheap enough) multiple SAS controllers over expanders.

I had a terrible time with the performance of the SAS expander built into my old HP DL180 G6.
 
I just redid my array with 5x 4TB drives but did it with RAID 6. Too much of a risk of another drive failing during the rebuild with such huge disks for me to trust RAID 5 anymore. I need to update my sig.

Exactly. Anything bigger than 2 TB makes RAID 5 useless.
 
<rant>
I read the [H] almost every day, but hardly ever post. With this, I had to.

Where does the fail begin and end in this abortion?
...
I'm going to go take a few laps in my work parking lot in the cold, maybe stick my head in the ocean, or see if the planes taking off by the airport can blast the stupid out of my brain that I just read.

That pretty much sums up my opinion. I'm putting together a FreeNAS system with the 8-bay Silverstone DS380B and an AsRock C2550 Avoton mobo. Cost without drives? $850 - and that includes 32GB of ECC (for those who don't know, FreeNAS is a ram hog). With 8x 6TB WD Reds? About $3000.

This build was stupidly specced and overpriced 2 or 3 years ago when the article was written. Reposting it now is really a head-scratcher. I could see if the ideas within were solid, but this guy is all over the map, for reasons well-discussed by other commenters.

I'm particularly confused why anyone would have posted this FP on [H]. All respect to the staff, but anyone who read half of the first page (about all I could stand to read myself) should have seen that this article is worthless (unless it was posted as comedy).

</rant>
 
Here is another question..

WTF was Steve doing on that site!? I read some of the reviews after glancing at this server article (or trash, if you prefer), and they totally suck.
Damn, the internet will let anybody have a website! :eek:
I need an aspirin now......
 
Here is another question..

WTF was Steve doing on that site!?

Author probably emailed him the link in a desperate attempt to get more traffic...

I don't blame Steve. He gets tons of shit in a day and has to decide within a few seconds what is worth going on the main page and what isn't.
 
I don't even use RAID... I've got a self-built solution based on StableBit (very similar to the old Windows Home Server, with drive pooling and folder duplication)... JBOD that I can increase in capacity as time goes on, one drive at a time...

I don't care for RAID; I don't need its redundancy, speed, or disk limitations.

My important data is automatically double-duplicated by StableBit across several drives, while my media files are easily replaceable enough that I don't care to waste storage or processing on them.
 
I like how in the article he states "this is for small to medium sized companies".

In my real life experience most small to medium sized companies that do not work in 'media' usually only require between 500GB to 1TB of storage and that's pretty generous.

Amazingly plumbing companies, decorators, financial advisers, builders, fabricators, consultants, caterers etc. do not actually create a lot of data.

I've been in business for 6 years and my 'business data' (mainly invoices I've sent by email and proposals) comes in at a whopping...75MB.

If a small business asks me for a file storage option with level 1 basic backup (one on-site copy for all machines), then a dual-bay 2 x 1TB NAS in RAID 1 goes in. Simple.
 
I hope no one looks at that page looking for storage server information.

Exactly my thoughts. RAID 5 today is like a grenade with its pin hanging by a thread...

For working arrays, RAID 10 all the way, or simple RAID 1 (or the mentioned StableBit) for home-archive environments. RAID 5 is madness with the massive HDDs of today. If anything, RAID 6/60 is another viable option, but it has some serious drawbacks vs. 10.

StableBit has some good sides, but I really like the boost that hardware RAID gives when dealing with thousands of files in motion. And switching systems/motherboards is much easier and more convenient: just unplug/plug in the card, no need to toy with all those cables. :)
 
That is not how you build a "Storage Server". RAID 5 works in certain scenarios, but for what he is doing, RAID 6 or ZFS would be a lot better. His rationale for not using a Supermicro case or similar specialty case is moronic. He spent more money on the motherboard, case, hot-swap bays, and single power supply than he would have just buying a Supermicro bare-bones system, which would have had a redundant power supply, a Xeon motherboard with support for ECC memory, and hot-swap carriers with a proper backplane. That motherboard is ridiculous; if you are not going to buy a proper server motherboard, at least don't buy some gaming motherboard. 240GB SSD? Why waste the extra money when you will never need more than 120GB? Lastly, if you are going to blow money on things, get an Intel NIC with either dual or quad ports for channel bonding.

This is clearly a system built by someone who builds game rigs and reads NewEgg reviews to find their parts lists. Generally speaking, you are better off just buying a Thecus or QNAP box, unless you have specific needs. Just go to the [H]ard Storage forum, the worst commenter there will give you better advice than that article.
 
4TB drives are the sweet spot right now. I would go with those if I had to build new.

And of course anything less than zfs is just silly.
 
It's expensive because he went the SAS route. At least I think that's the reason.

A 5n3 is only $120 on newegg. SATA only but so what.

Well, for what he spent on those hot swap bays + his case, he could have gotten a real storage case, like one of these:

http://www.norcotek.com/item_detail.php?categoryid=1&modelno=rpc-4224

Either way, I guess that site didn't much care for the negative feedback.

Instead of updating the article or amending it, they just pulled it altogether :p
 