New server build

berrmich

n00b
Joined
Mar 7, 2011
Messages
59
I'm looking to build a server. It will be running software that saves x-rays (Windows software). I was originally thinking an all-in-one since that is what I used for my media center and it works well. I used the Supermicro X10SL7-F and a Xeon E3-1230 v3 for that build. What would you suggest?

Currently there are only 2 TB of data. I would want at least 6 TB. Because the data is important and the amount of data is not great, I was thinking of mirroring the drives.

Backups also need to be made in addition. Currently I'm using a USB external drive, but that is very slow. Any speedier options?

Any build suggestions? I'm thinking a Supermicro board, a Xeon, server-grade HDDs, and ECC memory.

Thanks!!

-Mike
 
What is your budget? Where are you located (meaning are there specific regulatory requirements for security or encryption of patient files)? Is this transient storage or more permanent? Is this going to be network connected or direct connected?
 
No specific budget; I don't want to spend more than necessary, though. Michigan. The software is compliant with the security requirements, etc. Permanent. Network.

-Mike
 
If it's only storing (and not doing any processing), the X10SL7-F and 1230 v3 is a fine choice. The onboard SAS controller makes it easy to set up a RAID 1 mirror with two 6 TB enterprise drives. If you want to save a little, I would not recommend the 6TB WD Red Pro (which is most certainly not enterprise) over the WD Re. I like WD because of their advance RMA policy (requiring only a credit card hold). Personally, I've also had good experiences with Seagate's "Enterprise Capacity" 6TB drives. Stay away from consumer/"NAS"/archive drives.

Also, consider spending just a little more to future-proof the setup by moving to the X11-series boards (DDR4) with onboard SAS. The consumer i3 CPUs support ECC (the i5 and i7 do not), and an i3 should be sufficient for a file server -- you don't necessarily need a quad-core Xeon for that.

(thanks to StereoDude for carefully reading the Red Pro datasheet and pointing out my error)
 
Last edited:
If it's only storing (and not doing any processing), the X10SL7-F and 1230 v3 is a fine choice. The onboard SAS controller makes it easy to set up a RAID 1 mirror with two 6 TB enterprise drives. If you want to save a little, I'd recommend the 6TB WD Red Pro (which is de facto enterprise) over the WD Re. I like WD because of their advance RMA policy (requiring only a credit card hold). Personally, I've also had good experiences with Seagate's "Enterprise Capacity" 6TB drives. Stay away from consumer/"NAS"/archive drives.

Also, consider spending just a little more to future-proof the setup by moving to the X11-series boards (DDR4) with onboard SAS. The consumer i3 CPUs support ECC (the i5 and i7 do not), and an i3 should be sufficient for a file server -- you don't necessarily need a quad-core Xeon for that.

If it's that important I would go with HGST enterprise drives personally. Much better UBER, and less likely to fail in a rebuild at such large sizes.
 
If it's that important I would go with HGST enterprise drives personally. Much better UBER, and less likely to fail in a rebuild at such large sizes.

6TB enterprise drives from any manufacturer have the same 1-in-10^15 UBER, 2M-hour MTBF, and 0.44% AFR. Non-helium HGST enterprise drives seem to cost about the same as WD Res, and last I checked you have to call and plead with HGST to get an advance RMA.
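For what it's worth, the MTBF and AFR figures are consistent with each other. A quick sanity check (my own sketch; the constant-failure-rate assumption is mine, not from any datasheet):

```python
import math

mtbf_hours = 2_000_000       # the 2M-hour MTBF from the datasheets
hours_per_year = 8760

# Assuming a constant failure rate (exponential model), the annualized
# failure rate over one year of powered-on time is 1 - exp(-t/MTBF).
afr = 1 - math.exp(-hours_per_year / mtbf_hours)
print(f"AFR ~ {afr:.2%}")    # prints ~0.44%, matching the quoted spec
```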
 
6TB enterprise drives from any manufacturer have the same 1-in-10^15 UBER, 2M-hour MTBF, and 0.44% AFR. Non-helium HGST enterprise drives seem to cost about the same as WD Res, and last I checked you have to call and plead with HGST to get an advance RMA.

Weren't HGSTs still better, though? That's what I recall, but I could be mistaken.
 
Buy a Dell T20 (G3220 version), pop in some more memory and drives.
Suggestion: 8GB additional and an ASMedia ASM1061/ASM1062-based controller.
Do a RAID 5 (4-HDD array) or two mirrors, and get a 2.5" drive for booting.

As for HDDs: Toshiba or HGST NAS series.
Don't forget offsite backup.
 
Weren't HGSTs still better, though? That's what I recall, but I could be mistaken.

The HGST NAS series is, by reputation, the best among "consumer" drives for us regular folks.

Once you can afford real enterprise drives like the OP ;) there don't seem to be any significant differences in specs or any reports of abnormal failure rates with the current generation of 7200rpm PMR enterprise drives.
 
The HGST NAS series is, by reputation, the best among "consumer" drives for us regular folks.

Once you can afford real enterprise drives like the OP ;) there don't seem to be any significant differences in specs or any reports of abnormal failure rates with the current generation of 7200rpm PMR enterprise drives.

Oh, I thought HGST still had the upper hand in enterprise... well then, get whatever is cheapest. I heard the Toshiba enterprise drives were cheap.
 
RAID 5 on spinning rust with a 6 TB array is almost like asking for your data to be destroyed at some point. Resilvering an array that size is very likely to encounter a URE, and then your data is hosed.

You should not be using any parity RAID type on mechanical drives at this array size. Go with either RAID 1 or RAID 10. RAID 1 will be the cheapest and most reliable. HGST makes drives in many large sizes. I'd stick with SAS, but SATA could be fine too depending on your needs. If it's just bulk storage and you don't need lots of fast access to it, then SATA is fine. Their He line includes some monster-size drives. Stick two of those into a RAID 1 mirror and be done with it.

As for backup, you should be backing up offsite too, but that becomes difficult once arrays grow beyond several TB in size. At the very least I'd get a NAS (Synology makes some good ones) and back up to that. Buy the same set of mirrored disks for the NAS as are in the server. At least then you have your data in two different machines should one explode on you or something.
 
RAID 5 on spinning rust with a 6 TB array is almost like asking for your data to be destroyed at some point. Resilvering an array that size is very likely to encounter a URE, and then your data is hosed.

You should not be using any parity RAID type on mechanical drives at this array size. Go with either RAID 1 or RAID 10. RAID 1 will be the cheapest and most reliable. HGST makes drives in many large sizes. I'd stick with SAS, but SATA could be fine too depending on your needs. If it's just bulk storage and you don't need lots of fast access to it, then SATA is fine. Their He line includes some monster-size drives. Stick two of those into a RAID 1 mirror and be done with it.

As for backup, you should be backing up offsite too, but that becomes difficult once arrays grow beyond several TB in size. At the very least I'd get a NAS (Synology makes some good ones) and back up to that. Buy the same set of mirrored disks for the NAS as are in the server. At least then you have your data in two different machines should one explode on you or something.

Care to provide a source which elaborates on this with measured data?
 
Care to provide a source which elaborates on this with measured data?

Just about every source nowadays says that you shouldn't be using parity RAID any longer for mechanical drives in large array sizes. There is a reason Dell has officially taken the stance that RAID 5 should not be used for any business-critical information.

http://en.community.dell.com/techce...rt-considerations-and-best-practices-released
http://www.zdnet.com/article/why-raid-5-stops-working-in-2009
http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage
http://www.smbitjournal.com/2012/11/choosing-raid-for-hard-drives-in-2013/
http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-reliable/

From the "When no redundancy is more reliable" link above:
What happens that scares us during a RAID 5 resilver operation is that an unrecoverable read error (URE) can occur. When it does the resilver operation halts and the array is left in a useless state – all data on the array is lost. On common SATA drives the rate of URE is 10^14, or once every twelve terabytes of read operations. That means that a six terabyte array being resilvered has a roughly fifty percent chance of hitting a URE and failing. Fifty percent chance of failure is insanely high. Imagine if your car had a fifty percent chance of the wheels falling off every time that you drove it.

The issue is rebuild times on large arrays. Drive sizes are much larger than RAID 5 was ever intended for, so rebuild times are orders of magnitude longer. One disk fails on a large array (a couple of TB or larger) and you swap it out. However, before the new disk is rebuilt, another drive in the array encounters a URE. You are then totally screwed.
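To put rough numbers on that (my own back-of-the-envelope sketch, assuming independent per-bit errors at the rated URE, which is a simplification -- this model gives ~38% for a 6 TB read at 1 in 10^14 rather than the article's linear 50%, but the order of magnitude is the point):

```python
import math

def p_ure(tb_read, ure_per_bit):
    """Probability of at least one URE while reading tb_read terabytes,
    assuming independent per-bit errors at the rated URE rate."""
    bits = tb_read * 1e12 * 8
    # 1 - (1 - p)^n, computed stably for tiny p
    return -math.expm1(bits * math.log1p(-ure_per_bit))

for label, rate in [("consumer, 1 in 10^14", 1e-14),
                    ("enterprise, 1 in 10^15", 1e-15)]:
    print(f"{label}: {p_ure(6, rate):.0%} chance over a 6 TB rebuild read")
# consumer, 1 in 10^14: 38% chance over a 6 TB rebuild read
# enterprise, 1 in 10^15: 5% chance over a 6 TB rebuild read
```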

It just doesn't make sense to use RAID 5 any longer for mechanical drives unless you are supporting legacy systems and have no budget for an upgrade. RAID 1 and RAID 10 are much better choices in my opinion. Not only are they faster than any parity RAID, they are also much more reliable. RAID 1 with two disks is crazy reliable.

RAID 5 and 6 still have their place with SSDs, as the URE issue that plagues spinning-rust disks doesn't affect flash. In fact, I have a RAID 5 array of Samsung enterprise SSDs.
 
Last edited:
^ Why would you choose RAID 6 over RAID 10?? You need a minimum of 4 drives with RAID 6, and a minimum of 4 drives with RAID 10. RAID 10 has better read and write speeds than RAID 6: in a 4-drive array it can read from all four spindles and write at roughly twice a single drive's speed. RAID 6 also incurs a steeper write penalty than RAID 5 (six I/Os per small random write versus four) because of the double parity.

The decision to pick RAID 10 instead of RAID 6 is a no-brainer.
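In concrete terms, the write penalty works out like this (the classic textbook I/O counts; the per-disk IOPS figure below is an assumption for illustration, not a measurement):

```python
# Classic small-random-write penalties: RAID 10 writes both mirrors (2 I/Os);
# RAID 5 reads data + parity, then writes data + parity (4 I/Os);
# RAID 6 does the same with a second parity block (6 I/Os).
PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def effective_write_iops(disks, per_disk_iops, level):
    return disks * per_disk_iops / PENALTY[level]

for level in PENALTY:
    # 4 drives at an assumed ~150 IOPS each (a typical 7200rpm figure)
    print(f"{level}: ~{effective_write_iops(4, 150, level):.0f} random write IOPS")
# RAID 10: ~300, RAID 5: ~150, RAID 6: ~100
```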
 
^ Why would you choose RAID 6 over RAID 10?? You need a minimum of 4 drives with RAID 6, and a minimum of 4 drives with RAID 10. RAID 10 has better read and write speeds than RAID 6: in a 4-drive array it can read from all four spindles and write at roughly twice a single drive's speed. RAID 6 also incurs a steeper write penalty than RAID 5 (six I/Os per small random write versus four) because of the double parity.

The decision to pick RAID 10 instead of RAID 6 is a no-brainer.

For RAID10 to beat RAID6 in any statistical sense, the "correct" two drives have to be the ones that fail; RAID6 survives any two... PERIOD. Overall uptime and array integrity are better with RAID6. Performance is another metric, but not the only one. If I'm writing a few hundred GB a day, do you really think I would give two shits about RAID 10 performance?
 
So mostly the server will be saving and distributing data. Not much processing. Right now it's just running on an old Dell desktop.

I'm wondering about the Dell T20. Is this a viable option and how would I outfit it?

-Mike
 
You keep lumping RAID6 in with RAID5. I'm not buying it... at least not yet. While some of what you say is valid, stating as an overarching rule that RAID10 beats all doesn't pass.

RAID 6 incurs a higher write penalty than RAID 5 (six I/Os per small write versus four). So on write-intensive servers this is a large problem.

Yes, 6 is better than 5. But if you are going to have to purchase 4 drives anyway, why not move to RAID 10 and dump parity altogether? Unless you really need the space that RAID 6 gives you, it never makes sense to choose it over RAID 10.

Don't take my word for it. Do a search yourself on the perils of RAID 5. It's been common knowledge in the IT field for years now.
 
Or the Supermicro X11SSL-CF with 16GB of ECC RAM and an i3 processor? Performance isn't a huge deal. Probably less than a gig/day. For backups it may be more important.
 
RAID 6 incurs a higher write penalty than RAID 5 (six I/Os per small write versus four). So on write-intensive servers this is a large problem.

Yes, 6 is better than 5. But if you are going to have to purchase 4 drives anyway, why not move to RAID 10 and dump parity altogether? Unless you really need the space that RAID 6 gives you, it never makes sense to choose it over RAID 10.

Don't take my word for it. Do a search yourself on the perils of RAID 5. It's been common knowledge in the IT field for years now.

Stop posting about RAID5...SERIOUSLY. It is getting old. You keep stating RAID5 again and again and again and again when I never once advocated RAID5. It is getting to be a broken record. Do you have a bumper sticker that says LOL>>RAID5?

RAID10 if he needs best performance
RAID6 if he needs best overall reliability

If there were one absolute best, it would be there... but there isn't. The world is about trade-offs and understanding your requirements, not the blanket statements you seem to be so fond of.

The reason I asked about his imaging SW having data corruption detection/protection was to help the OP decide if RAID10 would be a better fit. If the SW can fix partially corrupted files, then the limits of RAID10 with a proper backup would be a good match. If not... then not being able to scrub and fix bad files without comparing against the backup (which is also an intensive I/O operation) may not work out well for him. If his backup is online (powered) and local... then RAID10 again may make more sense. However, if his backup is a couple of external 6TB drives... I might think twice about it.
 
Right now backup is a USB external drive that saves all the new data for the day. I would like a better solution here too. I don't know whether the SW fixes corrupted files. Thanks!

-mike
 
Stop posting about RAID5...SERIOUSLY. It is getting old. You keep stating RAID5 again and again and again and again when I never once advocated RAID5. It is getting to be a broken record. Do you have a bumper sticker that says LOL>>RAID5?

From my original post you highlighted my sentence (in red, mind you) that said:
You should not be using any parity RAID type on mechanical drives at this array size. Go with either RAID 1 or RAID 10.

You then asked:
Care to provide a source which elaborates on this with measured data?

So, my entire explanation was based on parity RAID, which is RAID 5 and RAID 6. If you didn't want a lesson on RAID 5 you should have been more specific and asked about RAID 6 instead. The RAID 10 vs. 6 question always, ALWAYS, comes down to capacity. Blanket statement inbound: if you need the extra capacity then go with RAID 6, since you can put an odd number of drives into the array. If you don't, then RAID 10 is ALWAYS better.
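To put rough numbers on the capacity trade-off (simple arithmetic; the 4TB drive size is assumed purely for illustration):

```python
def usable_tb(drives, size_tb, level):
    # RAID 10 keeps half the raw space; RAID 6 loses two drives to parity.
    if level == "RAID 10":
        return drives // 2 * size_tb
    if level == "RAID 6":
        return (drives - 2) * size_tb
    raise ValueError(level)

for n in (4, 5, 6, 8):
    r10 = f"{usable_tb(n, 4, 'RAID 10')} TB" if n % 2 == 0 else "n/a (odd count)"
    print(f"{n} drives: RAID 6 = {usable_tb(n, 4, 'RAID 6')} TB, RAID 10 = {r10}")
# At 4 drives the usable space is identical (8 TB each); RAID 6 only
# pulls ahead on capacity at 5 drives and up.
```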

OP: since you are just storing images, go with a large RAID 1 mirror. You can get twin 10TB disks, which should give you plenty of room to grow.
 
Last edited:
I feel this may have gotten off track. I'm looking to build a server. Any thoughts on the above-mentioned Supermicro motherboard, etc.? I'm thinking of 4 x 4TB WD Red Pros in RAID 10. Hardware vs. software RAID?

Thanks!

-Mike
 
Just about every source nowadays says that you shouldn't be using parity RAID any longer for mechanical drives in large array sizes. There is a reason Dell has officially taken the stance that RAID 5 should not be used for any business-critical information.

http://en.community.dell.com/techce...rt-considerations-and-best-practices-released
http://www.zdnet.com/article/why-raid-5-stops-working-in-2009
http://www.smbitjournal.com/2012/11/one-big-raid-10-a-new-standard-in-server-storage
http://www.smbitjournal.com/2012/11/choosing-raid-for-hard-drives-in-2013/
http://www.smbitjournal.com/2012/05/when-no-redundancy-is-more-reliable/

From the "When no redundancy is more reliable" link above:


The issue is rebuild times on large arrays. Drive sizes are much larger than RAID 5 was ever intended for, so rebuild times are orders of magnitude longer. One disk fails on a large array (a couple of TB or larger) and you swap it out. However, before the new disk is rebuilt, another drive in the array encounters a URE. You are then totally screwed.

It just doesn't make sense to use RAID 5 any longer for mechanical drives unless you are supporting legacy systems and have no budget for an upgrade. RAID 1 and RAID 10 are much better choices in my opinion. Not only are they faster than any parity RAID, they are also much more reliable. RAID 1 with two disks is crazy reliable.

RAID 5 and 6 still have their place with SSDs, as the URE issue that plagues spinning-rust disks doesn't affect flash. In fact, I have a RAID 5 array of Samsung enterprise SSDs.
:rolleyes: *sigh*

Enterprise drives are 1 in 10^15, not 10^14, so your theoretical 50% just dropped to 5%, and RAID-6 is the way to go, but you can continue tilting at windmills if you'd like.
 
I feel this may have gotten off track. I'm looking to build a server. Any thoughts on the above-mentioned Supermicro motherboard, etc.? I'm thinking of 4 x 4TB WD Red Pros in RAID 10. Hardware vs. software RAID?

Thanks!

-Mike
You should look to use real enterprise drives, not those lipstick-on-a-pig "Pro" WD drives.

There are affordable SATA enterprise drives out there, like these.
 
You should look to use real enterprise drives, not those lipstick-on-a-pig "Pro" WD drives.

The Red Pros do have a misleadingly specified URE of 10 in 10^15, which is the same as consumer drives' 1 in 10^14.

Just as Reds are essentially Greens with TLER, Red Pros appear to be simply Blacks with TLER.

(thanks to StereoDude for carefully reading the Red Pro datasheet and pointing out my error)
 
Last edited:
I feel this may have gotten off track. I'm looking to build a server. Any thoughts on the above-mentioned Supermicro motherboard, etc.? I'm thinking of 4 x 4TB WD Red Pros in RAID 10. Hardware vs. software RAID?

Consider the latest X11SSL-CF motherboard for the same money or less instead of the X10SL7-F. It, too, has onboard hardware RAID, with the LSI SAS3008 -- use it. Get the Xeon E3-1230 v5 and enterprise drives.
 
:rolleyes: *sigh*

Enterprise drives are 1 in 10^15, not 10^14, so your theoretical 50% just dropped to 5%, and RAID-6 is the way to go, but you can continue tilting at windmills if you'd like.

Yep. Or if you have lots of drives, RAID 60 has been good to me.
 
Or the Supermicro X11SSL-CF with 16GB of ECC RAM and an i3 processor? Performance isn't a huge deal. Probably less than a gig/day. For backups it may be more important.

Sorry, I missed this post amidst the RAID hullaballoo. As I said, that's a great mobo choice - onboard HW RAID. i3 works as well. Go for it.
 
The Red Pros do have a URE of 1 in 10^15, but half the MTBF of the WD Re and no specified AFR.
No they don't. From the datasheet: "Non-recoverable read errors per bits read: <10 in 10^15"

The latest enterprise Toshibas (for example). From the datasheet: "Non-recoverable Error Rate: 10 errors per 10^16 bits read"

Now I guess if you're being picky you could argue the WD is 9 in 10^15, not 10 in 10^15, making the Toshiba only 9x better, not 10x, but the point still stands that they're not enterprise-class drives.
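Spelled out, that quibble is simple arithmetic on the two datasheet figures:

```python
wd_ure      = 9 / 1e15    # the "picky" reading of WD's "<10 in 10^15"
toshiba_ure = 10 / 1e16   # Toshiba's "10 errors per 10^16 bits read"
print(f"Toshiba is {wd_ure / toshiba_ure:.0f}x better")   # prints 9x
```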

Do you happen to know Toshiba's policy on advance RMAs for enterprise drives?
Not sure, I haven't had one fail yet. I keep a spare around just in case.
 
No they don't. From the datasheet: "Non-recoverable read errors per bits read: <10 in 10^15"

Nope, thanks for correcting me. I read them wrong, glossing over it and assuming it was the usual "1 in 1E15" format, but their marketing department fooled me. Shame on me :) (and WD!)

I'll edit my previous statements and the recommendation to OP.

I find Seagate Enterprises' URE of "1 sector per 1E15 bits read" to be the most honest, since even one bit left uncorrectable after ECC will effectively result in a bad sector, although the remaining data in that sector should theoretically be correct upon the uncorrectable read. I wonder why WD and Toshiba are using a "10 in 10^N" format as their standard.

Perhaps an experiment is in order: take a fast 4TB or so 1E14 consumer drive, run repeated sequential reads, and watch for any UNC SMART errors or extended attributes like "Hardware ECC recovered", etc. After all, some would have us believe that a 1E14 URE means an error is practically guaranteed every 11.37TB read.
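A rough sketch of what that experiment could look like on Linux (everything here is an assumption for illustration: the device path is a placeholder, the drive size is taken as 4TB per the post, it needs root, and smartctl from smartmontools must be installed):

```python
import subprocess
import time

DEV = "/dev/sdX"            # placeholder -- point at a scratch drive you can hammer
CHUNK = 64 * 1024 * 1024    # 64 MiB sequential reads

def smart_attributes():
    # Dump the SMART attribute table; between passes, watch counters such as
    # Reported_Uncorrect and Hardware_ECC_Recovered for any movement.
    return subprocess.run(["smartctl", "-A", DEV],
                          capture_output=True, text=True).stdout

passes = 0
while True:                         # run until you Ctrl-C it
    with open(DEV, "rb") as disk:   # raw device read, start to finish
        while disk.read(CHUNK):
            pass
    passes += 1
    print(f"pass {passes} (~{passes * 4} TB read so far) at {time.ctime()}")
    print(smart_attributes())
```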
 
I find Seagate Enterprises' URE of "1 sector per 1E15 bits read" to be the most honest, since even one bit left uncorrectable after ECC will effectively result in a bad sector, although the remaining data in that sector should theoretically be correct upon the uncorrectable read. I wonder why WD and Toshiba are using a "10 in 10^N" format as their standard.
It's the same difference ultimately. Maybe the marketing department got a hold of it. In which case we should see 100 in 10^17 shortly.

Perhaps an experiment is in order: take a fast 4TB or so 1E14 consumer drive, run repeated sequential reads, and watch for any UNC SMART errors or extended attributes like "Hardware ECC recovered", etc. After all, some would have us believe that a 1E14 URE means an error is practically guaranteed every 11.37TB read.
It's hard to say. My guess is that you would need to test a lot more than one drive to draw any meaningful conclusion. I wouldn't expect you could test the MTBF number with a single drive either, but they don't give any context for the URE number. Is it like an MTBF number, where half should fail before and half after? Is it a minimum? Is it simply some sort of design target?
 
OK...

Supermicro X11SSL-CF (has the LSI SAS3008 on board)
Samsung 8GB x4 (MEM-DR480L-SL01-EU21)
Xeon E3-1230 v5 (or possibly an i3)

An alternative would be the Supermicro X10SL7-F with a Xeon E3-1230 v3.

Or, even more inexpensive, the T20, but how would I outfit it to give me the reliability I want?

So what is a good HDD then, if people don't like the Red Pros? I'm thinking 4 x 4TB in RAID 10.

Thanks for all the help!

-Mike
 
The problem with RAID10 is you can only lose one drive per mirror set... I would do RAID6 personally, but it really depends on what kind of controller you have and what data you are storing...

I would go with Hitachi/HGST, or if you must do WD, the RE4 or Red Pro.
 
1. OK. So if I want to go RAID 6, would it be smart to just get two more drives?
2. If I'm going with this setup, what would be a good case to get? I'm thinking something inexpensive with good cooling. I'm not worried about having more than 6 drives.
3. Backup computer. I'm thinking something cheap as a just-in-case. Maybe a T20 with software RAID-Z2 and 6 consumer drives? Maybe offsite if possible, but for now I'm thinking just on the network, backing up over gigabit since the data size is small.

Thanks for the help!

-Mike
 