Sandy Bridge NAS build

terahz

n00b
Joined
May 2, 2011
Messages
21
Hi guys. New here. First off, thanks a lot for all the info on these forums.

I'm planning a NAS build for my home to store mainly images and footage (I'm a photographer working from home), along with some audio and video files. The main goal is low power consumption, but with enough processing power should I decide to use it for occasional video transcoding. After reading the forum here, I realize the recommended config is a Supermicro MB paired with a good Xeon CPU (socket 1156) and ECC memory; however, that's a bit over my budget right now ($1K).
I plan to run raidz1 with the four 1TB drives I currently have and another raidz2 pool with new drives.
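As a rough sketch of the usable space from that plan (assuming 2TB drives for the new raidz2 pool, per the parts list; raidz1 keeps (n-1)/n of raw capacity and raidz2 keeps (n-2)/n, before ZFS overhead):

```shell
# Usable capacity before ZFS overhead: subtract one parity drive for
# raidz1, two for raidz2, then multiply by the per-drive size.
awk 'BEGIN {
  printf "4x 1TB raidz1: %d TB usable\n", (4 - 1) * 1
  printf "4x 2TB raidz2: %d TB usable\n", (4 - 2) * 2
}'
```

So the old drives net about 3TB and the new pool about 4TB of usable space.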

So this is my parts list at the moment:

MB: ASUS P8H67-M PRO/CSM
CPU: Intel Core i3-2100T
RAM: G.SKILL Ripjaws 8GB (2 x 4GB)
SATA board: Intel SASUC8I
SATA cable: 2x Areca CB-R87SA-75M cables
PSU: Antec EA-430D 430W 80 PLUS BRONZE Certified
HDD: 4-8x HITACHI Deskstar 5K3000
Case: LIAN LI PC-V354B
OS: OpenIndiana
Possibly add an Intel NIC for ~$25

The above setup gets me just over $1K with four 2TB drives and the ability to add at least four more without new SATA boards. I'm not really keen on the case, but I wanted something smaller. Then again, that's counterintuitive for a storage system, so I might get one with more 3.5" bays (any recommendations?). Also, I'd love to use the i5-2390T, but the only place I could find it has it for $330, which seems a bit high. Is it worth it?

Anyway, just doing a reality check here. Any obvious issues? Any tweaks? Any waste of money?

I'll probably be ordering by Wed or early next week if someone recommends something drastic that I'd need to do extra reading on.

Thanks a lot.
 
After reading the forum here, I realize the recommended config is a Supermicro MB paired with a good Xeon CPU (socket 1156) and ECC memory; however, that's a bit over my budget right now ($1K).
Actually, such a build isn't over your budget. The RAM, mobo, CPU, SAS controller, and Intel NIC in your list total around $550. An older Supermicro build is actually a little less:
$120 - Intel Core i3-540 CPU
$270 - Supermicro X8SI6-F
$140 - Kingston 2 x 4GB ECC Unbuffered DDR3 1333 RAM
---
$530 plus tax and shipping.

That motherboard has a built-in SAS controller and comes with all the cables you need to hook up 14 hard drives. It also has IPMI and KVM over IP, which make it really easy to manage the board remotely. Finally, it has an Intel NIC.

However, the two key cons of the above Supermicro build compared to your SB build are as follows:
- The Core i3-540 is slower and uses a little more power than the Core i3-2100T
- That motherboard will take a while to ship.

Fairly sure there's an updated SuperMicro SB setup but a little lazy to look for it now.
This PSU is overpriced considering that a larger and higher quality PSU is $6 more:
$64 - Antec NEO ECO 520C 520W PSU

With that said, that setup should be fine with this PSU:
$50 - Antec NEO ECO 400C 400W PSU
 
I second Danny's suggestion, I think it's very good. BTW Danny, do you know if Supermicro has any Sandy Bridge board similar to the X8SI6-F?

The only thing I would reconsider is the Lian Li case: to save some money, go with the Fractal Design Define R3 at $99. It should give you the space for the required 8 HDDs, with two 5.25" bays still free for additional SSDs/HDDs/optical drives.

blbild_2_2.jpg
 
Thanks Danny for the reply. I like that board a lot, but the CPU uses 2-3 times more power than the i3-2100T. I was considering the Xeon L3426 with such a board, but that CPU is also $300+.

Maybe the X9SCL-F with the RAM you listed and the i3-2100T? It just doesn't have integrated SAS, but with zeroARMY's suggestion it might be a reasonable compromise?

Thanks Abula for the case recommendation. I will probably end up with something similar because the Lian Li case just costs too much.
 
I second Danny's suggestion, I think it's very good. BTW Danny, do you know if Supermicro has any Sandy Bridge board similar to the X8SI6-F?
If you're talking about the integrated SAS controller, AFAIK, no. But there are Sandy Bridge mobos from Supermicro available which have all of the features of the X8SI6-F, just without the integrated SAS controller.
Maybe the X9SCL-F with the RAM you listed and the i3-2100T? It just doesn't have integrated SAS, but with zeroARMY's suggestion it might be a reasonable compromise
Yes and no: those Supermicro socket LGA 1155 mobos just came out, so there might be some teething issues that haven't become apparent yet. And it doesn't look like many people are jumping on those new SM mobos here in the subforum, so if you're asking for help here, you might be SOL. Then again, some people here on the forums might know what's up without actual experience with those mobos.

Also, the Intel 82579LM NIC on that particular mobo will not work with OpenIndiana at this point in time. So if you don't want to waste money on a part you can't use, and if you're willing to be basically on the bleeding edge, find a Supermicro LGA 1155 board with two Intel 82574L NICs.
 
OK, that leaves only two boards: the X9SCA-F @ $210 and the X9SCI-LN4F @ $230.

The first has 2x GbE and a few more PCI[e] slots; the second has 4x GbE but only one PCIe 2.0 x16 slot and one 32-bit PCI slot.

So if I'm OK with the latest hardware, the revised BOM looks like this:

MB: SuperMicro X9SCA-F
CPU: Intel Core i3-2100T
RAM: Kingston 2 x 4GB ECC Unbuffered DDR3 1333 RAM (fixed description)
SATA board: IBM BR10i
SATA cable: 2x Areca CB-R87SA-75M cables
PSU: Antec ECO 400W
HDD: 4-8x HITACHI Deskstar 5K3000
Case: Fractal Design R3

That configuration with 3 drives (instead of 4) gets me to just over $1K. If I get a cheap case with 7 internal 3.5" bays and a few external 5.25" bays for $30-$40, I can get a fourth drive now.

How does the new list look? I'll do some more searching on the MB and OpenIndiana compatibility and possibly a different case for less $.

Thanks again.
 
Stick with the X9SCA-F. And fix your RAM item: you list G.Skill but linked Kingston.
 
Oops, fixed. Bad copy/paste. Thanks!

As for the case, I'll do some measurements tonight to see if a 2U case would be a better fit for where I want it. The NORCO RPC-250 seems like a good case for a reasonable price.
 
Thanks zeroARMY. Another 30 bucks saved there :)

I've decided on the Norco RPC-250. It fits perfectly in the location I want it (short, but wide and deep).

Danny Bui, I never asked why you recommended the Neo Eco 400W PSU over the one I had chosen. Isn't it better to have a slightly more efficient PSU than one that can provide more current on a single rail? Especially since I don't plan to put any high-end video cards or other power-hungry parts in this build. Also, the reviews of the EarthWatts on Newegg seem to be slightly better. Just curious.

Here is a comparison I found between the EarthWatts 500W vs the NEO Eco 520W. It looks like the EarthWatts is more efficient and regulates the voltage better.
EA500Dcold_by_Makalu7.png

coldrun.png



Well, it looks like I'm at pretty much exactly $1K with the IBM SAS card, the above RAM, 4x 2TB drives and the Norco 2U case. That should get me started for now while I save up for another 4x 2TB :), then I can post in the 10TB+ thread :).

As always thanks for the help guys. It looks like for the same amount of money I'll end up with a much nicer system than what I had planned in the beginning.
 
Danny Bui, I never asked why you recommended the Neo Eco 400W PSU over the one I had chosen. Isn't it better to have a slightly more efficient PSU than one that can provide more current on a single rail? Especially since I don't plan to put any high-end video cards or other power-hungry parts in this build. Also, the reviews of the EarthWatts on Newegg seem to be slightly better. Just curious.
Well I sorta told you why I recommended that Neo Eco in my reply:
This PSU is overpriced considering that a larger and higher quality PSU is $6 more:
$64 - Antec NEO ECO 520C 520W PSU

With that said, that setup should be fine with this PSU:
$50 - Antec NEO ECO 400C 400W PSU

That's the main reason why I did not recommend that Earthwatts: It's simply overpriced IMO. It's $6 more than the Neo Eco 400C and $6 less than the Neo Eco 520C. From a price to performance standpoint, that Earthwatts 430W isn't a good buy. And I personally don't trust Newegg reviews in regards to PSUs as the majority are written by people who know jack about PSUs.
 
Alright, Neo Eco it is. The only reason I was going for the EarthWatts is that it should be a little more efficient. If that "more" were even 5%, the PSU would pay for itself over 4-5 years if the system averaged 100W (which I hope it doesn't even come close to). I'm hoping the system idles around 40W, though, so I don't think even a 5% efficiency edge matters.
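To put numbers on that payback logic, here's a back-of-the-envelope sketch (the $0.12/kWh electricity rate is my assumption, not from the thread):

```shell
# Savings from a 5% efficiency edge at a constant 100 W draw,
# assuming $0.12/kWh and 4.5 years of 24/7 operation.
awk 'BEGIN {
  watts_saved = 100 * 0.05                           # 5 W less at the wall
  kwh_saved   = watts_saved * 24 * 365 * 4.5 / 1000  # over 4.5 years
  printf "kWh saved: %.0f\n", kwh_saved
  printf "Saved:     $%.2f\n", kwh_saved * 0.12
}'
```

At a 40W idle the saving shrinks proportionally (to roughly $9-10 over the same span), which supports going with the cheaper Neo Eco.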

I will probably order parts tomorrow so hopefully it will be a fun weekend (if the ram arrives on time) :).
 
My Atom 510 server with 4x 1.5TB drives and one 120mm fan idles at 40W. The 2100T has the potential to do the same in a 4-disk setup, but more disks (you're planning 8) will go over 40W easily.

On the PSU, I agree with Danny; I think the Neo Eco is the much better choice. In my delayed server upgrade build I'm planning to use a Seasonic X400, but it's too expensive, so I'm considering the FSP Group AURUM GOLD 400. FSP bricks are really efficient, and according to reviews this line of PSUs even reaches 90%. I've never really tried FSP in a full ATX PSU, but I'm kinda curious about them; the downside is they're more expensive than the Neo/EarthWatts.
 
On the PSU, I agree with Danny; I think the Neo Eco is the much better choice. In my delayed server upgrade build I'm planning to use a Seasonic X400, but it's too expensive, so I'm considering the FSP Group AURUM GOLD 400. FSP bricks are really efficient, and according to reviews this line of PSUs even reaches 90%. I've never really tried FSP in a full ATX PSU, but I'm kinda curious about them; the downside is they're more expensive than the Neo/EarthWatts.

Were those proper PSU reviews that you were reading? I.e The criteria listed under "The Goal" section on what makes a good and bad PSU review in this link:
http://www.overclock.net/power-supplies/738097-psu-review-database.html

If not, then I highly recommend disregarding those reviews. FSP is one of those companies you really, really should be checking thoroughly, as a lot of FSP's past PSUs have been pretty crappy.
 
I attempted to build exactly the same system:

MB: ASUS P8H67-M PRO/CSM or Asus P8P67 Pro
CPU: Intel Core i3-2100T

Raid: Intel RS2WC080 (based on an LSI chipset)

The mobo didn't boot or even display the POST screen, so I'd recommend you stick with Supermicro. Does anyone know of any resolution for the issues between Asus Sandy Bridge boards and LSI chips?
 
Danny Bui, thanks for that link on PSU reviews. It was very helpful. Now I see why the Neo Eco is a better PSU.

anterus, sorry about the MB not booting. I already placed the order with the Supermicro MB. Stuff is arriving on Monday.
 
Well, here they are:
kasa_001.jpg

kasa_002.jpg


I started with only 2x2TB for now to play with while the rest are coming (didn't want to order all from the same batch).

OpenIndiana:
Live USB boot did not work; I had to boot from a live DVD. I installed the OS on a small USB HDD I had lying around and it worked well. It detected all the hardware without a problem.
A few things I need to figure out now:
Is it possible to switch to the i3's integrated graphics instead of the Matrox G200eW that is on the MB? I have a feeling it will do better. If not, is it possible to run the Matrox G200eW with a better driver than 'vgatext' so that I can at least have full resolution on my monitor while doing the initial setup? Once it's set up, it will not have a monitor connected at all, so this is not the end of the world either way.

Case:
These fans sound like jet engines! They do move a lot of air, but I don't think I'll need that yet, so as a first step I'll build a small PWM circuit with a pot to control the fan speed; hopefully they are not as loud at lower RPM. Otherwise I'll have to replace them. For now, they will stay unplugged.

Power Consumption:
I'm actually pretty happy with the results. Even though I have only 2 HDDs + 1 USB drive, idle power consumption is 36W. These HDDs are supposed to idle at <1W, so if I add 4 more to fill up the SATA ports on the MB, I should stay under 40W at idle. I'll test power consumption under load tomorrow, but during boot the system peaks at 60W.

MotherBoard:
There is a "bug" in the manual. On the MB the black RAM slots are listed as DIMMA1 and DIMMA2, in the manual it says the blue slots are DIMMA1 and DIMMA2. Not sure which to believe, but I installed my ram in the two black slots.

Thanks again for the help in choosing the hardware. I will be running some more tests this week and hopefully by the weekend I should be able to start moving data around.
 
Little update.

I had to update the BIOS on the MB because I wasn't able to enter setup: every time I pressed Del during boot, the system would just freeze. After updating to 1.00b, all seems good.

I'm now running 6x 2TB Hitachis in raidz2. The system uses 55W at idle with the drives spinning (I still haven't read up on spinning them down) and with the case fans OFF! With the case fans on, it goes to 65W (there are 4 fans in this case). With a single write job to the pool, usage goes to about 85W.

Here is a small test with dd:

dd if=/dev/zero of=/store/dd.tst bs=1024000 count=20000
20.48 GB in 55.1s = 371.69 MB/s Write
dd if=/store/dd.tst of=/dev/null bs=1024000
20.48 GB in 43.2s = 474.07 MB/s Read

Pretty happy with that, especially given that I will be accessing it over gigabit Ethernet.
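For context on why GbE is the ceiling here, a quick sketch (the ~94% efficiency factor is a rough rule of thumb for TCP/IP and Ethernet framing overhead, not a measurement):

```shell
# Gigabit Ethernet throughput ceiling vs. the pool's dd write speed.
awk 'BEGIN {
  raw = 1000 / 8     # 1 Gbit/s expressed in MB/s
  eff = raw * 0.94   # after typical TCP/IP + framing overhead (assumed)
  printf "GbE raw:       %.0f MB/s\n", raw
  printf "GbE effective: ~%.1f MB/s\n", eff
  printf "Pool write is ~%.1fx the wire rate\n", 371.69 / eff
}'
```

So the pool can feed the network roughly three times over; the wire, not the disks, is the bottleneck.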

Initially I shared everything with netatalk. Speed was very good over GbE (averaging about 95MB/s, topping out at 110MB/s), but the stupid AFP implementation on the Mac doesn't allow multiple users of the same machine to use the same mount, so I had to switch to NFS.

The NFS speed is all over the place. With cp from the command line it starts at 65MB/s, then slows down and even stalls at some point; an 11GB copy takes 3 min 23 sec with cp, about 55MB/s. If I drag and drop with Finder (Mac client), the file transfers in 2 min 3 sec, about 92MB/s, which is quite a difference. Not sure why, but that's what my tests show.
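The per-method rates above are just size over wall time; spelled out (sizes and times copied from my tests):

```shell
# MB/s = megabytes moved / wall-clock seconds, for the two copy methods.
awk 'BEGIN {
  size_MB = 11 * 1024                                   # the 11 GB test file
  printf "cp:     %.0f MB/s\n", size_MB / (3*60 + 23)   # 3 min 23 s
  printf "Finder: %.0f MB/s\n", size_MB / (2*60 + 3)    # 2 min 3 s
}'
```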

I've also set up the Coherence DLNA server for my Blu-ray player, also with mixed results. It can share videos, but no music (because of a missing Python library that won't compile). Also, the TED backend crashes my Blu-ray player for some reason. Overall not very happy, but it is the only solution I was able to find for OpenIndiana. I might set up VirtualBox with Linux inside a zone so that I can run tvMobili, which I know works quite well with my Blu-ray player.

HDD temps:
With the case open, fans not running, no load: about 40C. Case open, fans running, no load: 30C. I'm guessing it will drop even more with the case closed, but I'm not done enough with it to close it yet :)

That's about it for now. Off to continue reading the OpenSolaris Bible.
 
Well, it's been more than 2 years now so I figured I'd post some updates.

Changes from original system:
Case is now an SM SC846TQ
Added another 8GB of RAM.
Changed the HBA to 3x LSI SAS 9211-8i with the IT firmware
CPU changed to an Intel Xeon E3-1220L because I was never sure if ECC worked on that i3-2100T
HDDs: 6x 2TB Hitachi 5K3000s, 4x 2TB WD Red and 6x 3TB WD Red.
Added a UPS: CyberPower CP1500PFCLCD

OS is now OmniOS, and data is in two raidz2 pools, 16.2TB (from 6x3TB) and 18.1TB (from 10x2TB).
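For anyone puzzled by the pool sizes: zpool reports the raw pool size (parity included) in binary units, so the figures work out roughly like this (a sketch; usable space is about (n-2)/n for raidz2, before metadata overhead):

```shell
# Raw raidz2 pool sizes in TiB (close to what zpool list shows), and
# approximate usable space after two parity drives per pool.
awk 'BEGIN {
  tib = 1024^4; tb = 1000^4
  p1 = 6  * 3 * tb / tib   # 6x 3TB drives
  p2 = 10 * 2 * tb / tib   # 10x 2TB drives
  printf "6x3TB:  raw %.1f TiB, usable ~%.1f TiB\n", p1, p1 * 4/6
  printf "10x2TB: raw %.1f TiB, usable ~%.1f TiB\n", p2, p2 * 8/10
}'
```

Those raw figures land close to the 16.2TB and 18.1TB the pools report; the small gap is ZFS metadata and reservation.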

Power usage has gone up quite a bit, to 150W idle and about 190W under heavy IO. Still acceptable given the number of drives I have and the fans in the case.

Bonnie benchmarks:
16.2T - Seq-Write: 414 MB/s Seq-Read: 516 MB/s
18.1T - Seq-Write: 695 MB/s Seq-Read: 920 MB/s

According to bonnie, my CPU is going to be the limit if I try to write to both of these at the same time at those speeds, but reading is fine. Plus, I'm still only connected to the machine over 1GbE, so I'm not likely to hit those speed limits anytime soon.
 
Well, it's been more than 2 years now so I figured I'd post some updates.

Changes from original system:
Case is now an SM SC846TQ
Added another 8GB of RAM.
Changed the HBA to 3x LSI SAS 9211-8i with the IT firmware
CPU changed to an Intel Xeon E3-1220L because I was never sure if ECC worked on that i3-2100T
HDDs: 6x 2TB Hitachi 5K3000s, 4x 2TB WD Red and 6x 3TB WD Red.
Added a UPS: CyberPower CP1500PFCLCD

OS is now OmniOS, and data is in two raidz2 pools, 16.2TB (from 6x3TB) and 18.1TB (from 10x2TB).

Power usage has gone up quite a bit, to 150W idle and about 190W under heavy IO. Still acceptable given the number of drives I have and the fans in the case.

Bonnie benchmarks:
16.2T - Seq-Write: 414 MB/s Seq-Read: 516 MB/s
18.1T - Seq-Write: 695 MB/s Seq-Read: 920 MB/s

According to bonnie, my CPU is going to be the limit if I try to write to both of these at the same time at those speeds, but reading is fine. Plus, I'm still only connected to the machine over 1GbE, so I'm not likely to hit those speed limits anytime soon.

Great setup.
 
Thanks guys. Yes indeed, the storage bug got me :). I'm also backing up my data to CrashPlan, encrypted with a personal key, so I finally feel like my storage/data-safety problem is reasonably under control :)
 