The perfect ZFS home server motherboard.

I still don't fully understand VT-d. Would I need it to run FreeNAS/NAS4Free and Windows 8 virtualized for HTPC use (I would have a virtual network between the two)?

There is another way to go for this. I believe it's still unsupported, though. Install Windows 8 and enable the Hyper-V role. Install FreeNAS in a VM and pass through the drives that you want to use.
 
But then my NAS would be dependent on W8? How would this be better than ESXi and running both as VMs?

(Still wondering if I need VT-D for this)
 
This would be a good solution from a power consumption standpoint, though. CPU/video power management is probably much better in W8 than ESXi (and certainly better than FreeNAS, which sucks).
 
But I would like the option to run WMC with Shark007's codec setup, or perhaps XBMC if I find that superior. Karaoke software would also be nice to run, with USB sound.

XBMC is generally far superior to WMC unless you have certain specific requirements that get tricky... like if you need Netflix integration or certain PVR features, then you'll want to look into the feasibility of those on XBMC. That might not eliminate your need for a VM, though, since I don't think there are any XBMC builds for BSD (yet). But supposedly FreeBSD has a Linux compatibility layer, so that might be a route that works.
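If the FreeBSD Linux-compat route is what you end up trying, the basic steps are roughly this (a sketch from memory, so double-check against the FreeBSD handbook; the linux_base port name varies by release):

    # Load the Linux binary-compatibility kernel module
    kldload linux
    # Make it persistent across reboots
    echo 'linux_enable="YES"' >> /etc/rc.conf
    # Install a Linux base userland
    pkg install linux_base-f10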

Since you're concerned about power consumption, are you considering setting it up to auto-suspend after an hour or so of inactivity? It might be tricky since you'd need a way to bring it back up (like Wireless Wake on LAN, etc.). It's something I thought about myself but never really dived into the details.
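For what it's worth, the Wake-on-LAN half is easy to experiment with on a Linux host; a rough sketch (eth0 and the MAC address are just placeholders):

    # Check whether the NIC supports magic-packet wake ("g")
    ethtool eth0 | grep Wake-on
    # Enable magic-packet wake
    ethtool -s eth0 wol g
    # Suspend to RAM
    pm-suspend
    # Then, from another machine on the LAN, wake it by MAC address
    wakeonlan 00:11:22:33:44:55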
 
Anyone know if the ASRock motherboard supports port multipliers so it can be used in a Backblaze storage pod?
 
As long as we're asking questions, what the heck happened to this board? We initially heard availability for the 8-core model in Oct/Nov, with the 4-core following a month later. Neither appears to be available anywhere, at any price. What gives?
 
Indeed. Is this thing actually for sale?

It would be an excellent choice for a high-end yet compact ZFS NAS server.
 
Shipping delays from Intel for Avoton. Production is ramping slowly and demand is exceeding forecast. It's affecting all Avoton-based products, and the low-end consumer board manufacturers are at the back end of Intel's shipping priority list.
 
ASRock sounds like the issue. They don't answer non-Windows questions in any meaningful way, in my experience.
 
Ah, I should have expected as much. Thanks. Any idea just how far behind Intel is? Trying to gauge whether I should just bite the bullet and get a proper socketed motherboard or keep waiting for this one.
 
Just a quick update on this motherboard. I received mine this week and have it running ZFS (RAID-Z) on an Ubuntu host with 5x Seagate 4TB NAS drives (ST4000VN000) and 16GB of ECC DDR3 1600. Scrub speed is ~380-410MB/s, and reading and writing to the array easily saturates a 1GigE link.

I did have one of the Marvell SATA controllers drop offline once (or at least all the disks on that controller), which I think is due to a Debian/Linux kernel bug relating to that controller and SMART queries at certain inopportune times. I haven't had any issues when occasionally querying SMART status; the drop-off happened when I was running watch on the SMART query (2s refresh for hours on end while copying data at full speed).
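For anyone wanting to reproduce those numbers, it's just the standard tooling; something like the following (the pool name and device are placeholders):

    # Kick off a scrub and check its progress/throughput
    zpool scrub tank
    zpool status tank
    # The kind of repeated SMART polling that coincided with the Marvell drop-off
    watch -n 2 smartctl -a /dev/sda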
 
Great, thanks for the update. That may be the first performance data available for ZFS on this board, congrats!

How about power; got a Kill A Watt?

Cool to see that at least Linux ZFS runs.

I get 700MB/s scrubs on my 6x 2TB 7200rpm drive setup, so it kinda looks like you MIGHT be CPU limited there? I am running FreeNAS.

Also on your transfers, what are you using? NFS? CIFS?
 
How about power; got a Kill A Watt?
The entire system is pulling around 60 Watts from the wall running on a CX 430M. Obviously that PSU is way overkill, but it was cheap (a Silverstone SFF or Supermicro 1U is 4x the price). The consumption doesn't seem to vary more than +/-5W when actually copying data or scrubbing. I think that's because I'm running the PSU at such a low load that the baseline draw of the mobo/proc and HDDs dominates the reading. I need to do more research into this by running actual CPU-stressing benchmarks and running with the 5 spinning disks disconnected to evaluate the board by itself. I'm considering getting a PicoPSU or one of the options I mentioned above. Or I may build my own PSU; we'll see.
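For the CPU-stressing part, something like the stress utility should do the job; the thread count and duration here are placeholders:

    # Peg all 8 cores for 60 seconds while watching the meter
    stress --cpu 8 --timeout 60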

I should note I haven't looked into enabling spindown of the drives yet, so that number is with all 5 platter-possessing disks spinning.
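When I do, hdparm should cover it on Ubuntu; a sketch, with the device and timeout as placeholders (whether ZFS housekeeping lets the drives stay asleep is another question):

    # Spin a drive down immediately
    hdparm -y /dev/sda
    # Set an idle spindown timer; 240 = 240 x 5 s = 20 minutes
    hdparm -S 240 /dev/sda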

I get 700MB/s scrubs on my 6x 2TB 7200rpm drive setup, so it kinda looks like you MIGHT be CPU limited there? I am running FreeNAS.
There were 8 distinct threads running at ~12-14% CPU each in top during the scrub, so I think it was running full tilt across all 8 cores. I need to benchmark the array without the 1GigE limitation to see what the raw read/write performance is, to determine whether the CPU was the limit for the scrub or whether only having 5 drives (a lowish number) and them being NAS drives (i.e. lower RPM) was the issue. Do you know of any good benchmark utilities? I've seen the ones built into some of the dedicated ZFS/NAS OSes, but not a script I can just run on Ubuntu. (There's a rough fio sketch at the end of this post.)
Also on your transfers, what are you using? NFS? CIFS?
Samba to a Windows client. I didn't bother to benchmark anything here because I don't intend to get into InfiniBand or 10GigE or anything crazy, so as long as it saturates the link, I don't particularly care what it could theoretically do. I'm also running rsync to my old server at around 70-80MB/s, but that's being limited by the old server's RAID5 performance, because I've never gotten more than that out of it.
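Re: a benchmark that just runs on Ubuntu, fio is probably the simplest option; a minimal sequential-read sketch (mount point, size, and runtime are placeholders, and the size should comfortably exceed RAM so the ARC can't serve everything from cache):

    # Sequential 1M reads against a file laid out on the ZFS mount
    fio --name=seqread --directory=/tank --rw=read --bs=1M \
        --size=32G --runtime=60 --time_based --group_reporting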
 
The entire system is pulling around 60 Watts from the wall running on a CX 430M. [...] Or I may build my own PSU; we'll see.

Yeah, that's way high. I was going to ask, are they 7200rpm drives? But even if they are, still high.

I'm running 6x very power-hungry 3TB Toshiba 7200rpm drives and I see 54W at idle, and 25W with the drives sleeping. 18.5W with no drives connected (but IPMI and one LAN connected) on a Haswell C226 board with 8GB ECC (ASRock C226 mini-ITX).

Since you are willing to build your own PSU, then let me recommend a Dell RM112:
http://www.ebay.com/itm/Dell-RM112-...-/121232834681?pt=PCA_UPS&hash=item1c3a0a4079
And the low price used really seals the deal. Honestly, the only PSU that might compete at very low power is some $200 Platinum Supermicro, and even then only maybe.
It's not ATX, so let's say I sacrificed an ATX PSU case and did a lot of Dremel work.

You won't find a more efficient PSU at low power, and I custom-wired one for my case/setup. I see about 170W peak at startup with all 6 drives spinning up, but with 5400rpm drives you could run 12+ of them on this PSU. It beats the PicoPSU by a significant amount in efficiency.
 
Since you are willing to build your own PSU, then let me recommend a Dell RM112:
That looks like a nice option if I want to minimize the effort expended on this, but when I said build my own power supply, I meant literally from scratch: i.e., a custom list of chips and passives, laying out my own PCB, writing the code for the control MCU, and building it.
 
Something occurred to me... do typical PSUs even have enough SATA power connectors for a 12-port motherboard like this one?
 
No, but you're not likely to have that many drives without a backplane anyway. The backplane is usually 1 SATA or Molex connector per 2-3 drives.
 
Nothing a soldering iron can't fix. The Dell RM112 I run only has a single SATA connector, but I'm powering six drives off that single wire. You could probably run 12 off a single Molex if they are 5400rpm drives.
 
That looks like a nice option if I want to minimize the effort expended on this, but when I said build my own power supply, I meant literally from scratch: i.e., a custom list of chips and passives, laying out my own PCB, writing the code for the control MCU, and building it.

I'm interested in your results. Also, please post if you ever test more power numbers (no drives, idle, etc.).
 
Nice catch! Tempting. Hope some people pick these up and report results with FreeNAS/OmniOS/Linux/virtualization, and also post power #'s.
 
I received the C2550d4i, but I won't have time to do anything until past New Year's. I don't have the PSU I want to use yet anyway.

For power #'s, I can run it barebones with just RAM and also after my full config.
 
Awesome.
For a PSU, don't neglect the Dell RM112 and some basic soldering. Nothing beats it for efficiency and cost. Mine is running great with 6x Toshiba 7200rpm drives and a Haswell setup. I'm sure it would support 12 drives with a C2550 and 5400rpm drives.

I get 25W at idle with these inefficient drives spun down on my C226 board in FreeNAS.

Looking forward to your power #'s.
 
Yikes, $70 less than the original price of the 8-core model and over $100 less than the current price. I'm surprised it's that much cheaper; I may have to downgrade. :D

Newegg just added a $15 promo gift card to the 2550 too (same price of $289). This promo was not offered when I bought it.

Will try to get power numbers in next week if I have time. I might just swap over the PSU from my HTPC just to make sure the board is working.
 
I think I'm going to grab the C2550d4i and try to run XPEnology on it. Would be a pretty solid build.
 
I've been umming and ahhing about the Supermicro A1SAM-2750F for a while now, but in terms of "bang for buck" (especially considering what I actually need from them), these ASRock boards look like pretty clear winners to me.

Definitely planning on grabbing a couple of C2550d4i boards once I'm able to find somewhere that will deliver them to AUS.

Planning on (experimenting with) running one as a ZFS host, and one for ESXi.
 
Too bad - with that amount of trouble, this looks like a dead end for a cheap portable NAS :(
 
Yeah I'm losing confidence in this thing.

So... anyone know of any Xeon mini-ITX boards with 12 ports? :D
 
None that I know of. I wouldn't care about BIOS startup times on a home server.

Testing has shown that it will max out gigabit via CIFS in Windows, no issue there.
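If anyone wants to separate the network from the filesystem in that kind of test, iperf measures raw link throughput; the server IP here is a placeholder:

    # On the server
    iperf -s
    # On a client, run a 30-second throughput test
    iperf -c 192.168.1.10 -t 30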

I do wonder about the performance/reliability of the Marvell SATA ports in FreeNAS.
 
Why not choose the Supermicro X10SL7-F?

Normal-size memory, 14 SATA ports, and 8 of them on an LSI HBA, which has better support and is more "enterprise" than Marvell. And together with an E3-1220v3, the price is the same.

Matej
 
True story. The X10SL7-F-O + E3-1220V3 combo is $410 on Newegg with a $45 discount.
But you can get the ASRock C2550d4i, with 4 cores instead of 8, for just $280. And I want the price to drop even more. A Xeon is overkill for me. I don't need transcoding or any other jails in FreeNAS. I just want to max out transfer speeds.

What do you mean by "normal size" memory? It looks like the memory on the C2750/C2550 is "normal size" too.
 
Sorry about the "normal size"; I had a Supermicro board in mind.

You don't need the Xeon. You can take a Pentium G3220 or i3-4320 and you will still be able to max out 1Gbps. It might be a bit more expensive than the ASRock, but then again, you have proper server-grade hardware :)

Matej
 