ZFS processor question / scenario?

tangoseal

[H]F Junkie
Joined
Dec 18, 2010
Messages
9,741
I am narrowing down the CPU choice for the new NAS I am building to the following processors and would like advice.

And AMD folks, I do NOT want to use AMD because, as we have already established, the softraid controller (SB950) is far slower overall than Intel's ICH10R, etc...

I am looking at the following two Xeons:

Socket 1155
Intel E3-1240 --
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115082

Socket 2011
Intel E5-2620
http://www.newegg.com/Product/Product.aspx?Item=N82E16819117269

Notes*
Socket 2011 costs more, but it supports more RAM and has more cache on the CPU.
Socket 1155 is far less expensive, supports less RAM, and has less cache, but it scores really high, almost as high as a 2600K.

However, what I am really confused about is how ZFS (FreeNAS or NAS4Free, ZFS v28, the two I will be choosing between for ease of use) uses the cores of the CPU.

For ZFS and Samba, since it will be serving an all-Windows environment, how will the following matter?

1. Clock Speed? The 1155 is higher than the 2011. How does this translate?
2. Cache? The 2011 has a lot more and how would this help?
3. Cores, obviously the 2011 posted above has more cores, how does this help?

Socket 2011 appears to have more SATA ports than 1155 (I could be wrong), but looking at the differences between, for example, the C206 and C606 chipsets, it would appear that the C606 has more features, i.e. more ports supported, etc...

I am not concerned with power usage, as a NAS typically sits at idle, and the 2011 chips (I own a 3930K) really don't idle that high in wattage, though they can roast a chicken under heavy use.

I guess I am confused about what I will benefit the most from. Should I go the socket 2011 route with a 6-core or even a lesser 4-core model, or should I go 1155 and just be happy with that?

There is also a 1.8GHz socket 2011 Xeon, the 4-core/4-thread E5-2603. Is this going to be able to really move data along, or am I going to be dragging an anchor behind me with the lower clock speed?

I will not be using an HBA, as ZFS doesn't seem to play nicely with these.

I do plan to use RAID-Z1 and Z2 eventually, and I will be using a 10Gb Intel Ethernet card as well. I intend to load it up with 32GB of ECC if 1155, and definitely 32GB on 2011, maybe more if ZFS will make use of it.

I do plan to use disk compression on compressible data (documents, spreadsheets, etc.) and leave the datastores for videos, etc. uncompressed, as they are already compressed.
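As a sketch, that per-dataset split is exactly how ZFS compression is normally set up. The pool and dataset names below are made up, and which algorithms are available depends on the ZFS version your appliance ships (v28-era builds may only offer lzjb and gzip):

```shell
# Hypothetical pool "tank"; compression is a per-dataset property in ZFS.
zfs create tank/documents
zfs set compression=on tank/documents    # office files, text: compress

zfs create tank/videos
zfs set compression=off tank/videos      # already-compressed media: skip it

zfs get compression tank/documents tank/videos   # verify the settings
```

Compression only applies to blocks written after the property is set, so it costs nothing to flip it on early.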


******Bonus question.... I have 3 WD 2TB Reds I can allocate for this NAS. My goal is to be able to fill as much of the 10Gb Ethernet as possible, in bursts of course. I do not feel these Red drives can move data that fast. Should I consider switching to 7200RPM drives, or should I just throw, say, 3 more Reds at it for a total of 6 drives?
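For scale, a rough back-of-the-envelope comparing six Reds to the 10Gb link. The per-drive figure is an assumption (an optimistic sequential number for a 2TB Red), not a spec:

```shell
# A 6-drive raidz2 leaves 4 data drives' worth of sequential bandwidth.
data_drives=4
per_drive_mb_s=150                              # assumed outer-track figure
array_mb_s=$(( data_drives * per_drive_mb_s ))  # ~600 MB/s
tengige_mb_s=1250                               # 10 Gbit/s / 8 bits = 1250 MB/s
echo "array ~${array_mb_s} MB/s vs link ~${tengige_mb_s} MB/s"
```

By this estimate even six Reds top out around half the link; only bursts served from ARC (RAM) would fill 10GbE.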

Thanks if you can help.
 
If you are using traditional hard drives, then you will most likely never fully utilize your CPU. However, with SSDs and compression, your CPU may be a limiting factor. I would go for the 1155.
 
What does the softraid controller have to do with a ZFS implementation?

We need more info: how much storage will you have, and will you use compression, deduplication, or encryption? What transfer speeds do you need: gigabit, 10 gigabit? How many concurrent users? Are you storing large media files or running multiple virtual machines? Do you need raw transfer speed or more IOPS? Are you running raid-z or raid 10?

Also, many, many people (including myself) use HBAs with ZFS; the IBM M1015 is inexpensive and popular.

With just a few WD Red drives you do not need anything near the CPU you have listed unless you are planning an all-in-one where you will run multiple virtual machines on the same CPU.
 
Home use... I have 10Gb cards, so why not.

That's about it. I want raw overkill speed, I guess. I plan to use 3-10 2TB Reds or whatever. I don't really care, just as long as it rips along.
 
Then get the E5-2687w

No need for a $2000 processor.

I want it to rip along... not waste 99% of the processor's power...

NO ONE has answered my questions..... they have simply recommended stuff.

I am asking how ZFS uses the processor cores...

Are BSD and ZFS multithreaded or single-threaded? Etc...

What is the point in buying a quad-core, much less an 8-core chip, if all BSD and ZFS use is one core? That is my point, and I do not know. I don't care for overkill chips in that price range. Down-to-earth is what I am looking for. I just don't want to get something that will hamper my performance, is what I am saying.
 
CPU cache should be a complete non-factor for I/O-bound workloads. The CPU's core count and clock speed are also mostly non-factors for I/O, since I/O occurs mostly in the background without the CPU cores being directly involved.

ZFS itself uses very little CPU. It should remain I/O bound most of the time, even with compression and/or encryption. Samba can burn some CPU, but I think it's pretty much constrained to only using 1 core per file transfer.

More than 4 cores is only going to have an appreciable effect if you regularly have multiple workstations performing high throughput I/O to multiple disk arrays. If they're accessing the same array, then it won't matter, since it will be I/O bound, by that single array.

Even a 2-core socket 1155/1156 CPU would probably not be noticeably slower than a 4-core machine the vast majority of the time, given your described workload. Many/most of the 2-core Ivy Bridge CPUs support ECC when installed with a C2xx chipset, see:
http://forums.servethehome.com/showthread.php?825-Ivy-Bridge-Core-i3-and-Pentium-CPUs-out!
 
I have a feeling you are designing a lopsided box. Could you give us more of your proposed specs?
You've stated your processors and that you intend to use 32GB of RAM and 3x 2TB Red drives. How about controllers, motherboard, other storage?

You mentioned a C206 chipset. One of the reasons to go with that chipset vs. the C202 or C204 is to allow a CPU with an integrated GPU to pass it through (e.g. an E3-1235), saving power and build complexity. Is this something you would consider?

You mentioned FreeNAS as a possible OS, and compression. Is this the only use for this box?

Have you considered L2ARC or ZIL devices?
 
Currently FreeBSD uses only 1 core for Samba (but it may become multi-threaded in the future). On the Solaris solutions, CIFS is multi-threaded. You mentioned raw speed, yet you were talking about using RAID-Z or RAID-Z2. If you want raw speed, use RAID-10.

Also, if you have 10Gb links, then Samba is not the protocol you're going to want to use; it starts performing badly above 150-200MB/s. You would want to use NFS or iSCSI, and most likely disable write syncs for performance, depending on how exactly you use your machine. I would go with the 1155 chipset with an E3-1230, because it gives you a high clock speed, and it is a quad-core for expandability. I highly doubt you will ever be CPU-limited.
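Disabling write syncs as described is a per-dataset property, so it can be limited to the shares that need the speed. A sketch (dataset name is hypothetical, and note this trades crash safety for throughput):

```shell
# sync=disabled acknowledges writes before they hit stable storage:
# fast, but a power loss can drop the last few seconds of writes.
zfs set sync=disabled tank/vmstore
zfs get sync tank/vmstore        # confirm it took effect

# Leave the default on datasets holding irreplaceable data:
zfs set sync=standard tank/photos
```

Scoping it this way keeps the risk confined to data you can afford to re-copy.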
 
No need for a $2000 processor.

I want it to rip along... not waste 99% of the processor's power...

NO ONE has answered my questions..... they have simply recommended stuff.

I am asking how ZFS uses the processor cores...

Are BSD and ZFS multithreaded or single-threaded? Etc...

What is the point in buying a quad-core, much less an 8-core chip, if all BSD and ZFS use is one core? That is my point, and I do not know. I don't care for overkill chips in that price range. Down-to-earth is what I am looking for. I just don't want to get something that will hamper my performance, is what I am saying.

For your workload, buy the lowest-wattage CPU you can afford.

As an example, this is what I'm buying for my home NAS:

SUPERMICRO MBD-X9SCL-O LGA 1155 Intel C202 Micro ATX Intel Xeon E3 Server Motherboard

Intel Core i3-2120T Sandy Bridge 2.6GHz LGA 1155 35W Dual-Core Desktop Processor Intel HD Graphics 2000 BX80623I32120T

It will run fine. It 'might' not fill a 10Gb pipe, but for home use, does that really matter?
 
Currently FreeBSD uses only 1 core for Samba (but it may become multi-threaded in the future).
Samba.org said:
Multithreading and Samba

People sometimes tout threads as a uniformly good thing. They are very nice in their place but are quite inappropriate for smbd. nmbd is another matter, and multi-threading it would be very nice.

The short version is that smbd is not multithreaded, and alternative servers that take this approach under Unix (such as Syntax, at the time of writing) suffer tremendous performance penalties and are less robust. nmbd is not threaded either, but this is because it is not possible to do it while keeping code consistent and portable across 35 or more platforms. (This drawback also applies to threading smbd.)

The longer version is that there are very good reasons for not making smbd multi-threaded. Multi-threading would actually make Samba much slower, less scalable, less portable and much less robust. The fact that we use a separate process for each connection is one of Samba's biggest advantages.
Samba is not, and will not in the future be, multi-threaded. On my Solaris 11 box, my CPU idles 99.99% of the time. For a quick ZFS server, focus on RAM and drives. For drive recommendations, we need to know your usage patterns and what's stored. Also, something else I haven't seen asked: Are your 10GigE cards supported under your OS of choice?
 
Here we go... I typed this in WordPad before pasting here. I hope this helps a lot more than my original post. Please provide feedback, as any and all is very valuable to me indeed!

Thanks all so far for the wonderful feedback. I sincerely appreciate it. I will simplify my expected use and hardware thus far and you can help me fill in the gaps where needed, some of which you already have done.
Let's begin...
Use: Home, but I am an enterprise performance junkie. It isn't so much that I want super overkill; rather, I just don't want to settle for exactly enough. I like room to grow and room to be able to beat the snot out of my hardware.
Items stored: A LARGE amount of movies. I am an avid Blu-ray collector, and I use software to rip digital copies of my discs and then place my expensive movie discs in a Tupperware box where no one will scratch them. I then convert those movies in order to store and stream them digitally throughout my home at the highest 1080p quality possible.
I also have an extensive collection of digital photography, about 1TB of RAW and JPEG files that my father stores on my NAS. He is a semi-pro photographer.
Business usage: I also own and run a small business from my home, so I will be using a small portion, about 500-750GB, as a datastore. I would like to encrypt and compress these files if possible using built-in ZFS capabilities, not some out-of-box software.
Typical other files include occasional VM usage, but nothing that requires SAN-like performance. I just fire them up to test certain environments, for training, studying, etc.
_________________________________________________________________________
CPU usage scenarios:
Main expectations....
-Fast enough to easily handle high disk I/O operations, iSCSI, Samba, etc.
-Transcoding using installed or built-in uPnP/DLNA server plugins, e.g. Fuppes on NAS4Free
-Able to compress data quickly and commit it to the disk array
-Able to utilize disk encryption quickly
-I am sure there are more...
____________________________________________________________________________
Disk I/O expectations.....
Plan to use the onboard ICH10 controller and onboard SATA ports. I could eventually use an Adaptec 6805 w/BBU (I have one for business use but could repurpose it later if justifiable) or an Adaptec 5405, or purchase a used IBM M1015 as recommended earlier in this thread. Not sure what HBAs work with ZFS; I am a huge fan of the LSI MegaRAID 9200 series. Recommendations, please?
I think that using the ICH10 I will need a decent CPU, as this is all software-controlled, if I am correct?
Additionally, as far as an L2ARC cache drive is concerned, 64GB SSDs are getting dirt cheap. I would completely entertain the idea of using a 64GB SSD as a cache drive for ZFS to store whatever it needs, i.e. dedup tables (not that I will really be using dedup), block information, etc.
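Attaching an SSD as L2ARC later is a one-liner, so it doesn't have to be decided up front. A sketch, with pool and device names as placeholders (FreeBSD-style naming):

```shell
# Add a 64GB SSD as a cache (L2ARC) device to an existing pool.
zpool add tank cache da6
zpool status tank            # the SSD appears under a "cache" section

# Cache devices can also be removed again at any time:
zpool remove tank da6
```

Losing a cache device never endangers the pool; L2ARC only holds copies of data that also lives on the main vdevs.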
__________________________________________________________________________
Hardware build.......
Socket 1155 CPU or 2011 CPU (sounds like 1155 is the best way to go); however, I will go 2011 if that is what I need.
Motherboard - I am open to either kind 1155 or 2011. I was looking at the following:
1155 = http://www.newegg.com/Product/Product.aspx?Item=N82E16813151246
2011 = http://www.newegg.com/Product/Product.aspx?Item=N82E16813128563
Cpu 1155 (1240 V2 Ivy) = http://www.newegg.com/Product/Product.aspx?Item=N82E16819117285
Cpu 2011 (E5-2609) = http://www.newegg.com/Product/Product.aspx?Item=N82E16819117270
RAM (to start with 16GB) = http://www.newegg.com/Product/Product.aspx?Item=N82E16820139931
Chassis (SuperMicro) = http://www.newegg.com/Product/Product.aspx?Item=N82E16811152116
I really can't think of anything else that is important right now. It would appear that with the 2011 build I can use a full 8 drives, while with the 1155 I can use up to six drives.
I have 3x 2TB Red drives and would like to add 3 more for a total of six, and I am thinking of using them in RAID-Z2, hoping for RAID-10-like I/O performance.
I can sell the Reds, as they are brand new, and get something else if needed. I do want to be able to use 8 drives eventually, so if I have to add a card to gain more ports, I will. But for now six is going to be enough for this build.
I like the idea of socket 2011 because it is future-proof, and I could always convert it to a gaming rig or some other server later... but I am sure I can save by going 1155 and get more drives/RAM, which might be more beneficial. Hell, I already have a 3930K and a Rampage IV. I will not share my budget because it is zero-based: I want to save as much as possible, but on the other hand not spend so little as to gimp my rig, so I have chosen some relatively nice parts for this build thus far.
_____________________________________________________________________________
Besides some of the questions that have been answered here already please feel free to beat me up all you want. I do not want a lopsided build but I do not want to go overkill. I want it to rip along as fast as anyone would want but at the same time not be completely overkill.

EDIT**** Forgot to add network interfaces ----

The NAS will have an Intel XF SR server adapter (10Gbps short-reach fiber). I have tested it in both FreeNAS and NAS4Free, and both load drivers just fine. iperf between my test NAS (cheaper hardware) and my sig rig, which has the same card, through my Cisco 3750E switch showed just under 9.8Gbps of throughput. They work fine, so no worries on NIC compatibility as long as I stick with FreeNAS or NAS4Free.
 
1) I'd also recommend the 1155 setup.
2) The RAM you selected is registered. Make sure you get "unbuffered" with the C20X chipset.
3) 2X8GB is a good starting point.
4) You can start with the onboard controllers, but likely you will want an add-in controller.
5) A MegaRAID controller is NOT useful for ZFS, and likewise the battery backup. Go with a straight HBA (or an M1015). ZFS does all the RAID work, and giving ZFS low-level access to the drives is beneficial and more cost-effective.
6) L2ARC is easy to add at anytime.
7) A raidz2 of 6x 2TB Reds sounds like a good start to me given your intended use.
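A 6-drive raidz2 like the one suggested in (7) would be created along these lines. Pool and device names are placeholders:

```shell
# Six whole disks in a single raidz2 vdev:
# usable capacity of 4 drives, and any 2 drives can fail.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

zpool status tank    # show vdev layout and health
zfs list tank        # show usable space
```

Giving ZFS the whole disks (no hardware RAID underneath) is what makes the straight-HBA advice above pay off.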

That's enough comments for now to get the discussion going.

Edit: Also wanted to add: when I built my system, I chose the Asus P8B WS with a Xeon E3-1235 CPU. This allowed a bit more GPU power than the integrated VGA on the server boards without needing to add a dedicated video card. I think you were talking about maybe reusing it later. It might not be "gaming rig" powerful, but it's much closer than a Matrox. If I were doing the same now, I would use the Asus P8C and a Xeon E3-1225 V2, E3-1245 V2, or E3-1275 V2 CPU.
 
Just my quick suggestion:
If you need a low budget, 1155 and an E3 would be good.
If you need a mid-to-high budget, 2011 and an E5 would be feasible.

2011 with an E5 would be good when running many virtual machines, assuming your ZFS server is one of the VMs.

1155 and an E3 would be more than enough when running only ZFS.

I would prefer to use a PCI Express card to handle all HDDs or SSDs, since it is more robust than onboard SATA. The M1015 is a good choice.

This was the reason, when I replied to your other thread, that I said AMD AM3+ is cheaper than Intel 1155, assuming you're not using onboard SATA.

If you go with an Intel E3 on 1155, pick a motherboard that has IPMI. My assumption is that 2011 motherboards always have IPMI.

Once again, just my suggestion.
 
Thank you to both of you above. Please others send your suggestions...

So basically the onboard Intel SATA controller on socket 2011 is just as fast as on socket 1155? I think they are almost the same, to be honest. And an M1015 would be faster? Is the M1015 a RAID HBA, or just a straightforward place to plug drives into that presents them to the system as standalone drives?

Would there be any special mode I'd have to reflash the M1015 card into so it would work flawlessly with ZFS?
 
Just my quick suggestion:
If you need a low budget, 1155 and an E3 would be good.
If you need a mid-to-high budget, 2011 and an E5 would be feasible.

2011 with an E5 would be good when running many virtual machines, assuming your ZFS server is one of the VMs.

1155 and an E3 would be more than enough when running only ZFS.

I would prefer to use a PCI Express card to handle all HDDs or SSDs, since it is more robust than onboard SATA. The M1015 is a good choice.

This was the reason, when I replied to your other thread, that I said AMD AM3+ is cheaper than Intel 1155, assuming you're not using onboard SATA.

If you go with an Intel E3 on 1155, pick a motherboard that has IPMI. My assumption is that 2011 motherboards always have IPMI.

Once again, just my suggestion.
Exactly how would you set up a ZFS NAS using a VM?

Would you use ESXi or some other product?

I might entertain the idea of an ESX-based NAS if I can get the performance of a pure hardware-level NAS; with VT-d it is basically hardware anyway. I'm just not sure how I would implement this exactly.
 
Exactly how would you set up a ZFS NAS using a VM?

Would you use ESXi or some other product?

I might entertain the idea of an ESX-based NAS if I can get the performance of a pure hardware-level NAS; with VT-d it is basically hardware anyway. I'm just not sure how I would implement this exactly.
My NAS was an ESXi box running Solaris, using ZFS. I wanted to see what the performance penalty was from virtualizing the NAS, so I re-installed using just Solaris. There was no difference, performance-wise. Not just a small difference... NO difference. I was very impressed.

For what it's worth, my setup:
E3-1230, SuperMicro X9SCM-F-O, 16GB (4x4GB ECC UDIMM), 6x3TB connected to the onboard.
 
My NAS was an ESXi box running Solaris, using ZFS. I wanted to see what the performance penalty was from virtualizing the NAS, so I re-installed using just Solaris. There was no difference, performance-wise. Not just a small difference... NO difference. I was very impressed.

For what it's worth, my setup:
E3-1230, SuperMicro X9SCF-F-O, 16GB (4x4GB ECC UDIMM), 6x3TB connected to the onboard.

Very nice...
 
Very nice...
It was. I should mention, however, that SATA drives are not officially supported as passthrough devices. It works just fine (Google "SATA RDM" for a how-to), but if something breaks... you're on your own. That's why I stuck with Solaris on bare metal instead of virtualizing it and making use of those spare cycles. My posts on the [H] are actually on the first page of search results. Passing through a controller is supported, but individual disks are not.

Hrm. I need to go back and edit that post. I have an X9SCM, not an X9SCF.
 