Yet another ZFS Build

axan

So I've decided to jump on the ZFS bandwagon. Still undecided on the OS, but it will be something that supports ZFS. Here's what I've got for hardware so far.

case: norco rpc-4224
mobo: Supermicro X8DTH-6F
cpu: Intel Xeon E5620
memory: 16GB DDR3 I have lying around
SAS Controller: Supermicro AOC-USAS2-L8e x2
Drives: Not sure yet, prob WD20EADS x24
SSD for cache: Not sure yet, prob OCZ Vertex2 120GB x3-4

The plan is for 24x 2TB drives in 3 raidz2 vdevs for a total of 36TB of usable space. Future expansion will consist of an external SAS controller plus extra Norco cases with drives.
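Just to illustrate the layout, the pool creation should look roughly like this once all the disks are in (device names below are placeholders, not the real controller targets):
Code:
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
  raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0
Three 8-disk raidz2 vdevs striped together, so each vdev gives 6x 2TB usable, 36TB total.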

I started ordering the hardware and need to decide on an OS soon. I have no experience with Solaris or FreeBSD, so the plan is to set up some VMs and play with OpenSolaris, FreeBSD + sub.mesa's project, Nexenta, OpenIndiana and napp-it.
 
That SM Motherboard is pretty picky about RAM from what I have heard. You may want to double check the memory compatibility if you want to go that route...

Also, you should understand that the WD20EADS is being manufactured in much smaller numbers than the EARS version, which is a 4K drive. If you RMA an EADS you will probably get an EARS in exchange.

You may want to think about the Hitachi 7200 RPM drives which are usually found around the same price. Older technology, but faster and bulletproof.
 
I just put together a rig with that case. I chose FreeBSD because the ports system makes it more versatile; it depends on what all you want to do with your rig.
Everyone says the Hitachi is the best commodity drive for ZFS. I snagged some Samsung F4s, and sub.mesa has been doing some benchmarks in another thread if you care to check it out. I would probably stay away from the EADS due to what mikesm said. You probably want to go all the way with the 4K emulation drives or not at ALL. Mixing will be futile.
 
That is a 5520 chipset board. They are not as restrictive as the 3420 chipset boards, which may be what you are thinking of. But it still would not be a bad idea to check whether the RAM is compatible.
 
The RAM is a pull from a brand new Dell R510 server, so I don't think it will be a problem, but if it doesn't work out I'll replace it.

As for the hdds, I want some type of "green" drive, but I want to avoid 4K drives since they're still largely untested. I know sub.mesa is running some tests on a few configs, but I need this system in a few weeks and can't wait for the results.

The Hitachis are good tried and tested drives; I have 10 of them running on an Areca in my current file server, but like I said, I'm trying to get something low power now.
 
Why do you want these "green" drives? You save maybe 2W per drive, less than a 50W light bulb's worth across your whole config. I don't understand wanting to blow power on a dual CPU configuration, but worrying about a 50W power difference between green and non-green drives...

I guess I am not understanding what you are trying to optimize for.
 
It's not just about the power saving aspect; the green drives run cooler too. Skipping the whole "cooler hdds last longer" argument, the less heat that gets dumped into my "server room" the better for all the other servers and network gear in there.
 
I have pretty much the same build. Mine is currently up and running, with a few quirks, but I'm pretty happy with it so far. http://hardforum.com/showthread.php?t=1548892
Here are some things I recommend:
1. Put as much RAM in this thing as possible.
2. I tried OpenSolaris and had a lot of issues. Probably something I was missing, but just my two cents.
3. NexentaCore and napp-it were by far the easiest to install and configure. Just install NexentaCore, then run the automated napp-it install (rough commands at the bottom of this post), and use the web GUI for stats and pool/folder setup. And they're free!
4. If you decide on the newer 4K drives you may have to configure them in a certain number combination. Sub.mesa has a few posts on this I think. I did a lot of reading on this and a lot of people are happy with the Hitachis. I'm running pretty good so far.

I'm also filling my Norco up in stages. I already know my first 12TB will be filled in a week or so. :( But adding the second 11-12TB pool will be as easy as popping in 8 more drives. Keep us updated, as I would like to see the end results.
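By the way, if I remember right the napp-it install really is just a one-liner on top of a fresh NexentaCore (double check the current command on napp-it.org before running it):
Code:
wget -O - www.napp-it.org/nappit | perl
After that the web GUI should be reachable on port 81 of the server.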
 
Installed Nexenta Core in a VM and I'm playing around with it. So far it's not going too well; I can't get it to join my domain.
 
Yeah, I had the same problem with OpenSolaris and Nexenta. Install the napp-it web GUI; after I did that and enabled SMB I was able to join my domain from napp-it.
 
I tried it both ways, manually and through napp-it.

I get an smbd[1033] "failed joining domain" error,

then a bunch of GSSAPI errors: "Unspecified GSS failure".


I'm going to try it on my domain at home, where I've joined a bunch of Linux computers successfully.
 
BTW, if you are near a Fry's, you can get the Hitachi 7200 RPM 2TB drives for $87. NO rebate. Very nice price...
 
OK, a little progress: I installed Nexenta in a VM at home and it joined the domain no problem. Weird, since both at home and at work the domain functional level is 2008. The only difference is at home I run Server 2008 R2 and at work it's Server 2008 SP2.
I also figured out how to do link aggregation for multiple NICs. Now what's left is to figure out ACLs and iSCSI.

I also ordered all the hardware; the hdds and ssds I'm still undecided on.
What do you guys recommend as far as SSDs for L2ARC and ZIL? Do I need an SLC-based SSD for the ZIL?
 
You need an SSD which can write safely, i.e. one with a supercapacitor. The new Intel X25-M G3 can do that, making it a suitable SLOG (or "ZIL") device.
 
OK, so an SSD with a supercapacitor for the SLOG; something like the OCZ Vertex 2 Pro would work.

Also, correct me if I'm wrong, but since I'm using a zfs pool version above 19, even if my SLOG device fails the data is safe and the pool just reverts to the internal log? I would just lose performance? If that's the case it seems it would be OK to use some cheap sub-$100 30-40GB SSDs, maybe 2 in raid1, and if they fail just replace them. Seems like a good alternative to expensive SSDs with supercapacitors.
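If I go the cheap mirrored route, adding the log and cache devices to the pool would be something like this (device names are just placeholders for whatever the SSDs show up as):
Code:
zpool add tank log mirror c4t0d0 c4t1d0
zpool add tank cache c4t2d0 c4t3d0
The cache (L2ARC) devices don't need to be mirrored, since losing one only costs you cached reads.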
 
Can I enquire as to what PSU both Axan and Jaw are considering?

Also, I'm assuming you'll be running 16x HDDs off the SAS controllers and the remaining 8 from the on-board LSI SAS controller?
 
I'll be using a Corsair 1000HX for the PSU, and yes, you're correct: 16x hdds will run off the 2x SAS HBA cards, 8x hdds off the internal LSI SAS controller, and the boot drives + SSDs for cache will use the onboard (Intel) SATA ports.
 
Can you explain how you set up link aggregation?

It's actually pretty simple.

First, take all the NICs you want to aggregate offline.
I'm using Intel e1000g interfaces in this example:

Code:
ifconfig e1000g0 unplumb
ifconfig e1000g1 unplumb
ifconfig e1000g2 unplumb
ifconfig e1000g3 unplumb
Then create the new aggregate device:
Code:
dladm create-aggr -d e1000g0 -d e1000g1 -d e1000g2 -d e1000g3 1
This will create the aggr1 device.

Next you need to plumb it and give it an IP address:

Code:
ifconfig aggr1 plumb 192.168.1.1 up
To see the status:

Code:
dladm show-aggr
You can also use
Code:
dladm modify-aggr

to change the load balancing policy or LACP mode.
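For example, to hash on IP and layer 4 port instead of just MAC address, and to check the LACP state of each port (the last argument is the key "1" or the link name aggr1 depending on the build, so check the man page):
Code:
dladm modify-aggr -P L3,L4 1
dladm show-aggr -L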

To make the changes persistent you need to create an /etc/hostname.aggr1 file which should contain the IP address.

Obviously you also need to configure the switch for the aggregation to work. Each switch is different, but if you need help with Cisco or HP let me know.
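For reference, on a Cisco the port side is basically just the following (ports are only an example; use "mode on" for a static channel, which matches the dladm default, or "mode active" if you turn LACP on at both ends):
Code:
interface range GigabitEthernet0/1 - 4
 channel-group 1 mode active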

 
OK, Nexenta is going pretty well. I got NFS, CIFS and iSCSI working great (btw, napp-it makes it a snap to set up COMSTAR iSCSI, thanks gea).
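For anyone doing it by hand instead of through the GUI, the manual COMSTAR equivalent is roughly along these lines (pool/volume names are just examples, so double check against the docs):
Code:
svcadm enable -r svc:/system/stmf:default
svcadm enable -r svc:/network/iscsi/target:default
zfs create -V 500G tank/esxlun
stmfadm create-lu /dev/zvol/rdsk/tank/esxlun
stmfadm add-view <GUID printed by create-lu>
itadm create-target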
The Solaris ACLs are a bit confusing since I'm used to XFS POSIX ACLs, but I got them doing what I need.
So it looks like I'm going with Nexenta + napp-it for the OS.
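For anyone else coming from POSIX ACLs, the NFSv4-style syntax looks roughly like this (user and path are just examples):
Code:
/usr/bin/ls -V /tank/share/somefile
chmod A+user:john:read_data/write_data/execute:allow /tank/share/somefile
The first command shows the full ACL on a file, the second adds an allow entry for a user.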
 
Thanks for the info. Have you done any tests with jumbo frames? I plan on implementing link aggregation today. Did you notice any difference with link aggregation (less CPU load, more stable speeds, etc.)?

I am also using windows 2008 R2. I wonder what the difference is.
 
I got the domain join working at work now. The problem was caused by the fix I had applied earlier that adds the "AllowLegacySrvCall" registry key. I removed that, ran "sharectl set -p lmauth_level=2 smb" on the Nexenta side, and it joined the domain without a problem.

Unfortunately all my testing was done in a virtual environment, so I can't tell you what effect link aggregation or jumbo frames might have. My hardware should be here within a week and then I'll be able to do some real testing.
I run link aggregation and jumbo frames on my current Linux file server and it's working awesome. I didn't notice any impact on CPU etc., but having a 4Gb/s network link gives me some nice transfer speeds, plus I'm able to use iSCSI for an ESX datastore without an issue.
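As far as I can tell, jumbo frames on the Nexenta side should just be a matter of bumping the MTU on the aggr before plumbing it (and matching it on the switch); something like the below, though I haven't verified this on Nexenta yet (IP is just the earlier example):
Code:
ifconfig aggr1 unplumb
dladm set-linkprop -p mtu=9000 aggr1
ifconfig aggr1 plumb 192.168.1.1 netmask 255.255.255.0 up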
 
I got my 2 NICs linked and don't really notice a huge difference. I guess it's still nice to have redundant NICs though.

Having a problem with the hostname.aggr1 file and restarts. When I have this file in the /etc directory and reboot, I get a network error before the system comes up and can't connect to the NAS. I can ping the aggr1 IP, but can't connect by CIFS, SSH, etc. Have you been able to set the hostname.aggr1 file and reboot? If I don't have this file I have to re-enter "ifconfig aggr1 plumb 192.168.0.47 up" every time I reboot.
 
Works fine for me; I just moved my /etc/hostname.e1000g0 to /etc/hostname.aggr1.

Here's what's in the file:
Code:
10.10.10.108 netmask 255.255.255.0 broadcast + up
Check if you still have the hostname file for your main network card; maybe that's causing the problem.
 
Thanks... weird. I had backed up the original hostname.e1000g0 file and created a new, identical file called hostname.aggr1, which didn't work. I renamed the old file and uploaded it again, and now it all works fine. Go figure.

Pretty happy so far with everything. Getting 80-100MB/s from my Windows 7 PC to the NAS and 40-60MB/s from the NAS to my Windows 7 PC, and about 105-120MB/s to/from another NAS chassis with an Areca ARC-1280ML-2G raid controller.
 
Probably access permissions on the file; glad you got it resolved.
 
Little update.
The norco case came in yesterday and the motherboard is due today.
Still waiting on the Supermicro controllers; they are shipping straight from SM.
Also bought the drives for the system today. Decided to go with the WD20EADS; still not enough data on the 4K drives to risk it at this scale.
 
Axan, I can understand you wanting to stay with proven stuff for such a serious build.

Would you be interested in running the ZFSguru benchmark script on your system when you get it running? It would serve as a great comparison, even though your system is a lot better than the average ZFS box. Testing is quite easy; it just might take longer than 24 hours. To test every combination, perhaps 48 hours is required. If you can miss your NAS for that time, you get very nice graphs in return, specific to your system, which I think is very valuable. :)
 
Yeah, sure, I'll run the benchmark. The system has to go live on the 22nd, so depending on when all the components show up I should have enough time to do it.
 
Alright; I'm still working on the benchmark. I'm adding random I/O IOps as well, which could be interesting too! Even though most home NAS ZFS boxes mostly see sequential access to mass storage data, i.e. large files.

Once it is integrated into the web GUI and working, I'll make a preview .iso you can use for testing. This should be very easy, and you may not need my help to get the same kind of graphs I posted in the 4K testing thread, but now specific to your system configuration and disks.

If you could signal me in the 4K testing thread when you have set up the system and are ready for testing, I have some ideas for additional tests on your 512-byte sector disks as well, which shouldn't take long. I hope to have the benchmark script ready in a few days.
 
So I got most of the hardware and put things together last night. The mobo had no issues with my Dell RAM.
Currently running tests on the hdds to make sure they are all working properly before I start using them.
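(For anyone curious: a simple full sequential read pass per disk with dd is one easy way to weed out DOA drives; the device name below is just an example.)
Code:
dd if=/dev/rdsk/c2t0d0s2 of=/dev/null bs=1M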

I like the new Norco 4224 a lot better than my old 4020; the trays are much nicer.

Only hiccup so far is Supermicro taking their sweet time sending my HBA cards, so right now I'm limited to 8 drives using the onboard SAS.
 