Moving from WHSv1 to ZFS - New Build 40TB

Zak_
I currently have a WHSv1 box built around a Cooler Master Stacker chassis and a bunch of 4-in-3 modules. It runs on an old AMD Opteron 190, with an AOC-SASLP-MV8 plus motherboard ports for storage. I'm running out of room in the case and out of ports, and while this solution was very easy to use, it never really gave me the performance or data protection I was looking for. I have nine 2TB drives, mostly Samsung F3s and F4s with a couple of Seagates thrown in, that I'll be adding to the new build once the data is migrated.

I've been doing a lot of reading on this forum and elsewhere about ZFS. I'm planning to move to a ZFS based storage system, most likely Nexenta. sub.mesa's ZFS GURU project looks really interesting, so I'll keep an eye on that for the future, but Nexenta seems to be one of the most up to date and mature ZFS implementations currently out there.

New Hardware:

Shipping: $65.57

Total: $2825.40

For the case I considered the Supermicro SC846E1 series, since it seemed like a great solution with the expander built in, but at around $1200 it didn't seem worth the added expense, and the redundant power supply adds noise. I'm sure the case is top quality, and the built-in expander makes for significantly cleaner cabling, but not enough to justify the price premium.

I wanted to go with the Xeon for the ECC memory and the relatively low power consumption of the 1156 socket. The Supermicro board was a no-brainer with its onboard LSI SAS controller. I considered the Supermicro X8SIL-F and recycling my AOC-SASLP-MV8, but I've heard that card has compatibility issues with FreeBSD and OpenSolaris.

For memory I'll start out with 8GB for now, with the option to expand to 16GB later. It didn't seem worth purchasing 8GB registered sticks just to keep the option of 32GB open for the future.


Zpool - Total Capacity 40TB
I plan on building out two 12-drive RAID-Z2 vdevs. I'll bring the server up with one vdev in the pool and copy all the data over from my WHSv1 box; once the data is verified on the ZFS box, I'll move the 9 x 2TB HDDs out of the WHS box, add the 3 leftover new drives, and create a second 12-drive RAID-Z2 vdev to add to the pool, maxing out the Norco 4224 and hopefully giving me a lot of room to grow.
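If I'm understanding the zpool commands correctly, the plan would look roughly like this (just a sketch - the pool name and the Solaris-style device names are placeholders, and whatever OS I end up on may label the disks differently):

    # vdev #1: 12 new drives in RAID-Z2
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

    # ...copy everything over from the WHS box and verify it...

    # vdev #2: the 9 old 2TB drives + 3 leftover new ones, added to the same pool
    zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0

    # check layout and health
    zpool status tank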

System + ZIL + L2ARC
I had heard of someone splitting one SSD into three partitions: one for the system, one for ZIL and one for L2ARC. I was thinking of taking the two Intel X25-Vs and mirroring them to get mirrored partitions for the system, ZIL, and L2ARC. Will this work well? I'm looking to get a good boost to performance while maintaining redundancy, all at a budget price.
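From what I've read so far, the commands would be something like this once the SSDs are partitioned (slice names made up; and it sounds like cache devices can't actually be mirrored - only log devices can - so the L2ARC slices would just be striped, which may partly answer my own question):

    # mirrored SLOG (ZIL) from one slice on each X25-V
    zpool add tank log mirror c3t0d0s1 c3t1d0s1

    # L2ARC slices - cache devices are added individually and striped, no mirroring
    zpool add tank cache c3t0d0s2 c3t1d0s2

The mirrored system/rpool part would presumably be handled by the installer on the remaining slices, if that even works the way I'm imagining.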


I know there are a lot of ZFS diehards in this forum, so being new to ZFS I'd appreciate any feedback or advice. Thanks.
 
I've been doing a lot of reading on this forum

Hallelujah. A request-for-comment thread by someone who's actually done some reading and lurking first is pretty rare in here these days. :) It makes people a lot more interested in helping, since otherwise it's just the same basics being asked over and over = back button.

You're right on target with pretty much your whole build list. Skipping the SM in favor of the RPC-4224 is good; I've owned both and ended up getting rid of the SMs due to noise. If you ever need to expand to another chassis, Norco will have already released their 12-bay and 24-bay expander cases. You're right on with getting the 4GB memsticks. Personally I'd go with Norco's SFF-8087 .5m cables - I happen to like them better than 3ware's (I think because of how they look, lol), but it's trivial.

Some thoughts about operating system and filesystem choices:

- Rather than installing the O/S directly to a boot drive, install ESXi 4.1 and virtualize. Lots more flexibility and power doing it that way, plus you can hardware pass-through the SAS2008 card to the VM. That way you can do other things with all those free cycles since the system will otherwise sit idle essentially 99.9% of the time.

- I think SSDs for ZIL and L2ARC add very little when you're storing large, infrequently accessed media files. Where ZIL and L2ARC shine is environments with lots of random IOPS on smaller files. With that many 2TB drives you're going to have plenty of throughput, and you'll be bottlenecked by GigE anyway.

- I'm not keen on NexentaCore anymore since things *seem* to have fizzled in that camp as far as development goes. Since you mentioned wanting the most up-to-date and feature-complete ZFS, consider a test install of Solaris Express 11. Download the text-installer version for free, then grab the excellent Napp-It GUI to manage it graphically until you learn the command line (rough example below). Literally in 5 minutes you're done, complete with SMB shares and copying files from Windows to them. Alternatively there's sub.mesa's ZFSguru, as you're already aware, or the latest OpenIndiana + Napp-It GUI.

- Order all your parts but wait until the last minute to finalize your O/S choice and begin migrating your data. I say that because we may be on the verge of a paradigm shift in storage filesystems for media, if Brahim pulls off what he's set out to do with FlexRAID's upcoming realtime feature (3 weeks until beta release). Think a Drobo-esque BeyondRAID filesystem ported to Windows and Linux, but better: expanding and shrinking a storage pool one drive at a time, even with different-sized drives, non-striping with single or multiple parity drives so that only the disk holding the file you're reading needs to be accessed, and only one drive + the parity drive(s) need to be accessed when writing. His preview of the new web-based management GUI doesn't explain a lot, but it shows he's come a long way since the previous snapshot-based parity system.
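For reference, the SE11 + Napp-It route really is about this short (going from memory, so treat it as a sketch and check napp-it.org for the current installer one-liner; pool and filesystem names are just examples):

    # after a fresh Solaris Express 11 text install, as root:
    wget -O - www.napp-it.org/nappit | perl    # napp-it online installer, then browse to http://<server-ip>:81

    # create a filesystem and share it to Windows via the kernel CIFS server
    zfs create tank/media
    zfs set sharesmb=on tank/media

Everything after that (users, permissions, snapshots) can be handled from the Napp-It web GUI.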

 
Sorry to thread jack. Was planning on building pretty much the exact same system as Zak (except with the cheap 4220 instead) so this thread strikes close to my wallet.

How's the expandability with that particular solution, odditory? Does the M1015 work with SAS expanders, where you'd connect it to another chassis using some sort of SFF-8087-to-SFF-8088 adapter?
 
Some notes:

  • For performance and to avoid hardware bottlenecks, swapping the expander for another (real) controller would be the highest-performance choice, and may not be much more expensive at all. There are also possible issues with expanders that only get exposed when using ZFS.
  • Intel SSDs are good for L2ARC (random reads) but bad at ZIL (45MB/s write speed per SSD), and they have no supercapacitor, so they are subject to corruption on power loss. That makes using them as a ZIL dangerous. Ideally, hold off on SSDs and buy a third-generation SSD (Intel G3, SF-2000) when it is available; that will be the preferred choice, since you can use one (or more) SSDs for both L2ARC and ZIL without risk of corruption. Do not use the Intel G2 or any other SSD without a supercapacitor as a ZIL device! L2ARC is always safe, though, even if the SSD gets corrupted.
  • Be careful that your controller is in HBA (non-RAID) mode, aka "IT" mode. If it's in RAID mode it may still disconnect disks that don't respond in time, so you get the 'drive dropout' problem back again; at least that happened on my Areca controller. The LSI does support setting timeout values; you could set those to 90 seconds to try to prevent drive dropouts. But ideally you want a REAL HBA without any RAID at all and give the OS/ZFS full control; drives will then NEVER be disconnected or dropped.
  • Be sure to create your pool using the ashift=12 method, which my distribution can do for you (see the rough example after this list). After using the 'sectorsize override' feature you can go back to Nexenta or another Solaris-derived OS and still keep the benefit of a 4K-sector-optimized pool. Do not use the ashift patch; it can cause total data loss, and multiple people have lost data to it. The sectorsize override in my distro is completely safe; the ashift patch is not.
  • Do you really need a UPS? You don't need it for filesystem consistency; ZFS won't allow the pool to be corrupted no matter what you do to it, as long as the ZIL works as intended (i.e. no bad SSD as ZIL). Skipping the UPS saves some power as well.
  • The ideal RAID-Z2 vdev width for 4K-sector disks is either 6 or 10 disks, not 12, so you may want to reconsider. A normal config for 24 bays is two 10-drive RAID-Z2 vdevs, with the remaining 4 bays for SSDs or a boot/system device.
  • Consider multi-gigabit if you want to exceed gigabit speeds; your board already has two Intel NICs. Perhaps buy a new managed switch so you can use LACP?
  • As for my ZFSguru project: the LSI SAS2008 controller should work with the latest testing .iso, but it doesn't work yet on vanilla FreeBSD 8.2.
  • Are your existing 2TB drives also Samsung F4s, or a different type? That's quite important. :)
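To show roughly what the sectorsize override boils down to, this is the manual gnop trick on FreeBSD (device names are just an example; the GUI does the equivalent for you):

    # put a fake 4K-sector provider on top of one disk per vdev
    gnop create -S 4096 ada0
    # create the vdev through the .nop device; ZFS uses the largest sector size in the vdev
    zpool create tank raidz2 ada0.nop ada1 ada2 ada3 ada4 ada5 ada6 ada7 ada8 ada9
    # export, remove the gnop layer, re-import on the raw disks; ashift stays at 12
    zpool export tank
    gnop destroy ada0.nop
    zpool import tank
    # verify
    zdb | grep ashift     # should report ashift: 12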

Good luck! :)
 
Thanks for all the great info, this will really help solidify my plans.


odditory:

Thanks, I try to make sure I do my homework before asking the same questions over and over. ;)

- I was considering virtualizing, but I've heard there can be issues with that on the operating systems that support ZFS. Are there real problems or performance concerns with running ZFS in a VM, or is it just more of a hassle than running on native hardware?

- Based on the comments about the ZIL + L2ARC it seems like I should hold off on using those for now and only go for it if extra performance is needed. At that point there should be some better SSD options out there, supercap, etc.

- Now that I'm not using the SSDs for ZIL + L2ARC, I was thinking about ditching the mirrored rpool. I saw that you usually use one system drive and just take backup images every so often. Is that the easiest thing to do for a home environment? How important is a mirrored system drive?

- I checked out Solaris Express 11 and that looks promising. I've also read a bunch about flexraid earlier in the year, I really liked the idea of his setup, especially for a home server. My only concern with something like that is that it's essentially only 1 guy working on it. I'll have to keep an eye on that project as it progresses.

- Thanks for the heads up on those IBM ServeRaid M1015 cards. They look very promising, especially if there are issues with expanders and ZFS. I'm always looking to save a few bucks, so $150 for 2 real controllers instead of going for the expander seems like the best way to go.


sub.mesa:

- Thanks for the tips, I'll make sure that I keep the controller in HBA mode, and I'll read more about ashift=12 to align for 4k sectors.

- For the UPS: if I wait and don't purchase a separate ZIL device, are there still no issues with a sudden power outage? You're right, it would be nice to save that money, and since the UPS is only 97-98% efficient I'd also be using a little more electricity throughout the year. I was thinking of it more as an insurance policy: the system would be protected from brownouts and loss of power as well as lightning and other surges.

- For the pool size, I've checked out the benchmarks in the 4K-sector ZFS thread. Is there a big issue with not creating 6- or 10-drive RAID-Z2 vdevs? I'm planning to mount the system drive(s) internally to leave the 24 hot-swap bays free for storage drives. I'm OK with a 12-drive vdev not performing at its optimum as long as it can still max out a couple of GigE connections.

- I was thinking about getting a new switch, but the most you can get out of OpenSolaris with LACP is single-gigabit to each client, right? So for now I was planning on keeping it simple with no LACP and just upgrading to 10GbE when it becomes affordable in a couple of years.

For my existing drives I have:

4x Samsung F3 2TB
2x Samsung F4 2TB
3x Seagate 2TB

Is it a problem mixing and matching the 4k drives?
 
You do not need a UPS, a BBU or any other mechanism to survive an interrupted power cycle. If you do not use a dedicated ZIL device, the ZIL lives on the main pool, mirrored or stored with parity just like the rest of your pool data.

As for lightning/surge protection, that is up to you. If you don't have a backup of this data I can understand wanting protection for it. I just chose to run two systems instead, which seemed easier and more secure.

You can create a 12-disk vdev, yes, but I would still want 10 for the higher performance, which may not be noticeable right away but might be once other factors cause some performance degradation.

LACP bandwidth scaling is not linear; the main interface will receive about 70% of the requests, so you get a 70/30 balance instead of the optimal 50/50, but that's still an improvement over single gigabit. Some people think LACP can't go faster than single gigabit from host to host, but LACP should send and receive on both interfaces, so I don't see why host-to-host transfers would be limited to 1 gigabit.

Your existing F3 2TB drives should ideally not be mixed with 4K drives. The F3 is 7200rpm with lower data density, while the F4 is 5400rpm with 4K sectors and higher data density, so the two could slow each other down. There are a few options:
- accept the performance penalty
- create a separate pool of just those 4 Samsung F3 disks (use it for backup and/or the system?)
- keep them as hot spares or external backup drives.

Especially in RAID-Z it is extremely important that the latencies of all HDDs are about the same. If disk A is slow on I/O 1 and disk B is slow on I/O 2, then both I/O requests get slowed down. RAID-Z involves all members in every I/O operation, so the latencies are crucial here. Mirroring would be less affected.
 
You do not need an UPS
For the data in the ZIL, sure, but all electronics are better off on a UPS rather than being powered down instantly, especially the HDDs. I'm never running a single HDD without an APC Smart-UPS behind it :p
F3 is 7200rpm and low-data density, F4 is 5400rpm, 4K sector and high-data density.
The 2TB F3 is 5400rpm as well; it just has four 500GB platters and is not a 4K drive. Only the 1TB (and smaller) F3s are 7200rpm.
 
Oh, I thought the F3 was 7200rpm and the F2 was 5400rpm; I guess I was wrong. :p
 
Buying 2+8 disks for one vdev to copy your old stuff onto, then adding the old drives plus one new one as a second 2+8 vdev, does get you going cheaply, but all the new writes will go to the second vdev, so it's not exactly balanced going forward.

Perhaps create a stripe of 10 disks, copy the old stuff, then attach 10 more disks to turn them into mirrors. That leaves an extra pair of disks to balance the new write load (but gives 6 disks fewer of available space).

Or two 1+4 vdevs, copy the data, then add two more 1+4 vdevs, as a compromise between the two.

Remember, you can replace all the disks in a vdev to expand the pool.
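Something like this, one disk at a time (made-up disk names; if your pool version doesn't have the autoexpand property, an export/import after the last replace has the same effect):

    # let the vdev grow once every member has been replaced with a bigger disk
    zpool set autoexpand=on tank
    # swap one old disk for a new, larger one and wait for the resilver
    zpool replace tank c1t0d0 c2t0d0
    zpool status tank     # once the resilver completes, move on to the next disk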

> with LACP is single-gigabit to each client
Solaris can do an L4 hash, so each connection can consume one pipe, but both ends of the link need that option.
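On SE11/OpenIndiana that should be roughly the following (NIC names are examples, and the switch side needs a matching LACP configuration):

    # LACP aggregation over two Intel NICs with an L4 (per-connection) hash policy
    dladm create-aggr -L active -P L4 -l igb0 -l igb1 aggr0
    ipadm create-ip aggr0
    ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4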
 
I was checking out the IBM M1015, have you had any compatibility issues, or is it the same as the LSI card once you flash it?
 
Think I just found my Perc 5i replacement!

What RAID level did you intend to use? There's no out-of-the-box support for RAID 5 as far as I can tell; you need to plug a physical 'key' into the HBA to enable RAID 5/50 and LSI SafeStore.

odditory can probably shed more light on this, as I haven't played around with this card.
 
Here's a little secret that is going to hit my blog in a day or two, which may help your build if you care about saving a few hundred dollars. You can grab three IBM ServeRAID M1015 cards from eBay for a total of 24 ports. They're selling for around $75 apiece, and less in some cases. That means you could skip the expander ($250+ savings) *and* get a cheaper motherboard ($100 savings); just make sure it has at least 3 x PCIe slots.

They're rebadged LSI 9240-8i cards (a $300 part) based on the 6G SAS2008 chip. The M1015 is the most overlooked, best value-for-money HBA prospect out there right now. The reason they've flooded eBay is that they ship by default in a lot of lower-end IBM servers, and companies tend to pull them on arrival and replace them with more powerful SAS2108-based RAID cards.

Can you flash IT firmware onto the IBM ServeRAID M1015?
 
- Rather than installing the O/S directly to a boot drive, install ESXi 4.1 and virtualize. Lots more flexibility and power doing it that way, plus you can hardware pass-through the SAS2008 card to the VM.


Odditory, I was thinking about using ESXi in a new NAS build as well, with the onboard LSI 1068E on my X8ST3-F in passthrough mode to Solaris 11. The motherboard has full hardware virtualization support, but I have heard there are issues with performance of the disk controller in this configuration.

Can you share your experience with this? What motherboard are you using and did you see much of a hit in performance of ZFS in this way over native execution?

Thx
Mike
 
You will get nearly the same performance with ESXi and passthrough as on real hardware, provided your mainboard has no VT-d problems and you have enough RAM and CPU power.

If possible, you should install VMware Tools and use the vmxnet network driver (not yet included in the VMware Tools for Nexenta, but it is in SE11 and OpenSolaris).

Read also about all-in-one:
http://wiki.xensource.com/xenwiki/VTdHowTo
http://www.servethehome.com/configure-passthrough-vmdirectpath-vmware-esxi-raid-hba-usb-drive/
http://www.napp-it.org/napp-it/all-in-one/index_en.html

gea
 
You will get nearly the same performance with ESXi and passthrough as on real hardware, provided your mainboard has no VT-d problems and you have enough RAM and CPU power.


Outstanding. The X8ST3-F has a pretty good VT-d implementation (it's based on the X58 chipset). The only issue is that the onboard ICH10R SATA controller can't be passed through as a whole, so I can't really use it for the disks and SSDs attached to Solaris. Passing a disk through as a raw block device won't let Solaris read some of the details of the disk, and that worries me.

But I have some extra LSI cards that can fill in the ports I suppose. I did order some of the IBM M1015's as I found a source selling them cheaply... I could export those to Solaris and they would be great for SSD's I think.

This will allow me to run a windows server 2008R2 instance as a VM as well, so it can do comskip processing and be a BDC for the AD domain.
 
Dear OP, I'm currently reading notes on ZFS on this forum and other websites; I'm a ZFS beginner with no real experience. The background for this post is your 40TB figure: some web pages suggest that for aggressive ZFS usage (and for users wanting enterprise performance) the rule of thumb for DRAM is 1GB for the OS + 1GB per TB of data, plus extra memory for any SSD cache and extra memory for deduplication.

I understand this is for home use. Just in case you want to put a similar setup into production, do leave some room for extra RAM.
 
Hello OP, can you please divulge the functions that this server will be used for? I'm in the planning stages for a server, and I am debating between going with something more powerful like you are doing, and something that focuses more on energy conservation. Do you know what kind of wattage this thing will draw?
 
lightp2:

I've heard some of those memory recommendations as well, but for the load I'm planning to put on this box, 8GB should be sufficient. Since it's for home use I don't have a large number of concurrent users, or things like big databases where high IOPS and throughput are of the utmost importance. For instance, here is a 60TB box using 12GB of memory:

http://www.servethehome.com/big-whs-update-60tb-edition/

My setup will take up to 16GB of unbuffered ECC, or more with registered DIMMs. So I can easily expand to 16GB if I need to in the future, and I don't see myself breaking that barrier for a while unless my usage model drastically changes.

DieTa:

Definitely, I've never done a project quite this big before, so I'll make sure to take lots of pictures.


facesnorth:

Power consumption was a big concern for me too, as this is a home server running 24/7. Here's an article comparing the Xeon X3440 to the Core i3, which is one of the most efficient desktop processors:

http://www.servethehome.com/intel-core-i3-530-and-supermicro-x8sil-f-power-consumption-xeon-x3440/

I chose the 1156 socket because the power consumption is a lot lower and I can take advantage of the Supermicro board and ECC memory. I expect this box will be pretty lean on power for a 40TB build; it's hard to be too lean with 24 drives, but with the Samsung drives and a high-efficiency PSU it shouldn't be too bad. I'll post power consumption numbers as soon as I have it up and running. The parts are on order now.
 
Cool. I had been thinking about similar hardware for the same reasons, but I was concerned I'd be building a beast. I'm looking forward to seeing those numbers.

I am still curious though, what are all of the functions you plan to use this box for?
 
I've been redoing an outline for a new home fileserver (tired of buying external HDDs and finding places to stick them!) and keep switching around hardware and ideas. I had already decided on RAID-Z2 for obvious reasons, and I really like your build so far; I think I may be cloning it. You're using pretty much exactly the setup I was aiming for, except I was looking at a Norco 4020, so maybe the $100 bump to the 4224 isn't such a bad idea. I won't be filling the entire thing with drives from the start; I'll add more as I get more money to blow on hard drives :p
 
- Rather than installing the O/S directly to a boot drive, install ESXi 4.1 and virtualize. Lots more flexibility and power doing it that way, plus you can hardware pass-through the SAS2008 card to the VM. That way you can do other things with all those free cycles since the system will otherwise sit idle essentially 99.9% of the time.

I'm actually rather intrigued by this option, although hardware pass-through is something that I've never really played with in ESXi, and [surprisingly] Google doesn't seem to be revealing much either.

I've got a Supermicro board with a built in LSI 1068E controller, and I can't for the life of me figure out how to configure hardware pass-through to a virtual machine.

Got any hints that might set me on the right path?
 
I've got a Supermicro board with a built in LSI 1068E controller, and I can't for the life of me figure out how to configure hardware pass-through to a virtual machine. Got any hints that might set me on the right path?

http://hardforum.com/showpost.php?p=1036401898&postcount=5

is a previous post of mine with 2 screenshots of doing passthrough in ESXi. You have to select the 1068E to be passed through, and then in the configuration of the guest properties add the selected passthrough device.
 
Which HP expander card (model) are you going to use? Just curious for compatibility reasons; I plan on using one myself along with Hitachi 2TB drives, but otherwise pretty much the same setup as yours hardware-wise. I'd like to run FreeBSD 8.2, do two 20TB ZFS volumes, and use jails to isolate the various services I need. But we'll see what actually happens.
 
Did you guys encounter any issues going with ESXi? After reading more about what people are doing with it, I'm going to try setting up my server that way; it's much more powerful to be able to run a few VMs rather than just the one OS. I've kept away from VMs in the past because of the added complexity, but I'll give it a go this time around.

I went with the RPC-4224 just to max out on drives. It was more expensive than the 4020 and the 4220, but the extra 4 bays along with the simplicity of the backplanes led me to choose the 4224, especially since I plan on maxing it out from the start.

I ended up not going with the expander based on some recommendations here about possible issues. Once odditory brought the IBM ServeRAID M1015 to my attention it saved me a hundred bucks or so, and 3 HBAs should be a better solution than 1 HBA + expander.

If you are looking for an HP Expander, send a PM to SynergyDustin on this forum.
 
Any updates? After a semi-failed attempt at a FreeNAS box with a 3ware 9650SE, I'm thinking of scrapping that box and moving on to a more powerful box running ZFS.
 
Hello acesea,

I would suggest trying the newest ZFS OS, Solaris Express 11, or the free OpenIndiana (stable release announced for 02/2011). If it runs on your hardware, go with that.

In general:
ZFS is the best currently available filesystem, but ZFS operating systems are not mainstream. A lot of hardware does not work at all, a lot works with problems, and only a little just works with no problems - mainly the hardware used by Sun/Oracle or in prebuilt Nexenta enterprise systems.


Usually you can say:

Trouble free =
- X8-series boards from Supermicro with Intel 3420 or 5520 server chipsets
- X-class or L-class low-power Xeons
- Onboard SATA on Intel ICH10 (AHCI)
- Always use Intel NICs
- LSI controllers based on the 1068E (always the best) or the SAS2 LSI 2008 with IT firmware

- Avoid expanders and hardware RAID; do not use other controllers at all, or only if others have reported success.
Use mainboards with enough 4x or 8x slots for the NICs you need (1Gb now / 10Gb next) and the storage controllers you need.

If you follow these simple rules, you will have fun.
If I were building a 40TB system I would use a 24-bay chassis with 3 x 8-port SAS/SATA (1068E) controllers plus at least one extra slot for future 10GbE - so a minimum of 4 slots (Supermicro has boards with 7) - and use non-4K SATA disks.

I would not use a SAS2 controller such as the LSI 2008 because of the WWN problem (the LSI 2008 reports a disk-related WWN instead of a controller-based ID).
Read http://www.nexenta.org/boards/1/topics/823

Gea
 
I've been waiting on a backordered motherboard and the drives, but everything just arrived a couple days ago and I'm starting to put the whole thing together:

serverparts.jpg
 
Nothing quite like getting a bunch of new parts together in such a display.
 
Just wanted to say that this is a great thread. Keep us posted, and I'm curious about the IBM ServeRAID M1015 w/ZFS as well. I have 2 ordered in the mean time as they are super cheap.
 
Just wanted to say that this is a great thread. Keep us posted, and I'm curious about the IBM ServeRAID M1015 w/ZFS as well. I have 2 ordered in the mean time as they are super cheap.

I bought a bunch of M1015 cards and returned them. It may have been a batch of lemons, or the cards may just have issues; the ones that weren't DOA behaved very flaky in OpenIndiana. Be advised that all ZFS flavors require installing the imr_sas driver from LSI.
 
I bought a bunch of M1015 cards and returned them. It may have been a batch of lemons, or the cards may just have issues; the ones that weren't DOA behaved very flaky in OpenIndiana. Be advised that all ZFS flavors require installing the imr_sas driver from LSI.

Define "very flaky," please. I bought three of them as well, and two aren't working completely (three dead ports total across the two of them). Once I figured out which channels were good and populated them with good drives (yet another issue I'm having with my build, lol), things went pretty smoothly.
 