11TB + ZFS, WHS, IP Camera, HomeSeer Server

Emulsifide

n00b
Joined
Jan 24, 2011
Messages
59
Hey everyone. Looooooong time lurker (literally 11-12 years, if not longer), first time poster.

I currently have an 11TB WHS server that I'm using to back up my PCs and store data such as movies and personal information. It's been a great little server up to this point, but I'm starting to see its limitations. First and foremost, I'm not satisfied with only having data duplication to protect my data, so I'd like to incorporate ZFS into the mix. I'm starting to arm my residence with MJPEG streaming IP cameras which eat up a lot of CPU cycles when trying to monitor them. Add home automation to the recipe and you've got a great case for virtualizing everything.

Here's the spec of the build (all of which is lying around the house and/or currently being utilized in the WHS box):

  • Intel Core i7-860
  • OCZ Gold 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Low Voltage Desktop Memory Model OCZ3G1600LV6GK (I bought a triple channel kit when they were on sale. I plan on adding a 4th module to even it up to 8GB)
  • ASUS P7P55D PRO LGA 1156 Intel P55 ATX Intel Motherboard
  • 2 x Intel EXPI9301CTBLK 10/ 100/ 1000Mbps PCI-Express Network Adapter
  • Cooler Master Centurion 590 Case
  • 2 x IBM ServeRAID M1015
  • 3 x Western Digital Caviar Green WD20EADS 2TB SATA 3.0Gb/s 3.5" Internal Hard Drive
  • 5 x SAMSUNG EcoGreen F2 HD103SI 1TB 5400 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive
  • 3 x SUPERMICRO CSE-M35T-1B Black 5 Bay Hot-Swappable SATA HDD Enclosure
  • A fairly efficient 80 plus Corsair power supply (I can't remember what it is off the top of my head)
  • Some old crappy PCI video card

I never thought I'd want to build such a beefy server for home use. I actually have a total of three M1015s on the way for further expansion. Chances are, I'll end up moving to some sort of 24 bay Norco case when I need to expand further. It sucks because for another $50 or so (those damn Supermicro chassis are EXPENSIVE), I could have had a Norco.

I'm sure the Core i7 is a tad overkill for my situation, but it's what I've got laying around at the moment. Throwing a Kill-A-Watt onto the power supply, motherboard, and CPU above, along with a Radeon 4850 and an old-school WD 320GB 7200rpm SATA drive (that eats up at least 15-20 watts), I'm seeing about 110 watts idle and 200 watts when the CPU is under full load (GPU at idle). Not half bad. It's either that or the 1.8GHz Conroe 430 I'm using currently for my WHS server to power all of this :-D

Here's what I'd like to do with the system:

  • Virtualization - ESXi 4.1. I've used 4.0.1 in the past at my old job and it was a blast to work with.
  • ZFS NAS - A solid backend storage solution for all of the VMs. I'm leaning toward Solaris 11 Express since we use Solaris 10 exclusively here at work in our data center. I'm a Windows administrator and I'm looking to learn more about the other side of the business. Napp-it will be used with the OS as well.
  • Windows Home Server - To provide all the workstations and laptops in the house with a user friendly backup and storage solution. Local TV shows will be recorded to here as well using an HDHomeRun dual digital tuner.
  • HTPC Storage - DVD and Blu-Ray Movies in MKV and ISO format will be stored to the server. A mix of XBMC and Media Center 7 are being used on the Core i3 HTPC. More HTPCs will be attached to the system as time progresses (one for the garage and two more for bedrooms).
  • HomeSeer HSPRO - To work with the Z-Wave devices I have around the house. This one is questionable considering I need to involve a USB device with the VM. With ESXi, has VMWare straightened out USB passthrough yet? Doing a quick Google search, I found this:

    http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1022290

    Back when I was working with ESXi 4.0.1, you needed a networkable USB hub to interface anything with a VM. Has anybody attempted this yet? I'm interested in whether or not you've had successful results.
  • Video Surveillance Server - Active WebCam software to interface with my IP cameras. Potentially on the same VM as HomeSeer.


I'd love to hear some feedback on my proposed setup. Advice on how to virtually interface the ZFS NAS to WHS would be greatly appreciated. iSCSI and NFS come to mind. As soon as the HBAs come in, I can start putting this system together. My WHS box is about 75% full at this point, which means I'll end up locally buying a couple more 2TB drives to offload everything.
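
For reference, here's roughly what I'm picturing for the Solaris side of the WHS hookup: an NFS share for media plus a zvol exported over iSCSI through COMSTAR. This is only a sketch I've pieced together from reading, not something I've run yet, and the pool/dataset names and the 500G size are placeholders, so corrections from the Solaris guys are welcome.

Code:
#!/usr/bin/env python
# Rough sketch (untested): share a media dataset over NFS and expose a
# ZFS-backed zvol to WHS over iSCSI via COMSTAR on Solaris 11 Express.
# Pool/dataset names ("tank", "tank/whs-backup", "tank/media") and the
# 500G size are placeholders.
import re
import subprocess

def run(cmd):
    # Run a shell command, echo it, and return its output as text.
    print("# " + cmd)
    out = subprocess.check_output(cmd, shell=True)
    return out.decode("utf-8", "ignore") if isinstance(out, bytes) else out

# Plain NFS share for the media dataset.
run("zfs set sharenfs=on tank/media")

# Block device (zvol) that WHS will see as a raw iSCSI disk.
run("zfs create -V 500G tank/whs-backup")

# Enable the COMSTAR framework and the iSCSI target service.
run("svcadm enable stmf")
run("svcadm enable -r svc:/network/iscsi/target:default")

# Register the zvol as a logical unit and pull its GUID out of the output.
out = run("stmfadm create-lu /dev/zvol/rdsk/tank/whs-backup")
guid = re.search(r"[0-9A-Fa-f]{32}", out).group(0)

# Make the LU visible to all initiators (fine on a home LAN) and create a target.
run("stmfadm add-view " + guid)
run("itadm create-target")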
 
Windows Home Server - To provide all the workstations and laptops in the house with a user friendly backup and storage solution. Local TV shows will be recorded to here as well using an HDHomeRun dual digital tuner.

If by chance your workstations and laptops are Windows 7, I would forgo this and just have them back up to a samba share served by ZFS.

If how I'm reading this is correct, you want all of this on one box under ESXi? If so, you'll have to walk a fine line with passing through direct hardware access, or have a kludge of vmdk files across multiple drives. This is because ESXi doesn't support volumes over 2TB without some hacking. Meaning if ESXi is doing all your storage, you would only be able to give 2TB slices to WHS/ZFS/etc. You could pass through a controller, but any drives on that controller would only be seen by the guest it's passed through to.

AFAIK, USB is still a no-go for ESXi. EDIT: Guess it is possible now, no experience here though.
 
If by chance your workstations and laptops are Windows 7, I would forgo this and just have them back up to a samba share served by ZFS.

If how I'm reading this is correct, you want all of this on one box under ESXi? If so, you'll have to walk a fine line with passing through direct hardware access, or have a kludge of vmdk files across multiple drives. This is because ESXi doesn't support volumes over 2TB without some hacking. Meaning if ESXi is doing all your storage, you would only be able to give 2TB slices to WHS/ZFS/etc. You could pass through a controller, but any drives on that controller would only be seen by the guest it's passed through to.

AFAIK, USB is still a no-go for ESXi. EDIT: Guess it is possible now, no experience here though.

Thanks for your input. I'm well aware of the 2TB issue from my previous job. With that, I don't have a problem passing controllers over to the VMs, especially since I have three that I've purchased at this point. Two could go toward ZFS and the third could be used local to ESXi for hosting the other VMs. Either that or I could bring the on-board crap into play somehow. So many choices!!
 
I'm starting to arm my residence with MJPEG streaming IP cameras which eat up a lot of CPU cycles when trying to monitor them.
  • Video Surveillance Server - Active WebCam software to interface with my IP cameras. Potentially on the same VM as HomeSeer.

How many cameras are ya planning on monitoring?

I have a Xeon E5520 hooked up for 6 IP cameras @ 720x480 recording with analytics. The processor sits anywhere between 15% and 20% just for that alone. I've seen it spike near 30% at times, but not often. Each recording is fenced, so not all of each image is processed for motion. Our plan was for 20 cams total and 9 or 10 with analytics.
 
How many cameras are ya planning on monitoring?

I have a Xeon E5520 hooked up for 6 IP cameras @ 720x480 recording with analytics. The processor sits anywhere between 15% and 20% just for that alone. I've seen it spike near 30% at times, but not often. Each recording is fenced, so not all of each image is processed for motion. Our plan was for 20 cams total and 9 or 10 with analytics.

Thank you for the benchmark! What kind of frame rate are you capturing and what kind of cameras? What kind of bandwidth are you eating up per camera? I'm planning on at least four cameras at this point. I'm currently evaluating a Foscam FI8904W. Not too shabby of a camera, although the stock lens is a little narrow for my situation. Resolution is 640x480 @ 4fps. The cameras will be monitored for motion by the server. When tripped, I get an email with a couple frame captures and the server will begin recording the event. Depending on server load and power consumption, I may raise the frame rate to something better, but 4fps seems to capture quite a bit of action during my testing this past weekend. The raw feed of the camera to the server seems to generate roughly 3-4Mbit/sec per camera, which is acceptable.
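
For anyone curious how that 3-4Mbit/sec figure shakes out, here's the quick back-of-the-envelope math I did. The average compressed frame size is just my guess for 640x480 MJPEG, so treat the output as a ballpark.

Code:
# Rough MJPEG bandwidth estimate for the planned setup.
frame_kb = 100      # assumed average JPEG frame size at 640x480 (my guess)
fps = 4             # capture rate I'm running the Foscam at
cameras = 4         # planned camera count

per_cam_mbit = frame_kb * 8 * fps / 1000.0           # KB -> kbit -> Mbit/s
total_mbit = per_cam_mbit * cameras

print("Per camera: %.1f Mbit/s" % per_cam_mbit)               # ~3.2 Mbit/s
print("All %d cameras: %.1f Mbit/s" % (cameras, total_mbit))  # ~12.8 Mbit/s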
 
As if it were meant to be, I've experienced a hard drive failure with my current WHS box tonight. I came home to a gut wrenching clicking coming from my rack in the basement.

Thankfully, the only thing I don't have duplication on with my server is the movies, and the drive appears to be working fine for the moment when hooked up to another computer. There's about a hundred movies currently on the drive and I'm about a half hour into the three hour offload of the data to another stable drive. We'll see how much of it I can really salvage in a couple hours.

I really wish I had ZFS right now. This is what I want to prevent with the revamp of this system....
 
It's funny, I have almost the same setup you do....


I have a Norco 4020 with the SM PCIe dummy cards running 2008 R2 with WHS in Hyper-V, with six 1TB drives and two 2TB drives.

I have an HTPC with a Ceton 4-tuner and HDHomeRun... I however have another PC running HomeSeer and HSTouch...

I found that having my HomeSeer server separate allowed me redundancy. My HomeSeer PC is a low-power AMD X2 with a 30GB SSD and mATX motherboard with an Nvidia 6150 GPU and 2GB of RAM.

It is backed up nightly to my WHS. This allows me to have two hardware failures before my home automation goes down.

I can use my image-based backups of my HomeSeer box (weekly via the W7 backup feature) to load up a VM on Hyper-V and pass through my Z-Wave USB interface within 10 minutes of a hardware failure.

If I had one machine for them all, then one hardware failure and I am toast.

When my irrigation, HVAC, lights and whole-house music are all controlled via HomeSeer, redundancy is important. I can do everything but move my USB Z-Wave interface from a remote location with RDP and Hyper-V Manager.

Also I have my OS on my server as RAID 1 with RE drives.

Good luck to you and let me know if you have any questions with HomeSeer... I have about 20 lights on my system, plus HVAC and irrigation...

Using HSTouch via 2 iPhones, 1 iPad and 1 ELO Windows touchscreen.
 
Thank you for the benchmark! What kind of frame rate are you capturing and what kind of cameras? What kind of bandwidth are you eating up per camera? I'm planning on at least four cameras at this point.

10fps sounds about correct from memory for recording, I'd have to actually check.

I'm using a combination of Vivotek FD7141s and IP7142s. And a pile of older existing analog cams once the rest of the converters show up.

Bandwidth for the 6 cams is 10 Mbps at this moment. So just under 2 Mbps per cam. Parts of the images are static because of mounting conditions (sides of buildings) so bandwidth usage will be a bit lower than if the full view was motion.
 
Again, thank you both for the advice and input.

Adidas4275, I'll get in touch with you if I have any issues with HomeSeer. As it currently stands, I don't believe having hardware redundancy for the home automation side of the coin is necessary in my situation since I'm only controlling lighting. HVAC will be in the future, but the thermostats I'm looking at all have the option of operating independently of the z-wave controller when not in contact with it.

satterth, you've got some great cameras to work with. I know I'm limiting myself, but I don't want to spend a ton of money on the cameras. A huge difference between the Foscam I'm working with and your cameras is the fact that yours support H.264 streaming. Mine only supports MJPEG, which is a huge bandwidth hog. Are you aware of any low dollar (under $150) H.264 cameras that do pretty good in low light?

While my current WHS server is rebuilding its backup database, I'm dicking around with the Core i7, ESXi and Solaris. With a single Intel NIC, the 6GB of RAM, headless, an Antec Neo HE 500 watt power supply, and a Western Digital 2.5" 80GB Scorpio Blue as the OS drive, the system is idling at 66 watts. Not half bad. Before I head to bed tonight, I'll have Solaris installed so that I can tinker with it.
 
Are you aware of any low dollar (under $150) H.264 cameras that do pretty good in low light?

No, I do not have any experience with cheap H.264 cams that work well. I got mine from a local supplier. Due to the cost of installing mine in remote locations I did not want to tinker around with experiments. I needed something that would work correctly out of the box.

Indoor cams you can get for near that price.
Look at the ACTI ACM-4000 or Axis M1011. They should be close to the $150 mark.

If you're looking for an outdoor cam, then good luck. Everything I know of starts at $400 and goes up from there. That's part of the cost of having the extra electronics onboard and making sure it can work in adverse weather conditions.

The cheapest way, I think, would be to get a video server (with as many ports as you need: 1x, 2x, 4x, 8x) and blast in a bunch of analog cameras to it, then feed that into the NVR or FTP for archival storage. A 4-port Grandstream can be had for about $100 per port. It will do motion detection and fire emails on events. $50 will get ya some analog cameras that do well in low light. The tech is already proven. This way you keep the electronics in the cams to a minimum, and the MPEG converter can use cheaper electronics because it can be kept indoors where it's warm and dry.
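
Just to put rough numbers on a 4-cam setup using the prices above (all ballpark):

Code:
# Ballpark cost comparison for a 4-camera setup (prices from the post above).
cams = 4

analog_cam = 50          # cheap low-light analog camera
encoder_per_port = 100   # 4-port Grandstream works out to roughly $100/port
analog_total = cams * (analog_cam + encoder_per_port)

ip_indoor_total = cams * 150    # indoor IP cams near the $150 mark
ip_outdoor_total = cams * 400   # outdoor IP starts around $400 and goes up

print("Analog cams + encoder: $%d" % analog_total)      # $600
print("4x indoor IP:          $%d" % ip_indoor_total)   # $600
print("4x outdoor IP:         $%d" % ip_outdoor_total)  # $1600 and up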

Here is an example of an old analog camera we had on site (it's at least 8 years old). The side of the building is approx 220 feet long and there are 6 or so pot lights along the building edge, some of which are burnt out. Analytics work at the far end of the parking lot (cars/trucks). Pretty good for what is now a $50 analog cam. The cam catches birds and people easily when closer.




This is just my personal opinion, but if you want the option to swap cameras in and out as conditions change, then go IP. If you're just gonna put it up and let it run for the rest of its life, then go analog; it's cheaper. Plus if ya move it's easy to abandon analog cams and coax cable. Just take the video converter with ya.

But if your install base is large, remote or complicated, then go IP. Install costs will save ya money in the end.

I know you already have the one cam for testing, just food for thought before you get in too deep to turn back.
 
Not a half bad image coming out of that camera! I thought about going the capture card route but never put too much thought into it since I didn't feel like running coax throughout the property. I realize now that's a horrible excuse for ignoring the fact that I can get the quality I want at a potentially cheaper price point (at the expense of convenience).

Another power metric update. The same setup as before, but adding an additional Windows XP VM to the mix with Active Webcam installed. The test camera is monitoring and capturing at 4fps with full time motion monitoring. With Solaris 11 humming along in its own VM and this new one up and running, the system is idling at 72 watts. Granted, no HBAs or main storage drives are in the mix yet, but having the monitoring VM cost me only 6 watts so far is fantastic. Not only that, for some reason network utilization for the IP cam dropped down to 1.5Mbps. I haven't tried to figure that one out yet, but it's great considering the same quality of video is pumping out of it as there was when it was being monitored from my old WHS box.
 
Was just thinking, since memory prices are so low, you could buy 3 new kits of 2x 4GiB and install a total of 24 whopping gigabytes of RAM, just for 200 euro; not sure of the prices in the USA, but relatively speaking DDR3 memory is extremely cheap right now! The perfect time to buy, and buy a lot!

I'm sure it would be a tad excessive for the system; but given the money it's a huge convenience and nice investment: give all your VMs what they need and give ZFS OS the rest of the memory; it can use it well!

Also make sure ESXi is configured to honor flush commands; not sure how that works though.
 
If you already have the Cat5 strung, you can check out these little do dads.

http://www.svideo.com/cctvbalun.html

I've never tried them, but if ya already have the Cat5 ready for the IP camera you might be able to use these and have the ability to run cheaper analog cams too.
 
sub.mesa, you're absolutely correct. Memory is super cheap at the moment. I've been debating in my head whether or not to do what you've suggested. My only argument is if I want to switch to ECC and a server motherboard in the future. If so, buying now would be pointless. I need to get this system built with what I've got to see if I really need the performance for what I'm doing (doubtful).

Regarding flush commands, will I need to enable this in ESXi if I'm passing through the LSI controllers directly to the Solaris VM?

Satterth, I haven't strung anything through the property yet. That will happen sometime next week. That's a pretty neat suggestion, but like I said in my last message, I'm willing to look into coax at this point. Thank you for the suggestion. These might come in real handy at work as well.

Well, I received the HBAs in the mail yesterday. I can't say I'm pleased with the seller and his/her packaging methods. Here's how they showed up:

1170101635_C2KPc-L.jpg


Friggin lovely. All three were jammed into a USPS flat rate box with NO padding at all! Areas on each board are scratched up, but it doesn't appear any surface leads or traces have been hit (the scratches don't go through the protective coating).

I only had time to hook up the drives and add them as passthrough to ESXi last night. With all three HBAs in the system, 3 x 2TB WD20EADS, the 80GB Scorpio Blue from earlier, a PlayStation 3 40GB drive, and one old-ass WD 320GB 7200rpm drive, I was pulling 112 watts at idle. That's phenomenal considering the 320GB drive is a power hog and eats around 15-20 watts alone!

I will tinker more with it this weekend. Come payday next week, I'll order what I need hard drive wise to get the system to the point of offloading a test (copy) package of live data from my current WHS server to mess around with for a couple weeks. So that I'm ready for anything, I intend on simulating a lot of different scenarios so that I fully comprehend how to deal with the situation when it happens for real.

Does anybody have a really good source for forward SFF-8087 to 4 x sata cables? I bought up the remaining supply (two) of them off of Amazon at $19 a piece.
 
Not the best way to ship hardware for sure! But if it still works, it shouldn't really have caused harm. I would be much more sensitive with shipping damage for HDDs due to their mechanical nature. It could very well be that a badly shipped package of 8 HDDs gets thrown around excessively by the shipping company, causing you to receive 8 disks that may work but have hidden flaws, causing multiple of them to fail prematurely; it's entirely possible!

For the controller this is much less of a concern; if it breaks, well, you get another controller and it works again. It doesn't have to be the same type either, unless ESXi somehow requires this, which I don't think would be the case.

Also, if you passthrough the entire controller, so that the guest OS running ZFS sees the physical controller and uses its own driver to communicate with the controller, then you should be fine! It should honor the write flush commands in that case.
 
Not the best way to ship hardware for sure! But if it still works, it shouldn't really have caused harm. I would be much more sensitive with shipping damage for HDDs due to their mechanical nature. It could very well be that a badly shipped package of 8 HDDs gets thrown around excessively by the shipping company, causing you to receive 8 disks that may work but have hidden flaws, causing multiple of them to fail prematurely; it's entirely possible!

For the controller this is much less of a concern; if it breaks, well, you get another controller and it works again. It doesn't have to be the same type either, unless ESXi somehow requires this, which I don't think would be the case.

Also, if you passthrough the entire controller, so that the guest OS running ZFS sees the physical controller and uses its own driver to communicate with the controller, then you should be fine! It should honor the write flush commands in that case.


Yeah, I know. It still pisses me off when people don't give a crap about delivering a product properly. I'm going to test these things to the nth degree before I deem them fit for my data. I plan on hooking up every one of the connections this weekend to multiple drives and passing a significant amount of data over each one just to be sure.
 
Alright, here's the latest.

I hooked up the HBAs along with only my 3 x 2TB drives to set up a raidz for testing purposes. Everything went fine within the operating system using Napp-it. Once it's shared out via SMB and I start transferring data from my old WHS box, the connection starts out transferring at around 35MB/s and then drops to 1-2MB/s. This cycle repeats and the max rate steadily drops. FYI: Transferring to the VMware virtual disk (the datastore is hosted off of one of the power hog WD 320GB 7200rpm hard drives) provides a steady 35MB/s connection.

I've tried throwing the drives on different channels of the HBAs multiple times. After seeing the same results happen over and over, I finally grew a brain and brought each individual drive into the OS as a basic drive. One of the three 2TB drives is definitely inducing the slow connection. The strange thing is, no corruption is occurring when in raidz. The drive is just slow. Regardless, I will probably advance exchange RMA it within the next day.

In the meantime, I'm stuck. I need at least one more WD20EADS drive to get me in raidz so I can do a long term test of the array with a copy of some of my data. I'm also unsure as to whether I want to purchase more of the 1TB or 2TB drives OR buy something completely different. Given the dead 1TB Samsung I have from the WHS box and now a somewhat dead 2TB Western Digital, I'm a bit concerned about my drive choices and how long the rest of the drives I currently have are going to last.
 
Are you using aligned partitions? You may want to ask _Gea for help regarding napp-it, and how disks are set up using this frontend. Perhaps your performance problem is related to other issues though; normally I recommend benchmarking the network and local I/O performance separately so you can judge where the bottleneck lies: network performance or local ZFS performance, or a combination of both.
 
Are you using aligned partitions? You may want to ask _Gea for help regarding napp-it, and how disks are set up using this frontend. Perhaps your performance problem is related to other issues though; normally I recommend benchmarking the network and local I/O performance separately so you can judge where the bottleneck lies: network performance or local ZFS performance, or a combination of both.

Yeah, tonight's testing was going to be more refined. Working with VMWare on a daily basis, I'm well aware of the issues that can creep up due to misconfiguration of virtualization components.

Do I have to worry about aligning the partitions with these drives? I thought that was only necessary for 4K drives. The WD20EADS drives I'm testing with predate Western Digital's use of 4K sectors.
 
EADS are 512-byte sector indeed, so you wouldn't need alignment. I would recommend you test both local ZFS performance using dd or bonnie and network performance independently, so you know which component is causing low performance.
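
If you don't have dd or bonnie handy, something quick and dirty like this works as a stand-in: run it on the Solaris VM against the pool mountpoint, then from a client against the share, and compare. The path and 2GiB size are just examples, and keep in mind the read pass will be flattered by the ARC unless the file is bigger than RAM.

Code:
#!/usr/bin/env python
# Crude sequential write/read timer (dd stand-in). Path and size are examples.
import os
import sys
import time

path = sys.argv[1] if len(sys.argv) > 1 else "/tank/test/bench.bin"
size_mb = 2048
chunk = b"\0" * (1024 * 1024)        # 1 MiB of zeros per write

start = time.time()
with open(path, "wb") as f:
    for _ in range(size_mb):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())             # make sure the data actually hits the disks
write_secs = time.time() - start

start = time.time()
with open(path, "rb") as f:
    while f.read(1024 * 1024):
        pass
read_secs = time.time() - start

print("write: %.1f MB/s" % (size_mb / write_secs))
print("read:  %.1f MB/s" % (size_mb / read_secs))   # cached reads will look inflated
os.remove(path)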
 
EADS are 512-byte sector indeed, so you wouldn't need alignment. I would recommend you test both local ZFS performance using dd or bonnie and network performance independently, so you know which component is causing low performance.

Well, like I said, I've already found the drive in question. Pulling the bad drive out of the equation nets solid transfers.

Now, it's just a matter of figuring out what the hell I want to do with the drives I have. I'm considering hitting up my local Microcenter and grabbing 5 Samsung HD204UI drives since they have them for $80 a pop. Once you get past the firmware problem and align the drives, they appear to be pretty solid according to the internet think tank. Now that I'm thinking about this, that would actually be ideal. I could use the older 1TB Samsungs as a raidz2 to hold 3TB of my critical data and then the new 2TBs in a raidz for my movies and surveillance. Total storage would be 11TB again. lol. The EADS drives will retire to offline backup duty only.
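
The math behind that 11TB figure, for anyone following along:

Code:
# Usable capacity for the proposed layout.
raidz2_critical = (5 - 2) * 1   # 5x 1TB Samsungs in raidz2 -> 3TB for critical data
raidz_media     = (5 - 1) * 2   # 5x 2TB HD204UIs in raidz  -> 8TB for movies/surveillance
print("Total usable: %dTB" % (raidz2_critical + raidz_media))   # 11TB, same as the old box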
 
Alrighty, I've finished up my testing. I can definitely say that I'm not real pleased with what I've found.

First off, I definitely have a bunk WD20EADS. S.M.A.R.T. C5 (current pending sectors) is showing nearly 1500 sectors that are toast. Awesome! That's two drives in a week. Regardless, the drive is being advance RMA'd to Western Digital. Even though I'm becoming more and more convinced that their hard drives are shit for quality control, I love how easy they make it to RMA things. Lol!

Second, not one, but two of my HBAs are screwed up. One of them is showing SAS port P1/1 completely dead and won't recognize any hard drives that I have. The other is oddly showing the same port P1/1 as completely dead and port P0/0 is tripping false S.M.A.R.T. C7 CRC errors on every drive that I throw onto it. I know I took a chance with these cards, so I halfway expected this. I've contacted the seller on eBay. Chances are he/she is probably a deadbeat that was looking to offload 100+ of these cards quickly without any concern for quality of shipping and/or received product. If that's the case, I'll get my money back from PayPal. No big deal. The good thing is, I obviously have enough port density to continue on with the project.

Still no decision on drives yet. The 2TB Sammy's are looking mighty tasty at $80, but I'm not sure.

Interesting point for the EADS lovers who don't care to buy EARS drives and can't find a cheap source for them. Western Digital's advance RMA charge for a WD20EADS drive is $85, which isn't a bad deal. I wonder if you can purchase directly from Western Digital at that price. If not, advance RMA all the drives you already have, don't return anything, and then let them hit you with the charge.

EDIT: Here's some graphs of the bad drive versus a good drive:

Good drive:
1174115137_DbVNL-O.png


Bad drive:
1174114776_39ETw-O.png


Bad drive's S.M.A.R.T. health:
1174115467_URky8-O.png



EDIT 2: Well that was surprisingly fast. I just got an email back from the seller. He's supposedly going to send me two more HBAs immediately. We shall see...
 
UDMA CRC Error Count means your cables had an error; just 3 errors, but watch that value, it should not increase!

Current Pending Sector is your immediate concern. This is very dangerous and a result of the high data density while still using only 40-byte ECC per sector. The 4KiB sector disks have double the ECC per sector, which helps against this issue. You're seeing high Uncorrectable Bit-Error-Count, which causes corruption and 1000 sectors is quite heavy corruption!

New EADS disks may do the same; these disks kind of suffer amnesia. They need more ECC protection to prevent this from occurring as often. With ZFS you do have a reasonable protection against this, but I would make sure you have a good redundant configuration like RAID-Z2.

And nothing beats a real backup in addition to that, that's mandatory for data you really really cannot afford to lose!
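
If you want to keep an eye on those two attributes without pulling up a GUI every time, a little script along these lines would do it. It assumes smartmontools is installed, and the device names are just examples; adjust them for your system.

Code:
#!/usr/bin/env python
# Dump the raw values of the two attributes worth watching, per disk.
import subprocess

DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc"]     # example device names
WATCH = ("Current_Pending_Sector", "UDMA_CRC_Error_Count")

for disk in DISKS:
    out = subprocess.check_output(["smartctl", "-A", disk]).decode("utf-8", "ignore")
    for line in out.splitlines():
        if any(attr in line for attr in WATCH):
            cols = line.split()
            # smartctl -A prints the attribute name in the second column and
            # the raw value in the last one.
            print("%s  %-24s %s" % (disk, cols[1], cols[-1]))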
 
UDMA CRC Error Count means your cables had an error; just 3 errors, but watch that value, it should not increase!

sub.mesa, in this case, the UDMA CRC Error Count is not because of the cable. I have three brand spankin new 3ware SFF-8087 forward breakout SATA cables that all work just fine on the one controller that has no problems. Plug in all three of those cables into P0 on this particular controller with the problem and ALL three of the cables tick up C7 on any drive I plug into P0/0. It's not the cables. I'm 100% sure it's the port.

Current Pending Sector is your immediate concern. This is very dangerous and a result of the high data density while still using only 40-byte ECC per sector. The 4KiB sector disks have double the ECC per sector, which helps against this issue. You're seeing high Uncorrectable Bit-Error-Count, which causes corruption and 1000 sectors is quite heavy corruption!

Hmmm, doing the simple math, does this mean that 4K disks are worse off considering their sectors hold eight times more data and the ECC is only double that of a previous generation disk?

New EADS disks may do the same; these disks kind of suffer amnesia. They need more ECC protection to prevent this from occurring as often. With ZFS you do have a reasonable protection against this, but I would make sure you have a good redundant configuration like RAID-Z2.

That's definitely unacceptable to me. Hence why I'm only using these drives for testing and then for an offline backup.
 
4KiB sector disks would indeed lose 8 times as much when they cannot read a sector, but the added ECC would surely help at reducing this uBER problem to more acceptable levels.

As for your EADS disks: I'm not sure if they will be suitable for long-term offline storage. It could be that this degrades the electric charge, causing many Current Pending Sectors to show up once the HDD realises that it cannot read the sector back anymore.

Using ZFS for these disks is recommended since it can fix the damage quite easily as long as it has enough redundancy left. But indeed I would not use this for your most precious data. Perhaps backups of less important stuff would be good. Letting it scrub every week or so would help, to spot any weakly charged sectors and 'recharge' them if necessary. HDDs do that automatically by measuring the time it takes to read a sector. If it could be read but took longer than usual, then it writes the just-read data back to that sector to 'recharge' the sector, as it were.
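
A weekly scrub doesn't have to be fancy; a small job like this fired from cron would do. The pool name is a placeholder.

Code:
#!/usr/bin/env python
# Kick off a scrub and print the pool status. Schedule weekly from cron.
import subprocess

POOL = "tank"    # placeholder pool name

subprocess.check_call(["zpool", "scrub", POOL])     # starts the scrub in the background
status = subprocess.check_output(["zpool", "status", POOL]).decode("utf-8", "ignore")
print(status)                                       # shows scrub progress and any errors

# 'zpool status -x' prints "all pools are healthy" when nothing is wrong,
# which makes it easy to grep and send an email only on problems.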
 
Ok, here's the real question.

If you were me and you were looking for a new set of drives to buy, what would you get? Mind you, I'm very power conscious given this thing will run 24/7 in my home.
 
Well, I think the current hottest (in terms of popularity) HDD, the Samsung F4, is quite a good buy:

- unlike the EADS (4x 500GB), you get the most modern 666GB platters (3x 666GB), which would yield higher throughput than 500GB-platter disks
- with just 3 platters you have less power consumption (though all green drives are about 4W while 7200rpm normal disks are about 5-8W depending on the platter count).
- 4KiB sectors protect against extreme uBER causing massive bad sectors (amnesia)
- cheap as hell

The only disadvantage is alignment problems due to 4KiB sectors and the firmware issue forcing you to flash new firmware to all Samsung F4 drives to prevent corruption from occurring on NCQ writes.

For ZFS, having 4KiB sectors is less necessary since uBER can be coped with using other means on traditional 512-byte sector disks. For normal NTFS or other RAID (hardware/software) on the Windows platform the 4KiB sectors would be a distinct advantage, as NTFS provides no protection against this kind of corruption. ZFS at least gives you additional layers of protection.

With 2TB HDDs so cheap right now, you can put them in a RAIDZ2 and also use a few as external backup, like in an external USB3 casing; that would be sleek. Then you can power it on only when you update the backup stored on it, making it a solid protection for your data.

If you're using ZFS and have some data that is more important than others, consider using copies=2 on that dataset. Examples are personal documents, emails, work stuff, personal photos, etc. Those kinds of data should be backed up as well.
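
For example, setting that up is a one-liner per dataset (the dataset name here is made up, and remember copies=2 doubles the space those files take):

Code:
#!/usr/bin/env python
# Minimal example of keeping two copies of every block for an important dataset.
import subprocess

subprocess.check_call(["zfs", "create", "tank/documents"])          # placeholder name
subprocess.check_call(["zfs", "set", "copies=2", "tank/documents"]) # two copies of each block
subprocess.check_call(["zfs", "get", "copies", "tank/documents"])   # verify the setting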
 
All of the 4KB sector HDDs, from all manufacturers, have specifications for uncorrectable bit error rate of <1 in 10^14 or <1 in 10^15, which is identical to the spec for the corresponding 512B sector drives.

So there is nothing from any HDD manufacturer to back up sub.mesa's claim that the 4KB sector HDDs are more resistant to bit errors.
 
Yes, but we've already had that discussion, haven't we?

There's every likelihood of the specified BER values being inaccurate and misleading. The RAID edition drives are specified to have 10 times lower uBER, even though the drive is physically the same as its non-RE Black edition cousin. Believing the specified uBER is absolutely honest and accurate would be rather naive, I think.

Furthermore, I've found more truth in my own educated guesses than some other people's (or vendors) facts.
 
Anyone who believes that ALL the HDD manufacturers are publishing incorrect BER specs for the 4KB-sector HDDs, simply because you have a conspiracy theory about it, is beyond naive.

On one hand, we have the published specification from every single HDD manufacturer, which are identical for corresponding 512B and 4KB sector hard drives. On the other hand, we have a conspiracy theory from sub.mesa -- someone who posted in this very thread that the "electric charge" on HDDs degrades and causes sector errors.
 
I don't have time for this kind of discussion john4200, you've made your point.
 
Went ahead and picked these up in the middle of snowpocalypse:

1175049652_rhYbb-M.jpg


We've got 5 x Samsung 2TB F4 drives. Now to get down to business! I just finished screwing the drives into the cages and taking initial S.M.A.R.T. readings. Now to flash them with the latest firmware to get rid of that nasty bug. If I've got time, I'll do some initial benchmarking tonight in Windows 7.
 
If you want to benchmark with ZFS performance as well, you can use my ZFSguru distribution. It has a web-interface like FreeNAS and on the Disks->Benchmark tab you can perform ZFS benchmarks easily, producing nice visual graphs of all various configurations (RAID0, RAIDZ, RAIDZ2, etc). You can also see the effect of sectorsize override feature, which is beneficial to RAIDZ write performance depending on the disk count. You need to activate Advanced Mode in the preferences to make the Benchmark tab visible.

The Samsung F4s should be capable of 140MB/s read and 136MB/s write, tested with something like CrystalDiskMark.
 
Does ZFSGuru have a built-in driver for my M1015 HBAs? I loaded it up real quick and the drives do not show up.

A couple of updates. I'm currently in the middle of transferring a test copy of my data from the old WHS server over to the new server. Sustained transfer rates are roughly 45MB/sec at the moment which is about the best I've seen my old hardware perform. While the transfer has been going on, I've been tossing some movie isos back and forth between the local datastore (currently on one of the Western Digital power hog 320GB 7200rpm drives) and the F4 raidz array. The transfer isn't affected at all and I'm getting about 90-100MB/sec locally with the isos moving around. Not bad.

I think one of my next steps is to figure out the virtual and physical network topology that I want to implement. Working with VMware on a daily basis, I know it's critical to segregate management traffic, network traffic, and iSCSI traffic from one another for a smooth running system.

Pass-through of USB in ESXi 4.1 works great!!! My Aeon Labs Z-Stick Series 2 z-wave controller is connected and functioning properly with HomeSeer Pro in the Windows XP VM I have set up. Man is this friggin cool! I neglected to tell everyone that I've just started collecting parts for my home automation setup. I went nuts a couple weeks ago when the Schlage and GE stuff went on sale at Radio Shack. I managed to acquire one of the Schlage/Trane thermostats. I tried hooking it up last week, but wasn't able to do so since my furnace wasn't pushing a 24 volt common wire to the original thermostat. $15 worth of 18-7 thermostat wire and roughly 10 minutes of work and I now have my first z-wave device set up. I will probably spend the rest of tonight tinkering with events and setting them up.

Finally, the HBA seller on eBay shipped my replacements today and gave me a tracking number!! He again told me that I can keep the bad ones. Looking the HBAs up in LSI's online warranty system, they were made last month, so they technically still have a full three year warranty on them. I'm going to try and RMA one of them to see if I can get LSI to send me a new one. If they do, I'll have five of these suckers on hand after RMAing both of the bad ones. lol.

Things are starting to look up with this project!
 
HomeSeer is great. I have a telephone/modem PCI interface which I don't think works in passthrough mode, but otherwise it's a nice fit for virtualization...

Yeah, the Radio Shack sale was great... Z-Wave is pretty cool...
 
@Emulsifide
Those M1015 HBAs (SAS2008 controller) - are they running in IT mode? The driver does not support IR mode at the moment. You need system version 8.2-001 or later. This driver is still in development. The USAS2-L8i/e should work with this driver. Not sure about the M1015.
 
@Emulsifide
Those M1015 HBAs (SAS2008 controller) - are they running in IT mode? The driver does not support IR mode at the moment. You need system version 8.2-001 or later. This driver is still in development. The USAS2-L8i/e should work with this driver. Not sure about the M1015.

Odditory posted this on servethehome.com:

Yes the IBM M1015 is most equivalent to LSI 9240 in terms of firmware, it runs the iMR stack actually (lite version of the MR stack found on 926x and 928x cards) and its based on SAS2008 platform. And I said mostly equivalent to LSI 9240 because its had a few features toggled off in firmware, like RAID5 ability which is present on the retail LSI 9240-8i part. IBM's idea was to upsell the RAID5 feature with a software unlock key, so naturally LSI made it difficult to circumvent that by mere firmware cross-flashing.

Thus unfortunately it cannot be cross-flashed to become an LSI retail part and inherit that feature set, the way one can do with certain other OEM LSI parts. It also cannot be flashed with "IT" mode firmware like the 9211-8i, even though they're both based on the same SAS2008 platform.

It can however be firmware upgraded with the files from LSI's website for the 9240-8i. Reason being the firmware ROM file is universal for all variations of the card - retail and rebadged/OEM versions - this based on analysis with a hex editor. The firmware rom file also can't (easily) be modified because there's a hash check to protect against changed bytes, otherwise it would be (easier) to hack an OEM card into having the featureset of the LSI retail version of the card.

It appears I'm SOL for IT mode.
 