-> PCIe Card Slot: agreed. I'll probably create room by going half-height or using a 90° riser.
-> True: I suspect I'm going to need some more room in general in the bottom section. I don't mind making it taller, but I'm trying to keep the footprint about the same.
-> mSATA would be nice but...
Very nice. Pretty much exactly what I'm looking for.
Haven't found it available anywhere yet though. Amazon.com has it, but it would be nice to be able to buy it up here in Canada.
Haha, possibly. At the moment I'm not too worried about that. There's 10mm of clearance on each side of the drive and my fingers aren't that fat (yet). Depending on the final dimensions of the case, I might be able to push that to 15mm per side.
Power-supply-wise, assuming Mini-ITX, I'll probably go with a PicoPSU-160-XT and a custom 12V/5V 300W power supply. I'm trying to keep the cabling to a minimum here. An alternative would be one of the 300/350W Mini-ITX power supplies (FSP, for example).
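For a rough power budget (back-of-napkin numbers, not measured): a typical 3.5" drive pulls around 2A on the 12V rail at spin-up, so 10 drives is roughly 10 x 24W = 240W peak for a couple of seconds, settling to maybe 60-80W at idle. That's why I'm sizing the supply at 300W even though steady-state draw is modest; staggered spin-up would relax that requirement a fair bit.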
I'm open to ideas from the community...
Cables aren't part of the design just yet. I'm still trying to figure out whether I go with a backplane or take the same route as Backblaze with SATA fan-out cables. Once I've decided that, I'll have a better idea of my cabling requirements.
I have looked at this board, and kind of designed this case with it in mind. It's an awesome board; 4 x SFF-8087 connectors on a Mini-ITX board? Sweet!
However, I've subsequently discovered hardware-driver compatibility issues with Illumos-based distros and FreeNAS that make this board...
Interesting board; that might be exactly what I'm looking for, though it would only support 8 disks natively. I'm curious what that extra SFF-8087 port is for (there appear to be 3 on that board?). Intel VT-d is not really an issue; I plan to make this a native ZFS box. I already have a...
So I've been looking for a decent NAS case for quite some time now, one that can push the limits on disk count. I've come across cases holding up to 8 drives or so, but they have a lot of wasted space internally, are expensive and noisy, and leave me wanting.
So, I've started putting together some...
pretty standard for ZFS; expensive, higher-IO disks for production, larger-capacity, slower disks for backup. You'll find this to be a "run of the mill" use of ZFS. I have several SAS-based production zpools (mirror-based) shipping snapshots via zfs send/recv and netcat to a SATA-based (raidz)...
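For anyone curious, the pipeline is roughly this (a minimal sketch; host, pool and snapshot names are made up, and netcat's listen flags differ between the BSD and GNU flavours):

[code]
# On the backup box: listen on a port and feed the stream into zfs recv.
# BSD/Illumos netcat shown; GNU netcat wants "nc -l -p 3333" instead.
nc -l 3333 | zfs recv -F backup/tank

# On the production box: send the incremental between two snapshots.
zfs send -i tank/data@yesterday tank/data@today | nc backup-host 3333
[/code]

No ssh encryption overhead this way, which matters when you're pushing hundreds of MB/s across a trusted LAN.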
Hi houkouonchi,
I noticed you had a smartinfo.pl script that you used to aggregate all the critical SMART stats for your disks. Is this publicly available, or was it something you wrote yourself? I'm looking for something exactly the same, and at the point where I'm going to write the script...
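Something along these lines is what I'd end up writing myself (a rough sketch, not your script; the /dev/sd? glob assumes Linux, and the attribute names are the usual ATA ones):

[code]
#!/bin/sh
# Loop over all disks and pull the critical SMART attributes with smartctl.
for dev in /dev/sd?; do
    echo "== $dev =="
    smartctl -A "$dev" | awk '
        $2 == "Reallocated_Sector_Ct" ||
        $2 == "Current_Pending_Sector" ||
        $2 == "Offline_Uncorrectable" ||
        $2 == "Temperature_Celsius" { printf "%-24s %s\n", $2, $10 }'
done
[/code]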
Just ordered up 8 of the 4TB Seagate ST4000DM000s to replace my 2TB Samsung HD204Us (need more space!). It'll be interesting to see how they perform.
Now I just have to migrate the data and sell the 8 x 2TB disks before my wife finds out I bought $1600 in disk drives :D
CPU discussions only really come into play with large ZFS systems that have enough disks to saturate controller and CPU I/O. I've noticed this recently when performing a "zpool scrub" on a system with an older quad-core Xeon. It was hitting 800+ MB/s, but running "top" revealed 100% CPU...
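For reference, checking whether you're CPU-bound during a scrub is straightforward (pool name is a placeholder; prstat is the Solaris/Illumos equivalent of top):

[code]
zpool scrub tank                 # kick off the scrub
zpool status tank | grep scan    # the scan: line reports the current rate
prstat                           # watch per-process CPU; pegged = CPU-bound
[/code]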
*subscribed*
This looks interesting, guys. I'm interested in the Solaris side as well, since I'm having backup-window issues with ZFS send/recv in our production environment. Looks promising!
Are you sure? It's entirely possible to accidentally fill your root pool by doing something stupid like a zfs send/recv to it, or something similar. I'm with brutalizer on this one; boot the VM with the OpenIndiana CD ISO image and see if you can mount the zpool. At that point, do a 'zfs list'...
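Rough steps from the live environment (a sketch; "rpool" is the usual root pool name, and the dataset in the destroy example is hypothetical):

[code]
zpool import -f rpool               # -f since the pool was last used elsewhere
zfs list -o name,used,avail,refer   # a full pool shows ~0 in AVAIL
zfs list -t snapshot                # stale snapshots are the usual culprits
# If a stray send/recv filled the pool, destroy the offending target, e.g.:
# zfs destroy -r rpool/accidental_recv
[/code]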
Light fragmentation will rarely cause increased login times and degraded user performance on MS Terminal Servers. That being said, heavy fragmentation certainly can (i.e., upwards of, say, 50%). Unfortunately this is a contentious topic, as it's difficult to accurately measure the effect that...