We have found it more energy efficient to run one big computer rather than a handful of little workstations and servers - not to mention the cost savings from having less equipment to maintain, depreciate and upgrade. Whether or not that applies to anyone else really depends on how many systems and...
+1 to this!
How valuable is the data? If it's really important that it not be corrupted at all, look at ZFS, whereas if it's TV shows and the like you may not feel that level of safety is necessary. If you want ZFS there are a couple of easy options - Napp-it is probably the easiest I've tried.
Are you backing up basic files (e.g. user directories) or backing up system images? We just use rsync for user directories and a ZFS on Linux box for those services, and it's pretty darn straightforward.
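For what it's worth, a minimal sketch of the sort of rsync job we mean for user directories - the host and paths here are just placeholders:

    # nightly pull of user directories onto the ZFS box (placeholder host/paths)
    rsync -aH --delete /home/ backupbox:/tank/backups/home/

Drop --delete if you'd rather the backup keep files that have been removed at the source.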
I haven't ever heard of that - TRIM isn't always passed through depending on your RAID setup but that affects performance more than longevity, iirc. I don't see why that would be the case.
We have a large number of both in RAID and the SSD fail rate is lower than the HDD rate.
I've used OpenIndiana+ZFS, FreeBSD+ZFS and Ubuntu/Debian+ZFS and in all cases the performance of a 6-disk raidz2 could saturate gigabit 2-3 times over, which is more than enough for what we need. As such we use ZFS on Linux for reasons similar to yours (better package management and generally...
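If it helps, a 6-disk raidz2 pool like that is a one-liner - disk names below are placeholders, and in practice you'd want /dev/disk/by-id paths:

    # 6-disk raidz2 pool (placeholder disk names; prefer /dev/disk/by-id in real use)
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg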
Sure, as far as I'm aware there's not anything that's significantly better available in that price range. They perform really well, don't use much power and aren't super expensive. I always liked three SAS2008 cards in a 4224 to keep things simple.
As spankit has said, ZFS manages RAM extremely well with regard to using it as an adaptive read cache (ARC); adding a RAMdisk is adding an unnecessary layer. Using a RAMdisk as L2ARC will also reduce the amount of ARC available, as L2ARC eats up a small amount of RAM which would otherwise be used...
You could try a live Ubuntu CD and a program like gddrescue - we've had some success with that before, imaging broken drives. It has the neat feature of going back and re-trying any bits that didn't work the first/second/third/nth time around, so if it's coming and going it may help.
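Roughly what that looks like from the live CD - the source device, image path and retry count here are placeholders:

    sudo apt-get install gddrescue
    # first pass grabs whatever reads cleanly, then bad areas are retried 3 times
    sudo ddrescue -d -r3 /dev/sdX broken-drive.img rescue.log

The mapfile (rescue.log) is what lets it resume and re-try the bad spots across runs, so keep it with the image.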
I have one of the 120GB models of that line and the benchmark graphs are the strangest I've ever seen on a drive - all over the shop. It seems to perform really well in real life, though, and I never did get a satisfactory explanation as to the oddities.
As a point of curiosity I looked up how many pluggings they're rated to - turns out it's 50.
(from the wiki - esata section http://en.wikipedia.org/wiki/Serial_ATA)
Sounds like the OP's described situation might involve rather a lot more than 50 over the lifespan of the drives.
What SAS/SATA issues? These cards are used in so many home setups it's not funny, and without issues. Can't refute the budget issue, though!
...I will say, though, that anything I have ever used which cost less than the $100-odd a M1015 does has been significantly worse, either in terms of...
We replaced all of our Greens and Reds with 4TB Se drives and have been really happy with the performance and reliability so far. Only downside is that they run hotter, which means noisier fans. All up very much worth it - the 5 year warranty is good peace of mind.
OK. After spinning up most of the VMs we need running on here performance is terrible - latency on the dual-mirror SSD zpool is sky-high - so I'm going to record a bunch of benchmarks, move to ESXi 5.5 and see how we go there. I suspect it'll be much improved, based on the fact that our AIO with...
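Something like a quick fio run would do for the before/after numbers - fio isn't something I've mentioned above, so treat this as just one option, and the file name, size and runtime are arbitrary:

    # 4k random-read test against a file on the SSD pool (placeholder path, arbitrary sizes)
    fio --name=randread --filename=/tank/fio.test --rw=randread --bs=4k \
        --iodepth=32 --ioengine=libaio --size=4G --runtime=60 --time_based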
Current thought process:
Autostart fileserver VM on boot - done
Find the XenServer CLI commands for re-mounting the NFS share
Write a script on the fileserver VM to log into XenServer and remount the shares (rough sketch below)
Run the above when the fileserver VM boots
...and for shutdown - see if it's possible...
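Here's a rough sketch of what that remount script might look like, assuming key-based SSH from the fileserver VM to the XenServer host and an NFS SR called "nfs-vmstore" - both names are placeholders:

    #!/bin/sh
    # Placeholder host and SR name - adjust to suit
    XS_HOST=root@xenserver
    SR_NAME=nfs-vmstore

    # re-plug the PBDs for the NFS SR once the fileserver's exports are up
    ssh "$XS_HOST" "
      SR_UUID=\$(xe sr-list name-label=$SR_NAME --minimal)
      for PBD in \$(xe pbd-list sr-uuid=\$SR_UUID --minimal | tr ',' ' '); do
        xe pbd-plug uuid=\$PBD
      done
    "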
OK. After poking and prodding it with hard reboots and whatnot, here is what happens when 6.2 boots and the fileserver VM isn't brought back online:
...so not really automatic, but good enough for my use.
One thing that it's NOT doing on its own now is rebooting or shutting...
Aha! THAT was the info I was looking for when I started this thread. Thank you for posting that.
Well, my experience so far has been more positive than I would expect based on the above - the performance with ~10 VMs stored on a RAID10 ZFS SSD pool has mirrored what we were getting with bare...
M1015 or equivalent (9201-8i etc) are really what you're after - we have run every size up to 4TB on our ZFS box without issue. Both the ones I mentioned are the SAS2008 chipset, which is proven and works with just about anything. If you get one of those two, get it in IT mode (9201-8i only come...
We had 16 3.5" hard drives mounted in a Fractal Design Define XL. Ten in the stock bays and six in 3-in-2 bays. If you want more look at a Norco 4224.
Do you need RAID? Have you got backups? How much redundancy do you want?
I run raid10 (two mirrored-pair vdevs in one zpool) with ZFS and have no performance issues - the read speeds in particular are fantastic. What issues are you referring to?
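For reference, that layout is just two mirror vdevs in the one pool - a sketch with placeholder disk names:

    # "raid10"-style pool: two mirrored pairs striped together (placeholder disks)
    zpool create tank mirror sdb sdc mirror sdd sde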
You can use SSDs as L2ARC caches with ZFS.
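Adding one is a single command - the pool and device names here are placeholders:

    # attach an SSD as an L2ARC cache device to an existing pool (placeholder names)
    zpool add tank cache sdf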
Hold up. ZFS doesn't "need" 24GB RAM - it needs a few GB (2GB-4GB is enough) and will use any other RAM it has available for caching (ARC) over time. Unless you are running dedup, that is. Btw - don't use dedup. :p
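If you do want to stop ARC growing into most of the RAM, ZFS on Linux lets you cap it with a module parameter - the 8 GiB figure below is just an example value:

    # /etc/modprobe.d/zfs.conf - cap ARC at 8 GiB (example value)
    options zfs zfs_arc_max=8589934592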
Are you planning on running ESXi still in an all-in-one setup? Which OS do you...
You don't necessarily need L2ARC - if your ARC is big enough it may never even be used. I had one for a while, but after checking the stats it was almost never touched as the RAM allocated was large enough, so the SSD was used for something else. Check your ARC stats once it's up and running to...
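On ZFS on Linux the raw numbers live in /proc - the hit/miss counters there (or the arcstat/arc_summary scripts, if installed) will tell you whether an L2ARC would actually earn its keep:

    # raw ARC counters; compare hits/misses and the l2_hits/l2_misses lines
    grep -E '^(hits|misses|l2_hits|l2_misses)' /proc/spl/kstat/zfs/arcstats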
Thanks :) After doing some more searching following the replies, it's looking a lot like it's not worth my time to try out XenServer in an AIO - I'm aiming for a minimum of time to switch, so taking the known/safe path is more appealing.