Search results

  1. J

    Virtual gaming server

    We have found it more energy efficient to run one big computer rather than a handful of little workstations and servers - not to mention the cost savings of having less equipment to maintain, depreciate and upgrade. Whether or not that applies to anyone else really depends on how many systems and...
  2. J

    Technical RaidZ information from the ZFS architect:

    A really good read. Thanks for posting it!
  3. J

    Anyone use ownCloud?

    I use it in place of Dropbox. Syncs well between the server and my iOS devices. I'm working on moving my calendars and contacts from Google across to it.
  4. J

    Finally did it...what OS to run

    +1 for a M1015/9201-8i/other SAS2008 card - not super expensive and performs really well.
  5. J

    Finally did it...what OS to run

    +1 to this! How valuable is the data? If it's really important that it not be corrupted at all look at ZFS, whereas if it's TV shows and the like you may not feel that level of safety necessary. If you want ZFS there's a couple of easy options - Napp-it is probably the easiest I've tried.
  6. J

    Keep WHS2011 vs moving on

    Not as far as I am aware.
  7. J

    Keep WHS2011 vs moving on

    Are you backing up basic files (e.g. user directories etc.) or backing up system images? We just use rsync for user directories and a ZFS on Linux box for those services, and it's pretty darn straight-forward.
  8. J

    raided ssds vs raided hdds

    I haven't ever heard of that - TRIM isn't always passed through depending on your RAID setup but that affects performance more than longevity, iirc. I don't see why that would be the case. We have a large number of both in RAID and the SSD fail rate is lower than the HDD rate.
  9. J

    Has anyone compared FreeBSD ZFS vs ZFS on Linux?

    I've used OpenIndiana+ZFS, FreeBSD+ZFS and Ubuntu/Debian+ZFS, and in all cases the performance of a 6-disk raidz2 could saturate gigabit 2-3 times over, which is more than enough for what we need. As such we use ZFS on Linux for reasons similar to yours (better package management and generally...
  10. J

    M1015 or 9201-8i - still recommended?

    Less consumption of PCI-E slots - if you need them for other cards and only had one free, for argument's sake, you wouldn't have much choice.
  11. J

    M1015 or 9201-8i - still recommended?

    Sure, as far as I'm aware there's not anything that's significantly better available in that price range. They perform really well, don't use much power and aren't super expensive. I always liked three SAS2008 cards in a 4224 to keep things simple.
  12. J

    My WD Black runs at 56c - is this too warm?

    For some hard info on drive temps, have a look at the Google study on drive failures - they found that above (and below!) a certain temperature range failure % increased.
  13. J

    ZFS - RAM Optimization

    As spankit has said, ZFS manages RAM extremely well with regard to using it as an adaptive read cache (ARC); adding a RAMdisk is adding an unnecessary layer. Using a RAMdisk as L2ARC will also reduce the amount of ARC available, as L2ARC eats up a small amount of RAM which would otherwise be used...
  14. J

    Repairing SD card

    You could try a live Ubuntu CD and a program like gddrescue - we've had some success with that before, imaging broken drives. It has the neat feature of going back and re-trying any bits that didn't work the first/second/third/nth time around, so if it's coming and going it may help.
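    The ddrescue workflow looks roughly like this (the device name is a placeholder - check yours with lsblk first; the map file is what lets it resume and retry bad areas):

```shell
# First pass: copy everything readable, skipping bad areas quickly.
# sdcard.map records which sectors succeeded/failed.
ddrescue -n /dev/sdX sdcard.img sdcard.map
# Second run: go back and retry only the failed areas, up to 3 times -
# this is the "nth time around" behaviour mentioned above.
ddrescue -r3 /dev/sdX sdcard.img sdcard.map
```

    Once you have sdcard.img you can run filesystem repair tools against the image rather than the dying card itself.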
  15. J

    need immediate help with something I deleted

    What partitions show up in disk management? Before you do anything, back up anything you may want on it (presumably not much?).
  16. J

    seagate 600 pro 240 weird speeds on sata2

    I have one of the 120GB models of that line and the benchmark graphs are the strangest I've ever seen on a drive - all over the shop. It seems to perform really well in real life, though, and I never did get a satisfactory explanation as to the oddities.
  17. J

    Advice needed: server for 48 SSDs (professional project)

    As a point of curiosity I looked up how many pluggings they're rated for - turns out it's 50 (from the eSATA section of http://en.wikipedia.org/wiki/Serial_ATA). Sounds like the OP's described situation might involve rather a lot more than 50 over the lifespan of the drives.
  18. J

    which adapter for additional SATA ports?

    What SAS/SATA issues? These cards are used in so many home setups it's not funny, with no issues. Can't refute the budget issue, though! ...I will say, though, that anything I have ever used which cost less than the $100-odd an M1015 does has been significantly worse, either in terms of...
  19. J

    WD 3TB Red vs SE series drives

    We replaced all of our Greens and Reds with 4TB Se drives and have been really happy with the performance and reliability so far. Only downside is that they run hotter, which means noisier fans. All up very much worth it - the 5 year warranty is good peace of mind.
  20. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    OK. After spinning up most of the VMs we need running on here performance is terrible - latency on the dual-mirror SSD zpool is sky-high - so I'm going to record a bunch of benchmarks, move to ESXi 5.5 and see how we go there. I suspect it'll be much improved, based on the fact that our AIO with...
  21. J

    Compact 20TB: planning.

    It does.
  22. J

    which adapter for additional SATA ports?

    M1015 or 9201-8i/other equivalent in IT mode - 8 ports total, no RAID, very fast, compatible with just about everything under the sun. Not terribly expensive and is just plug-and-play.
  23. J

    ESXi 5.1 Compatible SATA controller

    IBM M1015/LSI 9201-8i/equivalent. In IT mode.
  24. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    Current thought process:

    - Autostart fileserver VM on boot - done
    - Find the XenServer CLI commands for re-mounting the NFS share
    - Write a script on the fileserver VM to log into XenServer and remount the shares
    - Run the above when the fileserver VM boots

    ...and for shutdown - see if it's possible...
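    The remount step might look something like this with the xe CLI - a sketch only, with a placeholder SR name, run from the fileserver VM over SSH once its NFS export is up:

```shell
# Find the NFS storage repository by name (name is a placeholder),
# then re-plug each of its PBDs so XenServer reconnects to the share.
SR_UUID=$(xe sr-list name-label="NFS-on-fileserver" --minimal)
for PBD in $(xe pbd-list sr-uuid="$SR_UUID" --minimal | tr ',' ' '); do
    xe pbd-plug uuid="$PBD"
done
```

    I haven't verified this end to end on 6.2, but sr-list/pbd-list/pbd-plug are the standard xe commands for reattaching storage.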
  25. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    OK. After poking and prodding it with hard reboots and whatnot, here is what happens when 6.2 boots and the fileserver VM isn't brought back online: Click repair: ...so not really automatic, but good enough for my use. One thing that it's NOT doing on its own now is rebooting or shutting...
  26. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    Aha! THAT was the info I was looking for when I started this thread. Thank you for posting that. Well, my experience so far has been more positive than I would expect based on the above - the performance with ~10 VMs stored on a RAID10 ZFS SSD pool has mirrored what we were getting with bare...
  27. J

    Best Cheap HBA for >=3TB Drives, and 8-16ports+

    M1015 or equivalent (9201-8i etc) are really what you're after - we have run every size up to 4TB on our ZFS box without issue. Both the ones I mentioned are the SAS2008 chipset, which is proven and works with just about anything. If you get one of those two, get it in IT mode (9201-8i only come...
  28. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    In an all-in-one setup?
  29. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    Went with Xenserver, works perfectly.
  30. J

    Storing harddrives. New to the field.

    We had 16 3.5" hard drives mounted in a Fractal Design Define XL. Ten in the stock bays and six in 3-in-2 bays. If you want more look at a Norco 4224. Do you need RAID? Have you got backups? How much redundancy do you want?
  31. J

    WD RE4 Hard Drives.

    We have found the RE4s to be quite reliable - I wouldn't worry.
  32. J

    build new multipurpose file server

    I run raid10 (two mirrored pair vdevs in one zpool) with zfs and have no performance issues - the read speeds in particular are fantastic. What issues are you referring to? You can use SSDs as L2ARC caches with ZFS.
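    For reference, the layout described above - two mirrored pairs striped into one pool, plus an SSD as L2ARC - would be created along these lines (device names are placeholders):

```shell
# Two mirror vdevs in one pool = "RAID10" in ZFS terms: writes are
# striped across the mirrors, reads can come from any disk in a pair.
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# Add an SSD as an L2ARC read cache.
zpool add tank cache /dev/sde
zpool status tank
```

    Losing one disk from each pair is survivable; losing both disks of the same pair is not, so backups still matter.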
  33. J

    build new multipurpose file server

    Hold up. ZFS doesn't "need" 24GB RAM - it needs a few GB (2GB-4GB is enough) and will use any other RAM it has available for caching (ARC) over time. Unless you are running dedup, that is. Btw - don't use dedup. :p Are you planning on running ESXi still in an all-in-one setup? Which OS do you...
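    On ZFS on Linux you can see and cap that caching behaviour yourself - a sketch, assuming the zfs module is loaded:

```shell
# Cap the ARC at 4 GiB (value in bytes) so the rest of the RAM stays
# free for VMs. The persistent version goes in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_arc_max=4294967296
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
# Current ARC size vs its target maximum:
grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats
```

    Without a cap, "size" just grows toward available RAM over time - that's the behaviour people mistake for ZFS "needing" 24GB.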
  34. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    I hope there is something for those who want to use the free version for more than 60 days like you used to be able to! Definitely going with passthrough, thanks :)
  35. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    Hang on. I was just reading... does the web client really expire after 2 months and the native client won't function for V.10 VMs? Is there any reason not to just use 5.1 assuming <=32GB RAM?
  36. J

    Napp-it - NFS & ESXi 5.5

    You don't necessarily need L2ARC - if your ARC is big enough it may even never be used. I had one for a while but after checking the stats it was almost never touched as the RAM allocated was large enough, so the SSD was used for something else. Check your ARC stats once it's up and running to...
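    Checking whether an L2ARC is earning its keep can be done straight from the kernel stats - a sketch for ZFS on Linux (on FreeBSD/illumos the same counters are exposed via kstat/sysctl):

```shell
# ARC vs L2ARC hit/miss counters. If l2_hits stays near zero while
# hits is high, the ARC alone is covering your working set and the
# SSD is better used elsewhere - which is what happened in my case.
awk '$1 ~ /^(hits|misses|l2_hits|l2_misses)$/ {print $1, $3}' \
    /proc/spl/kstat/zfs/arcstats
```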
  37. J

    How much power is your server drawing?

    All on our UPS:

    - P9D-E/4L, E3-1240V3, 32GB RAM, 6x 4TB enterprise drives, 3x HBAs, external SAS card, 2x 2TB enterprise drives, 6x SSDs, Platinum PSU
    - Dual-G34 motherboard, 70GB DDR3, 2x 16-core Opterons, 1x SSD, Platinum PSU
    - P9D-WS, E3-1245V3, 16GB DDR3, 4x SSDs, Platinum PSU, 3x monitors

    ...the...
  38. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    Thanks :) After doing some more searching after the replies it's starting to look a lot like it's not worth my time to try out XenServer in an AIO - I'm aiming for a minimum of time to switch so taking the known/safe path is more appealing.
  39. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    Was that with XenServer or ESXi?
  40. J

    All-In-One: XenServer 6.2 vs. ESXi 5.5

    Well. That isn't a promising start! Thanks for the response. I don't suppose you tried ESXi with the same hardware config?