Search results

  1. N

    What's your backup strategy?

    I occasionally put all my important files on a USB stick, and then go drop it in some random heavily-traveled area. A mall, a park, whatever. My assumption being that at least some percentage of them will end up plugged into some guy's computer and uploaded to the internet in some sort of "look at all...
  2. N

    To FreeNAS or not to FreeNAS?

    My first reaction was 'probably a failing component, like a slowly dying disk', but while that could be the case, when I got to the end and read your stats my reaction was 'not nearly enough RAM'.
  3. N

    To FreeNAS or not to FreeNAS?

    No, not bashing on them. Especially ZFS - it's perfectly fine on FreeBSD and illumos-based OSes. It's just ZoL I'm still a little leery of. That's less, btw, to do with ZFS itself or even ZoL, and more to do with unfinished or unavailable features -- DTrace key amongst them. I come from the...
  4. N

    To FreeNAS or not to FreeNAS?

    This is not meant to be snarky, though I can't come up with a way for it not to sound that way. Are we at the point where BTRFS is stable enough that it's a selling point for a storage appliance intended for any form of production use, including home use, to be built on it? I should add I still don't approve of...
  5. N

    Thin Provisioning + Virtualization

    I'm glad rsq already brought it up. If what you have is a bunch of nearly or entirely homogeneous VMs, especially if they're not permanent, or are semi-permanent in that you can replace them when doing major upgrades to the OS, etc., and you either do not need HA storage or your total number of...
  6. N

    ZFS/ESXi All-In-One - SuperMicro Motherboard Questions

    What is the state of VGA pass-thru on VMware? Wouldn't you rather have a box that could potentially handle high-end gaming if the desire hits? Last I checked was a while ago, but at that point only open-source Xen, on very specific hardware, could handle a modern video card being passed through...
  7. N

    Best drives for RAIDZ (TLER, URE, cost, etc.)

    I disagree with him. You're running ZFS. In RAID. It has another copy of the data. Why would you ever want to let the drive crunch away trying for more than 7-8 seconds to recover a block of data (which after all that time it may very well still fail to read) when you've got another...
  8. N

    ZFS boot/zil/l2arc all off of one SSD?

    Effectively, yes. Yes. More so if you're going with defaults, as your zvol will have a significantly lower average block size than your NFS share. Good, then you should be in good shape, assuming you're OK with having a period of disruption and controlled chaos after a power loss (which you may...
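
    For context on the block-size remark: a zvol's volblocksize (historically 8K by default) is much smaller than a filesystem dataset's default 128K recordsize. A minimal sketch of checking both, with hypothetical pool and zvol names:

    ```sh
    # Compare the block sizes in play (tank and tank/vmvol are example names).
    zfs get recordsize tank            # filesystem datasets default to 128K
    zfs get volblocksize tank/vmvol    # zvols have historically defaulted to 8K

    # volblocksize can only be chosen at creation time, e.g.:
    zfs create -V 100G -o volblocksize=64K tank/vmvol2
    ```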
  9. N

    ZFS boot/zil/l2arc all off of one SSD?

    No, just the opposite, which is why people think iSCSI is faster. Out of the box on the illumos derivatives I'm aware of, COMSTAR enables "Write Back Cache" by default on LUs. This "feature" basically says: unless the client calls for synchronous writes, I will assume asynchronous writes. This...
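
    A sketch of how you could inspect and change that behaviour with COMSTAR's stmfadm, assuming an existing LU (the GUID below is a placeholder); wcd is the "write cache disabled" property:

    ```sh
    # List LUs with their properties, including the write-back cache state.
    stmfadm list-lu -v

    # Disable the write-back cache on one LU (GUID is a placeholder), so it
    # stops treating client writes as asynchronous unless told otherwise.
    stmfadm modify-lu -p wcd=true 600144F0ABCDEF0000005397C9DC0001
    ```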
  10. N

    ZFS boot/zil/l2arc all off of one SSD?

    I find it somewhat concerning that the prevailing wisdom seems to be that you only need a slog device for NFS, or that only with NFS does it become a necessity. It is just as necessary on iSCSI. If you don't think that, then you've been running iSCSI with writeback cache enabled, and you're not...
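
    If you want iSCSI writes honored synchronously, one common approach (a sketch, with example device and dataset names) is to force sync on the backing zvol and give the pool a dedicated log device:

    ```sh
    # Add a mirrored slog to the pool (device names are examples).
    zpool add tank log mirror da8 da9

    # Ensure writes to the backing zvol are always treated as synchronous.
    zfs set sync=always tank/iscsivol
    ```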
  11. N

    ZFS: mix SATA and USB disks?

    Do you enjoy tragedy?
  12. N

    ZFS: bringing a disk online in an unavailable pool

    New OS install with latest ZFS bits - try to follow the instructions in the Serverfault reply. Also, to answer your second question: no, absolutely no way. If you cannot 'fool' the system into re-importing the pool with the old drive, you're not getting it back through any non-Herculean methods. Which...
  13. N

    Cheapest/easiest way to store archived data in the cloud? 5TB and slowly growing

    My $0.02 is that I wouldn't trust critical backups to any company running a solution that relies on oversubscribing. That's going to be any company offering a flat fee for 'unlimited' or outrageous amounts of space. They're banking on their income outpacing their expenditures primarily through a large...
  14. N

    Which OS for home ZFS system?

    Yes, most of the illumos derivatives also have 'beadm' or an equivalent. The ability to upgrade into a new snapshot/clone and boot from it, and roll back out of it if there's a problem, is very, very handy in production environments. I should also correct one comment in my post -- for NFSv3, they're all...
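
    For anyone unfamiliar with boot environments, the beadm workflow being referred to looks roughly like this (BE names are examples):

    ```sh
    # Snapshot the current root into a new boot environment before upgrading.
    beadm create before-upgrade
    beadm list

    # If the upgrade goes badly, activate the saved BE and reboot into it.
    beadm activate before-upgrade
    init 6
    ```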
  15. N

    Which OS for home ZFS system?

    If you're planning on making use primarily of NFS, I find FreeBSD, Linux, or an illumos derivative all usable. Not equally usable, but all reasonable. If you're planning on making major use of CIFS/SMB, and don't need SMB2.1 or higher, illumos derivatives have the advantage. If you're planning...
  16. N

    Why do people say "Raid won't prevent data loss"...

    Can't do offsite backups? Unfortunate, but that doesn't mean don't DO backups. Onsite backups improve your data's safety by eleventy-bajillion percent when compared to no backups at all.
  17. N

    Never buy hard drives from Newegg!

    Another anecdotal +1 for NewEgg. Been ordering drives from them off and on going back basically to their first year. From 1-2 disks at a time up to 4-8 at a time. Never had one show up in a condition, or in packaging, that made me leery. I have noticed an upward trend in their packaging of...
  18. N

    ZFS trim when only some drives are SSDs

    My understanding of this code is that it is only in FreeBSD, so it is not a 'general' question. :) That same understanding is that the code identifies what devices support TRIM (including some trickery to get around devices that claim support but don't actually), so my not-personally-verified...
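
    On FreeBSD of that era you could confirm whether TRIM was enabled, and whether it was actually being issued, roughly like this (sysctl names as I recall them; verify on your release):

    ```sh
    # Is ZFS TRIM enabled at all?
    sysctl vfs.zfs.trim.enabled

    # TRIM activity counters, including requests against devices that
    # turned out not to support it.
    sysctl kstat.zfs.misc.zio_trim
    ```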
  19. N

    big zfs ssd problem - zil/l2arc max 2500 iops

    "Generally yes, but as far as I understood, if I set it to single or low, it negatively impacts the rest of the pool disks?" This perhaps means you don't understand yet. The ZIL is low queue depth. You can't affect that. That is how it works. So if your SSD doesn't perform well at low queue...
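
    If you want to see what your SSD actually does at the queue depth the ZIL cares about, here is a sketch with fio (the device name is an example, and this destroys data on the target):

    ```sh
    # Measure sync-write latency at queue depth 1, which is what slog
    # performance actually depends on.
    fio --name=slog-qd1 --filename=/dev/da8 --rw=write --bs=4k \
        --ioengine=psync --iodepth=1 --sync=1 --runtime=60 --time_based
    ```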
  20. N

    Increase iSCSI Performance over 1 GB Ethernet?

    LACP does not increase individual transfer speeds, and depending on setup may not even increase per-host transfer speeds. That is not what it is for. It is for link redundancy, period. If you're making lots of connections and are using LACP with certain settings it might improve your aggregate...
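
    To make that concrete: even with an aggregation in place, a single iSCSI session hashes onto one member link, so one transfer still tops out at 1 Gb. A sketch of building the LACP aggregation anyway, on illumos for example (link names are hypothetical):

    ```sh
    # Create an LACP aggregation from two 1 Gb links (names are examples).
    dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0

    # One TCP flow (one iSCSI session) still rides a single member link;
    # only multiple flows/clients can benefit in aggregate.
    dladm show-aggr aggr0
    ```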
  21. N

    How much does constant writing (shadowplay) wear a drive down?

    Glad I only have a GTX 570 and thus this 'feature' wasn't running w/o my knowledge... On a mechanical drive I wouldn't expect this to significantly decrease the longevity of the disk. On an SSD, however, 5 MB/s added on top of other typical wear is likely to have a potentially significant impact...
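
    Back-of-the-envelope on that 5 MB/s figure, assuming worst-case continuous recording:

    ```sh
    # 5 MB/s of extra writes, around the clock, per day and per year.
    echo "$((5 * 60 * 60 * 24)) MB/day"         # 432000 MB, roughly 432 GB/day
    echo "$((5 * 60 * 60 * 24 * 365)) MB/year"  # about 158 TB/year
    ```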
  22. N

    ZFS N00b: raidz3 and larger qty of disks?

    Yes. The more vdevs, the higher the IOPS potential. ZFS will 'stripe' all vdevs in the pool by default. The catch is it isn't necessarily a good idea to be going to more vdevs if doing so requires you to drop the parity level. One of the fairly hard & fast rules I recommend is not going under...
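
    A sketch of what "more vdevs, striped by the pool" looks like in practice (device names are examples); each raidz2 vdev contributes its own IOPS:

    ```sh
    # One pool built from two raidz2 vdevs; ZFS stripes across both,
    # roughly doubling IOPS potential versus a single wide vdev.
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
        raidz2 da8 da9 da10 da11 da12 da13 da14 da15
    ```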
  23. N

    ZFS N00b: raidz3 and larger qty of disks?

    Haha, yes, good catch.
  24. N

    ZFS N00b: raidz3 and larger qty of disks?

    That would also be preferable, though I was personally suggesting staying @ 16 and just using the enclosure. If you really want to not be able to just move the enclosure around, go with 2x20 z2, yes.
  25. N

    ZFS N00b: raidz3 and larger qty of disks?

    I hope you typo'd. Please don't make a 9-disk raidz (raidz1) vdev of 3 TB drives. That has data loss potential written all over it. Also, having 2 hot spares + 2 9-disk raidz vdevs makes NO sense, because you could instead do 2 x 9-disk raidz2, adding one of those hot spares to each vdev to...
  26. N

    nas4free RED vs Black

    ZFS can survive w/o TLER, but as I stated earlier, you're going to hate it when a drive goes into deep recovery and your pool either hangs until it recovers or drops the disk from the pool (which depending on environment could take nearly as long to actually do as the deep recovery action takes...
  27. N

    LSI Outs Their SAS12 lineup

    Oooh! Now we wait a few months for the inevitable initial firmware updates..
  28. N

    ZFS N00b: raidz3 and larger qty of disks?

    Personally I'd stick every disk in the pool into just the enclosure, because then you can swap out the server in front of it more easily in the future, but I'm a lazy git by nature, so YMMV. I'd also go with 2x8 raidz2 vdevs - you're doubling your IOPS potential for not much loss in capacity...
  29. N

    How many of you use NAS?

    I'm debating 'winning' this and stroking my ego with a list of systems I have access to, or 'losing' this horribly and feeling very inadequate by pointing out my only personal NAS box is my home NAS -- and it's a 4-disk zpool on an 8 GB RAM box.. that I'm not even using 70% capacity on. :( What...
  30. N

    SAS Multipath - Supermicro Dual Expander Backplane - BPN-SAS2-826EL2

    So wait, you have single-port HBAs? Not dual-port HBAs? If you had dual-port HBAs you could provide HBA & JBOD & cable redundancy w/o any daisy chaining. In general I tend to recommend the most diverse possible setup that maintains the flattest topology possible. e.g., avoid daisy chaining as...
  31. N

    big zfs ssd problem - zil/l2arc max 2500 iops

    As others have already stated - your SSD's latency at a single or very low queue depth is ultimately the only value of real importance in most ZIL use-cases, but it sounds like that eventually got communicated in an understandable way for you, so I'll leave it at that. l2arc_write_max...
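
    For reference, l2arc_write_max is a module/system tunable rather than a pool property; a sketch of where it gets set (the value is an example, in bytes):

    ```sh
    # illumos-style: add to /etc/system and reboot.
    echo 'set zfs:l2arc_write_max = 67108864' >> /etc/system

    # FreeBSD-style: loader tunable.
    echo 'vfs.zfs.l2arc_write_max="67108864"' >> /boot/loader.conf
    ```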
  32. N

    JBOD software with individual drive spin down?

    I hope you keep backups of all this data, or that it is unimportant data you don't mind losing! Because some of those drives will die with time. Stick to something like Unraid or the other suggestions. Stay far away from 'normal' RAID, like the one built into Windows, because any 'RAID0' setup...
  33. N

    nas4free RED vs Black

    I come from an 'enterprise'-ish world. So, generally, I recommend Seagate or Hitachi. :) -- My time on this forum seems to indicate a significant feeling of distrust for Seagate, which is very interesting to me, and IMHO must represent some difference between their retail and enterprise edition...
  34. N

    JBOD software with individual drive spin down?

    I would think, essentially, yes. Assuming you want actual RAID-level redundancy, then it's going to have to spin up every drive that has a 'part' of that file. On most RAID systems, this is going to be basically all of them. The only one I'm presently aware of that makes a point of doing what...
  35. N

    nas4free RED vs Black

    TLER is 'usable' in anything. TLER is just there. TLER means the drive will not take longer than X seconds to try to recover from a problem, generally 6-8 seconds or so. This is useful even on ZFS. When a drive without TLER goes into one of those 30-45+ second recovery actions, your pool's...
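
    On drives that support it, you can usually inspect or set that recovery limit (ERC/TLER) with smartctl; a sketch, with an example device name and timeouts given in tenths of a second:

    ```sh
    # Show the current error-recovery-control timeouts, if supported.
    smartctl -l scterc /dev/ada0

    # Cap read and write recovery attempts at 7.0 seconds each.
    smartctl -l scterc,70,70 /dev/ada0
    ```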
  36. N

    [Q] RAID-Z migration from unRAID

    Go for it. Just -- keep backups. Which would be my advice if you went ZFS, too.
  37. N

    OCZ Stock Crashes 60.5% in TWO days

    All of their SSDs were crap. Those few, those very few exceptions? They just prove the rule. DOAs, early deaths, inexplicable incompatibilities, and oh, did I mention data loss (yes, seen it -- the drive accepted writes that it then provably never actually put to NAND, and later handed back old data as...
  38. N

    Question about SATA 3.0 Gb/s

    Interesting. Perhaps this is a difference between the desktop & server worlds? In the server world, the drives I've seen the most failures on have been Western Digital, and the ones with the highest failure rates are also Western Digital. Indeed, WD is practically a bad word, and to be mocked and ridiculed when...
  39. N

    large storage with software RAID

    The problem with suggesting iSCSI is 'faster' is that it glosses over the myriad of current deficiencies in COMSTAR, and the myriad of current deficiencies in zvols, that just aren't present when you opt for filesystems and SMB/NFS. I guess what I'm trying to say is IF you're going that route, /be...
  40. N

    Inexplicably Decreased Performance - RAID Array on Live Server vs Test Server

    Whew! Well, if it helps, your description of the problem seems to implicate DPM? That 'upon installing DPM' the performance drops? That wouldn't really be all that surprising, depending on what DPM is and does. It could be installing an agent, it could be modifying Windows properties related to...