I occasionally put all my important files on a USB stick, and then go drop it in some random heavily-traveled area. A mall, a park, whatever. My assumption being at least some % of them will end up plugged into some guy's computer and uploaded to the internet in some sort of "look at all...
My first reaction was 'probably a failing component, like a slowly dying disk', but while that could be the case, when I got to the end and read your stats my reaction was 'not nearly enough RAM'.
No, not bashing on them. Especially ZFS - it's perfectly fine on FreeBSD and the illumos-based OSes. It's just ZoL I'm still a little leery of. That's less, btw, to do with ZFS itself or even ZoL, and more to do with unfinished or unavailable features -- DTrace key amongst them. I come from the...
This is not meant to be snarky, though I can't come up with a way for it not to sound that way. Are we at the point where BTRFS is stable enough that it's a selling point for a storage appliance intended for any form of production use, including home use, to be built on it?
I should add I still don't approve of...
I'm glad rsq already brought it up.
If what you have is a bunch of nearly or entirely homogeneous VM's, especially if they're not permanent, or are semi-permanent in that you can replace them when doing major upgrades to the OS, etc, and you either do not need HA storage or your total number of...
What is the state of VGA pass-thru on VMware? Wouldn't you rather have a box that could potentially handle high-end gaming if the desire hits? Last I checked was a while ago, but at that point only open-source Xen, on very specific hardware, could handle a modern video card being passed through...
I disagree with him.
You're running ZFS. In RAID. It has another copy of the data. Why would you ever want to let the drive crunch away for more than 7-8 seconds trying to recover a block of data (which it may very well still fail to read after all that time) when you've got another...
Effectively, yes.
Yes. Even more so if you go with the defaults, as your zvol will have a significantly lower average block size than your NFS share.
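To make that concrete, here's an illustrative sketch (pool/dataset names like "tank" are placeholders, and the exact defaults have varied by platform and version -- historically 128K recordsize for filesystems vs. an 8K volblocksize for zvols):

```shell
# Compare the two defaults (names are placeholders):
zfs get recordsize tank            # filesystem default: 128K
zfs get volblocksize tank/vol      # zvol default: historically 8K

# If the zvol will mostly see large I/O, you can pick a bigger block
# size -- but only at creation time; volblocksize can't be changed later:
zfs create -V 100G -o volblocksize=64K tank/vol64k
```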
Good, then you should be in good shape, assuming you're OK with having a period of disruption and controlled chaos after a power loss (which you may...
No, just the opposite, which is why people think iSCSI is faster. Out of the box on illumos derivatives I'm aware of, COMSTAR enables "Write Back Cache" by default on LU's. This "feature" basically says: unless the client calls for synchronous writes, I will assume asynchronous writes.
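If you want to check or flip that behavior yourself, it's a COMSTAR LU property -- a rough sketch, assuming the `wcd` ("write cache disabled") property works the way I describe and with a placeholder GUID:

```shell
# List LUs and their current properties (look for the writeback cache line):
stmfadm list-lu -v

# wcd=true disables the write-back cache, i.e. forces write-through so the
# LU honors sync semantics instead of assuming async (GUID is a placeholder):
stmfadm modify-lu -p wcd=true 600144F0B7304E290000533E76C30001
```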
This...
I find it somewhat concerning that the prevailing wisdom seems to be you only need a slog device for NFS, or that only with NFS does it become a necessity.
It is just as necessary on iSCSI. If you don't think that, then you've been running iSCSI with writeback cache enabled, and you're not...
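For reference, adding a slog is a one-liner; device names below are placeholders, and whether to mirror the slog is a judgment call (an unmirrored slog failure at the wrong moment can cost you recent sync writes):

```shell
# Add a mirrored slog (log vdev) to the pool -- device names are placeholders:
zpool add tank log mirror c1t4d0 c1t5d0

# Confirm it shows up under "logs":
zpool status tank
```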
New OS install with latest ZFS bits - try to follow instructions in Serverfault reply.
Also to answer your second question, no, absolutely no way. If you cannot 'fool' the system into re-importing the pool with the old drive, you're not getting it back through any non-Herculean methods. Which...
My $0.02 is I wouldn't trust critical backups to any company running a solution that is oversubscribing. That's going to be any company offering a flat fee for 'unlimited' or outrageous amounts of space. They're banking on their income outdoing their expenditures primarily through a large...
Yes, most of the illumos derivatives also have 'beadm' or an equivalent. The ability to upgrade into a new snapshot/clone and boot from it, and roll back out of it if there's a problem, is very, very handy in production environments.
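The basic flow looks something like this (BE names are placeholders, and the actual upgrade step in the middle depends on your distro's package tooling):

```shell
# Clone the running boot environment, then do the upgrade inside it:
beadm create upgrade-test
beadm list                     # shows all BEs and which one is active
beadm activate upgrade-test    # becomes the default at next boot
init 6                         # reboot into the new BE

# Something broke? Activate the previous BE and reboot back:
beadm activate previous-be
init 6
```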
I should also correct one comment in my post -- for NFSv3, they're all...
If you're planning on making use primarily of NFS, I find FreeBSD, Linux, or an illumos derivative all usable. Not equally usable, but all reasonable.
If you're planning on making major use of CIFS/SMB, and don't need SMB2.1 or higher, illumos derivatives have the advantage.
If you're planning...
Can't do offsite backups? Unfortunate, but that doesn't mean don't DO backups. Onsite backups improve your data's safety by eleventy-bajillion percent when compared to no backups at all.
Another anecdotal +1 for NewEgg. Been ordering drives from them off and on going back basically to their first year. From 1-2 disks at a time up to 4-8 at a time. Never had one show up in a condition, or in packaging, that made me leery. I have noticed an upward trend in their packaging of...
My understanding of this code is that it is only in FreeBSD, so it is not a 'general' question. :) That same understanding is that the code identifies what devices support TRIM (including some trickery to get around devices that claim support but don't actually), so my not-personally-verified...
////generally yes, but as far as I understood, if I set it to single or low, it will negatively impact the rest of the pool's disks?
Then perhaps you don't understand it yet. The ZIL is low queue depth. You can't change that; that is how it works. So if your SSD doesn't perform well at low queue...
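If you want to see how a candidate slog device behaves under that pattern, something like the following fio run approximates it (the file path is a placeholder; point it at the device or filesystem under test):

```shell
# Sync 4K writes at effective queue depth 1 -- roughly what a slog sees.
# Judge the device by completion latency (clat), not the headline MB/s.
fio --name=qd1-sync --filename=/testdir/fio.tmp --size=256m \
    --rw=write --bs=4k --ioengine=sync --fsync=1 \
    --runtime=30 --time_based --group_reporting
```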
LACP does not increase individual transfer speeds, and depending on setup may not even increase per-host transfer speeds. That is not what it is for. It is for link redundancy, period. If you're making lots of connections and are using LACP with certain settings it might improve your aggregate...
Glad I only have a GTX 570 and thus this 'feature' wasn't running w/o my knowledge..
On a mechanical drive I wouldn't expect this to significantly decrease the longevity of the disk. On an SSD, however, 5 MB/s added on top of other typical wear could have a significant impact...
Yes. The more vdevs, the higher the IOPS potential. ZFS will 'stripe' all vdevs in the pool by default. The catch is it isn't necessarily a good idea to be going to more vdevs if doing so is requiring you drop the parity level. One of the fairly hard & fast rules I recommend is not going under...
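The vdev-count point can be sketched with round numbers. This assumes a hypothetical ~100 random IOPS per 7200rpm disk, and the common rule of thumb that a raidz vdev delivers roughly one member disk's worth of random IOPS:

```shell
# Back-of-envelope: random IOPS scales with vdev count, not disk count.
# ~100 random IOPS per 7200rpm disk is an assumed round number.
PER_DISK_IOPS=100
for VDEVS in 1 2 4; do
  echo "$VDEVS vdev(s): ~$(( VDEVS * PER_DISK_IOPS )) random IOPS"
done
```

So the same 16 disks as four 4-disk vdevs have roughly four times the random IOPS potential of one 16-disk vdev -- at the cost of capacity and, potentially, parity level.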
That would also be preferable, though I was personally suggesting staying @ 16 and just using the enclosure. If you're OK with giving up the ability to just move the enclosure around on its own, go with 2x20 z2, yes.
I hope you typo'd. Please don't make a 9-disk raidz (raidz1) vdev of 3 TB drives. That has data loss potential written all over it. Also, having 2 hot spares + 2 9-disk raidz vdev's makes NO sense, because you could instead do 2 x 9-disk raidz2, adding one of those hot spares to each vdev to...
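Back-of-envelope on why the raidz2 layout wins -- same 20 x 3 TB drives either way, counting raw data-disk capacity only (ignoring ZFS overhead):

```shell
DISK_TB=3
# Layout A: 2 x 9-disk raidz1 + 2 hot spares -> 2 * (9-1) data disks
A=$(( 2 * (9 - 1) * DISK_TB ))
# Layout B: 2 x 10-disk raidz2, no spares    -> 2 * (10-2) data disks
B=$(( 2 * (10 - 2) * DISK_TB ))
echo "raidz1+spares: ${A} TB   raidz2: ${B} TB"
```

Both come out to 48 TB raw: identical capacity, but the raidz2 layout survives any two drive failures per vdev instead of gambling on a spare rebuilding a raidz1 before a second failure.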
ZFS can survive w/o TLER, but as I stated earlier, you're going to hate it when a drive goes into deep recovery and your pool either hangs until it recovers or drops the disk from the pool (which depending on environment could take nearly as long to actually do as the deep recovery action takes...
Personally I'd stick everything on the pool into just the enclosure, because then you can in the future swap out the server in front of it with more ease, but I'm a lazy git by nature, so YMMV.
I'd also go with 2x8 raidz2 vdevs - you're doubling your IOPS potential for not much loss in capacity...
I'm debating 'winning' this and stroking my ego with a list of systems I have access to, or 'losing' this horribly and feeling very inadequate by pointing out my only personal NAS box is my home NAS -- and it's a 4-disk zpool on an 8 GB RAM box.. that I'm not even using 70% capacity on. :(
What...
So wait, you have single-port HBA's? Not dual-port HBA's? If you had dual-port HBA's you could provide HBA & JBOD & cable redundancy w/o any daisy chaining.
In general I tend to recommend the most diverse possible setup that maintains the flattest topology possible. eg: avoid daisy chaining as...
As others have already stated - your SSD's latency at a single or very low queue depth is ultimately the only value of real importance in most ZIL use-cases, but it sounds like that eventually got communicated in an understandable way for you, so I'll leave it at that.
l2arc_write_max...
I hope you keep backups of all this data, or that it is unimportant data you don't mind losing! Because some of those drives will die with time.
Stick to something like Unraid or the other suggestions. Stay far away from 'normal' RAID, like the one built into Windows, because any 'RAID0' setup...
I come from an 'enterprise'-ish world. So, generally, I recommend Seagate or Hitachi. :) -- My time on this forum seems to indicate a significant feeling of distrust for Seagate, which is very interesting to me, and IMHO must represent some difference between their retail and enterprise edition...
I would think, essentially, yes.
Assuming you want actual RAID-level redundancy, then it's going to have to spin up every drive that has a 'part' of that file. On most RAID systems, this is going to be basically all of them.
The only one I'm presently aware of that makes a point of doing what...
TLER is 'usable' in anything. TLER is just there. TLER means the drive will not take longer than X seconds to try to recover from a problem, generally 6-8 seconds or so. This is useful even on ZFS. When a drive without TLER goes into one of those 30-45+ second recovery actions, your pool's...
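On drives that expose it, you can inspect and set this yourself via SMART's ERC feature -- a sketch, with a placeholder device name; note the units are tenths of a second:

```shell
# Check whether the drive supports ERC/TLER at all:
smartctl -l scterc /dev/sdX

# Set read and write error recovery to 7.0 seconds (70 tenths):
smartctl -l scterc,70,70 /dev/sdX
```

Caveat: many desktop drives forget this setting on power cycle (so it has to be reapplied at boot), and some refuse it entirely.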
All of their SSD's were crap.
Those few, those very few exceptions? They just prove the rule.
DOA's, early deaths, inexplicable incompatibilities, and oh did I mention data loss (yes, seen it -- accepted writes it then provably never actually put to NAND, and later handed back old data as...
Interesting. Perhaps this is a difference between desktop & server world? In server world, the drives I've seen the most failures on have been Western Digital, and the highest failure rates on are also Western Digital. Indeed, WD is practically a bad word, and to be mocked and ridiculed when...
The problem with suggesting iSCSI is 'faster' is that it glosses over the myriad of current deficiencies in COMSTAR, and in zvols, that just aren't present when you opt for filesystems and SMB/NFS. I guess what I'm trying to say is IF you're going that route, /be...
Whew!
Well, if it helps, your description of the problem seems to implicate DPM? That 'upon installing DPM' the performance drops? That wouldn't really be all that surprising, depending on what DPM is and does. It could be installing an agent, it could be modifying Windows properties related to...