Trying to standardize our VMDK formatting here, and finding online documentation to be a bit lacking when it comes to recent vSphere and storage array capabilities.
Assuming you have an array that performs thin provisioning (LUNs via FC, no NFS here), what is your preference for a thick VMDK?
I've thrown out the idea of a thin provisioned VMDK, or 'thin on thin', as I don't like the idea of having to monitor over-committed storage across two separate pieces of infrastructure.
To take immediate advantage of thin provisioning on the array, you'd need to present it with a lazy zeroed VMDK, as an eager zeroed VMDK will consume the full capacity up front.
Seems like an obvious choice, but I am interested in the performance benefit of an eager disk. Is it a noticeable impact to have a lazy disk wait for a datastore lock before it can zero and write to newly required blocks on demand? In a busy environment I think you would see some impact to latency, but what about fewer than 100 VMs, with only 10-15% of those heavy (ok, maybe moderate) on IO?
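For reference, the three formats under discussion can be created explicitly with `vmkfstools` on the ESXi shell. A rough sketch (datastore path and size are placeholders, not from any real environment):

```shell
# Thin: blocks allocated and zeroed on first guest write ('thin on thin', ruled out above)
vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/vm1/vm1.vmdk

# Lazy zeroed thick (zeroedthick, the default): space is reserved in VMFS at
# creation, but each block is zeroed on first write - so a thin LUN on the
# array only consumes capacity as the guest actually touches blocks
vmkfstools -c 40G -d zeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk

# Eager zeroed thick: every block zeroed at creation time, consuming the
# full capacity on the array up front
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk
```

These are host-side commands, so exact behavior (e.g. whether the eager zeroing is offloaded to the array via VAAI) will depend on the ESXi version and array support.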
From my testing, it appears I can gain back all of the empty eager zeroed blocks after a deduplication scan against the volume - so if the performance impact is marginal, it really comes down to how I want to logically manage the over-committed storage.
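One way to sanity-check where the capacity is actually going is to compare the provisioned size of the flat file against the blocks VMFS has allocated for it, from the ESXi shell (path is a placeholder):

```shell
# Provisioned size of the flat file (what the guest sees)
ls -lh /vmfs/volumes/datastore1/vm1/vm1-flat.vmdk

# Blocks actually allocated in VMFS: grows with guest writes for a thin
# disk; both thick formats show the full size here, since VMFS reserves
# all blocks at creation. The lazy zeroed savings show up on the array
# side instead - the thin LUN only consumes blocks that have been written.
du -h /vmfs/volumes/datastore1/vm1/vm1-flat.vmdk
```

This is what makes the dedup result above plausible: the eager zeroed blocks are written as zeros on the array, so a zero-aware dedup pass can collapse them, whereas lazy zeroed blocks are simply never written in the first place.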
Interested in thoughts and experiences from the storage experts out there. Ultimately, it is looking like I will go the lazy route to take advantage of immediate storage savings and let deduplication do its magic on top of it.