Storage of VMs on ESXi

cphillips

All,

We have Dell Equallogic iSCSI arrays in use. We have various categories/pools of storage - 10k/15k/SSD.

We also run ESXi 5.5 on our infrastructure (Enterprise Plus).

We have several MS SQL VMs that are all currently sitting on the SSD arrays. Is there an easy way to move the C: drives of these servers to slower disk and leave just the data drives holding the DBs/logs on SSD?

I've read up and found mention of affinity/anti-affinity rules, but I'm not 100% sure that's the right approach.

We'd just like to make things a bit tidier. We run fully automated vMotion/DRS.

Regards
 
Storage vMotion the boot drives (assuming that you didn't install the DB there) to the appropriate LUN. Advanced mode on the migration will let you pick individual disks.
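
For anyone who'd rather script it than click through the wizard, here's a rough pyVmomi sketch of the same per-disk move. The vCenter address, credentials, VM name, datastore name, and the assumption that the C: drive is "Hard disk 1" are placeholders, so treat it as an outline rather than a drop-in script.

Code:
# Rough sketch: storage vMotion a single VMDK (the boot disk) to a slower
# datastore via pyVmomi, leaving the DB/log disks where they are.
# Hostnames, credentials, and object names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; use proper certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Walk the inventory and return the first object of the given type with this name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "SQL01")
slow_ds = find_by_name(vim.Datastore, "EQL-10K-POOL")

# Assume the C: drive is "Hard disk 1"; disks not listed in the spec stay where they are.
boot_disk = next(dev for dev in vm.config.hardware.device
                 if isinstance(dev, vim.vm.device.VirtualDisk)
                 and dev.deviceInfo.label == "Hard disk 1")

locator = vim.vm.RelocateSpec.DiskLocator(diskId=boot_disk.key, datastore=slow_ds)
spec = vim.vm.RelocateSpec(disk=[locator])  # per-disk move; other disks untouched

task = vm.RelocateVM_Task(spec=spec)
print("Started storage vMotion task:", task.info.key)
Disconnect(si)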
 
Won't his DRS move them back to group the VM's hard drives together?

I think he most likely needs to create an affinity/anti-affinity rule to keep them separate.
 
Agreed - I believe I have seen Storage DRS move disks around that I had previously placed by hand.
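
Not from the thread, but if you want to sanity-check whether Storage DRS has quietly regrouped anything, something along these lines in pyVmomi (assuming an already-connected ServiceInstance, as in the earlier sketch) will print which datastore each VMDK currently lives on:

Code:
# Quick check for disk drift: list which datastore backs each VMDK of every VM.
from pyVmomi import vim

def report_disk_placement(si):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    try:
        for vm in view.view:
            if vm.config is None:  # skip inaccessible VMs
                continue
            for dev in vm.config.hardware.device:
                if isinstance(dev, vim.vm.device.VirtualDisk):
                    # backing.fileName looks like "[EQL-SSD-01] SQL01/SQL01.vmdk"
                    print(vm.name, dev.deviceInfo.label, dev.backing.fileName)
    finally:
        view.DestroyView()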

If you're using storage DRS, you're doing things wrong.

Storage DRS and Compute DRS are drastically different things.

lopoetve - could you explain why we are doing things wrong? Curious... and eager to learn.
 
Because the algorithms that run it are ancient and assume SO many things about the back-end storage that are almost never true anymore (sorry, missed that this thread was updated), and outside of initial placement, the space algorithms are... well, crap.

Basically, Storage DRS was designed assuming that front-end contention is almost never the issue, but that back-end RAID/LUN/volume contention is; thus, identify the volumes that are experiencing contention, read from them (should be easy!), and write somewhere that isn't, since traditional RAID provides significantly better reads than writes and the volumes will all be independent groups (hence that stupid "a non-VMware workload has been detected" error).

Reality is that almost every system out there now spreads the load across every drive - your contention issues are with the entire platform, not with individual volumes, or they're with large IO, which really doesn't care what volume it goes to; it's just going to abuse things, period.

If you're on an AFA, you're not going to help at all - you're just sending ~more~ large-block IO to it, since sVMotions are all large (VMkernel IO are 64MB), and it's not going to make a difference (unless you've got LUNs mapped to different processors/front-end ports/cache, which is not exactly common either).

On a hybrid array, the system is going to be doing its own adaptations, but it has no idea what the actual workloads are - it can't tell the storage vMotion from the actual workloads either, and any adaptations it makes will either be disrupted or "undone" (or vice versa) by whatever VMware is doing. VMware has no idea what's happening at the actual LBA locations anymore, so it doesn't really apply.

From a placement perspective, it just does a piss poor job overall. Initial placement has had dozens of issues and bugs, and the "move to avoid running out of space" part doesn't seem to trigger reliably either. Just be smart.


In short, since it's the day after Christmas and I'm hung over - it was designed for arrays that don't really exist anymore, it does a poor job of adapting to what's actually happening, and the initial placement engine is poop.
 