IO Frustrations with ESXi Box

Benzino

So I built an ESXi box that consists of the following:


  • Gigabyte 990FXA-UD3
  • AMD FX 8320
  • 32GB G-Skill RAM
  • 1TB Western Digital Black HD for datastores
  • Mushkin 60GB SSD for paging/cache testing
  • MSI R5450
  • Corsair CX750 PSU
  • Corsair Carbide 200R case

I boot ESXi 5.1 off of an 8GB USB flash drive.

My goal with this whitebox is to test Server 2012, Exchange 2013, and Lync 2013 and
have all three of them make beautiful music together.
I have 4 VMs running:

  • vCenter Server Appliance
  • Server 2012 DC
  • Server 2012 w/Exchange 2013
  • Server 2012 soon to be Lync 2013

I can run the vCenter Server Appliance and the DC together nicely, but once I fire up the Exchange 2013 server everything slows to a crawl. I have already installed and configured Exchange 2013, and I know it's a resource hog.
I'd like to work on installing Lync 2013, but with everything dragging so slowly I can barely get Windows Updates to run. It *will* work eventually, it just takes a long time, and I'd rather not wait hours on end for stuff to install and update.

I never go over 16GB of memory utilized with all 4 of those VMs running, and never go over 11 GHz of CPU utilization with all four running. I'm pretty certain it's my disk, as I am running all 4 VMs off of the 1TB Western Digital disk.

My ultimate goal is to create another two Windows 7 VMs so I can test email and Lync communication back and forth. So 6 VMs total running at the same time is where I'd like to be.

I thought about moving the page file for each of these VMs over to the SSD, but I was cautioned against it in a previous thread here. What can I do to improve this frustrating IO performance?

Here are some options:

  • Get another 1TB drive, blow everything away, and create a RAID 0 array
  • Get 3 more 1TB drives, blow everything away, and create a RAID 10 array
  • Get a 250GB SSD to run the guest OSes on with minimal storage and point Exchange/Lync storage to the 1TB drive

Other options?
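
For a rough sense of how those options compare, here's a back-of-the-envelope sketch. All the per-device numbers are assumptions (roughly 75-100 random IOPS for a 7,200 RPM SATA disk, a conservative figure for a consumer SATA SSD), not measurements from my box:

# Back-of-the-envelope random IOPS for the storage options above.
# All per-device numbers are rough assumptions, not benchmarks.

SPINDLE_IOPS = 90        # ~75-100 random IOPS for a 7,200 RPM SATA disk
SSD_IOPS = 40_000        # conservative guess for a consumer SATA SSD
READ_PCT = 0.6           # assumed read/write mix for a small lab
WRITE_PCT = 1 - READ_PCT

def raid_random_iops(disks, level, per_disk=SPINDLE_IOPS):
    """Very rough effective random IOPS for a small array."""
    if level == "raid0":
        # Striping only: reads and writes spread across all members.
        return disks * per_disk
    if level == "raid10":
        # Reads hit every disk; each logical write costs two physical writes.
        reads = disks * per_disk * READ_PCT
        writes = (disks * per_disk * WRITE_PCT) / 2
        return reads + writes
    raise ValueError(level)

options = {
    "current single 1TB Black":   raid_random_iops(1, "raid0"),
    "2x 1TB RAID 0":              raid_random_iops(2, "raid0"),
    "4x 1TB RAID 10":             raid_random_iops(4, "raid10"),
    "250GB SSD for guest OSes":   SSD_IOPS,
}

for name, iops in options.items():
    print(f"{name:28} ~{iops:,.0f} random IOPS")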

It's a long read, but I appreciate in advance, as always, the input from folks on this forum.
 
When you fire up the Exchange VM what does your disk IOPS look like? What if you fire up the Lync server instead of the Exchange server, does it run slow? Try provisioning part of the SSD for the Exchange VM and see if that helps.
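
If it helps, esxtop's batch mode (something like esxtop -b -d 5 -n 60 > esxtop.csv on the host) will dump those counters to CSV so you can look at them afterwards. Here's a minimal sketch for pulling out the disk command rates; the file name is a placeholder and the exact counter names vary by build, so the filter may need adjusting:

# Sketch: pull disk IOPS columns out of an esxtop batch capture.
# Assumes a capture like `esxtop -b -d 5 -n 60 > esxtop.csv` was taken on the
# host; counter names differ between builds, so adjust the substring filter.
import csv

CAPTURE = "esxtop.csv"   # placeholder file name

with open(CAPTURE, newline="") as f:
    rows = list(csv.reader(f))

header, samples = rows[0], rows[1:]

# Keep any column that looks like a per-second physical disk command rate.
wanted = [i for i, name in enumerate(header)
          if "Physical Disk" in name and "Commands/sec" in name]

for i in wanted:
    values = [float(r[i]) for r in samples if i < len(r) and r[i]]
    if values:
        print(f"{header[i]}: avg {sum(values) / len(values):.0f}, "
              f"peak {max(values):.0f}")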
 
You're just overrunning that disk. Those are good for sequential I/O but what you're doing isn't sequential. You need to add more disks and use some form of RAID...or go with SSD. My main lab storage is 5x1TB WD Blacks in a Synology and it works well. Not as fast as the 4x128GB SSD datastore I have, of course..but it handles the bulk of the workload.
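
If you want to see that random-vs-sequential gap for yourself, here's a quick-and-dirty sketch you could run inside a guest against a scratch file on the datastore. File name and sizes are placeholders, OS caching will flatter the numbers, and a real tool like fio or Iometer is the better answer, but it makes the point:

# Quick-and-dirty sequential vs. random 4K read comparison on a scratch file.
# Run inside a guest whose virtual disk lives on the datastore in question.
# OS caching skews this badly; fio or Iometer are the real tools.
import os, random, time

PATH = "scratch.bin"            # placeholder test file on the datastore
FILE_SIZE = 512 * 1024 * 1024   # 512 MB test file
BLOCK = 4096                    # 4K blocks, closer to VM-style random I/O
READS = 1000

# Build the test file once, in 1 MB chunks.
if not os.path.exists(PATH) or os.path.getsize(PATH) < FILE_SIZE:
    with open(PATH, "wb") as f:
        for _ in range(FILE_SIZE // (1024 * 1024)):
            f.write(os.urandom(1024 * 1024))

def run(randomize):
    if randomize:
        offsets = [random.randrange(FILE_SIZE // BLOCK) * BLOCK for _ in range(READS)]
    else:
        offsets = [i * BLOCK for i in range(READS)]
    with open(PATH, "rb", buffering=0) as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        elapsed = time.perf_counter() - start
    return READS / elapsed

print(f"sequential 4K reads: ~{run(False):,.0f}/sec")
print(f"random 4K reads:     ~{run(True):,.0f}/sec")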
 
The Western Digital 1TB Black drives were on sale today at Newegg. I ordered two of them and will build a RAID 0 array with the three drives. Should provide a nice IO bump.

My 60GB SSD has been great for staging. I've been studying for a Windows 7 cert, and being able to spin up a small guest, then migrate it off to the bigger storage when I'm done and move on to the next lab, has been a huge time saver.

Thanks for the input.

PS. NetJunkie, your blog is :cool:
 
I would personally recommend RAID 1 or 10, but that's just me. Only because if one of the three disks fails, all your work goes with it.
 
That's certainly valid. RAID 10 would be the sweet spot. Right now I'm willing to trade redundancy for speed. My ESXi rig has been a slow build in progress; I can add another drive later when the budget allows. Could I have picked up a third drive on sale? Yeah, but you gotta cut yourself off at some point. Just like with alcohol. :)
That's the sucky part of this hobby and profession. Can't get everything at once. :(
 
I can add another drive later when the budget allows

Are you planning on rebuilding the server after you purchase the 4th drive? Seems that paying $50 for that last drive and building the RAID now would save you some headache in the future.
 
I boot from a flash drive, so I won't lose my ESXi install. Everything I do is for training for the Microsoft certification path I am on. Create a VM, do the lesson, then shut it down and move on.
 
You could have a RAID 0 array, then when you get the fourth HDD, turn it into a RAID 10. Just use the 3rd drive as-is for now.
 
My main lab storage is 5x1TB WD Blacks in a Synology and it works well. Not as fast as the 4x128GB SSD datastore I have, of course..but it handles the bulk of the workload.
Very curious about that part.
I've been trying to get an ESXi whitebox build going, with an embedded NAS4Free VM feeding out iSCSI LUNs internally back to the rest of the VMs, but it's proven to be a pain. I'm leaning towards using my Synology DS-409 instead, but have doubts about how fast it can feed I/O out.

I'm assuming you're running at least a gigabit ethernet network between the two units.
I'm also assuming you're creating LUNs on the Synology and connecting them to the ESXi box.

What model Synology are you using to feed the iSCSI LUNs over?
Are you running in Synology Hybrid RAID (SHR) mode, or a standard RAID 10 / RAID 1 (mirrored) setup?
How would you test throughput?
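
For the raw network side of that question, here's a minimal sketch of a TCP push between two machines on the lab LAN, just to confirm the gigabit link itself tops out around 110-115 MB/s before blaming the Synology. Port and sizes are arbitrary placeholders, and iperf is the better tool if you can install it on both ends:

# Minimal raw TCP throughput check between two lab machines, to see whether
# the gigabit link itself is the ceiling (~110-115 MB/s in practice).
# Run with no arguments on the receiver, then `python tcp_check.py <receiver-ip>`
# on the sender. Port and sizes are arbitrary placeholders.
import socket, sys, time

PORT = 5001
CHUNK = 64 * 1024
TOTAL = 1024 * 1024 * 1024      # push 1 GB

def serve():
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received, start = 0, time.perf_counter()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    secs = time.perf_counter() - start
    print(f"received {received / 1e6:.0f} MB at {received / secs / 1e6:.1f} MB/s")

def send(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()

if __name__ == "__main__":
    serve() if len(sys.argv) == 1 else send(sys.argv[1])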
 
So it turns out that VMware ESXi 5.1 U1 does NOT recognize my onboard RAID. With all the time I spend on this forum I should have known better.
:(
So I'm mildly embarrassed, but no less determined.

I see this RAID card on the VMware HCL:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118133&IsVirtualParent=1

I'll be shooting to pick up this card after I save up some money. In the meantime, I have 3 disks in ESXi I can deploy VMs to. It won't be a big deal for testing; I can get by on this until I get the RAID card.
 
Personally I run my larger disks (1TB platters in 1, 3, and 4TB drives) in RAID 1, which works great for my lab. I run 6 VMs off a RAID 1 1TB setup no problem (one of which is a very disk-intensive distributed client backup server).

Another option if you need speed and space at a decent price are the WD Raptors. They're 10,000 RPM drives, so you get noticeably more I/O out of them than a standard 7,200 RPM SATA disk.
 
A single SSD will dominate any spinning-disk setup here. It's tough to beat 85,000 IOPS for $120. It's hands down the best option for vSphere if you don't need massive amounts of space.

In case you need a reference, your SATA spindle drive will do about 75-100 IOPS.
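
Putting those two ballpark numbers side by side (prices are rough guesses, not quotes):

# Ballpark only: figures from this thread plus guessed street prices.
SSD_IOPS, SSD_PRICE = 85_000, 120      # the ~$120 SSD quoted above
HDD_IOPS, HDD_PRICE = 90, 80           # ~75-100 IOPS 7,200 RPM 1TB spindle

print(f"spindles needed to match one SSD: ~{SSD_IOPS / HDD_IOPS:,.0f}")
print(f"SSD cost per 1,000 IOPS:  ${SSD_PRICE / (SSD_IOPS / 1000):.2f}")
print(f"HDD cost per 1,000 IOPS:  ${HDD_PRICE / (HDD_IOPS / 1000):.2f}")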
 
I'm picking up a Dell PERC 5i card off of a fellow [H] member for super cheap. We'll see how that improves things.

My ultimate goal is to run 5-6 Windows VMs simultaneously, so I'll need a minimum of 256GB. For what I've dropped into storage up to now I could have gotten a Samsung 840 Pro.
:facepalm:
That's life, learning the hard way.
 
Depending on the VMs you're running, I doubt you'd need 256GB. To do that you'd either be thick provisioning all of them or loading them up with tons of files/databases/something.

Personally, get an SSD and use that. Thin provision all your VMs. If you run out of space, get a second one, even a smaller one, and add that. Since it's lab work you'll save time on some steps too, IMHO.

Remember, RAIDed spinning disks don't come anywhere near even junk SSDs in terms of latency and IOPS.
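
Rough space math on that thin-versus-thick point; the per-VM figures below are assumptions about typical Windows lab guests, not anything measured:

# Rough datastore space estimate: thick vs. thin provisioning for a small lab.
# Per-VM numbers are assumptions about typical Windows lab guests.
vms = {
    # name: (provisioned disk GB, GB actually written)
    "Server 2012 DC":       (60, 20),
    "Exchange 2013":        (100, 45),
    "Lync 2013":            (80, 30),
    "vCenter appliance":    (60, 25),
    "Windows 7 client A":   (40, 15),
    "Windows 7 client B":   (40, 15),
}

thick = sum(prov for prov, _ in vms.values())
thin = sum(used for _, used in vms.values())

print(f"thick provisioned: {thick} GB of datastore consumed up front")
print(f"thin provisioned:  ~{thin} GB consumed, growing only as guests write")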
 
At this point, I'll do the best I can with what I have. I'd rather not sell off the storage I currently have at a loss to get an SSD or two.

If I could go back I would certainly have gone with SSD, but I can't do that. I wanted to start small and build up as budget allows. I certainly did that, but now I'm at the point where, for what I've invested in storage, I could have gotten an SSD with better performance for the same price.

Hindsight being what it is, I of course should have gone with an SSD. I'll work with the RAID setup and see if that works the way I want it to. I'll test thin provisioning on my 60GB SSD, and maybe that will be enough for what I'm doing. I thick provisioned everything I did so far.

Thanks for the input, everyone. Hopefully this discussion will help other folks go the SSD route when planning their builds.
 