Setting up storage in ESXi - Need Recommendations

marley1

I finally had some downtime to mess around with a demo server we have:
Dell R900 - 32GB RAM, 2 x 300GB SAS, 3 x 450GB SAS, 2 x quad-core 2.4GHz Xeons

Right now we have two arrays: 2 x 300GB in RAID 1 and 3 x 450GB in RAID 5.

What I want to know is the best practice for setting up the storage.

In ESXi, I loaded the VM OS onto a datastore called VM_OS, which is the 300GB RAID 1 array.
I have a second datastore, VM_Data, with 837GB on the RAID 5 array.

What I want to know is: if I were setting this up from scratch, how should I do it?

Do I want a RAID 1 and a RAID 5? Should I make a smaller partition for ESXi itself? Where should I be creating the disks for the guest OSes? I plan to run an SBS 2008 server and a BES server.
 
With it being a demo/test system, I would do this:

1. Load ESXi on the 300GB RAID 1 container and use the rest of the storage for VMFS3 storage. You can store disk images/ISOs in that datastore along with slower, low-usage VMs.
2. Use the RAID 5 array for VMFS3 and raw storage for your faster, high-use VMs. You can also test clustering and other scenarios where raw disk mappings come in handy (a command-line sketch follows the list).
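
If you go that route, the datastore creation itself is doable from the ESXi console (unsupported tech support mode in 4.x). A minimal sketch, assuming the RAID 5 container shows up under a naa.* identifier - the device names and the VM_Data label here are placeholders, so check yours with esxcfg-scsidevs first:

    # Find the device name of the RAID 5 container
    esxcfg-scsidevs -c

    # Create a VMFS3 datastore labeled VM_Data on partition 1 of that device
    # (-b 1m = 1MB block size, which caps individual VMDKs at 256GB;
    #  use -b 4m if you want single disks up to 1TB)
    vmkfstools -C vmfs3 -b 1m -S VM_Data /vmfs/devices/disks/naa.60019b90000000000000000000000000:1

    # For the raw disk mapping tests in point 2: create an RDM pointer file
    # (-z = physical compatibility mode, passing SCSI commands straight through)
    vmkfstools -z /vmfs/devices/disks/naa.60019b90000000000000000000000001 /vmfs/volumes/VM_Data/testvm/testvm-rdm.vmdk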

Possibilities are endless I guess. Let's see what others say.
 
Well, we may make it our production server.

But I see what you mean.
 
I'm hoping more folks will respond, because I have a Dell PowerEdge 2950 running ESXi 4.0 that is going to be our new test/dev environment. I'm waiting for my 32GB of RAM to come in for it, and next week I may be obtaining an EMC CX300 and Brocade 200E switches that are being decommissioned. That would give me 1TB of internal RAID 5 storage and 1.8TB of SAN storage that I'm not sure how I want to use yet.
 
OP:

I would recommend smaller drives to install ESX onto, but whatever. Using what you have, I would do the following:

RAID-1, 2x 300GB drives - ESX install. I would not recommend using a VMFS3 volume on these disks (I would actually recommend deleting it). If you ever need to reload the ESX host OS, you don't want to lose VMs because they reside on this array (a quick check for that is sketched below).
RAID-5, 3x 450GB drives - VMFS3 datastore for your VMs
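
If there is already a VMFS3 volume on the RAID 1 array, it's worth confirming which datastore sits on which device, and which VMs live on it, before deleting anything or reloading the host. A minimal sketch from the ESXi console, using only stock commands:

    # Map each VMFS3 volume to the physical device/partition backing it
    esxcfg-scsidevs -m

    # List registered VMs with the datastore each one lives on, so nothing
    # on the RAID 1 volume gets orphaned by a reload
    vim-cmd vmsvc/getallvms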

FibreChanMan:

Put the internal disks to use elsewhere. Boot from SAN and create a VMFS3 LUN on the SAN as well. If you can get away from using internal disk entirely, do it. If you have to have internal disk to boot the hosts, fine, but even 36GB disks in a RAID-1, 5, or 10 are sufficient; the ESX 4 installer only wants about 10GB to install to. Using internal disks to house a VMFS3 volume is not a good idea.

It ties the VMs running on it to that host (no VMotion), because the disk is a local resource. Additionally, if the internal disk sustains a failure, or the host goes down, the VM is not available to be migrated to another host, and it could be lost outright if a RAID failure makes the array unusable.
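
If you already have VMs stranded on a local VMFS3 volume, one way to get them off (without Storage VMotion licensing) is a vmkfstools clone to shared storage with the VM powered off. A sketch, with local_ds and san_ds as hypothetical datastore names:

    # Clone the stranded disk from local storage to the SAN datastore
    # (-d thin keeps the copy thin provisioned; run this with the VM powered off)
    vmkfstools -i /vmfs/volumes/local_ds/myvm/myvm.vmdk -d thin /vmfs/volumes/san_ds/myvm/myvm.vmdk

    # Then point the VM's .vmx at the new path (or re-register the VM) and
    # delete the local copy once it boots cleanly from the SAN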

With ESX and ESXi, as a general rule, do not create or use internal-disk VMFS3 stores. If you do create them, use them as ISO repositories, not for production data, and do not run VMs on them.

One of the reasons virtualization is so popular is that it frees servers and applications from the hardware they run on. Creating internal-disk VMFS3 volumes and running VMs on them negates that concept.

As far as SAN disk, and how to chop it up: for production machines (or ones that I want to be fast), my personal preference is 500GB VMFS3 volumes, adding more as I need them and spreading each LUN across as many spindles as I can afford to give it. For dev machines, running on a VMFS3 volume that sits on a 3-disk RAID-5 is fine. ISO repositories - who cares. Keep in mind I am referring to 15k FC disks here, not SATA. Hopefully you're not forced to work with SATA for anything other than ISO storage.
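
The flow for carving up that SAN disk is the same as for local storage, just preceded by an HBA rescan once the LUN is presented. A sketch, assuming the new LUN arrives on vmhba1 - the adapter, the naa.* identifier, and the PROD_VMFS_01 label are all placeholders:

    # Rescan the FC HBA so the host sees the newly presented LUN
    esxcfg-rescan vmhba1

    # Confirm the LUN showed up
    esxcfg-scsidevs -c

    # Lay a VMFS3 datastore on it (-b 4m allows VMDKs up to 1TB)
    vmkfstools -C vmfs3 -b 4m -S PROD_VMFS_01 /vmfs/devices/disks/naa.60060160a0b0100000000000000000a0:1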
 