ESX install on a RAID 5

cyr0n_k0r (Supreme [H]ardness) | Joined: Mar 30, 2001 | Messages: 5,360
New UCS servers from the vendor. They INSIST on setting up the servers themselves before they deliver onsite. All they ask from me is an IP address for the management interface and a hostname. I provide said information.

They deliver 2 UCS servers. Each UCS server has eight 300 GB 15K RPM SAS drives.

One of them they left configured for network boot. We powered it on and didn't get a monitor hooked up to it soon enough, so it picked up our PXE server and began imaging itself with Windows 7. By the time we saw it, the server was already 5% into the multicast session (which means all the partitions had already been wiped). :rolleyes:

The second server they set up with the ESX install on a RAID 5 across all drives, which I never do. I never install operating systems on every drive in the system. Usually I set up the first two drives in a RAID 1 and install the OS on that, then use the remaining drives as a RAID 5 for whatever else.
Is my way a normal practice? Is there any documentation that suggests installing ESX one way or the other in terms of RAID 1, RAID 5, etc.?
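
For reference, here's the raw capacity difference between the two layouts, a back-of-the-envelope sketch using the eight 300 GB drives mentioned above:

```python
# Rough usable-capacity math for 8 x 300 GB drives (sizes from the post above).
DRIVE_GB = 300

def raid5(n, size=DRIVE_GB):
    """RAID 5 keeps n-1 drives' worth of data; one drive's worth goes to parity."""
    return (n - 1) * size

def raid1(n, size=DRIVE_GB):
    """RAID 1 mirrors, so usable space is half the drives."""
    return n // 2 * size

# Vendor layout: one RAID 5 across all 8 drives (OS and datastore share it).
vendor = raid5(8)              # 2100 GB usable

# My layout: RAID 1 pair for the OS, RAID 5 across the remaining 6 for data.
mine = raid1(2) + raid5(6)     # 300 GB boot + 1500 GB datastore = 1800 GB

print(f"RAID 5 x8:             {vendor} GB usable")
print(f"RAID 1 x2 + RAID 5 x6: {mine} GB usable")
print(f"cost of separating the OS: {vendor - mine} GB")
```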
 
Your way is, IMO, the better way; their way is the 'old' way. If you wipe the RAID, just configure the OS drives how you want, install ESXi, and then configure the DATA volume afterward and add it to ESXi. No special instructions required.
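
If you'd rather script that last step than click through the vSphere client, something along these lines works against the vSphere API via pyVmomi. This is only a rough sketch; the host name, credentials, and the "datastore1" label are placeholders:

```python
# Sketch: add a freshly created RAID 5 volume to an ESXi host as a VMFS
# datastore via pyVmomi. Host, credentials, and label below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab use only; validate certs in prod
si = SmartConnect(host="esx01.example.com", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Look the host up by its DNS name (must match what ESXi reports).
    host = content.searchIndex.FindByDnsName(
        datacenter=None, dnsName="esx01.example.com", vmSearch=False)
    ds_sys = host.configManager.datastoreSystem

    # Disks with no partitions or VMFS on them yet, i.e. the new RAID 5 volume.
    disks = ds_sys.QueryAvailableDisksForVmfs()
    if disks:
        options = ds_sys.QueryVmfsDatastoreCreateOptions(disks[0].devicePath)
        spec = options[0].spec
        spec.vmfs.volumeName = "datastore1"
        ds = ds_sys.CreateVmfsDatastore(spec)
        print(f"created {ds.name}")
finally:
    Disconnect(si)
```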
 
Personally, you may be OK with RAID 5 for a datastore, but for VMs I always use RAID 10.
 
I wouldn't use local disk at all for ESXi. In this case I would boot from SAN, if you have that capability, or use Autodeploy.

In our Dell servers we always go with the internal mirrored SD cards... but on UCS-B, boot from SAN or Autodeploy.
 
Never even tried boot from SAN. Is it easy enough to implement?
 
Different strokes for different folks, but I'm not the biggest fan of boot from SAN. The juice isn't worth the squeeze in my book.

I do love AutoDeploy though.
 
I don't usually store the VMs on local disks either, but I do install ESXi on the local disks. I don't trust boot from SAN. The drives come with the server (purchased as a package through Cisco for their CUCM), so we might as well use some of them.
 
I agree with installing ESXi to a USB or SD card, but I use RAID 10 and have never worried when a drive failed, because I can just replace the drive without worrying about losing everything. I used to use RAID 5, but I/O was too slow when writing.
 
I would use RAID 10 for ESX, and with local storage only, I would also create just one array. More disks translate to more I/O capability, and eight spindles are better than six. RAID 5 writes are too slow, and there is more chance of losing the whole array. If you did lose a drive, the writes during the rebuild would be unbearable.

We use SSD for ESX in our blades that have SAN storage.

A hypervisor is not a typical OS. I don't find any reason to have a RAID 1 boot/OS drive.
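
To put rough numbers on the RAID 5 write complaint: the classic write-penalty rule of thumb is 2 back-end I/Os per write for RAID 10 and 4 for RAID 5. A quick sketch, assuming roughly 175 IOPS per 15K SAS spindle and a made-up 50/50 read/write mix:

```python
# Back-of-the-envelope IOPS using the classic write-penalty rule of thumb:
# RAID 10 costs 2 back-end I/Os per write, RAID 5 costs 4 (read-modify-write).
SPINDLE_IOPS = 175   # rough figure for one 15K SAS drive (assumption)
DRIVES = 8
READ_PCT = 0.5       # assumed 50/50 read/write mix

def effective_iops(drives, read_pct, write_penalty):
    raw = drives * SPINDLE_IOPS
    # Each front-end write turns into `write_penalty` back-end I/Os.
    return raw / (read_pct + (1 - read_pct) * write_penalty)

print(f"RAID 10: {effective_iops(DRIVES, READ_PCT, 2):.0f} front-end IOPS")
print(f"RAID 5:  {effective_iops(DRIVES, READ_PCT, 4):.0f} front-end IOPS")
# Prints ~933 vs ~560: same spindles, roughly 40% fewer IOPS on RAID 5.
```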
 
I use a RAID 1 array for control VMs and put file and DB servers on RAID 10 arrays. Seems to work, but if it's not needed, there's no reason to have a setup more complex than it needs to be.
 