Help me understand storage

marley1 · Supreme [H]ardness · Joined: Jul 18, 2000 · Messages: 5,447
Sorry, this is probably dumb, but I am so used to the physical server route.

Normally with physical servers I have a C: partition (usually a RAID 1 array) and then a D: partition (usually RAID 5 or RAID 10).

Now I keep reading "putting the vm on this array and having a zfs array"

Now when I think of the VM, I am thinking in my physical world of the C: and D: partitions. When I am messing around with Hyper-V at the office on a test server, I have a RAID 1 array on which I have Server 2008 with the roles installed, then a RAID 10 array where I created a few guests. I went through the setup, created a 100 GB disk and installed OS 1, then created another, etc., etc.

So if you were to have a SAN or some shared storage, would you just create a disk of X GB and then install the OS?

Then say you are doing an Exchange server: you install the OS, and then where do you locate the data? Is it on the same virtual disk you created to install that OS?

Sorry, these are probably noob questions, but I'm trying to get a grasp of the whole thing.
 
Shared storage allows for clustering of all sorts. In the VMware world, for a VM to be able to float/be moved/be redundant/etc., that VM's storage has to be available to all servers. Think of a system with ESXi1 and ESXi2. What happens if ESXi2 physically fails? All of its VMs will be down as well until that host can be repaired. If we can add some shared storage into the mix, then both hosts can access the VMs and have the potential to load balance and have some redundancy. Beyond that, it enables the features that depend on every host seeing the same storage, like live migration and automated failover.

Depending on the array, there may or may not also be an element of higher performance and/or redundancy compared to onboard storage (if any; you can boot ESXi off a USB stick). Some SANs also have additional features like deduplication, replication, thin provisioning, NAS (NFS/CIFS) shares, etc. Imagine having the primary SAN replicated to a backup SAN in a DR location. You can also do true "boot from SAN" setups, depending on the scenario and requirements.

Implementation of SAN technology is not always about additional raw storage capacity, but rather the additional flexibility that it can bring.
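Purely as an illustration (this isn't anything from VMware's or Microsoft's tooling, and the host/datastore names are made up), here is a small Python sketch of why shared storage changes the failure picture: a VM whose files live only on a host's local disks dies with that host, while a VM on storage every host can see can be restarted elsewhere.

```python
# Toy model (invented names, no real hypervisor API): local vs. shared VM storage.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    visible_to: set          # which hosts can reach this storage

@dataclass
class VM:
    name: str
    datastore: Datastore     # where the VM's disk files live

def restartable(vms, surviving_hosts):
    """VMs that can be powered back on after a host failure."""
    return [vm.name for vm in vms
            if vm.datastore.visible_to & set(surviving_hosts)]

local_esxi1 = Datastore("esxi1-local", visible_to={"ESXi1"})
shared_lun  = Datastore("san-lun-01",  visible_to={"ESXi1", "ESXi2"})

vms = [VM("web01", local_esxi1), VM("mail01", shared_lun)]

# If ESXi1 dies, only the VM on the shared LUN can come back up on ESXi2.
print(restartable(vms, surviving_hosts=["ESXi2"]))   # -> ['mail01']
```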
 
If you are used to physical HDDs and physical arrays (i.e. your RAID 5 array), look at it this way.

You make a pretend computer
You pretend to give it xyz amount of memory
You also pretend to give it some HDDs

So, in your Exchange example:

You would give it two pretend HDDs:
One to hold the OS, and one to hold the Exchange database.
When you then install the OS, it will see both pretend HDDs. So you format the first one and install the OS, then do as you would with a physical machine and format the second HDD, call it the D: drive or whatever, and set that as your Exchange data store.
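To make the "pretend computer" concrete, here's a hedged Python sketch (the sizes, names, and the VirtualDisk/VirtualMachine classes are invented for illustration) of an Exchange VM defined with two virtual disks, one for the OS and one for the database:

```python
# Illustrative only: a "pretend computer" with two pretend HDDs.
from dataclasses import dataclass

@dataclass
class VirtualDisk:
    label: str        # what the guest will call it (C:, D:, ...)
    size_gb: int
    purpose: str

@dataclass
class VirtualMachine:
    name: str
    memory_gb: int    # the "pretend" memory
    disks: list

exchange_vm = VirtualMachine(
    name="EXCH01",
    memory_gb=16,
    disks=[
        VirtualDisk("C:", 100, "OS - format and install Windows here"),
        VirtualDisk("D:", 500, "Exchange database / data store"),
    ],
)

for disk in exchange_vm.disks:
    print(f"{disk.label} {disk.size_gb} GB -> {disk.purpose}")
```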

 
Stop thinking in terms of a physical disk/RAID group, etc., per server. Start thinking more along the lines of "pooled" infrastructure that includes external storage. External storage (NAS/SAN) is presented to the virtualized host as blocks in the case of IP/FC-connected storage, and as file systems in the case of NAS (NFS).

In the case of block storage, say in the Hyper-V world (block is all that's supported until Windows Server 2012 Hyper-V), the Hyper-V host would put an NTFS file system on the blocks presented from the SAN so that you can utilize the storage. Again, in the case of Hyper-V, you create .vhd's on top of that file system for your VMs to utilize. Basically, you are carving out storage for your VMs. In the case where you need multiple volumes presented to the guest OS, you just create additional .vhd's off of the "pool" of storage and present them to the guest as if they were physical volumes.

This is similar to VMware's method; however, they use .vmdk files, and the file system for block storage is VMFS. Unlike Hyper-V, vSphere also supports NFS, so you can present the file system itself to the ESXi host. In that case, ESXi doesn't have to apply its own file system (VMFS) to be able to leverage the storage; it can just create .vmdk's on top of NFS.
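A rough sketch of that layering in Python (the LUN name, sizes, and helper function are invented; this mirrors the block-storage case described above, where the host applies its own file system and then carves out .vhd/.vmdk files):

```python
# Toy layering model: SAN blocks -> host file system -> virtual disk files.
# With NFS on vSphere you would skip the host-applied file system step.

lun = {"name": "SAN-LUN-01", "size_gb": 2000}      # blocks presented by the SAN

# Hyper-V formats the LUN with NTFS; an ESXi host would use VMFS instead.
host_fs = {"type": "NTFS", "backing": lun["name"], "free_gb": lun["size_gb"]}

def carve_virtual_disk(fs, filename, size_gb):
    """Carve a .vhd/.vmdk out of the pooled file system."""
    if size_gb > fs["free_gb"]:
        raise ValueError("not enough space left in the pool")
    fs["free_gb"] -= size_gb
    return {"file": filename, "size_gb": size_gb, "on": fs["type"]}

# Two virtual disks, presented to the guest as if they were physical volumes.
os_disk   = carve_virtual_disk(host_fs, "exch01-os.vhd",   100)
data_disk = carve_virtual_disk(host_fs, "exch01-data.vhd", 500)

print(os_disk)
print(data_disk)
print(f"pool free: {host_fs['free_gb']} GB")
```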

This is not exactly a granular explanation by any means, but it should answer your question.
 
I didn't read the other responses so somebody else could have nailed it.

But you brought up the point of having different RAID groups in the physical world; well, you can mimic those same characteristics in the virtual world. You can carve out disk groups, aggregates (terminology varies per vendor) based on requirements. So you may have one group that is RAID-5 on SATA storage, another that is RAID-1 on SSD, and another RAID-5 on 15k. When you carve out volumes/LUNs from those groups, you then provision your VMFS datastore on them, and it will have those underlying performance/redundancy characteristics.

So let's say you have one VMFS datastore of faster drives and one VMFS datastore of crappier drives. You can cut your virtual disks (C: and D: drives) out from those two separate datastores, allowing you to still meet service levels.
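A minimal sketch of that placement decision (the datastore names, tiers, and rule are invented; the point is just that each virtual disk inherits the RAID/media characteristics of the datastore it's carved from):

```python
# Illustrative tiering: choose a datastore for each virtual disk by service level.
datastores = {
    "DS-SSD-R1":  {"raid": "RAID-1", "media": "SSD",     "tier": "fast"},
    "DS-15K-R5":  {"raid": "RAID-5", "media": "15k SAS", "tier": "fast"},
    "DS-SATA-R5": {"raid": "RAID-5", "media": "SATA",    "tier": "cheap"},
}

def place_disk(disk_name, needs_fast):
    """Return (disk, datastore) for the first datastore matching the tier."""
    wanted = "fast" if needs_fast else "cheap"
    for name, props in datastores.items():
        if props["tier"] == wanted:
            return disk_name, name
    raise LookupError(f"no datastore offers the '{wanted}' tier")

# The Exchange database goes on fast storage; the OS disk can live on SATA.
print(place_disk("EXCH01 D: (database)", needs_fast=True))
print(place_disk("EXCH01 C: (OS)",       needs_fast=False))
```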
 