Home-based Hyper-V lab storage

I am finishing up my "on the cheap" Hyper-V lab and the last piece is storage.

My 2 hosts are older quad-core Xeon powered HP workstations; one has 12GB of memory and the other 16GB.

The machines each have a dual-port Intel Gigabit NIC along with the onboard Broadcom NIC.

I am doing this to learn Hyper-V and to study for a refresh of my System Center 2012 certifications.

I have a budget of around $300-400 and a 120GB Samsung 840 EVO, and I am planning on setting up Server 2012 R2 tiered Storage Spaces using the 840 EVO. I have a few large, slow drives that I will run nightly backups to and will store inactive VHD files on as well.

On my current home computer I run 5 machines using differencing disks on the 840 EVO, and I will likely want enough disk performance to run upwards of 12-15 machines at a time. These machines are for lab use, so they won't see any production load. The 12-15 machines will mostly be AD DS, SQL, and System Center.

I will be driving this from one of 2 different machines. The first is my little-used Intel Core 2 Duo file server; it has 4GB of memory (expandable to 8GB with 2 empty slots) and 4 available SATA II ports. The other option is my home computer, an Asus FX8120 with 16GB of memory and 4 available SATA II ports. That box runs Windows 8.1, so the file server would have to be virtual.
 
Sorry for the confusion.
I am seeking advice on selecting a storage solution that will meet the needs of an all-Microsoft lab with 2 small physical hosts, each running 6-8 guests (12-15 guests total at a time).

Should I purchase a separate controller, or use the on-board SATA II?

Which drives would be best to use? I have found that SSDs are very fast, but I have read on the forums that using only SSDs will cause premature drive failure. I can't afford to purchase enough SSD storage for the whole lab, and I would like to set up tiered storage since it is included in Server 2012 R2.

How many drives do I need to be able to support 12-15 VMs?

I currently run 4-5 VMs on a single SSD and for the most part it is fine. The problem is that the 120GB SSD is only large enough for 4-5 VMs when I use differencing disks, and in a shared storage solution differencing disks are not an option.

So I really need performance and low price. I am willing to sacrifice reliability because it is a lab and I will run nightly backups. I have a total budget of around $300 to $400.

I have been poking around this forum a lot, and people have provided plenty of great information, but for a super-low-cost lab like mine I am not certain what I should be targeting.
 
First off, you're going to need a second SSD if you want to do tiered Storage Spaces, unless you plan on running only Simple vDisks.

I'd get an IBM M1015 card off eBay along with 2x breakout cables from Monoprice. Total cost would be $125 or so. Then flash the card to IT-mode firmware. This gives you 8 extra SATA III ports.

Pick up some 1TB 7200RPM SATA drives for as cheap as possible and you can do the following in a pool:

2x 120GB SSD
4, 6, or 8 SATA drives

Obviously the more SATA drives, the more IOPS your pool can handle.

Then you could create two mirrored vDisks, each with about 55GB in the SSD tier and the rest in your SATA tier (vDisks created from a tiered pool automatically get 1GB of write cache from the SSD tier on top of whatever space you assign them).
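If you'd rather script it than click through the GUI, the general shape in PowerShell looks something like this (just a sketch; the pool, tier, and vDisk names plus the tier sizes are placeholders you'd adjust to your actual disks):

# Pool every disk that is eligible for pooling (everything except the boot disk)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "LabPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Define the SSD and HDD tiers by media type
$ssdTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "HDDTier" -MediaType HDD

# Two mirrored, fixed-provisioned vDisks with ~55GB of SSD tier each (HDD tier size is a placeholder)
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "VMs1" -ResiliencySettingName Mirror -ProvisioningType Fixed -StorageTiers $ssdTier,$hddTier -StorageTierSizes 55GB,800GB
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "VMs2" -ResiliencySettingName Mirror -ProvisioningType Fixed -StorageTiers $ssdTier,$hddTier -StorageTierSizes 55GB,800GB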
 
Thanks for the help.

So here are a few follow-up questions. Again, I have a $300 budget that I can stretch to $400, but beyond that it won't clear congress (the wife).

4 WD 1TB RE3 drives (Storite)
http://www.amazon.com/dp/B001IEXU68/?tag=pcpapi-20

1 Samsung 840 EVO
http://www.amazon.com/Samsung-Elect...id=1390409839&sr=1-2&keywords=samsung+840+EVO

2 breakout cables
http://www.monoprice.com/Product?se...&cagpspn=pla&gclid=CJS8_fmgkrwCFQtgMgodf1YAcA

1 IBM M1015 controller
http://www.ebay.com/itm/IBM-SERVERA...296?pt=LH_DefaultDomain_0&hash=item2c7a90d1e8

1 full height bracket
http://www.ebay.com/itm/Full-Height...sk_Controllers_RAID_Cards&hash=item53faa45e16

That puts me at $404 total.

I really only plan to run simple vDisks; I don't need any mirroring (this is purely a lab), I can handle the rebuild time in case of a failure, and I will be backing up to a DFS share (already redundant) in another part of the environment. I also don't need 2TB of capacity, since I expect each VM to be 50GB or less (typically my VMs are under 20GB).

What if I went this direction instead? It's a lot less capacity, but it would be cheaper (no redundancy):

SSD tier (existing 840 EVO) $0

Storage tier (3x WD VelociRaptor 150GB) $59.99 with free shipping @ Newegg
http://www.newegg.com/Product/Produ...301&nm_mc=AFC-IR&cm_mmc=AFC-IR-_-na-_-na-_-na

IBM M1015 card with 2 cables & bracket = $130 after shipping

Total cost = $290.
RAID 0 (I think in Storage Spaces this means a simple vDisk).
Total capacity (before formatting loss, etc.) around 570GB.
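For reference, here is roughly what I would be creating with that hardware (a sketch only; pool/tier names and tier sizes are placeholders):

# One pool from the 840 EVO plus the three VelociRaptors
New-StoragePool -FriendlyName "LabPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
$ssd = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "HDDTier" -MediaType HDD

# Simple (no resiliency) fixed-provisioned vDisk spanning both tiers
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "LabVMs" -ResiliencySettingName Simple -ProvisioningType Fixed -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,400GB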

Will the performance from the three 10K drives plus the 120GB SSD be enough for 12-15 VMs?
Would 4x 1TB WD RE3 drives in a mirror be faster than 3x WD 10K drives?

The workstation that I am running these from has a 500W power supply, and in addition to these drives it has to power a boot drive and 2 additional NICs. Will I have a power supply problem?
 
One thing to remember is that this is a home lab. The two hosts are each single-processor HP Z400 workstations; one has 12GB of memory and the other 16GB.

So my CPU/memory budget is only 8 hyper-threaded cores and (after hypervisor OS use) around 22GB of memory.
 
To create a storage space with storage tiers, the virtual disk must use fixed provisioning, and the number of columns will be identical on both tiers (a four-column, two-way mirror with storage tiers would require eight solid-state drives and eight hard disk drives)
http://technet.microsoft.com/en-us/library/dn387076.aspx#bkmk_tiers

It looks like I will need the number of columns to be the same in both tiers. So if I want a 2-way mirror with 2 columns (which would total 4 spinning drives), I am going to need 4 SSDs to make that work.

From what I am reading, on read, it pulls the data from only a single column, meaning that a 2-way mirror (RAID 1) would have the drive performance of a single set of drives.

Looking at cost vs. performance, the WD 10K drives don't make any sense. They cost $60, give me only 30GB more than an SSD, consume more energy, and generate more heat.

At 12-15 VMs, each consuming roughly 25GB of space plus 10% overage, I suspect I will need about 412GB of space. I could purchase 3 Samsung 840 EVO drives, the controller, and a single cable for around $370 and have 480GB without Storage Spaces. The limit would likely no longer be the drives but the transport medium (I am planning on using SMB 3.0 and iSCSI to learn both) over teamed/MPIO dual gigabit NICs through a dedicated VLAN on a single Cisco 2970 switch.
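For the iSCSI piece I am planning on the built-in iSCSI Target Server role; something roughly like this is what I have in mind (target names, IQNs, paths, addresses, and sizes below are just placeholders):

# On the storage server: add the role, carve out a LUN, and map it to the two Hyper-V hosts
# (.vhdx works on 2012 R2; the 2012 non-R2 target only does plain .vhd)
Add-WindowsFeature FS-iSCSITarget-Server
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\LabLUN1.vhdx" -SizeBytes 200GB
New-IscsiServerTarget -TargetName "HyperVLab" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:host1.lab.local","IQN:iqn.1991-05.com.microsoft:host2.lab.local"
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVLab" -Path "E:\iSCSIVirtualDisks\LabLUN1.vhdx"

# On each Hyper-V host: register the portal and connect with MPIO enabled
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.10"
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:storage-hypervlab-target" -IsPersistent $true -IsMultipathEnabled $true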

thoughts?
 
How many drives do I need to be able to support 12-15 VMs?
That depends on how much work you're going to be doing on those servers. It's a matter of IOPS. If you're not doing disk intensive things then it's not a big deal...

The RPM speed of HDDs generally leads to faster SEQUENTIAL read/writes so if you're dumping a lot of large files it's good. Otherwise meh.

vDisks created from a tiered Pool automatically get 1GB of write cache from the SSD tier on top of whatever space you assign them

You can change the write cache size if you use PowerShell to create it.
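For example, just tack -WriteCacheSize onto the New-VirtualDisk call (a sketch; $ssdTier/$hddTier are the tier objects from New-StorageTier, and the sizes are arbitrary):

# Same kind of tiered vDisk as above, but with a 5GB write-back cache instead of the 1GB default
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "VMs1" -ResiliencySettingName Mirror -ProvisioningType Fixed -StorageTiers $ssdTier,$hddTier -StorageTierSizes 55GB,800GB -WriteCacheSize 5GB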

at 12-15 VMs each consuming roughly 25GB of space with 10% overage - I suspect that I will need 412GB of space
Data deduplication is your friend. Obviously you need enough storage for your solution, but data deduplication in Server 2012 (R2) can make a big difference here too.
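Turning it on for a volume is only a couple of lines (the drive letter is just an example; the HyperV usage type is the new 2012 R2 VDI-oriented setting):

# Install the dedup role service and enable it on the volume holding the VHDs
Add-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType HyperV

# Kick off an optimization pass and check the savings
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:"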

(I am planning on using SMB 3.0 and iSCSI to learn both) on teamed / MPIO dual gigabit NICs

Excellent things to play with. You should note that MPIO on an LACP team is generally not recommended; HOWEVER, if you're teaming with Server 2012 (R2) it's supported by Microsoft out of the box. Also check out SMB 3 Multichannel with and without the LACP team. I found in my situation that SMB 3 Multichannel worked better without the LACP team, but I suspect my switch was partially the problem.
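If it helps, this is the sort of thing I mean (a sketch; team and NIC names are placeholders, and the switch ports have to be in a matching LACP port-channel):

# LACP team on the storage server
New-NetLbfoTeam -Name "StorageTeam" -TeamMembers "Intel1","Intel2" -TeamingMode Lacp

# Then compare runs with and without the team by checking what SMB is actually using
Get-SmbMultichannelConnection
Get-SmbClientNetworkInterface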
 
Sweet, I will try the different configuration settings and report back on how it's working. Currently I have it set up with SMB 3.0 over a 2012 (non-R2) file server with 3 NICs in one team. Performance is fine at this point (no hint of VM performance issues).

The question I was curious about, however, is data deduplication across active VHD workloads.

For CSV / Hyper-V server active VHD files, is support for data-dedupe new in R2?

Some of the forums I have read state that combining dynamic VHDs with dedupe doubles the load on the disk and isn't recommended. Which is faster/smaller?

Can it dedupe everything if the servers that rely on those VHD files are running?
 
Quote on Data Deduplication in Server 2012 R2:

Is Hyper-V in general supported with a Deduplicated volume?

We spent a lot of time to ensure that Data Deduplication performs correctly on general virtualization workloads. However, we focused our efforts to ensure that the performance of optimized files is adequate for VDI scenarios. For non-VDI scenarios (general Hyper-V VMs), we cannot provide the same performance guarantees.

As a result, we do not support deduplication of arbitrary in use VHDs in Windows Server 2012 R2. However, since Data Deduplication is a core part of the storage stack, there is no explicit block in place that prevents it from being enabled on arbitrary workloads.
http://blogs.technet.com/b/filecab/...-new-workloads-in-windows-server-2012-r2.aspx

So it looks like my first question was answered: data dedupe isn't officially supported (but this is a lab), and I will need to upgrade my storage server to R2 (no problem). The next question is whether or not to use dynamically expanding disks, and which will give me a better solution in a volatile environment (a lab where servers are getting stood up and torn down regularly).
 