I have a 2(3) server setup that does exactly what you are looking at doing.
- File server (I have an HP Microserver Gen7 running OpenIndiana/napp-it)
- HP Microserver Gen8 with a Xeon 1230 swapped in, running Hyper-V
The Hyper-V box runs 2 domain controllers and a Plex VM. I stream 720/1080 to Rokus and Xbox 360s...
Do you have a lot of customization in the template definition?
Convert the template to a VM, export it as an OVF, then convert the VM back to a template. Import the OVF to the other system and recreate the customization spec.
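If it helps, here's a rough, untested sketch of those steps using pyVmomi plus ovftool. The vCenter address, credentials, datacenter and template names are all made-up placeholders, and you'd still recreate the customization spec by hand on the destination side.

```python
# Rough sketch, not tested: convert template -> VM, export with ovftool, convert back.
# All hostnames, credentials, and object names below are placeholders.
import ssl
import subprocess
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skips cert validation
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Find the template by name with a simple container-view walk.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
tmpl = next(v for v in view.view if v.name == "win2019-template")

# 1) Template -> VM so it can be exported.
host = tmpl.runtime.host
tmpl.MarkAsVirtualMachine(pool=host.parent.resourcePool, host=host)

# 2) Export the VM as an OVF with ovftool (assumed to be installed locally).
subprocess.run(
    ["ovftool",
     "vi://administrator%40vsphere.local@vcenter.example.local/DC1/vm/win2019-template",
     "/tmp/win2019-template.ovf"],
    check=True)

# 3) Convert it back to a template on the source side.
tmpl.MarkAsTemplate()
Disconnect(si)

# Then import /tmp/win2019-template.ovf into the other vCenter (ovftool or the
# "Deploy OVF Template" wizard) and recreate the customization spec there by hand.
```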
If your SAN supports it, look into VVols for servers that tend to be problematic with the snapshot removal process. We had a number of VMs where removing snapshots took almost an hour, and it was killing our SAN I/O.
For my home setup I went this route:
HP Microserver Gen8 running Hyper-V with 2 DCs and a Plex server
Norco ZFS box running OpenIndiana/napp-it on bare metal.
I've played with all-in-one/passthrough in the past, but it was more hassle than it was worth. If I need more compute I can set up an...
If you're going ESXi, get a 16-32GB small-footprint thumb drive for the OS. Most modern motherboards even have a USB port directly on the board for this purpose. For RAID, make sure you have a real RAID card, not softraid.
Instead of using the "keep VMs together" rule, try a VM-to-host affinity rule set to "should" rather than "must". This will allow the VMs to migrate off during patching, then migrate back when the host exits maintenance mode.
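Something like this is what I mean. It's a hypothetical pyVmomi sketch that builds a VM group, a host group, and a soft ("should") VM-to-host rule on an existing cluster object; the group/rule names are placeholders, and you'd look up the cluster, VM, and host objects yourself.

```python
# Hypothetical sketch: add a soft ("should") VM-to-host DRS rule with pyVmomi.
# Group and rule names are placeholders; cluster/vms/hosts are existing vim objects.
from pyVmomi import vim

def add_should_rule(cluster, vms, hosts):
    """Create a VM group, a host group, and a non-mandatory affinity rule."""
    vm_group = vim.cluster.VmGroup(name="app-vms", vm=vms)
    host_group = vim.cluster.HostGroup(name="preferred-hosts", host=hosts)
    rule = vim.cluster.VmHostRuleInfo(
        name="app-vms-should-stay",
        enabled=True,
        mandatory=False,            # False = "should", so maintenance mode can override it
        vmGroupName="app-vms",
        affineHostGroupName="preferred-hosts",
    )
    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[
            vim.cluster.GroupSpec(operation="add", info=vm_group),
            vim.cluster.GroupSpec(operation="add", info=host_group),
        ],
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)],
    )
    # Returns a task; wait on it however you normally do.
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```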
We've been using 10Gb iSCSI in production for the last 4 years with no issues.
The twinax cable is essentially a copper cable with built-in SFP+ modules.
The FreeNAS FC method looks like an extreme hack job and isn't representative of real-world FC.
I would recommend checking eBay for Intel X520-based 10GbE cards and snagging some passive twinax cables; it's much less frustrating in the long run.
Most modern motherboards have an onboard USB port, or you can get an adapter for one of the USB headers. No need to have the thumb drive hanging off an external port.
With the current feature parity, the only use case would be extremely niche features, or an extreme separation of duties (SoD) around network operations. It is a royal pain to manage updates/host upgrades.
I run vRanger, which is similar in function. We have a "helper" per host to perform the backup jobs. Backing up a large number of VMs concurrently on a single system can make the backup jobs take a lot longer to complete.
For larger VMs I would check for updated VMware Tools and VSS...
At some point, either during the linked vCenter join or the SRM config, the installer will bomb out with a cryptic message about the accounts being the same.
I ran into it during the very early 5.1 release.
One of the biggest cost cutters for 10G I've run into is twinax cables. If your cable distances are within the 1-5 m range, these can be significantly cheaper than SFP+ optics and fiber.
We run 1Gb iSCSI at some of our older sites (EqualLogic/Nimble). Unless you're aiming for some crazy backup or file-transfer windows, storage network throughput isn't going to be an issue.
DirSync isn't that hard until you get on the phone with MS tech support. I've had to do restores on our DirSync box at least 3 times after a Jr. SA has been on the phone with them.
Single sign-on should get a bit smoother once they roll out the next wave of Azure AD with bi-directional sync.
Drop down to 3-4 hosts and respec with 128 or 256GB of RAM.
Get a quote for a Nimble CS220.
Look at the Force10 S55s for 1G, or the S4810 if you're going 10G.
I would also swap out the Broadcom for Intel on the daughter card.
I know; I was trying to get at the point that if the storage array doesn't require it, or some work isn't done on the storage switching, then it might not work.
As mentioned already, it comes down to which SAN vendor you're working with. EqualLogic, for example, runs off a flat storage network (single VLAN); in that case separate VLANs would not work. EMC, on the other hand, can work with more of a mesh fabric, like you would do with FC fabrics.
One...
You will need to get into the switch CLI.
SSH or telnet to the CMC address, then issue "connect switch-a1", "connect switch-b2", etc. to get to the terminal for each switch.
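If you end up doing this a lot, a short paramiko script can drive it for you. This is just a sketch: the CMC address is a placeholder, the credentials shown are the Dell defaults, and the module name/commands depend on which fabric slots your switches sit in.

```python
# Rough sketch with paramiko: SSH to the CMC, then hop to an I/O module console.
# The CMC address, credentials, and module name below are placeholders.
import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("cmc.example.local", username="root", password="calvin")

shell = client.invoke_shell()
shell.send(b"connect switch-a1\n")    # drops you onto the A1 I/O module's console
time.sleep(2)
shell.send(b"show running-config\n")  # then run whatever switch CLI you need
time.sleep(2)
print(shell.recv(65535).decode(errors="ignore"))
client.close()
```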
Do you have access to the blades? I would take an inventory of what is installed on each blade as far as NICs go and work...
No.
All of those settings are used for storing roaming profiles. The local copy is used while the session is active, and any temporary, file-related OS activity is stored there.
What is your goal/reasoning for this change?
My guess is that the final solution you are looking for is a...