Noob help needed - VM within Windows or...?

Joust

Ok. So, I need some help. I intended to use VMware for two or three VMs, but the freeware version doesn't allow enough vCPUs for my needs. One VM needs many vCPUs; the other, not so much.

Here's what I need to do: a single Threadripper box hosting everything, using Windows 10 or Server. I need another Windows 10 instance for a separate process. One Windows 10 MUST always be open to the global internet; the other MUST go exclusively through a VPN.

How do I set this up?
 
Staying legit free, your best bet is probably Hyper-V (free on Win10 Pro and Enterprise) if you're looking for that enterprise-type stuff. Otherwise, if you just want virtualized OSes, what about VirtualBox or an equivalent solution?
 
Doesn't have to be free, necessarily. However, the VMware licensing seems fairly expensive. Looking at VMware Workstation to see if it fits the bill.
 
Player is free for personal use. Each instance can only support a single guest, but it is possible to launch multiple instances (and therefore guests) simultaneously. By the comparison at VMware's site, it supports the same guest CPU and RAM specs as Pro.

Any VPN setup would be within the guest itself.
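If you go that route and want both guests to come up without clicking through the Player UI each time, something like this little launcher works. It's just a sketch: the Player install path and the .vmx paths are placeholders you'd swap for your own, and I haven't double-checked the exact default path on a current install.

```python
import subprocess

# Assumed default install path for VMware Player on Windows; adjust to your system.
PLAYER = r"C:\Program Files (x86)\VMware\VMware Player\vmplayer.exe"

# Hypothetical guest definitions -- point these at your real .vmx files.
guests = [
    r"D:\VMs\win10-open\win10-open.vmx",  # the guest with the direct connection
    r"D:\VMs\win10-vpn\win10-vpn.vmx",    # the guest that lives behind the VPN
]

# Each call opens a separate Player instance with its own guest.
for vmx in guests:
    subprocess.Popen([PLAYER, vmx])
```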
 
This might be an option. I'm just not sure...

In theory, I think it'd work, but it needs to be 100% stable, without memory leaks or anything like that. I'm just not sure...

This is a setup that will basically never be turned off.

Original plan was for three VMs: Windows 10 open host, Windows 10 VPN, and some type of Linux as a Samba share host for the big datastore.
 
I personally wouldn't suggest VMware Workstation for something like that. And last I knew, VMware Player was read-only, meaning every time the VM went down it would reset back to where it started and lose all data from while it was online (software, patches, etc.). Even a reboot wouldn't work, as it would revert the VM during that time.

Hyper-V is decent, and I've personally found it more reliable than Workstation. But as with Windows in general, and Windows 10 specifically, just expect it to randomly patch and reboot itself regardless of what you want it to do. At least on Server 2016 and up you can run sconfig and set updates to manual; then it only tries to reboot on you after you start the update process, so you'll at least have an idea it's coming.

It more or less sounds like you want to set up something for production, so I'm not sure you'd want to use Windows 10 as your host for this. My two picks would be to just run Hyper-V on Server 2016, or take a look at the free version of Proxmox: https://www.proxmox.com/en/downloads They don't suggest using the free version for production, but a single-socket subscription is roughly $100 a year and comes with some support, so it might not be too bad of an idea. It's easy enough to download, install, and see if you like it at all before you worry about committing to it.

You could look at the cost of the cheapest ESXi license as well, but I'm not sure how much that goes for. You don't need the likes of HA, so the most basic license would probably fit the bill. That said, it might end up costing more than just getting another desktop to run this and putting both installs on bare metal. That, or save the TR system for bare-metal Windows 10 and install ESXi on something else, because you don't need the high vCPU count for the two VMs you're going to put on it.
 

Yeah man. So, I currently have several machines running. I'm trying to consolidate into one box. Actually, three chassis on one rack, but two of them will be JBOD cases. I'm getting eaten alive by power costs, noise, and heat. I'm not opposed to running Windows Server, but I currently have two Windows 10 licenses. I'm getting tired of throwing money at this project. I probably have a Honda Civic sitting on the damn rack by now.
 

If it's for home use, then sure, go ahead and go that route. It will work fine. Just expect there to be downtime, because Windows 10 will do whatever it pleases. At home, VMware Workstation would probably be fine as well, but it's definitely at the mercy of the host OS. I prefer Hyper-V over Workstation in a home scenario, but that's just IMO.

As for power, unless you're running 3 separate server systems, you might not see as big of a difference as you'd think. A typical desktop is only like 50W, and it sounds like you want to use that TR system, so it's still going to be sucking down power, putting out heat, etc. If you know which one draws the most power, try to eliminate that first. Sometimes it just doesn't make sense to put certain workloads on a VM system, and if you need more than 8 vCPUs, that's a very demanding workload.

Are you actually cutting down on the number of disks or no? I ask because 4-6 disks will double the power consumption of a standard desktop.
 
I personally wouldn't suggest VMware Workstation for something like that. And last I knew, VMware Player was read-only, meaning every time the VM went down it would reset back to where it started and lose all data from while it was online (software, patches, etc.). Even a reboot wouldn't work, as it would revert the VM during that time.

This has never been an inherent property of Player. Guests maintain normal state across reboots and host app sessions, just like they would with the other VMware desktop products or other solutions (e.g., VirtualBox, Hyper-V, etc.). It is possible to use Workstation Pro (and probably other VMware products) to create guests that do not maintain state after shutting down.

It more or less sounds like you want to set up something for production, so I'm not sure you'd want to use Windows 10 as your host for this. My two picks would be to just run Hyper-V on Server 2016, or take a look at the free version of Proxmox: https://www.proxmox.com/en/downloads They don't suggest using the free version for production, but a single-socket subscription is roughly $100 a year and comes with some support, so it might not be too bad of an idea. It's easy enough to download, install, and see if you like it at all before you worry about committing to it.

You could look at the cost of the cheapest ESXi license as well, but I'm not sure how much that goes for. You don't need the likes of HA, so the most basic license would probably fit the bill. That said, it might end up costing more than just getting another desktop to run this and putting both installs on bare metal. That, or save the TR system for bare-metal Windows 10 and install ESXi on something else, because you don't need the high vCPU count for the two VMs you're going to put on it.

Not a bad idea. Running on a server would probably help with uptime if that's important and/or you're worried about contention while using your desktop. The desktop VM products are generally better for more transient guests.

I've used the free version of Proxmox for years (mainly Linux guest VMs/containers) and it's been great. IIRC you can get a lab subscription for ESXi that gives you most everything for ~$200/year, should the base free product not be good enough.
 

I'm reading the wiki, and they basically spell out how to use guests in VMware Player that reset on power down. That might be where I'm remembering that functionality from: someone had prebuilt VMs that were configured that way and said to just run them in Player and they'd revert themselves after each use. The first couple of versions wouldn't let you create your own VMs, so that might be why I associate those two things.
 
If it's for home use, then sure, go ahead and go that route. It will work fine. Just expect there to be downtime, because Windows 10 will do whatever it pleases. At home, VMware Workstation would probably be fine as well, but it's definitely at the mercy of the host OS. I prefer Hyper-V over Workstation in a home scenario, but that's just IMO.

As for power, unless you're running 3 separate server systems, you might not see as big of a difference as you'd think. A typical desktop is only like 50W, and it sounds like you want to use that TR system, so it's still going to be sucking down power, putting out heat, etc. If you know which one draws the most power, try to eliminate that first. Sometimes it just doesn't make sense to put certain workloads on a VM system, and if you need more than 8 vCPUs, that's a very demanding workload.
Home use with several users. Transcode duty on the CPU - which is very, very heavy. I needed more CPU, as it were.

I agree that the power savings don't appear to be much - I'll probably save about 150W with the TR, on average. That's about 100 kWh per month, plus its contribution to the demand charge.
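For anyone checking the math, the kWh figure falls straight out of the wattage. Quick sketch below; the $/kWh rate is a placeholder you'd swap for your utility's actual rate:

```python
# Rough monthly savings from shaving ~150 W of continuous draw.
watts_saved = 150
hours_per_month = 24 * 30            # ~720 h for a box that never turns off
kwh_saved = watts_saved * hours_per_month / 1000
print(f"{kwh_saved:.0f} kWh/month")  # ~108 kWh, in line with the ~100 kWh above

rate_per_kwh = 0.15                  # assumed $/kWh; plug in your own rate
print(f"~${kwh_saved * rate_per_kwh:.2f}/month before demand charges")
```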

I was running the major datastore and transcode on a FreeNAS box. However, I'm trying to consolidate rigs, implement hardware RAID, tremendously expand the datastore, etc., all at once. ZFS does a lot very well; however, I'm going another direction.

So, since I've gotten some very thoughtful responses, let me share:

I've got three Supermicro chassis: 12-bay, 16-bay, and 24-bay, LFF size. I've taken the Xeons out of the boxes. After abandoning that gear, I was running a 3570K for transcode (good for 4 streams) and ZFS management. I used another AMD-based machine (forgot exactly what ...an A10-something) for VPN connectivity, and finally managed it all with my workstation, which is a 1950X TR. I didn't use the workstation all the time, of course.

The new TR stuff is in the 24-bay chassis. It could accommodate the larger Noctua cooler - a necessity. It can also use ECC RAM, which the 3570K could not, and it has a ton more lanes than the 3570K. I've got two RAID cards with batteries; each is 24-port. The SAS2 backplane I just installed is also an expander, so plenty of capacity here. All told, I've got 52-drive capacity, which I'm filling with HGST 10TB enterprise drives. Currently, I'm sitting on probably 60TB of data - maybe 70TB.

I'll be putting the dumb board - whatever the power-on call is - in both of the other chassis when needed.

So. That's the deal.
 
Yeah, basically those HDDs are going to kill you, and there's not much you can do about it. Those enterprise drives are likely 10W a pop from the wall, so 52 x 10 = 520W, and there will be a non-zero amount of power for the backplane, the RAID card, etc. For transcoding, I don't actually expect that TR system to be much more power efficient, if at all. The little 3570K is only like a 65W part if it wasn't overclocked, and the board it was in was likely WAY less power hungry; there are definitely a lot of boards where the southbridge I/O alone might eat 25W. TR is going to give you a lot more transcoding power, which I assume you want in order to run more streams; otherwise it's not really a better bet for 24/7 usage than a tiny little 3570K. The CPU could be more efficient, but the total system, not so much. I'm not sure that once you kit out that system with those backplanes you'll end up saving much power, but you should get more performance overall.

Another point of consideration: don't get too crazy on memory. Each DIMM is about ~4W or so just for being in the system, so if you populate 16 slots that's another 64W just to idle and do nothing, and if you're actually using the memory the power only goes up from there. Network cards can vary wildly; I've found some that pull 25W (quad port) and some that pull 4W (single or maybe dual port).

From what I've read, TR can use ECC memory, but it only alerts you to errors and doesn't actually correct them. So better than nothing, but not really true ECC support. As for lanes, I doubt you were bandwidth starved before, because even PCIe 2.0 x8 is 4GBps, and you're not hitting that with spinners.

One of the best insights I can give you came from one of our storage vendors: in virtualization, just consider all traffic to be an I/O blender. If you need absolute disk performance, then you don't really want to run it through a VM. For example, I have 22 7K drives in one of my production SANs. I see approximately ~600MBps at 100% read, and I'm not sure I can give you a good write number off the top of my head. I would have thought it would be a bit more than that, but through virtualization I can clearly see that the block sizes are not scaling like you'd want them to. So I'd guess that even if you were striping across all of those disks, you might get ~1.2GBps reads in a fully sequential workload if you're using RAID6. And with that many drives, you probably only want about 48 live and about 4 hot spares for failures.

The general rule of thumb I was told for IOPS is:

7K RPM: ~80 IOPS
10K RPM: ~120 IOPS
15K RPM: ~180 IOPS

So 48 x 80 = ~3,840 IOPS.

That's actually quite a bit of performance, and for a 70/30 read/write workload it should be more than enough to stream anything you want while writing data back to disk. That said, a couple of SSDs will still destroy that, so if possible it's honestly worth tossing in an SSD for your VMs' root drives and leaving the big disks strictly to the data pool. The thing with RAID is that as soon as you start writing to disk, write amplification is a real thing; I want to say it's about a 6x write penalty for RAID 6 (a 64KB change causes 384KB of disk I/O).
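To put rough numbers on that, here's the usual front-end vs. back-end IOPS arithmetic using the rule-of-thumb figures above; the 6x RAID 6 write penalty is the estimate from this post, not a measured value:

```python
def usable_iops(raw_iops: float, read_fraction: float, write_penalty: float) -> float:
    """Front-end IOPS the array can deliver once the RAID write penalty is paid.

    Each front-end write costs `write_penalty` back-end I/Os; reads cost one.
    """
    write_fraction = 1.0 - read_fraction
    return raw_iops / (read_fraction + write_fraction * write_penalty)

raw = 48 * 80                                                 # 48 live 7K drives at ~80 IOPS each = 3,840
print(usable_iops(raw, read_fraction=0.7, write_penalty=6))   # ~1,536 IOPS at a 70/30 mix
print(usable_iops(raw, read_fraction=1.0, write_penalty=6))   # 3,840 IOPS for pure reads
```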

If you're going to use Windows 10 as the host for that guy, you definitely want to make sure you're using the maximum cluster size possible. Disk performance basically works like this: cluster size x IOPS = throughput. Writing one cluster is one I/O, so with 4KB clusters you'd get 4KB per I/O, and in your configuration you'd be looking at about 15,360KBps, or ~15MBps, of actual real-world bandwidth if everything is working against you (think small file reads and writes). Thankfully, Windows can combine multiple I/O operations, so on bare metal it's not uncommon for Windows to write up to 1MB (I think the max is 32MB) of data all at once. That still only counts as 1 I/O, so now we're talking about 3,840MBps. Sounds great, right up until you remember that write amplification problem: cut that number by 6, and now it's more like ~640MBps. And that assumes a perfect 100% write workload with no reads and no seek time; throw in either of those variables and watch performance go down.

Long story short, I assume most if not all of your data is probably > 1MB, so on your RAID controller just pick the biggest block (stripe) size it allows, and then do the same for the cluster size when you format the disk in Windows. Make sure the cluster size divides evenly into whatever block size you set (probably 512KB): a 64KB cluster fits exactly 8 times into a 512KB block, so everything stays aligned. Using a 64KB cluster definitely takes an overhead hit, because a 1KB file will use 64KB of space, and it will also cause a 64KB read and a 64KB write whenever that cluster is touched, so it's not ideal if you have small files. If the entire disk is going to be hosting Linux ISOs at like 4GB a pop, then that's hardly a concern of yours, and you want the biggest block size to keep the number of IOPS down; fewer IOPS means more throughput. In my real-world scenario I typically see around 200KB I/O requests, so with ~1,800 IOPS I max out at around ~360MBps of total I/O. Sometimes I see less, but there are definitely other times it could be double that when copying large chunks of data.
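Same arithmetic in one place, as a sanity check: throughput is just I/O size times IOPS (divided by the write penalty when writing), and the stripe/cluster choice only needs to divide evenly. The 512KB/64KB figures are the example values from above, not a recommendation for your specific controller:

```python
def throughput_mbps(io_size_kb: float, iops: float, write_penalty: float = 1.0) -> float:
    """Approximate throughput in MBps for a given I/O size and IOPS budget."""
    return io_size_kb * iops / write_penalty / 1024

iops = 3840
print(throughput_mbps(4, iops))                      # ~15 MBps with 4KB clusters
print(throughput_mbps(1024, iops))                   # ~3,840 MBps if Windows coalesces to 1MB I/Os
print(throughput_mbps(1024, iops, write_penalty=6))  # ~640 MBps once RAID6 write amplification hits

# Alignment check: the cluster size should divide evenly into the RAID stripe size.
stripe_kb, cluster_kb = 512, 64                      # example values from the post
assert stripe_kb % cluster_kb == 0                   # 64KB fits 8 times into 512KB
```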

The biggest issue with virtualization is that your VM might be writing out its data in 1MB chunks, but the hypervisor might only be writing those out as 64KB chunks. I don't have enough experience with Windows to tell you how well it does or doesn't handle that, and I'm sure there are a ton of variables, especially if you're talking about a Linux guest inside a VM that's passing write commands up to the Windows host. The problem is that Linux could intelligently schedule its reads and writes, but Windows won't know what it's doing, so it could buffer a bunch of writes, start writing them out, and then get a bunch of read requests. Those might be delayed and cause slowdowns in read latency, or it might just decide to process them in the middle of writing, and now you have a 50/50 read/write mix, which means everything happens slower than if the I/O could have been handled intelligently. If you then toss a couple of other VMs on there and Windows Defender kicks off and starts scanning a VM, it might start competing for IOPS with your streams.
 

I'm still digesting - or trying to digest - all of what you've said.

Without getting into it, for near-equal performance for my purposes, I would've had to run four 3570Ks, all OC'ed. I was using the one as a proof of concept, but my use case exceeded its abilities. I specifically used THAT one because it sipped power. As for RAM, all my other boxes were using 4x8GB DDR3 stuff; this one is using 2x16GB of ECC DDR4. It's the optimum performance/utility/cost at the moment. Would it have been awesome to run an EPYC system? Yes, but that was a bridge too far for the project. As for the spinning drives: I know they're power hungry. I went with the highest-density drives I could afford to mitigate that as much as possible. The old host uses a bunch of 4TB drives.

I have also considered an SSD to host the VMs, while running ESXi itself (ignoring the cost issues, for the sake of discussion) from a USB drive. I don't think it pages to its host drive, so I figured that wouldn't be a problem. Even a cheap 256GB SSD should be enough for all the VMs, I would think.

In my use case, there will be relatively few small files and many large files - large being multiple GBs. I originally intended the datastore to be on a Linux-based host VM; I hadn't decided on the flavor. In fact, my ORIGINAL plan was for three VMs: Windows Server (transcode host), Windows 10 (VPN host), and Linux (datastore).

I will be using RAID 6, probably with some hot spares. The entire rig is not populated yet. The first tranche of 10 drives is ready, however.

Obviously, you all are VASTLY more informed about this business than I am. This is my very first virtualization experience. It's been a very expensive lesson.
 
If you OC'ed that 3570K it would destroy the perf/watt, so TR is likely a better bet for more users. Since your requirements call for more hosts, I don't think it's a bad decision at all: you need more CPU, and TR will give it to you. With only a pair of DIMMs, the memory will hardly be noticeable next to everything else in that system; it's mainly when you start populating all of the slots that you see a noticeable difference from where you started. The drives you picked should be a good compromise of price vs. space; I'd bet that if you tried to push higher, the cost of the drives would outweigh the power savings.

A lot of big enterprise environments run ESXi on SD cards, so that's certainly an option (vendors like Dell have built-in SD card slots in their servers for this purpose). Proxmox, I know, will just take a small chunk of whatever drive you put it on and leave you the rest to load VMs on. I can't remember whether VMware will give that space back if you just toss it onto an SSD (for some reason I don't believe it will). A cheap SSD is definitely enough to get a few guests on there. You should have an onboard RAID controller, so I'd probably just grab a pair of 256GB SSDs and mirror them in RAID 1 on that; no sense having an SSD die and having to rebuild all of the VMs.

Making Linux the data host is perfectly fine; I don't think it's going to be a big deal whichever way you go. (I tend to prefer Windows for SMB shares, but if you're going to be doing transcoding, then people aren't accessing that share directly, so it really comes down to how you connect your transcode host to the datastore.) Just keep in mind that with virtualization you have multiple layers of overhead: the SAN/DAS level, the hypervisor host, then the VM. Each level needs to be configured appropriately, otherwise performance is going to suffer. Linux should be able to block-align, but I can't guarantee that the VM file on the hypervisor layer will end up perfectly aligned; in theory it should, but you kind of get what you get when you virtualize, so there's no real guarantee that I know of. If the only purpose the Linux host will serve is so that the Windows transcode server can map an SMB share to it, then I'd probably just skip that step altogether: use Windows Server, throw the disks into it, and let the software work directly from disk. That cuts out a lot of unnecessary configuration, and you're not eating CPU cycles to transfer data over the internal network, which would be much slower than direct access in the same VM, use more processing, and add a lot of complexity and parts that could stop working. If it's going to be 50/50, or you'll have clients accessing the datastore directly, then it could make sense to separate those out and deal with the overhead.

I don't know if there's a rule of thumb for hot spares; I've seen enclosures with 14+ drives and only one spare. The biggest issue you run into with disks is that if you bought them all at the same time and they've run in the same system for several years, when they finally start dying they tend to go around the same time. So one will fail, which triggers an automatic rebuild; that rebuild puts extra stress on the remaining drives, which causes another one to fail while it's working on the first rebuild. That's basically why RAID 6 exists. I'd think 1 per 12 should be enough, but you could certainly look to see if someone has better guidance.

So if you're only putting in 10 drives, hopefully you know that a lot of systems don't handle expanding RAIDs well, if at all. Because you're planning on using hardware RAID, I would fully expect that once you set up that disk pool there's no expanding it. So when you put in another 10 drives, you'll need to create another RAID6 container, refresh to see the new LUN in ESXi, format it in ESXi, then create a new virtual disk and attach it to the Linux host. Then in Linux you'd need to make sure it shows up, format that volume, and finally add it as a mount point somewhere in the system. If you're using LVM you might be able to just attach it to a pool and expand it, but that can come with caveats, like not being able to remove it from that pool later if you wanted to. I'm definitely not the person to ask about that, because I always just keep things separated and don't try to get too fancy. In Windows I'd keep them as separate drive letters; in Linux you can just mount that disk as a folder somewhere in the tree and you'll have all of that space inside that folder, so if you did need to unmount it later you can take it out in smaller chunks. I think that's where using ZFS could potentially benefit you, even in a VM: the ability to add/remove disks from the pool at will.

There is another option called RDM, or raw device mapping. That might be worth a look to see if you can just directly assign storage from the ESXi host to the VM. That would skip the overhead from the virtualization layer, which has its pros and cons. Since you're basically trying to build an entire enterprise infrastructure on a VM stack, you'll probably need to do some trial and error before you start putting a bunch of data on it to see what makes sense. You should have enough parts and pieces already purchased; you just need to start playing with software, trials and whatnot, to see what works.

If you're going to be using OpenVPN for the VPN host, I'd suggest using something like pfSense rather than Windows. It has much better integration, and there's an add-on package that will take your client config and put it into a one-click bundle; all you have to do is install it on the client and log in. The only built-in option I see for Windows 10 is PPTP, which is like 25 years old and completely insecure at this point - I'm not even sure why that still exists, tbh. Server 2016 does have better options; I believe the actual correct option right now would be L2TP/IKEv2.
 
I'm using ASR-72405 RAID cards. I plan(ned) to attempt to add another drive set to the existing array. I'm curious whether you think that won't be possible?


Moreover, I was planning to use an SMB share from the Linux VM to the Windows VM, so that if/when the Windows VM gets screwed up, the datastore is unaffected in the other VM. Both Windows-based VMs would need to read/write to the datastore.
 
A quick Google seems to suggest it's definitely possible to do. I see mixed takes on how well it works.

https://hardforum.com/threads/raid-6-disk-swap-to-increase-capacity.1716704/post-1039151986

Someone about 8 years ago on [H] said they wouldn't suggest it; I've seen other people say it works fine. I think the biggest issue in a home lab is how you actually go about backing up all of that data - if that volume is your only copy, then there's no way I'd even suggest trying to expand it. The good thing is that right now everything is blank, so what I would do is take 5 disks, set them up, and get the OS configured how you want so you can throw a couple of test files on it. Then you can go back into the controller and add other disks, which would trigger a RAID rebuild, and you can make sure that however you're doing it keeps the data on disk and isn't just wiping the drives and reconfiguring them. There is mixed information about how many disks you can add at once, so it would be a great time to see if you can just dump 5 additional disks into the array, or if there's some limit. I can't say I have much experience with adding to an existing array, because you generally don't buy a half-full DAS and put more drives in later, and if you're using a full-blown SAN it's all software anyway, so those are designed from the ground up to scale across whatever disks you put into them. The other thing to consider is that rebuild times could become ridiculously long. Since you're using 10TB drives, triggering an expansion from, say, 10 to 20 disks could quite literally take a month to move all the data around: because of the write penalty, and because the existing disks are already full as far as the controller is concerned, it's going to have to shuffle that data around more than once to fully stripe the parity bits.

https://www.memset.com/support/resources/raid-calculator/

A decent calculator, but it doesn't go big enough for what you're using. The general consensus seems to be that if you did trigger a rebuild of a failed disk, it's going to take a significant amount of time. Their default 10MBps is way slower than reality, but if you're still trying to use the array while it's rebuilding, it won't be rebuilding at 100MBps+ either. So keep in mind, once again, that it's probably a bit easier to tackle problems if you only stripe 10 drives than if you were trying to stripe 20. You certainly have a few options: if you need large stripes, then RAID60 is probably a good idea. It's basically a pair of RAID6 arrays striped together, where each set has its own pair of parity drives. So in a 12-drive config you'd have two 6-disk arrays, each with 2 disks of parity - that's 4 data plus 2 parity, times 2. You'd end up with about 66% space efficiency: 8 data drives and 4 parity drives. You'll be playing the efficiency-vs-security dance as you go. It's absolutely not worth doing RAID6 in its minimum configuration of 4 disks: that's only 50% efficient, which is what RAID1 gives you, except RAID1 is a full mirror with none of the massive write overhead.

So with 10 disks you're kind of in the middle of what to do. I think that's probably a bit small for RAID60, but if you wanted to expand later you could go up to 16 disks, and that would give you a good 75% efficiency: 12 data disks, 4 parity disks, and then I'd still keep two hot spares. You'd be using about 18 disks total and getting ~100TB unformatted out of it. But in the meantime you'd be at the worst-case scenario, and at best could do an 8-disk RAID60 and keep a hot spare.
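If it helps to see the space-efficiency trade-off laid out, here's the arithmetic being described (each RAID6 set burns two drives on parity; RAID60 is just two or more of those striped together):

```python
def raid6_efficiency(disks_per_set: int, sets: int = 1):
    """Data disks, parity disks, and space efficiency for RAID6 (sets=1) or RAID60 (sets>1)."""
    parity_per_set = 2                   # RAID6 always spends two disks per set on parity
    data = (disks_per_set - parity_per_set) * sets
    parity = parity_per_set * sets
    return data, parity, data / (data + parity)

print(raid6_efficiency(6, sets=2))   # 12-drive RAID60: (8, 4, 0.67)  -> ~66% efficient
print(raid6_efficiency(8, sets=2))   # 16-drive RAID60: (12, 4, 0.75) -> 75% efficient
print(raid6_efficiency(4))           # minimum RAID6:   (2, 2, 0.5)   -> 50%, not worth it
print(raid6_efficiency(22))          # 22-drive RAID6:  (20, 2, 0.91) -> the plan further down
```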

Another thought: the only reason for doing RAID60 is to create a larger volume pool. It does help somewhat with space efficiency on the drives themselves (you don't want to go above 85% usage, but if you cut the pool in half it can be hard to balance each side). Like I said before with Linux, it's as simple as just tossing another mount point somewhere in the tree. You can even mount a new array inside another array's directory structure, and anything below that point will be on the new array. An example would be this:

Array A
/opt/storage

Array B
/opt/storage/windows

So /opt/storage could have folders like /opt/storage/linux, /opt/storage/BSD. Those would store their data on array A.

If you copied a file into /opt/storage/windows or /opt/storage/windows/server2012/ then in both cases that data would end up on Array B, because there is a mount point for that entire directory structure starting at /opt/storage/windows.

If you make your SMB share /opt/storage, then you'll see folders for linux, windows, and BSD all in that same view. The files themselves are on different volumes, but it will be presented as one entity. The main downside to doing it that way is that if several transcodes are working out of one particular directory, then you could run into performance issues.

The point of all of that is that it might be simpler in the long run to just make smaller RAID groups - so do RAID6, but only across 6 or 8 drives. As long as you can find ways to split up the data, if something happens to a volume you only have to deal with that subset of data, not the entire array. At a certain point things just become too unwieldy to tackle, and with 10TB drives that only transfer at like 200MBps, one drive pass is like 14 hours in a best-case scenario. You'll never get 200MBps out of a single disk in your RAID volume, so one pass is probably more like 2 days. The other issue you can run into is how to actually juggle all of the disks. I've run into that before, where you only have 6 ports on your controller but already have 5 drives on it; if you need to plug in 2 more just to do a transfer, you can't, because there's no way to move data around to free up enough ports for the new disks.
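The 14-hour figure is just capacity divided by sustained transfer rate; rough sketch below, where the 200MBps best case and the slower in-array rate are both ballpark assumptions:

```python
def drive_pass_hours(capacity_tb: float, mbps: float) -> float:
    """Hours to read or write an entire drive once at a sustained rate."""
    return capacity_tb * 1_000_000 / mbps / 3600

print(drive_pass_hours(10, 200))   # ~13.9 h best case for a 10TB drive
print(drive_pass_hours(10, 60))    # ~46 h at an assumed in-array rate, i.e. the "more like 2 days" above
```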

I give you props for trying to manage all of those disks, there's definitely a lot to consider so you'll really want to think about how you can handle DR, data migration, expansion, rebuilds, performance, flexibility, etc.


The SMB share should work fine; just know that it adds additional latency and overhead. I don't think you'll have too much of an issue transcoding even 10 streams over SMB, though - it's generally quite robust. If anything, it's the I/O blender on the other side, when you start pulling random bits of data from all over the file system, that causes you to need IOPS. Spinners are just really slow and have poor random performance, and the problem is they don't get faster at random reads/writes because they spin at the same RPM as drives from 20 years ago. You get increased sequential performance because more bits pass over the head in a given amount of time, but seek time remains unchanged, so as soon as the head has to jump out of line it's as slow as ever.
 
RAID 60 is too costly in parity for me. In my head, I'm thinking 22 drives in RAID6 with 2 hot spares. Worst case, I can pop the case and connect to the RAID controller directly for mass data migration. I've got empty chassis, so it wouldn't be impossible - just a pain in the ass.

I used a single ZFS volume/pool of 16 drives in RAID-Z2 (the ZFS equivalent of RAID6). It worked OK.

The above-mentioned setup would allow for 200TB raw across 20 data drives, plus 2 parity drives and 2 hot spares. That will be enough for now. Ideally, I'd only ever need to do one array expansion, from 10 to 24 drives.
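Quick math check on that layout, assuming the 10TB drives mentioned above:

```python
bays, hot_spares, parity = 24, 2, 2    # 24-bay chassis: RAID6 set of 22 drives plus 2 spares
drive_tb = 10
data_drives = bays - hot_spares - parity
print(data_drives, "data drives ->", data_drives * drive_tb, "TB raw")   # 20 data drives -> 200 TB raw
```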
 
Update:

Installed ESXi 6.7 and one Windows 10 VM, but the hypervisor sees my RAID card as an adapter and doesn't see any storage attached to it. WTF. Probably a firmware or driver issue or something. I hate messing with this stuff.
 
If you're not a g@m3r, I recommend switching the base OS to Linux and using KVM. It's robust and fast compared to the likes of VMware. I use Linux as my primary workstation and virtualization system. Works well. And then VMware doesn't get to "drive" you: with Linux, you have more control (fewer forced updates, etc.).

Linux KVM is incredibly hardware agnostic.

(btw, I do game on my Linux workstation, YMMV)
 
Update:

It took me 15 hours of work, but I have finally made progress. The firmware for the RAID card is updated, the ESXi driver for the card is current, and a whole host of BIOS settings on the motherboard have been changed. That was a real pain in the ass - the documentation was inaccurate, and the UEFI uses nothing but acronyms.

I learned a whole lot. I mean a whole, whole lot.

The result: I can now see the array in ESXi and have built a datastore. And now... to bed.
 