A build for VM work

kimoh10 (n00b, joined Jul 6, 2017, 6 messages)
Hi geeks :)

I'm looking for a build to help with my work as a penetration tester. It needs to support running multiple VMs (virtual machines) at once, with heavy workloads on some of them.

I'm especially confused about the CPU. What CPU do you think is suitable for this kind of work (running multiple VMs)? I googled a lot about this subject but didn't find a sufficient answer.

My budget is about $2k.

Some requirements to note:
A good GPU for some password cracking.
M.2 + SSD for storage.
It's better if the box is not big, because it's for the office, not for home. Smaller is better (if possible).

Some questions:

Is M.2 worth it as a storage solution, or is a SATA SSD enough, without that much of a difference (in this case, with VMs)?

There is another big question: is an Nvidia GPU going to work properly in a VM, or is there some GPU that supports working with VMs?


Draft of potential build:
Core i7-7820X (8 cores/16 threads; it has the highest votes on userbenchmark.com and really good single-core and multi-core benchmark scores: http://cpu.userbenchmark.com/Intel-Core--i7-7820X/Rating/3928)
GTX 1080 (or lower, for budget reasons)
960 Evo NVMe PCIe M.2 500GB
Samsung 850 EVO 1TB
Corsair Vengeance LPX 32GB (2x16GB) DDR4 DRAM 3000MHz
 
Question - what software are you planning on using as your hypervisor for the VMs?

If it's VMware ESXi, then be aware you'll likely have to hand-configure the VMX file in order to get any consumer-class GPU to pass through to a VM, and the GPU will not be a 'shared' resource in that case, since it would be assigned to a specific VM.
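For reference, the usual hand-edits look something like this (a minimal sketch; pciPassthru0.present, pciPassthru.use64bitMMIO, and hypervisor.cpuid.v0 are the commonly cited VMX keys, but treat the exact set as illustrative, since it varies by host and GPU):

    pciPassthru0.present = "TRUE"
    pciPassthru.use64bitMMIO = "TRUE"
    hypervisor.cpuid.v0 = "FALSE"

The first two wire the PCI device into the VM (the device also has to be marked for passthrough on the host first), and the last one hides the hypervisor from the guest, which consumer NVIDIA drivers are known to check for.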

Also, a note for ESXi: storage performance is only truly acceptable when paired with a cache-equipped RAID controller from VMware's HCL, and last I checked I couldn't convince ESXi to boot from an NVMe drive (though that may have changed).

Additionally, if you're really going for a multi-VM box, then your limiting factor is likely going to be RAM, so 32 GB might not be enough.
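A quick back-of-the-envelope, with the VM counts and per-VM sizes as purely assumed examples:

    # Rough RAM budget for a multi-VM host; every number here is an
    # assumed example, not a recommendation.
    host_os = 4                         # GB kept free for the host OS
    vms = [4, 4, 4, 4, 2, 2, 2, 2]      # per-VM allocations in GB
    total = host_os + sum(vms)
    print(f"Committed: {total} GB")     # 28 GB -> 32 GB leaves ~4 GB spare

Eight modest lab VMs already eat most of a 32 GB kit; a couple of heavier guests on top and you're swapping.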

Lastly, your i7-7820X is *brand new*, and it's possible it may have compatibility problems with bare-metal hypervisors until they are updated.
 
M.2 is a waste for that. If you are doing pentesting, heck even password cracking, none of that will be HD intensive. I'd save the $$ and just get a regular SSD and then an external USB drive (4 TB) to store the data you need. Heck, even the VMs will run fine from a "regular" hard drive.

As was said, the GPU will most likely be useless in a VM. You'd be better off password cracking on a physical box if you want to use a GPU.

VMware Workstation (my guess) or ESXi? Big difference. Everything, sans the GPU, *should* work with Workstation. ESXi is a different beast. Also, how many VMs do you envision running at once, and what flavor of OS will they be running? I have 32GB of RAM and did not have an issue running 6-8 VMs for a lab environment while doing some practice. That said, they were not all hammering away or being hammered at once.

What does your current system look like? VMware does have a 30-day trial key for Workstation you can play with. If you have a decent system, you could start with that and an external USB 3 drive and see what your needs may be.
 
Question - what software are you planning on using as your hypervisor for the VMs?


If it's VMware ESXi, then be aware you'll likely have to hand-configure the VMX file in order to get any consumer-class GPU to pass through to a VM, and the GPU will not be a 'shared' resource in that case, since it would be assigned to a specific VM.

Also, a note for ESXi: storage performance is only truly acceptable when paired with a cache-equipped RAID controller from VMware's HCL, and last I checked I couldn't convince ESXi to boot from an NVMe drive (though that may have changed).

Additionally, if you're really going for a multi-VM box, then your limiting factor is likely going to be RAM, so 32 GB might not be enough.

Lastly, your i7-7820X is *brand new*, and it's possible it may have compatibility problems with bare-metal hypervisors until they are updated.

No hypervisor, just normal Windows as the host and VMware Workstation for running the VMs.

Thanks for your notes :)
 
Alright, well that minimizes some issues and you definitely don't have to care about the RAID controller anymore.

You likely won't be able to pass arbitrary GPU acceleration down to client VMs, though, so your GPU-accelerated password cracking would likely have to take place in the host OS. VMware Workstation supports GPU-accelerated graphics passthrough, but I don't know if it exposes enough of the GPU to allow arbitrary GPU acceleration via OpenCL or CUDA or other apps like that. It's honestly not a question I've ever even thought of.
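If you want a quick sanity check from inside a guest, here's a minimal sketch using Python's ctypes against the CUDA driver API (cuInit and cuDeviceGetCount are standard driver-API calls; the library filenames depend on the guest OS, and this assumes the NVIDIA driver is installed in the guest):

    import ctypes

    # Load the CUDA driver library; the filename depends on the guest OS.
    cuda = None
    for libname in ("nvcuda.dll", "libcuda.so.1"):   # Windows, then Linux
        try:
            cuda = ctypes.CDLL(libname)
            break
        except OSError:
            pass

    count = ctypes.c_int(0)
    # Both driver-API calls return 0 (CUDA_SUCCESS) on success.
    if cuda and cuda.cuInit(0) == 0 and cuda.cuDeviceGetCount(ctypes.byref(count)) == 0:
        print(f"CUDA devices visible to this VM: {count.value}")
    else:
        print("No CUDA-capable device is exposed to this VM.")

If that reports zero devices, no amount of tuning in the guest will make GPU cracking work there.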
 
M.2 is a waste for that. If you are doing pentesting, heck even password cracking, none of that will be HD intensive. I'd save the $$ and just get a regular SSD and then an external USB drive (4 TB) to store the data you need. Heck, even the VMs will run fine from a "regular" hard drive.

As was said, the GPU will most likely be useless in a VM. You'd be better off password cracking on a physical box if you want to use a GPU.

VMware Workstation (my guess) or ESXi? Big difference. Everything, sans the GPU, *should* work with Workstation. ESXi is a different beast. Also, how many VMs do you envision running at once, and what flavor of OS will they be running? I have 32GB of RAM and did not have an issue running 6-8 VMs for a lab environment while doing some practice. That said, they were not all hammering away or being hammered at once.

What does your current system look like? VMware does have a 30-day trial key for Workstation you can play with. If you have a decent system, you could start with that and an external USB 3 drive and see what your needs may be.

I think when dealing with files, especially inside VMs, the M.2 NVMe is going to show a big difference, since it's about 4 times faster in reads and writes than a SATA SSD.

It seems that what I want to do, running a GPU in a VM, is not an easy task. It has some requirements in terms of the VM software type, and sometimes even needs a specific piece of hardware to support running GPUs efficiently across multiple VMs.

Thanks :)
 
This is actually my second time having part of this conversation today, so I'm just going to quote myself from another thread regarding the NVMe SSD:

Additionally, and I've said this before in other threads, the point of diminishing returns has easily been hit in terms of drive speed for consumer use. A huge bulk of the perceived speed increase in SSDs came from a reduction in access times versus conventional hard drives, not from the actual transfer time of data. Modern SATA SSDs are 4-5x faster than modern 7200 RPM HDDs (550+ MB/s vs 150 MB/s), but they are orders of magnitude faster in their access times (an 850 Pro has a 0.04 ms access time, whereas modern HGST 7200 RPM drives are in the 12 ms range, or 300x slower). The incredible reduction in seek time is what truly fueled the "life changing" experience of SSDs, since data from all across the drive was seemingly instantly accessible rather than having to wait on the drive to physically spin itself around to get data from different parts of the disk.

Moving to modern NVMe SSDs is a huge increase in the first metric - 2 GB/s instead of 550 MB/s - but that performance increase is *not* reflected in their access times. This is hammered home for me by the fact that StorageReview, one of my favorite websites for truly in-depth reviews of storage infrastructure, *didn't even bother testing access times* when they reviewed the 960 EVO drive. The access times on modern SSDs are already so close to zero as to be indistinguishable from each other, and a non-factor in their performance.

That's not to say NVMe drives aren't faster - they are, and the differences are measurable for sure. They just aren't as big a deal, and are *not* the second coming of the revelation that was replacing mechanical drives with SSDs.
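To put rough numbers on that, here's a toy model of 10,000 random 4 KB reads using the ballpark figures above (time per read is roughly access time plus block size divided by throughput; none of these are measured results):

    # Toy model: small random reads are dominated by access time.
    block = 4 * 1024                        # bytes per read
    n = 10_000                              # number of random reads

    drives = {
        "7200 RPM HDD": (12e-3,   150e6),   # ~12 ms access, ~150 MB/s
        "SATA SSD":     (0.04e-3, 550e6),   # ~0.04 ms access, ~550 MB/s
        "NVMe SSD":     (0.04e-3, 2000e6),  # same access time, ~2 GB/s
    }

    for name, (access, throughput) in drives.items():
        total = n * (access + block / throughput)
        print(f"{name:>12}: {total:7.2f} s")

The HDD comes out around 120 seconds, while the SATA and NVMe drives both finish in under half a second and within about 15% of each other: for small random I/O, the 4x throughput gap barely registers.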
 
This is actually my second time having part of this conversation today, so I'm just going to quote myself from another thread regarding the NVMe SSD:

What sinister said. The biggest bang with the M.2 is the superior transfer rate, not the access time.

Hey, it's your money. If you want to waste it on the M.2, go for it. Unless you are moving VMs between drives or transferring very large files, the M.2 is not going to make a difference.
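To be fair, that is the one case where the gap shows. A quick sketch, with the 40 GB image size as an assumed example:

    # Large sequential copy, e.g. moving a VM image between drives.
    size = 40e9                             # 40 GB image (assumed example)
    for name, throughput in [("SATA SSD", 550e6), ("NVMe SSD", 2000e6)]:
        print(f"{name}: {size / throughput:5.1f} s")

Roughly 73 seconds versus 20: noticeable when you shuffle whole VMs between drives, invisible in day-to-day use.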

As for spreading your GPU across VMs, that's not really a feature. You're not going to utilize the GPU across multiple VMs and gain performance. If you think that, you will be disappointed and upset that you wasted money on a high-end gaming GPU. If it worked, all of the miners would be doing it.
 
What sinister said. The biggest bang with the M.2 is the superior transfer rate, not the access time.

Hey, it's your money. If you want to waste it on the M.2, go for it. Unless you are moving VMs between drives or transferring very large files, the M.2 is not going to make a difference.

As for spreading your GPU across VMs, that's not really a feature. You're not going to utilize the GPU across multiple VMs and gain performance. If you think that, you will be disappointed and upset that you wasted money on a high-end gaming GPU. If it worked, all of the miners would be doing it.

What if I want to utilize the GPU in only one VM? Is it possible to connect the GPU to a single VM, even disconnecting it from the host, so the VM gets full utilization of the GPU?
 