Anyone running ESXi on an Intel NUC? Questions on Ethernet adapters and Wifi passthru

raiderj

Limp Gawd
Joined
Jun 21, 2011
Messages
340
I picked up one of the new Intel NUCs with an i5. Added 16GB RAM and a 120GB SSD. With a little customizing of an ESXi 5.5U1 install disk I was able to get it up and running smoothly. I'm intending for this to be a small, portable demo-type box.
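
For anyone trying the same thing: once the box boots, you can verify from the ESXi shell that the customized image actually carried the NIC driver and that the adapter came up. A minimal sanity check, assuming SSH/shell access is enabled (package names will differ depending on which driver you slipstreamed):

esxcli software vib list | grep -i net    # network driver VIBs baked into the image
esxcli network nic list                   # vmnic name, driver in use, and link status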

Has anyone else been using these boxes for ESXi? Curious if there are any gotchas or other items I should be aware of. Also, has anyone added in a wifi card and successfully passed it through to a VM? I'm considering using a VM as a hotspot.

How about the Thunderbolt-to-Ethernet adapter? I've read of people using it on a Mac Mini to get ESXi running. Curious if anyone has done the same with a NUC to get another Ethernet port on the box.

EDIT: Mistake, it's not a Thunderbolt connector on the NUC I have, it's just a Mini DisplayPort. Not the same thing, unfortunately. Guess that's out!
 
We played with them a lot at VMware - they work well, minus the limitations of the form factor (limited Ethernet). Less CPU power too, but that's not often a ~major~ concern.
 
Funny this popped up.
I just started a mod project with my set of NUCs.
I think I found a way to have two wired Ethernet ports and two drives (mSATA + SATA) in a single NUC. "VSAN love"

My chips and mods arrive Friday, I will post my findings this weekend....

I know you are asking about wifi and PCI passthrough; it "should" work, considering the i5 has VT-d.
I haven't tested the wifi adapter idea but that part is inexpensive enough to at least attempt a valid test.
My current goal is to get two wired Ethernet adapters inside a single NUC and I think I have it down.
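
If it helps anyone pricing out the wifi idea: the card first has to enumerate on the PCI bus before passthrough is even an option, and the actual passthrough toggle lives in the vSphere Client (host > Configuration > Advanced Settings / DirectPath I/O) followed by a host reboot. A quick, hedged check from the shell:

esxcli hardware pci list    # every PCI device the host sees; note the wifi card's address if it shows up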

Nick
 
Definitely update here with what you find! I know there's a USB 2.0 header inside, but I really don't see a way to utilize it without some cutting of the case. The best thing I could think of would be a different case bottom that screws in but exposes an Ethernet port and maybe a USB port. Personally I'd use the setup to have one NIC leading into pfSense as the WAN side, while the other would be LAN-facing.
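
Rough sketch of the pfSense layout I have in mind, from the ESXi shell; vmnic1 and the "WAN" port group name are just assumptions for illustration:

# keep vmnic0 on the default vSwitch0 for LAN/management
# give the added NIC (assumed vmnic1) its own vSwitch as the pfSense WAN uplink
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitch1 -p WAN
# then attach the pfSense VM's WAN vNIC to the "WAN" port group and its LAN vNIC to vSwitch0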
 
I've been looking at doing this since the new Mac Mini doesn't look all that promising. It looks like you could use the mPCIe slot to add Ethernet, or if you get a Thunderbolt-capable NUC, add Ethernet that way as well. The first way will require a new case or some cutting.
 
I see a couple mPCIe Ethernet adapters on Amazon. Looks like that'd work, but making ESXi recognize it would likely require a VIB.
 
Yeah, most are using Realtek chipsets, but I did find one using Intel a while back when I first started looking into this; I can't seem to find it now, though.
 
What hardware did you use for the second NIC? I assume a half-length mPCIe adapter? Where did you mount the adapter?
 
Syba SD-MPE24031.
It is a Realtek adapter so you need to do some VIB work with the 5.5u2 build.
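
For anyone following along, the usual route is to drop the host's acceptance level and install a community Realtek driver VIB (net55-r8168 or similar; the exact file name, version, and datastore path below are placeholders), then reboot. Treat this as a rough sketch rather than gospel:

esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /vmfs/volumes/datastore1/net55-r8168-<version>.vib
reboot

You can also slipstream the same VIB into the install ISO ahead of time, which sounds like what the OP did for the onboard NIC.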

Mounted in the half-length slot with a Samsung mSATA drive above it.
It did take some "tweaking" to get it all screwed down.
I almost considered getting a 120GB half-length SSD for the bottom and the larger NIC for the top slot... but I already had the full-length SSD, and the half-length SSD selection is limited.

Everything is mounted inside the NUC.
My first run has a network cable poking out of the side. When I get fancy with my cutting tools I'll see if I can mount the Ethernet port to the side of the case and post up a picture.
There is enough room for it; I just need to make it happen, then spend forever making it look good...

I must say though, I migrated six VMs to my NUC and powered down one of my 350-watt beast servers.
It's super nice going from 350 to 20 watts, even if the noise was contained within the garage... I'm going to play with VSAN this week, and if it works solidly for a week, I'll be able to power down another server. 600 watts down to 40, just in time to power up the Christmas lights... bleh.

More info...
If the PCIe port were larger than 1x I was going to play with a PCIe extender and throw in a 10Gbps adapter. BUT since the 1x slot can only do about 2Gbps, it's a waste. If you can find a super small 1x PCIe two-port Ethernet card (like a SYBA SD-PEX24041) you could mount it to the top of the NUC and have three Ethernet ports... I guess you could even do two of these cards like that. Convert the NUC to five Ethernet ports and one SATA port... They even have SATA-to-mPCIe adapters, so you could convert the NUC to have five SATA ports and one NIC.
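
If you do go the SATA-to-mPCIe route, the quickest way to confirm what the host actually enumerates (both the extra NICs and any added SATA controllers) is from the shell; just a sanity check, nothing exotic:

esxcli network nic list             # each vmnic with its driver and negotiated speed
esxcli storage core adapter list    # each vmhba, including any controller added via the mPCIe slot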
Fun times..

Nick
 
You're gonna have trouble with that AHCI controller eventually - queuing problems will knock nodes offline, so watch carefully.
 
Very curious to see a picture of your setup, especially how you have the network cable connected. There is very little room on any side of the case to stick in a cable!
 
You're gonna have trouble with that AHCI controller eventually - queuing problems will knock nodes offline, so watch carefully.

I'm not familiar with VSAN, but why does AHCI make a difference? Is the better option to use a SAS controller?
 
I'm not familiar with VSAN, but why does AHCI make a difference? Is the better option to use a SAS controller?

As per Duncan's blog: the AHCI bugs were fixed, however it's still not technically supported, and you won't get the best performance out of the AHCI controllers due to the I/O limitations of the controller/driver.
http://www.yellow-bricks.com/2014/09/13/vsan-with-ahci-controller-with-vsphere-5-5-u2/

Peter has also done some interesting things with Mac Minis; same core concept!
http://www.virtuallyghetto.com/2014/10/a-killer-custom-apple-mac-mini-setup-running-vsan.html
 
As per Duncan's blog: the AHCI bugs were fixed, however it's still not technically supported, and you won't get the best performance out of the AHCI controllers due to the I/O limitations of the controller/driver.
http://www.yellow-bricks.com/2014/09/13/vsan-with-ahci-controller-with-vsphere-5-5-u2/

Peter has also done some interesting things with Mac Minis; same core concept!
http://www.virtuallyghetto.com/2014/10/a-killer-custom-apple-mac-mini-setup-running-vsan.html

Bugs, yes. Limitations, no - there's still stuff that will keep that from working 100% properly.

I'm not familiar with VSAN, but why does AHCI make a difference? Is the better option to use a SAS controller?

Because AHCI isn't a bus protocol and has too many blocking commands. Various commands (like SMART, for instance) block the queues while they complete. When that happens, the disk group can drop offline due to the delay, which causes a failover/rebuild (which ups load significantly and can increase the chance of it happening again immediately).
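
One way to see that limitation without waiting for a failure is to compare the queue depth ESXi reports for AHCI-attached disks against a host with a real SAS HBA; purely a hedged sanity check, not a fix:

esxcli storage core device list    # look at "Device Max Queue Depth" per disk; AHCI ports typically report ~31-32, while SAS HBAs report far deeper queues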
 
Lopoetve:
I agree 100% that running a NUC isn't VMware-supported, but:

1: A NUC only has a single mSATA SSD and one SATA spinner, so SCSI commands shouldn't back up any controller's queue or cause it to abort/kick a disk from any array.
2: The NUC config for a home lab won't be running massive CPU- and memory-intensive VMs. The NUC's 16GB RAM and four-CPU-core limit will bottleneck before I see a VM pushing massive IOPS or MBps to disk. Network is limited to 1Gbps per flow on the two network adapters, so any disk queue issues will come from intra-host, intra-VM traffic.
3: This is a home lab setup with toys. If I were building for production it wouldn't be with Intel NUCs. Though I would consider it for vCloud crap, hah...

I think VSAN still needs a lot of changes before I consider it production-ready. Anything that asks you to turn off the cache on the spinning hard drives (every drive) needs work.

Thank you for the input though. I never think of these issues when it's not on production equipment.

To add: I knew Peter Bjork and William Lam were playing with Mac Minis and VSAN, and this is what made me try it with Intel NUCs. Same idea: smaller form factor, lower price, removable RAM and drives, and no Apple logo.


Nick
 
Except the controller kicks everything on the disk group when it blocks like that ;) Doesn't matter what ~part~ of the disk group got blocked by that, everything goes, based on how AHCI handles it >_<

It has very little to do with memory and CPU, and has to do with size of command, when it executes (related to when the SMART command executes), etc.

I'm just making sure you understand that this isn't the same as production gear - I've seen too many people do weird whitebox/goofy setups and have it blow up, and then immediately assume it's the software that was the cause when it was very much designed to ~not~ work that way in the first place :)

I had a lot of conversations with Lam when he was building out that environment, and helped design the support process around VSAN as a whole :)
 
I was totally thinking about doing this just a few weeks ago. Would be a nice little lab that is easy to carry around.
 
Quick update:

Some quick screenshots of the three NUCs running VSAN.
I did some quick ten-second performance tests, which didn't equate to anything "real world," but it's up and running without issues.
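
For anyone who wants to poke at a similar build, the two commands I keep going back to for checking the cluster from the shell are below; nothing fancy, just membership and disk claiming:

esxcli vsan cluster get     # cluster membership, node UUIDs, and local node state
esxcli vsan storage list    # which SSD and HDD each host has claimed for VSAN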

I'm trying to think of ways to stress test the cluster to show some "entertaining" data without getting bored... Thoughts? Comments?

SS1.jpeg

SS2.jpeg
 