Sandy Bridge, VT-d and ESXi... success stories?

intel · n00b · Joined Apr 5, 2009 · Messages: 35
Hi guys

It is time for me to build an ESXi whitebox server for my home, and I have been researching VMDirectPath using VT-d directed I/O for virtual machines.

I have been unable to find much information about motherboards that support ESXi and VT-d out of the box. Would you mind helping me out? I am looking at a low-power Sandy Bridge 2400S.

Thanks ;)
 
I have two Supermicro X9SCL-O boards, and I've tested them: VT-d works.

They're for Xeons, though...
 
No K-series chips support VT-d, and I don't think the i3 does either.

I think the C204/C206 chipsets do. You'll need a Xeon/SB.

What are you looking at virtualizing that needs SB versus something a lot better documented, like the 5500/890FX/SR56xx?
 
No K-series chips support VT-d

At the bottom of this there is a chart with the Sandy Bridge processors that do and don't support VT-d.

On the contrary: even though Intel attaches the attribute to the processor, it is the core logic (chipset) that supports VT-d, not the processor. Remember, this is an IOMMU feature, not part of the processor's MMU.

Also, the chipset may support VT-d, but the BIOS may not enable it, depending on the OEM. The kernel, OS, and VMM may also not be properly configured or compiled to support it.
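On a Linux box you can sanity-check the chipset/BIOS/kernel layers this post describes. A minimal sketch, assuming standard procfs/ACPI paths; the helper names are mine:

```shell
# VT-x shows up as the 'vmx' flag in cpuinfo; VT-d does NOT, because it
# lives in the chipset (IOMMU), not the CPU core -- exactly the point above.
check_vmx() {  # $1 = path to a cpuinfo-style file (normally /proc/cpuinfo)
    grep -q '\bvmx\b' "$1" && echo "VT-x: present" || echo "VT-x: absent"
}

# The firmware advertises VT-d to the OS via the ACPI DMAR table, so its
# presence means the chipset supports it AND the BIOS has enabled it.
check_dmar() {  # $1 = ACPI table dir (normally /sys/firmware/acpi/tables)
    [ -e "$1/DMAR" ] && echo "VT-d DMAR table: present" \
                     || echo "VT-d DMAR table: absent"
}

# On a live system:
#   check_vmx /proc/cpuinfo
#   check_dmar /sys/firmware/acpi/tables
#   dmesg | grep -i -e DMAR -e IOMMU   # did the kernel actually enable it?
```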


EDIT:

Technical notes: http://software.intel.com/en-us/art...s-for-efficient-virtualization-of-io-devices/ "The VT-d DMA-remapping hardware logic in the chipset sits between the DMA capable peripheral I/O devices and the computer’s physical memory"

This is a nice read on Intel's contradictions here: http://software.intel.com/en-us/forums/showthread.php?t=80009

Note that if VT-d had to be part of the processor, it would be interesting that Intel doesn't even list the capability on the Ark page for some processors, including my Core i7-940. Yet I can enable VT-d support in my motherboard's BIOS. This is one thing Intel is really pissing me off with lately; making this so confusing to the end user is not a good thing.
 
I have a Xeon E3-1230 on a Supermicro X9SCM-F and an Intel SASUC8i SAS controller.
ESXi 4.1 Update 1 is installed to a 2GB USB flash stick mounted on the internal USB header.
VT-d is enabled in the BIOS, and a VM gets the SASUC8i via VMDirectPath.

everything was smooth sailing to this point!
 
Nearly identical setup, also no issues.
 
I am thinking about an all-in-one. So far, for the processor, I am thinking E3-1230; with HT enabled, that gives me 8 total threads.

The setup: 12GB of ECC RAM. ESXi on a USB stick or laptop PATA drive. OI+napp-it on a local datastore, with 6 640GB WD Blue drives connected to the 6 onboard SATA ports, passed through via VT-d (assuming no issues there; if one arises, I will shell out for an M1015 and reflash it with LSI firmware). The 6-drive ZFS pool already exists (a raidz2, so about 2.5TB of usable storage), and the OI+napp-it already serves up a datastore to ESXi via iSCSI.

3 VMs right now: pfSense firewall/gateway, Ubuntu mail/web/whatever server, and Windows XP for telecommuting.

I'm thinking a Supermicro mobo, either X9SCM-F or X9SCL-F (I want the IPMI/KVM variants). As far as I can tell, there is little difference between the two (C202 vs C204), and none of the differences on the datasheets matter to me. The SCL is about $20 less. I would prefer to save $20, but not if there is a gotcha here...
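As a sanity check on the "about 2.5TB of usable storage" figure: raidz2 gives you (n - 2) data disks' worth of capacity, and the gap between decimal drive-GB and binary GiB eats another ~7%. A throwaway helper (the function name is mine) for the arithmetic:

```shell
# Usable capacity of a raidz2 vdev, ignoring ZFS metadata overhead:
# (disks - 2 parity) * per-disk decimal GB, converted to binary GiB.
raidz2_usable_gib() {  # $1 = number of disks, $2 = per-disk size in GB (decimal)
    echo $(( ($1 - 2) * $2 * 1000000000 / 1073741824 ))
}

# Six 640GB drives: raidz2_usable_gib 6 640  ->  2384 GiB, which is the
# ~2.5TB figure once you state it in decimal-TB terms (4 x 640GB = 2.56TB).
```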
 
I chose the SCM model because of its 2 x8 (electrical and physical) and 2 x4 (electrical, x8 physical) PCIe 2.0 slots; plus, 2 of the 6 onboard SATA ports are 6Gb/s on the SCM.

I didn't see any redeeming qualities in the SCL's C202 chipset, and the extra cost of the SCM model was worth the one additional x4 electrical slot.

I am running nearly what you spec hardware-wise and am just getting to the software side of things.

I have ESXi installed to an internal 2GB USB stick, and a temporary HD on the onboard controller being used as the datastore. What I haven't decided yet is where to house the VMDK for the OI VM. Having a disk attached to the onboard controller just to hold the local datastore seems very wasteful; plus, I don't think I can then pass that controller through to a VM. And I don't think there's such a thing as passing a physical disk directly to a VM (passing the other 5 onboard SATA ports to the NAS VM).

Do you have any firm plans or best practices you've come up with regarding where to put that NAS VMDK?

A larger USB stick (maybe 32GB?) holding both the ESXi host OS and enough of a local datastore to house the OI VMDK seems like a horrible idea in terms of OI VM performance, but I may be wrong here.
 
I was originally going to use the IDE port all mobos used to have for a PATA drive, but it looks like these boards don't have one :) So, maybe a USB stick? Or a relatively small older SATA drive on a separate HBA? Performance should not be a major issue, since like most Unix variants, OI is not going to do a lot of reading from the root pool; it caches program code and such in RAM.
 
I have a 16GB USB stick laying around that I can try that on. I just installed an OI VM, and its default installed disk size is 3.7GB. Gea recommends a 12GB system drive for OI in his/her all-in-one writeup. Assuming drive performance is not an issue, that 16GB USB stick should do the trick. The dedicated HBA for this is another good option; I have an old Promise SATA 3Gb/s HBA laying around collecting dust, waiting to be tested for ESXi compatibility :)
 
For my setup, I used the built-in controller for my datastore drives. In my case I used a SSD. For my NAS part, I have an Areca card that I passed fully through to the NAS VM.
 
I can report that ESXi does not like USB storage as a datastore; it was having none of that.

Googling found that it is indeed not supported.

So, I'm back to a single disk on the onboard controller, eventually migrating that datastore disk to a small 2-port HBA to free up the onboard controller for VT-d passthrough to a VM.
 
I did have ESXi 4.1 U1 installed on a USB stick without problems. What kind of problems are you having?

BTW, I am looking at either the Supermicro X9SCM or the Tyan S5510GM3NR for my next build; the Tyan mainly because it has 3 LAN ports that are well supported by ESXi. The X9SCM's 82579LM LAN port is not supported by ESXi without modifying the oem.tgz.

Does anyone have experience with the Tyan board?
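For anyone wondering what "modifying the oem.tgz" entails: on ESXi 4.x it is just a gzipped tarball of add-on driver modules and PCI ID map files that you unpack, amend, and re-roll. A rough sketch; the helper name is mine, and the exact files to drop in come from whichever driver package you're using:

```shell
# Unpack oem.tgz, (manually) add driver files, and repack it in place.
repack_oem() {  # $1 = path to oem.tgz, $2 = scratch dir
    mkdir -p "$2"
    tar xzf "$1" -C "$2"          # unpack the stock archive
    # ... copy the driver module and updated PCI ID maps into "$2" here ...
    tar czf "$1" -C "$2" .        # repack over the original
}

# e.g. on a typical ESXi 4.x install:
#   repack_oem /bootbank/oem.tgz /tmp/oemwork
```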
 
durianmy,

Thanks for pointing out that ambiguity. I too have ESXi 4.1 U1 installed to a USB stick.

What did fail, however, was getting ESXi to recognize the remaining free space on the USB stick as a local datastore to store the VMDK of a virtual SAN VM. I wasn't crazy about the performance prospects of a setup like this, but wanted to at least give it a go, since this is for home use only.

Locating the VMDK of the virtual SAN VM on the USB stick would have meant not needing the onboard controller for disks to house the local datastore, leaving it open for passthrough to a VM (pending compatibility, of course).

Regarding the 82579LM port on the X9SCM, I bit the bullet hoping that a) support would eventually come for that chip, b) I'd live with a PCIe NIC in its place, or c) I may give Hyper-V a shot for various other reasons, in which case the NIC issue is moot.

If I had to do it all over again, I'd probably go with the Tyan board you mention for the exact same reason: NIC compatibility. I was a little impatient when the HBA went out on my current WHSv1 box, and I grabbed the first awesome Sandy Bridge Xeon board I could get hold of, the X9SCM.

I had a dual-Athlon Tyan board back in 2001-2002 that was rock solid and awesome, but I don't have personal experience with modern Tyan gear. Patrick over at ServeTheHome did a review of the S5510GM3NR's big brother, the Tyan S5512WGM2NR, that you may want to look at as a starting point.
 
82579LM

Official support? No.
Unofficial support? YES!

As the creator of the 82579LM driver, I can tell you it works GREAT, and I have been running it on my two X9SCL motherboards since day one (of the driver release, April 23rd) as my SAN port. No issues, etc., etc.
 
SCM-F with an ARC-1680 passed through via VT-d to OI. Populated with 2 HP expanders and associated drives, without any problems so far.

A test install of ESXi to an internal USB stick worked. Added Areca and 82579 support to it without any issues, though I didn't install any VMs. My final install was to an internal drive (personal preference), but I saw no reason why a USB stick would be any different.

*BTW... thank chilly, since it was his drivers on vm-help that got that 2nd Intel NIC up.*
 
Order placed with Newegg: E3-1230 CPU, 4 sticks of Corsair DDR3-1333 ECC RAM, Supermicro X9SCL-F. Total cost (no tax, no shipping): about $650.
 
I just ordered the following from Superbiiz:

4x Super Talent DDR3-1333 4GB/256x8 ECC Micron Server Memory
1x Supermicro X9SCM-F-O
1x Intel Xeon Quad-Core E3-1230 3.2GHz 5GT/s 1155pin 8MB CPU
 
Hi, chilly. I saw the post you linked. A little confused here: the post references two rar files, part 1 and part 2, both of which have oem.tgz files in them?
 
Installing from USB / booting from USB is very different from running VMFS on USB, since a USB stick is not a fully SCSI-compliant device.
 
Hi guys

I found another forum saying that the Intel DQ67SWB3 motherboard will work with ESXi. I just ordered it from Amazon; once I get it, I will report back!
 
Okay, X9SCL-F/E3-1230 up and running. Very nice! It turns out I was able to pass the Cougar Point SATA controller through to OpenIndiana, so even virtualized, the SAN is kicking ass. With vmxnet3 drivers, my Ubuntu guest can hit about 8Gb/sec between itself and the virtualized OI SAN.
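For reference on throughput figures like that, the usual way to measure guest-to-guest bandwidth is iperf between the two VMs (the commands below are standard iperf 2 usage); the converter function is mine, to relate a Gb/s line rate to the MiB/s you see in file copies:

```shell
# On the OI/SAN VM:     iperf -s
# On the Ubuntu guest:  iperf -c <san-ip> -t 30
# (vmxnet3 easily beats gigabit wire speed since traffic stays on the vSwitch)

# Convert whole decimal gigabits/s to MiB/s, for comparing against
# dd-style disk numbers.
gbps_to_mibs() {  # $1 = gigabits per second (integer)
    echo $(( $1 * 1000000000 / 8 / 1048576 ))
}
# gbps_to_mibs 8  ->  953 MiB/s
```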
 
Where do you install your ESXi and OpenIndiana if you pass your SATA controller through to OpenIndiana? I am getting my board (X9SCM-F) soon.

Thanks.
 
Here's the storage setup: a 160GB laptop IDE drive (a leftover) with a SATA adapter, plugged into a low-end LSI HBA, along with the SATA optical drive; an 8GB USB flash drive plugged into the socket on the mobo. ESXi is installed on the USB stick and configured for a local datastore on the laptop drive. The 6 640GB SATA drives are in the 6 SATA ports, and their controller is passed in to OI. Works a treat. Inside an OI shell, I did a 16GB write to a test file and got 330MB/sec; reading it back got 370MB/sec. I configured OI with 2 vCPUs and 6GB RAM. Since HT is enabled, this leaves 10GB RAM and 6 vCPUs for ESXi and the other guests.
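The 16GB write/read numbers above are the classic dd-style sequential test. A sketch of it as two functions (the names and the /tank path are illustrative; also note that /dev/zero will wildly overstate throughput if compression is enabled on the pool):

```shell
# Sequential write test: stream zeros to a file in 1MiB blocks; dd reports
# elapsed time / throughput on stderr (format differs between GNU and
# illumos dd).
seq_write() {  # $1 = target file, $2 = size in MiB
    dd if=/dev/zero of="$1" bs=1048576 count="$2" 2>&1 | tail -2
}

# Read-back test: stream the file to /dev/null.
seq_read() {   # $1 = source file
    dd if="$1" of=/dev/null bs=1048576 2>&1 | tail -2
}

# In the OI guest, against the pool:
#   seq_write /tank/testfile 16384    # ~16GiB write
#   seq_read  /tank/testfile
```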
 
I'm really torn now. Sandy Bridge is obviously better, but they took away support for registered memory, and I can't find any 8GB unbuffered ECC DIMMs. Has anybody here managed to get 32GB of RAM working with Sandy Bridge Xeons?
 
Hi!
I saw your thread and noticed you have the same problem as me. I installed ESXi 4.1 bootable on a USB stick and have free space left on it. I'd like to format that free space as VMFS3, but ESXi won't let me. Did you resolve your problem? I'd like to learn from your experience.
TIA
jorge infante
rosario - santa fe - argentina
 
I'm not sure how one would 'resolve this problem'. As several folks have already stated (including a VMware employee), you can't put a datastore on a USB key.
 
Yup, the resolution is to not use a USB key as a datastore.
 
Wondering if there are any updates to this. I have been looking for a lightweight (in $ up front and $ for power over time) solution for running multiple VMs, each with a different device directed to it (NIC/video/capture/RAID).
 