New Build: ESXi / Solaris 11 Express w/ Raw Device Mapping (RDM) passthrough

Hey all, just built a machine and am in the testing phase. Here are the specs:

Processor: Intel Xeon E3-1235 3.2GHz quad-core w/ HT (Sandy Bridge)
Motherboard: ASUS P8B WS (C206 chipset w/ ECC and now VT-d)
RAM: 4GB DDR3 1066MHz (just to power the thing on)
Hard drives: 5x 2TB Hitachi Deskstar 5K3000 (SATA III, 5900 RPM)
Surplus 80GB 7200 RPM SATA drive
Power supply: 400W XIGMATEK NRP-PC (80 Plus Bronze)
Case: surplus ATX mini-tower


I'm building a home storage/VM machine. I took advantage of the 2TB drives available for $60; the limit of 5 made for a good stopping place. The C206 chipset on the motherboard allows use of the on-CPU graphics of the Xeon E3-1235 as well as ECC memory, once I get around to buying it. When I purchased the motherboard, it didn't support VT-d, although there were rumors that a BIOS update would add it. I was pleasantly surprised the day after my purchase when the update showed up on ASUS's website.

I'm using the ESXi 4.1 hypervisor and Oracle Solaris 11 Express for my storage VM OS. As I started to build, I assumed that I could install ESXi to a flash drive, but I found out that it does not support having a datastore on USB. Since I was planning to pass the entire onboard controller through to the VM, I couldn't boot ESXi from there either. I needed to get creative at this point: either find a way to trick ESXi into using a USB datastore, buy an additional storage controller (either for the array or for the ESXi boot), or experiment with Raw Device Mapping (RDM). I chose to play with RDM.
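For anyone curious, the RDM part is just a few commands from the ESXi console/SSH: create a mapping .vmdk per physical disk with vmkfstools, then attach those files to the Solaris VM as existing disks. A rough sketch (the disk identifier and datastore paths below are placeholders, not my exact ones):

Code:
# List the physical disks to find their identifiers (the t10.ATA... names differ per system)
ls /vmfs/devices/disks/

# Create a physical-compatibility RDM mapping file for one of the 2TB Hitachis
# (-z = physical RDM; -r would create a virtual-compatibility RDM instead)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____Hitachi_HDS5C3020ALA632_____PLACEHOLDER \
           /vmfs/volumes/datastore1/solaris11/hitachi1-rdm.vmdk

# Repeat for each drive, then add the .vmdk files to the VM as existing disks in the vSphere client.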

I installed ESXi to a miscellaneous hard drive on the remaining SATA port, installed Solaris 11, and passed the five 2TB drives through as RDMs. Then I installed VMware Tools (which do not support that version of X.org?!?!) to get access to the VMXNET3 adapter, loaded napp-it, and set up a RAID-Z ZFS volume. Reads were in the mid-400s MB/s and writes in the high 300s, while SMB was 50+ MB/s (my client's CPU was actually maxing out, so I don't know its full speed yet).
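I did the pool and share setup through the napp-it web GUI, but what it runs under the hood amounts to roughly this (the pool name and the Solaris c#t#d# device names are just examples):

Code:
# One RAID-Z vdev across the five 2TB disks
zpool create tank raidz c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0

# A filesystem shared over the kernel CIFS/SMB service
zfs create tank/share
zfs set sharesmb=on tank/share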

The plan is to create an NFS share back to ESXi to serve as the datastore for the other VMs.
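Roughly, that looks like this (the addresses are placeholders: 192.168.1.20 for the Solaris VM, 192.168.1.10 for the ESXi host):

Code:
# On Solaris: create a filesystem and export it over NFS to the ESXi host
zfs create tank/vmstore
zfs set sharenfs=rw=@192.168.1.10,root=@192.168.1.10 tank/vmstore

# On ESXi (console/SSH): mount it as an NFS datastore named "zfs-vmstore"
esxcfg-nas -a -o 192.168.1.20 -s /tank/vmstore zfs-vmstore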

I'm still a bit nervous about the RDM. I might still get a controller with an LSI SAS2008 chipset for passthrough.

I also tried the text install of OpenIndiana build 151, but VMware Tools failed to install because it couldn't find X.org (obviously) and left me without a VMXNET3 driver. I assume the same would happen with the Solaris 11 text install. A 10GbE TOE virtual NIC sounds much better to me than a 1GbE non-TOE one, so I went back to my Solaris 11 GUI VM.

So does anyone...
1) know how to make VMware Tools fully work with Solaris 11?
2) have experience with RDM?
3) know how to use the unallocated space on the ESXi thumb drive as a datastore?
4) know how to get around the install issues with OpenIndiana's text install?

I know I didn't explain the error messages much, but I'm at work right now without the messages in front of me.
 
Use ESXi Embedded instead of installing it. That way ESXi is contained on the USB drive, and you can pass the whole onboard SATA controller to the VM guest.
 
"embedded" Isn't that installing ESXi to a flash drive or did I miss something? I did that, but didn't have a place to put the datastore for the Solaris VM that I'd pass the sata controller to. I don't have another system to be a NFS or iSCSI target, so the datastore has to be local, meaning another hard drive, unless someone knows of some other trick.
 
FWIW, I started off with my all-in-one ESXi ZFS build using RDMs, and it worked terribly. Everything appeared to work fine, but ZFS would report thousands of corrected errors per day. Eventually many files were listed as corrupt. These exact disks, used in the same VM but on a passed-through controller, have been running for close to 6 months without a single error.
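If you want to check your own pool for this, a scrub plus a verbose status is where it shows up; the checksum counters climb and -v lists any files with permanent errors:

Code:
# Force a full read/verify of the pool, then check the READ/WRITE/CKSUM counters and error list
zpool scrub tank
zpool status -v tank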
 
Thanks, that's what I needed to hear. Must get passthrough up and going, one way or another.....
 
Yeah, I have a guide on the RDM thing about 70% written for the site, but I have not been happy with the results.
 
I've done that, but when I run the package install I get:

Code:
Detected Xorg X server version 1.7.7.

No drivers for Xorg X server version: 1.7.7.
Skipping X configuration because X drivers are not included.

I've even run the April ESXi patch on the server, which included a vmware-tools update. When I run the update manager in Solaris, I never get any updates. Is this normal?
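For reference, this is the install sequence I've been following (the /cdrom/vmwaretools mount point is what the VMware docs give for Solaris; it may differ on your build):

Code:
# With "Install VMware Tools" selected in the vSphere client, the Tools CD auto-mounts in Solaris
cp /cdrom/vmwaretools/vmware-solaris-tools.tar.gz /tmp
cd /tmp
gunzip -c vmware-solaris-tools.tar.gz | tar xf -
cd vmware-tools-distrib
./vmware-install.pl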
 
I'm not sure what the problem would be. I'm not a Solaris expert; I have installed it a few times and have not had an issue.
 
Purchased an M1015 today + cables. I think I'm getting closer to my intended goal.
 
FWIW, I started off with my all-in-one ESXi ZFS build using RDMs, and it worked terribly. Everything appeared to work fine, but ZFS would report thousands of corrected errors per day. Eventually many files were listed as corrupt. These exact disks, used in the same VM but on a passed-through controller, have been running for close to 6 months without a single error.

That was exactly what happened to me. I ended up upgrading the CPU/mobo and using passthrough, and it works flawlessly.
 
Have you tried running the P8B WS with a PCIe video card? I'm curious whether the BIOS and ESXi will still output through the onboard processor video or go through the PCIe card, and whether there's a BIOS option to control this.

The reason I ask is because I've also been thinking about using the P8B WS with ESXi, Solaris, and a passed-through LSI controller.
If possible, though, I want to try putting a PCIe video card in the system and passing that through to a VM. For this to work, ESXi would still need to have its console on the onboard video, with the PCIe card reserved for the VM.
 
I'll try the video passthrough. The chances of success are low, but I have several AMD and Nvidia cards, both consumer and workstation grade, to try out for you.
 
Graphics passthrough results:

You cannot remove the VMware virtual graphics adapter from the VM configuration, so your machine ends up with dual video cards. I've tried two devices so far.

Nvidia GeForce 9300:
Code:
July 16 02:38:26 storage pcplusmp: WARNING: Sharing vectors:pci10de,6e0 instance 0 and SCI
July 16 02:39:33 storage scsi: WARNING: /pci@0,0/pci15ad,790@11/pci15ad,1976@1 (mpt1):
July 16 02:39:33 storage        Disconnected command timeout for Target 0

ATI RV620 (Radeon HD 3450):
The GPU fan revs three times and the card starts normally. In Solaris, the VM booted, but X did not load; I assumed an improper X configuration. I'm more comfortable with XP graphics, so I switched to that (without VMware Tools installed). It booted normally using the vSphere client console as primary video, and Device Manager showed both as unknown video controllers. I loaded the Catalyst drivers; Device Manager showed the driver installed but needed a reboot. Upon reboot, BSOD... device driver got stuck in an infinite loop, 0x000000EA, ati2dvag. SO CLOSE!!! I removed passthrough, installed VMware Tools, rebooted, and added passthrough again. It booted fully, Display Properties listed both cards, I set the ATI as primary, and got the same BSOD. Same BSOD when rebooting. It reliably enters safe mode, though. Looks to be a potentially resolvable driver issue on the AMD side?
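For what it's worth, there's no exotic config involved: the card is enabled for DirectPath I/O under Configuration > Advanced Settings in the vSphere client, the host is rebooted, and the device is added to the VM as a PCI device. That just drops entries like these into the VM's .vmx (the IDs shown are placeholders):

Code:
pciPassthru0.present = "TRUE"
pciPassthru0.id = "01:00.0"
pciPassthru0.vendorId = "0x10de"
pciPassthru0.deviceId = "0x06e0"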
 
Great, thanks for giving it a try. I found a guy on the VMware forum who got a Radeon 3450 (and also a 6850) running passed through to a Windows 7 x64 VM. The only weird catch is that the VM's RAM allocation has to be below 2816 MB, or it will BSOD. Not sure why it worked for him and didn't work for you, but there could be a lot of reasons: 7 vs. XP, 64-bit vs. 32-bit, maybe an older ESXi version, having separate motherboard graphics, or even the 3420 vs. C206 chipset (I hope not!).
 
I'll try with the ATI card a bit more. I think I was running at 2GB, so it might be the OS. I also played with passing the integrated Intel graphics through... it was working as planned until the end of the driver installation when it tried to initialize... BSOD.

I tried to pass the sound card and a TV tuner through to XP. They were seen and drivers installed, but the sound was just squeals, and the TV tuner... well... just didn't function. When I set up my test Win7 VM, I'll try these again.

With these problems, combined with the upcoming vSphere/ESXi 5 8GB vRAM limitation, I decided to try without a hypervisor. I'm playing with an Ubuntu 11.04, Samba, open-iscsi, VirtualBox, ZFS-on-Linux setup. I have to say, this might be the way I'm going to go. The performance seems very good and everything just works.
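The Linux side was quick to stand up; roughly this (the PPA and package names are from memory, so double-check them, and the /dev/sd* names are examples):

Code:
# ZFS on Linux from the native PPA, plus Samba for the SMB shares
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs samba

# Create the RAID-Z pool across the five Hitachis and carve out a shared filesystem
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo zfs create tank/share

The Samba share then just points at /tank/share in smb.conf.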

My M1015 is en route. This could spice things up a bit more.
 
An alternative: do the exact same thing but use the GUI version of OpenIndiana. VirtualBox works, etc., but you get a more stable version of ZFS.
 
... and napp-it support. I initially went this way because the box will also function as a MythTV backend, but that could be virtualized. I'll probably give it a try... Thank God for a whole bunch of surplus 80GB drives.
 
No, but I will for you. Since I got the M1015, I passed that through, but not the onboard.
 
Did you ever get around to testing passthrough of a SATA controller on this motherboard?

Just tested.

M1015 (IT mode) booted ESXi, passed the onboard ICH10R through to an XP VM. Worked fine.
Onboard ICH10R booted ESXi, passed the M1015 (IT mode) through to Solaris 11 Express. Worked fine.

Disk passthrough seems fine, but I did have problems passing sound and a TV tuner through to an XP VM.
 
That's great news! Big thanks for your efforts. You are the only source I've found testing the ASUS P8B WS with an ESXi setup. Did you do any performance benchmarks on the drives connected to the passed-through controllers?

About your sound and tuner card problems: maybe you could try the passthrough on a Windows 7 x64 VM. People seem to be having a bit more success with passthrough on that OS (although on other mobos). http://communities.vmware.com/message/1781979 - If you do decide to explore this, I'd be very interested in the results!
 
Not really surprised that I am the only one, since the BIOS update to allow VT-d was only released on July 5th. I think many people passed this board over in favor of more server-class boards. The C206 is the only chipset that will take advantage of the on-CPU graphics of the E3-12x5 processors, though, so with the update I think it will become a more popular board in these circles.

Give me a couple of days, but I'll test the sound/tuner pass-through on Win7-64.
 
Bump :)

Any news? I'm itching to buy this board to use for an ESXi server, actually. It's quite unique: it's the only C20x board I've found with USB 3.0, and that might come in handy down the line. Especially since I assume you can pass the USB 3.0 controller through to a VM? (Could you perhaps test this?) It's also one of the few C20x boards I've found with 4 PCIe slots available in my area.

The ones available (with 4 PCI-e slots):
Supermicro X9SCM(-F)
Intel S1200BTL
Asus P8B WS

Unfortunately the Tyan boards aren't available in my area :\

I like the extra PCI slots available on the Intel & ASUS boards. I could use them for a RAID 1 datastore for ESXi + Solaris, and then just pass through the onboard controller for some "free" extra ports, so I'm really leaning towards either the Intel or the ASUS.

The ASUS board (yours) even has an extra PCIe x1 slot which could potentially be used either for an extra Gbit NIC or a tuner card. Speaking of which, have you had any issues with fitting cards close together? It does seem kind of cramped if one were to fill up all the slots.

How is your current ESXi setup running, and how is it set up? (Have you made any changes?)
 
Sorry for the long delay... ESXi testing with this board is really frustrating. I created a Win7 64-bit Pro VM and was able to get the M1015 to work, and a TV tuner (I think... hard to tell with console / remote desktop graphics). The onboard audio, the ATI add-on video card, and the onboard USB 3.0 didn't work. The items that didn't work did appear, though; they just didn't do anything. The link you provided has a table that shows "jsnow" having similar results with the same MB.

Card spacing: if you use only the x16 slots, you have space for double-wide cards. I wouldn't think you would have problems.

Right now, I'm not even using ESXi for the host. I've moved over to Linux, and everything seems happy, including my ZFS storage. I might mess around with ESXi again once vSphere 5 comes out with improved passthrough.
 
Thanks again! :)

What about the ESXi testing is frustrating? Issues with the board?

You say you're running Linux; are you running one of the ZFS solutions for Linux instead?
 
Most frustrating is that you can see the devices and they seem like they are going to work, but then they don't. Is it the OS, the drivers, IRQ sharing (which slot?), ESXi, or the MB?

I'm using ZFS on Linux. Haven't had any problems yet.
 
I see. So have you found the experience to be different on other boards? If so, which boards?

Sorry to be so interrogative, but I'm just really curious :p
 
The only other system I have used VT-d with is a Dell OptiPlex 980 w/ Citrix XenClient. I was able to pass through the onboard video, and it virtualized all the other devices.

My impression is that passthrough efforts are focused mostly on network and storage. There are a fair number of people wanting video passthrough. I think it is just a matter of time before passthrough compatibility in general becomes more robust. In the meantime... if it works... great!
 
So NIC and storage passthrough (PCI / PCI-E / on-board SATA) has worked fine on the Asus P8B WS then?
 