Supermicro boards for ESXi?

HalfJawElite

Weaksauce
Joined
Jul 2, 2012
Messages
111
Hey everyone,

So I've been browsing around the HCL for ESXi and located a number of Supermicro boards that are compatible with ESXi 5.1 but am having a difficult time choosing which.

The choices I have come down to are as follows:
X9SCL-F-O
http://www.newegg.ca/Product/Product.aspx?Item=N82E16813182251

X9SCL+-F
http://www.newegg.ca/Product/Product.aspx?Item=N82E16813182262

X9SRi-f
http://www.newegg.ca/Product/Product.aspx?Item=N82E16813182331

From what I can tell, the only difference between the first two is that the +-F has two 82574L NICs while the -F-O has one 82579LM and one 82574L. The third board is much like the ASUS P9X79 WS board I'm using with Hyper-V 2012. I know the 82579LM NIC isn't supported by ESXi out of the box, but I haven't heard much about these boards' reliability, both when setting up the system for the first time and while running ESXi afterwards. To add to the mess, the feedback on Newegg hasn't helped me either, even though I know better than to base my decision entirely off that!

Can anyone vouch for these boards or suggest something similar that they know? I only plan on using the system with the 32 GB RAM limit and some minor virtualization.
 
I personally use an X9SCM-F, which is very similar to the first two, for my ESXi server. No complaints; it's been rock solid since the day it was assembled. As you noted, the LM NIC isn't supported, which is an issue with that board as well. With four PCIe slots on the SCM, though, it's less of an issue if you end up needing more NICs. I use a pair of single-port Intels along with two LSI 9211s, and it's been pretty smooth sailing from ESXi 4.1 through 5.1.
 
Nope, not supported nor functional at all with ESXi 5.x. There's a community driver out there for it, but I have no idea about its stability. You may be able to pass it through to a guest OS, but I haven't tried it with that NIC.
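For anyone who wants to try the community driver route anyway, installs like that usually go through esxcli. This is only a rough sketch: the VIB filename below is a placeholder, and community packages are unsigned, so the host's acceptance level has to be lowered first (at your own risk).

```shell
# Allow unsigned (CommunitySupported) VIBs on this ESXi 5.x host
esxcli software acceptance set --level=CommunitySupported

# Install the community driver from a datastore path
# (filename is a placeholder, not a real package)
esxcli software vib install -v /vmfs/volumes/datastore1/net-82579lm-driver.vib

# Reboot, then confirm the NIC shows up
reboot
esxcli network nic list
```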
 
I personally use an X9SCM-F, which is very similar to the first two, for my ESXi server. No complaints; it's been rock solid since the day it was assembled. As you noted, the LM NIC isn't supported, which is an issue with that board as well. With four PCIe slots on the SCM, though, it's less of an issue if you end up needing more NICs. I use a pair of single-port Intels along with two LSI 9211s, and it's been pretty smooth sailing from ESXi 4.1 through 5.1.

Might I ask what your ESXi box specs are? I need to get this up and running soon. I might also plan on booting off of a USB drive or SD card.
 
Not a problem.

Case: Lian Li PC-P50
CPU: Xeon E3-1230
MB: Supermicro X9SCM-F-O
RAM: 16GB (KVR1333D3E9SK2/8G)
Drives: 3x CSE-M35T-1B enclosures, 2x 300GB VRaptors RAID1 for datastore, rest of various sizes for storage.
RAID: 2x Intel SASUC8I (Rebranded LSI 9211's, one flashed for dumb SATA mode)
NICs: 2x Intel CT 1Gb PCIe's
PSU: Corsair 620HX

ESXi itself boots off a USB stick on the internal USB header. No issues with CF booting on this system either, had that setup initially before I needed to use the onboard SATA for VT-d.
 
Not a problem.

Case: Lian Li PC-P50
CPU: Xeon E3-1230
MB: Supermicro X9SCM-F-O
RAM: 16GB (KVR1333D3E9SK2/8G)
Drives: 3x CSE-M35T-1B enclosures, 2x 300GB VRaptors RAID1 for datastore, rest of various sizes for storage.
RAID: 2x Intel SASUC8I (Rebranded LSI 9211's, one flashed for dumb SATA mode)
NICs: 2x Intel CT 1Gb PCIe's
PSU: Corsair 620HX

ESXi itself boots off a USB stick on the internal USB header. No issues with CF booting on this system either, had that setup initially before I needed to use the onboard SATA for VT-d.

Sweet, looks like a nice build. I plan on doing an almost identical setup, but I'm not too sure about Supermicro's reliability out of the box. Many users have reported poor system performance or faulty components after setting theirs up. I have found a similar board that looks a lot more promising and has better user reviews. It's a TYAN S5512WGM2NR: http://www.newegg.ca/Product/Product.aspx?Item=N82E16813151247

Considerably more expensive than the Supermicro board but I'm willing to spend the extra cash if it means less head banging and hair pulling for me.

Update: I have also come across an Intel board that my local Canada Computers can obtain for me that's comparable. Intel S1200BTL http://www.canadacomputers.com/product_info.php?cPath=38_505_957_1114&item_id=038528
 
Supermicros are generally extremely rock solid. I work almost exclusively with their products (datacenter engineer) and could count on one hand the number of dead-out-of-the-box boards I've had from them across hundreds of servers. I think a lot of the bad rap the X9s are getting lately is because people are buying Ivy Bridge CPUs while Newegg etc. are still stocking older revision boards without the latest BIOS that those CPUs require to function.

Both of the boards you listed look pretty good, though the Tyan is a bit expensive for what it is.
 
I'll be interested to know which board you're going with, OP, and how it turns out. I've been looking for reviews of these exact boards and couldn't really find anything. LGA 2011 not too popular yet?!

Anyways, I had settled on the X9SRi-3F. Not sure if the extra SATA ports are even worth it, since I'm using a SAS expander with my LSI 8i. Just waiting for Newegg to do their 20%-off server board promotion.
 
I'll be interested to know which board you're going with, OP, and how it turns out. I've been looking for reviews of these exact boards and couldn't really find anything. LGA 2011 not too popular yet?!

Anyways, I had settled on the X9SRi-3F. Not sure if the extra SATA ports are even worth it, since I'm using a SAS expander with my LSI 8i. Just waiting for Newegg to do their 20%-off server board promotion.

I'll most likely be looking into the Tyan since it has ESXi-supported network adapters already built in, which saves me from having to go out and get extra adapters. I might end up buying two of these and transferring Hyper-V to one from my current P9X79 WS board. I love the LGA 2011 boards, mostly because of the large number of SATA ports and the max RAM supported, but if you're doing a rack build, good luck finding low-profile coolers that can handle 130 watts! The only viable option I've seen so far is liquid cooling, and even then I'd have to get creative about how to mount one in the case. Currently using an Antec 1200 I had sitting around, but I'd like to change that if possible.
 
Supermicros are generally extremely rock solid. I work almost exclusively with their products (datacenter engineer) and could count on one hand the number of dead-out-of-the-box boards I've had from them across hundreds of servers. I think a lot of the bad rap the X9s are getting lately is because people are buying Ivy Bridge CPUs while Newegg etc. are still stocking older revision boards without the latest BIOS that those CPUs require to function.

Both of the boards you listed look pretty good, though the Tyan is a bit expensive for what it is.

Would you happen to know if any of the motherboards I've listed so far have a built-in EFI shell? I currently have a bootable MS-DOS flash drive that I use for updating hardware firmware, but I'm looking at ditching that method in favor of the onboard shell.
Can anyone confirm this for any of these boards based on current knowledge?
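For reference, a firmware-update session in a board's built-in EFI shell typically looks something like the following. This is just a sketch: it assumes the vendor ships an EFI flash utility (LSI's sas2flash.efi is shown as an example for 9211-class HBAs), and the file and directory names are placeholders for whatever your update package actually contains.

```shell
# Typical UEFI Shell flashing session (illustrative only)
Shell> map -r                  # rescan and list filesystems (fs0:, fs1:, ...)
Shell> fs0:                    # switch to the USB stick holding the update
fs0:\> cd firmware
fs0:\firmware> sas2flash.efi -listall    # list HBAs the flasher can see
fs0:\firmware> sas2flash.efi -o -f 2118it.bin -b mptsas2.rom   # flash firmware + boot ROM
```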
 
I'll most likely be looking into the Tyan since it has the supported network adapters for ESXi already built-in. Saves me from having to go out and get extra adapters. I might end up buying two of these and transfer Hyper-V to one from my current P9X79 WS board. I love the LGA 2011 boards mostly because of the large number of SATA ports and max RAM supported, but if you're doing a rack build good luck finding low profile coolers that can handle 130 watts! The only viable option I've seen so far for me there is liquid coolers. And even then I'd have to get creative on how to mount one in the case. Currently using an Antec 1200 I had sitting around but I'd like to change that if possible.

Never had a Tyan. I thought the Intel I350-AM2 on the Supermicro X9SRi was supported under ESXi? http://vmwarehints.blogspot.com/2012/10/single-root-io-virtualization-sr-iov.html

And yeah, finding coolers has not been easy. Luckily the Norco 4220 gives me a little room to play with, as it seems to be able to fit 92mm fans. I was looking at the Noctua NH-U9B SE2 or the Corsair H100 but ended up getting a cheap active heatsink pulled from a ProLiant. We shall see how this all works out.
 
Never had a Tyan. I thought the Intel I350-AM2 on the Supermicro X9SRi was supported under ESXi? http://vmwarehints.blogspot.com/2012/10/single-root-io-virtualization-sr-iov.html

And yeah, finding coolers has not been easy. Luckily the Norco 4220 gives me a little room to play with, as it seems to be able to fit 92mm fans. I was looking at the Noctua NH-U9B SE2 or the Corsair H100 but ended up getting a cheap active heatsink pulled from a ProLiant. We shall see how this all works out.

Funny, 'cause I was thinking of using the H100 as well. I figured I'd just have to find a way of sticking it to the inside of the case, since I'm looking at a 2U size. Maybe just velcro the rad to the inside top of the chassis?
 
Funny, 'cause I was thinking of using the H100 as well. I figured I'd just have to find a way of sticking it to the inside of the case, since I'm looking at a 2U size. Maybe just velcro the rad to the inside top of the chassis?

Hehe, velcro works, though just make sure it's tight enough that the fans don't vibrate the chassis (so I'd add some zip ties to strengthen it). Ideally you'd screw it in somewhere; even a single screw would be fine.
 
Yeah, btw, AnandTech just did a review of the new closed-loop coolers. Seems the new 2013 Corsair H80i is the way to go for performance.

Newegg happens to have them on sale at the moment, too.
 
Hehe, velcro works, though just make sure it's tight enough that the fans don't vibrate the chassis (so I'd add some zip ties to strengthen it). Ideally you'd screw it in somewhere; even a single screw would be fine.

Yeah I think you're right. Besides it never hurts to be careful.
 
Yeah, btw, AnandTech just did a review of the new closed-loop coolers. Seems the new 2013 Corsair H80i is the way to go for performance.

Newegg happens to have them on sale at the moment, too.

Really? I haven't even heard of the new coolers yet. Can you post a link to the article you're referring to? I'm curious to see how they compare to the current-gen coolers.

Oh, and I went ahead and found one of the TYAN S5512WGM4NR's for sale on eBay. I know I should have done it through a retailer, but you have NO idea how hard it is to find any of the TYAN models here in Canada.
 
You know, I just found unboxing videos on YouTube of the H80i and H100i on Linus Tech Tips, and it seems like they're very impressive water coolers. The new magnetic mounting system for the CPU brackets makes me wonder about using small micro magnets on the radiator to help secure it to the top of a 2U rackmount chassis.

Any ideas as to how successful this might be? I know using screws might be safer, but drilling into the top of a perfectly good case seems a bit much, aside from the fact that the screws would need to be long enough to hold through the metal without rubbing against the bottom of the server above it.
 
Never had a Tyan. I thought the Intel I350-AM2 on the Supermicro X9SRi was supported under ESXi? http://vmwarehints.blogspot.com/2012/10/single-root-io-virtualization-sr-iov.html

And yeah, finding coolers has not been easy. Luckily the Norco 4220 gives me a little room to play with, as it seems to be able to fit 92mm fans. I was looking at the Noctua NH-U9B SE2 or the Corsair H100 but ended up getting a cheap active heatsink pulled from a ProLiant. We shall see how this all works out.

I got this one for CA$280 last week, plus a Xeon E5-2600 and 128GB of Samsung 1.35V ECC RDIMMs.

X9SRL-F
http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRL-F.cfm

Comes with remote management and, yes, an EFI shell. I was able to update firmware for my SAS and FC HBAs using the EFI shell; it's a little buggy, but it works. ;)

This board doesn't come with the new i350 NIC, but the 82574L works fine.

I have another two socket 1155 systems (Xeon E3) with 32GB of RAM each, but I regret not getting S2011, as I'm pushing the limit on RAM.

Plus, this board comes with 7 PCI-E slots, 6 of which are PCI-E 3.0.
 
I got this one for CA$280 last week, plus a Xeon E5-2600 and 128GB of Samsung 1.35V ECC RDIMMs.

X9SRL-F
http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRL-F.cfm

Comes with remote management and, yes, an EFI shell. I was able to update firmware for my SAS and FC HBAs using the EFI shell; it's a little buggy, but it works. ;)

This board doesn't come with the new i350 NIC, but the 82574L works fine.

I have another two socket 1155 systems (Xeon E3) with 32GB of RAM each, but I regret not getting S2011, as I'm pushing the limit on RAM.

Plus, this board comes with 7 PCI-E slots, 6 of which are PCI-E 3.0.

Holy CRAP! That's a lot of PCI-E slots! I've got Server 2012 Datacenter running at home on an ASUS P9X79 WS motherboard, and having 8 PCI-E slots is way more than I need.

The server board I've ended up purchasing for ESXi 5 is the TYAN S5512GM4NR. Still not sure if it has an EFI shell, but the four gigabit NICs are what I needed most.
 
Really? I haven't even heard of the new coolers yet. Can you post a link to the article you're referring to? I'm curious to see how they compare to the current-gen coolers.

Oh, and I went ahead and found one of the TYAN S5512WGM4NR's for sale on eBay. I know I should have done it through a retailer, but you have NO idea how hard it is to find any of the TYAN models here in Canada.

In case you haven't found it yet: http://www.anandtech.com/show/6530/...-liquidcoolers-from-corsair-and-nzxt-compared

I ended up going with the Supermicro X9SRi-3F, though it will be a few weeks before I can put it together, as I'm waiting on some other parts.

And your magnets question... I'm honestly not sure whether that's a good idea or not, but they would have to be pretty strong to hold a radiator up. Also, I'd be concerned in general about how the magnets might affect the internal components.

Holy CRAP! That's a lot of PCI-E cards! I got Server 2012 Data Center running at home on an ASUS P9X79 WS motherboard and having 8 PCI-E slots is way more than I need.

The server board I've ended up purchasing for ESXi 5 is the TYAN S5512GM4NR. Still not sure if it has UFI shell but the 4 gigabit LANs are what was needed most.

Btw, how's Datacenter? Any major problems getting it running, versus ESXi?
 
In case you haven't found it yet: http://www.anandtech.com/show/6530/...-liquidcoolers-from-corsair-and-nzxt-compared

I ended up going with the Supermicro X9SRi-3F, though it will be a few weeks before I can put it together, as I'm waiting on some other parts.

And your magnets question... I'm honestly not sure whether that's a good idea or not, but they would have to be pretty strong to hold a radiator up. Also, I'd be concerned in general about how the magnets might affect the internal components.



Btw, how's Datacenter? Any major problems getting it running, versus ESXi?

I'll keep that in mind regarding the magnets for mounting the H80i rad.

As for Server 2012, it's a blast. The more I use it, the more I love the features Microsoft has added over Server 2008 R2. It was easy to set up and followed the same procedure as a Windows 7 install, aside from having to make user accounts on first boot.

The real downfall for me is the Metro desktop interface. It literally is the desktop mode of a Windows 8 tablet! I have an RDP client installed on my ASUS TF300T tablet, and let's just say trying to get to the Start menu or power-off options in it is a bitch, and I mean A BITCH! Why would Microsoft put a tablet UI on an enterprise OS? I don't have a damn clue. But other than that I haven't had any qualms with it.

I get all the Windows software for free from college, so I figured I'd use Server 2012, since ESXi doesn't allow disk passthrough. I had purchased an Intel RS2WC040 SAS RAID card for ESXi, but it kept freezing at boot, so I switched and installed Hyper-V instead. I've got my VMs running off two Corsair 180GB Force 3 GS drives, with Server installed to a single 60GB Corsair Force 3 GT, and boy does it load. The fastest boot I've clocked so far, with my VMs starting up immediately on the server, is 12 seconds! My AMD gaming desktop has four Force 3 GTs in RAID 0 and it doesn't load that fast!

I would hazard a guess that the Standard edition is just as reliable and quick to boot as well. Once I get my system back up and running in a new case, I can fire up Standard in a VM and see how it does from that point too.

Device and hardware support has been massively improved too. Just about anything I've plugged in so far has been detected and works, aside from the 82579V controller (which required a driver-install hack) and the Intel RAID card, which doesn't work at all. The Marvell SATA 3 controller on the ASUS P9X79 WS board is detected and works, but it has the usual white-circle warning in Device Manager because there's no driver for it yet under Server. Got the boot drive plugged into that and no issues so far.
 
My apologies for the long post, but I felt it was necessary to reinforce my opinion of how good Server 2012 is with the Hyper-V role installed.
 
No worries, details are what I like to read.

So you're running Server 2012 as the base hypervisor (vs. ESXi) and then using Hyper-V on top of the Server 2012 environment? Just want to make sure I'm understanding correctly.

I still haven't decided which route I want to go, ESXi or Server 2012, since I want to run a couple of VMs that are non-Microsoft (pfSense, Ubuntu for an HTPC). I've played a little with Hyper-V on my Windows 8 box and I'm already having issues, since I can't connect to the VMs without some major workarounds with security permissions, because I don't run a domain at home.

Planned so far is running ESXi as the bare-metal hypervisor, with VMs for: Server 2012 Datacenter (to handle file serving) with FlexRAID, pfSense, Windows 7 (downloading, transcoding), and Ubuntu (HTPC).

Again, I guess I could drop one VM by just running Server 2012 Datacenter and creating the VMs off Hyper-V. Also, since I plan to use it as a file server, it might be better not to have the data wrapped up in virtual disks, and I didn't know that ESXi couldn't do disk passthrough. Still need to do a little more research on that.

Not sure yet what's going to be more reliable. Guess we'll see. :) And yeah, not sure what MS was smoking by putting a tablet interface into a desktop/server product. Drives me crazy trying to navigate around.
 
No worries, details are what I like to read.

So you're running Server 2012 as the base hypervisor (vs. ESXi) and then using Hyper-V on top of the Server 2012 environment? Just want to make sure I'm understanding correctly.

I still haven't decided which route I want to go, ESXi or Server 2012, since I want to run a couple of VMs that are non-Microsoft (pfSense, Ubuntu for an HTPC). I've played a little with Hyper-V on my Windows 8 box and I'm already having issues, since I can't connect to the VMs without some major workarounds with security permissions, because I don't run a domain at home.

Planned so far is running ESXi as the bare-metal hypervisor, with VMs for: Server 2012 Datacenter (to handle file serving) with FlexRAID, pfSense, Windows 7 (downloading, transcoding), and Ubuntu (HTPC).

Again, I guess I could drop one VM by just running Server 2012 Datacenter and creating the VMs off Hyper-V. Also, since I plan to use it as a file server, it might be better not to have the data wrapped up in virtual disks, and I didn't know that ESXi couldn't do disk passthrough. Still need to do a little more research on that.

Not sure yet what's going to be more reliable. Guess we'll see. :) And yeah, not sure what MS was smoking by putting a tablet interface into a desktop/server product. Drives me crazy trying to navigate around.

I currently have Server 2012 Datacenter Edition installed on the 60GB Force 3 GT, with the Hyper-V role installed. While some may debate whether this is still really a type 1 hypervisor, in practice it behaves like one: when Windows loads, it first starts the Hyper-V manager and resources, then boots the VMs, and only lastly loads the Windows OS itself.

I opted for running it as a role because, as you've found, running strictly Hyper-V Server 2012 in a workgroup setup is a pain in the damn ass. Hyper-V REQUIRES an Active Directory server to establish which computers are allowed to administer it via remote Server Manager. There have been workarounds posted that basically use PowerShell commands to manually enter trusted computers and users, but unfortunately I have yet to find any that work for me on my LAN.
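For what it's worth, the workgroup workarounds people post usually boil down to something like the following (run in an elevated PowerShell prompt). This is only a sketch: "HV-HOST" and the account name are placeholders for your own machine and user names, and your mileage may vary, as noted above.

```shell
# --- On the Hyper-V server: enable remote management ---
Enable-PSRemoting -Force

# --- On the managing desktop: trust the server despite the lack of a domain ---
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "HV-HOST" -Force

# Cache credentials so Hyper-V Manager / Server Manager can connect
cmdkey /add:HV-HOST /user:HV-HOST\Administrator /pass
```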

Your best bet to get that working is to build a dirt-cheap computer, like a small Micro ATX system with only 8GB of RAM, install a copy of Server 2008/R2 on it, and then add your desktop and Hyper-V server to the domain. The box doesn't need to be expensive or support lots of devices; a single drive and enough RAM to run without problems is enough to get it working.

I'll be doing something very similar in my build, as I now have a second server ready to install ESXi onto. With this setup I can configure an Active Directory server on the new box, then dump the Datacenter OS and install only Hyper-V Server on the SSD. This should give me the ability to remotely access Server Manager while providing a second hypervisor as a backup solution.

As for your issue with drive allocation: yes, ESXi will not allow individual drive passthrough; it allows controller passthrough instead. This is mostly a Linux (or in this case VMware) kernel thing, as ESXi is designed from the ground up to only recognize attached hardware and provide a means for VMs to connect to it; it doesn't do much low-level management of the resources themselves. Hyper-V accomplishes this by actually reading the drives and their controllers and emulating them for the attached VMs, which allows multiple VMs to access drives attached to the same controller. An example of this is my file server VM. My ASUS P9X79 WS motherboard has 2 Marvell SATA III ports (DVD burner and OS boot drive), 2 Intel X79 SATA III ports (two SSDs for VMs), and 4 Intel X79 SATA II ports (storage drives). I have only 2 of the storage drives passed through to the file server, while 1 of the other 2 drives is used for pagefile and ISO image storage on the host.
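The per-drive passthrough described above can also be scripted with the Server 2012 Hyper-V PowerShell module; roughly it looks like this. The VM name and disk number are placeholders for your own setup, and the disk has to be offline on the host before it can be handed to a VM.

```shell
# List physical disks on the host and note the number of the one to pass through
Get-Disk

# Take the disk offline on the host (a pass-through disk can't stay online there)
Set-Disk -Number 3 -IsOffline $true

# Attach it to the VM as a pass-through (physical) disk
Add-VMHardDiskDrive -VMName "FileServer" -ControllerType SCSI -DiskNumber 3
```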

I think the cheapest and quickest option for you would be to set up a second computer as an Active Directory server (if you have the key for an extra Windows box) and then run only Hyper-V Server 2012 on your main box. I get all of Microsoft's OSes for free through my college because they're certified partners with Microsoft; if you can't obtain Datacenter Edition the same way, you might be stuck with the free Hyper-V Server edition. The new features alone in Server 2012 are enough for me to keep the Windows box up instead of VMware if I had to choose. Plus, the management tools required for Server are already built into Windows 7 Pro and higher editions, so there's no need for additional software if you're tight on space like me!
 