X9SCM-F Issues (network, memory, latency)

war9200

I have sent out numerous e-mails and will contact Supermicro; I have documented all of my issues here:
https://sites.google.com/a/lucidpixels.com/web/blog/supermicrox9scm-fissues

In addition, with an Ivy Bridge CPU and the newest BIOS (2.0b, released 10/05/2012), the board is unstable, and I have proof of that here:
http://home.comcast.net/~jpiszcz/20121010/x9scm-web-small.jpg

Here is a YouTube video of BIOS 2.0b (the latest) with 32GB of RAM while running memtest86+:
https://www.youtube.com/watch?feature=player_embedded&v=M2TWO5kFm9U

I recommend reading this if you are curious about the X9SCM-F.
 
I can report a similar issue with the first onboard Intel 82574L ethernet port under OpenIndiana/Illumos on the X9SCM-iiF. During a large SMB file copy the interface gets dropped by the kernel.
This does not happen on the second onboard Intel 82574L ethernet port.

So I suspect this is a general issue with the board implementation, and if it can be resolved it will have to be done at the BIOS level. I'm not sure there's much that can be done in the OS, since the problem occurs under Linux as well as Illumos.

Has this problem been reported to Supermicro?
Is there a ticket number open for the networking issue in particular?
 
Has this problem been reported to Supermicro?
> Yes, they are aware of the issue, but to my knowledge there is no fix yet.

Is there a ticket number open for the networking issue in particular?
> Not that I recall. I only communicated with them via e-mail, no ticket number was referenced, and they just told me they were aware of it.

I bought a new board and have 7 PCI-e cards in it at the moment; this one only has 2 quirks:
1) The video card has to be in the left PCI-e x16 slot.
2) You cannot set the IPMI address to static directly (static doesn't work at first). It's DHCP by default, so I got the MAC address, assigned it an IP from a DHCP server, then switched it to static, and it kept those settings.
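For reference, forcing a static IPMI address can also be attempted from the OS side with ipmitool. This is just a sketch; the channel number and the addresses below are assumptions that you would adjust for your own network:

```shell
# Show the BMC's current network config (channel 1 is typical for
# Supermicro boards, but verify for your board)
ipmitool lan print 1

# Switch the address source from DHCP to static and assign example values
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.1.50
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.1.1
```

Whether this sticks may depend on the BMC firmware version, which is exactly the quirk described above.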

Other than that, the new box is rocking, no other issues (I did not bother to try the NICs, though).
http://www.supermicro.com/products/motherboard/Xeon/C600/X9SRL-F.cfm
 
Tried that before I bought the new board; it did not help (it said it applied the change to the EEPROM, but it made no difference).
 
I've opened a support ticket at Supermicro. However, so far it doesn't look like this will lead to a fix any time soon. I'm currently using the board under OpenIndiana, i.e. an OpenSolaris-derived distro based on Illumos, and Supermicro support already told me they don't have a lot of experience with OpenSolaris. Most of their suggestions assumed Linux as the controlling OS, like compiling and using the latest Intel network drivers for Linux.

I'm using the X9SCM-iiF, which has 2 x Intel 82574L on board. The X9SCM-F that you use has one 82574L but also an 82579LM NIC chip. war9200, is the first interface, the one that shows the problem for you, the 82574L-based or the 82579LM-based one?

It is a really strange problem, because the network interface only crashes when I'm doing a large copy job from another system to the Supermicro box. The other way around, i.e. using the Supermicro machine as the source and another computer as the copy target, works perfectly well. The same holds for the ping latency during the copy job: no increased latency when the Supermicro machine is the copy source, but fluctuating latency when it is the target.
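For anyone trying to reproduce the failing direction, a rough sketch (hostname, share name, and credentials below are placeholders):

```shell
# On another machine: mount the SMB share exported by the Supermicro box
sudo mount -t cifs //supermicro/share /mnt/test -o user=demo

# Push a large file AT the Supermicro box (the problematic direction)
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=8192 &

# ... while watching for latency spikes or dropped replies
ping -i 0.2 supermicro
```

Copying in the other direction (reading bigfile back from the share) should show no latency change if the asymmetry described above holds.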

One interesting suggestion from Supermicro support was to disable the Linux IPMI driver. I didn't investigate further since I'm not on Linux. Is there really a device driver for IPMI? I need to investigate the situation on OpenSolaris.
The only thing I can tell is that when I first put the board into operation, it was set up to run IPMI over the first LAN port instead of the dedicated management port. In this configuration I couldn't get the first LAN port working under OpenIndiana. It appeared perfectly OK in the GNOME network manager, and ifconfig also seemed to report a proper status, but it simply didn't get an IP address via DHCP. Only after I changed IPMI to use the dedicated management port was everything fine.

So now the LAN1 and IPMI interfaces should be completely separated, but who knows?

At least Supermicro knew that my config uses a dedicated IPMI interface, but they nevertheless recommended disabling the IPMI Linux driver. The only difference between the two LAN ports on my board that I can see is that you can run IPMI over LAN1 but not over LAN2.

Next thing I'll do is install VMware ESXi 5.1 on the box and run OpenIndiana as a guest. Then the NICs will be controlled by ESXi, and I'm curious whether this will change anything.
 
I'm using the X9SCM-iiF, which has 2 x Intel 82574L on board. The X9SCM-F that you use has one 82574L but also an 82579LM NIC chip. war9200, is the first interface, the one that shows the problem for you, the 82574L-based or the 82579LM-based one?

The first interface on the board (top port) should be the 82579LM, which is the chip that exhibits the problem. Please let me know if I'm wrong, but I believe this is correct.

> One interesting suggestion from Supermicro support was to disable the Linux IPMI driver. I didn't investigate further since I'm not on Linux. Is there really a device driver for IPMI? I need to investigate the situation on OpenSolaris.
I always keep those modules out. There are sometimes bugs where the chassis intrusion entries and similar events fill up the IPMI logs, and then kipmi (the kernel IPMI process) gets stuck at 100% CPU, which is really annoying.
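If you want to keep the Linux IPMI modules out the same way, a minimal sketch (the module names are the standard ipmi_* set; the modprobe.d path assumes a typical Linux distro):

```shell
# Prevent the Linux IPMI kernel modules from loading at boot
cat <<'EOF' | sudo tee /etc/modprobe.d/blacklist-ipmi.conf
blacklist ipmi_si
blacklist ipmi_devintf
blacklist ipmi_msghandler
EOF

# Unload them immediately if they are already loaded
sudo modprobe -r ipmi_devintf ipmi_si ipmi_msghandler
```

Note that this only affects in-band IPMI from the OS; the BMC itself and out-of-band access keep working.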

> Next thing I'll do is install VMware ESXi 5.1 on the box and run OpenIndiana as a guest. Then the NICs will be controlled by ESXi, and I'm curious whether this will change anything.
Have you tried buying a $20-$30 Intel PCI-e NIC (or a more expensive server-grade card), or an x4 4-port card?
 
I finally upgraded the X9SCM-iiF BIOS from version 2.0 to the latest version 2.0a, and this solved the problem!

It also solved a problem with 3 LSI_SAS9211-8i controllers, which was the initial reason for the upgrade.
 
It also solved a problem with 3 LSI_SAS9211-8i controllers, which was the initial reason for the upgrade.

Can you elaborate on that issue? I have an X9SCM-F with three M1015s and I'm having tons of issues. I can't tell if it's the motherboard, the cards, or the Norco 4224 backplanes.
 
I updated the BIOS of my X9SCM-iiF this weekend, but sadly the ping latency problems remained on both on-board controllers. I still have to use an I350 add-on card.
 
Have you changed to a Sandy Bridge chip, maybe borrowing a friend's, to see if the results are the same?

Since you updated your BIOS, have you tried a PCI-e NIC like the Intel Desktop CT to see what happens?

Sometimes latency issues aren't caused by the machines at either end; cheap switches in the path can cause them too. What is the weakest part of your network?

When I ping my Cisco switch (in sig) I get solid pings 24/7 at <1ms. However, when I connect to my cheap TP-Link gigabit switch I get the same pattern you reported in your initial link above.
 
The OS I'm running is Ubuntu 12.04 with a 3.6.2 kernel, but older kernels also showed this behaviour. The effect is also visible if I ping the machine from another machine connected through a Cisco 200 switch.

I tried both an I350-T2 and an X540-T2 card; neither exhibited the same behaviour. While I have a lot of different CPUs available, I'm not going to completely disassemble the machine, since it is my main fileserver and VM host, and the I350 card works fine and supports SR-IOV. But I would still like to get rid of the problem.

Maybe I should try to compile the latest e1000e driver from Intel's website?
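For what it's worth, the usual out-of-tree build looks roughly like this. The version string is a placeholder, so check the e1000 project on Sourceforge for the current release first:

```shell
# Unpack the e1000e source (x.y.z is a placeholder version)
tar xzf e1000e-x.y.z.tar.gz
cd e1000e-x.y.z/src

# Build against the running kernel's headers and install
make
sudo make install

# Reload the driver and confirm which version is now active
sudo rmmod e1000e && sudo modprobe e1000e
ethtool -i eth0          # "eth0" is an example interface name
```

You need the kernel headers for the running kernel installed, and the rmmod step will briefly drop the link on any interface using e1000e.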
 
That's interesting. Can you confirm that SR-IOV is possible with the X9SCM-iiF?
So far I couldn't find a definitive statement about this, not even on Supermicro's website.
I also cannot see any configuration option in the BIOS to enable/disable SR-IOV.

I would be very interested in sharing an X540-T2 with SR-IOV between several VMs.
 
Yes, I can confirm it. It is correct that there is no definitive statement, so I just bought it and tested it. I would go as far as saying that every server-chipset board since Sandy Bridge will support it, but I could not find proof of that. My previous X8SIA-F did not support it.
SR-IOV works well with both cards. The latency using VFs is better than software bridging, but the throughput is much worse. The Linux software bridge can push 40 Gbps, while the I350 achieves only about 4 Gbps for inter-VM traffic. The X540 could only transfer a bit more than 1 Gbps between VMs, although this was on a gigabit line; I'm not sure whether that has an influence. Tests were done with iperf.

It may be interesting to note that you can install the VF driver under Windows 7 even though Intel only offers drivers for Windows Server.

EDIT: Back on topic: I compiled the latest e1000e driver from Sourceforge and have not been able to reproduce the ping lag yet. More tests will be done this weekend.
 
I am about to buy another X9SCM-F for a build using an Ivy Bridge CPU. Anyone still having huge problems with this combination?
 
My problems seem to have gone away with the latest Linux drivers. Make sure you have a BIOS supporting Ivy Bridge; your board will not boot otherwise.
 
> I've opened a support ticket at Supermicro. However, so far it doesn't look like this will lead to a fix any time soon. [...]

Update the e1000e module. That has resolved all networking issues for me on the X9SCL and X9SCM servers that I manage (over a dozen).

I am about to buy another X9SCM-F for a build using an Ivy Bridge CPU. Anyone still having huge problems with this combination?


I'd buy the X9SCM-ii, as that is guaranteed to support Ivy Bridge out of the box; you may need to flash the X9SCM you buy before it will support Ivy Bridge.
 
Update the e1000e module. That has resolved all networking issues for me on the X9SCL and X9SCM servers that I manage (over a dozen).

Are you referring to a firmware fix, or is this just on OpenIndiana? I plan to either use OpenIndiana on bare metal or package it in ESXi like my other servers.

I'd buy the X9SCM-ii, as that is guaranteed to support Ivy Bridge out of the box; you may need to flash the X9SCM you buy before it will support Ivy Bridge.

As I don't even plan on using both network cards, those $40 extra are better spent on RAM :) I'm buying and picking up directly from a distributor, so it's easy to get an RMA, and I do have a V1 Xeon if a new firmware is required.
 
Are you referring to a firmware fix, or is this just on OpenIndiana? I plan to either use OpenIndiana on bare metal or package it in ESXi like my other servers.

I don't know regarding OpenIndiana; the e1000e update I mentioned is basically a driver update for the NIC on Linux distributions. I imagine you could compile the driver from the source on Intel's site on OpenIndiana as well.
 