Anyone have any experience with Chelsio NICs?

Sycraft
Supreme [H]ardness
Joined: Nov 9, 2006
Messages: 5,884
We're looking to upgrade our VM servers, which run quite a few low-load VMs, to 10Gbit, and while shopping around for NICs I came across Chelsio. We've always used Intel server NICs in the past, but these look interesting, and it looks like they might have better driver support for newer OSes than Intel.

Anyone use them, particularly with VMs and extra particularly with Hyper-V? Good, bad, indifferent?

Thanks.
 
I had two of them in a pfSense install. Unfortunately something in pfSense is a bit broken: even though the same driver covers everything from their 10Gb cards up to their 40Gb cards within a given series (T5, T6, etc.), pfSense would have driver instability and crash with the 40Gb card installed. With the 10Gb card it would only crash like that once every few months, so still somewhat unstable. With the 40Gb NIC installed, though, the crash would happen anywhere from once every couple of days to once a week.

The cards also run super hot, way hotter than even the older Intel 10Gb NICs, which I thought were hot before I got these Chelsios. And then they cost more than Intel on top of it all. So yeah, I'd just avoid them. The features they advertise are pretty nice, but unless you run a major server cluster you won't actually use anything more than the basic NIC features every brand has.
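For anyone else who hits this: the FreeBSD driver behind pfSense is cxgbe(4) (the T5 ports show up as cxl0, cxl1, etc.), and the usual first suggestion before blaming the hardware is to cap the queues and turn the hardware offloads off. A rough sketch of what that looks like; treat the tunable names as approximate recollections from cxgbe(4) and verify them against the man page for your FreeBSD version:

    # /boot/loader.conf.local -- tunable names as I recall them from cxgbe(4);
    # check the man page for your FreeBSD release before relying on these
    hw.cxgbe.nrxq="4"    # cap rx queues per port
    hw.cxgbe.ntxq="4"    # cap tx queues per port

    # and from a shell, disable the offloads that most often misbehave
    # (cxl0 is the T5 port name here; yours may differ)
    ifconfig cxl0 -tso4 -tso6 -lro

No promises it fixes the instability, but it's cheap to rule out before writing off the card.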

I was going to say no one beats Intel on price for the features, stability, and compatibility of their X540 NICs, but I just checked and they have nearly doubled in price on Amazon, so I guess they're kind of a bad deal these days. The best on price would probably be a regular old Aquantia AQC107 card nowadays. Unless you want to use SFP+ fiber, in which case used Mellanox 10Gb NICs are $40 or less.
 
The main features that interested me were the built-in switch, useful for inter-VM communication, and the RDMA/SMB Direct support, since I'd like to look at doing Storage Spaces Direct with the next refresh of the servers rather than an external NAS. Intel's pricing doesn't really bother me; $300-400 for an adapter is fine (we only have two VM servers, a primary and a backup, and the VMs are small enough to fit on those). I was just looking at other options that might give better performance. That said, something stable that doesn't give us shit is by far the most important thing. The VMs aren't very demanding, but the functions they perform are important.
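If it helps anyone evaluating the same thing: the RDMA/SMB Direct side is easy to sanity-check from PowerShell before committing to Storage Spaces Direct. A rough sketch below; the adapter and switch names are placeholders, substitute whatever Get-NetAdapter shows on your box:

    # Does Windows see RDMA on the NIC at all?
    Get-NetAdapterRdma

    # Turn it on for the adapter ("SLOT 2 Port 1" is a placeholder name)
    Enable-NetAdapterRdma -Name "SLOT 2 Port 1"

    # SMB Direct / S2D keys off this: the interface must show as RDMA capable
    Get-SmbClientNetworkInterface | Where-Object RdmaCapable

    # After pushing some traffic, confirm SMB Multichannel actually chose RDMA
    Get-SmbMultichannelConnection

    # If you team the ports for Hyper-V, a Switch Embedded Teaming vSwitch
    # keeps RDMA usable on the host (Windows Server 2016 or later)
    New-VMSwitch -Name "VMSwitch1" -NetAdapterName "SLOT 2 Port 1","SLOT 2 Port 2" -EnableEmbeddedTeaming $true

None of that proves the firmware behaves under load, but if the card never shows up as RDMA capable you know early that SMB Direct is off the table.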
 