10Gb Infiniband vs 1Gb Ethernet

How much better would two systems connected with these 10Gb Infiniband cards be compared to their embedded 1Gb ethernet cards? I'm mainly wondering about latency and bandwidth (sustained).

eBay card Link
 
Too many variables to answer your question. Is it for in-memory cache requests, such as those seen with Oracle RAC, or are you transferring data directly from storage devices? If the latter, what is the performance of those storage devices?

Basically, if your system's bottleneck is something other than the network, you will see no performance increase.
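A quick way to check is a raw point-to-point test over the link you already have. Here's a minimal sketch using only Python's standard library (the port number, buffer size, and duration are arbitrary placeholders):

```python
# Minimal point-to-point throughput test, standard library only.
# Run with no arguments on the receiving box, then run with the
# receiver's hostname/IP as the argument on the sending box.
import socket, sys, time

PORT = 5201        # arbitrary test port (placeholder)
CHUNK = 1 << 20    # 1 MiB per send/recv
DURATION = 10      # seconds the sender transmits

def receiver():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.perf_counter()
        while (data := conn.recv(CHUNK)):
            total += len(data)
        elapsed = time.perf_counter() - start
        print(f"sustained: {total / elapsed / 1e6:.1f} MB/s")

def sender(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.perf_counter() + DURATION
        while time.perf_counter() < end:
            conn.sendall(payload)

if __name__ == "__main__":
    receiver() if len(sys.argv) == 1 else sender(sys.argv[1])
```

If you already see close to ~110 MB/s on gigabit, the network is your bottleneck and faster cards could help; if you see much less, something else is.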
 
In general, 10G over Infiniband can be quite good, but you'll find that these particular cards perform quite badly. The MHEA28-XSC does not do processor offload very well, and it doesn't do any at all for the IP stack. Because of this, you get poor performance and high CPU load when running IP over Infiniband (which is probably what you will end up using).

See here for one person's current experience. His story is not yet completely written, but the experience so far is not very good: http://forums.servethehome.com/show...or-workstation-server-Recommendation-question
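On the latency side, a small-message round-trip probe is more telling than vendor specs, and it will also expose the IPoIB CPU overhead if you watch top while it runs. A rough echo-style sketch, again standard-library Python only (the port number is a placeholder, and this measures the IP path, not native IB verbs latency):

```python
# Rough small-message round-trip latency probe (TCP echo).
# No arguments = server; pass the server's hostname/IP for the client.
import socket, statistics, sys, time

PORT = 5202      # arbitrary test port (placeholder)
ROUNDS = 1000

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        while (msg := conn.recv(64)):
            conn.sendall(msg)  # echo each message straight back

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        rtts = []
        for _ in range(ROUNDS):
            t0 = time.perf_counter()
            conn.sendall(b"x")
            conn.recv(64)
            rtts.append((time.perf_counter() - t0) * 1e6)
        print(f"median RTT: {statistics.median(rtts):.0f} us")

if __name__ == "__main__":
    server() if len(sys.argv) == 1 else client(sys.argv[1])
```

Comparing the median over the onboard gigabit link against the same test over IPoIB will show what, if anything, these cards actually buy you on latency.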
 
It would mainly be for storage access for /home directories in a home network, where the host has a RAID0 array of 6 SAS disks.
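Rough back-of-envelope on that array, assuming ~150 MB/s sequential per SAS disk (an assumption, the real per-disk number depends on the drives):

```python
# Back-of-envelope for the RAID0 array vs link speeds.
disks = 6
mb_per_disk = 150              # assumed sequential MB/s per SAS disk
array = disks * mb_per_disk    # ideal RAID0 sequential: ~900 MB/s

gige = 1_000 / 8 * 0.94        # ~117 MB/s usable on 1Gb ethernet
teng = 10_000 / 8 * 0.94       # ~1175 MB/s usable on a 10G link
                               # 0.94 is a rough protocol-overhead factor

print(f"array ~{array} MB/s, 1GbE ~{gige:.0f} MB/s, 10G ~{teng:.0f} MB/s")
# Even in the ideal sequential case the array outruns gigabit several
# times over, but still sits under a 10G link.
```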

Can anyone suggest some cards that work well but don't cost a fortune?
 
Define "fortune"...

The least expensive solution for point-to-point 10GbE is probably the Dell XR997 (an OEM-branded Intel EXPX9501AT). These are single-port 10Gbase-T cards, so your cables will be cheap (regular Cat6 works without errors over modest distances). Since they use the standard Intel driver model, they are compatible with almost all OSes (Win, *nix, ESXi, etc). They can be found on fleabay for ~$150 each.

Other than costing 6x what you were looking at, they also have a really screamy little fan on them - but they work wonderfully well and can deliver full 10GbE performance (as long as you aren't doing super small packets).
 
What do you need that kind of bandwidth for? Also, those drives will not saturate 10Gb. My advice would be to invest in a decent switch that allows MPIO over multiple 1Gb connections.
 
For single point-to-point storage access you would have a hard time saturating a 10G connection. With only one client, even if the server had the disk bandwidth to saturate 10G, that single client probably could not consume it.
 