Infiniband home network

zandor

Anyone else running Infiniband at home? I just got my DDR Infiniband network & systems set up to the point where I'm mostly happy with them. I still have to try out NFS & iSCSI over RDMA (remote direct memory access), but everything else seems to be working pretty well.
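
For anyone thinking about trying it, the basic bring-up on the CentOS 6.3 boxes was roughly this. I'm going from memory, so double-check the group/package names and whether your switch already runs a subnet manager:

# Install the IB stack plus diagnostics and benchmark tools
yum groupinstall "Infiniband Support"
yum install infiniband-diags libibverbs-utils perftest

# Start the RDMA stack; run opensm on one host if your switch is unmanaged
service rdma start
service opensm start

# Verify the ports came up at the expected rate (20 Gb/s for a 4x DDR link)
ibstat
ibv_devinfo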

As far as performance tests go, so far I've just done some simple tests with FTP.

Main Rig: i7-3820, Mellanox ConnectX VPI IB card, dual boot CentOS 6.3 and Win7
File Server: Quad Opteron 8378 (2.4GHz), Dell PERC5i flashed to LSI firmware, 7x 146GB 15k SAS RAID5+HS, Mellanox ConnectX VPI IB card, CentOS 6.3
Quad #2: Quad Opteron 8356 (2.3 GHz), Mellanox Infinihost III IB card, CentOS 6.3

File Server -> Main Rig (CentOS 6.3), disk -> /dev/null: 650-700MB/s
File Server -> Main Rig (CentOS 6.3), memory (cache) -> /dev/null: 1100MB/s
File Server -> Main Rig (Win7), disk -> Crucial M4 128GB: 650-700MB/s*
File Server -> Quad #2, disk -> /dev/null: 650-700MB/s
File Server -> Quad #2, memory (cache) -> /dev/null: 700MB/s
Quad #2 -> File Server, memory (cache) -> /dev/null: 800MB/s

* I realize this is impossible. I assume Windows is write caching. I need to set up a ram disk or get some sort of benchmarking app.
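
For reference, the disk -> /dev/null tests are basically just an FTP get of a big file that gets thrown away on the client side, reading off the average rate. Something along these lines works (curl is just one option; the hostname and path are placeholders for my setup):

# Pull a big file over FTP, discard it, and print the average rate
curl -o /dev/null -w 'avg: %{speed_download} bytes/s\n' \
    ftp://fileserver/pub/isos/CentOS-6.3-x86_64-bin-DVD1.iso

The memory (cache) numbers are just a second run of the same transfer, so the file is already sitting in the server's page cache.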

You might expect more on the memory -> /dev/null tests, but it looks like the Opteron machines are CPU bound. The cards can only use 1 CPU core per TCP connection & their TCP/IP offload engines aren't as good as you'll find on a decent ethernet NIC, so to get full bandwidth from IP over IB I'll need more than one TCP connection or an RDMA connection. Sometime soon I'm going to try out NFS-RDMA. It should at least lower the CPU load. RDMA is much more efficient and faster than TCP or UDP, but it's not routable. I'm maxing out my disks though, so I'm happy.
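
If you want to poke at the single-core ceiling yourself, plain iperf over the IPoIB interface should show it. The address below is just whatever you assigned to ib0 on the other end:

# On the receiving box
iperf -s

# One stream pegs a single core on the Opterons
iperf -c 192.168.10.1 -t 30

# Several parallel streams should spread the per-connection TCP work across cores
iperf -c 192.168.10.1 -t 30 -P 4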
 
NFS-RDMA is faster than FTP, at least with my gear. It also completely blows away NFS over TCP or UDP.

Averages of 2 runs copying the CentOS 6.3 DVD 1 image to /dev/null using dd (exact commands sketched below the numbers):

RDMA
File Server -> Main rig (CentOS 6.3), disk -> /dev/null: 710MB/s
File Server -> Main rig (CentOS 6.3), memory (cache) -> /dev/null: 1.4GB/s
File Server -> Quad #2, disk -> /dev/null: 685MB/s
File Server -> Quad #2, memory (cache) -> /dev/null: 1.0GB/s

TCP
File Server -> Main rig (CentOS 6.3), disk -> /dev/null: 362MB/s
File Server -> Main rig (CentOS 6.3), memory (cache) -> /dev/null: 588MB/s
File Server -> Quad #2, disk -> /dev/null: 363MB/s
File Server -> Quad #2, memory (cache) -> /dev/null: 569MB/s

UDP
File Server -> Main rig (CentOS 6.3), disk -> /dev/null: 320MB/s
File Server -> Main rig (CentOS 6.3), memory (cache) -> /dev/null: 333MB/s
File Server -> Quad #2, disk -> /dev/null: 318MB/s
File Server -> Quad #2, memory (cache) -> /dev/null: 327MB/s
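
The setup for the RDMA mounts was roughly the following; the export path and mount point are placeholders, and the module names are worth checking against your kernel's nfs-rdma documentation:

# Server side: load the RDMA transport for nfsd and have it listen on the RDMA port
modprobe svcrdma
echo rdma 20049 > /proc/fs/nfsd/portlist

# Client side: load the client transport and mount with the rdma option
modprobe xprtrdma
mount -o rdma,port=20049 fileserver:/export/isos /mnt/isos

# Each number above is dd reading the DVD image off the mount and reporting the rate
dd if=/mnt/isos/CentOS-6.3-x86_64-bin-DVD1.iso of=/dev/null bs=1M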
 
I have nothing at home, outside of my ESXi lab, that could use the speed. A single SSD won't even touch the bandwidth available via my ZFS/NFS storage across 40Gb/s QDR Infiniband. What do you plan on using it for?
 
QDR? Nice. I went with the older DDR stuff because it was cheaper & only my desktop machine could possibly manage a transfer rate over 20Gbit. The quads only have PCIe 1.0, so 8x cards are maxed out at DDR speeds.
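
Rough numbers, in case anyone wonders why PCIe 1.0 caps things at DDR:

PCIe 1.0 x8: 8 lanes x 250 MB/s = ~2 GB/s (~16 Gbit/s) each direction
DDR 4x IB: 20 Gbit/s signaling, ~16 Gbit/s of data after 8b/10b encoding, so ~2 GB/s
QDR 4x IB: 40 Gbit/s signaling, ~32 Gbit/s of data, which those slots can't feed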

I've basically got 2 reasons for putting in IB. One is that it was the cheapest and easiest way to go faster than 1Gb, at least from my point of view. We use IB extensively at work, so I'm pretty familiar with it. $70 each for 20Gb NICs and 10 CX4 cables for $35 shipped isn't bad at all.

The other reason is I've got a few HPC-type programming projects in mind that could benefit from a high-bandwidth interconnect between systems. Two of them would also benefit from fast disks. One's work-related (market data processing for trading applications), one's more along the lines of text search, and the last one is maybe doing some physics simulations with a friend. His Ph.D. research involved a lot of physics simulations, and he still dabbles in it even though he works in the finance industry now. If any of these go anywhere I'll probably be looking for more machines, but for now I have enough toys to get started on development.
 
Interesting. We only use IB too, and I don't disagree: the price point is great! I mainly see either the disks or the PCIe bus being the limitation.

(I have a QLogic 40Gb 32-port switch and 6 cards that I am still trying to find a use for at home, haha.)
 