Anyone else running Infiniband at home? I just got my DDR Infiniband network & systems set up to the point where I'm mostly happy with them. I still have to try out NFS & iSCSI over RDMA (remote direct memory access), but everything else seems to be working pretty well.
As far as performance tests go, so far I've just done some simple transfers with FTP.
Main Rig: i7-3820, Mellanox ConnectX VPI IB card, dual boot CentOS 6.3 and Win7
File Server: Quad Opteron 8378 (2.4GHz), Dell PERC5i flashed to LSI firmware, 7x 146GB 15k SAS RAID5+HS, Mellanox ConnectX VPI IB card, CentOS 6.3
Quad #2: Quad Opteron 8356 (2.3 GHz), Mellanox Infinihost III IB card, CentOS 6.3
File Server -> Main Rig (CentOS 6.3), disk -> /dev/null: 650-700MB/s
File Server -> Main Rig (CentOS 6.3), memory (cache) -> /dev/null: 1100MB/s
File Server -> Main Rig (Win7), disk -> Crucial M4 128GB: 650-700MB/s*
File Server -> Quad #2, disk -> /dev/null: 650-700MB/s
File Server -> Quad #2, memory (cache) -> /dev/null: 700MB/s
Quad #2 -> File Server, memory (cache) -> /dev/null: 800MB/s
* I realize this is impossible; a 128GB M4 can't sustain writes anywhere near that fast, so I assume Windows is write caching. I need to set up a RAM disk or get some sort of benchmarking app.
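In the meantime, here's a rough sketch of the sort of benchmarking app I mean: a tiny sender/receiver pair that keeps everything in memory, so neither the SSD nor Windows' write cache is involved. The port number and the 4GB transfer size are arbitrary placeholders, nothing Infiniband-specific; it just runs over whatever interface the IPoIB address routes to.

#!/usr/bin/env python
# throughput.py - bare-bones memory-to-memory TCP throughput test.
# Run "python throughput.py server" on one box, then
# "python throughput.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5001               # arbitrary test port
CHUNK = 4 * 1024 * 1024   # 4 MB per send/recv call
TOTAL = 4 * 1024 ** 3     # push 4 GB per run

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('', PORT))
    s.listen(1)
    conn, _ = s.accept()
    received = 0
    start = time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)   # count the bytes and throw them away: no disk, no write cache
    elapsed = time.time() - start
    print('%d bytes in %.2f s = %.1f MB/s' % (received, elapsed, received / elapsed / 1e6))

def client(host):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, PORT))
    payload = b'\x00' * CHUNK
    sent = 0
    while sent < TOTAL:
        s.sendall(payload)
        sent += len(payload)
    s.close()

if __name__ == '__main__':
    if sys.argv[1] == 'server':
        server()
    else:
        client(sys.argv[2])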
You might expect more on the memory -> /dev/null tests, but it looks like the Opteron machines are CPU bound. The cards can only use one CPU core per TCP connection, and their TCP/IP offload engines aren't as good as you'll find on a decent Ethernet NIC, so to get full bandwidth from IP over IB I'll need more than one TCP connection or an RDMA connection. Sometime soon I'm going to try out NFS-RDMA; it should at least lower the CPU load. RDMA is much more efficient and faster than TCP or UDP, but it's not routable. I'm maxing out my disks though, so I'm happy.
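For the "more than one TCP connection" part, the same sketch extends to parallel streams: open several connections at once, one sender thread per connection, and add up the totals on the receiving side (iperf's -P option does the same thing more rigorously). The stream count of 4 and the 1GB per stream are arbitrary placeholders:

#!/usr/bin/env python
# multistream.py - same idea, but N parallel TCP connections so the
# per-connection work can spread across cores.
import socket
import sys
import threading
import time

PORT = 5001                 # arbitrary test port
CHUNK = 4 * 1024 * 1024     # 4 MB per send/recv call
PER_STREAM = 1024 ** 3      # 1 GB per stream
STREAMS = 4                 # arbitrary; try 2-8 and watch per-core load in top

def sink(conn, totals, i):
    # Receive and discard everything on one connection, counting bytes.
    count = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        count += len(data)
    totals[i] = count

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('', PORT))
    s.listen(STREAMS)
    conns = [s.accept()[0] for _ in range(STREAMS)]
    totals = [0] * STREAMS
    start = time.time()
    threads = [threading.Thread(target=sink, args=(c, totals, i))
               for i, c in enumerate(conns)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    print('%.1f MB/s aggregate over %d streams' % (sum(totals) / elapsed / 1e6, STREAMS))

def send_stream(host):
    # One connection's worth of traffic.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, PORT))
    payload = b'\x00' * CHUNK
    sent = 0
    while sent < PER_STREAM:
        s.sendall(payload)
        sent += len(payload)
    s.close()

def client(host):
    threads = [threading.Thread(target=send_stream, args=(host,))
               for _ in range(STREAMS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == '__main__':
    if sys.argv[1] == 'server':
        server()
    else:
        client(sys.argv[2])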