10 GbE connection between 2-3 computers

Silhouette

I want to connect 2 or 3 computers using 10GbE. I don't need a switch. The traffic will be between 1 computer and 1-2 servers.

I need approximately 500 MB/s. Each server needs a single NIC, while the workstation needs either two single-port NICs or one dual-port NIC. Alternatively, one single-port NIC and I'll switch the cables when necessary.

Do you have any suggestions for NICs?
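
For anyone skimming the numbers: a rough back-of-the-envelope sketch (my own arithmetic, ignoring protocol overhead and disk limits) of why 500 MB/s rules out single gigabit links and points at 10GbE:

```python
# Rough bandwidth arithmetic for the 500 MB/s requirement
# (line rates only; real throughput is lower after TCP/SMB overhead).

target_mb_s = 500                 # required sustained throughput
gige_mb_s = 1_000 / 8             # 1 GbE line rate  ~125 MB/s
ten_gige_mb_s = 10_000 / 8        # 10 GbE line rate ~1250 MB/s

print(f"1 GbE links needed (ideal): {target_mb_s / gige_mb_s:.1f}")       # ~4 teamed links
print(f"10 GbE headroom (ideal):    {ten_gige_mb_s / target_mb_s:.1f}x")  # ~2.5x
```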
 
Dang, how much money are you willing to spend? You might be able to get away with a 4 port gigabit NIC and an appropriate smart switch. Or, two of those 4-port adapters in the same PC. :) I'm not sure if a single transfer between two nodes would take advantage of that aggregate bandwidth though, even with adapter teaming. I'd have to look into that.
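
To make the teaming caveat concrete: typical link aggregation hashes each flow onto exactly one member link, so a single file copy never runs faster than one port. A minimal sketch of that idea (an illustrative layer-3/4 style hash, not the exact algorithm of any particular NIC or switch):

```python
# Why teaming doesn't speed up a single transfer: each flow is hashed
# to ONE member link; only many concurrent flows spread across the team.

def member_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                num_links: int) -> int:
    """Pick one member link of the team for a given flow."""
    return hash((src_ip, dst_ip, src_port, dst_port)) % num_links

# A single big SMB copy is one flow -> one gigabit link, team or not.
print(member_link("10.0.0.1", "10.0.0.2", 51515, 445, num_links=4))
# A second, separate flow may land on a different link.
print(member_link("10.0.0.1", "10.0.0.2", 51516, 445, num_links=4))
```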

Edit: What are you trying to do, out of curiosity? 10GbE links are usually for carriers or for moving massive amounts of data between core switches. I don't think any single PC can push that much data; no disk subsystem can, AFAIK.
 
I have several servers with RAID setups (e.g., Areca 1280ML) that can push more than 800 MB/s. Teaming is not going to cut it; I've tried quad-port Intel server NICs and they are insufficient for my purposes.

Applications are backups of 30+ TB arrays and color grading of film material stored on the servers.

My budget is high, but I don't want to spend more than I have to.
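
Putting rough numbers on the backup case (decimal units, sustained rates assumed; the 90 MB/s figure is what gigabit delivers in practice later in this thread):

```python
# How long a 30 TB backup takes at different sustained speeds --
# the reason gigabit (teamed or not) is painful here.

ARRAY_TB = 30
for label, mb_s in [("gigabit in practice (~90 MB/s)", 90),
                    ("target 500 MB/s", 500),
                    ("array limit ~800 MB/s", 800)]:
    hours = ARRAY_TB * 1_000_000 / mb_s / 3600
    print(f"{label:32s}: {hours:6.1f} h")
```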
 
I was thinking the same thing about disk throughput but didn't say it. With a standard two-disk RAID 0 in a desktop, like WD Black drives, you get around 200 MB/s. 10GbE connections are used on Cisco switches and routers as the backbone for large corporate networks.

OK, you've got real data servers here, and a 10 Gb NIC is going to cost you. You will need true hardware, not teaming.
If you want to spend less money, there is this, but it is 1000 Mbit/s only: http://www.newegg.com/Product/Product.aspx?Item=N82E16833106014
For 10 gigabit, something like this maybe? http://www.cdwg.com/shop/products/default.aspx?EDC=1501692
 
That Intel NIC would do the job, but I would try to bid CDW, TechDepot, and PC Mall reps against each other. Those things are not cheap, so I would totally haggle a few reps to get a few percent off.
 
Silhouette.. I did the same as you for around $300 and some time on eBay. Wait it out and you can pick up 10GbE PCIe NICs on eBay for $80-150. I picked up one single-port and one dual-port 10GBase-CX4 card, as well as 5-6 of the Ethernet CX4 cables. After that, expect your write speeds to be the bottleneck. I'm able to push 275 MB/s sustained through the link. I have a pair of Core 2 Duo/Areca (1231ML/1261ML) based boxes with 12- and 16-spindle RAID 5 arrays, 2 GB of cache each.

Alternatively, if you want to try InfiniBand, a pair of 10 Gig PCIe HCAs isn't expensive. You'll top out at around 160-170 MB/s, as IPoIB doesn't have any RDMA support. Oddly enough, I was hitting the same 160 MB/s with a pair of 20 Gig HCAs.
 
For the most time-critical work (playback) I don't actually need to write the data anywhere, so that should help me out. Have you done any iperf testing of your network?

I'm mostly wondering whether the Intel cards are much better at throughput/CPU utilization than the cheaper solutions. I'm willing to spend the $, but only if it's necessary.
 
I didn't run iperf; the only thing I was worried about was pure throughput in a Windows environment (Vista x64). I am merely backing up one array to the other once a month or so, and 5 TB at 90 MB/s just took way too long. I tested out IP over InfiniBand with a pair of IB HCAs I had lying around, and with a few sessions of TeraCopy and FastCopy I was able to bump that up to 120 and 140 MB/s respectively. With the 10GbE NICs directly connected and FastCopy, I was able to bump that up further to 260-270 MB/s sustained. Both network cards support IPv4 checksum offloading, TCP connection offloading, and jumbo frames (set to an 8000-byte MTU by default). One card is a NetXen single-port 10GbE CX4, the other is a Silicom PE10G2T-CX4 (Broadcom based).

As I said before, IP over IB doesn't support RDMA, so it is CPU bound, and I found that with my Core 2 CPUs (E6750 and E2180) I was limited to around 160 MB/s. This is of course with the E2180 system as the bottleneck. With the 10GbE adapters installed I was seeing the E2180 system running at 50-70% CPU at 260 MB/s (as reported by NetLimiter and DU Meter). For reasons I don't know, I was unable to turn on the Silicom adapter's "large send offload" setting.

iperf is reading 1.5-1.85 Gbit/s depending on what I feed it for settings; the E2180 CPU pegs right at 100% for the 10 seconds it is running. I have no idea what settings to feed it. Let me know and I'll try something out.
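
For reference, a sketch of the iperf knobs that are usually worth varying when chasing 10GbE numbers: parallel streams (-P), TCP window size (-w), and a longer run (-t). This assumes classic iperf 2 flags, that iperf is on the PATH, and that the far end is already running "iperf -s"; the server address is hypothetical:

```python
# Sweep a few iperf 2 settings that usually matter on a 10GbE link.
# Assumes "iperf -s" is already running on the other machine.
import subprocess

SERVER = "192.168.10.2"   # hypothetical address of the far end

for streams in (1, 2, 4):
    for window in ("64K", "256K", "1M"):
        cmd = ["iperf", "-c", SERVER,
               "-P", str(streams),   # parallel client streams
               "-w", window,         # requested TCP window size
               "-t", "20"]           # run for 20 seconds
        print(">>>", " ".join(cmd))
        subprocess.run(cmd, check=False)
```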
 
Any reason you're not looking at Fibre Channel? It's a lot less expensive than 10GbE (buying working pulls on eBay), and it doesn't have the overhead of SMB/SMB2 or similar.

A single 4Gb FC link is capable of 400 MB/s. If you go dual you can get 800 MB/s.
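
For what it's worth, the 400 MB/s figure is roughly the 4GFC line rate after 8b/10b encoding; a quick sanity check of that arithmetic (before any higher-level protocol overhead):

```python
# Where "~400 MB/s per 4Gb FC link" comes from.
line_rate_gbaud = 4.25        # 4GFC signalling rate
payload_fraction = 8 / 10     # 8b/10b encoding overhead
data_gbit_s = line_rate_gbaud * payload_fraction      # ~3.4 Gbit/s
print(f"{data_gbit_s / 8 * 1000:.0f} MB/s per link")  # ~425 MB/s raw
# Two links multipathed/striped lands in the ~800 MB/s range.
```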
 
Myself or Silhouette? I was actually looking at FCoIB as it supports RDMA, but there isn't Windows support for that at the moment. My 10GbE cards ran me $79 and $129 plus shipping, and the cables cost me $100 shipped, minus what I make when I sell the extras off. I'm happy with the price/performance ratio. :)
 
I was intending to address the OP, but how did you get 10gigE cards for those prices? :eek:
 
Perhaps people assumed the $300 price I mentioned in my other reply was per card, but it was ~$300 total. Just keep an eye out. Three Mellanox dual-port 10GBase-CX4 cards (MNEH28-XTC) sold recently at $99 each. That is pretty good for a $500 retail card. :)
 
Hmm, 1.5-1.85 Gbit/s with the CPU pegged at 100% looks like a pretty poor driver to me.
 