10gig networking

IP over InfiniBand has terrible performance. You would be better off with a handful of teamed NICs. There are a number of threads on the matter on this forum if you search around for them.
 

I must disagree with that. We are running some Mellanox FDR IB in the office and get ~50Gb/s performance out of it running IPoIB. This particular install is a D2D2T network from some core servers. It is not as efficient as IPoE on a byte-for-byte basis, but it is still faster than just about anything else out there right now and has extremely low latency.
 
All the previous threads on the topic that I've encountered said otherwise; people were seeing maybe ~25% of the rated speed doing IP traffic.
 
I suppose, like anything else, it varies with the hardware and infrastructure. I know some of the cheap QLogic IB stuff is more Realtek-level than Intel-level. It also depends on your usage: are you pushing just one stream at a time, or do you have a lot of interleaved comms? In our particular case we tested the FDR Mellanox (56 Gb/s) against some of the available 40GbE stuff and saw better performance and much lower latencies out of the IB gear.
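For anyone who wants to see the single-stream vs. interleaved difference on their own link, here is a minimal multi-stream TCP throughput sketch in Python. The port number, test duration, and stream count are arbitrary placeholders, and this is only a rough stand-in for a proper tool like iperf:

# Rough multi-stream TCP throughput sketch (hypothetical port/stream count).
# Run "python tput.py server" on one host and
# "python tput.py client <server_ip> [streams]" on the other.
import socket
import sys
import threading
import time

PORT = 5201                 # placeholder port
DURATION = 10               # seconds per run
CHUNK = b"\0" * (1 << 20)   # 1 MiB send buffer

def drain(conn):
    # Receiver side: just discard whatever arrives.
    while conn.recv(1 << 20):
        pass
    conn.close()

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=drain, args=(conn,), daemon=True).start()

def client(host, streams):
    totals = [0] * streams

    def push(i):
        # Each stream pushes zero-filled chunks for DURATION seconds.
        s = socket.create_connection((host, PORT))
        deadline = time.time() + DURATION
        while time.time() < deadline:
            s.sendall(CHUNK)
            totals[i] += len(CHUNK)
        s.close()

    threads = [threading.Thread(target=push, args=(i,)) for i in range(streams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    gbits = sum(totals) * 8 / DURATION / 1e9
    print("%d stream(s): %.2f Gb/s aggregate" % (streams, gbits))

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2], int(sys.argv[3]) if len(sys.argv) > 3 else 4)

Comparing the aggregate number at 1 stream vs. 4 or 8 streams usually makes it obvious whether the link is limited by the wire or by a single flow.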
 
Starting to look into this myself. Are you using point-to-point and running a subnet manager on one of the servers, or using a switch with the manager running there? If a switch, which one?

Thanks
 
In this particular install we are using Mellanox 353A boards (which can do FDR IB and 10GbE; there are also 40GbE versions) connected to an SX6018 switch. The SX6018 isn't shipping until September, I believe; we are a test site. This is replacing an older Voltaire IB switch.
 
As said, 10Gb/s is usually overkill. But I'm finding 1Gb/s can be easily saturated.
However, I'm actually kind of shocked at the performance of "desktop" grade drives I'm building into some non-critical servers lately.

On fake-RAID (HP SmartArray B110i) in RAID10 with 4x 2TB Seagate Barracuda drives, I'm getting sustained reads over 250MB/s and writes over 220MB/s (testing to/from a SATA SSD). That's better performance than a 3-year-old Dell SAS array in RAID10 with 15K drives.
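If anyone wants to sanity-check their own array the same way, a rough sequential-read timing in Python looks something like the sketch below. The file path is a placeholder, and the test file should be written first and be much larger than RAM so the page cache doesn't inflate the result:

# Rough sequential-read benchmark (hypothetical path on the array under test).
import time

TEST_FILE = "/mnt/array/testfile.bin"   # placeholder: pre-created large file
BLOCK = 1 << 20                          # read in 1 MiB chunks

total = 0
start = time.time()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        chunk = f.read(BLOCK)
        if not chunk:
            break
        total += len(chunk)
elapsed = time.time() - start
print("read %.0f MB in %.1f s -> %.0f MB/s" % (total / 1e6, elapsed, total / 1e6 / elapsed))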
Yep. Some of the latest 2TB+ magnetic HDDs get between 110-150 MB/s sustained read and write. Just wait until the storage density jumps another order of magnitude. They've already got laser and magnetic recording working together to increase the storage density immensely -- just a matter of time before those kinds of HDDs come to the consumer market.
 
Seagate is saying HAMR is coming in 18-24 months. Though it may eventually increase densities by up to an order of magnitude, it will not do anywhere near as much for transfer rates. Even the increases in STR are generally the exception rather than the rule; smaller random reads are more the rule for the average HDD, and because of the latency of seeking out the pieces of each file those don't come anywhere near SATA2 speeds, let alone anything faster.
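A quick back-of-envelope calculation shows why; the figures below are generic assumptions for a 7,200 RPM desktop drive, not measurements of any particular model:

# Why small random reads never approach SATA link speeds.
# All figures are rough, generic assumptions for a 7,200 RPM drive.
avg_seek_ms = 8.5                        # assumed average seek time
rotational_ms = 0.5 * 60000.0 / 7200     # half a rotation at 7,200 RPM (~4.2 ms)
service_ms = avg_seek_ms + rotational_ms # time to reach each random 4 KiB piece

iops = 1000.0 / service_ms               # random operations per second
random_mb_s = iops * 4096 / 1e6          # throughput with 4 KiB requests

print("~%.0f IOPS, ~%.2f MB/s on 4 KiB random reads" % (iops, random_mb_s))
print("vs ~300 MB/s of SATA2 link speed and 100-150 MB/s sequential")

Even with a generous seek figure the drive stays latency-bound to well under 1 MB/s on that workload.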
 
It makes a lot more sense to me that in most cases, even where you don't already have switches with a few 10GbE uplinks or a modular bay to accept 10GbE, you can still benefit from crossover connections. A direct 10G link to a SAN like the QNAP TS-879 Pro will just cost you a NIC for the server and a NIC for the unit. Maybe $1k and you're good to go.

I think you'll see crossover links and 1-4 uplinks for a while longer, where people are buying serious gear just for cost reasons to support that kind of data. I'm sure there'll be cheap gear in the not too distant future, where you'll be able to get 2 NICs and a 5-port 10G switch for $400-800, but I doubt it could handle wire speed on more than 1 or 2 ports.
 
I think your read on what we will see is probably correct - but the thought that the initial low-port-count 10G switches won't run non-blocking wire speed is not accurate. 50-100G switch chips are already available and reasonably inexpensive. The cost limiter remains the expense and power consumption of the prior generation of line drivers. This is starting to change, and small 10GBase-T layer-2 switches should be available later this year.
 
+1 Informative, thanks for the clarification. =)
 
Well, since I have a Cisco 3750E-24 port switch with 2 10Gb X2 ports, I think I am going to source some X2 modules from eBay for about 50-60 bones, some MM fiber, and a 10Gb card, and see what this stuff can do for me.

Probably nothing at all, because honestly 1Gb/s is totally fast enough for what I use it for, and I do transfer a HUGE amount of files quite frequently.

I actually foresee 10G taking a plunge into the regular consumer market sooner rather than later. However, for some reason I do not anticipate the initial adoption prices being easy on the wallet. I remember when gig-E first hit the consumer (I mean regular home user) market, and it was quite difficult for many to swallow the price of the cards and switches at first. But like all tech it will come down rapidly once it goes mainstream at the home and small business consumer level. I give it 2 more years before we start to see a standard release of 10G to the consumer sector.
 
I suppose there's still a looong time until consumer-grade computers are equipped with 10G networking, considering that some of today's cheaper PCs are equipped with 10/100 adapters :eek:
 
I've read somewhere that server motherboards will be shipping this year with 10Gb onboard.
 
Supermicro has had boards with integrated 10GbE for about 5 years now.

But up until this year's models it was SFP+/fiber-based 10G. What is new with the new generation of boards is 10GBase-T - RJ45 on Cat6/6a twisted pair. Having these boards helps with the demand side of getting reasonably priced 10GBase-T switches to market.
 
Well, to bump this thread again and hopefully add more for people looking into 10Gb.

Has anyone done a good comparison showing which 10Gb NICs perform the best? I want to ditch a Broadcom NIC in one of my servers and put a better NIC in it so its arrays can be accessed faster (it has the IOPS and speed to handle more than the 3Gb/s it's getting now on its current NIC). The NIC I am looking for needs to be SFP+ as well, not traditional copper or fiber.

I'm looking at this right now for what I need http://ark.intel.com/products/39776/Intel-Ethernet-Server-Adapter-X520-DA2

Reading through other places online and other forums, it looks like a lot of the cheaper NICs are topping out at around 3Gb/s of utilization.

Does anyone think that I would get more speed and reliability out of that Intel card, or even a different vendor's card?
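Before swapping hardware it may be worth confirming what the current NIC is actually moving during a transfer. A small sketch like this just polls the Linux sysfs byte counters; the interface name is a placeholder for whatever the SFP+ port shows up as:

# Poll Linux sysfs byte counters to see what a NIC is really pushing.
import time

IFACE = "eth2"    # placeholder: the interface under test
INTERVAL = 2      # seconds between samples

def read_counter(direction):
    with open("/sys/class/net/%s/statistics/%s_bytes" % (IFACE, direction)) as f:
        return int(f.read())

prev_rx, prev_tx = read_counter("rx"), read_counter("tx")
while True:
    time.sleep(INTERVAL)
    rx, tx = read_counter("rx"), read_counter("tx")
    print("rx %.2f Gb/s  tx %.2f Gb/s" % ((rx - prev_rx) * 8.0 / INTERVAL / 1e9,
                                          (tx - prev_tx) * 8.0 / INTERVAL / 1e9))
    prev_rx, prev_tx = rx, tx

If the counters top out around the same 3Gb/s no matter the workload, the bottleneck is more likely the card or its driver/offload settings than the array behind it.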
 
I've got several QNAPs in production that run LACP on 1GbE connections; I can easily have 2-3 VMs running on one, and it works quite well. I am thinking of getting a 5524 or a 6224 Dell fleabay switch and seeing how much I can squeeze out of a QNAP. They are actually very good little NASes. I primarily use NFS and I get a much better transfer rate than with iSCSI.
 
I can't find anything on what I swear I read about Intel releasing that $150 10G NIC... was I imagining things?
 
Might be imagining it, or just high... a $150 10Gb NIC is not going to happen any time soon.
 