bman212121
[H]ard|Gawd
bman - see my above post - if you want cheap p2p 10Gb now, go the Infiniband IPoIB now on the cheap.
Interesting, I might have to look into that.
bman - see my above post - if you want cheap p2p 10Gb now, go the Infiniband IPoIB now on the cheap.
IP over InfiniBand has terrible performance. You would be better off with a handful of teamed NICs. There are a number of threads on the matter on this forum if you search around for them.
Starting to look into this myself. Are you using point-to-point and running a subnet manager on one of the servers, or using a switch with the manager running there? If a switch, which one?

I must disagree with that. We are running some Mellanox FDR IB in the office and get ~50Gb performance out of it running IPoIB. This particular install is a D2D2T network from some core servers. It is not as efficient as IPoE on a byte-for-byte basis, but it is still faster than just about anything out there right now and has extremely low latency.
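For anyone who wants to sanity-check an IPoIB link once the interfaces are up and addressed, here's a minimal single-stream TCP throughput sketch in Python, a crude stand-in for iperf. The port number and the assumption that the link already has IPs assigned are mine, and note a single TCP stream won't come close to that ~50Gb figure; that takes multiple streams or RDMA:

```python
# Minimal point-to-point throughput test, a rough stand-in for iperf.
# Assumes the IPoIB interface is already up and addressed.
# Usage: "python3 tput.py server" on one host,
#        "python3 tput.py client <server_ip>" on the other.
import socket
import sys
import time

PORT = 5201            # same default port iperf3 uses
CHUNK = 1 << 20        # move data in 1 MiB pieces
TOTAL = 4 << 30        # push 4 GiB total, enough to smooth out bursts

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received = 0
        start = time.monotonic()
        while True:
            data = conn.recv(CHUNK)
            if not data:          # client closed the connection
                break
            received += len(data)
        elapsed = time.monotonic() - start
        print(f"{received / elapsed / 1e9 * 8:.2f} Gb/s from {addr[0]}")

def client(host):
    payload = bytes(CHUNK)        # a 1 MiB buffer of zeros
    start = time.monotonic()
    with socket.create_connection((host, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
    elapsed = time.monotonic() - start
    print(f"{sent / elapsed / 1e9 * 8:.2f} Gb/s to {host}")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```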
Yep. Some of the latest 2TB+ magnetic HDDs get 110-150 MB/s sustained read and write. Just wait until the storage density jumps another order of magnitude. They've already got laser and magnetic recording working together to increase the storage density immensely; it's just a matter of time before those kinds of HDDs come to the consumer market.

As said, 10Gb/s is usually overkill. But I'm finding 1Gb/s can be easily saturated.
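To put numbers on that: gigabit Ethernet tops out around 125 MB/s before framing overhead, so a single modern HDD on a sequential transfer can fill the pipe by itself. A quick back-of-the-envelope check, using the drive figures quoted above:

```python
# How many sequential HDD streams does it take to fill a link?
# 1 Gb/s = 125 MB/s raw; real TCP payload is a bit less after overhead.
for link_gbps in (1, 10):
    link_mbs = link_gbps * 1000 / 8          # MB/s, decimal units
    for drive_mbs in (110, 150):
        print(f"{link_gbps} Gb/s link ~ {link_mbs:.0f} MB/s "
              f"-> {link_mbs / drive_mbs:.1f} drives at {drive_mbs} MB/s")
```

At 150 MB/s, a single drive already exceeds what GigE can carry, while 10GbE has room for eight or more of them.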
However, I'm actually kind of shocked at the performance of the "desktop"-grade drives I've been building into some non-critical servers lately.
On fake-RAID (HP SmartArray B110i) in RAID10 with 4x 2TB Seagate Barracuda drives, I'm getting sustained reads over 250MB/s and writes over 220MB/s (testing to/from a SATA SSD). That's better performance than a 3-year-old Dell SAS array in RAID10 with 15K drives.
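If anyone wants to reproduce that kind of sequential test without vendor tools, here's a rough Python sketch. The mount point and file size are placeholders, and since this goes through the filesystem rather than the raw array, the read number will be cache-inflated unless the test file is larger than installed RAM:

```python
# Crude sequential-throughput check in the spirit of the numbers above.
# Writes then reads back a large test file on the array.
import os
import time

PATH = "/mnt/array/throughput.bin"   # hypothetical mount point
SIZE = 8 << 30                       # 8 GiB, ideally larger than RAM
CHUNK = bytes(4 << 20)               # 4 MiB per write

start = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(SIZE // len(CHUNK)):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())             # force data all the way to the disks
write_s = time.monotonic() - start

start = time.monotonic()
with open(PATH, "rb") as f:
    while f.read(len(CHUNK)):        # stream the whole file back
        pass
read_s = time.monotonic() - start

os.remove(PATH)
print(f"write {SIZE / write_s / 1e6:.0f} MB/s, "
      f"read {SIZE / read_s / 1e6:.0f} MB/s")
```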
It makes a lot of sense to me: in most cases, even if you don't already have switches with a few 10GbE uplinks or a modular bay that accepts 10GbE, you can still benefit from crossover connections. A direct 10G link to a SAN like the QNAP TS-879 Pro will just cost you a NIC for the server and a NIC for the unit. Maybe $1k and you're good to go.
I think you'll see crossover links and 1-4 uplinks for a while longer, with people skipping serious gear purely for cost reasons even when they have that kind of data to move. I'm sure there'll be cheap gear in the not-too-distant future, where you'll be able to get 2 NICs and a 5-port 10G switch for $400-800, but I doubt it could handle wire speed on more than 1 or 2 ports.
I think your read on what we will see is probably correct, but the idea that the initial low-port-count 10G switches won't run non-blocking at wire speed is not accurate. 50-100G switch chips are already available and reasonably inexpensive. The cost limiter remains the expense and power consumption of the prior generation of line drivers. This is starting to change, and the small 10GBase-T layer-2 switch should be available later this year.
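The arithmetic behind "non-blocking" is simple enough: every port must run at line rate in both directions simultaneously, which is exactly why a 5-port 10G switch lands within the 50-100G fabric range those chips already cover:

```python
# Fabric capacity required for a non-blocking small 10GbE switch:
# every port at line rate, both directions at once (full duplex).
ports, port_gbps = 5, 10
fabric_gbps = ports * port_gbps * 2
print(f"{ports}x{port_gbps}G non-blocking fabric: {fabric_gbps} Gb/s")  # 100 Gb/s
```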
I've read somewhere that server motherboards will be shipping this year with 10Gb onboard.
Supermicro has had boards with integrated 10GbE for about 5 years now.