10 GB Ethernet -- the advent of storage over ethernet

limxdul

Fibre Channel is being outpaced in terms of speed by 10Gb Ethernet; NetApp is already offering Ethernet storage over 10Gb in its virtualized solution with Cisco and VMware.
http://issuu.com/limxdul/docs/final_dc_thumb_drive_version_morning_session_only_

What do you SAN engineers / network engineers / IT guys think?

edit: sorry gigabit not gigabyte
 
It's not new, and still way too expensive per port. It still won't completely displace FC-AL.
 
And it's 10Gb(ps), not 10GB... it would be nice to have 10 gigabytes per second, but we ain't quite there just yet; still stuck with the measly 10 gigabit stuff... but we'll get there, someday. ;)
 
InfiniBand, rather than Ethernet, would be the logical replacement for Fibre Channel for storage, due to its similar topology. Ethernet has the distance advantage, but that usually isn't an issue for storage. There is a reason InfiniBand is used in HPC instead of Ethernet.
 
Eh, 10Gb is ok, but it's old news. Also, it's stupid to run 10Gb on the access layer if your uplink is 10Gb due to over-subscription. For most of our core routers, we run 4x 10Gb port channels to try and deal with this. Ultimately, I'm much more interested in 100Gb, which is currently in testing.
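
For reference, here's a minimal sketch of that kind of 4x 10Gig bundle (interface numbers and the channel-group ID are made up, and the exact syntax varies a bit between IOS and NX-OS):

! bundle four 10Gig uplinks into one logical LACP port channel (IOS-style)
interface range TenGigabitEthernet1/1 - 4
 description uplink members to the aggregation layer
 channel-group 10 mode active
!
interface Port-channel10
 description 4x 10Gig uplink bundle

"mode active" is what makes it LACP rather than a static bundle, so a bad member link gets kept out of the channel instead of silently blackholing traffic.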
 
InfiniBand, rather than Ethernet, would be the logical replacement for Fibre Channel for storage, due to its similar topology. Ethernet has the distance advantage, but that usually isn't an issue for storage. There is a reason InfiniBand is used in HPC instead of Ethernet.

Infiniband is not a storage interface, it's more ideal for clustering servers together. iSCSI is the way the market is going and this was true before 10Gb started shipping.
 
Infiniband is not a storage interface, it's more ideal for clustering servers together. iSCSI is the way the market is going and this was true before 10Gb started shipping.
It isn't exactly a storage interface, but neither is ethernet. If you want to argue that you can do iSCSI over ethernet, well, you can do ethernet over InfiniBand, so that means that you can do iSCSI over InfiniBand. InfiniBand has the advantage over ethernet in terms of topology as it is very similar to fibre channel, and that has been traditionally used for storage.
 
I have a 10Mb hub that is pretty damn hardcore. Might be a mighty Trendnet 4 port, too. Dammit.

I would love to have a 10Gb switch at home, myself. The network wouldn't be a bottleneck at all. For anything. It would be more than enough. 1Gb is great, but with multiple PC's doing HD streaming, it would bog down eventually. 10Gb would be amazing.
 
1gbit ethernet isn't going to be the bottleneck for multiple HD streams. Blu-ray movies for example top out at 54mbit. 10gbit ethernet cards that use cat6/a still run about $1k, so it won't be a consumer thing for a long time.
 
1gbit ethernet isn't going to be the bottleneck for multiple HD streams. Blu-ray movies for example top out at 54mbit. 10gbit ethernet cards that use cat6/a still run about $1k, so it won't be a consumer thing for a long time.

Nah, but it'd still be damn awesome, wouldn't it?!

Blu-ray tops out at 54Mb, but with a few streams going and a couple backups and overhead and whatnot, it would be close! Fuck it, man. I want it, ok. I'm coming up with excuses. Maybe I can get a few review samples! :D
 
Nah, but it'd still be damn awesome, wouldn't it?!

Blu-ray tops out at 54Mb, but with a few streams going and a couple backups and overhead and whatnot, it would be close! Fuck it, man. I want it, ok. I'm coming up with excuses. Maybe I can get a few review samples! :D
If you really want cheap 10gbit, pick up some InfiniBand cards off eBay. I see Mellanox stuff there all the time. Dual-port 10gbit cards are normally $50. You'll have to get some InfiniBand cables as well though, since they don't use the standard 8P8C plugs that copper Ethernet does.
 
I've got 10GbE here at home but except for iperf testing on it, I've yet to push it past ~450MB/sec in actual usage. I don't move enough data all the time to make it worth the ~$6,000 investment but then again, I didn't pay anywhere near that so I don't care.
 
Eh, 10Gb is ok, but it's old news. Also, it's stupid to run 10Gb on the access layer if your uplink is 10Gb due to over-subscription. For most of our core routers, we run 4x 10Gb port channels to try and deal with this. Ultimately, I'm much more interested in 100Gb, which is currently in testing.

glad to know you think it's "stupid"
10gig at access is fine, especially since you shouldn't have a single 10gig uplink anyway due to lack of redundancy. it all depends on the situation and on analysis of your current link utilization for what you actually need in terms of uplink bandwidth.

if you're interested in 100Gb and you think 10Gb is so stupid at access, why aren't you looking into a CWDM/DWDM optical solution?
 
Haha, sorry dude. I didn't mean to come across as a dick. But yeah, like I said, if you have a port channel, it's usually ok...

However, it doesn't scale well at all. Let's say you have a rack with 3 blade chassis that each hold 8 blades. If all 8 blades need full 10Gb non-blocking connectivity, you're going to need not only 24 access ports, but an additional 24 uplink ports (maybe bind them in groups of 4 to the agg). And that's if you're lucky enough to have a top of rack switch with 48 10Gb ports. Obviously, this would get messy (or stupid ;)) instead of just having 3 100Gb uplinks...not to mention what would need to be done between the agg and core switches if there were more racks.

wdm would certainly help at the edge, but I was referring to the access/agg layers specifically when you have a bunch of 10Gb servers that want to span across a datacenter. I just hastily posted something without explaining my perspective.

But just like FastE-->Gig and now Gig-->10Gig... once 100Gig rolls out, we'll eventually want to push that to the servers and have all the fun start over again. Still, if a data center only has a fraction of its hosts at the speed of the uplinks, it is manageable and kick ass for certain applications.
 
what applications are taking full advantage of 10gb at the server-access layer in a single rack (hence, connected to a single edge DC switch)?

if you are spending that kind of money on such a project, the funds should have been allocated in the design phase to upgrade the network to accommodate at least 4x 10gig interfaces (channeled).

if you need more server-access/edge switches due to 10gig port density, you just have to add more distribution-layer chassis. all part of the budget.

can't wait to see if nortel's next chassis comes out, but i'm under NDA. (you want to see 10gig density? oye!)



good luck waiting for 100gig.
10gig is still fairly expensive but coming down... if you think you'll be able to afford 40/100gig right when it's out, good luck.

and it is generally the switching fabric/backplane limitation in the chassis for all of these high-speed interfaces. you are only looking at a portion of the overall picture.



most importantly, what applications/services are you running that would require a full non-blocking 10gig access (teamed) at this point?
 
We'll get a few 40/100 test units in right when it comes out but won't do a widespread prod deployment until things get more reasonable. Depending on how close 40 is to 100 in a release timeframe (as well as price), we may skip 40 altogether instead of doing 2 separate upgrades.

I'm well aware of the chassis constraints. Right now, the cisco nexus 7010 has a 1.7Tbps fabric (approx. 240Gig/slot excluding sups) which should be better utilized once newer line cards come out (the N7K-M132XP-12 is kind of crappy haha). Someday, they want to release fabric modules supporting 500Gig/slot. That's not too shabby of a fabric for 10gig access.
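
Once you're pushing that much through a chassis, it's worth sanity-checking the per-slot numbers from the CLI now and then; something along these lines (command name from memory, so double-check it against your NX-OS release):

switch# show hardware fabric-utilization
! should report per-module fabric utilization, which tells you whether a
! slot is actually getting anywhere near that ~240Gig ceiling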

I'm sorry that I can't really go into details, but believe me, we have a real big need for 10gig non-blocking. Also, you may have missed it, but I mentioned previously that our cores have 4x 10Gig port channels. It's enough for now, but won't last long once more 10gig access is deployed.
 
j2cool, can you elaborate on why the N7K-M132XP-12 is crappy? We are looking to move toward virtualizing the network along with the SAN in one push, as our servers have already been virtualized; if you have any experience with the N7K-M132XP-12, please let me know :)
 
Yeah, we're currently testing some 7010s and have 4 of these linecards. The main reason they're crappy is that it's not really a 32-port 10Gig card. The link speed will negotiate to 10Gig on each port, but you only get 80Gig out of the card, not the 320Gig you would expect.

Essentially, every four ports of the line card look like this internally (the four | on the bottom are front-panel ports; the single | on top is the one 10Gig link toward the fabric):

|
[MUX]
| | | |

The ports are configurable as "shared" or "dedicated". In dedicated mode (the yellow-marked ports on the card, 8 total), you get the full 10Gig of bandwidth, but the other 3 ports in the group are disabled. In shared mode (8 groups of 4 ports), all 4 ports in a group are muxed onto the same 10Gig, which works out to a theoretical 2.5Gig per port. The issue arises when the 4 ports together try to send more than 10Gig... you're going to see a lot of drops.

Cisco told us that they're going to release a new card at some point that doesn't have this 80Gig limitation. If none of your architecture frequently bursts, it should work fine in shared mode. Otherwise, you'll only get 8 ports out of the card, which is no better than what you would get with a 6500 with a 6708.
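
For anyone who ends up configuring these, the per-port-group setting looks roughly like this (port numbers are just examples and I'm going from memory, so check the NX-OS config guide for your release):

! dedicated mode: set on the first port of a 4-port group; that port gets
! the full 10Gig and the other three ports in the group get disabled
interface Ethernet2/1
  rate-mode dedicated
!
! shared mode (the default): all four ports in the group share one 10Gig toward the fabric
interface Ethernet2/5
  rate-mode shared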
 
I am looking forward to 10Gb for the backhaul of my stars; bonded 1Gb is not enough in some cases.
1Gb is plenty per station, and I just don't see needing anything faster to the desktop, but the backhaul does get overloaded.
 
not to mention freeing up physical infrastructure with the higher density links

just2cool,
i thought the 6500 w/ 6708 only had 40gig backplane (2x20)? is the 6708 full wire-speed or is it oversubscribed as well?
thx for input
 
not to mention freeing up physical infrastructure with the higher density links

just2cool,
i thought the 6500 w/ 6708 only had 40gig backplane (2x20)? is the 6708 full wire-speed or is it oversubscribed as well?
thx for input

6708 is oversubscribed 2:1 and the 6716 is 4:1. The 6704 is the only one that is not oversubscribed. (They all have the same 40Gb connection to the backplane)
 
Oh.. yeah wes is right, it's 2:1. It has the same "dedicated mode" feature as the Nexus to get line rate out of some ports:

Q. Is the 8-port 10 Gigabit Ethernet module not oversubscribed if I only use half the ports?

A. Yes, you can use only ports 1, 2, 5, and 6 to provide 40 Gbps local switching. To make it easier for you to configure your network, we have a new software command for you to go into performance mode. The software command
router(config)#[no] hw-module slot x oversubscription

will administratively disable the oversubscribed ports (ports 3, 4, 7, and 8) and put them in "shutdown" state. In this mode, the user cannot do "no shut" on the disabled ports. When the user does "show interface" on the disabled ports, the output will show "disabled for performance" to distinguish between a normal port shutdown and a shutdown for performance.
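
So on a 6708 in, say, slot 3 (slot number made up, and assuming the "no" form is the one that turns oversubscription off), performance mode would look something like:

router(config)# no hw-module slot 3 oversubscription
! ports 3/3, 3/4, 3/7 and 3/8 drop to "disabled for performance";
! ports 3/1, 3/2, 3/5 and 3/6 keep running at line rate, 40Gig total into the fabric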
 
Yeah, we're currently testing some 7010s and have 4 of these linecards. The main reason they're crappy is that it's not really a 32-port 10Gig card. The link speed will negotiate to 10Gig on each port, but you only get 80Gig out of the card, not the 320Gig you would expect.

Essentially, every four ports of the line card look like this internally (the four | on the bottom are front-panel ports; the single | on top is the one 10Gig link toward the fabric):

|
[MUX]
| | | |

The ports are configurable as "shared" or "dedicated". In dedicated mode (the yellow-marked ports on the card, 8 total), you get the full 10Gig of bandwidth, but the other 3 ports in the group are disabled. In shared mode (8 groups of 4 ports), all 4 ports in a group are muxed onto the same 10Gig, which works out to a theoretical 2.5Gig per port. The issue arises when the 4 ports together try to send more than 10Gig... you're going to see a lot of drops.

Cisco told us that they're going to release a new card at some point that doesn't have this 80Gig limitation. If none of your architecture frequently bursts, it should work fine in shared mode. Otherwise, you'll only get 8 ports out of the card, which is no better than what you would get with a 6500 with a 6708.

very informative, tyvm.
 