10 GB Ethernet -- the advent of storage over ethernet

Discussion in 'Networking & Security' started by limxdul, Mar 16, 2009.

  1. limxdul

    limxdul Limp Gawd

    Messages:
    465
    Joined:
    Sep 16, 2004
  2. Keiichi

    Keiichi [H]ard|Gawd

    Messages:
    1,491
    Joined:
    Jun 10, 2004
    It's not new, and still way too expensive per port. It still won't completely displace FC-AL.
     
  3. Joe Average

    Joe Average Ad Blocker - Banned

    Messages:
    15,459
    Joined:
    Apr 6, 2008
    And it's 10Gb(ps), not 10GB... it would be nice to have 10 GigaBytes per second, but we ain't quite there just yet; still stuck with the measly 10 Gigabit stuff... but we'll get there, someday. ;)
     
  4. Blue Fox

    Blue Fox [H]ardForum Junkie

    Messages:
    11,693
    Joined:
    Jun 9, 2004
    InfiniBand, not ethernet, would be the logical replacement for fibre channel for storage, due to their similar topologies. Ethernet has the distance advantage, but that usually isn't an issue for storage. There is a reason InfiniBand is used in HPC instead of ethernet.
     
  5. just2cool

    just2cool Gawd

    Messages:
    524
    Joined:
    Sep 22, 2005
    Eh, 10Gb is ok, but it's old news. Also, it's stupid to run 10Gb on the access layer if your uplink is 10Gb due to over-subscription. For most of our core routers, we run 4x 10Gb port channels to try and deal with this. Ultimately, I'm much more interested in 100Gb, which is currently in testing.
     
  6. StorageJoe

    StorageJoe Limp Gawd

    Messages:
    460
    Joined:
    Jun 14, 2005
    InfiniBand is not a storage interface; it's better suited for clustering servers together. iSCSI is the way the market is going, and this was true before 10Gb started shipping.
     
  7. SYN ACK

    SYN ACK [H]ard|Gawd

    Messages:
    1,243
    Joined:
    Jul 11, 2004
  8. Blue Fox

    Blue Fox [H]ardForum Junkie

    Messages:
    11,693
    Joined:
    Jun 9, 2004
    It isn't exactly a storage interface, but neither is ethernet. If you want to argue that you can do iSCSI over ethernet, well, you can do ethernet over InfiniBand, so that means that you can do iSCSI over InfiniBand. InfiniBand has the advantage over ethernet in terms of topology as it is very similar to fibre channel, and that has been traditionally used for storage.
     
  9. Ur_Mom

    Ur_Mom I'm Not Serious

    Messages:
    19,836
    Joined:
    May 15, 2006
    I have a 10Mb hub that is pretty damn hardcore. Might be a mighty Trendnet 4 port, too. Dammit.

    I would love to have a 10Gb switch at home, myself. The network wouldn't be a bottleneck at all. For anything. It would be more than enough. 1Gb is great, but with multiple PC's doing HD streaming, it would bog down eventually. 10Gb would be amazing.
     
  10. Blue Fox

    Blue Fox [H]ardForum Junkie

    Messages:
    11,693
    Joined:
    Jun 9, 2004
    1gbit ethernet isn't going to be the bottleneck for multiple HD streams. Blu-ray movies for example top out at 54mbit. 10gbit ethernet cards that use cat6/a still run about $1k, so it won't be a consumer thing for a long time.
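    A rough sanity check of that claim (the ~900 Mbit/s of usable GigE throughput after protocol overhead is an assumption, not a measurement; 54 Mbit is the Blu-ray spec's maximum total transfer rate):

    ```python
    # How many worst-case Blu-ray streams fit on gigabit ethernet?
    # 900 Mbit/s usable is an assumed figure (GigE minus protocol overhead).
    USABLE_GIGE_MBIT = 900
    BLURAY_MAX_MBIT = 54  # BD spec maximum total transfer rate

    streams = USABLE_GIGE_MBIT // BLURAY_MAX_MBIT
    print(streams)  # 16 simultaneous worst-case streams
    ```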
     
  11. Ur_Mom

    Ur_Mom I'm Not Serious

    Messages:
    19,836
    Joined:
    May 15, 2006
    Nah, but it'd still be damn awesome, wouldn't it?!

    Blu-ray tops out at 54Mb, but with a few streams going and a couple backups and overhead and whatnot, it would be close! Fuck it, man. I want it, ok. I'm coming up with excuses. Maybe I can get a few review samples! :D
     
  12. Blue Fox

    Blue Fox [H]ardForum Junkie

    Messages:
    11,693
    Joined:
    Jun 9, 2004
    If you really want cheap 10gbit, pick up some InfiniBand cards off eBay. I see Mellanox stuff there all the time. Dual 10gbit cards are normally $50. You'll have to get some InfiniBand cables as well though, since they don't use the standard 8P8C plugs that copper ethernet does.
     
  13. LittleMe

    LittleMe 2[H]4U

    Messages:
    2,977
    Joined:
    Feb 20, 2001
    I've got 10GbE here at home but except for iperf testing on it, I've yet to push it past ~450MB/sec in actual usage. I don't move enough data all the time to make it worth the ~$6,000 investment but then again, I didn't pay anywhere near that so I don't care.
     
  14. SYN ACK

    SYN ACK [H]ard|Gawd

    Messages:
    1,243
    Joined:
    Jul 11, 2004
    glad you think it's "stupid"
    10gig at access is fine, especially since you shouldn't have a single 10gb uplink anyway, due to lack of redundancy. it all depends on the situation, and on current link utilization analysis of what you need in terms of bandwidth.

    if you're interested in 100Gb and you think 10Gb is so stupid at access, why aren't you looking into a cwdm/dwdm optical solution?
     
  15. just2cool

    just2cool Gawd

    Messages:
    524
    Joined:
    Sep 22, 2005
    Haha, sorry dude. I didn't mean to come across as a dick. But yeah, like I said, if you have a port channel, it's usually ok...

    However, it doesn't scale well at all. Let's say you have a rack with 3 blade chassis that each hold 8 blades. If all 8 blades need full 10Gb non-blocking connectivity, you're going to need not only 24 access ports, but an additional 24 uplink ports (maybe bind them in groups of 4 to the agg). And that's if you're lucky enough to have a top of rack switch with 48 10Gb ports. Obviously, this would get messy (or stupid ;)) instead of just having 3 100Gb uplinks...not to mention what would need to be done between the agg and core switches if there were more racks.

    wdm would certainly help at the edge, but I was referring to the access/agg layers specifically when you have a bunch of 10Gb servers that want to span across a datacenter. I just hastily posted something without explaining my perspective.

    But just like FastE-->Gig and now Gig-->10Gig... once 100Gig rolls out, we'll eventually want to push that to the servers and have all the fun start over again. Still, if a data center only has a fraction of its hosts at the speed of the uplinks, it is manageable and kick ass for certain applications.
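    The rack arithmetic in the example above works out as follows (same assumed numbers as the post: 3 chassis of 8 blades each, all wanting non-blocking 10Gig):

    ```python
    import math

    # Non-blocking uplink math for the example rack: 3 chassis x 8 blades at 10Gig.
    chassis = 3
    blades_per_chassis = 8
    port_gbit = 10

    access_ports = chassis * blades_per_chassis        # 24 access ports at top of rack
    edge_demand_gbit = access_ports * port_gbit        # 240 Gbit of edge demand

    # True non-blocking means uplink capacity must match edge capacity:
    uplinks_10g = edge_demand_gbit // port_gbit        # 24 more 10Gig uplink ports
    uplinks_100g = math.ceil(edge_demand_gbit / 100)   # or just 3 x 100Gig uplinks

    print(access_ports, uplinks_10g, uplinks_100g)  # 24 24 3
    ```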
     
  16. SYN ACK

    SYN ACK [H]ard|Gawd

    Messages:
    1,243
    Joined:
    Jul 11, 2004
    what applications are taking full advantage of 10gb at the server-access layer in a single rack (hence, connected to a single edge DC switch)?

    if you are spending that kind of money on such a project, the funds should have been allocated in the design phase to upgrade the network to accommodate at least 4x 10gig interfaces (channeled).

    if you need more server-access/edge switches due to 10gig port density, you just have to increase the number of distribution layer chassis. all part of the budget.

    can't wait for nortel's next chassis to come out, but i'm under nda. (you want to see 10gig density? oye!)



    good luck waiting for 100gig.
    10gig is still fairly expensive but coming down... if you think you'll be able to afford 40/100gig right when it's out, good luck.

    and it is generally the switching fabric/backplane limitation in the chassis for all of these high speed interfaces. you are only looking at a portion of the overall picture.



    most importantly, what applications/services are you running that would require a full non-blocking 10gig access (teamed) at this point?
     
  17. just2cool

    just2cool Gawd

    Messages:
    524
    Joined:
    Sep 22, 2005
    We'll get a few 40/100 test units in right when it comes out but won't do a widespread prod deployment until things get more reasonable. Depending on how close 40 is to 100 in a release timeframe (as well as price), we may skip 40 altogether instead of doing 2 separate upgrades.

    I'm well aware of the chassis constraints. Right now, the cisco nexus 7010 has a 1.7Tbps fabric (approx. 240Gig/slot excluding sups) which should be better utilized once newer line cards come out (the N7K-M132XP-12 is kind of crappy haha). Someday, they want to release fabric modules supporting 500Gig/slot. That's not too shabby of a fabric for 10gig access.

    I'm sorry that I can't really go into details, but believe me, we have a real big need for 10gig non-blocking. Also, you may have missed it, but I mentioned previously that our cores have 4x 10Gig port channels. It's enough for now, but won't last long once more 10gig access is deployed.
     
  18. limxdul

    limxdul Limp Gawd

    Messages:
    465
    Joined:
    Sep 16, 2004
    j2cool, can you elaborate on why the N7K-M132XP-12 is crappy? We are looking to move toward virtualizing the network along with the SAN in one push, as our servers have already been virtualized; if you have any experience with the N7K-M132XP-12, please let me know :)
     
  19. just2cool

    just2cool Gawd

    Messages:
    524
    Joined:
    Sep 22, 2005
    Yeah, we're currently testing some 7010s and have 4 of these linecards. The main reason they're crappy is that the card isn't really 32-port 10Gig. The link speed will negotiate to that on each port, but you only get 80Gig out of the card, not the 320Gig you would expect.

    Essentially, every four ports of the line card look like this internally (where | is a port)

    |
    [MUX]
    | | | |

    The ports are configurable as "shared" or "dedicated". In dedicated mode (marked in yellow on the card, 8 ports total), you get the full 10Gig bandwidth, but the other 3 ports in the group are disabled. In shared mode (8 groups of 4 ports), all 4 ports in a group are muxed together, for a theoretical 2.5Gig cap per port. The issue arises when all 4 ports combined try to send more than 10Gig... you're going to have a lot of drops.

    Cisco told us that they're going to release a new card at some point that doesn't have this 80Gig limitation. If none of your architecture frequently bursts, it should work fine in shared mode. Otherwise, you'll only get 8 ports out of the card, which is no better than what you would get with a 6500 with a 6708.
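    The shared/dedicated trade-off described above works out like this (figures taken from the post itself: 32 front-panel ports, 80Gig to the fabric, mux groups of 4):

    ```python
    # N7K-M132XP-12 as described above: 32 x 10Gig front-panel ports
    # behind only 80 Gbit of fabric bandwidth, muxed in groups of 4.
    front_ports = 32
    fabric_gbit = 80
    group_size = 4

    groups = front_ports // group_size        # 8 mux groups
    per_group_gbit = fabric_gbit // groups    # 10 Gbit shared by each group

    # Shared mode: all 4 ports in a group active, splitting the group's 10 Gbit.
    shared_per_port_gbit = per_group_gbit / group_size   # 2.5 Gbit each when all busy

    # Dedicated mode: 1 port per group gets the full 10 Gbit, the other 3 disabled.
    dedicated_usable_ports = groups                      # only 8 usable ports

    print(shared_per_port_gbit, dedicated_usable_ports)  # 2.5 8
    ```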
     
  20. stormy1

    stormy1 [H]ard|Gawd

    Messages:
    1,047
    Joined:
    Apr 3, 2008
    I am looking forward to 10Gb for the backhaul of my star topologies; bonded 1Gb is not enough in some cases.
    1Gb is plenty per station, and I just don't see needing faster to the desktop, but the backhaul does get overloaded.
     
  21. SYN ACK

    SYN ACK [H]ard|Gawd

    Messages:
    1,243
    Joined:
    Jul 11, 2004
    not to mention freeing up physical infrastructure with the higher density links

    just2cool,
    i thought the 6500 w/ 6708 only had 40gig backplane (2x20)? is the 6708 full wire-speed or is it oversubscribed as well?
    thx for input
     
  22. WesM63

    WesM63 2[H]4U

    Messages:
    3,266
    Joined:
    Aug 29, 2004
    6708 is oversubscribed 2:1 and the 6716 is 4:1. The 6704 is the only one that is not oversubscribed. (They all have the same 40Gb connection to the backplane)
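    Those ratios fall straight out of total front-panel bandwidth over the shared 40Gb backplane connection (port counts per card are from the post above):

    ```python
    # Oversubscription = total front-panel bandwidth / backplane connection.
    BACKPLANE_GBIT = 40  # same for all three cards, per the post above
    PORT_GBIT = 10

    cards = {"6704": 4, "6708": 8, "6716": 16}  # 10Gig ports per line card

    for name, ports in cards.items():
        ratio = ports * PORT_GBIT / BACKPLANE_GBIT
        print(f"{name}: {ratio:g}:1")
    # 6704: 1:1
    # 6708: 2:1
    # 6716: 4:1
    ```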
     
  23. just2cool

    just2cool Gawd

    Messages:
    524
    Joined:
    Sep 22, 2005
    Oh, yeah, wes is right, it's 2:1. It has the same "dedicated mode" feature as the Nexus to get line rate out of some ports.

     
  24. limxdul

    limxdul Limp Gawd

    Messages:
    465
    Joined:
    Sep 16, 2004
    very informative, tyvm.