gigabit replacement?

Unless you're looking to install InfiniBand or 10GbE and pay through the nose, gigabit is the best home option for general networking. You can bond gigabit links, but a single flow can't exceed the speed of one physical interface, and you can normally only bond up to 8 interfaces together (Cisco EtherChannel, for example).
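
To put rough numbers on it (a back-of-the-envelope Python sketch; the 8-link cap and the zero-overhead assumption are mine):

    links = 8                  # typical EtherChannel maximum
    per_link_gbps = 1.0
    aggregate_gbps = links * per_link_gbps   # best case, spread across many flows
    single_flow_gbps = per_link_gbps         # one flow still rides one link
    print(f"aggregate: {aggregate_gbps} Gbps, single flow: {single_flow_gbps} Gbps")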
 
Yes, I was talking about Ethernet.

Currently I have 3 Intel gigabit NICs teamed together on the server.
I'm trying to research other alternatives.
 
As mentioned previously, InfiniBand is an option. You can usually get the kit pretty cheap now, but I think file transfers over it hit the CPU quite hard.

I personally spent a lot of time looking at it and just bit the bullet and went with 10GbE. It costs a small fortune, but it was pretty much plug and play :)
 
Teaming NICs will NOT give you combined bandwidth from host to host. It simply offers two things...

1. Link failure resistance.
2. Aggregate bandwidth from one server to multiple hosts, but not host to host / server to a single host. A single host will still only get 1Gbps.

If you want more bandwidth you need to run 10GbE/InfiniBand/etc.
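
If it helps, here's a simplified Python sketch of why: Linux bonding's default layer2 xmit_hash_policy pins each src/dst MAC pair to one slave (the real policy also XORs in the EtherType; the MACs below are made up):

    # Simplified layer2 transmit hash: XOR the last byte of the source
    # and destination MAC, modulo the slave count -> one MAC pair, one slave.
    def slave_for_frame(src_mac: bytes, dst_mac: bytes, n_slaves: int) -> int:
        return (src_mac[5] ^ dst_mac[5]) % n_slaves

    server  = bytes.fromhex("001122334455")   # hypothetical MACs
    client1 = bytes.fromhex("66778899aabb")
    client2 = bytes.fromhex("66778899aabc")

    # Every frame between server and client1 lands on the same slave,
    # hence the 1Gbps ceiling for a single host-to-host transfer:
    print(slave_for_frame(server, client1, 3))   # -> 1
    # A different client can hash to a different slave, which is why the
    # server's aggregate bandwidth to multiple hosts can exceed 1Gbps:
    print(slave_for_frame(server, client2, 3))   # -> 2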
 
Yes, what I'm aiming for is for the server to supply more bandwidth to multiple hosts, so the total bandwidth out of the server is no longer limited to one gigabit.
 
May I ask why you need more than 1Gb of bandwidth? Crap, most people can't even get 1Gb of bandwidth without paying a crapload of money.
 
A home file server can use >1Gbit easily. Even a single disk can do that.
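
Quick back-of-the-envelope in Python (the ~150MB/s sustained figure is an assumption for a modern 7200rpm drive; actual numbers vary):

    disk_mb_per_s = 150.0                      # assumed sustained sequential read
    disk_gbit_per_s = disk_mb_per_s * 8 / 1000
    print(f"{disk_gbit_per_s:.2f} Gbit/s")     # -> 1.20 Gbit/s
    # Gigabit tops out around 112-118MB/s in practice after framing and
    # TCP overhead, so even a single spindle can make the wire the bottleneck.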

I have an Infiniband setup. It's really pretty affordable if you just get used gear off of eBay. I paid about $70 each for a couple of 20Gbit DDR IB cards (Mellanox ConnectX VPI) and $35 shipped for 10 3m cables. Tonight I finally found a deal on a DDR IB switch for $195 shipped. Goodbye direct connections, hello switch.

IB is kind of a pain in the ass though. We use it extensively at work so I didn't have to learn much to set up my network, but if you've never used it you might be in for some "fun" if you try an IB network.
 
I've played with QLogic 2Gb Fibre Channel cards in IP-over-FC mode back in the day.

I wonder if you can do IP over FC with the 4Gbit and 8Gbit cards.

Personally, I'm currently experimenting with COMSTAR to present ZFS volumes as FC LUNs to ESXi servers.

You can get 4Gbit FC cards for ~$20.

Just an FYI, you don't "need" an FC switch either if you're only connecting 2 devices; point-to-point will work just fine.
 
Intel X540s. iSCSI over 10GbE.

It's a tough pill to swallow. Unfortunately, VMware (if that's what you plan on using) drops support for old hardware incredibly quickly; you don't want to be stuck running ESX 3.5.
 
Quote: "Teaming NICs will NOT give you combined bandwidth from host to host. ... If you want more bandwidth you need to run 10GbE/InfiniBand/etc."

Supposedly that's no longer true with Server 2012's implementation of SMB (SMB 3.0 Multichannel)....
 
This is one of the only nice things about Windows 8, IMO: if you have multiple network adapters plugged in, it will use them together when transferring files. Of course, both ends of the transfer have to support it, so it's only useful between Windows 8 and Server 2012 boxes, but it's better than nothing. It's nice to be able to keep using existing gigabit infrastructure and just run multiple Ethernet cables from the switch to the computers that need more than 1Gbit.
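
For the curious, the feature is SMB Multichannel, and the core idea is striping one transfer across several TCP connections. You can't script SMB internals from Python, but here's a rough analogy using parallel HTTP range requests (the URL, chunk size, and stream count are all made up):

    # Analogy only: split one download into byte ranges and pull them
    # over parallel connections, the way Multichannel stripes SMB traffic.
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://example.com/bigfile.bin"    # hypothetical file
    CHUNK = 4 * 1024 * 1024                   # 4MB per range request

    def fetch_range(start: int, end: int):
        req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return start, resp.read()

    def parallel_download(total_size: int, streams: int = 4) -> bytes:
        ranges = [(off, min(off + CHUNK, total_size) - 1)
                  for off in range(0, total_size, CHUNK)]
        buf = bytearray(total_size)
        with ThreadPoolExecutor(max_workers=streams) as pool:
            for start, data in pool.map(lambda r: fetch_range(*r), ranges):
                buf[start:start + len(data)] = data
        return bytes(buf)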
 