Just came across (5) PCIe 4Gbps 4/chan Fiber Cards - anyone know of uses or interest?

I have 5 of these cards pulled from Dell PowerEdge 1950 servers and they test perfectly. There are also (4) 1Gbps Cat5/6 connections in each server for a total of 8Gbps available! They must have been doing some pretty heavy networking!

While I have little to no use for fiber cards at this point, I guess they could be useful down the line, or I could try to sell them. I see them listed for $625 on FleaBay as a Buy It Now, and the average price is $375-$500.

http://www.dell.com/us/business/p/qlogic-qle2462/pd

Does anyone have any idea why these would be so much better than regular Gigabit adapters, other than the distance restriction of Cat5 wiring?
 
Those are Fibre Channel HBAs, not regular NICs. You can't use them for general networking, just storage networking. Are all 5 dual-port models? Depending on price, I might be interested in 3 of them.
 
I'll take a few of those...

Like the poster above said, you can't use those for TCP/IP.
 
That makes sense then. I was wondering why they needed so much network connectivity!

Yes, all 5 are dual-port (each card has 4 fiber connectors, but a transmit/receive pair makes up one port, so it's 2 ports per card).

Images of the product (FibreCard.jpg, FibreCardfront.png): this is what the interface looks like on all of the machines.


I am interested in selling them but I need to figure out what I could do with them first.

If one of these cards was installed in a large multi-terabyte file server, would I be able to connect my smaller servers (via the fibre connections) to the large tera-server, using the fibre cards for the connection and file transfers (thus taking the load off the regular network)? Is this plausible, and is this what they are intended for?

To the two parties interested: I will keep you in mind and will let you know as soon as I decide whether I am selling or keeping them.

What is this kind of technology called? I'm guessing it isn't NAS, SAN or ??FibreSCSI??? Do these daisy-chain like 1394/FireWire, and is that why they have two sets of ports? Any info to point me toward the correct topic to research would be great! Thanks!
 
I'm also looking for support forums where the members deal with enterprise-level solutions, or even small-to-large businesses. Most of the forums I have come across are dedicated to consumers, small home offices or very small businesses. I don't know if there are forums which cater to the enterprise/corporate market; maybe I need to look into manufacturers' channels for information.

Any suggestions?
 
The technology is Fibre Channel. It's used for Storage Area Networks (SANs). Your basic idea is right: it creates a separate network fabric for storage. The target (the file server in your case) presents LUNs (Logical Unit Numbers, basically carved-out chunks of storage) which initiators (the 'smaller servers' in your example) connect to. A LUN appears as a local disk to the server, and unless the filesystem on the LUN supports clustering, only a single server can access it at a time. It's not as simple as an NFS or CIFS share.
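
If it helps to see that concretely, here's a rough Python sketch (my own illustration, assuming a Linux initiator with the standard sysfs layout, nothing specific to these cards) showing that an FC LUN just turns up as another /dev/sdX block device:

#!/usr/bin/env python3
# Rough sketch: list block devices on a Linux initiator and flag the ones that
# sit behind a Fibre Channel remote port. The point is that an FC LUN shows up
# as a plain /dev/sdX disk, just like a local drive. Paths assume Linux sysfs;
# purely illustrative.
import os

SYS_BLOCK = "/sys/block"

for dev in sorted(os.listdir(SYS_BLOCK)):
    dev_link = os.path.join(SYS_BLOCK, dev, "device")
    if not os.path.exists(dev_link):
        continue  # ram disks, loop devices, etc. have no backing SCSI device
    real_path = os.path.realpath(dev_link)
    # FC-attached LUNs have an "rport-..." element in their sysfs device path.
    kind = "FC LUN" if "/rport-" in real_path else "local/other"
    print(f"/dev/{dev:<8} {kind}")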

The dual ports are to support multipathing, not daisy-chaining. The basic concept is that you connect one port to fibre switch A and one to fibre switch B, so that if either switch fails your server is still connected to the SAN. Depending on the OS, driver, SAN, and additional software, you can also use both paths at the same time for double the throughput. In a smaller environment you can skip the fibre switches and do direct connections; in a small home environment this is typically done with the storage server having either two single-port HBAs or one dual-port HBA, with each host connecting to one port.
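
If you want to sanity-check both paths from a Linux initiator (again, assuming Linux, which the thread hasn't settled on), the kernel exposes one fc_host entry per HBA port; a rough sketch along these lines would print their state and speed:

#!/usr/bin/env python3
# Rough sketch: print the state and negotiated speed of each FC HBA port on a
# Linux initiator via the kernel's fc_host sysfs class. With a dual-port HBA
# cabled to two switches (or two storage ports) you'd expect two entries, both
# "Online". Assumes Linux with an FC driver loaded; purely illustrative.
import os

FC_HOST = "/sys/class/fc_host"

def read_attr(host, attr):
    """Read one sysfs attribute, returning 'n/a' if it is missing."""
    try:
        with open(os.path.join(FC_HOST, host, attr)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

if not os.path.isdir(FC_HOST):
    raise SystemExit("No fc_host class found; no FC HBA driver is loaded.")

for host in sorted(os.listdir(FC_HOST)):
    print(host,
          "wwpn:", read_attr(host, "port_name"),
          "state:", read_attr(host, "port_state"),
          "speed:", read_attr(host, "speed"))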

What OS is your file server? It needs to be able to run the HBA in target mode. Windows Server cannot do this without 3rd-party software like DataCore SANmelody. Some *nix distros have this capability (OpenFiler and OpenIndiana come to mind).
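
If you do go the Linux route, one hedged way to see whether your kernel even ships the QLogic target-mode pieces is to look for the tcm_qla2xxx module (the LIO fabric module that drives qla2xxx HBAs as targets on newer kernels); treat the module name and its availability as assumptions to verify against your distro:

#!/usr/bin/env python3
# Rough sketch: check whether this Linux kernel ships tcm_qla2xxx, the LIO
# fabric module that puts QLogic HBAs (like the QLE2462) into target mode.
# The module name and its availability are assumptions to verify for your
# distro/kernel; older kernels predate it entirely.
import subprocess

def module_available(name):
    """Return True if `modinfo` can find the module for the running kernel."""
    result = subprocess.run(["modinfo", name],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

for mod in ("qla2xxx", "tcm_qla2xxx"):
    print(mod, "is", "available" if module_available(mod) else "not found")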
 

Thank you very much, my friend!!! I don't have any OSes loaded at this point, but one of the PowerEdge 1950s was running VMware ESX 3. I couldn't get past the password, so I reformatted and installed Ubuntu 10.04.4 LTS 64-bit Server. It loaded like a dream on this machine! I like SAS drives; even though they are a little loud and hot, the speed is a nice positive. I am fairly well versed with Ubuntu and can administer all aspects of LAMP, plus many other add-ons like Samba, Squid, Snort, iptables, etc. I'm a little torn as to whether I should start learning Red Hat (start with CentOS to do this???) or what, for enterprise-level servers.

Thanks again friend!
 
What is wrong with 8 1Gbps ports?

I did that for my hypervisor. I almost went for 12 ports, but didn't want to try to fit 13 network wires into the management arm; 9 has it pretty full as it is. Plus, who knows if I would ever need the other PCIe slot for anything. I was trying to do about a 1:1 ratio of NICs to VMs, but it still came out OK.
 
No one said there was anything wrong with 8 1Gb ports...but 1:1 VM:NIC? Good grief. Maybe I need to just go out on my own doing contract vSphere design.
 
Hey, to the guys who were interested: please PM me, because I can't post in the for-sale section, but I need to know fairly soon. I will show you my memberships on a number of forums to verify myself.
 