So I bought some QLE246x FC cards. Looking for ideas on how to utilize them.

s0lid

Yeaaah, got a true diamond of a deal here.
2x QLE2460 at 15€ each and 1x QLE2462 for 25€, plus shipping.

Pic:


So yeah, I'm a little clueless about how FC works at the software level. Can you run NFS/CIFS/iSCSI over an FC link?
I have a possible use for at least two of those cards for communication between the ESXi servers and the fileserver.

Specs for the server HW are in my sig, though the fileserver is getting upgraded to LGA1366 goods with an EX58-UD3R and a yet-to-be-decided Xeon :D
 
Any more cards for that price? FC is used instead of iSCSI, not in combination with it, and you should be able to do point-to-point from the ESX servers to the file server if you can get the drivers/firmware to agree.
 
Nice, so QLogic has been fantastic with drivers, it seems ^^
Drivers for ESXi are on their site and OpenIndiana's wiki claims that QLE246X cards will work in OI_151 :D
 
"NFS/Cifs/iSCSI over a FC link" - These are network protocols. You should not be concerned with these terms.

I'm assuming those are 2Gb cards? Use them with Solaris and ZFS. Multipath two ports and you get 4Gb.
 
4Gbps cards, so ~800MB/s of I/O bandwidth with both ports of the dual-port card (4Gb FC works out to roughly 400MB/s per port after 8b/10b encoding) :p

Yeah, going to use OpenIndiana and ZFS for the LUN server. COMSTAR has support for FC target mode and natively supports QLE246x HBAs =)
 
I've been using QLE246X cards with ESX in my home lab since OpenSolaris 2008.11. They run very well.
 
I just found an unused QLE2460 PCI-E at work, I may ask if they wanna "loan" it to me :) If only I could get another one cheap...
 
Stupid question: I never thought about just direct-connecting two FC HBAs, point to point rather than through an FC switch. Am I reading that that's possible?
 
Put the dual-port card in your ZFS server, one single-port card per ESX host, profit. I'm jealous.
 
"NFS/Cifs/iSCSI over a FC link" - These are network protocols. You should not be concerned with these terms.

Sooo.... is Fibre Channel at the same level as Ethernet, PCIe, InfiniBand and SAS? I assume this is the case, since there is work being done on IP over Fibre Channel. Presumably that would allow "NFS/CIFS/iSCSI over an FC link".
 
I say send them to me....

I will test them and write a HOWTO for you....

Promise I will return them.... **cough** **cough**:D

.
 
Cheeky bugger, stanza!

Time to register the business name, Stanza's FC Import/Export.... I like the sound of that.
 
AFAIK support for that generation of HBAs was dropped after ESX 3.0.1.

Yeah, and QLogic even provides third-party drivers for ESXi for the QLE cards :)
Little update: I'll get the fiber cables next week along with a couple of new HDDs for the fileserver :D
 
s0lid vs FC cards.
3...2...1... FIGHT!
This is going to be a half-spammish trial and error thread :p
QLE2462 installed in the NFS server with 8GB DDR3, 4x 1.5TB RAIDZ1 and an Athlon II X2 245.

First impressions: OK, the drivers recognise the cards just fine! Let's plug in the fiber and boot up the ESXi server. Erm, what is this, why does the NFS server keep rebooting right after it loads the OS?

Took a closer look at the fcinfo output: OK, the cards are in initiator mode. F-A-I-L.
Time to check Stanza's mini-howto. OK, I have to make some changes to /kernel/drv/emlxs.conf.
Done, it boots, but the ports are still in initiator mode o_O

Googling, more googling, and maybe a result:
adding "forceload: drv/fct" to /etc/system resolved the problem; now both HBAs are in target mode.
http://article.gmane.org/gmane.os.solaris.opensolaris.storage.general/7361
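For anyone else hitting this, the whole change is just one line plus a reboot; roughly this (sketch, assuming a stock OI_151 install):

# pull the COMSTAR FC target module in at boot
echo "forceload: drv/fct" >> /etc/system
reboot

# afterwards the ports should report "Port Mode: Target"
fcinfo hba-port | grep 'Port Mode'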

Currently running bonnie. I'll post the results or just edit this post if that worked :)
 
Yeah, maybe on reads; writes are at 81MB/s in bonnie :/
OK, finally got the cards into target mode thanks to this guide:
http://blogs.oracle.com/vreality/entry/storage_virtualization_with_comstar
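The gist of that guide, if anyone wants the short version (hedged sketch; the pciex1077,2432 alias is what the ISP2432-based QLE246x usually shows up as, so double-check yours in /etc/driver_aliases before unbinding qlc):

# unbind the ports from the initiator driver (qlc) and hand them to the COMSTAR target driver (qlt)
update_drv -d -i 'pciex1077,2432' qlc
update_drv -a -i 'pciex1077,2432' qlt

# make sure the STMF framework is enabled, then reboot and re-check the port mode
svcadm enable stmf
reboot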

List of FC/FCOE target ports:
HBA Port WWN: 210000e08b85ecab
Port Mode: Target
Port ID: 0
OS Device Name: Not Applicable
Manufacturer: QLogic Corp.
Model: QLE2462
Firmware Version: 5.2.1
FCode/BIOS Version: N/A
Serial Number: not available
Driver Name: COMSTAR QLT
Driver Version: 20100505-1.05
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 200000e08b85ecab

HBA Port WWN: 210100e08ba5ecab
Port Mode: Target
Port ID: 0
OS Device Name: Not Applicable
Manufacturer: QLogic Corp.
Model: QLE2462
Firmware Version: 5.2.1
FCode/BIOS Version: N/A
Serial Number: not available
Driver Name: COMSTAR QLT
Driver Version: 20100505-1.05
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 200100e08ba5ecab
 
Awww hell yeah!
4Gbps link up'n running!

HBA Port WWN: 210100e08ba5ecab
Port Mode: Target
Port ID: ef
OS Device Name: Not Applicable
Manufacturer: QLogic Corp.
Model: QLE2462
Firmware Version: 5.2.1
FCode/BIOS Version: N/A
Serial Number: not available
Driver Name: COMSTAR QLT
Driver Version: 20100505-1.05
Type: point-to-point
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 200100e08ba5ecab

And the 500GB LUN works in ESXi :D
Target | Operational Status | Provider Name | Alias | Protocol | Sessions | Initiator | Initiator Alias | Logged in since

wwn.210100E08BA5ECAB | Online | qlt | qlt1,0 | Fibre Channel | 1 | wwn.210000E08B857CBD | - | Wed Sep 21 20:52:27 2011
wwn.210000E08B85ECAB | Online | qlt | qlt0,0 | Fibre Channel | 0 |
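For the record, the LUN itself was carved out along the usual ZFS/COMSTAR lines, roughly like this (sketch; the pool/zvol names here are made up and the GUID comes from the sbdadm output):

# 500GB zvol to back the LUN
zfs create -V 500G tank/esxi-lun0

# register it with COMSTAR and export it to everyone (no host/target groups)
sbdadm create-lu /dev/zvol/rdsk/tank/esxi-lun0
stmfadm add-view <GUID printed by sbdadm>

# bring the FC target port online if it isn't already
stmfadm online-target wwn.210100E08BA5ECAB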

fclun.png


Benchies in a second.
 
Are you doing a single or multilink from ESXi to Solaris?

I had trouble getting multilink / multipath working from ESXi to Solaris, i.e. using point-to-point mode...

didn't workies.

ESXi only wants to play Multipath THRU a switch

When I tried 2x Emulex in the Solaris box and 2x Emulex in the ESXi box, all connected to the same switch (just to test), boom, multipath and all its settings, e.g. round robin and so on, worked fine.

Gets pretty funky... but it is fun to play with and sure seems snappy @ 2Gb.... should be quite nice with 4Gb cards set up like yours.

ESXi server ends up with 4 paths to the Solaris box

Paths end up as
Port 1 > Switch > Port 1 on Solaris
Port 1 > Switch > Port 2 on Solaris
Port 2 > Switch > Port 1 on Solaris
Port 2 > Switch > Port 2 on Solaris

Round Robin Multipathing sends a packet down each path then the next and next etc.

Have fun fiddling:D
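If you want to flip a LUN to round robin from the command line instead of the vSphere client, on ESXi 5.x it's roughly this (the naa. ID below is a placeholder, pull the real one from the device list):

# find the device ID of the FC LUN and its current path selection policy
esxcli storage nmp device list

# switch that device to round robin
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

(IIRC the older 4.x builds use the esxcli nmp device setpolicy form instead.)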

.
 
I'm only running a single 4Gbps link between the file and ESXi servers :)
I will use the second port for a connection between my main rig and the file server after I get more space in it :p
 
And a CDM benchie inside a 2008 R2 VM that was installed on that FC LUN:
fccdm.png


You can just guess how much that test has been boosted by ZFS's RAM caching. I'd guess: a lot.
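If you want to see how much of that is the ARC talking, the kstat counters on the Solaris/OI box give a rough idea (sketch):

# current ARC size plus hit/miss counters; a near-100% hit rate means the benchmark mostly ran from RAM
kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses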

When I added the iSCSI service to the ZFS server, ESXi started to use the iSCSI protocol for the FC LUNs. After looking around I found this settings page:
Storage -> Datastore Properties -> Manage Paths:
fcpath.png


The iSCSI path was somehow preferred over the FC one.
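For reference, the same thing can be checked from the ESXi shell on 5.x; something along these lines (device ID is a placeholder again):

# list every path ESXi has to the device - both the FC and the iSCSI ones show up here, along with which is active
esxcli storage core path list --device naa.xxxxxxxxxxxxxxxx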
 
Hi....... what are you using for your FC host? I'm planning on using Openfiler as my host. I picked up a couple of dual-port cards. Figured I'd use FC to connect my ESXi to Openfiler.
 
If you read this thread at all, the answer should be clear... From my sig:
Athlon II X2 245 + ASRock 790GX Pro (necromanced) + 8GB DDR3 + 400W generic PSU + QLE2462 4Gbps FC + 80GB OS drive + 4x 1.5TB RAIDZ1 (ESXi VM data). Solaris 11 Express
 
Seems to me we use those HBAs in ESXi. I remember being told there was some issue, so I updated the driver within ESXi from QLogic's website. Can't remember what it was, but I've never heard any more problems out of them, so I assume the update made the problem go away.

Good score!
 