FCoE CNA Protocol? (Solaris 11.1 & ESXi 5.1)

MrVining

Storage Server:
Chassis: SuperMicro - 846E26-R1200B
http://www.supermicro.com/products/chassis/4u/846/SC846E26-R1200.cfm

Motherboard: SuperMicro - H8SGL-F
http://www.supermicro.com/aplus/motherboard/opteron6000/sr56x0/h8sgl-f.cfm

CPU: AMD - Opteron 6212
http://products.amd.com/en-us/OpteronCPUDetail.aspx?id=765

SAS HBA: LSI SAS 9211-8i
http://www.lsi.com/channel/products/storagecomponents/Pages/LSISAS9211-8i.aspx

(2) FCoE CNAs: Brocade 1020
http://www.brocade.com/products/all/adapters/product-details/1010-1020-cna/index.page

Storage: Lots of 3TB disk drives & 6x SSD (4x cache & 2x ZIL)

OS: Solaris 11.1

Application Server:
Bare Bones Server: SuperMicro - 2022G-URF
http://www.supermicro.com/Aplus/system/2U/2022/AS-2022G-URF.cfm

CPU: AMD - Opteron 6272
http://products.amd.com/en-us/OpteronCPUDetail.aspx?id=761

FCoE CNAs: Brocade 1020
http://www.brocade.com/products/all/adapters/product-details/1010-1020-cna/index.page

OS: ESXi 5.1

The final project (fingers crossed):
I hope to direct-connect two ports of the Brocade CNA in the Application Server to two ports of a Brocade CNA in the Storage Server with twinax cables.

Once that is working, I hope to use the other CNA in the Storage Server to connect to a cheap fiber switch that is yet to be picked out, most likely something off eBay. I want to connect about 5 workstations to that switch so they can boot off of this home-brew SAN.

The last item is on a back burner so far back I may never get to it, but one day I would like to learn how to do some sort of netboot over my normal 10/100/1000 network for things like MythTV front ends. I have other much more important projects that need to come first.

Question number 1: What protocol would work best between the Storage and Application servers? I need to know what to be searching for. At first I wanted to use IP so I could use iSCSI targets and NFS. I've been reading as much as I can, and it seems others on this site are using something different, but I'm new at this so I'm not sure what it is. FC maybe - is that a protocol? Another wrinkle is that I want some of the servers to have access to the same files through this high-bandwidth connection.

Question number 2: ZFS ZIL / cache drives... At this point I'm pretty sure the most RAM I will ever have in the Storage Server is 64GB. From what I've read, that means the ZIL will never need more than 32GB? If that's true, does it make sense to partition (4) 128GB SSDs? I could take a 16GB partition on each of the 4 drives for a mirrored + striped ZIL. That still leaves quite a lot of space left over for cache across 4 drives that have some real bandwidth. Something like the sketch below is what I have in mind.
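(All device names here are made up, and "tank" is a placeholder pool name - s0 being the 16GB slice and s1 the remainder on each SSD.)

# two mirrored slog pairs, striped by ZFS, plus four striped cache devices
zpool add tank log mirror c1t2d0s0 c1t3d0s0 mirror c1t4d0s0 c1t5d0s0
zpool add tank cache c1t2d0s1 c1t3d0s1 c1t4d0s1 c1t5d0s1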

Question number 3: I have been bouncing back and forth between RAID-Z and mirrored + striped. This is mostly because some of the data will be written once and then who knows when it will ever be accessed again. I have mp3s, for instance, that I haven't listened to in years, and movies in a media center. Some of the data will be from MythTV or other DVR software... that data I couldn't care less about. I recently found out about the "zfs set copies=x" command. So now I'm thinking about just adding all my 3TB drives into a pool with no mirroring or RAID-Z, then using that command to adjust how much redundancy I want per filesystem. For family photos I may use 3 or 4 copies, but a boot-drive target for a TeamSpeak or Minecraft server only needs 2, and for DVR I'm comfortable with 1. If I want to watch a lost show that badly, I can buy it on Amazon Video or something. Anyway, what are your thoughts on this?

Please take it easy on the flames. I've been doing a lot of reading, but since I'm new at this stuff it's hard to put the pieces together.
 
@OP

My questions:

1) What are you running that will need 10Gb/s bandwidth?

2) What are you running that will actually need FCoE as opposed to 10GigE? Is this a learning environment?


Your questions:

1) Personally I would go Ethernet and iSCSI, and not bother with Fibre Channel at all unless it's for learning.

2) I can't see your storage box needing more than 16GB RAM, unless I'm missing something? It's a fileserver, after all. :)

3) Personally, I'd use RAID-Z or RAID-Z2 for the media, and the RAID 10 equivalent (striped mirrors) for the VMs.
 
Personally I wouldn't bother with FCoE in the home - if you really need 10g speeds I'd look at 10g ethernet using copper CatX cable.

As to dedicated ZIL devices and L2ARCs, be sure you actually need them first before jumping in - you can always add them later if you need to.
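For what it's worth, they're easy to bolt on (and remove) after the fact - e.g., with made-up pool and device names:

zpool add tank log c1t6d0      # add a dedicated slog later
zpool add tank cache c1t7d0    # add an L2ARC device later
zpool remove tank c1t6d0       # log and cache devices can also be removed again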

Finally, your storage layout may need another visit - "zfs set copies=x" is a filesystem-level feature intended to be used in addition to the pool/vdev-level protection, not in place of it. To protect your data from a hardware failure, such as a dead disk, you need RAID-Zx or mirroring (or some external protection scheme, such as SnapRAID).
The multiple-copies feature "may" protect your data from a failing disk, but if your pool/vdev has no redundancy (i.e. no mirroring or RAID-Zx) then it can't protect you from a failed disk - in that case the pool will go offline and none of the copies will be accessible. You'll lose your entire pool!
And since the overhead of protecting every file with copies=2 is 50%, the same as mirroring, you may as well mirror.
What the multiple-copies feature does give you is the ability to set it differently on different filesystems within the pool - for instance you could have three filesystems: one with the standard single copy, one with dual copies, and one with triple copies for the really paranoid :)
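In practice that might look something like this (pool, disk, and filesystem names all made up) - mirrored vdevs for the pool-level protection, with copies layered on top per filesystem:

zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
zfs create tank/dvr                # default copies=1 is fine here
zfs create tank/vmboot
zfs set copies=2 tank/vmboot       # each block stored twice, on top of the mirroring
zfs create tank/photos
zfs set copies=3 tank/photos       # triple copies for the really paranoid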
 
1. iSCSI or FC will work just as well as each other. FCoE is great for situations where you're doing multiple hops over a network and need to ensure that the storage protocol gets proper QoS. Based on your use, you won't see better performance with FCoE, so I'd go with iSCSI as it's easier to set up (see the sketch after this list). NFS will take a performance hit but still won't be bad for what you're doing.
2. Sorry, I don't know.
3. As has been mentioned, copies=x doesn't guarantee that the copies land on different disks. So set up a pool with either mirroring or RAID-Z and use copies for the important bits. Just be sure to also have a backup solution for the really important stuff - something like a 'cloud' service such as Dropbox or Google.
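To give you a sense of how little is involved on the Solaris side, a bare-bones COMSTAR iSCSI target looks roughly like this - a sketch only, with a made-up pool/volume name, and you may need to install the storage-server package group first:

pkg install group/feature/storage-server       # if COMSTAR isn't installed yet
svcadm enable stmf                             # start the STMF framework
zfs create -V 50G tank/vmboot                  # a zvol to export as a LUN
stmfadm create-lu /dev/zvol/rdsk/tank/vmboot   # prints the new LU's GUID
stmfadm add-view <GUID-from-create-lu>         # expose the LU to all hosts
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target                            # creates a target with a default IQN

Then point the ESXi software iSCSI initiator at the Solaris box's IP and rescan.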
 
@OP

My questions:

1) What are you running that will need 10Gb/s bandwidth?

2) What are you running that will actually need FCoE as opposed to 10GigE? Is this a learning environment?

1)
Lots of things. MythTV has 7 HD tuners, and I may be adding three more with another HDHomeRun Prime. Plus transcoding. Plus possibly the streams going out to the front ends. 4-5 security cameras. The bandwidth for all the boot drives for all my VMs will come through here. To name a few. I'm also looking at it from the other end... if two of the cache drives reading at 550 MB/s apiece can saturate the 10GigE on their own, why not plug a second cable in between the two servers?
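(The math: 2 x 550 MB/s = 1,100 MB/s, or roughly 8.8 Gb/s - most of a single 10GigE link's ~1.25 GB/s raw line rate, and that's before protocol overhead.)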

2)
As far as I know nothing I'm doing NEEDs FCoE, but I have a box of those Brocade CNA cards, so... I'm just trying to figure out what protocol makes the most sense.
 
@OP

No worries, we're happy to help, but you may want to include more info in your post. For example, it wasn't clear that "...I have a box of those Brocade CNA cards...", so I assumed you were going to go and buy them. :)
 
Oh, no, everything I listed I already have. Some of it was donated, some I bought, and some is repurposed for this project. I may end up needing to replace some of this hardware with supported hardware, but it's looking like I should be GTG.

I'm still trying to figure out if I can even direct-connect two of these cards together. Just getting the drivers installed on Solaris is a task for me.
 
Just fyi... These Brocade 1020's seem epic, but installing drivers / firmware... is not fun. The documentation pretty much stinks. It's full of links back to the same page you downloaded it from.
 