So many questions....

xx0m3gaxx

Well, to start off I will list what my servers have and the goal I want to reach with all this...

VM server (Proxmox)
CPU - 2x Opteron 6177
MB - MBD-H8DG6-F-O E
RAM - 64GB DDR3 1333 ECC reg
SSD - 120GB PNY
HDD - 4x 1TB Seagate SATA 2 drives

FreeNAS server
CPU - Opteron 6177
MB - MBD-H8SGL-F
RAM - 16GB DDR3 1066 ECC reg (will be upped to 32GB soon)
SSD - 4x 64GB ADATA Premier SP600 ASP600S3-64GM-C (for testing)
HDD - 4x 2TB WD Black SATA 3
HBA - SAS 9211-8i (IT mode)


Both servers have an Intel I350-T4 quad gigabit card, and my switch is an HP 2810 48-port switch. I was looking to use all 4 ports on the cards for an iSCSI target on the FreeNAS system, so I can use the 4 SSDs in that box as a storage location for VMs.

When I was first told about this, I was told that you can use LACP or round-robin to get 4Gbps, but from what I've read, since only one system is accessing it, it will still only provide 1Gbps even if the links are bonded/teamed.

So my main question is: what can I do to get anything higher than 1Gbps using this setup for iSCSI? Is there a way to do it via LACP or round-robin, or will I have to go MPIO?

Any help, suggestions, or info is welcome, and I thank you in advance.

(I am trying to avoid having to go out and buy 10Gbps network cards because I know I won't use anywhere near that on these systems, and I already have about $250 wrapped up in the 3 cards, Cat 6 cables, and the switch.)
 
LACP is simply that: a Link Aggregation Control Protocol. All it does is negotiate and manage the aggregate link. Many people have the common misconception that bonding more NICs together gets you higher speed for a single connection. The simple answer is that standard link aggregation has no way to split one stream across the member links and reassemble it correctly on the receiving end; instead, each flow (e.g. one TCP connection) is hashed to a single member link, so a single iSCSI session still tops out at one link's speed. What aggregate links give us is redundancy and more total throughput across multiple flows.

The best analogy I've heard to describe it is this. Think of one link as a two-lane highway during your morning commute. The speed limit is fixed, but only a certain number of cars can go both ways before you get a traffic jam and data slows to a crawl. Then imagine the highway is expanded with another link, a second two-lane highway alongside the first. The speed limit is still the same, and no single car gets there any faster, but more cars can use the highway at once since there are more lanes available. If one of the highways has a traffic accident (the new network guy unplugged the wrong cable), you still have the other link working and aren't completely down. Things will just be slower until you get the other link back up.
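To make the "one flow sticks to one link" point concrete, here is a small sketch of the per-flow hashing a bond does on egress. The hash function and addresses here are purely illustrative (no switch uses MD5 for this); the point is only that the same flow always maps to the same member link:

```python
# Illustrative sketch of layer3+4 flow hashing on a 4-port aggregate.
# The hash is hypothetical; real bonds/switches use simpler vendor hashes.
import hashlib

NUM_LINKS = 4  # four 1 Gb/s ports in the bond

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Hash the flow's addresses/ports to one member link index."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % NUM_LINKS

# One iSCSI session = one TCP flow = always the same link (1 Gb/s cap).
session = pick_link("10.0.0.10", "10.0.0.20", 51000, 3260)
assert all(pick_link("10.0.0.10", "10.0.0.20", 51000, 3260) == session
           for _ in range(100))

# Distinct flows (e.g. multiple MPIO sessions on different source ports)
# hash to different links and can use more of the aggregate.
links = {pick_link("10.0.0.10", "10.0.0.20", p, 3260)
         for p in range(51000, 51032)}
print(f"one flow -> link {session}; 32 flows -> {len(links)} distinct links")
```

This is why MPIO helps where LACP can't: MPIO opens several independent iSCSI sessions, each of which is its own flow and can ride its own link.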

--edit Check out Emulex. Since it looks like you have some white boxes and all you want is more speed, they offer 4, 8, and 16Gb/s Fibre Channel HBAs, so you could do a point-to-point connection between the two boxes. Depending on what you are going to use this for, it could be your way of getting the 4Gb/s speed you were hoping for. I know VMware supports them, but I would check with FreeNAS if you can to make sure it will work.
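If you do go the MPIO route with the gear you already have, the initiator side on a Linux box (like your Proxmox host) roughly looks like this with open-iscsi. Everything here is a sketch: the portal IPs, subnets, and the IQN are hypothetical placeholders, and it assumes FreeNAS is listening on one address per NIC:

```shell
# Hypothetical layout: FreeNAS answers on four portals, one per 1 Gb/s NIC.
# Discover the target through the first portal.
iscsiadm -m discovery -t sendtargets -p 10.10.1.2

# Log in once per portal so each session becomes its own TCP flow
# (and therefore its own 1 Gb/s path). The IQN below is made up.
for p in 10.10.1.2 10.10.2.2 10.10.3.2 10.10.4.2; do
    iscsiadm -m node -T iqn.2013-10.org.freenas:vmstore -p "$p" --login
done

# dm-multipath then aggregates the four sessions into one block device.
multipath -ll
```

The key design point is that each portal lives on a separate subnet/NIC pair, so the four sessions can't collapse onto one wire the way a bonded single session would.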
 