Infiniband Help

I am starting my first foray into InfiniBand and I am on the verge of purchasing 2 adapters and 1 cable to create a point-to-point connection between a Solaris 11 ZFS box and a Windows 7 x64 box using 10Gbps InfiniBand.

My question is: are the adapters and the cable linked below compatible?
I am finding all the various types of connection/interconnect somewhat confusing and I am reluctant to order until I am sure.

Any help is greatly appreciated.

Adapters
http://www.ebay.co.uk/itm/VOLTAIRE-...853?pt=LH_DefaultDomain_3&hash=item1c4f2fb0fd

Cable
http://www.ebay.co.uk/itm/2m-6ft-Me...730?pt=LH_DefaultDomain_3&hash=item25a125857a

Thanks everyone.
 
Strange that they mark the CX4 cables as 10Gb... mine from China were $25 US and they do DDR between them. In a point-to-point setup with those HCA cards I would expect to see DDR (20Gb) speeds.

Other than that, the Voltaire cards look like the cheap HP ones, which work fine with the right firmware... in ESXi it's the 2.7 firmware.

Even in a point-to-point network you are going to need a subnet manager installed for these to cross-communicate (I assume you are doing IPoIB); the subnet manager is what allows the cards to talk in a crossover scenario. I don't recall whether Solaris has a subnet manager... most people use Linux for it. I would make sure you have subnet manager software in place that looks solid and is classified as STABLE, or you are going to have a bad time.

HCAs (ConnectX and ConnectX-2) use CX4 cables between them... then it just comes down to the drivers, the subnet manager and all that good fun.
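For reference, on a Debian/Ubuntu box the bare-bones subnet manager bring-up looks roughly like this; treat it as a sketch, since package names and the exact opensm options vary by distro and setup:

# install the subnet manager and the diagnostic tools (Debian package names assumed)
apt-get install opensm infiniband-diags

# run opensm as a daemon; with a single HCA it will bind to the first port it finds
opensm -B

# the cabled port should move from Down/Initializing to Active once the SM sweeps the fabric
ibstat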
 
Thanks for the info.

If I install the subnet manager on the Windows host and get everything up and working, and then need to reboot the Windows host, what happens to the connection?

Does it:

1) Re-establish the connection once Windows and the subnet manager are back up

2) Require work on the ZFS box to re-establish the connection

3) Just need a reboot of the ZFS box

4) Or something else?


Thanks
 
Assuming the SM works on Windows... usually what would happen is that some form of multicast could possibly cause issues; if it's the standard TCP stack it shouldn't give you any grief.

1) Should re-establish on TCP... I have seen wonky things with multicast
2) Shouldn't...
3) Shouldn't...
4) It should just connect right back in... direct links aren't usually that painful...

What is the expectation: CIFS off the ZFS box, or are you planning iSCSI/NFS?
 
I am planning to use CIFS from the ZFS box eventually...

I now have the cards in place and I am testing them between 2 Windows hosts before messing with the Solaris ZFS box: one Windows 7 x64 and one Windows 8.1 x64. The cards are detected and the drivers are installed on each.

1) I have set each card's "HCA Port Type Configuration" to Auto.
2) I have given each port on each adapter an IP address as follows:

host Windows 7 x64 IB IF#0 10.0.0.1
host Windows 7 x64 IB IF#1 10.0.0.2

host Windows 8.1 x64 IB IF#0 10.0.0.3
host Windows 8.1 x64 IB IF#1 10.0.0.4

I have not set DNS or a gateway for these adapters; I am not sure they are needed for a direct link?
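For reference, the same static addressing can also be done from an elevated command prompt; the connection names below are just examples, so use whatever the IPoIB connections are called under Network Connections:

rem assign a static address to the first IPoIB connection (name is an example)
netsh interface ipv4 set address "InfiniBand #1" static 10.0.0.1 255.255.255.0

rem and to the second port
netsh interface ipv4 set address "InfiniBand #2" static 10.0.0.2 255.255.255.0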

I launch opensm.exe and the following text is displayed:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\Admin>opensm
-------------------------------------------------
OpenSM 3.3.11 UMAD
Command Line Arguments:
Log File: %windir%\temp\osm.log
-------------------------------------------------
OpenSM 3.3.11 UMAD

Entering DISCOVERING state

Using default GUID 0x8f104039a6166
Entering MASTER state

SUBNET UP


I am finding that each host displays 3 network connections in Task Manager; however, both InfiniBand adapters show as disconnected... I am not sure why this is.

I am guessing that I need to do some further configuration of the opensm subnet manager, but any suggestions are most welcome.
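For anyone following along, my first check will be whether the physical ports have actually come up, independent of the IPoIB network adapters; this assumes the WinOF package put vstat on the path alongside opensm:

rem port_state should read PORT_ACTIVE and port_phys_state LINK_UP on the cabled port
vstat

rem re-running opensm in the foreground should print SUBNET UP again once it sees the link
opensm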
 
Sorry for the late response; I have zero experience with the Windows version of the SM and the IB drivers...
 
I have started testing with Debian and a Windows host... if possible, could you take a look at the output of the Debian commands below and see if there is anything clearly wrong?

I have used this page and others as a rough guide.

http://www.servethehome.com/configure-ipoib-mellanox-hcas-ubuntu-12041-lts/

I have opensm running with the following command (as seen in htop).

/usr/sbin/opensm -g 0x0008f104039a6376 -f /var/log/opensm.0x0008f104039a6376.log
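To double-check that this instance has actually gone master rather than just running, sminfo from the infiniband-diags package can be queried on the same box (a sketch, assuming that package is installed):

# queries the subnet manager through the local active port; it should report a MASTER state
sminfo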

ibstat output

root@debian:~# ibstat
CA 'mlx4_0'
CA type: MT25408
Number of ports: 2
Firmware version: 2.7.700
Hardware version: a0
Node GUID: 0x0008f104039a6374
System image GUID: 0x0008f104039a6377
Port 1:
State: Down
Physical state: Polling
Rate: 8
Base lid: 0
LMC: 0
SM lid: 0
Capability mask: 0x0251086a
Port GUID: 0x0008f104039a6375
Port 2:
State: Active
Physical state: LinkUp
Rate: 8
Base lid: 1
LMC: 0
SM lid: 1
Capability mask: 0x0251086a
Port GUID: 0x0008f104039a6376
root@debian:~#

ifconfig output


root@debian:~# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:19:d1:0d:f6:99
inet addr:192.168.1.198 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::219:d1ff:fe0d:f699/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:392 errors:0 dropped:0 overruns:0 frame:0
TX packets:783 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:38556 (37.6 KiB) TX bytes:103268 (100.8 KiB)
Interrupt:17 Memory:ec000000-ec020000

ib0 Link encap:UNSPEC HWaddr 80-00-00-48-FE-80-00-00-00-00-00-00-00-00-00-00
inet addr:111.111.111.1 Bcast:111.111.111.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:4092 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:256
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

ib1 Link encap:UNSPEC HWaddr 80-00-00-49-FE-80-00-00-00-00-00-00-00-00-00-00
inet addr:11.11.11.1 Bcast:11.11.11.255 Mask:255.255.255.0
inet6 addr: fe80::208:f104:39a:6376/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:4092 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:17 errors:0 dropped:5 overruns:0 carrier:0
collisions:0 txqueuelen:256
RX bytes:0 (0.0 B) TX bytes:2255 (2.2 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:480 (480.0 B) TX bytes:480 (480.0 B)

lspci output

root@debian:~# lspci
00:00.0 Host bridge: Intel Corporation 82975X Memory Controller Hub
00:01.0 PCI bridge: Intel Corporation 82975X PCI Express Root Port
00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 01)
00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 01)
00:1c.4 PCI bridge: Intel Corporation 82801GR/GH/GHM (ICH7 Family) PCI Express Port 5 (rev 01)
00:1c.5 PCI bridge: Intel Corporation 82801GR/GH/GHM (ICH7 Family) PCI Express Port 6 (rev 01)
00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 01)
00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 01)
00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 01)
00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 01)
00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 01)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
00:1f.0 ISA bridge: Intel Corporation 82801GH (ICH7DH) LPC Interface Bridge (rev 01)
00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
00:1f.2 SATA controller: Intel Corporation NM10/ICH7 Family SATA Controller [AHCI mode] (rev 01)
00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 01)
01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 520] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GF108 High Definition Audio Controller (rev a1)
02:00.0 InfiniBand: Mellanox Technologies MT25408 [ConnectX VPI - IB SDR / 10GigE] (rev a0)
03:00.0 SATA controller: Marvell Technology Group Ltd. 88SE6145 SATA II PCI-E controller (rev a1)
04:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller

/etc/network/interfaces

root@debian:/etc/network# more interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet dhcp

auto ib0
iface ib0 inet static
address 111.111.111.1
netmask 255.255.255.0
post-up echo connected > /sys/class/net/ib0/mode
post-up /sbin/ifconfig $IFACE mtu 4092

auto ib1
iface ib1 inet static
address 11.11.11.1
netmask 255.255.255.0
post-up echo connected > /sys/class/net/ib1/mode
post-up /sbin/ifconfig $IFACE mtu 4092


root@debian:/etc/network#
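Side note for anyone copying this interfaces file: with connected mode enabled, IPoIB normally allows a much larger MTU than 4092 (up to 65520), which may be worth trying once basic connectivity works; an assumed example:

# with /sys/class/net/ib0/mode set to connected, a larger MTU is usually accepted
ifconfig ib0 mtu 65520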


For info, the Windows 7 vstat output:

WINDOWS vstat

C:\Users\USERNAME>vstat

hca_idx=0
uplink={BUS=PCI_E Gen1, SPEED=2.5 Gbps, WIDTH=x8, CAPS=2.5*x8}
MSI-X={ENABLED=1, SUPPORTED=256, GRANTED=10, ALL_MASKED=N}
vendor_id=0x02c9
vendor_part_id=25408
hw_ver=0xa0
fw_ver=2.07.0700
PSID=MT_04A0110001
node_guid=0008:f104:039a:6164
num_phys_ports=2
port=1
port_guid=0008:f104:039a:6165
port_state=PORT_DOWN (1)
link_speed=NA
link_width=NA
rate=NA
port_phys_state=POLLING (2)
active_speed=2.50 Gbps
sm_lid=0x0000
port_lid=0x0000
port_lmc=0x0
transport=IB
max_mtu=4096 (5)
active_mtu=4096 (5)
GID[0]=fe80:0000:0000:0000:0008:f104:039a:6165

port=2
port_guid=0008:f104:039a:6166
port_state=PORT_ACTIVE (4)
link_speed=2.50 Gbps
link_width=4x (2)
rate=10.00 Gbps
port_phys_state=LINK_UP (5)
active_speed=2.50 Gbps
sm_lid=0x0001
port_lid=0x0002
port_lmc=0x0
transport=IB
max_mtu=4096 (5)
active_mtu=2048 (4)
GID[0]=fe80:0000:0000:0000:0008:f104:039a:6166
 
What are the IP addresses you are using? I see the Linux ones but not the Windows ones... also I see the active MTU on the Windows side is 2048 while the MTU on the Linux side is 4096.

Other thing to mention: the PCIe slot in your Windows box is running at PCIe 1.0 x8, so your maximum throughput is going to be limited to roughly 2 GB/s...
 
Each host has 1 adapter which has 2 ports

The Debian host is as follows
IB0 111.111.111.1
mask 255.255.255.0

IB1 11.11.11.1
mask 255.255.255.0

Windows 7
IB1 111.111.111.2
mask 255.255.255.0

IB2 11.11.11.2
mask 255.255.255.0

Thinking about this as I type, I do not actually know whether the right ports are connected to each other.
 
That would be my gut feeling too... I would have initially started them on the same subnet, figured out which port is which, and then done the subnet separation on the second cable...
 
I have changed the IP addressing so that it is as follows:

The Debian host is as follows
IB0 111.111.111.1
mask 255.255.255.0

IB1 11.11.11.1
mask 255.255.255.0

Windows 7
IB1 111.111.111.2
mask 255.255.255.0

IB2 111.111.111.3
mask 255.255.255.0

This way I can swap the cable at the Debian end and test connectivity.
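A quick ping from the Debian box then tells me which Windows port is on the live cable, something like:

# whichever address answers shows which Windows port the connected cable goes to
ping -c 3 111.111.111.2
ping -c 3 111.111.111.3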
 
I have also now updated the MTU for the 2 InfiniBand adapters using the command below. However, this does not seem to have worked for the live connection; maybe a reboot will fix that.

netsh interface ipv4 set subinterface "InfiniBand #2" mtu=4096 store=persistent

"InfiniBand #2" This relates to the name of the connection in network management tool.



C:\Users\ross>vstat

hca_idx=0
uplink={BUS=PCI_E Gen1, SPEED=2.5 Gbps, WIDTH=x8, CAPS=2.5*x8}
MSI-X={ENABLED=1, SUPPORTED=256, GRANTED=10, ALL_MASKED=N}
vendor_id=0x02c9
vendor_part_id=25408
hw_ver=0xa0
fw_ver=2.07.0700
PSID=MT_04A0110001
node_guid=0008:f104:039a:6164
num_phys_ports=2
port=1
port_guid=0008:f104:039a:6165
port_state=PORT_ACTIVE (4)
link_speed=2.50 Gbps
link_width=4x (2)
rate=10.00 Gbps
port_phys_state=LINK_UP (5)
active_speed=2.50 Gbps
sm_lid=0x0001
port_lid=0x0003
port_lmc=0x0
transport=IB
max_mtu=4096 (5)
active_mtu=2048 (4)
GID[0]=fe80:0000:0000:0000:0008:f104:039a:6165

port=2
port_guid=0008:f104:039a:6166
port_state=PORT_DOWN (1)
link_speed=NA
link_width=NA
rate=NA
port_phys_state=POLLING (2)
active_speed=2.50 Gbps
sm_lid=0x0000
port_lid=0x0000
port_lmc=0x0
transport=IB
max_mtu=4096 (5)
active_mtu=4096 (5)
GID[0]=fe80:0000:0000:0000:0008:f104:039a:6166
 
Looking at the logs, although I am not sure what this means at the moment:

root@debian:~# tail -f /var/log/opensm.0x0008f104039a6376.log


Mar 31 18:34:23 578846 [828C6700] 0x02 -> osm_report_notice: Reporting Generic Notice type:3 num:66 (New mcast group created) from LID:1 GID:fe80::8:f104:39a:6376
Mar 31 18:34:23 578975 [818C4700] 0x02 -> osm_report_notice: Reporting Generic Notice type:3 num:67 (Mcast group deleted) from LID:1 GID:fe80::8:f104:39a:6376
Mar 31 18:34:23 579053 [820C5700] 0x02 -> osm_report_notice: Reporting Generic Notice type:3 num:66 (New mcast group created) from LID:1 GID:fe80::8:f104:39a:6376
Mar 31 18:34:23 579180 [830C7700] 0x02 -> osm_report_notice: Reporting Generic Notice type:3 num:67 (Mcast group deleted) from LID:1 GID:fe80::8:f104:39a:6376
Mar 31 18:34:23 579258 [828C6700] 0x02 -> osm_report_notice: Reporting Generic Notice type:3 num:66 (New mcast group created) from LID:1 GID:fe80::8:f104:39a:6376
Mar 31 18:34:23 579387 [818C4700] 0x02 -> osm_report_notice: Reporting Generic Notice type:3 num:67 (Mcast group deleted) from LID:1 GID:fe80::8:f104:39a:6376
Mar 31 18:34:23 579465 [820C5700] 0x02 -> osm_report_notice: Reporting Generic Notice type:3 num:66 (New mcast group created) from LID:1 GID:fe80::8:f104:39a:6376
Mar 31 18:34:23 579591 [830C7700] 0x02 -> osm_report_notice: Reporting Generic Notice type:3 num:67 (Mcast group deleted) from LID:1 GID:fe80::8:f104:39a:6376
Mar 31 18:34:23 579666 [828C6700] 0x02 -> osm_report_notice: Reporting Generic Notice type:3 num:66 (New mcast group created) from LID:1 GID:fe80::8:f104:39a:6376
 
Is it me or did your ports flip??? Before, port 2 on the Windows box was active and now port 1 is active.

Also, the firmware on the Windows card reports as 2.07 where the Linux one reports 2.7 - I would make sure the firmware versions match ;)
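The MFT tools will tell you what is actually burned on the card, independent of what the driver reports; roughly along these lines, with the device name taken from mst status:

rem list the MST devices, then query the flash image on the one it reports
mst status
flint -d mt25408_pci_cr0 query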
 
I did move the cables about, yes, just to check that the ports were the ones I thought they were. I had not checked the firmware versions, so I will try that next.
 
I have looked into firmware updates and there are newer versions that can be applied.

Interestingly, the "vstat" command reports the firmware version differently from "flint -d mt25408_pci_cr0 query", which seems to show that the firmware on this adapter is the same as the one in the Debian host.


C:\Users\****>vstat

hca_idx=0
uplink={BUS=PCI_E Gen1, SPEED=2.5 Gbps, WIDTH=x8, CAPS=2.5*x8}
MSI-X={ENABLED=1, SUPPORTED=256, GRANTED=10, ALL_MASKED=N}
vendor_id=0x02c9
vendor_part_id=25408
hw_ver=0xa0
fw_ver=2.07.0700
PSID=MT_04A0110001
node_guid=0008:f104:039a:6164
num_phys_ports=2
port=1
port_guid=0008:f104:039a:6165
port_state=PORT_ACTIVE (4)
link_speed=2.50 Gbps
link_width=4x (2)
rate=10.00 Gbps
port_phys_state=LINK_UP (5)
active_speed=2.50 Gbps
sm_lid=0x0001
port_lid=0x0003
port_lmc=0x0
transport=IB
max_mtu=4096 (5)
active_mtu=2048 (4)
GID[0]=fe80:0000:0000:0000:0008:f104:039a:6165

port=2
port_guid=0008:f104:039a:6166
port_state=PORT_DOWN (1)
link_speed=NA
link_width=NA
rate=NA
port_phys_state=POLLING (2)
active_speed=2.50 Gbps
sm_lid=0x0000
port_lid=0x0000
port_lmc=0x0
transport=IB
max_mtu=4096 (5)
active_mtu=4096 (5)
GID[0]=fe80:0000:0000:0000:0008:f104:039a:6166


C:\Users\****>mst start
-E- There is no need to start/stop mst service anymore, it is done automatically by the tools

C:\Users\****>mst status
MST devices:
------------
mt25408_pci_cr0
mt25408_pciconf0

C:\Users\****>flint
No options found.

C:\Users\****>flint -d mt25408_pci_cr0 query
Image type: FS2
FW Version: 2.7.700
Device ID: 25408
Description: Node Port1 Port2 Sys image
GUIDs: 0008f104039a6164 0008f104039a6165 0008f104039a6166 0008f104039a6167
MACs: 000000000000 000000000001
VSD:
PSID: MT_04A0110001

C:\Users\****>flint -d mt25408_pciconf0 query
Image type: FS2
FW Version: 2.7.700
Device ID: 25408
Description: Node Port1 Port2 Sys image
GUIDs: 0008f104039a6164 0008f104039a6165 0008f104039a6166 0008f104039a6167
MACs: 000000000000 000000000001
VSD:
PSID: MT_04A0110001

C:\Users\****>



ibstat continues to confirm that the firmware version in the Debian host is 2.7.700, as below.

root@debian:~# ibstat
CA 'mlx4_0'
CA type: MT25408
Number of ports: 2
Firmware version: 2.7.700
Hardware version: a0
Node GUID: 0x0008f104039a6374
System image GUID: 0x0008f104039a6377
Port 1:
State: Down
Physical state: Polling
Rate: 8
Base lid: 0
LMC: 0
SM lid: 0
Capability mask: 0x0251086a
Port GUID: 0x0008f104039a6375
Port 2:
State: Active
Physical state: LinkUp
Rate: 8
Base lid: 1
LMC: 0
SM lid: 1
Capability mask: 0x0251086a
Port GUID: 0x0008f104039a6376
root@debian:~#
 
Try this....

You can find the port GUIDs of your cards with the ibstat -p command:

# ibstat -p
0x0002c9030002fb05
0x0002c9030002fb06

ibping is the InfiniBand equivalent of the ICMP ping command. Choose a node on the fabric and run an ibping server:

#ibping -S


Choose another node on your network, and then ping the port GUID of the server. (ibstat on the server will list the port GUID).

#ibping -G 0x0002c9030002fc1d
Pong from test.example.com (Lid 13): time 0.072 ms
Pong from test.example.com (Lid 13): time 0.043 ms
Pong from test.example.com (Lid 13): time 0.045 ms
Pong from test.example.com (Lid 13): time 0.045 ms
 
Hi, sorry for the delay in replying.

While testing I was able to set up the ping server on both systems and ping in both directions. However, I also noticed that the adapters I have give the option in Device Manager to run in "ETH" mode.

The adapters are Voltaire-branded MT25408s, but they are Mellanox chips.

I tried using ETH mode and set up IP addresses for each host. I also found that I needed to supply a "locally administered address", which I set to be the same as on the other host but incremented by 1,

e.g.
host 1's locally administered address was 000000000001, so I set the 2nd host to 000000000002.

This setting is in Windows under the adapter's "Advanced" configuration tab; the screenshot below is not for the right adapter, but the option is the same and in the same place.


https://drive.google.com/file/d/0By16GyIFgiG_V2RBaWJuR3d2VlU/view?usp=sharing

I was then able to send and receive data as if it were a 10Gbps Ethernet adapter, without needing OpenSM etc.

I was sceptical about the performance; however, using jperf (see https://code.google.com/p/xjperf/) I was able to sustain throughput of up to 909 megabytes per second, although this maxed out the Intel dual-core E8400 I was using for testing. In the real world my disk subsystem will do more like 600 megabytes per second read/write, so I will run this for a while and see how I get on.
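For anyone wanting to repeat the test from the command line, jperf is just a front end for iperf, so the rough equivalent is the following (the window size, stream count and duration are simply what I would start with, and the address is an example):

rem on the receiving host
iperf -s -w 256K

rem on the sending host, pointed at the other box's address (10.0.0.1 as an example)
iperf -c 10.0.0.1 -w 256K -P 2 -t 30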

I will post some performance stats later tonight

Thanks for all the help
 
InfiniBand Update

After a few months of messing around with lots and lots of config options, I have managed to get a sustained copy speed of 300+ MB/sec using IPoIB between 2 Windows hosts over a point-to-point connection. This is not great by InfiniBand standards, but it is 3x faster than the 1Gbit Ethernet I was using and many times cheaper than the 10Gbit Ethernet that is available.

Problems I encountered

I encountered many problems along the way and maybe some of this will help others out there.

I was using an LSI 8308ELP RAID card. The write speed of the LSI 8308ELP is rubbish, so I had to ditch it. The problem was that the first 2GB of data transferred very fast and then the rate dropped to 100MB/sec no matter what I tried; it turned out the card was dumping the data straight to RAM rather than to disk, which makes it useless for bigger copy jobs. It made no difference whether I used RAID 0 or any other RAID type, block size made no difference, and it was an issue whether I was copying files locally or over the network. It was much, much faster using software striping across just 5 disks on the onboard motherboard controller than across the planned 8 on the RAID card. The motherboard is an Intel S5000VSA (really old).

InfiniBand Adapter Config Changes

1) Disabled interrupt moderation

2) Disabled IPv4 checksum offload

3) Set the jumbo packet size to 1564; it seemed to be slower with it set higher, for some unknown reason - maybe someone out there can explain that?

4) Set the RoCE max frame size to 2048


Registry Changes

1) Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Parameters\Interfaces\{NIC-id}

2) Create two DWORD values, one called TCPAckFrequency and one called TCPNoDelay

3) Set each new value from 0 to 1 (see the reg commands below)
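The same thing can be done from an elevated command prompt; {NIC-id} is left as a placeholder for the IB connection's interface GUID:

rem create both DWORD values under the IB interface's TCP/IP key and set them to 1
reg add "HKLM\SYSTEM\CurrentControlSet\services\Tcpip\Parameters\Interfaces\{NIC-id}" /v TCPAckFrequency /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\services\Tcpip\Parameters\Interfaces\{NIC-id}" /v TCPNoDelay /t REG_DWORD /d 1 /f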


With these changes I am able to get 300MB/sec over the InfiniBand link transferring real files, rather than using something like jperf, which gives significantly higher results.

I am using an old dual-CPU Xeon board with 16GB of ECC RAM and 2x Xeon 5355s, which are around 9 years old now. I believe this is the current bottleneck and, with any luck, I will be able to replace it with a 3GHz+ i5 system that should provide higher copy rates. I am also currently using an LSI 3ware 9650SE, which does not suffer from the 2GB issue mentioned above. I think better performance would be achieved using motherboard SATA ports if I can get a board with 10 SATA ports at a reasonable price, but I have not tested this as yet.

https://drive.google.com/file/d/0By16GyIFgiG_VDkzcF9Ga0JhRUE/view?usp=sharing

https://drive.google.com/file/d/0By16GyIFgiG_bjVLY1JZV3lQOUE/view?usp=sharing
 
Just a note on the IP addressing. Please use RFC1918 private space instead of 111 or 11 address spaces. Failure to do so will render valid internet hosts on those networks unreachable from your machines due to routing.
 
Thanks for the feedback. Currently both hosts are only connected to each other.
 
Still a very bad idea. Given there are millions of compliant addresses, there is no good reason not to do this the right way (/off-soapbox) :)
 
Thanks for the feedback. Currently both hosts are only connected to each other.
Even having them connected directly will impact the other adapters in your computers. The routing table is shared across them, and any attempt to connect to those IP ranges will be sent over the point-to-point adapter rather than the NIC with the internet connection.
 
Some more performance tweaking options I have found have boosted transfer rates to 600+ megabytes per second.

Change the TCP window size to 256 kilobytes
Change the max segment size (MTU) to 256 kilobytes

These changes are best made using regedit, setting them as options for the specific adapter rather than system-wide.
I use these settings because I have two hosts connected directly together and mostly transfer videos from game capture.
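As a sketch, the per-adapter TCP window value looks like this; {NIC-id} is the same interface GUID placeholder as in the earlier registry tweaks, 262144 (0x40000) is 256 kilobytes, and on Vista/7 the receive window autotuning may override it:

rem per-interface TCP window size of 256KB under the IB adapter's interface GUID
reg add "HKLM\SYSTEM\CurrentControlSet\services\Tcpip\Parameters\Interfaces\{NIC-id}" /v TcpWindowSize /t REG_DWORD /d 262144 /f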

As the disk subsystems on the 2 machines struggle to sustain this sort of data rate, I have tested this with JPerf 2.0.2 to simulate the performance, and I have also confirmed it with 2 small 6-gigabyte RAM drives.

TcpWindowSize

https://support.microsoft.com/en-us/kb/900926 (use Method 3)
 