Looking for >gigabit between 2 PCs; I'm right at the 10 m limit of SFP+ DAC.

ConnectX-2 cards are able to do 9.5 Gbps in iperf. They also work out of the box in Windows Server 2016 and Win10.
Also, as arnemetis said, they'll most likely end up in a PCIe 2.0 slot fed by the chipset, and most low-to-mid-range chipsets, at least on AM4, are 2.0. We'd like to keep all the CPU 3.0 lanes for the graphics card :) . My cards are also in 2.0 slots. Their (the slots') theoretical limit is 16 Gbps at x4 anyway, I think.
Of course, if anyone finds similarly priced ConnectX-3 cards, they should get those instead.
My ConnectX-2 cards at least have a little bug: if one machine is rebooted, sometimes the connection is not recovered after it boots. It seems to be a bug in the firmware; on one end the connection says "cable unplugged". A simple scheduled script checks the state of the other machine and of the connection and disables/enables the connection, which resolves the issue.
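For anyone curious what that workaround looks like, here is a minimal PowerShell sketch of the idea, meant to be run from Task Scheduler; the adapter alias and peer address are placeholders, not the actual values from my setup:

# Hypothetical sketch of the scheduled "reconnect" workaround described above.
# Adapter alias and peer IP are placeholders; adjust for your own setup.
$adapter = "Mellanox 10G"       # alias of the ConnectX-2 adapter
$peer    = "192.168.99.50"      # the machine on the other end of the link

$linkUp = (Get-NetAdapter -Name $adapter).Status -eq "Up"
$peerUp = Test-Connection -ComputerName $peer -Count 1 -Quiet

# If the peer is reachable (e.g. over the regular 1G LAN) but this link still
# shows "cable unplugged", bounce the adapter to bring the connection back.
if ($peerUp -and -not $linkUp) {
    Disable-NetAdapter -Name $adapter -Confirm:$false
    Start-Sleep -Seconds 5
    Enable-NetAdapter -Name $adapter
}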
My performance in Win10 was horrible with the auto-detected Microsoft drivers; it was better after downloading them from Mellanox...
 
Sample size of 1, but I never had any issues with my ConnectX-2 until I overheated it (don't put the heatsink/card somewhere no airflow can reach!), no connect/disconnect issues, etc.
I plan on somehow getting 80mm fans on them to keep them cool. I had also read about them having issues when getting really hot, as they are generally found in servers with plenty of airflow.
My performance in Win10 was horrible with the auto-detected Microsoft drivers; it was better after downloading them from Mellanox...
Yeah, I've seen some comments about that too, but then I also saw that Mellanox doesn't officially support these in Windows 10, and you have to use an old driver from the Windows 8.1 set. Do you have a link to working drivers, or the version number of what you're using?
 

Let me help you:

The Windows 8.1 drivers for the Mellanox ConnectX-2 work with Windows 10, as do the newer ones that don't officially "support" the ConnectX-2 (I know, it's weird...)

Re airflow, a single 80 mm fan is probably overkill for the ConnectX-2 - Chelsio cards typically go supernova, but not Mellanox. Mine was cramped at the bottom of a case with about 5 mm of clearance (at best, as cables were stuffed under it). Obviously some airflow is better than none, but as long as there's a little bit I think it'll be fine.
 
Can you give me an example of cheap 10Gbit Ethernet cards? Per my second reply in the thread, it looks like going the Ethernet route will run me $177 while my first stab at fiber runs me $117. If I go the secondary route of getting SFP+ cards with transceivers already in them, I can knock a bit more off and get under $100. No one has replied with specifics about the type of cables that would be appropriate for me, so I may be way off base with that part, though.

The cables are multi-mode LC-LC 50/125
 
arnemetis, don't worry, it might be the specific firmware of my cards. I don't want to risk trying to update the firmware. Also, I use them with the built-in Windows driver. I'm likewise reluctant to try the old Mellanox drivers for these cards on my newer Windows installs, since they work OK apart from that "reboot bug".
My observation is that when one machine reboots, the problem arises on the other machine - there I (or the script) have to cycle the connection (disable/enable) to restore connectivity. The funny thing is that the connection on one end seems OK, but on the other it shows a red X and "cable unplugged".
As to Cat5 cables, they can carry 1 Gbps for short runs by design; Cat5e is rated for 1 Gbps at 100 m. I read something about the upcoming 2.5 and 5 Gbps standards, and Cat5e was rated for 5 Gbps at something like 50 m or so, see Wikipedia.
5 Gbps over RJ45 would have been my choice if it were available now. 2.5 Gbps is slow, but 5 is OK if it can use existing Cat5e cables - in a household the individual runs most likely won't exceed 30-40 m.

P.S. Argh, I only now saw there is a second page to the thread :) . I get those 9.5 Gbps with the Windows built-in drivers on both ends, either Windows Server 2016 or Win10.
The bigger issue for me is Windows Explorer file copy. By default it uses buffered copy, which is bad for very large files: after the initial burst the speed plummets to 100-120 MB/s and the average speed is low. With unbuffered copy (for example using TeraCopy) I can get normal speeds (180 MB/s for an HDD) across the entire transfer.
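If you'd rather not install a third-party tool, the built-in robocopy can also do unbuffered copies of large files; a minimal sketch, with placeholder paths:

# robocopy's /J switch requests unbuffered I/O, which avoids the cache-related
# slowdown described above for very large files. Paths/filter are placeholders.
robocopy "D:\BigFiles" "\\server\share\BigFiles" *.mkv /J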
 
With Win10, the Mellanox drivers, and FreeNAS, I regularly copy >300 MB/s from an older SSD to FreeNAS with no special tweaks...
 
I'm doing basically the same thing, only going ConnectX-3; $26.95 https://www.ebay.com/itm/Mellanox-M...etwork-Card-10GbE-SinglePort-SFP/322917300357

I actually have 3 machines that should probably go 10G; my NAS, my server, and my desktop could certainly benefit. I'm just going with the server and the NAS to start things out. It looks like it gets crazy expensive to have more than 2 SFP+ ports on a switch, but I can get away with 2-port switches at around $100. I'm a bit too far over budget right now, so I'll hold off.
 
The transceivers came in today, and it got me thinking: what is the appropriate way to support the excess fiber? I'll likely have a few loops of excess length at one end. Make a few large loops and hang them on two nails? I figured one nail might be too much of a point load on the cable. Searching for this, I seem to be coming up empty-handed. I have a 2x3 on the wall by the server I can support it from, or I can support the loop at ceiling level on the 2x8 joists. Suggestions?
 
You're putting too much thought into this. One nail will be sufficient. Most cables are coated with Kevlar or some other decent material; they're not that fragile. Just don't bend them back 180 degrees or spin them round like a windmill.

If you want to be extra cautious, put a bit of foam or something over the nail (a packing bead would work)

Heck you could duct tape it to the joist if you wanted and that would work
 
You're probably right; in my mind I started thinking I would need a fiber-sized support bracket like this:
(image: s-l1000.jpg - a fiber cable support bracket)
 
(attachment: penn_slammer_iii_5500_reel_15103_5564768_L.jpg)
Dammit, you weren't supposed to post actual products! I don't think I need anything so fancy; it's lighter than I thought it would be. Surprised by the much-faster-than-expected delivery, the gang's all here! I'll probably be stringing this up tonight after work.
 
You're putting too much thought into this. One nail will be sufficient. Most cables are coated with Kevlar or some other decent material; they're not that fragile. Just don't bend them back 180 degrees or spin them round like a windmill.

If you want to be extra cautious, put a bit of foam or something over the nail (a packing bead would work)

Heck you could duct tape it to the joist if you wanted and that would work
can I use it as a lasso?
 
Dammit, you weren't supposed to post actual products! I don't think I need anything so fancy; it's lighter than I thought it would be. Surprised by the much-faster-than-expected delivery, the gang's all here! I'll probably be stringing this up tonight after work.
Mine's coming Monday, the same thing only using ConnectX-3. I'm finding myself wanting to do a third and fourth machine, but the cost of a switch with four SFP+ ports noped me right out of that.
 
So I got it hooked up last night, and lost the 50/50 chance that the cable I plugged in on one side was the same one on the other (forgot to label the two cables). After swapping one end, it works! I installed the Mellanox 4.80 Windows 8.1 drivers on both machines, did the complete install, and rebooted each. I changed the IP addresses to static and then did a few quick tests. iperf turns up results between 2.5 and 3 Gbit, and I did a copy test of 27.6 GB of movie files, which took 1:50 at 256 MB/s according to TeraCopy. Disk usage seemed to peak around 300 MB/s on my PC's side, so I think I'm reaching the limit of my SATA Samsung 850 EVO SSD. Disk usage via Task Manager on my PC was averaging 60-70%, on my server only 5-10%. It's also worth noting that I'm doing Burstcoin mining, and while that happened I saw the same disk on my server shoot up to 80-90% usage and over 500 MB/s reads, so I know the server's arrays are capable of more. I just finished a second test of 49.3 GB and it finished in 4:27 with an average of 189 MB/s according to TeraCopy, for another data point.

So, are there any standard tweaks I should implement on a fresh install here? It's quite possible my system's SSD is the limiting factor; I did expect it to be, but thought I'd do a little better. Also, is there a more user-friendly way to monitor these cards? Using mget_temp I have peaked at 36°C on my PC's card during a transfer; I attached an old chipset fan to the heatsink. In the server I put a 120 mm fan standing up on the bottom of the case, blowing over the Mellanox card, my RAID card, and the GPU.
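As an aside, a crude but slightly friendlier way to watch the temperature is just to poll mget_temp in a loop; a hypothetical PowerShell sketch, where the device name is a placeholder (list yours with "mst status" from the Mellanox MFT tools) and the exact command syntax/output may differ by MFT version:

# Poll the card's ASIC temperature every 10 seconds via Mellanox's mget_temp.
# "mt26448_pci_cr0" is a placeholder device name; check "mst status" first.
$device = "mt26448_pci_cr0"
while ($true) {
    $temp = (& mget_temp -d $device | Out-String).Trim()
    Write-Host ("{0:HH:mm:ss}  {1} C" -f (Get-Date), $temp)
    Start-Sleep -Seconds 10
}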
 
I didn't even know there were two ways to hook them up; I won my 50/50, apparently. I have FreeNAS and a Win10 machine, and it went straight to 620Mb/sec where the 1G link was around 113, and I'm not confident my storage array can do much more than that. I should switch to TeraCopy. I haven't done any tweaking yet. I think I should probably go ahead and order a switch so I can plug these into the normal network; the file server specifically saturates that 1G link a bit too often.
 
So you got 2-3 Gbps with iperf?! That's odd. Have you monitored the CPU usage on both systems? You should get 9+ Gbps unless one of the systems is a very old Socket AM2/775 box that can't keep up.
The speeds of the server's arrays, disks, or SSDs (on any computer, really) are easy to assess locally, without having to resort to network transfers and metering.
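For example, a quick local sequential-read number can be had from the built-in winsat tool (run from an elevated prompt; the drive letter is a placeholder), or from something like CrystalDiskMark:

# Rough local disk benchmark without involving the network; "d" is whichever drive you want to test.
winsat disk -seq -read -drive d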

P.S. 620Mb/s = 77MB/s. That's slow.
 
Using iperf-3.1.3-win32.
Both of the systems are in my sig; mine is an 8700K, the server is a Ryzen 1700X. During the test I did see a bit of CPU activity, but no more than 10% on either. I just ran the standard commands, iperf3.exe -s on the server and iperf3.exe -c <ip address> on my machine. The only thing I can think of is that the server's SSD is an old 128 GB SanDisk, not a very fast one, but I don't think that should matter.

PS C:\Users\arnemetis\Downloads\iperf-3.1.3-win32> ./iperf3.exe -c 192.168.99.50
Connecting to host 192.168.99.50, port 5201
[ 4] local 192.168.99.168 port 51649 connected to 192.168.99.50 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 285 MBytes 2.39 Gbits/sec
[ 4] 1.00-2.00 sec 287 MBytes 2.41 Gbits/sec
[ 4] 2.00-3.00 sec 292 MBytes 2.45 Gbits/sec
[ 4] 3.00-4.00 sec 292 MBytes 2.45 Gbits/sec
[ 4] 4.00-5.00 sec 288 MBytes 2.42 Gbits/sec
[ 4] 5.00-6.00 sec 291 MBytes 2.44 Gbits/sec
[ 4] 6.00-7.00 sec 290 MBytes 2.43 Gbits/sec
[ 4] 7.00-8.00 sec 288 MBytes 2.42 Gbits/sec
[ 4] 8.00-9.00 sec 293 MBytes 2.46 Gbits/sec
[ 4] 9.00-10.00 sec 290 MBytes 2.43 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 2.83 GBytes 2.43 Gbits/sec sender
[ 4] 0.00-10.00 sec 2.83 GBytes 2.43 Gbits/sec receiver

iperf Done.
 
I didn't even know there were two ways to hook them up; I won my 50/50, apparently. I have FreeNAS and a Win10 machine, and it went straight to 620Mb/sec where the 1G link was around 113, and I'm not confident my storage array can do much more than that. I should switch to TeraCopy. I haven't done any tweaking yet. I think I should probably go ahead and order a switch so I can plug these into the normal network; the file server specifically saturates that 1G link a bit too often.
I bought two cables and ran them side by side, in case I broke one and to make my life easier later on. There should only be one way to normally plug in these cables.
 
So I decided, hey, I have two cables, I may as well try the other one. It's a little bit higher, but I doubt it makes that much of a difference?
PS C:\Users\arnemetis\Downloads\iperf-3.1.3-win32> ./iperf3.exe -c 192.168.99.50
Connecting to host 192.168.99.50, port 5201
[ 4] local 192.168.99.168 port 51726 connected to 192.168.99.50 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 308 MBytes 2.58 Gbits/sec
[ 4] 1.00-2.00 sec 316 MBytes 2.65 Gbits/sec
[ 4] 2.00-3.00 sec 320 MBytes 2.69 Gbits/sec
[ 4] 3.00-4.00 sec 319 MBytes 2.67 Gbits/sec
[ 4] 4.00-5.00 sec 318 MBytes 2.67 Gbits/sec
[ 4] 5.00-6.00 sec 314 MBytes 2.64 Gbits/sec
[ 4] 6.00-7.00 sec 317 MBytes 2.66 Gbits/sec
[ 4] 7.00-8.00 sec 319 MBytes 2.68 Gbits/sec
[ 4] 8.00-9.00 sec 321 MBytes 2.70 Gbits/sec
[ 4] 9.00-10.00 sec 307 MBytes 2.58 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 3.09 GBytes 2.65 Gbits/sec sender
[ 4] 0.00-10.00 sec 3.09 GBytes 2.65 Gbits/sec receiver

iperf Done.

Edit: I redid the test with the original 27.6 GB of files, and they copied in 1:51 at 255 MB/s. So given that, I think it's safe to say swapping cables had no real effect.
 
Have you maxed out buffers and offloads? That made a big difference to my experience
 
Try a larger window size (socket buffer), as in:
iperf -c 192.168.99.50 -w 2M
but either way it should be much faster, even with the defaults.
Also, ~10% CPU is almost one core at full load, so that may be something.
Check the network card settings, like Receive Side Scaling (Enabled), large send buffers/offload, etc.
Also SR-IOV: I see I have it disabled now; I think it was enabled at some point, but I don't remember why it's disabled now. Try these.
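If you'd rather check or change these from PowerShell instead of the adapter's property sheet, something along these lines should work on most drivers; the adapter alias and the DisplayName strings are placeholders and vary from driver to driver, so list them first:

# "Ethernet 2" is a placeholder adapter alias; the DisplayName strings differ by driver.
$nic = "Ethernet 2"

Get-NetAdapterRss -Name $nic                     # is Receive Side Scaling enabled?
Enable-NetAdapterRss -Name $nic

Get-NetAdapterAdvancedProperty -Name $nic |      # list every tunable the driver exposes
    Format-Table DisplayName, DisplayValue

# Example only: raise the jumbo frame size (the property name/value format varies by driver).
Set-NetAdapterAdvancedProperty -Name $nic -DisplayName "Jumbo Packet" -DisplayValue "9014"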
 
I first tried a larger window size as instructed, and it made no difference.

Then I decided to monitor my CPU usage more closely. During a series of tests with the server running as the client, my PC (8700K) sees 3.3% CPU use from iperf3.exe, while the server (1700X) sees 10% CPU usage during the test. Reversing roles, with my PC now operating as the server, my PC sees 7.7% CPU use and the server sees 8%. Also of note: run this way, I get iperf scores between 4.74 Gbit and 3.84 Gbit. Switching back and forth which PC operates as the server, I see the same results repeated. So I'm getting slower speeds with my server operating as the iperf server.

Turning off Receive Side Scaling reduced my speed to 3.21 Gbit with my PC as server and 2.47 Gbit with the server as server, so I'll put that back to Enabled.

I couldn't find an exact match, but there's Large Send Offload V2 (IPv4 & v6), which are both enabled, and Receive Buffers are 4096 and Send Buffers are 2048.

Turning off SR-IOV results in no change, so I'll put it back to Enabled.

Have you maxed out buffers and offloads? That made a big difference to my experience

OK, let's give this a try. Receive Buffers are already at their max of 4096; Send Buffers I increased from 2048 to their max of 4096. That made performance worse, bringing it down to 3.73 Gbit with my PC as the server and 2.33 Gbit with my server as the server, so I'm returning the Send Buffers to 2048.


So none of this worked, sadly, although I did learn that it runs faster with my PC operating as the server. I wonder why that is? I suppose I'll keep looking; I appreciate all the help so far!

*Edit* I saw someone else discussing running multiple streams and getting faster results. With the -P 2 flag set while my PC is the server, I get 5.62 Gbit for the sum. With -P 3 I get 6.85 Gbit, and with -P 4 I get 6.08 Gbit, so I'm hitting some bottleneck there. However, when I switch to my server being the server, -P 2 gets me 2.40 Gbit, -P 3 gets me 3.42 Gbit, -P 4 gets me 3.59 Gbit, and -P 5 gets me 3.73 Gbit. Not sure how useful this data is, but I figured I'd throw it on the pile!

*Edit The Second* Well, changing Jumbo Frames from 1514 to 9000 gets me 5.44-6.04 Gbit with my server as the server, and 4.66-6.32 Gbit with my PC as the server! How did I miss this setting?! However, this has not translated to the file transfer via TeraCopy, which is still moving 26.7 GB in 1:48 at 261 MB/s. Maybe it's time to test out this SSD of mine, now that iperf is running better.
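For reference, these are the client-side invocations being compared above (the other machine just runs iperf3.exe -s); the combined run at the end is just a further thing to try, not one of the results above:

./iperf3.exe -c 192.168.99.50 -w 2M        # larger TCP window
./iperf3.exe -c 192.168.99.50 -P 3         # three parallel streams
./iperf3.exe -c 192.168.99.50 -P 3 -w 2M   # both together, untested here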
 
iperf should get you faster speeds than a real copy, even with SSDs (if not NVMe, of course).
Try the Windows built-in drivers for the cards. I'm using them and never installed any third-party driver.
My server machine is way less powerful than yours (or my desktop) and I still easily get to 9.5 Gbps while the server load is about 30% (4-core Phenom II X4 965 at 3.4 GHz). The server machine is used for light purposes and even this CPU is overkill - IPTV, mostly as a NAS with FTP, DLNA, etc.
I never pass the 4 Gbps mark in everyday use anyway, so the CPU load from the network stack is nearly negligible.

So there must be something else standing between you and 9+ Gbps.
Maybe the cable(s) aren't good in some way and some error correction is "eating" into your speeds.
 
Maybe; I'll try swapping drivers soon. Gotta run out for a bit and then I'm probably busy tonight. For now, here are some CrystalDiskMark results for the drives in their current state. I chose 8 GB because that is approximately the size of the test files I was using. The array was much faster when it was empty.
(attachment: CrystalDiskMark results - Areca WD array & Samsung 850 SSD, 09-23-2018)
 
Just checking in because I had a small window to try the Windows drivers (5.1.11548.0, dated 4/10/2016); all I changed was the jumbo frame size, to its max of 9614. 6.02 Gbit with my server as the server, 6.43 Gbit with my PC as the server; the same file transfer again, 27.6 GB, took 1:53 at 249 MB/s. I tried dropping jumbo frames to 9000 and disabled SR-IOV, and it slowed down to 2:05 at 225 MB/s. OK, so I enabled SR-IOV again and it's even slower at 2:15 and 210 MB/s. OK, back to 9614... 2:07 at 221 MB/s. This test isn't really proving anything at this point, I suppose.

Also of note: since I've gone back to the Windows driver, making changes on the server takes a good 30 seconds to close the window after applying them. Disabling and enabling the adapter also takes a long time. A reboot may clean that up, but I can't afford to do that right now.

Honestly, I would be fine with the 5-6 Gbit - if it translated to file-transfer speed, not just iperf tests. Now that I'm getting faster tests I may try the other fiber cable again to see if it's any different.
 
I don't know. Did you get ConnectX-2 or -3?
I tried jumbo frames up to 9000 but there was no difference, so I reduced it back to the defaults.
SR-IOV has something to do with virtual machines in enterprise environments or something like that... I don't remember, but I decided back then to disable it, with no side effects.
I get no such delays when applying changes or disabling/enabling the adapter. I don't remember if you shared what OSes are on the server and desktop. Is there a Linux involved?
Even if you'd be satisfied with a real-world 5-6 Gbit, I would feel challenged to reach 9+ Gbps anyway :) .
 
I'm using Windows 10 Pro on both machines. Yes, I'd very much like to strive for the most speed possible, but if I at least got up to par with my arrays, I might leave it alone.

I'm guessing the delay when applying changes on the server is related to me uninstalling the Mellanox drivers and installing the Windows ones without a reboot. It didn't exhibit this delay before.
 
Well, it's a rule of thumb that Windows gets restarted after anything more than running a program :D . I had implicitly assumed that you restarted the PC after every bit of tinkering with drivers...
That may explain the low iperf speeds as well.

Truth be told, my server had an old Athlon II X3 at 3 GHz, and the only reason I upgraded the CPU was that with it I couldn't go higher than about 7 Gbit :) . The processor was maxed out. I wanted to see what others told me I should be seeing (9-9.5 Gbps) :) . And I saw it. So even the Athlon II was otherwise enough for my server's needs.
 
Yeah, I should have rebooted after switching back to the Windows drivers, but the server is pretty busy and I didn't really have time to do that in the moment. If it's not busy later I'll give it another reboot. I am worried it's because the card is in that last PCI Express x16 slot, which is physically x16 but electrically only PCI Express 2.0 x4. I confirmed it was operating that way with the Mellanox drivers, as there was an info tab in the adapter properties, but the Windows driver doesn't make this as easily available. I found Get-NetAdapterHardwareInfo, which correctly reports a PcieLinkSpeed of 5.0 GT/sec and a PcieLinkWidth of 4.
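For anyone else who wants to check the negotiated link without the Mellanox info tab, the same cmdlet can be used like this; "Ethernet 2" is a placeholder for whatever your adapter is called:

# Shows the negotiated PCIe link for the NIC (a 2.0 x4 slot shows up as 5.0 GT/s, width 4).
Get-NetAdapterHardwareInfo -Name "Ethernet 2" |
    Format-List PcieLinkSpeed, PcieLinkWidth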
 
Mine is on PCIe 2.0 x4 too, the last slot, which is fed by the chipset, and the chipset is inherently 2.0. That's OK. With a 20/24-lane processor, using a second x16 slot on a board that supports SLI/CF will most likely take lanes from the graphics card's 3.0 allocation, which we don't want :) .
PCIe 2.0 x4 provides 16 Gbps theoretically (5 GT/s per lane is about 4 Gbps after 8b/10b encoding, times four lanes), so this is not the issue; the card is designed to work in this kind of setup.
 
OK, did a reboot; the delay in applying changes to the adapter is gone. Still getting the same in iperf: mid-5 Gbit with my server as the server and mid-6 Gbit with my PC as the server. File transfer is the same as ever, 27.6 GB in 1:51 at 253 MB/s. I'll try swapping cables again after work; at least that won't require a reboot.
 
FYI, I have had 10 Gbps working on Win10 in the x16 (x4 electrical) slot on a Ryzen 1700 using the ConnectX-2, so it will work in that config perfectly fine. Note that it's 10 Gbps up and 10 Gbps down, so it can saturate the four 2.0 lanes when in full swing. That said, I never ran into that issue, as while my NVMe could hit 10-gig speeds, my server on the other side couldn't.
 
Once, when I had a mATX mobo, I swapped the graphics card and the 10G card because the board had one x16 slot and two x1 PCIe slots, and with the 10G card running through a "mining" extender it struggled to reach 200 MB/s. Then I put the 10G card in the slot the graphics card had been in, ran the graphics card through the x1 extender, and everything was fine :) . Well, the graphics performance hit was sometimes felt in desktop use :p.
As to saturating the link... if you have more than one drive in both machines, you could start two or three independent transfers from different physical drives on the source to different drives on the target. At one time I was running virtual machines whose files were stored on a dedicated drive on my server/NAS while backups etc. ran to another drive at the same time.
 