starting to play with 10gb - Mellanox and need help

I'm getting older and this stuff frustrates me a bit more than it used to, but I still love toying with it..

so this thread is good for archive's sake.. good info here, LOL..

so my post above shows 2.8 FW working, and the flash to 2.9 failing since that one is not HP FW...

I have another card in another machine (to get 2 machines with ethernet going)...

so back to this one first..

there are 2 NICs shown like in post 1, but under System Devices there is another entry... found that out while cruising all over Google!!!!! hah..
driver.png



I uncheck HW Defaults, choose ETH for both, and BOOM... all good.. even the top 2 adapters change from IPoIB to Ethernet Adapters. can post a screenshot later...


so I have a 2nd machine up and running, and if I uncheck HW Defaults... and choose ETH, my machine bluescreens and reboots... doing a FW check it was at 2.7, and I found HP FW 2.8, which is what the good one has. Flashed it.. yup, the flash worked!!!! WOOOOT..
still bluescreens.. CRAP...

but a side note on flashing... these are old cards...
with WinMFT_x64_4_11_0_103.exe installed it gave an unsupported card error... Google once again for the save.. the advice was to install an older version...
installed... WinMFT_x64_3_5_0_16.exe

AND!!!!!!!!!!!

2.7Fw.png
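
(the screenshot is of the card showing 2.7 FW; for anyone else on these old cards, once the right WinMFT version is in, I believe a quick query is the way to confirm the tools actually see the card and what FW/PSID it has - the device name is whatever the tools enumerate, mine shows up as mt26418_pci_cr0:

C:\Program Files\Mellanox\WinMFT>flint -d mt26418_pci_cr0 query

worth doing before you try burning anything.)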




hah.. so still trying to get 2 machines going. I have 3 other cards, but even in the same machine where these seem to be working they won't load drivers, etc., so I may have a few defective cards..

more to come!
 
more progress...

I'm flashing all my cards.. some were at 2.7, but 2 I thought were bad cards.. turns out that one has 2.5... flashed to 2.8, then reboot and boom, working - well, at least in Device Manager.

the other card I'm fighting with.. still working on it though..
I'm wondering if it has even older FW.. I already had to use an older flint tool.. so maybe I need to go even older!!!!!
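
for reference, the actual burn is just flint pointed at the device and the image (the filename below is a placeholder for whichever HP image matches your card):

C:\Program Files\Mellanox\WinMFT>flint -d mt26418_pci_cr0 -i <hp_fw_image>.bin burn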
 
If one of the cards causes a bluescreen when you switch it to Ethernet mode, then it could very well be bad firmware or bad hardware in the NIC.
Another thing to try would be to move the known good NIC, the one that works in Ethernet mode, over to the other computer that is having bluescreen issues. If the known good NIC works there, then you have at least narrowed it down to an issue with the NIC itself and not the system it is in.
 


yup.. got 9 flashed, all to 2.8, and 1 that just will not flash... hah...

so next up is that I'm going to get 2 machines up and running to see if they can see each other.


so another question...
from all of the googling on IB... some people used 3 dual-port cards and went from machine to machine in peer-to-peer mode... will this work with these cards in Ethernet mode too?

I'm going to go to the switch and give it a try... and see what happens.

I know that when I plugged cables from NIC to switch, a green light came on; then I launched opensm and a yellow light came on...

opensm doesn't run in Ethernet mode... it just exits... and I'm not getting any lights on the switch...
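
(makes sense in hindsight: from what I've read, opensm is the InfiniBand subnet manager, so it only has a job when a port's link layer is IB. with the WinOF tools installed, launching it should look roughly like this, ending with a SUBNET UP message once the fabric is configured:

C:\>opensm

in Ethernet mode there is no IB subnet to manage, so it exits right away.)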
 
woo hooo... it shows connected at 10Gb from NIC to NIC... sweet...

now I have to learn how to add this in for my VMs so I can copy files from my physical Win10 box to a VM ....


10gb.png
 
The switch is an InfiniBand switch, so it won't connect when the cards are in Ethernet mode. Mellanox brand switches may be able to work in InfiniBand and Ethernet mode at the same time, not sure. But yours isn't a Mellanox brand, so it won't have Mellanox's unique "VPI" hardware feature where the chip supports both modes. You can do direct NIC to NIC like you found out. You should be able to have a couple cards in your VM server machine and connect your other desktop systems to the server. Then you can transfer files directly from those computers to the server.

If you enable packet forwarding on the VM server and run it like a router (or make a VM router, like a pfSense VM?) you could even make the server act like a "switch" and pass traffic between all your other systems and the VMs - basically a software switch type setup. Depending on your CPU, it could very well be fast enough to handle 10gb switching duties. I have a 7600K at 5GHz and it is fast enough for 80gb switching duties, so I imagine just about any modern CPU should handle 10gb just fine.
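
For reference, on a Windows server box the forwarding toggle is per-interface; something like this in an admin PowerShell should do it (the interface names here are just examples):

Set-NetIPInterface -InterfaceAlias "Ethernet 2" -Forwarding Enabled
Set-NetIPInterface -InterfaceAlias "Ethernet 3" -Forwarding Enabled

On a Linux VM it would be sysctl -w net.ipv4.ip_forward=1 instead. You would still need routes or a bridge on top of that to really act like a switch, so treat this as a rough sketch.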
 
so I'm still building a vm to set up and copy files between...

but why can't I use InfiniBand then instead of TCP/IP? or if I can.. just change the cards to IB and launch OpenSM and then .... ?

OH.. wait.. my ESXi 6.7 won't use this card as an IB adapter.. so I should be happy it's working as a 10gb IP adapter...
 
ok up and running... how is this?

so I read somewhere that FW 2.9.1000 and under gets hit bigtime on performance.. I'll have to figure out how to do a custom FW... and get to 2.9.1200 or up... if I can...


10gbTransfer.png
 

That's basically just gigabit-level performance. Could be the hard drives you are going to and from, or it could be the NICs themselves. Try setting up a small RAMdisk on each side you are testing, put a 4GB file in one of the disks, and transfer it into the RAMdisk of the other PC/VM. That will eliminate your storage as a bottleneck and test "only" the NIC transfer speed.

If you go RAMdisk to RAMdisk over these 10gb NICs, I would expect you to get between 1.0-1.2 gigabytes per second.
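
If you use something like ImDisk for the RAMdisk, making a 6GB R: drive is a one-liner from an admin prompt (size and drive letter are just examples - make it a bit bigger than your test file):

imdisk -a -s 6G -m R: -p "/fs:ntfs /q /y"

and imdisk -d -m R: removes it when you are done.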
 

thx for the help. enjoy your xmas/new year. I'm off work now until 1/2/19 but have some cards to play with at home.

I got 1 machine with 2 cards set to Ethernet...
from there, nothing else.. hah..

I have 2 machines (Gigabyte mobo and Lenovo, both 6th-gen i7s) that see the card just fine but don't have the protocol settings tab, so I can't change to Ethernet. I found the command for ESXi but am trying to see if it is the same for Windows, as I can't get it to go..

C:\Program Files\Mellanox\WinMFT>mlxconfig.exe -d mt26418_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
FATAL - Can't find device id.
-E- Failed to identify the device
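
(figured out later that the name after -d has to match what the tools actually enumerate on that box; I believe mst status lists the valid names, something like this - the output below is just an example of the format:

C:\Program Files\Mellanox\WinMFT>mst status
MST devices:
------------
mt26418_pci_cr0
mt26418_pciconf0
)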

I'm not gonna toy too much with it. the other thing is I have an ESXi machine but it didn't pick up the card... so I'll have to play with that..



for my speed testing... yes, it was spinny drives (call it that since it's that or SSD or NVMe… hah)..
so spinny drive in a physical computer, then through the 10g to a VM which is on an SSD… so yes, I think a RAMdisk would help to test... or switching to an SSD in the physical box as well...

but more to come but not as quick as it has been...
 
my OCD kicked in again.. hah..

so I had to think about the issue with flashing FW… I had to use an older version of the FW tools.. and that is installed at work.
I installed 3.5 and boom...

had to do
C:\Program Files\Mellanox\WinMFT>mlxconfig -h
Usage:
    mlxconfig [-d | --device] <mst device> OPTION
Help:
-----
    -d | --device <mst device>      Mst device path
    -g | --gen_cfg <cfg file path>  Generate template configuration file
Options
-------
    -c | --cfg <cfg file path>      Configuration file to apply
    -q | --query                    Query current configuration
    -r | --restore                  Restore defaults
    -v | --version                  Print tool version and exit
       | --verbose                  Print more run info


came up with:
C:\Program Files\Mellanox\WinMFT>mlxconfig -d mt26418_pciconf0 -g .\config
-I- Printing to .\config
-I- Template configuration file (.\config) was generated successfully


edit file to:
[PortProtocolConfig]
Port1Protocol = ETH ; Allowed values [ VPI, IB, ETH ]
Port2Protocol = ETH ; Allowed values [ VPI, IB, ETH ]

C:\Program Files\Mellanox\WinMFT>mlxconfig -d mt26418_pciconf0 -c config
-I- Parsing configuration file (config)...
-I- Configuraion file is valid
-I- Writing configuration to flash...
-I- Configuration was written to flash successfully



--------------------
GRRR, DIDN'T TAKE...

C:\WINDOWS\system32>ibstat
CA 'ibv_device0'
        CA type:
        Number of ports: 2
        Firmware version: 2.8.0
        Hardware version: 0xa0
        Node GUID: 0x001635ffffbf0b4c
        System image GUID: 0x001635ffffbf0b4f
        Port 1:
                State: Down
                Physical state: Polling
                Rate: 10
                Base lid: 0
                LMC: 0
                SM lid: 0
                Capability mask: 0x90580000
                Port GUID: 0x001635ffffbf0b4d
                Link layer: IB
                Transport: IB
        Port 2:
                State: Down
                Physical state: Polling
                Rate: 10
                Base lid: 0
                LMC: 0
                SM lid: 0
                Capability mask: 0x90580000
                Port GUID: 0x001635ffffbf0b4e
                Link layer: IB
                Transport: IB
 
I guess these are the steps to poke around old hardware...

The machine that was bluescreening still does it with any card when trying to set to ETH.. I'm thinking OS issue. The PC never shuts down either; choose shutdown and it just reboots.
 



I went to check on my computer right now to see the tab where you change port protocol from IB to ETH, but my driver page no longer shows the right tab either! lol. I guess Windows must have decided to update the driver on its own (I hate Win10) and this Microsoft version of the driver doesn't give the necessary options. Thankfully mine are already in Ethernet mode so I don't have issues. But for your Gigabyte and Lenovo systems, install a real Mellanox driver and you will (or should?) get the option to change it. I just did a driver re-install on my system and confirmed that, on my end at least, I now have the right Port Protocol tab showing again. I used the same WinOF 5.35 driver I used when I first installed all this stuff a year or two ago. Remember that you don't open the driver properties under the Network Adapters, but rather the "Mellanox ConnectX-# VPI Network Adapter" under System Devices:


OCLgmNK.png



I'm not sure if you can use the WinOF 5.35 driver or not, as I don't see ConnectX-2 listed on their site at all anymore. You may be able to use an older version; I recall when I was first setting these up that they used to list X-2 cards and had driver info for them. Seems they phased them off the site in the last year or two :( But I would say this is probably your issue with the ease of getting these set up - you just need the right Mellanox-branded driver with the proper options to select.







so here is a stupid question...

you can do IP over InfiniBand... I guess I should try that? I would get to use the switch too.. but for now I could try just machine to machine.. and run opensm...

thoughts?
at least I know I've got good FW in the machines and good cards...


InfiniBand can also do a direct connection from "NIC" to "NIC", just like Ethernet. In InfiniBand they are called host bus adapters though, not NICs. Anyway, if you connect two computers directly over IB and run your subnet manager they will connect to each other. However, you can't just straight fire up a shared folder and map it in Windows; you will want to set up your iSCSI SAN software on the computer sharing the drive data and configure your iSCSI drive. Then on the other computer you open iSCSI Initiator under your Administrative Tools and configure your target to the SAN you just set up. Then you can share the drive data between the computers. This is obviously way more work, and you really shouldn't even bother with it until you actually get the cards working and tested between all your systems first. You don't want to have to figure out SAN stuff and NIC stuff at the same time and not know which is causing the issues.
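
If you ever do go that route on Windows Server, the built-in iSCSI Target Server role can be driven from PowerShell; a rough sketch (the names, paths, and IPs below are all made up for the example):

# on the target (sharing) box, after installing the iSCSI Target Server feature
New-IscsiVirtualDisk -Path C:\iSCSI\share1.vhdx -SizeBytes 100GB
New-IscsiServerTarget -TargetName "ibshare" -InitiatorIds "IPAddress:172.16.1.2"
Add-IscsiVirtualDiskTargetMapping -TargetName "ibshare" -Path C:\iSCSI\share1.vhdx

# on the initiator (client) box
New-IscsiTargetPortal -TargetPortalAddress 172.16.1.1
Get-IscsiTarget | Connect-IscsiTarget

After that the shared disk shows up in Disk Management on the initiator like a local drive.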

Remote data storage
You can share physical or virtual devices from a target (host/server) to an initiator (guest/client) system over an IB network, using iSCSI, iSCSI with iSER, or SRP. These methods differ from traditional file sharing (e.g. Samba or NFS) because the initiator system views the shared device as its own block-level device, rather than a traditionally mounted network shared folder, e.g. fdisk /dev/block_device_id, mkfs.btrfs /dev/block_device_id_with_partition_number

The disadvantage is only one system can use each shared device at a time; trying to mount a shared device on the target or another initiator system will fail (an initiator system can certainly run traditional file sharing on top).

The advantages are faster bandwidth, more control, and even having an initiator's root filesystem being physically located remotely (remote booting).





[shameless brag]
I use 40gb adapters and bond the ports on each NIC together for 80gb connections. The best I can get when sharing over the network between RAMdisks with that connection is this:

nPU7YBU.png


About the speed of a high-end M.2 drive that you install in your computer, only this is a network share. The way I do it is to use a caching program that writes directly into RAM on my server and then writes to disk when it can. So my server actually does get pretty good performance like that overall when transferring files to it. These are MCX354A-FCBT model cards, which are about $120 on eBay.
[/shameless brag]

On your system you should be able to get at least 1GB/s network speeds with your cards.
 
I originally did 5.50 on my work machine and it worked great... these aren't showing the tab... I installed 5.35 and still no....

So I'm not sure why, but I also know these settings are getting set in Windows and not on the card, because as I was flashing FW... I set 1 and all the other cards showed set to ETH when I checked them...


So with that, I think I'll use WiseScript to snapshot the machine.. set the settings and snapshot again, and see what truly gets set. But that won't happen for some time... like 1/2. hah
 


You probably can't answer this now that you are off work, but I was wondering: if you open the driver properties under Network Adapters (not the System Devices entry where you look for port protocol) on your work computer, do you see a bunch of tabs like these:

82OJ5mn.png



Do you see that on any of your home systems too? Or do all your home systems get the basic 5 tabs of a generic driver with no option to change things? I am wondering because it would be interesting if you had all these options on the network adapter, but still for some reason didn't have the port protocol selection on the System Devices one.
 
well at home I see weird stuff... even on working machines...

the guys are still at work and one is going to plug a NIC cable in for me and get me the IP of the machine so I can remotely get pics... I didn't do that before I left because I didn't expect to have these issues at home... go figure..


so I installed the 5.35 driver like you recommended, but when I check the driver tab it still says 5.50.. this is on the Gigabyte and Lenovo

so my Server 2016 box has 2 of them and here is what's there...
this is both Device Manager and network properties.. you can see mine are different than yours...

mellanox2.png




now on to my working machine, the one that keeps bluescreening.....

it has the card, tried both the 5.35 and 5.50 drivers, but as said.. the driver tab keeps saying 5.50, and I can uncheck HW default and select IB for both and I'm good. But if I dare hit ETH on one or both: BSOD immediately!!!!!

mellanox3.png
 
I think the Server 2016 is working properly. Both NICs on the Network Connections page show they are in Ethernet mode, and you have the page to select from IB to Eth in the System Devices page. The one you have open right now does show a generic "no IP" though, which could just be because it is set to DHCP and you don't have DHCP available to it. My thought on why you have fewer tabs in the actual NIC properties is that since you have an X-2 card, it probably just has fewer features, so it has fewer tabs. So I think the server end of it is good.



On your other working machine, the one that bluescreens when you select Ethernet - you have tried multiple different cards in that machine and they all bluescreen, right?
 

yes...

and I know this will work with Win10, as I got into my machine at work..

so yes.. for some reason my work computer is able to load all the drivers, settings, etc...


but the home PCs (Gigabyte and Lenovo) are not getting the extra tabs...

the other machine, the one that bluescreens, also doesn't have all the tabs in the network connection properties..

the work one that is working looks like this...


so I don't know why the others won't load fully, as I have followed the same exact steps on each... but until I get those tabs, I feel I will have the same issue. hah... not sure what the differences are between them... all are Win10 Pro 1809!!!!

also, I don't have ConnectX-2 cards.. I have just plain ConnectX - that is what I find when I download the firmware...
Firmware for use with HP IB 4X DDR Conn-X PCI-e G2 Dual Port HCA




mellanox4.png
 
so I figured it out.... hah..

more so the issues with my main computer, the one I've been trying so hard to get running... which is the one I'm typing this on now...

so...

here we go..

Lenovo / Gigabyte - fixed...
what I had to do was uninstall all the Mellanox stuff...
reboot
uninstall the card in Device Manager; it asked me if I wanted to delete the drivers, said yes.

so now I'm back to just Windows drivers, etc...
super simple...

1. install Mellanox 5.50 drivers
2. go into Device Manager and, under System Devices, double-click the Mellanox card and update the driver
3. choose the driver in C:\Program Files\Mellanox\MLNX_VPI\HW\mlx4_bus
4. verify the driver now says Mellanox as Driver Provider and not Microsoft
5. change both ports to ETH
6. verify that the cards under Network Adapters both say Ethernet Adapter now instead of IPoIB
7. go in and make sure the driver still says Mellanox as Driver Provider and not Microsoft
8. if it is Microsoft, use this one: C:\Program Files\Mellanox\MLNX_VPI\ETH
9. boom, up and running (quick sanity check below)
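
as a quick sanity check after step 6, PowerShell should show the Mellanox ports as plain Ethernet now; something like this (the name match is just an example):

Get-NetAdapter | Where-Object InterfaceDescription -like "*Mellanox*" | Format-Table Name, InterfaceDescription, LinkSpeed

if the InterfaceDescription still says IPoIB instead of Ethernet Adapter, the protocol change didn't stick.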


now on my PC that keeps BSODing...
I was able to do an old restore point and it worked; albeit I have stuff missing, since at the start of all this testing I stupidly smoked my GTX 750 Ti (don't want to get into it now.. hah)...
so I'm missing drivers for the new card, etc.. and not sure what else.
I know this for a fact, as I did a restore, went through the steps above, and got it working here. Then I restored back to current and get the BSOD, so I have a corruption somewhere; I'll do the old restore point again and start from there.



all in all...
Server 2016 has 4 ports now
Gigabyte has 2
Lenovo has 2
Dell eventually will have 2

so I get it... I can connect Gig/Len/Dell to the server then, and I have a second Lenovo that I'm going to work on in a bit.

the ESXi in this machine didn't detect the card like it did in my work machine. totally different machine types, so I'll have to play with that maybe tomorrow...


my end goal...
ESXi - connect to 2k16 for ISO copies
ESXi - connect to the Lenovo, which will be running FreeNAS and iSCSI with VMs on it
Gigabyte - not sure of its use yet
Lenovo - is my Plex box, so connect to 2k16
Dell - is my gamer - got a new video card coming for Xmas from the wife so I have to wait. but if that card won't work in the Dell, it will go in the Gigabyte and that will be my new gamer


so this is all for testing and playing... crappy parts in my lab, but it all works... I've got friends that throw stuff out and give it to me, and their jobs let them get property passes and take stuff home, so they hook me up...






I appreciate all of your time and ears (well, eyes) and thoughts on this, as it seems it is going to work.

now one last side note/question.... what switch could I go with instead of doing peer to peer... you mentioned Mellanox maybe? I'll have to dig a bit..
 
so found these
https://www.ebay.com/itm/Voltaire-I...=item2846711b76:g:DF8AAOSwFSxaCzMr:rk:20:pf:0
and
https://www.ebay.com/itm/NEW-Voltai...ountable-1U-w-Mounting-Brackets-/351453769492

but I'm not sure they will do Ethernet... gonna restore my machine and work on that one for now...


side note - I want to run a switch, as all but 1 machine will be next to each other. my gaming rig is too far away for 1 cable (unless I measure and can get a longer cable)...
but I could mount the switch to the ceiling in the middle.. run my computer to it, then it to the other machines, and be in business... hah..
 
Voltaire is basically super-old Mellanox; Mellanox bought them something like 8 years ago. I actually have a really nice Voltaire InfiniBand switch that does 40gb InfiniBand and includes an IB/Eth gateway for connecting IB networks to Ethernet networks (a Voltaire 4036E).

Those Voltaire switches you linked won't really work for you in this setup, since you are connecting everything over Ethernet right now and those are both InfiniBand-only switches. They also do not have SFP+ ports on them, so you would need new cables to use them anyway, even if you were connecting over InfiniBand.
Switches will be hard to find, as even cheap used ones are a few hundred dollars at least. Mellanox, HP, and Dell all use Mellanox stuff, so anything from those brands will work with the equipment that you have. Cisco, Juniper, etc. will not work with your existing cables, as they require differently keyed SFP transceivers.

What you could do to connect everything now, without buying a switch, is this:

Stick two cards (for a total of 4 ports) into the ESXi server/host. Then connect Server16 directly to one port on ESXi and to one port on the Lenovo, so the Lenovo has both a Server16 and an ESXi connection. Then the Gigabyte and Dell also connect to ESXi.
Give each port on ESXi a different IP, for example: 172.16.0.1, 172.16.0.2, 172.16.0.3, and 172.16.0.4
Server 16 could have IP 172.16.0.5 and 172.16.1.1
Lenovo could have 172.16.0.6 and 172.16.1.2
Gigabyte could have 172.16.0.7
Dell could have 172.16.0.8

To share files, add a connection where needed to the ESXi port that the computer you are on connects to. For sharing files between Server16 and the Lenovo you would connect over the 172.16.1.x network for a direct connection there. That way you know the xx.0.x connections are ESXi connections and xx.1.x are direct server connections. Keeps things organized decently well. With that, you would have everything talking to the ESXi host over 10gb, and the Server16 and Lenovo NAS/Plex talking to each other directly over 10gb too.
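
On the Windows boxes, assigning those statics is one line per port from an admin PowerShell (the interface aliases here are just examples):

New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 172.16.0.5 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Ethernet 3" -IPAddress 172.16.1.1 -PrefixLength 24

No default gateway needed, since these are all point-to-point links on their own little subnets.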

As for cable lengths, on eBay you should be able to find a Mellanox-compatible SFP cable in the length you need if you look hard enough. I know I got 75 meter 40 gigabit QSFP cables for $50 on eBay.
Actually, I just looked a couple up. These should work for you, as they are HP or Dell brand, which use Mellanox equipment:
https://www.ebay.com/itm/HP-J9286B-...h=item1a283d9c7d:g:jGcAAOSw4A5Yq1l9:rk:7:pf:0
https://www.ebay.com/itm/30m-H3C-SF...=item2cdf11e41b:g:w3AAAOSw1m9aolww:rk:10:pf:0


I can't remember off the top of my head, but I know I have seen cheap places where you can buy a cable of the length you need with the transceiver coding you require directly. 10GTek brand maybe?

EDIT: there we go:
https://www.sfpcables.com/sfp-aoc-20m/?source=10gtekProductPage
https://www.sfpcables.com/sfp-aoc-50m/?source=10gtekProductPage


Alternatively, you could buy these to go into each NIC:
https://www.sfpcables.com/10gsfp-transceiver-axs85-192-m3

and then whatever length cables with LC connectors you need:
https://www.sfpcables.com/lc-to-lc-multimode-duplex-om3-10gb-50-125
or upgrade to OM4 cables if you plan to keep these and want to use them eventually for 100gigabit stuff:
https://www.sfpcables.com/om4-lc-to...x-50-125-lszh-for-10gb-40gb-100gb-application





Oh, also: another way to connect everything without a hardware switch would be to spin up a pfSense VM on the ESXi host and have all the 10gigabit ports assigned to it. Set up a bridge across all of them and the ESXi box will act as a real switch. Since this is only 10gb, performance should be basically the same in a home environment as a hardware switch anyway.
 
SO, I just remembered that my idea of a pfSense VM won't work for you. pfSense rips out driver support for things they don't want you to be able to use, so they can make you buy their partnered equipment. This includes ripping out all the Mellanox drivers. Unfortunately you cannot even compile your own drivers and install them, as Mellanox does not make new driver source code for the newer versions of FreeBSD that pfSense runs on, and using the driver source code designed for older FreeBSD will not work. Trust me, I have tried it. If you really want a pfSense VM to act as both a switch and a router for your network, you would have to buy some new Chelsio brand 10gb SFP+ NICs and use those instead.

You could however use an OPNsense VM, which broke off from pfSense a long time ago. OPNsense does not rip out drivers and does work with Mellanox cards, both the Ethernet and InfiniBand models. Untangle probably won't work, as they have very bad driver support. I also know that IPFire will not work either; they don't have the right Linux drivers for Mellanox, and when I asked to fund such a feature they told me they didn't have time.
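
Under the hood, OPNsense bridging is just FreeBSD if_bridge; from a shell it would look roughly like this (mlxen0/mlxen1 is what the FreeBSD mlx4en driver typically names the ports, but check yours):

# create a bridge and add both Mellanox ports to it
ifconfig bridge0 create
ifconfig bridge0 addm mlxen0 addm mlxen1 up

In the OPNsense GUI the equivalent lives under Interfaces > Other Types > Bridge.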
 
Thx for all the info. For now my 2k16 will be the main machine...

So I have 2 Win10 boxes connected to it, 1 to each NIC, just to test.

Installed ImDisk.. gonna test now, then look at your benchmark pic to see what software I need for that too hah...


Oh, and my BSOD machine.. had to reinstall and it's now working, but so many apps and games need to be reinstalled...
 
something weird I've seen now... and I'm not sure why, but I can only get lights when I use the top ports. if I use the bottom port on any card, I get no connection... weird...


ok, so I tested via the 10gb and came up with this...

ramdrive.PNG



then disconnected the cable, mapped through the 1gb, and got this..

1gb.PNG


these numbers just seem slow...


so from what you see here.. do things look right? is my 1gb not as fast as it should be?


what is weird is that copying a 9GB file over the 1gb and over the 10gb seemed to be just as fast...
 

1gb networking maxes out right around 115MB/s, since 115MB * 8 = 920 megabit. Large sustained transfers do eke out a few more MB, as you see in your pic, because they incur a little less packet overhead, but overall with Ethernet and TCP/IP overhead you end up between 920-950 megabit per second on a standard 1gb Ethernet connection.

The 4KB test with queue depth of 1 is slow because it has maximum packet overhead compared to the rest. It is not only sending less data, but the majority of each packet is the normal overhead and then null space.


Your 10gb numbers look decent. 889MB/s is 7.1 gigabit per second. It's possible these first-gen cards just couldn't quite push the full 10gb, or you could just be hitting file system issues causing a drop. All the 4KB tests again incur a lot of additional overhead from their very small size. The larger queue depths are pushing past the 1gb/s barrier by a good margin though.


As for the files looking like they transfer at the same speed, that is most likely Windows playing tricks on you. Microsoft doesn't tell you this, but they cache network transfers of large files in RAM. So if you open Task Manager and watch your RAM usage, then start a large transfer, you'll see the memory climb up. The transfer will look like it finishes, and then your RAM usage will slowly crawl back down to normal. That is when you know the file transfer is actually done, even though it looked like it finished a whole minute earlier. To me it seems very dangerous that MS does this without the user knowing or having control over the behavior; a power loss could lead to major data corruption. The 10gb network will just help ensure that small chance of data corruption is less likely to happen, since the real transfer goes faster.
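
Another way to take storage and SMB out of the picture entirely is a raw TCP test with something like iperf3 (the IP and stream count here are just examples):

iperf3 -s                          (on one box)
iperf3 -c 172.16.0.5 -P 4 -t 30    (on the other)

That gives you NIC-to-NIC throughput by itself, so you can tell whether a slow file copy is the network or the disks.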
 
thx for all that..

so all in all I'm good then! hah..

just don't know why I can't use both ports on the cards though.. that kinda sucks... both show up in Device Manager on both machines.. I plug a cable into them and no lights.. says cable unplugged... weird.. but oh well..
 

My only guess is that maybe there is a difference between single and dual port firmware, and the firmware is causing the second port to never activate. I would think the second one wouldn't show up in Windows if that were the case though...
 
so I'm back at it.. back at work and thought I'd get to the next step. Originally the Mellanox site stated that ESXi 6.7 does not support this card, but it seems that it does, as I did get it showing in ESXi - but as Ethernet and not IB, which is fine with me..

next up... how do you set an IP address up? I just can't figure that out... the card is seen, etc... I'll post a new section asking, but still... thx... and happy new year!
 


I'm not really up on ESXi stuff, but the web GUI section at the bottom of this page may help:
https://thebackroomtech.com/2017/09/29/configure-vmware-esxi-static-ip-address/
 
yeah, that works for setting up my 1gb NIC that is on the network. I didn't want to add my 10g NIC to that management network. I'll have to google how to make a separate vMotion-style network and follow those instructions... something like the sketch below, maybe...
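
from a quick look, the esxcli route seems to be a separate vSwitch and VMkernel port for the 10Gb NIC, roughly like this (the vSwitch/portgroup names and the vmnic number are just my guesses for the example):

esxcli network vswitch standard add --vswitch-name=vSwitch10g
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch10g
esxcli network vswitch standard portgroup add --portgroup-name=10gNet --vswitch-name=vSwitch10g
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=10gNet
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.0.1 --netmask=255.255.255.0 --type=static

that would keep it off the management network entirely...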

happy new year bud.. and I'm back at it.. LOL


also another side note on the firmware... so HP only has 2.8 as the latest, and on the Mellanox site they have a 2.9, but for the MHGH29-XTC, and I found a PDF that showed the HP part number and that one together.. so I said screw it, I've got enough cards, and forced the firmware... she is working on 2.9.1000 now but still no bottom ports.. drats.. lol


3rd one down under HCA
http://www.mellanox.com/pdf/products/oem/HP_Reference_Guide.pdf

led me to Mellanox
used fw-25408-2_9_1000-MHGH29-XTC_A1.bin


2_9_1000fw.PNG


Dual10gb.PNG




so I put 2 cards in one machine and connected the cable.. boom... hah... just did it to see if it works, and no custom firmware needed...... and I needed to test 2 in this machine, as I'm going to wipe Win10 and put FreeNAS on it... and connect 2 machines to it.. one on each NIC's top port...
 
update...

I'm heading to lunch so I can't really test right now, BUT

the ProCurve is in.. WOOT..
I can ping between 2 machines!!!!!!!! SHAWEET!!!! forget the bottom ports now.. I've got 6 ports on the switch and 5 machines to use!!!!
 
So now the new dilemma.. a 10gb switch and my drives can't handle it lol...
I've got some reading and understanding to do...
 
still screwin' around. it seems my HP 2.8 firmware in an i7 is copying files at 700+MB/s, and the machine I'm on now.. a Xeon… forced to 2.9.1000, and I'm only getting 450MB/s...
gonna go back to the older FW...

but I keep forgetting this, so for the record:

flint -d mt26418_pci_cr0 -i fw-25408-2_9_1000-MHGH29-XTC_A1.bin -allow_psid_change burn
 
so I have plenty of the old ConnectX cards... but found out there are ConnectX-2 cards that have CX4 ports... and I gotta play.. so got one coming..
https://www.ebay.com/itm/202290529391

Mellanox Network Adapter Cards | MHGH29B-XTR

in post 72 I put a link to a PDF; it's for HP but gives the names and part numbers and lists the Mellanox equivalents... so I will hopefully be good.
HP IB 4X DDR CX-2 PCI-e G2 Dual Port HCA 592521-B21 593413-001 MHGH29B-XTR
 
Made my geek heart jump for a moment looking at used prices on 40Gbps Infiniband cards...


[and then I considered cabling and switch pricing, and switch noise...]
cabling is dirt cheap, no worse than Ethernet cable. Switch noise is bad though; I just keep mine in the closet to keep it away.
InfiniBand is a pain in the butt though; better to just get 40gb Ethernet, which most of the Mellanox InfiniBand model NICs can already do. The switch is the only issue.
 

hah... 10gb is just fine for me..

so for me to do this on Ethernet RJ45..
$100ish per card plus a switch.. instead I'm $67 in on 6 machines at 10gb with CX4 ports!!! LOL... the switch is fairly loud but not bad, as it's in my utility room in the basement next to my heater...
but I need to get a serial cable to program it and see if I can turn the fans down...
 
I'm gonna keep piling in here.. HAH..

so I'm lost on this..
I got my ConnectX-2 VPI card in and figured out how to get 2.10 installed. Figured out what file had the files I needed and custom flashed...
The weird part is Windows 10 still sees it as a ConnectX-1 card??? thoughts?
still getting 450MB/s with this card too, so it has to be the system...

upload_2019-2-7_21-20-1.png
 