10Gb SFP+ card for RAID PC to switch, Sonnet Presto Solo reviews?

Lifespeed

Weaksauce
Joined
May 12, 2013
Messages
75
I would like to connect my PC RAID server to a US-16-XG Ubiquiti 10Gb switch using SFP+. I would prefer a single-port card, ideally PCIe 3.0 x4, so as not to unduly burden my expansion slot requirements. PCIe x8 just isn't needed for a single-port card on PCIe 3.0. But if the best single-port card is PCIe 3.0 x8, I can make it work.

The Sonnet Presto Solo is $150, which seems a good deal. Does anybody here have experience with this card?
 

IdiotInCharge

NVIDIA SHILL
Joined
Jun 13, 2003
Messages
14,712
This is one of the main reasons I used 10Gbase-T instead, as you can get PCIe 3.0 x4 cards for cheaper. SFP+ is great for server boards, but on consumer boards you rarely have a PCIe 2.0+ x8 slot available without interfering with the discrete GPU and so on.
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,581
I'd pick up one of the Mellanox ConnectX-3 single-port cards instead, personally; I think the model is CX311A for the single-port x4 card, and it only costs about $40.

What OS are you using? Windows?
Are you using DAC cable or planning on using fiber and a transceiver?
 

Lifespeed

Weaksauce
Joined
May 12, 2013
Messages
75
I'd pick up one of the Mellanox ConnectX-3 single-port cards instead, personally; I think the model is CX311A for the single-port x4 card, and it only costs about $40.

What OS are you using? Windows?
Are you using DAC cable or planning on using fiber and a transceiver?
I'm going to use these with Windows, with fiber and a transceiver. Where do you buy these cards for $40, or did you mean $140?
 

Lifespeed

Weaksauce
Joined
May 12, 2013
Messages
75
This is one of the main reasons I used 10Gbase-T instead, as you can get PCIe 3.0 x4 cards for cheaper. SFP+ is great for server boards, but on consumer boards you rarely have a PCIe 2.0+ x8 slot available without interfering with the discrete GPU and so on.
I chose an X99 motherboard with sufficient slots for this reason, but 5 years from now it will be nice to replace it with a motherboard with native 10Gb SFP+.
 

IdiotInCharge

NVIDIA SHILL
Joined
Jun 13, 2003
Messages
14,712
I chose an X99 motherboard with sufficient slots for this reason, but 5 years from now it will be nice to replace it with a motherboard with native 10Gb SFP+.

That's quite unlikely to happen. Native 10GbE is a thing now; native SFP+ is likely never to arrive for consumer boards. At the consumer level, including HEDT, where it isn't WiFi, it's going to be CAT6(A) and RJ-45.
 

Lifespeed

Weaksauce
Joined
May 12, 2013
Messages
75
That's quite unlikely to happen. Native 10GbE is a thing now; native SFP+ is likely never to arrive for consumer boards. At the consumer level, including HEDT, where it isn't WiFi, it's going to be CAT6(A) and RJ-45.
10Gbase-T will be the option for most boards, but SFP+ is available now and will be in the future. I just bought a Supermicro FlexATX board with two SFP+ ports; admittedly not a "consumer" board, as it was for a network router upgrade.

I'm actually looking right now at a 4-year-old i7-5820K plugged into an Asrock X99 Extreme 3 motherboard, thinking that perhaps instead of plugging in an SFP+ card, I should plug in a new motherboard with everything else the same, right down to the RAM. Or maybe not . . .
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,581
I'm going to use these with Windows, with fiber and a transceiver. Where do you buy these cards for $40, or did you mean $140?
eBay: Here's a listing that includes a multimode transceiver for $50: https://ebay.us/nQGneJ
One by itself for $35 (best offer; you might even get it for less): https://ebay.us/rQFJzx (I've bought from ESISO before; if you message them they'll usually include whichever bracket you need, full or low profile). Ping them before buying just to make sure.
 

Lifespeed

Weaksauce
Joined
May 12, 2013
Messages
75
Just wanted to update this thread: I've connected two PCs using Mellanox MCX311A-XCAT ConnectX-3 EN SFP+ cards to a US-16-XG Ubiquiti switch uplinked to a pfSense router running on a Supermicro X11SDV-4C-TP8F 10Gb SFP+ Xeon-D motherboard.

It is impressive; I now have true SAN speeds to the RAID array on the PC with the Mellanox card. However, I noticed when I ran an iPerf speed test I was only getting 8Gb/s. Further reading of the Mellanox specifications shows both their PCIe 3.0 x4 and x8 cards list the PCIe 3.0 bus as 8GT/s. Shouldn't I be able to get 10Gb/s on my SFP+ optical network, or are the Mellanox cards robbing me of 2Gb/s? It was a significant upgrade, and my RAID PC can read/write 10Gb/s. I need to buy one more single-port SFP+ card for a 3rd PC to be added soon, and it is looking like Mellanox might be a bottleneck.
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,581
Just wanted to update this thread: I've connected two PCs using Mellanox MCX311A-XCAT ConnectX-3 EN SFP+ cards to a US-16-XG Ubiquiti switch uplinked to a pfSense router running on a Supermicro X11SDV-4C-TP8F 10Gb SFP+ Xeon-D motherboard.

It is impressive; I now have true SAN speeds to the RAID array on the PC with the Mellanox card. However, I noticed when I ran an iPerf speed test I was only getting 8Gb/s. Further reading of the Mellanox specifications shows both their PCIe 3.0 x4 and x8 cards list the PCIe 3.0 bus as 8GT/s. Shouldn't I be able to get 10Gb/s on my SFP+ optical network, or are the Mellanox cards robbing me of 2Gb/s? It was a significant upgrade, and my RAID PC can read/write 10Gb/s. I need to buy one more single-port SFP+ card for a 3rd PC to be added soon, and it is looking like Mellanox might be a bottleneck.
I think that's because PCIe 3.0 is 8.0 GT/s per lane (7.88 GB/s for an x8 link)
https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions

Edit: keep in mind that figure is GB/s (gigabytes), not Gbits, so the bus isn't your limit. I have the same cards and get 9.9Gbit/s in my tests (I don't have a switch in between, though).
What OSes are you using to test between? (Mellanox can sometimes be hindered by the Windows driver installed.)
What kind of array/disk count is it? Because 9.9Gbit/s is equivalent to about 1.15GB/s.
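To make the units concrete, here is a quick sketch of the math. The 8 GT/s per-lane rate and 128b/130b encoding are the published PCIe 3.0 figures; the lane counts are just the two cards mentioned above:

```python
# Rough PCIe 3.0 bandwidth math: the published per-lane rate is
# 8 GT/s with 128b/130b line encoding (PCIe 3.0 spec values).

GT_PER_LANE = 8.0        # gigatransfers/s per PCIe 3.0 lane
ENCODING = 128 / 130     # 128b/130b encoding overhead

def pcie3_gbytes_per_s(lanes: int) -> float:
    """Effective one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    usable_gbits = GT_PER_LANE * ENCODING * lanes  # usable Gbit/s
    return usable_gbits / 8                        # convert bits -> bytes

x4 = pcie3_gbytes_per_s(4)   # ~3.94 GB/s
x8 = pcie3_gbytes_per_s(8)   # ~7.88 GB/s

# A 10GbE link carries at most 10 Gbit/s = 1.25 GB/s of line rate,
# so even the x4 card has ~3x headroom; the "8" in 8 GT/s is a
# transfer rate, not 8 Gbit/s of network throughput.
print(f"x4: {x4:.2f} GB/s, x8: {x8:.2f} GB/s, 10GbE: {10/8:.2f} GB/s")
```

So neither the x4 nor the x8 card should be bus-limited on a single 10GbE port.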
 
Last edited:

Lifespeed

Weaksauce
Joined
May 12, 2013
Messages
75
I think that's because PCIe 3.0 is 8.0 GT/s per lane (7.88 GB/s for an x8 link)
https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions

Edit: keep in mind that figure is GB/s (gigabytes), not Gbits, so the bus isn't your limit. I have the same cards and get 9.9Gbit/s in my tests (I don't have a switch in between, though).
What OSes are you using to test between? (Mellanox can sometimes be hindered by the Windows driver installed.)
What kind of array/disk count is it? Because 9.9Gbit/s is equivalent to about 1.15GB/s.

The Ubiquiti US-16-XG switch has 170Gb of non-blocking bandwidth, so it is unlikely to be a bottleneck. Good point regarding GBytes vs Gbits. I tried re-running iPerf to check, but something was blocking the network connection. It wasn't the Windows 10 firewall; I will have to check the new pfSense router, although I wouldn't expect it to block LAN traffic. I have an Areca ARC-1882ix-16 RAID card with six Western Digital 14TB Ultrastar DC HC530 datacenter hard drives in RAID6, which can reach 1GByte+/s transfers between two SFP+ PCs.

I did install the Mellanox drivers for Windows 10 64-bit. Your experiences seem to indicate full "wire" speed, so I'll try to overcome my new-network difficulties and re-run iPerf to confirm 1.25GByte/s = 10Gbit/s performance. It does sound as though I'm reaching specified speeds. Just wanted to double-check before buying a third card for my son's new gaming PC.
 

Spartacus09

[H]ard|Gawd
Joined
Apr 21, 2018
Messages
1,581
Fair enough. Something to note: with only 6 drives in a RAID 6, you should only expect about 750-800 MB/s real-world (iperf should still show 10Gb, though).
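That ballpark follows from RAID 6 losing two drives' worth of capacity to parity, with sequential reads striping across the remaining data drives. A back-of-the-envelope sketch, where the ~190 MB/s per-drive figure is an assumed sustained rate for large 7200 rpm disks, not a measured number:

```python
# Back-of-the-envelope RAID 6 sequential-throughput estimate.
# Assumes large sequential I/O striped evenly across the data
# drives; the per-drive figure is an assumption for 7200 rpm
# 14TB disks, not a measurement.

PER_DRIVE_MB_S = 190   # assumed sustained sequential MB/s per disk

def raid6_seq_mb_s(total_drives: int) -> int:
    """RAID 6 dedicates two drives' capacity to parity;
    the remaining data drives stripe the payload."""
    data_drives = total_drives - 2
    return data_drives * PER_DRIVE_MB_S

print(raid6_seq_mb_s(6))   # 760 MB/s -- in the 750-800 MB/s ballpark
```

Real-world numbers vary with controller cache, stripe size, and where on the platters the data sits, so treat this as a rough lower bound for outer-track sequential reads.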
 
Last edited: