
Looking for >gigabit between 2 PCs; I'm right at the 10m limit of SFP+ DAC.

510MB/s is what a SATA SSD should be doing... and some NVMe SSDs stop at around 500MB/s writes too, like my OG WD Black NVMe.

Of course it reads at >1500MB/s.
Pretty sure at this point the 510 is the limit of my array's read capabilities (it was 3x that when it was empty lol)
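For perspective, 510MB/sec is nowhere near the 10GbE ceiling, but it's right around what SATA III (or a partly filled spinner array) tops out at. Rough nominal numbers only, sketched in Python, not measurements:

```python
# Nominal ceilings, not measurements.
line_rate_gbps = 10                         # 10GbE line rate
raw_link_mb_s = line_rate_gbps * 1000 / 8   # ~1250 MB/s before protocol overhead
sata3_mb_s = 6 * 1000 / 8 * 0.8             # 6Gbps SATA III minus 8b/10b encoding ≈ 600 MB/s bus
                                            # (real drives sit around 500-560 MB/s)

print(f"10GbE raw ceiling : ~{raw_link_mb_s:.0f} MB/s")
print(f"SATA III ceiling  : ~{sata3_mb_s:.0f} MB/s")
print(f"510 MB/s uses only {510 / raw_link_mb_s:.0%} of the link")
```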
 

Inner tracks of spinners?

Quite likely, and that's best-case. I have a pair of SATA SSDs that will be caching for my duty array - whenever I can figure out FreeNAS Samba permissions.

[which I'm using because Server 2016 won't let me set up Storage Spaces the way I'd like, for whatever reason]
 
Copying an 11gb file from my ssd to ramdisk went at 461MB/sec.
Copying that same file back from ramdisk to ssd went at 399MB/sec.
Copying that file from the ssd to the array via network went at 462MB/sec (same as copying to ramdisk, so probably the limit of my ssd's reads.)
Copying that file back across the network from array to ssd went at 277MB/sec.

Yet above I showed that the network can handle transferring to my machine at 510MB/sec, going from array to ramdisk.

So the ssd can be written to faster than that, as the ramdisk-to-ssd copy shows. For some reason, writing to the ssd over the network is limited to the mid-to-high 200s. I can't imagine why that is.
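If anyone wants to repeat these copies from a script instead of dragging files around, here's a rough Python sketch of the same four transfers; the paths are placeholders standing in for the ssd, the ramdisk, and the array's share:

```python
# Rough sketch: time a single large-file copy and report MB/s.
# The paths below are placeholders for the local ssd, the ramdisk, and the array's SMB share.
import os
import shutil
import time

def timed_copy(src, dst):
    size_mb = os.path.getsize(src) / 2**20
    start = time.perf_counter()
    shutil.copyfile(src, dst)          # plain buffered copy through the OS cache
    secs = time.perf_counter() - start
    print(f"{src} -> {dst}: {size_mb / secs:.0f} MB/s")

timed_copy(r"C:\test\big.bin", r"R:\big.bin")                  # ssd -> ramdisk
timed_copy(r"R:\big.bin", r"C:\test\big_back.bin")             # ramdisk -> ssd
timed_copy(r"C:\test\big.bin", r"\\server\array\big.bin")      # ssd -> array over the network
timed_copy(r"\\server\array\big.bin", r"C:\test\big_net.bin")  # array -> ssd over the network
```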
 
I dunno how the raid card is allocating the data, but 510 is quite fine to me for the array. It's my pc's ssd's slow performance that I'm trying to figure out.
 
OK, I have confirmed that my ssd is the limiting factor. I remembered that Asus has a ramdisk utility, so I fired it up and made a 12gb disk, then copied over an 8gb file. It copied at 510MB/sec. So the network is capable of fast file transfers (as is my array.) My ssd is not. I think that wraps things up? Thanks a lot for all the help!
Not sure I understand this.
When copying from and to the same drive, it's obvious that the total bandwidth of about 500-550MB/s gets split between the read and the simultaneous write on the same drive. When you copy from an SSD to an SSD across the network, you should get 500+MB/s.

P.S. Another thing - is your array capable of more than 500MB/s write? Because when you copied from ramdisk to the server array (across the network) you got the same 460MB/s, so this tells us your array is the limiting factor here (if the network can do 9Gbps).

Also keep in mind the limitations of the shell (Explorer) file copy, which is flawed when dealing with huge files over SMB. How do you copy files - using Windows Explorer and shared folders/UNC paths?
But if you use TeraCopy, I guess it uses unbuffered copy, which is the better way.
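If you want to take Explorer's buffering out of the picture without TeraCopy, a crude approximation is to copy with big explicit blocks. True unbuffered I/O on Windows needs FILE_FLAG_NO_BUFFERING through the Win32 API, which this plain-Python sketch doesn't do; it just reads/writes in large sequential chunks:

```python
# Crude approximation of a "less buffered" copy: large sequential reads/writes.
# Not true unbuffered I/O - that would need FILE_FLAG_NO_BUFFERING via the Win32 API.
import time

def big_block_copy(src, dst, block=8 * 2**20):   # 8 MiB per read/write
    copied = 0
    start = time.perf_counter()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(block)
            if not chunk:
                break
            fout.write(chunk)
            copied += len(chunk)
    secs = time.perf_counter() - start
    print(f"{copied / 2**20 / secs:.0f} MB/s")

# Hypothetical source file and SMB destination:
big_block_copy(r"C:\test\big.bin", r"\\server\array\big.bin")
```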
 
Sorry, you're right - but see the better analysis I did above, which uses the ssd & ramdisk along with the network. It's specifically going from array to ssd that's slow; even writing from ssd to array is fast.
 
I know it's ridiculous, but I got a Samsung 500gb 970 Evo NVMe drive during the recent eBay sale for $127, and just got the old ssd cloned over to it and rebooted. Copying that same 26.7GB of data that I was originally testing with (which usually went at about 256MB/sec and around 1:50 in transfer time) just completed on the NVMe drive at 426MB/sec and 1:06. So it's not a driver issue or a configuration issue; the ssd really was the limitation, even though CrystalDiskMark benchmarks it faster. Just did a 32gb test file from my 'slower' array and it copied at 400MB/sec.

It looks like one core of my pc's cpu was pegged pretty heavily during the transfer, so that's the limitation. Which in itself confuses the hell out of me - where do I find a cpu with 3x the single-core performance of a 5.1GHz 8700K to fully saturate just 10 gigabit for a file transfer? I am using the Windows drivers again. Realistically though, I'm pretty close to the speed limits of the arrays, so I think I'm wrapping things up. Just wanted to give final closure.
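If anyone wants to watch that single-core spike for themselves, a quick sketch with the third-party psutil package does the job (Task Manager's per-logical-processor view shows the same thing):

```python
# Watch per-core load while a transfer is running; requires "pip install psutil".
import psutil

print("Start the copy, then watch per-core load (Ctrl+C to stop):")
try:
    while True:
        per_core = psutil.cpu_percent(interval=1, percpu=True)  # percent per logical core
        print(" ".join(f"{p:5.1f}" for p in per_core), f"| busiest core: {max(per_core):.0f}%")
except KeyboardInterrupt:
    pass
```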
 
If you can get 9+ with iperf, just live with it :).
Install an FTP server at one end and test the transfer through FTP (plain FTP). Windows SMB shares are not very efficient. Maybe Win10 is the limiting factor. See the video above - there they used Windows Server 2016 to squeeze 4-5+ GBytes/sec (or maybe more, I watched it a few days ago) :).
I also use Win2016 at both ends, though I have a dual-boot Win10 at one end. Maybe there are TCP/IP stack parameters that can be tuned, but I don't know.
You might just get 100Gbps cards and a cable; maybe they would be more hardware-assisted :). At the end of that video I expected the guy to mention his next challenge - a 1 terabit/s LAN card.
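If it helps, here's a bare-bones sketch of that plain-FTP test using Python's ftplib; the host, credentials, and file path are placeholders you'd swap for your own:

```python
# Bare-bones plain-FTP upload speed test. Host, credentials, and paths are placeholders.
import ftplib
import os
import time

HOST, USER, PASSWD = "192.168.1.10", "user", "password"
LOCAL_FILE = r"C:\test\big.bin"

size_mb = os.path.getsize(LOCAL_FILE) / 2**20
with ftplib.FTP(HOST) as ftp:
    ftp.login(USER, PASSWD)
    start = time.perf_counter()
    with open(LOCAL_FILE, "rb") as f:
        ftp.storbinary("STOR big.bin", f, blocksize=1024 * 1024)   # 1 MiB blocks
    secs = time.perf_counter() - start

print(f"FTP upload: {size_mb / secs:.0f} MB/s")
```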
 
Heh, no, this went beyond silly. I already had an FTP server on my server (with an expired certificate too, stupid 1-year things) and I did see faster speeds when transferring several files at once vs TeraCopy. When transferring a lone file, however, the rate was no better, around 460MB/sec. The problem is, of course, that my use case is generally single large files at a time. Still much, much faster than gigabit!
 
Yeah, it's about f** time for something better than 1Gbps to come to the masses. Even older HDDs are considerably faster at sequential than 1Gbps.
Truth be told, I rarely use the 10G link (backups, some movies...) and it sits idle maybe 99.5% of the time, but when I move something big it's just different at 180-200MB/s than at 105MB/s. Yeah, I don't use RAIDs; I just wanted the network to be able to keep up with at least the HDDs' speeds (up to 200MB/s). And at one point, while a transfer was ongoing, I was also using some VMs stored on the server on another physical HDD.
 
arnemetis your thread here inspired me to try the same thing myself, although my setup seems to have gone a bit easier than yours.

I was looking to network three PCs with a 10Gb connection; two are in the same room, but one is in the basement, so I needed at least a 10 meter run for that one. eBay and Amazon to the rescue!

[Image: 10Gbps_n826y6dnkt.jpg]


That's one dual port and two single port Mellanox ConnectX-3s, some pretty cheap transceivers, and the smaller 3m run of OM3 fiber. Unfortunately one of the single port cards was DOA, so I can only connect two PCs for now, but the eBay seller is giving me an exchange soon.

Getting the network set up and running at full speed was relatively easy. The only mistake I made was putting the dual port card in a PCIe x16 slot that was actually running at only x1. After fixing that, the iperf test looked pretty good:

[Image: iPerf10Gb_5bqz0zy3pb.jpg]


9.27 Gbits/sec? That'll do, especially at around $170 for everything.
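(In case anyone wants a second opinion on a link like this without installing iperf, here's a rough single-stream TCP sketch in Python; the port and test duration are arbitrary picks.)

```python
# Rough single-stream TCP throughput check ("iperf-lite"). Run "python tput.py server"
# on one box and "python tput.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 50101
CHUNK = bytes(1024 * 1024)   # 1 MiB of zeros per send

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.perf_counter()
            while True:
                data = conn.recv(1024 * 1024)
                if not data:
                    break
                total += len(data)
            secs = time.perf_counter() - start
    print(f"{total * 8 / secs / 1e9:.2f} Gbits/sec received")

def client(host, seconds=10):
    with socket.create_connection((host, PORT)) as sock:
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            sock.sendall(CHUNK)

if __name__ == "__main__":
    if sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()
```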

So thanks again arnemetis and everyone who participated in this thread.
 
Glad it worked out easier for you than it did for me!
 