What's the best Linux networking protocol?

Deadjasper

Been using Samba but wondering if NFS or some other protocol would be better. Samba is kinda slow.

TIA
 
NFS supports Unix/POSIX semantics much better. Whether it is faster depends on what you are currently doing, but generally NFS will easily max out a 1 GbE connection.
 
I use both from my NAS: NFS for Linux machines, Samba for Windows. If you have lots of small files, NFS is supposedly better, and it supposedly has less CPU overhead, but that's all anecdotal. Without a really specific use case, go with whichever one you prefer.
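For reference, mounting each from a Linux client looks roughly like this (the server address and share names here are made up for the example):
Code:
# NFS -- assumes the server exports /srv/media (example path)
sudo mount -t nfs 192.168.1.10:/srv/media /mnt/media

# SMB/CIFS -- needs the cifs-utils package; uid/gid map the files to your local user
sudo mount -t cifs //192.168.1.10/media /mnt/media -o username=myuser,uid=1000,gid=1000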
 
Been using Samba but wondering if NFS or some other protocol would be better. Samba is kinda slow.

TIA
NFS is faster and works well *ix to *ix, but not really for Windows.

Edit: Running both "can work", but there are things you have to know...
 
Netatalk! No, no, not really. I kid.

+1 to pretty much what everyone else said. My only exception is over WiFi. In my experience NFS is kinda squirrelly over wireless, so I typically use SMB from my Linux laptop.
 
I use both on my Mint install.
NFS = VM datastores; hypervisors like ESXi can mount an NFS export directly as a datastore, and SMB isn't an option there.
SMB = because I have some Windows devices that use a few shares, and it just makes that easier.

It also comes down to access control: how do you want to control access, user/pass or by IP? (Rough sketch of both below.)
They each have their use cases.
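A minimal sketch of the two styles (paths, addresses, and names are just examples):
Code:
# NFS: per-host control in /etc/exports -- no user login involved
/srv/vmstore  192.168.1.0/24(rw,sync,no_subtree_check)

# Samba: per-user control in /etc/samba/smb.conf
[media]
    path = /srv/media
    valid users = myuser
    read only = no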
 
Deadjasper, maybe a dumb question, but is your network 100 Mbps or 1000 Mbps?

Correct me if I'm wrong, but I think NFS still doesn't have an easy way to authenticate users (NFSv4 can do it with Kerberos, but setting that up is far from easy). The minimum you should do is restrict your NFS shares so they can only be accessed from the IP addresses of the devices you'll actually be using, and in your router you should configure those devices to always get the same addresses (static DHCP leases). It's still quite poor security, but if you trust all the devices on your LAN it's not a big deal. One good way to secure NFS is to put all your trusted devices into a VLAN, but that can be pretty hard to configure as well...
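The IP restriction itself is simple, though. Something like this on the server, listing only the exact client addresses (addresses here are examples), then reload the exports:
Code:
# /etc/exports -- only these two machines may mount the share
/srv/share  192.168.1.20(rw,sync,no_subtree_check)  192.168.1.21(ro,sync,no_subtree_check)

# apply the changes without restarting the NFS server
sudo exportfs -ra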
 
Deadjasper, maybe a dumb question, but is your network 100 Mbps or 1000 Mbps?
[image: what-year-is-it-robin-williams.jpg]
 
Deadjasper, maybe a dumb question, but is your network 100 Mbps or 1000 Mbps?

Correct me if I'm wrong, but I think NFS still doesn't have an easy way to authenticate users (NFSv4 can do it with Kerberos, but setting that up is far from easy). The minimum you should do is restrict your NFS shares so they can only be accessed from the IP addresses of the devices you'll actually be using, and in your router you should configure those devices to always get the same addresses (static DHCP leases). It's still quite poor security, but if you trust all the devices on your LAN it's not a big deal. One good way to secure NFS is to put all your trusted devices into a VLAN, but that can be pretty hard to configure as well...

My network is 1G, of course. I looked at NFS years ago and wasn't able to make it work. I was hoping things were different now, but it seems they are not. I have managed to figure out how to connect to a Windows share, and of course Linux to Linux is no problem. Guess I'll stick with Samba for now. It's not as fast as Windows-to-Windows transfers, but it's better than nothing.
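For reference, the connection to the Windows share boils down to a CIFS mount; something like this in fstab (the address, share name, and paths are made up for the example):
Code:
# /etc/fstab -- mount the Windows share at boot; credentials kept in a root-only file
//192.168.1.5/shared  /mnt/winshare  cifs  credentials=/root/.smbcred,uid=1000,gid=1000  0  0

# /root/.smbcred (chmod 600):
#   username=myuser
#   password=mypassword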
 
My network is 1G, of course. I looked at NFS years ago and wasn't able to make it work. I was hoping things were different now, but it seems they are not. I have managed to figure out how to connect to a Windows share, and of course Linux to Linux is no problem. Guess I'll stick with Samba for now. It's not as fast as Windows-to-Windows transfers, but it's better than nothing.
Honestly, SMB should have zero issues with throughput, especially on 1G networks. It keeps up just fine on my 10G:
[attached screenshot: file transfer speed on the 10G link]
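If transfers feel slow, it's worth testing the raw link first so you know whether it's the network or the protocol. A quick sketch with iperf3 (the hostname is a placeholder):
Code:
# on the NAS/server:
iperf3 -s

# on the client:
iperf3 -c nas.local
# a healthy 1GbE link reports roughly 940 Mbit/s; if this number is
# already low, the bottleneck is the network, not Samba or NFS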
 
Eulogy, that's cool. Could you please post your setup? What operating systems are on either end of the transfer, and what network cards, switch, and type and length of cabling are you using? Are you using NVMe drives?
 
Eulogy, that's cool. Could you please post your setup? What operating systems are on either end of the transfer, and what network cards, switch, and type and length of cabling are you using? Are you using NVMe drives?
Sure.
That's from my gaming desktop, but I get similar speeds from any of my 10Gbps connected machines. In that particular screenshot, I'm transferring to my SSD NAS.
Desktop from that screenshot is an AMD 5900X (12c/24t), 32GB DDR4, Win 10 Pro, all NVMe storage (1x1TB, 1x2TB, both 980 Pros), NIC is an Intel X550-T2.

"smallfast" server is ~1TB (usable) of SATA SSD storage in ZFS's version of RAID10 (4x500GB SSDs). Funny enough, they're mismatched SSDs as well, as I couldn't find a fourth MX500, so it's 3x MX500s and a WD Blue:
Code:
NAME                                          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
smallfast                                     928G   295G   633G        -         -     0%    31%  1.00x    ONLINE  -
  mirror-0                                    464G   159G   305G        -         -     0%  34.3%      -    ONLINE
    scsi-SATA_CT500MX500SSD1_2252E6968A00        -      -      -        -         -      -      -      -    ONLINE
    scsi-SATA_CT500MX500SSD1_2246E687A7AC        -      -      -        -         -      -      -      -    ONLINE
  mirror-1                                    464G   136G   328G        -         -     0%  29.4%      -    ONLINE
    scsi-SATA_CT500MX500SSD1_2246E687B65B        -      -      -        -         -      -      -      -    ONLINE
    scsi-SATA_WD_Blue_SA510_2._22431K800256      -      -      -        -         -      -      -      -    ONLINE
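For anyone curious, a pool like that is just two mirrored pairs striped together; a minimal creation sketch, with example device names:
Code:
# ZFS "RAID10": stripe across two mirror vdevs
sudo zpool create smallfast \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd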
Other server specs:
Code:
Dell R640
2x Intel Xeon Gold 5120s
472GB DDR4 ECC RDIMMs
40Gbps (Mellanox ConnectX-3)
240GB OS SSD (SATA)
Ubuntu 22.04.2

Networking, simplified:
Desktop -> Mikrotik CRS305-1G-4S+ -> 10Gbps OM4 MM Fiber -> Brocade ICX-6610-48P -> 40Gbps QSFP+ -> smallfast server

That particular fiber run is ~50m. Some of the runs to the other leaf switches are shorter, and my longest is about 120m.

I think that covers it decently well without digging into my VLAN setup etc. You could likely do this with unmanaged switches if you wanted. The CRS305 is pretty affordable (~$155) for a 4-port SFP+ managed switch capable of sustained 10Gbps on each port. I have a few of these throughout the house (4 in total) for end-device connectivity, in a spine-leaf kind of architecture.
 
Networking, simplified:
Desktop -> Mikrotik CRS305-1G-4S+ -> 10Gbps OM4 MM Fiber -> Brocade ICX-6610-48P -> 40Gbps QSFP+ -> smallfast server
I have almost this exact setup. Only difference is that I went with single mode fiber.
 
Why SM? Unless you're running extreme distance, doesn't seem worth the extra cost to me.
When I first started getting cables and optics I bought SMF, so I have just stuck with it. SMF for 10Gb was not that much more expensive, and sticking to one type was much less confusing as I was getting into things. I am not yet utilizing the 40Gb QSFP+ ports on the 6610, but I would go MMF between that and whatever I end up hooking it to (likely the NAS). The primary goal was to get a 10Gb backbone.
 