10GbE home network.

rkd29980 · Limp Gawd · Joined Oct 19, 2015 · Messages: 181
I want to build a 10GbE home network. 10GbE cards and switches are inexpensive. I wanted to go for 40GbE, but am I right to assume that an individual PC or server isn't capable of those speeds anyway? So do I just put a card like this in each system and connect them to a 10GbE switch, and that's it? Do the specs of each system come into play, and how so?
 
Moreover, would you need a router with 10GBE capability?

Also, you will need at minimum Cat6, and possibly solid-core Cat6A, to be guaranteed 10Gbps over medium-to-long distances.
 
Moreover, would you need a router with 10GBE capability?

Also, you will need at minimum Cat6, and possibly solid-core Cat6A, to be guaranteed 10Gbps over medium-to-long distances.

No, just a switch. This is just a local network for moving files back and forth fast.

All of my computers will be near each other and only one will be kind of far-ish away, 100ft or so.
 
Just as an FYI: Cat6a is required to reach the full distance of 100 m (330 ft), and Cat6 may reach a distance of 55 meters. So you can use that as a reference.

If you're just talking one single network and you don't need to route between two networks, a single 10G switch and 10G-capable cards will do. If you're going to route between networks, your bottleneck will be the router unless it is also 10G.
 
depending on how many machines you want to connect, you may not even need a switch... also, the system supporting the speeds largely depends on the use case... most people don't have a storage system at home that could saturate a 10g link
 
depending on how many machines you want to connect, you may not even need a switch... also, the system supporting the speeds largely depends on the use case... most people don't have a storage system at home that could saturate a 10g link

This is what I do: two desktop PCs wired directly to a NAS that has a dual-port 10Gb NIC in it. The 10Gb NICs are on their own subnet, so internet still goes out of my desktop's 1Gb NIC, and when I access the NAS via the dedicated subnet it's 10Gb. I have zero reason for the desktops to speak to each other over 10Gb, so that works fine for me, and I only had to buy three cards, no switch.
 
I want to build a 10GbE home network. 10GbE cards and switches are inexpensive. I wanted to go for 40GbE, but am I right to assume that an individual PC or server isn't capable of those speeds anyway? So do I just put a card like this in each system and connect them to a 10GbE switch, and that's it? Do the specs of each system come into play, and how so?

If only....

So the card you linked isn't officially supported on Windows 10 at all. There are legacy drivers that go up to Windows 8.1 and Server 2012 R2. It also appears support for these cards was dropped in VMware 6.0. That's likely the reason why it's so cheap.

http://www.mellanox.com/page/winof_matrix?mtag=windows_sw_drivers

What's pictured is a twinax cable, which allows for a direct SFP+ connection to another device with an SFP+ port. If your computers are basically sitting right on top of each other in the same room, you might be able to get away with using those. I'd guess the longest affordable cable you'll find is 5-10m (so roughly 16-33 ft). Anything longer and you'll likely just end up using some type of fiber.

But fiber is another ball of wax, because you need to get fiber SFP+ modules that slide into those ports. And those come in a variety of flavors: some for single-mode, some for multimode, and others designed for different lengths. So if you buy multimode SFP+ modules, make sure you get ones for short range and use a multimode cable. You can burn out the optics if you try to use long-haul optics on a 10m cable.

Then you're going to be in the same boat on the switch side. Most likely you'll find 10GbE switches that just have a bunch of SFP+ ports in them, so you'll also need to outfit those correctly or use your twinax cables. So sure, a card is $20, but then you might spend another $50 per card to outfit the connection, and then you still need to spend a couple hundred to get a switch. 10GbE could still cost you $50-$100 per port to set up, so if you wanted to do, say, 5 computers, I wouldn't be surprised if it cost $500 to set up. (With all used equipment.)


Once you hook everything up, I can basically guarantee that it's not turnkey to just start moving traffic at 10 gig. Windows SMB scales very well, but "very well" might mean getting something like 2.5Gbps (320MBps sustained) out of the box, if your hardware is even fast enough to do that. If you're trying to mix Linux and Windows, you'll probably have an even more challenging time getting everything to scale up.

The actual specs of your computers matter mainly for CPU utilization. If you're at a 1500 MTU and the card isn't doing full hardware offload, you'll start to eat CPU cycles for the network card, on top of the CPU being used for disk access. Modern CPUs can likely handle this without issue, though (i.e., a 2600K or faster). I can't even tell, but those cards might be PCIe 1.0 x8. New boards are "supposed" to be backwards compatible with older cards, but it's something to keep in mind in case you have a brand new board with an old network adapter. SATA 3 is NOT fast enough to max out a 10GbE card, so unless you're using PCIe SSDs, the best you might get from one drive to another is around 5Gbps.
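For a rough sense of where those numbers come from, here's a back-of-the-envelope sketch of usable throughput for the links mentioned above; the efficiency factors are loose assumptions for protocol and encoding overhead, not measurements:

```python
# Back-of-the-envelope usable throughput for the links discussed above.
# The efficiency factors are rough assumptions, not measured values.
GBIT = 1_000_000_000  # bits per second

links = {
    "1GbE":           (1 * GBIT,     0.94),  # ~94% after TCP/IP overhead
    "10GbE":          (10 * GBIT,    0.94),
    "SATA 3 (6Gb/s)": (6 * GBIT,     0.80),  # 8b/10b encoding + SATA overhead
    "PCIe 1.0 x8":    (8 * 2 * GBIT, 0.80),  # ~2 Gb/s usable per lane, 8 lanes
}

for name, (raw_bps, eff) in links.items():
    print(f"{name:15s} ~{raw_bps * eff / 8 / 1e6:,.0f} MB/s usable")
```

Which is roughly why a single gigabit link tops out around 110-117MB/s and a SATA SSD around 500MB/s, as mentioned elsewhere in the thread.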
 
So would it be better to go with an InfiniBand or Fibre Channel setup? I just want to be able to move data between my computers at a speed of at least 1GB/s (gigabytes per second) instead of the 30MB/s it is now. What would your recommendations be to achieve this?
 
So would it be better to go with an InfiniBand or Fibre Channel setup? I just want to be able to move data between my computers at a speed of at least 1GB/s (gigabytes per second) instead of the 30MB/s it is now. What would your recommendations be to achieve this?

Well, first you need to take a step back. If you can only move files at 30MBps right now, that has nothing to do with your current network. The 30MBps wall is well known and usually comes down to scaling issues with SMBv1. We need to solve that first; otherwise you can throw all the hardware you want at this, but without any changes in software you're still only going to be moving at 30MBps, just on more expensive hardware.

So what is the server, and what is the client? Just need to know operating systems, and maybe hardware. (Like if it's an Intel system or an appliance)
 
Well, first you need to take a step back. If you can only move files at 30MBps right now, that has nothing to do with your current network. The 30MBps wall is well known and usually comes down to scaling issues with SMBv1. We need to solve that first; otherwise you can throw all the hardware you want at this, but without any changes in software you're still only going to be moving at 30MBps, just on more expensive hardware.

So what is the server, and what is the client? Just need to know operating systems, and maybe hardware. (Like if it's an Intel system or an appliance)

My current PC is running Win7. It has an Intel i7 CPU and an Intel NIC. Everything is less than 3 years old, built when the GTX 980 Ti came out.

I can transfer files between my primary PC and the systems below at the following rates...


An old PC originally designed for Vista now with Win7 - 150-200MB/s

My Norco RPC-4224 with a Supermicro MBD-X10SL7-F-O mobo and Intel i3 4360 CPU running OmniOS and Napp-it - 100-200MB/s

An older 36 bay server with a Supermicro X8DTH-IF and Intel Xeon E5520 CPU running FreeNAS 11 - 12-30MB/s https://forums.freenas.org/index.php?threads/slow-transfer-speeds.56177/

I have two other 36 bay Supermicro servers and a few old 8 and 16 bay Supermicro servers and a 48 bay Chenbro that are not yet set up.
 
Well, first you need to take a step back. If you can only move files at 30MBps right now, that has nothing to do with your current network. The 30MBps wall is well known and usually comes down to scaling issues with SMBv1. We need to solve that first; otherwise you can throw all the hardware you want at this, but without any changes in software you're still only going to be moving at 30MBps, just on more expensive hardware.

So what is the server, and what is the client? Just need to know operating systems, and maybe hardware. (Like if it's an Intel system or an appliance)

This. Fix it so you're getting the proper 110+MB/sec from 1GbE first.
 
Hmm...

Obviously we know that FreeNAS needs some tweaking, but I'm trying to do some reading to see if anyone was successful with Windows 7 and 10 gig performance. Windows 7 only uses SMB2, and it's lacking some of the features that Windows 8.1 and up have with SMB3. This also would need to be supported on the server side, so I'll see if I can find info on your Napp-it to see what it supports.
EDIT: Best I can tell, you might get most of the way there on SMBv2. FreeNAS should support it, and I can't find much for OmniOS.


So for reference, your Windows 7 computer only has one NIC in it, right? From the forum post you linked, that's what it sounds like.



Okay to help clarify in this thread the stuff from the other thread.

Windows likes to report cached speeds if you just look at the transfer window. As stated in that other thread, the network will only move up to around 110MBps, so anything shown above that is caching; the network is really just moving at its full speed. It sounds like in some cases you are seeing decent performance, and in other scenarios you are not. You should watch Task Manager and look at the traffic graph, and ignore the transfer window. The traffic graph will show the proper numbers for the transfer.


I think Windows 7 might be able to scale halfway decently, but if you're running Windows 7 I can basically assume you're using SATA SSDs, so if your goal is to transfer from that to your NAS, you will never achieve 1GBps, because SATA SSDs top out around 500MBps. If your goal is to copy files from server A to server B, then we can look at how those stack up.
 
Yeah, I probably wouldn't attempt to do 10 gig copper at this point. The main reason is that anything based on it is so new that you are only going to find brand new hardware. You can get SFP+ copper transceivers, but those are super expensive as well because they are only a year or so old. Stick with twinax cables if all of those servers are in the same rack, and try to find a reasonably priced switch. Then you can just pick up a pair of 10 gig multimode SFP+ transceivers and some multimode cable to run to your desktop. If you can get your FreeNAS issues sorted out, you should scale past 1 gig, but I definitely wouldn't want you to think that the 1GBps you want is just going to be drop-the-cards-in-and-go. If you were using Server 2012 R2 with Windows 8.1 it might be simpler, but mixing SMB and Samba means there will be more hurdles to jump.


EDIT: This just in, 10 gig is still expensive lol. I honestly can't find much for switches that are 10 gig and not super expensive. It's one thing to want a couple of 10 gig ports, but it sounds like you need at least 5. Even going with those $100 NICs, the cheapest switch would be that 8-port Netgear at $800. Meaning in order to wire up 5 computers it's easily $1,300, and that doesn't include Cat6A cables.

And it wouldn't matter anyway, because there is no driver for it from a brand new company I've never heard of.
https://forums.freenas.org/index.php?threads/new-cheap-asus-xg-c100c-nic.56160/


The best thing that I see on eBay is a brand I've never heard of either: the Quanta LB6M.
Sounds like that switch is around $300 and seems to be popular with home users trying to get 10 gig for a decent price. If you want a known brand like Brocade, expect to pay at least $500.
 
So for reference, your Windows 7 computer only has one NIC in it, right? From the forum post you linked, that's what it sounds like.

My primary PC has a Realtek and an Intel NIC. I use the Intel NIC.

I think Windows 7 might be able to scale halfway decently, but if you're running Windows 7 I can basically assume you're using SATA SSDs, so if your goal is to transfer from that to your NAS, you will never achieve 1GBps, because SATA SSDs top out around 500MBps. If your goal is to copy files from server A to server B, then we can look at how those stack up.

My primary storage is mechanical hard drives, Toshiba mostly, which are pretty fast. I can easily get 200MB/s+ transferring from one drive, but transferring from an array should be much faster.

...but I definitely wouldn't even want you to think that 1GBps like you want is just going to be drop the cards in and go.

Why you gotta harsh my mellow :p

EDIT: This just in, 10 gig is still expensive lol. I honestly can't find much for switches that are 10 gig and not super expensive. It's one thing to want a couple of 10 gig ports, but it sounds like you need at least 5. Even going with those $100 NICs, the cheapest switch would be that 8-port Netgear at $800. Meaning in order to wire up 5 computers it's easily $1,300, and that doesn't include Cat6A cables.

The best thing that I see on eBay is a brand I've never heard of either: the Quanta LB6M.
Sounds like that switch is around $300 and seems to be popular with home users trying to get 10 gig for a decent price. If you want a known brand like Brocade, expect to pay at least $500.

I see a lot of Arista switches and HP ProCurves on eBay for $600-800 and some Brocade switches for $400-500. Any idea what kind of card I need, since that Mellanox card is crap?


And it wouldn't matter anyway, because there is no driver for it from a brand new company I've never heard of.
https://forums.freenas.org/index.php?threads/new-cheap-asus-xg-c100c-nic.56160/

Well, that sucks. If Aquantia/ASUS put effort into making sure their cards are as widely supported as possible, they could have so many more customers. Before I settled on FreeNAS, I was trying to get OpenMediaVault to work, which is Linux based, but OMV has so many issues it may as well be considered to be in an alpha state. The OMV core does not have native ZFS support, and in order to use ZFS you need to install a plugin just to install the ZFS plugin, which is broken and not maintained by anyone who actually understands ZFS. If OMV wasn't such a mess, I would be using it and would have had fewer issues to deal with.



One option I am considering is to have my servers dual-boot Windows Server and FreeNAS, create a zvol using FreeNAS, and then format it NTFS. That way, I get a network drive that Windows likes but still get all the benefits of ZFS. I don't know how Samba/SMB fits into that, but it is just an idea.
 
I see a lot of Arista switches and HP ProCurves on eBay for $600-800 and some Brocade switches for $400-500. Any idea what kind of card I need, since that Mellanox card is crap?

The Mellanox card isn't crap at all, it's just old. Given that you're still using Windows 7, the drivers are there and the card should work perfectly. Given the OSes you have, it actually makes sense to purchase those cards. If you were planning on using VMware, then I definitely wouldn't bother. BSD, on the other hand, tends to be behind the curve when it comes to drivers, so it's another case where the card would probably work fine.

It looks like they only just started supporting those cards in FreeBSD 10. The ConnectX-2 came out in 2009, so it only took them five years to get driver support for it...


Well, that sucks. If Aquantia/ASUS put effort into making sure their cards are as widely supported as possible, they could have so many more customers. Before I settled on FreeNAS, I was trying to get OpenMediaVault to work, which is Linux based, but OMV has so many issues it may as well be considered to be in an alpha state. The OMV core does not have native ZFS support, and in order to use ZFS you need to install a plugin just to install the ZFS plugin, which is broken and not maintained by anyone who actually understands ZFS. If OMV wasn't such a mess, I would be using it and would have had fewer issues to deal with.

Trying to use ZFS on Linux is going to be hit or miss. ZFS comes from Solaris and is native on BSD, so it had to be reimplemented (as ZFS on Linux) to work there. I believe it is stable now in the latest releases, so as long as OMV is using the latest and greatest, it's probably better than it used to be.

One option I am considering is to have my servers dual-boot Windows Server and FreeNAS, create a zvol using FreeNAS, and then format it NTFS. That way, I get a network drive that Windows likes but still get all the benefits of ZFS. I don't know how Samba/SMB fits into that, but it is just an idea.


So the format of the volume doesn't really matter to a share. Windows doesn't know or care that your share is running from ZFS, EXT4, FAT32, or NTFS; that portion is largely hidden when using a share. Native SMB on Windows is going to be faster than Samba on Linux, because Samba is built to speak Microsoft's protocols without a lot of information about what's going on in the background. That said, FreeNAS 11 supports SMB versions up to 3.1.1, so it should work fairly well with your Windows 7 computers, as 7 only supports up to SMBv2. (You just need to figure out why it's probably downgrading to SMBv1.)

I know Samba has improved recently: earlier this year I was finally able to take a Linux PC, install it from a disc, and create a share that supported SMB2 out of the box without any tweaking. That share did full gigabit with no issues, because the server was using Samba v4.3. With the older versions you'd see reads from the server at up to ~60MBps and writes back to the server around ~30MBps. I don't have 10 gig to see how far the new version will scale, though.

If you dual boot, Windows can't see the zpool, and while FreeNAS can probably format a volume as NTFS, you wouldn't want to do that. The only way to make both work together is to make FreeNAS a VM server: create your zpool, then install Server 2012 R2 or 2016 as a VM and let that format its virtual hard drive as NTFS. Probably a lot of work for little gain, though: native SMB will be faster, but the VM layer and its overhead will probably cancel out any gains.

EDIT: Before I forget to mention it again, you'll need to pay close attention to twinax cables if you buy them. I looked at a couple cables I had laying around in the office, and they only have 1 row of pins on the end. That means they are only going to be 1 gig capable and were designed for an SFP port. An SFP+ port (Which is required for 10 gig) should have 2 rows of pins on the connector, so you'll need to make sure your twinax cables support 10 gig.
 
So would it be better to go with an InfiniBand or Fibre Channel setup? I just want to be able to move data between my computers at a speed of at least 1GB/s (gigabytes per second) instead of the 30MB/s it is now. What would your recommendations be to achieve this?

Isolate your network performance testing away from storage speeds (even local hard drives) to paint a clearer picture of where a bottleneck is. Though it looks like you know FreeNAS is the culprit already.

Use iperf or jperf to test between nodes.

You can start up iperf as a server and as a client on the same box and verify that node is at least capable of some speed you're trying to hit.
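If you don't feel like installing iperf everywhere, a bare-bones substitute along the same lines can be thrown together in Python. This is just a sketch: it pushes zeroes over a raw TCP socket so no disks are involved; the port and duration are arbitrary choices, and a single Python process may itself become the bottleneck before 10Gbit/s, so treat it as a sanity check and use iperf for real numbers.

```python
# Bare-bones TCP throughput test (a crude stand-in for iperf).
# Run "python3 tput.py server" on one host and
# "python3 tput.py client <server-ip>" on the other.
import socket, sys, time

PORT = 5201                  # arbitrary; same default port iperf3 uses
CHUNK = b"\0" * (1 << 20)    # 1 MiB of zeroes, so no disk is involved
SECONDS = 10                 # how long the client transmits

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.time()
            while (data := conn.recv(1 << 20)):
                total += len(data)
            elapsed = time.time() - start
    print(f"received {total / 1e6:.0f} MB in {elapsed:.1f}s "
          f"= {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        deadline = time.time() + SECONDS
        while time.time() < deadline:
            conn.sendall(CHUNK)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```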
 
Isolate your network performance testing away from storage speeds (even local hard drives) to paint a clearer picture of where a bottleneck is. Though it looks like you know FreeNAS is the culprit already.

The problem is definitely FreeNAS and not a hardware or network issue. I used OMV to copy files to my server and I got sustained transfer speeds of 110-115MB/s.
 
I have a really old build of FreeNAS running on an HP MicroServer with five 1TB drives in RAID-Z1. I can hit about 80MB/s using a cheap switch, Intel cards, and a Win 7 client.
 
So are these the kinds of switches I need?

Arista DCS-7050S-64-F 48-Port 1/10GbE SFP+ 4 x 40GbE QSFP+ Gigabit Switch
Force 10 Networks S2410-01-10GE-24CP S2410 10GbE/40GbE 20x CX4 4x XFP Switch
HP ProCurve J9265A E6600-24XG Managed Layer 3 Switch 24 10 GbE SFP+ Ports
Blade RackSwitch G8124 24 Port 10GbE SFP+ Ethernet Network Switch Layer 3 L3 10G
Brocade BR-VDX6720-40-R 40 Port 10GbE SFP+ Switch

Also, since the Mellanox ConnectX-2 cards are so old, is there a newer version that is supported by newer OSes? Are there other 10GbE cards out there that you would recommend? Lastly, do you know where I can get low-profile brackets for a decent price? I see them on eBay, but I am not going to pay $10 for a tiny piece of steel.
 
Damn.

10GbE is still freaking expensive for switches.

I guess I'm "stuck" with InfiniBand just a little longer.
 
I'd just go host to host if you want 10 gig now and can't afford a switch. Chances are you don't need 10 gig 'everywhere' anyway.

If you have three dual-port cards you can connect three hosts in a triangle. A sexy 10Gb triangle.
 
That is truly funny, thanks for that.


It sounds like you are talking about link aggregation or network bridging?

Nope, just point-to-point links. The dual-port card on A connects directly to hosts B and C, the dual-port card on B connects to A and C, and the dual-port card on C connects to A and B.

A-----B
 \   /
  \ /
   C

Each link is on its own /30 subnet; change the hosts file if you want to access them by name instead of IP.
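If /30 subnets are unfamiliar: each one holds exactly two usable host addresses, which is all a point-to-point link needs. A quick sanity check with Python's ipaddress module; the 192.168.2.x ranges are just placeholders for whatever you pick as x:

```python
import ipaddress

# Each point-to-point link gets its own /30: exactly two usable host
# addresses, one for each end of the cable.
for cidr in ("192.168.2.0/30", "192.168.2.4/30", "192.168.2.8/30"):
    net = ipaddress.ip_network(cidr)
    print(net, "mask", net.netmask, "hosts", [str(h) for h in net.hosts()])

# 192.168.2.0/30 mask 255.255.255.252 hosts ['192.168.2.1', '192.168.2.2']
# 192.168.2.4/30 mask 255.255.255.252 hosts ['192.168.2.5', '192.168.2.6']
# 192.168.2.8/30 mask 255.255.255.252 hosts ['192.168.2.9', '192.168.2.10']
```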
 
I don't really have one; if you can set a static IP on a network adapter, then you can do this.


In the IPs below, set x to something you aren't already using; most home routers use 192.168.0 or 192.168.1, so you would use 192.168.2 or 192.168.20, etc.

Host A

Port 1 to B-1
IP 192.168.x.1
Mask 255.255.255.252

Port 2 to C-2
IP 192.168.x.5
Mask 255.255.255.252



Host B

Port 1 to A-1
IP 192.168.x.2
Mask 255.255.255.252

Port 2 to C-1
IP 192.168.x.9
Mask 255.255.255.252




Host C

Port 1 to B-2
IP 192.168.x.10
Mask 255.255.255.252

Port 2 to A-2
IP 192.168.x.6
Mask 255.255.255.252


Leave the default gateway and DNS options blank in the TCP/IP settings for all of these adapters.
Just cable the hosts directly and edit the hosts file on each machine so that the host names map to the respective IPs from that host, i.e.:
host file on A
192.168.x.2 HOSTBNAME
192.168.x.6 HOSTCNAME

host file on B
192.168.x.1 HOSTANAME
192.168.x.10 HOSTCNAME

host file on C
192.168.x.5 HOSTANAME
192.168.x.9 HOSTBNAME


Now you have three hosts with 10gb links to each other, without having to buy a switch. Just need 3 dual port cards.


If you only have two PCs that need 10Gb, this is much easier, as you only have to create one link.

You will still need to use the normal 1Gb adapter on each host to your switch/router for internet access and access to other network devices.
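To keep those six hosts-file entries straight, here's a small sketch that generates them from one table of links. HOSTANAME/HOSTBNAME/HOSTCNAME and the 192.168.2.x addresses are placeholders taken from the plan above; substitute your own names and whatever subnet you chose for x:

```python
# Generate hosts-file lines for the three point-to-point links above.
# Host names and 192.168.2.x addresses are placeholders from the example plan.
links = [
    ("HOSTANAME", "192.168.2.1", "HOSTBNAME", "192.168.2.2"),   # A port 1 <-> B port 1
    ("HOSTANAME", "192.168.2.5", "HOSTCNAME", "192.168.2.6"),   # A port 2 <-> C port 2
    ("HOSTBNAME", "192.168.2.9", "HOSTCNAME", "192.168.2.10"),  # B port 2 <-> C port 1
]

entries = {}  # host name -> lines that belong in that host's hosts file
for name1, ip1, name2, ip2 in links:
    # each end records the *other* end's address under the other end's name
    entries.setdefault(name1, []).append(f"{ip2} {name2}")
    entries.setdefault(name2, []).append(f"{ip1} {name1}")

for host, lines in entries.items():
    print(f"# hosts file on {host}")
    print("\n".join(lines), end="\n\n")
```

Each end simply records the other end's address under that host's name, which is exactly what the hand-written hosts files above do.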
 
So are these the kinds of switches I need?

Arista DCS-7050S-64-F 48-Port 1/10GbE SFP+ 4 x 40GbE QSFP+ Gigabit Switch
Force 10 Networks S2410-01-10GE-24CP S2410 10GbE/40GbE 20x CX4 4x XFP Switch
HP ProCurve J9265A E6600-24XG Managed Layer 3 Switch 24 10 GbE SFP+ Ports
Blade RackSwitch G8124 24 Port 10GbE SFP+ Ethernet Network Switch Layer 3 L3 10G
Brocade BR-VDX6720-40-R 40 Port 10GbE SFP+ Switch

Also, since the Mellanox ConnectX-2 cards are so old, is there a newer version that is supported by newer OSes? Are there other 10GbE cards out there that you would recommend? Lastly, do you know where I can get low-profile brackets for a decent price? I see them on eBay, but I am not going to pay $10 for a tiny piece of steel.





So: are these the kinds of switches I need? Is there a newer version of the Mellanox ConnectX-2 that is supported by newer OSes, are there other 10GbE cards out there that you would recommend, and do you know where I can get low-profile brackets for a decent price?
 
EDIT: Before I forget to mention it again, you'll need to pay close attention to twinax cables if you buy them. I looked at a couple cables I had laying around in the office, and they only have 1 row of pins on the end. That means they are only going to be 1 gig capable and were designed for an SFP port. An SFP+ port (Which is required for 10 gig) should have 2 rows of pins on the connector, so you'll need to make sure your twinax cables support 10 gig.

So if I buy the Mellanox ConnectX-2 cards, how do I use them with a QSFP+ port?
 
I want to build a 10gbe home network. 10gbe cards and switches are inexpensive. I wanted to go for 40gbe but am I right to assume that an individual PC's or server isn't capable of those speeds anyway? So do I just put a card like this in each system and connect them to a 10gbe switch and that's it? Does the specs of each system come into play and how so?
Yeah, you can use Cat6 and Cat6a for a 10G Ethernet connection. They are rather cheap compared with optical cabling. A 1ft Cat6 (STP) cable at FS.COM only costs $2.10 a piece.
 
Yeah, you can use Cat6 and Cat6a for a 10G Ethernet connection. They are rather cheap compared with optical cabling. A 1ft Cat6 (STP) cable at FS.COM only costs $2.10 a piece.

If you are doing 10Gb from host to host, and even switch to host for real short runs (in the same rack, 10-20ft or less), you can use Cat5e. I do it all the time.
 
Yes, that exists, but it again raises the question: what would you be trying to accomplish with that?
 
Yes, that exists, but it again raises the question: what would you be trying to accomplish with that?

My goal is to get a 40GbE switch and 40GbE cards for my main PC and storage server; for the others, I would get the Mellanox ConnectX-2 cards and still be able to use them with my 40GbE switch.
 
40Gb is beyond the capability of most desktop PCs. You may have a very powerful NAS that can read or write at those speeds, but 10Gb is already faster than the vast majority of desktop computers can read or write. Unless you have NVMe drives or massive arrays on both ends, 40Gb is a waste.
 
40Gb is beyond the capability of most desktop PCs. You may have a very powerful NAS that can read or write at those speeds, but 10Gb is already faster than the vast majority of desktop computers can read or write. Unless you have NVMe drives or massive arrays on both ends, 40Gb is a waste.

Don't care. I am planning for the future, and with modern hardware and the falling prices on 40GbE switches, even if I don't get the full 5GB/s on every machine, I still think it will be worth it.

So, to confirm what that site I linked to was saying: I can get a 40GbE QSFP+ switch and use adapters to convert those ports to SFP+ for the Mellanox cards?
 