
10gbe home network.

Discussion in 'Networking & Security' started by rkd29980, Jul 16, 2017.

  1. rkd29980

    rkd29980 Limp Gawd

    Messages:
    151
    Joined:
    Oct 19, 2015
    I want to build a 10GbE home network. 10GbE cards and switches are inexpensive. I wanted to go for 40GbE, but am I right to assume that an individual PC or server isn't capable of those speeds anyway? So do I just put a card like this in each system and connect them to a 10GbE switch, and that's it? Do the specs of each system come into play, and how so?
     
  2. KazeoHin

    KazeoHin [H]ardness Supreme

    Messages:
    6,354
    Joined:
    Sep 7, 2011
    Moreover, would you need a router with 10GbE capability?

    Also, you will need at minimum Cat6, and possibly solid-core Cat6a, to be guaranteed 10 Gbps over medium-to-long distances.
     
  3. rkd29980

    rkd29980 Limp Gawd

    Messages:
    151
    Joined:
    Oct 19, 2015
    No, just a switch. This is just a local network for moving files back and forth fast.

    All of my computers will be near each other and only one will be kind of far-ish away, 100ft or so.
     
  4. Cmustang87

    Cmustang87 2[H]4U

    Messages:
    3,975
    Joined:
    Oct 4, 2007
    Just as an FYI: Cat6a is required to reach the full 10GbE distance of 100 m (about 330 ft), while Cat6 may only reach 55 meters. You can use that as a reference.

    If you're just talking one single network and you don't need to route between two networks, a single 10G switch and 10G-capable cards are all you need. If you're going to route between networks, your bottleneck will be the router unless it is also 10G.
     
  5. goodcooper

    goodcooper [H]ardForum Junkie

    Messages:
    10,056
    Joined:
    Nov 4, 2005
    depending on how many machines you want to connect, you may not even need a switch... also, whether the systems can support those speeds largely depends on the use case... most people don't have a storage system at home that could saturate a 10G link
     
  6. Eickst

    Eickst [H]ard|Gawd

    Messages:
    1,431
    Joined:
    Aug 24, 2005
    This is what I do: two desktop PCs wired directly to a NAS that has a two-port 10Gb NIC in it. The 10Gb NICs are on their own subnet, so internet traffic still goes out of each desktop's 1Gb NIC, but when I access the NAS via the dedicated subnet it's 10Gb. I have zero reason for the desktops to talk to each other at 10Gb, so that works fine for me, and I only had to buy three cards and no switch.
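    For anyone wanting to copy this layout, here is a minimal sketch of the dedicated-subnet setup on Linux. The interface names and the 10.10.10.x range are placeholders; substitute your own 10GbE interfaces and any unused private subnet.

```shell
# Desktop: give the 10GbE NIC a static address on its own subnet
ip addr add 10.10.10.2/24 dev enp3s0

# NAS: each 10GbE port gets its own address in that subnet, e.g.
#   ip addr add 10.10.10.1/24 dev enp4s0

# No gateway is set on the 10GbE interfaces, so the default route stays
# on the 1GbE NIC and internet traffic keeps using it; only traffic
# addressed to 10.10.10.x takes the 10GbE link.
```

    On Windows the equivalent is assigning a static IP on the 10GbE adapter and leaving its gateway blank.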
     
  7. bman212121

    bman212121 Gawd

    Messages:
    1,022
    Joined:
    Aug 18, 2011
    If only....

    So, the card you linked isn't officially supported on Windows 10 at all; there are legacy drivers that go up to Windows 8.1 and Server 2012 R2. It also appears that support for these was dropped in VMware 6.0. That's likely the reason it's so cheap.

    http://www.mellanox.com/page/winof_matrix?mtag=windows_sw_drivers

    What's pictured is a twinax cable, which allows for an SFP+ direct connection to another device with an SFP+ port. If your computers are basically sitting right on top of each other in the same room, you might be able to get away with using those. I'd guess the longest cable you'll find that is affordable is 5 - 10 m (so like 20 - 35 ft). Anything longer and you'll likely end up using some type of fiber. But fiber is another ball of wax, because you need to get fiber SFP+ modules that slide into those ports, and those come in a variety of flavors: some for single mode, some for multimode, and others designed for different lengths. So if you buy multimode SFP+ modules, make sure you get ones for short range and use a MM cable. You can burn out the optics if you try to use long-haul optics on a 10 m cable.

    Then you're going to be in the same boat on the switch side. Most likely you'll find 10GbE switches that just have a bunch of SFP+ ports in them, so you'll also need to outfit those correctly or use your twinax cables. So sure, a card is $20, but then you might spend another $50 per card to outfit the connection, and you still need to spend a couple hundred to get a switch. 10GbE could still cost you $50 - $100 per port to set up, so if you wanted to do 5 computers, I wouldn't be surprised if it cost $500 to set up (with all used equipment).


    Once you hook everything up, I can basically guarantee that it's not turnkey to just start moving traffic at 10 gig. Windows SMB scales very well, but "very well" might mean getting something like 2.5Gbps (320MBps sustained) out of the box, if your hardware is even fast enough to do that. If you're trying to mix Linux and Windows, you'll probably have an even more challenging time getting everything to scale up.

    The actual specs of your computers will matter mainly for CPU utilization. If you're at 1500 MTU and the cards aren't doing full hardware offload, you'll start to eat CPU cycles for the network card on top of the CPU being used for disk access. Modern CPUs (i.e. a 2600K or faster) can likely handle this no issue, though.

    I can't even tell, but those cards might be PCIe 1.0 x8. New boards are "supposed" to be backwards compatible with the older cards, but it's something to keep in mind in case you have a brand new board with an old network adapter.

    SATA 3 is NOT fast enough to max out a 10GbE card, so unless you're using PCIe SSDs, the best you might get from one drive to another is around 5Gbps.
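    As a rough sketch of checking the offload and MTU points above on a Linux box (the interface name and peer address are placeholders, and the switch must also allow jumbo frames):

```shell
# See whether the NIC is doing hardware offload (TSO/GRO etc.)
ethtool -k eth0 | grep -E "segmentation|receive-offload"

# Enable jumbo frames on the interface
ip link set eth0 mtu 9000

# Verify a full-size frame actually gets through without fragmenting
# (8972 = 9000 minus 28 bytes of IP + ICMP headers)
ping -M do -s 8972 -c 4 10.10.10.1
```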
     
  8. rma

    rma Limp Gawd

    Messages:
    184
    Joined:
    Mar 16, 2015
    Look here for cables, adapters, etc.: http://www.sfpcables.com/. I use them at work for bigger projects, and they also sell on Amazon.
     
  9. rkd29980

    rkd29980 Limp Gawd

    Messages:
    151
    Joined:
    Oct 19, 2015
    So would it be better to go with an InfiniBand or Fibre Channel setup? I just want to be able to move data between my computers at a speed of at least 1GB/s (gigabyte) and not the 30MB/s it is now. What would your recommendations be to achieve this?
     
  10. Eickst

    Eickst [H]ard|Gawd

    Messages:
    1,431
    Joined:
    Aug 24, 2005
  11. bman212121

    bman212121 Gawd

    Messages:
    1,022
    Joined:
    Aug 18, 2011
    Well, first you need to take a step back. If you can only move files at 30MBps right now, then that has nothing to do with your current network. The 30MBps wall is well known and is usually related to SMBv1 scaling. We need to solve that first; otherwise you can throw all the hardware you want at this, but if you don't change anything in software, you're still only going to be moving at 30MBps, just on more expensive hardware.

    So what is the server, and what is the client? Just need to know operating systems, and maybe hardware. (Like if it's an Intel system or an appliance)
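    If it does turn out to be SMBv1, one way to rule it out on a Windows 7 client is Microsoft's documented sc.exe method (run from an elevated prompt and reboot afterwards; this removes the SMBv1 redirector from the workstation service's dependencies):

```bat
:: Make the workstation service depend on the SMBv2 redirector only
sc.exe config lanmanworkstation depend= bowser/mrxsmb20/nsi
:: Disable the SMBv1 driver
sc.exe config mrxsmb10 start= disabled
```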
     
  12. rkd29980

    rkd29980 Limp Gawd

    Messages:
    151
    Joined:
    Oct 19, 2015
  13. rkd29980

    rkd29980 Limp Gawd

    Messages:
    151
    Joined:
    Oct 19, 2015
    My current PC is running Win7. It has an Intel i7 CPU and an Intel NIC. Everything is less than 3 years old; it was built when the GTX 980Ti came out.

    I can transfer files between my primary PC and... at...


    An old PC originally designed for Vista now with Win7 - 150-200MB/s

    My Norco RPC-4224 with a Supermicro MBD-X10SL7-F-O mobo and Intel i3 4360 CPU running OmniOS and Napp-it - 100-200MB/s

    An older 36 bay server with a Supermicro X8DTH-IF and Intel Xeon E5520 CPU running FreeNAS 11 - 12-30MB/s https://forums.freenas.org/index.php?threads/slow-transfer-speeds.56177/

    I have two other 36 bay Supermicro servers and a few old 8 and 16 bay Supermicro servers and a 48 bay Chenbro that are not yet set up.
     
  14. Ultima99

    Ultima99 [H]ardness Supreme

    Messages:
    4,210
    Joined:
    Jul 31, 2004
    This. Fix it so you're getting the proper 110+MB/sec from 1GbE first.
     
  15. bman212121

    bman212121 Gawd

    Messages:
    1,022
    Joined:
    Aug 18, 2011
    Hmm...

    Obviously we know that FreeNAS needs some tweaking, but I'm trying to do some reading to see if anyone has been successful with Windows 7 and 10 gig performance. Windows 7 only uses SMB2, and it's lacking some of the features that Windows 8.1 and up have with SMB3. This would also need to be supported on the server side, so I'll see if I can find info on your Napp-it setup to see what it supports.
    EDIT: Best I can tell, you might get most of the way there on SMBv2. FreeNAS should support it, and I can't find much for OmniOS.


    So for reference, your Windows 7 computer only has one NIC in it, right? From the forum post you linked, that's what it sounds like.



    Okay, to help clarify in this thread the stuff from the other thread:

    Windows likes to report cached speeds if you just look at the transfer window. As stated in that other thread, the network will only move up to around 110MBps, so anything reported above that is really caching rather than the network moving at full speed. It sounds like in some cases you are seeing decent performance and in other scenarios you are not. You should watch the traffic graph in Task Manager and ignore the transfer window; the traffic graph will show the proper numbers for the transfer.
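    To see the real wire speed with the disks and SMB taken out of the equation entirely, iperf is the usual tool. A sketch (install it on both ends, and substitute your server's address for the placeholder 10.10.10.1):

```shell
# On the server
iperf3 -s

# On the client: reports raw TCP throughput over the link for 30 seconds
iperf3 -c 10.10.10.1 -t 30
```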


    I think Windows 7 might be able to scale halfway decently, but if you're running Windows 7 I can basically assume you're using SATA SSDs, so if your goal is to transfer from those to your NAS, you will never achieve 1GBps, because SATA SSDs top out around 500MBps. If your goal is to copy files from server A to server B, then we can look at how those stack up.
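    Along the same lines, a quick way to sanity-check what the disks themselves can do, independent of the network (a Linux sketch; writes a scratch file and prints a throughput figure):

```shell
# Write 512 MiB to a scratch file, forcing it to actually hit the disk
# (conv=fdatasync makes dd wait for the data to be flushed before
# reporting, so the MB/s figure reflects the drive, not the page cache)
dd if=/dev/zero of=/tmp/ddtest bs=1M count=512 conv=fdatasync
rm -f /tmp/ddtest
```

    If that number comes in under ~500 MB/s per drive, a 10GbE link between two single drives will never run at line rate anyway.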
     
    Last edited: Jul 22, 2017 at 11:36 PM
  16. bman212121

    bman212121 Gawd

    Messages:
    1,022
    Joined:
    Aug 18, 2011
    Yeah, I probably wouldn't attempt 10 gig copper at this point. The main reason is that anything based on it is so new that you're only going to find brand new hardware. You can get SFP+ copper transceivers, but those are super expensive as well, because they're only a year or so old. Stick with twinax cables if all of those servers are in the same rack, and try to find a reasonably priced switch. Then you can just pick up a pair of 10 gig multimode SFP+ transceivers and some multimode cable to run to your desktop. If you get your FreeNAS issues sorted out, you should scale past 1 gig, but I definitely wouldn't think that the 1GBps you want is just going to be drop-the-cards-in-and-go. If you were using Server 2012 R2 with Windows 8.1 it might be simpler, but mixing SMB and Samba there will be more hurdles to jump.


    EDIT: This just in: 10 gig is still expensive, lol. I honestly can't find many 10 gig switches that aren't super expensive. It's one thing to want a couple of 10 gig ports, but it sounds like you need at least 5. Even going with those $100 NICs, the cheapest switch would be that 8-port Netgear at $800, meaning that wiring up 5 computers is easily $1,300, and that doesn't include Cat6A cables.

    And it would matter, because there is no driver for that card from a brand new company I've never heard of:
    https://forums.freenas.org/index.php?threads/new-cheap-asus-xg-c100c-nic.56160/


    The best thing that I see on eBay is a brand I've never heard of either: the Quanta LB6M.
    Sounds like that switch is around $300, and it seems to be popular with home users trying to get 10 gig for a decent price. If you want a known brand like Brocade, expect to pay at least $500.
     
    Last edited: Jul 23, 2017 at 12:07 AM
  17. rkd29980

    rkd29980 Limp Gawd

    Messages:
    151
    Joined:
    Oct 19, 2015
    My primary PC has a Realtek NIC and an Intel NIC. I use the Intel one.

    My primary storage is mechanical hard drives, mostly Toshiba, which are pretty fast. I can easily get 200MB/s+ transferring from one drive, and transferring from an array should be much faster.

    Why you gotta harsh my mellow :p

    I see a lot of Arista switches and HP ProCurves on eBay for $600-800 and some Brocade switches for $400-500. Any idea what kind of card I need, since that Mellanox card is crap?


    Well, that sucks. If Aquantia/ASUS put effort into making sure their cards were as widely supported as possible, they could have had so many more customers. Before I settled on FreeNAS, I was trying to get OpenMediaVault to work, which is Linux-based, but OMV has so many issues it may as well be considered to be in an alpha state. The OMV core does not have native ZFS support, and in order to use ZFS you need to install a plugin to install the ZFS plugin, which is broken and not maintained by anyone who actually understands ZFS. If OMV weren't such a mess, I would be using it and would have had fewer issues to deal with.



    One option I am considering is to have my servers dual-boot Windows Server and FreeNAS, create a zvol using FreeNAS, and then format it NTFS. That way I get a network drive that Windows likes but still get all the benefits of ZFS. I don't know how Samba/SMB fits into that, but it is just an idea.
     
  18. bman212121

    bman212121 Gawd

    Messages:
    1,022
    Joined:
    Aug 18, 2011
    The Mellanox card isn't crap at all; it's just old. Given that you're still using Windows 7, the drivers are there and the card should work perfectly. Given the OSes you have, it actually makes sense to purchase those cards. If you were planning on using VMware, then I definitely wouldn't bother. BSD, on the other hand, tends to be behind the curve when it comes to drivers, so that's another good case where the card would probably work.

    It looks like they just started supporting those cards in FreeBSD 10. The ConnectX-2 came out in 2009, so it only took them 5 years to get driver support for it...


    Trying to use ZFS on Linux is going to be hit or miss. ZFS comes from Solaris and is native on BSD, but it had to be rebuilt to work on Linux. I believe it is stable now in the latest versions, so as long as OMV is using the latest and greatest, it's probably better than it used to be.


    So, the format of the volume doesn't really matter to a share. Windows doesn't know or care whether your share is running from ZFS, EXT4, FAT32, or NTFS; that portion is largely hidden when using a share. Native SMB on Windows is going to be faster than Samba on Linux, because Samba is built to speak Microsoft's protocols without a lot of information about what's going on in the background. That said, FreeNAS 11 supports SMB versions up to 3.1.1, so it should work fairly well with your Windows 7 computers, as 7 only supports up to SMB v2. (You just need to figure out why it's probably downgrading to SMBv1.) I know Samba has improved recently: earlier this year I was finally able to take a Linux PC, install it from a disc, and create a share that supported SMB2 out of the box without any tweaking. That share did full gigabit, no issues, because the server was using Samba 4.3. With the older versions you'd see reads from the server at up to ~60MBps and writes back to the server around ~30MBps. I don't have 10 gig to see how far the new version will scale, though.
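    On the Samba side, the downgrade can also be ruled out in the config. A hypothetical smb.conf fragment (Samba 4.x option names; on FreeNAS the same settings live in the SMB service configuration):

```ini
[global]
    # Refuse SMBv1 entirely; clients must negotiate SMB2 or SMB3
    server min protocol = SMB2
    server max protocol = SMB3
```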

    If you dual boot, Windows can't see the zpool, and while FreeNAS can probably format a volume as NTFS, you wouldn't want to do that. The only way to make both work together is to make FreeNAS a VM server: create your zpool, then install Server 2012 R2 or 2016 as a VM and let that format its virtual hard drive as NTFS. Probably a lot of work for little gain, though; native SMB will be faster, but the VM overhead will probably cancel out any gains.

    EDIT: Before I forget to mention it again, you'll need to pay close attention to twinax cables if you buy them. I looked at a couple of cables I had lying around in the office, and they only have 1 row of pins on the end. That means they are only 1 gig capable and were designed for an SFP port. An SFP+ port (which is required for 10 gig) should have 2 rows of pins on the connector, so you'll need to make sure your twinax cables support 10 gig.
     
    Last edited: Jul 24, 2017 at 1:18 PM