More Affordable 10GBase-T Switches Imminent

Zarathustra[H]

If you are anything like me, you have for some time been excited about upgrading your home network to 10 gigabit speeds, but have been frustrated by the relatively high cost of 10-gigabit-capable switches. The common wisdom, to the great frustration of us enthusiasts, has been that consumers no longer care about wired Ethernet, preferring WiFi and no wires, which relegates 10 gigabit switches to the realm of enterprise gear, and enterprise gear is pricey. Well, this may be about to change. Aquantia reportedly has switching silicon in the works that will drastically reduce the cost of 10GBase-T switches to about $30 per port, compared to the more than $100 per port (and in some cases much, much more) they run today.

Personally I'm looking forward to this. I currently have a dedicated direct 10 gigabit line running between my workstation and my NAS, relying on gigabit for everything else. I would absolutely love to bump up my entire network to 10 gigabit speeds.


Naturally, we asked about switch pricing and availability. With the aforementioned caveats, we were told that the switch vendors themselves will be the ones dictating pricing. That being said, after suggesting that pricing in the region of $250-$300 for an 8-port switch that supports Aquantia's 10G solutions (so likely 5GBase-T and 2.5GBase-T as well) would be great, we were told that this was likely a good estimate. Previously in this price range, options were limited to a sole provider: ASUS' XG-U2008, a switch with two 10GBase-T ports and eight gigabit Ethernet ports for $200. Above that, some Netgear solutions were running almost $800 for an 8-port managed switch. So moving to eight full 10G ports in this price bracket would be amazing, and I told Aquantia to tell OEMs that at that price (~$30 per port), those switches will fly off the shelves with enthusiasts who want to upgrade.
 
Not sure I need this at home, but I'd be happy if this results in lower-cost small business 10Gb switches.
The 1Gb ports are starting to become a bottleneck on my servers, and I'd love to be able to connect them at 10Gb so they don't slow down when I have multiple users copying large VMs.
 
The current value leader for 8 ports of 10G is this guy: https://www.amazon.com/NETGEAR-ProSAFE-10-Gigabit-Ethernet-XS708E-200NES/dp/B01GTWPTJY

Warehouse deals pricing hovers between $525-$560.
I have a better option here: https://www.amazon.com/D-Link-Syste...qid=1496687403&sr=8-1&keywords=dlink+dgs+1510

<$500 with 4 SFP+ ports. Sure, it doesn't have all 10G ports, but it does have 4, and there are very few home situations where more than 4 ports are needed. I only use 3, and I have both a server and a VM host. (The last one is used by my main machine, and my pfSense router is a VM connected to the 10G side.) It has enough 1G ports to cable a whole house, and you won't see any bottlenecking attaching 1G clients to a server on the 10G side.
 
All of my stationary machines are on wired. Hell, even my TV and other devices are all on hardwired connections. The only things in my house that use wireless are my laptops, cell phone, and guests' devices. I will DEFINITELY be upgrading to 10Gb as soon as I have the device support for it.
 
Do they still suck 1.21 gigawatts?

As far as I know, at full speed:
100BASE-T: ~0.1 watt
1000BASE-T: ~1 watt
10GBASE-T: ~10 watts

I really hated it in the Core 2 Duo days when notebook makers would only put in a 100Mbit connection in order to pad the battery stats.
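
To put those rough per-port figures in context, here's a quick Python sketch totaling up what the PHYs alone would draw in an 8-port switch at each speed (ballpark only, using the numbers above):

```python
# Rough per-port PHY power draw at full speed (ballpark figures from above)
watts_per_port = {
    "100BASE-T": 0.1,
    "1000BASE-T": 1.0,
    "10GBASE-T": 10.0,
}

PORTS = 8  # e.g. the 8-port switches discussed in this thread

for standard, watts in watts_per_port.items():
    print(f"{PORTS}x {standard}: ~{PORTS * watts:g} W just for the PHYs")
```

Which is part of why the early 8-port 10GBase-T boxes tend to be loud, fan-cooled units rather than fanless desktop switches.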
 
The current value leader for 8 ports of 10G is this guy: https://www.amazon.com/NETGEAR-ProSAFE-10-Gigabit-Ethernet-XS708E-200NES/dp/B01GTWPTJY

Warehouse deals pricing hovers between $525-$560.

Yep. It's a nice switch. I wish it had like 16 gigabit ports in addition to the 10gig ports. If it did, I'd buy it in a heartbeat.

My current 24-port gigabit ProCurve is just about full (I might have one or two open ports). It would be nice to move my server, workstation and other 10gig-equipped devices to 10gig ports and still be able to keep the stuff that only needs gigabit speeds connected.

Otherwise I need to find a way to link two switches together, which is a pain in the butt and inefficient.

I guess I could link up a switch like this with my ProCurve by using 4 of the ports in link aggregation mode, keeping the remaining 4 ports for 10gig duty, but that seems like such a waste of those expensive 10gig ports...

I want something like in the old days, when we had 24- or 48-port 100Mbit switches with a few gigabit uplink ports.

I'd totally buy a many port gigabit switch with a few 10gig uplink ports.
 
We're already at $30 a port for 10GbE with Ubiquiti...

https://www.amazon.com/Ubiquiti-Networks-ES-16-XG-Edge-Switch/dp/B01K2Y1HP0

16 ports (4x 10GBase-T & 12x 10GbE SFP+) for $511; divide that by 16 and you get about $32 a port.

I got excited for a moment. Then I saw that it was SFP+ for most of the ports.

I tried playing with a couple of Brocade adapters and matching transceivers a while back. I had nothing but trouble. Ever since I got my Intel 10gig adapters everything has been fine.

(Well, except for the fact that Intel hasn't released a driver for the 82598EB 10gig chip for Windows 10, which is kind of a bummer, but I don't really need it under Windows anyway)
 
Can you use Cat5e?

Possibly, for very short runs. Just like gigabit Ethernet was specced for Cat5e but often works over older Cat5 for shorter runs, 10GBase-T was specced for Cat6a (and Cat7), but Cat6 works for shorter runs.

Cat5e would probably be pushing it, but as mentioned above, it may do the trick for very short runs.

The only official info I can find is that it is specced up to 100m (328ft) using Cat7 or Cat6a cables, and up to 55m (180ft) with Cat6 cables.

It may work over Cat5e, but - as mentioned before - at greatly reduced run lengths. Or you might find that it won't work at all and will just drop down to gigabit speeds.

In my house I have Cat5e cables in the walls, so when I ran my dedicated 10gig run to my server, I didn't use the cables in the walls.
 
If you are anything like me, you have for some time been excited about upgrading your home network to 10 gigabit speeds
Can SATA III and the overall bus width available between a high-end SSD and your NIC even support 10Gbps transfers?
 
Can SATA III and the overall bus width available between a high-end SSD and your NIC even support 10Gbps transfers?

SATA3 peaks around 600MB/sec minus overhead. An M.2 (PCIe 3.0 x4) drive will run around 3.2GB/sec (the fastest speed I've seen from a Samsung 960 Pro, anyway). 10Gb will peak at around 1200MB/sec, and regular 1Gb around 120MB/sec.

The problem here is that limit on 1GbE. It's easy for even a regular HDD to saturate 1GbE. While a home setup probably won't see peak transfers on 10GbE, it WILL be able to push sustained transfers much higher than a 1GbE link.

For example, moving large files from my local PC to my NAS. With my current 1GbE, I hit 120MB/sec easily, and it stays there. If I went 10GbE, I'd hit around 350MB/sec (the max write speed of my NAS). It's not full 10GbE, but it's 3 times faster than what I see now.
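
If you want to sanity-check those numbers, here's a rough back-of-the-envelope sketch in Python. The ~4% protocol overhead and the 350MB/s NAS write speed are just the approximate figures from this post, not measured values:

```python
# Back-of-the-envelope: convert link rates to usable MB/s and see where
# the bottleneck lands when writing to the NAS.

OVERHEAD = 0.04      # assumed framing/protocol overhead (~4%)
NAS_WRITE_MB = 350   # approximate NAS array write speed from this post (MB/s)

def usable_mb_per_sec(gbit: float) -> float:
    """Usable payload rate in MB/s for a link of the given speed in Gbit/s."""
    return gbit * 1e9 * (1 - OVERHEAD) / 8 / 1e6

for name, gbit in [("1GbE", 1), ("10GbE", 10)]:
    link = usable_mb_per_sec(gbit)
    effective = min(link, NAS_WRITE_MB)
    print(f"{name}: ~{link:.0f} MB/s on the wire, ~{effective:.0f} MB/s to the NAS")
```

On gigabit the wire is the bottleneck; on 10GbE the NAS write speed is, which is exactly the ~3x difference described above.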
 
Can you use Cat5e?

Intel says their 10GBase-T cards (i.e., 10Gb over standard copper cables) can do 10Gb over Cat5e to 20m. I believe (though I haven't tested it) that this assumes a single cable; if you have a tight bundle of them, the crosstalk is likely to greatly affect your transmission distance. Cat6 goes to ~50m and doesn't care so much about other cables, and Cat6a is necessary for a traditional 100m install (which is why the shorthand version is that "10Gb requires Cat6a").

Also, "Cat7" is mostly a myth pushed by cable vendors to charge you extra money for bullshit cables, unless (god help you) you've got hold of one of the few switches that actually use Cat7 connectors (the cables have the pairs separated out into sort of an X shape on the end). The actual Cat7 spec calls for a different end connector, not RJ45. There's also a Cat7a that never got off the ground. Cat8 is kinda-sorta ready to go, but it's still just a spec sheet.
 
10Gb is sweet.

I have a couple computers hooked up with 10Gb fiber.

They are older cards that are no longer supported, but they still work fine in Windows 10 and were dirt cheap to boot. Because they are dual port cards I can daisychain a bunch of stuff together without the need for a switch.
 
I just scrounge old DC parts. You can pick up a couple of old Mellanox NICs, SFP+s and some OM3 off ebay and build a point to point network at home for about $30 a node.

Sure you'll have to pull fiber, but unless your twisted pair is new, you'll have to upgrade it for 10gbase-T anyway.
 
The problem here is that limit on 1GbE. It's easy for even a regular HDD to saturate 1GbE. While a home setup probably won't see peak transfers on 10GbE, it WILL be able to push sustained transfers much higher than a 1GbE link.

For example, moving large files from my local PC to my NAS. With my current 1GbE, I hit 120MB/sec easily, and it stays there. If I went 10GbE, I'd hit around 350MB/sec (the max write speed of my NAS). It's not full 10GbE, but it's 3 times faster than what I see now.

Exactly.

With my NAS (twelve 4TB WD Reds in a 2x6 RAIDZ2 configuration, roughly the ZFS equivalent of RAID 60) I very rarely max it out. I do have a bunch of SSD and RAM caching going on, though.

When something happens to be in the RAM cache, I'll pretty much max out at 1.2GB/s; from the SSD cache it's slower, and from the drives slower still, but my drives can hit a few hundred MB/s on their own without the cache, so while I may not be maxing out 10 gigabit, I'm certainly getting 3-5 times the speed I can get with just gigabit.

I've found my network file transfers to be rather bursty. They'll often blast up to between 800MB/s and 1.2GB/s and then drop down for a bit, averaging maybe 300-400MB/s. This is probably because I have many things accessing the same ZFS pool at the same time. At any given moment I could be copying files to and from it from my desktop, backups could be read from it going to my CrashPlan account, my DVR could start recording a TV show, any one of the 3 HTPCs in the house could start playing back a DVR'd show or something from my media library, or one of my friends could log on to my SFTP server, etc.

If I only had a gigabit connection to my NAS, it would peak at ~120MB/s, with the troughs falling below that and the average transfer speed being a bit lower still. Being able to peak all the way up to 1.2GB/s allows for fairly decent average transfer speeds, between 3 and 5 times faster than gigabit would allow.
 
I just scrounge old DC parts. You can pick up a couple of old Mellanox NICs, SFP+s and some OM3 off ebay and build a point to point network at home for about $30 a node.

Sure you'll have to pull fiber, but unless your twisted pair is new, you'll have to upgrade it for 10gbase-T anyway.

Maybe people will have better luck with Mellanox than I did with Brocade. Brocade was a mess for me, and made me swear off fiber for good.
 
I have absolutely no need, but would totally upgrade my entire house if 10gbps came down to these prices. I hate wireless for anything except my phones/tablets.
 
I had my new house wired with Cat6a and then started researching 10gig hardware, and was totally bummed to see that it didn't really exist for consumers. But it looks like this might be out around the same time my house is done being built, so I'm totally pumped.
 
Maybe people will have better luck with Mellanox than I did with Brocade. Brocade was a mess for me, and made me swear off fiber for good.

My experience with Brocade is that they're picky as hell with SFPs. If it didn't come with the card, I don't trust it. Twinax is a little better, but limited to 10m cable lengths.

I've had decent luck with Mellanox, probably because I have the advantage of a pile of gear at work to test out compatibility before I go shopping for home. We're mostly running Intel and Chelsio.
 
Maybe people will have better luck with Mellanox than I did with Brocade. Brocade was a mess for me, and made me swear off fiber for good.

The ones I use are Chelsio cards that were sold by Netapp. They work great and were super easy to set up.

Although I got mine cheaper than this, you can get dual port cards with SFP+ transceivers included for $50 a pop.
http://www.ebay.com/itm/X1008A-R6-N...868042?hash=item5698731bca:g:VmsAAOSwBOtY95xR

You can also get single port Chelsio cards for $17 a pop.
http://www.ebay.com/itm/Chelsio-110...490540?hash=item281bc3862c:g:hY4AAOSwlMFZMYsi
 
I got excited for a moment. Then I saw that it was SFP+ for most of the ports.

I tried playing with a couple of Brocade adapters and matching transceivers a while back. I had nothing but trouble. Ever since I got my Intel 10gig adapters everything has been fine.

(Well, except for the fact that Intel hasn't released a driver for the 82598EB 10gig chip for Windows 10, which is kind of a bummer, but I don't really need it under Windows anyway)

You can pick up a bunch of X520-DA1 adapters on eBay for like 50-70 bucks (server pulls) which will easily work, and if you're lucky, you can find twinax or fiber transceivers for like 20 bucks per transceiver/cable (either third party or server pulls). They also have a tested third-party compatibility list on the community page which makes things a little easier:

https://forums.servethehome.com/ind...itch-es-16-xg-sfp-compatibility-thread.11129/

However, I hear you when it comes to inexpensive and simple Cat6 cabling. You really can't beat switches that just use that.
 
This is good news. I was just looking at 10G solutions between my NAS and 2 workstations.
 
8-port 10Gb for $300? Sold. Adapters are inexpensive and can be found on eBay.
 
Going to wire my new house soon with CAT6a. Looks fun with 10G coming about.
 
I'd settle for a reasonably priced, quiet domestic switch that could give me 200MB/s, quite honestly.
 
This looks like copper only. What a shame. SFP+ gives you so many (better) options...
 
I remember paying $10,000 a port in 2008, what with only being able to fit a few ports in a 6509 because the supervisor modules weren't fast enough.

Was worth it though; the ESX boxes I deployed on them had a 120:1 consolidation ratio, so we were still way ahead.
 
Going to wire my new house soon with CAT6a. Looks fun with 10G coming about.
Cat6a is trickier to work with. It's shielded, so you can't cut back the sheathing as far or as easily, and you can untwist far less of each pair next to the connector.
 
It's also (or at least can be) a real pain in the ass to pull through walls when you can't see what you're doing. It is very inflexible, so if you need it to bend around something in the wall, it isn't happening. On the other hand, it's much easier to push through horizontal spaces due to the added rigidity, again assuming that you aren't trying to go around anything.

Edit: Oh, and it's actually pretty difficult to get a "good" Cat6a termination, in terms of meeting specs. If you don't have a tester that will certify your crimps, they probably won't be correct. It might work anyway, depending on how far off you were and what your run lengths look like. Just realize that if you're doing a straight continuity test and it looks good but you can't get a 10Gb link on it, then your crimps are bad.
 
This looks like copper only. What a shame. SFP+ gives you so many (better) options...


It's a mixed blessing.

I had a hell of a time getting my Brocade adapters to work properly. LC cable problems, transceiver (from the fiber store) issues, as well as Linux driver problems.

Even when they worked, I never got them above ~2Gbit/s consistently, with 3-4Gbit/s sporadically, and a lot of the time I just couldn't get them to work at all.

My used server-pull Intel 82598EB 10GBase-T adapters, however, just work. 9.5-9.8Gbit/s out of the box, as measured with iperf.

Maybe my problems with the Brocades were isolated incidents, but it really left such a bad taste in my mouth that you'd have to twist my arm to try fiber again.
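
If anyone wants to benchmark their own link the same way, here's a minimal Python sketch that shells out to iperf3 and reads the result from its JSON report. It assumes iperf3 is installed on both machines, that `iperf3 -s` is already running on the far end, and that the server address below is just a placeholder:

```python
import json
import subprocess

SERVER = "192.168.1.50"  # placeholder: the box already running `iperf3 -s`

# Run a 10-second TCP test and parse the JSON report iperf3 produces.
result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "--json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
gbits = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"Measured throughput: {gbits:.2f} Gbit/s")
```

Anything close to ~9.4 Gbit/s and up means the link itself isn't your bottleneck.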
 
Cat6a is trickier to work with. It's shielded, so you can't cut back the sheathing as far or as easily, and you can untwist far less of each pair next to the connector.

It's also (or at least can be) a real pain in the ass to pull through walls when you can't see what you're doing. It is very inflexible, so if you need it to bend around something in the wall, it isn't happening. On the other hand, it's much easier to push through horizontal spaces due to the added rigidity, again assuming that you aren't trying to go around anything.

Edit: Oh, and it's actually pretty difficult to get a "good" Cat6a termination, in terms of meeting specs. If you don't have a tester that will certify your crimps, they probably won't be correct. It might work anyway, depending on how far off you were and what your run lengths look like. Just realize that if you're doing a straight continuity test and it looks good but you can't get a 10Gb link on it, then your crimps are bad.

I've done 100+ terminations with Cat5e/Cat6, but haven't done any testing at 10Gb with Cat6a. Since each run is well under the 100-meter spec for 10G (I'm looking at 150 feet max, under half of the spec), I'm hoping my DIY terminations work well. Thanks for the heads up!
 
I need to figure out what adapters work well with my Mac Pro, Windows 10 box, and Synology 1817+. I hadn't even thought about Win10 drivers not being available until it was mentioned earlier in the thread.

I'll have 3 devices that need 10G, with a bunch just needing 1G. That D-Link switch that was linked looks like it would work perfectly if I went the fiber route.
 