Intel I226-V 2.5GbE Ethernet Chipset Showing Connection Drop Issues (Chipset Used on Most Raptor Lake Motherboards)

StryderxX

TechPowerUp posted an article today regarding random disconnect issues with Intel's latest Ethernet controller (I226-V), which most motherboard manufacturers are integrating on their Raptor Lake (700-series) boards. Here's a quote:

The Intel Ethernet i226-V onboard 2.5 GbE controller appears to have a design flaw that causes the Ethernet connection to drop at random times for a few seconds. The I226-V is the latest version of Intel's cost-effective 2.5 Gbps Ethernet networking chips meant for PC motherboards with chipsets that have integrated MACs (i.e. Intel chipsets).

I checked Event Viewer on my PC, which uses the ASUS ROG Strix Z790-F motherboard, and I'm seeing the problem. Details on the particular error messages can be found in the article.

TechPowerUp Article

How many of you are experiencing this issue?
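
If you want a simple way to check whether your own box is affected without babysitting Event Viewer, here's a minimal sketch that logs brief drops by pinging a steady target. The gateway address and the ping flags are assumptions: this uses the Linux/macOS `ping -c 1 -W 1` form, so on Windows swap in `ping -n 1 -w 1000`.

```python
#!/usr/bin/env python3
"""Log brief network drops by pinging a steady target once per second.

Sketch only: HOST and the ping flags are assumptions -- this uses the
Linux/macOS `ping -c 1 -W 1` form; on Windows use `ping -n 1 -w 1000`.
"""
import subprocess
import time
from datetime import datetime

HOST = "192.168.1.1"        # your gateway or another always-on box
PING = ["ping", "-c", "1", "-W", "1", HOST]

def up() -> bool:
    # Return code 0 means we got a reply within the 1 s timeout.
    return subprocess.run(PING, stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL).returncode == 0

was_up = True
down_since = None
while True:
    ok = up()
    now = datetime.now()
    if was_up and not ok:
        down_since = now
        print(f"{now:%H:%M:%S}  link appears DOWN")
    elif not was_up and ok:
        outage = (now - down_since).total_seconds()
        print(f"{now:%H:%M:%S}  link back UP after {outage:.1f}s")
    was_up = ok
    time.sleep(1)
```

Let it run in a terminal for a few hours; the multi-second outages people are describing should show up as DOWN/UP pairs.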
 
Huh. And I thought mine with a Realtek was inferior

For 2.5G, the tables have turned. I hear good things about realtek (although no FreeBSD drivers for their usb 2.5g nic), and bad things about Intel. For 1G and 10G, Intel's been great for me, and I've had real issues with Realtek, some of which are easy to reproduce, ugh.
 
For 2.5G, the tables have turned. I hear good things about realtek (although no FreeBSD drivers for their usb 2.5g nic), and bad things about Intel. For 1G and 10G, Intel's been great for me, and I've had real issues with Realtek, some of which are easy to reproduce, ugh.
Yeah, Intel's 2.5G has been total dogshit. It's amazing they are so incompetent that several revisions and a generation later they still can't get it right.
 
Yeah, Intel's 2.5G has been total dogshit. It's amazing they are so incompetent that several revisions and a generation later they still can't get it right.
The 225 was all borked, now the 226 as well? I've been seeing good things with the RTL stuff and pfSense, I may have to go that route, or just drop in an AQC107 based NIC and call it a day.
 
I had to stop using Intel i225-V because of a known problem with Fios ONT. Even Intel's fix for it didn't work. No problems with Realtek. I haven't tried a board with the i226
 
Dang, the Z690 problem some motherboards had with the PCIe slot causing tons of WHEA errors was bad enough. Glad I decided to skip this generation.
 
This has been an issue ever since the first Intel 2.5Gb variant was released. Intel and MB makers have claimed they implemented new variants/fixes, but every time, anyone paying attention reports it's just as bad as before. It is a fundamentally flawed design that cannot be fixed with either new hardware or new software. Stick with Intel 1GbE, Realtek 2.5GbE, or upgrade all the way to 10GbE. Intel 2.5GbE is fundamentally broken.

It really goes to show how much YouTube Reviewers have no concept of the real-world when it comes to usage of the hardware they review. You never hear a peep out of any of them on issues like this and the USB disconnect bug on AM4 motherboards, both years-long widespread issues that were never fixed, yet every review keeps recommending these products with not even a side mention of the ongoing serious issues that should disqualify them from recommendation.
 
For 2.5G, the tables have turned. I hear good things about realtek (although no FreeBSD drivers for their usb 2.5g nic), and bad things about Intel. For 1G and 10G, Intel's been great for me, and I've had real issues with Realtek, some of which are easy to reproduce, ugh.
The x550s do multi gig fine and have bsd support.
 
Just fantastic. I just got my ASUS ROG Strix Z790-E 3 days ago, which has the i226, and didn't even get to run it yet.

I could return it but... Newegg...
 
 
How does Intel manage to have what is generally considered to be one of the better 10GbE chipsets alongside Mellanox, only to screw up a 2.5GbE implementation on consumer-class boards?

Fortunately, I'm not noticing any random disconnect issues on my I225-V (I have a Z690 board) right now, but if anything happens, I won't hesitate to cram a real NIC in this system, even if I need to cough up several PCIe lanes for 10GbE or even 40GbE/FDR InfiniBand.

Perhaps part of that is because all my Ethernet switches and routers are still 1GbE at most, none of this 2.5GbE quarter-assed crap when we should all be on 10GbE across the board by now for home systems instead of 1GbE that's stagnated longer than Skylake-era Intel.
 
Yeah, most people say the issues go away at 1G, so that could be why you're not seeing anything.
So they make such a selling point out of 2.5GbE, only for it to be objectively worse at stability than 1GbE while also being supported by far less networking equipment than 10GbE due to being a newer standard?

Talk about giving people zero reason to really upgrade their switches and routers at all if it's not even stable. Networking is one of the last things I'd ever want to be unstable to begin with, even if more bandwidth would be nice for clustering purposes.
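
On that note, if you're not sure what speed your NIC actually negotiated after a workaround, here's a quick sketch for checking the link speed on Linux; the interface name is an assumption, and Windows users can just look at the adapter status dialog instead.

```python
#!/usr/bin/env python3
"""Print the link speed a NIC actually negotiated (Linux).

Sketch only: IFACE is an assumption -- check `ip link` for yours.
/sys/class/net/<iface>/speed reports the speed in Mb/s (e.g. 1000 or
2500); it may read -1 or error out if the link is down. `ethtool <iface>`
shows the same information.
"""
from pathlib import Path

IFACE = "enp5s0"  # replace with your interface name

speed = Path(f"/sys/class/net/{IFACE}/speed").read_text().strip()
print(f"{IFACE} negotiated link speed: {speed} Mb/s")
```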
 
Yeah, Intel's 2.5G has been total dogshit. It's amazing they are so incompetent that several revisions and a generation later they still can't get it right.

This is surprising to me too.

Intel has been strong in the Enterprise world, which probably explains their excellent 1, 10 (and 40, and 25 and 100) gig results. 2.5 and 5 gig are ultimately consumer standards and probably didn't get the same levels of validation, which is expensive.

That said I still don't understand the reason for 2.5G and 5G to exist. 10G had existed and worked well for over a decade. We should have just mass adopted 10G in the consumer space.
 
So they make such a selling point out of 2.5GbE, only for it to be objectively worse at stability than 1GbE while also being supported by far less networking equipment than 10GbE due to being a newer standard?

Talk about giving people zero reason to really upgrade their switches and routers at all if it's not even stable. Networking is one of the last things I'd ever want to be unstable to begin with, even if more bandwidth would be nice for clustering purposes.

Yeah, used to be the 2.5Gb NICs were more expensive than 10G too, although prices are coming down and 2.5G nics aren't too bad, and switches are getting almost close to maybe reasonable. I suspect we'll see some uptake soon. But IMHO, gigE on consumer parts started really shipping when Apple did it, and they only recently included anything more than 1G, so that's probably the kick in the pants needed.
 
My X570 board has the Intel i225 2.5GbE adapter built-in. I screwed around trying to make it work at any speed for a week before disabling it and replacing it with an older Intel CT card.
 
This is surprising to me too.

Intel has been strong in the Enterprise world, which probably explains their excellent 1, 10 (and 40, and 25 and 100) gig results. 2.5 and 5 gig are ultimately consumer standards and probably didn't get the same levels of validation, which is expensive.

That said I still don't understand the reason for 2.5G and 5G to exist. 10G had existed and worked well for over a decade. We should have just mass adopted 10G in the consumer space.
Cables. 10g is a pain over twisted pair. To get it 100m you need either shielded wire or REALLY well made, and thick, UTP (Belden makes some). It's expensive, but also it requires rewiring your building. Remember the commercial market drives this stuff. Now it can work over Cat-6, but at shorter distances, and even shorter over Cat-5e. Also if you have an old school 10gig NIC, not a multi-gig one, it'll often just not work if there's a run that is too long/degraded for it. It won't step down the speed, it just won't link unless you manually set it to 1gig. So not great for business. You could spend a lot getting 10gig switches and NICs, only to find out that very few of your cables are actually good enough to link up at 10gig, the rest get no improvement. Then you either have to eat it, or recable the whole damn building.

Well multi-gig offers a solution. Here you have something that can step down speeds, and will do what it can based on the run. So you can have something that'll do 10/5/2.5/1. Generally speaking nearly all runs that worked at 1gig will work at 2.5, so you get at least a 2.5x performance improvement in all cases. Plenty of the more mid range stuff probably can link up at 5, and then the closest stuff at 10. You get pretty much an across the board improvement, and if something needs more speed you can always re run just that specific cable.

Also there's the advantage that you don't have to do the full 10gig speed when you make a NIC (or switch), like this one. 10gig still is costly for an ASIC, and they use more power. If you look at 10gig NICs you'll see they almost always have heatsinks, 1gig ones rarely do. A 2.5gig NIC isn't as intense, either cost or power requirement wise. So it can be good for a situation where you want to keep the cost down like an integrated motherboard chip, or consumer switch/router. It is still over twice as fast, which is not an improvement to sneeze at, yet costs less.

It just makes the adoption much easier for end-user devices. For servers sure, 10gig or more isn't an issue, you usually go with a twinax cable or fiber anyhow. Cost isn't such a concern. But for desktops, there's a reason to want the lower/more flexible speeds.
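
To put the "at least a 2.5x improvement" point in rough numbers, here's a quick back-of-envelope calculation of best-case wire time for a 100 GB transfer at each multi-gig speed; it ignores protocol overhead, so treat the results as lower bounds.

```python
#!/usr/bin/env python3
"""Best-case wire time for a 100 GB transfer at each multi-gig speed.

Rough arithmetic only: multiplies by 8 for bytes -> bits and ignores
protocol overhead, so real numbers will be somewhat worse.
"""
GB = 100                      # transfer size in gigabytes
for gbps in (1, 2.5, 5, 10):
    seconds = GB * 8 / gbps   # GB * 8 bits/byte / (Gbit/s)
    print(f"{gbps:>4} Gbps: {seconds / 60:5.1f} min")
```

That works out to roughly 13 minutes at 1gig, about 5 at 2.5gig, and under a minute and a half at 10gig.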
 
Cables. 10g is a pain over twisted pair. To get it 100m you need either shielded wire or REALLY well made, and thick, UTP (Belden makes some). It's expensive, but also it requires rewiring your building. Remember the commercial market drives this stuff. Now it can work over Cat-6, but at shorter distances, and even shorter over Cat-5e. Also if you have an old school 10gig NIC, not a multi-gig one, it'll often just not work if there's a run that is too long/degraded for it. It won't step down the speed, it just won't link unless you manually set it to 1gig. So not great for business. You could spend a lot getting 10gig switches and NICs, only to find out that very few of your cables are actually good enough to link up at 10gig, the rest get no improvement. Then you either have to eat it, or recable the whole damn building.

Well multi-gig offers a solution. Here you have something that can step down speeds, and will do what it can based on the run. So you can have something that'll do 10/5/2.5/1. Generally speaking nearly all runs that worked at 1gig will work at 2.5, so you get at least a 2.5x performance improvement in all cases. Plenty of the more mid range stuff probably can link up at 5, and then the closest stuff at 10. You get pretty much an across the board improvement, and if something needs more speed you can always re run just that specific cable.

Also there's the advantage that you don't have to do the full 10gig speed when you make a NIC (or switch), like this one. 10gig still is costly for an ASIC, and they use more power. If you look at 10gig NICs you'll see they almost always have heatsinks, 1gig ones rarely do. A 2.5gig NIC isn't as intense, either cost or power requirement wise. So it can be good for a situation where you want to keep the cost down like an integrated motherboard chip, or consumer switch/router. It is still over twice as fast, which is not an improvement to sneeze at, yet costs less.

It just makes the adoption much easier for end-user devices. For servers sure, 10gig or more isn't an issue, you usually go with a twinax cable or fiber anyhow. Cost isn't such a concern. But for desktops, there's a reason to want the lower/more flexible speeds.
do you have extensive real world experience with multi-gig or 10g networking?
 
Cables. 10g is a pain over twisted pair. To get it 100m you need either shielded wire or REALLY well made, and thick, UTP (Belden makes some). It's expensive, but also it requires rewiring your building. Remember the commercial market drives this stuff. Now it can work over Cat-6, but at shorter distances, and even shorter over Cat-5e. Also if you have an old school 10gig NIC, not a multi-gig one, it'll often just not work if there's a run that is too long/degraded for it. It won't step down the speed, it just won't link unless you manually set it to 1gig. So not great for business. You could spend a lot getting 10gig switches and NICs, only to find out that very few of your cables are actually good enough to link up at 10gig, the rest get no improvement. Then you either have to eat it, or recable the whole damn building.

Well multi-gig offers a solution. Here you have something that can step down speeds, and will do what it can based on the run. So you can have something that'll do 10/5/2.5/1. Generally speaking nearly all runs that worked at 1gig will work at 2.5, so you get at least a 2.5x performance improvement in all cases. Plenty of the more mid range stuff probably can link up at 5, and then the closest stuff at 10. You get pretty much an across the board improvement, and if something needs more speed you can always re run just that specific cable.

Also there's the advantage that you don't have to do the full 10gig speed when you make a NIC (or switch), like this one. 10gig still is costly for an ASIC, and they use more power. If you look at 10gig NICs you'll see they almost always have heatsinks, 1gig ones rarely do. A 2.5gig NIC isn't as intense, either cost or power requirement wise. So it can be good for a situation where you want to keep the cost down like an integrated motherboard chip, or consumer switch/router. It is still over twice as fast, which is not an improvement to sneeze at, yet costs less.

It just makes the adoption much easier for end-user devices. For servers sure, 10gig or more isn't an issue, you usually go with a twinax cable or fiber anyhow. Cost isn't such a concern. But for desktops, there's a reason to want the lower/more flexible speeds.

I don't know man.

Cat 6a cabling is slightly more expensive than 5e or 6, but it's not exactly budget busting

True, it will cost you if you have to re-run cabling through, so I believe that the inability to use existing cables might be a motivator, but cables don't last forever, so needing to replace them was always going to happen.

If anything I bet it's the cost of running new cable that is the problem, not the cost of the cable itself.

Personally I just use fiber for 10gig though

My 10gig Intel 82599 based adapters do have big heatsinks and do get hot, but they are also 65nm chips. I bet they would be a lot cooler on an Intel 4 or Intel 3 node.

65nm is ancient.
 
My board has the i225 and I've always had problems with long downloads, like 100GB games. It'll eventually trickle down to a couple kb/s and I have to pause it then resume. I switched to using my onboard WiFi and it works perfectly.
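
If anyone wants to catch that trickle-down in the act, here's a minimal sketch that streams a large file and prints rolling throughput; the URL is just a placeholder, so point it at whatever big file you'd normally download.

```python
#!/usr/bin/env python3
"""Stream a large file and print rolling throughput, to spot the point
where a long download collapses to a crawl.

Sketch only: URL is a placeholder -- point it at any large file you
would normally download.
"""
import time
import urllib.request

URL = "https://example.com/large-file.bin"   # placeholder

CHUNK = 1 << 20          # 1 MiB reads
window_bytes = 0
window_start = time.time()

with urllib.request.urlopen(URL) as resp:
    while True:
        chunk = resp.read(CHUNK)
        if not chunk:
            break
        window_bytes += len(chunk)
        now = time.time()
        if now - window_start >= 5:          # report every ~5 s
            mbps = window_bytes / (now - window_start) / 1e6
            print(f"{time.strftime('%H:%M:%S')}  {mbps:7.1f} MB/s")
            window_bytes, window_start = 0, now
```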
 
do you have extensive real world experience with multi-gig or 10g networking?
Yes. I'm an IT guy (my title is technically Information Technology Architect III) for a big university. We've been dealing with the whole "when do we upgrade the cables" thing since long before this as we were doing networking back in the 10mb/Cat-3 days. Rewiring a building costs a buttload of money. It is a particularly hard sell if the building is new-ish and already has decent wiring. Labor costs aside (which are the biggest part) Cat-6a sucks. S/FTP is a pain to work with, and the UTP stuff is MASSIVE, like bigger than coax, and stiff. Both are a much bigger PITA than regular Cat-6.

The other thing is that switches are a good deal more expensive for 10gig than 1gig. It has gotten better, but it is still a big price delta. So you aren't going to be real happy to spend that kind of money, only to have it work on like 30% of your connections or the like.

Multigig really does help this, a lot. It can get you more speed, over existing wires, at longer distances. Particularly 2.5gig it really seems to work in most cases that 1gig does. It is also good for WiFi. High end WiFi 6 stuff is to the point where you can do more than a gig of traffic, and of course more shit is on WiFi than ever. So, upgrade your APs, and then run a 2.5g connection to them, and there you go, more WiFi bandwidth. You see some switches designed with that kind of thing in mind like the Juniper EX4100 multigig. It has 32 1gig ports, and 16 2.5gig ports. The idea being it keeps cost and power requirements down, you use the 1gig ports for most of your wired stuff, but 2.5gig for where you need it, and it does PoE because APs might be where you are using it. Likewise they have ones that have some 10/5/2.5/1 ports, and more 2.5/1 ports because again, cost, power, etc.

The higher the signaling rate, the more it costs both for the PHY and the ASIC.

I don't know man.

Cat 6a cabling is slightly more expensive than 5e or 6, but it's not exactly budget busting.
The cost isn't the cables, it is the labor. If you are doing things at home, sure, just get whatever and run it, no problem. However, if you have a building with 1000 jacks with cables through the wall, you are paying a bunch of dudes to come in and redo that. That is a 6 figure kind of job, maybe more. You'd really rather not do that, if possible. Eventually it has to be done, but doing it every time a new cable standard comes out is not something you want.

Also size matters in these cases. Sometimes even Cat-6 is an issue since it is thicker than Cat-5e, though not a ton. When you are running hundreds of cables through conduit, there is limited room. If you run out what do you do? Remove some jacks? Tear out the conduit and replace it, at a MASSIVE cost?

Personally I just use fiber for 10gig
Again, easy at home, or in a datacenter, not so easy in a building. You'd have to re-run everything. Fiber also has the issue that not everything supports it, and that it can't carry power. If you built a building and did fiber to all the desks, you'd need to buy NICs for every single computer, no desktop is shipping with that built in. Laptops or other devices that wanted to hook up wired are screwed, they don't have fiber and while you can get USB fiber NICs, they aren't the kind of thing anyone has normally. APs can be had with fiber, but then you are running power to them, no PoE.

My 10gig Intel 82599 based adapters do have big heatsinks and do get hot, but they are also 65nm chips. I bet they would be a lot cooler on an Intel 4 or Intel 3 node.

65nm is ancient.
New nodes are expensive, and in demand for other shit. Hence NICs get made on older stuff. But it isn't just the NICs that are an issue, it is the other end. The more traffic you need to push, the bigger, costlier, and more power hungry the ASIC that is needed to support that gets. 10gig switches are a good bit more expensive than 1gig, particularly enterprise ones that have a lot of features. Even on just the consumer level, though: if you go and look at a basic-ass, unmanaged, 10gig switch, TP-LINK has a 5-port for $270, Netgear has a 10-port with 8x1g 2x10gig for $220. A TP-LINK 1gig 5 port? Shit, that's $15, a Netgear 8-port is $25. Those ASICs (and PHYs) on the other end cost a lot more for the switch.

2.5g is still more expensive than 1g, and not by a little bit, but better. A Trendnet 5-port 2.5g switch is $100.

Multi-gig is a good stepping stone with less cost, the same way 25gig is on fiber in data centers. 2.5gig gets you 2.5x the speed of 1gig, in the same way that 25gig gets you 2.5x the speed of 10gig, and you don't have to replace much, if any, cabling. That's part of 25gig's appeal: if you have servers that need more bandwidth, but not a ton more, you can replace the switch, NICs, and SFPs in the systems that need it but everything else is the same, and you could keep the 10gig SFPs for the servers that don't need the extra speed yet.
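
For anyone curious how lopsided those switch prices are once you break them down, here's a quick per-port and per-Gbit calculation using the numbers quoted above; street prices move around, so treat these as rough.

```python
#!/usr/bin/env python3
"""Cost per port and per Gbit/s for the single-speed unmanaged switches
quoted above. Prices are the ones mentioned in the post; street prices
move around, so the exact figures are approximate."""
switches = [
    # (name,                   price $, ports, Gbps per port)
    ("TP-Link 5-port 1G",          15,   5,   1),
    ("Netgear 8-port 1G",          25,   8,   1),
    ("Trendnet 5-port 2.5G",      100,   5,   2.5),
    ("TP-Link 5-port 10G",        270,   5,  10),
]
for name, price, ports, gbps in switches:
    per_port = price / ports
    per_gbps = price / (ports * gbps)
    print(f"{name:24s} ${per_port:6.2f}/port   ${per_gbps:5.2f}/Gbps")
```

The per-Gbit numbers are what make multi-gig and 10gig look better on paper, but the per-port cost is what the person signing the purchase order sees.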
 
For 2.5G, the tables have turned. I hear good things about realtek (although no FreeBSD drivers for their usb 2.5g nic), and bad things about Intel. For 1G and 10G, Intel's been great for me, and I've had real issues with Realtek, some of which are easy to reproduce, ugh.
Every generation has different issues. I can go on for HOURS about the 1st gen 10G cards and which worked, which didn't, etc. Now we're seeing it with multi-gig, that's it.
 
This is surprising to me too.

Intel has been strong in the Enterprise world, which probably explains their excellent 1, 10 (and 40, and 25 and 100) gig results. 2.5 and 5 gig are ultimately consumer standards and probably didn't get the same levels of validation, which is expensive.

That said I still don't understand the reason for 2.5G and 5G to exist. 10G had existed and worked well for over a decade. We should have just mass adopted 10G in the consumer space.
Multi-gig (1/2.5/5/10) and the original 10G specs are actually different IEEE standards. They CAN be cross-compatible, but aren't necessarily (which is why a lot of the 10G cards are 10/1, not the full spectrum). New spec, new bugs.
 
Cables. 10g is a pain over twisted pair. To get it 100m you need either shielded wire or REALLY well made, and thick, UTP (Belden makes some). It's expensive, but also it requires rewiring your building. Remember the commercial market drives this stuff. Now it can work over Cat-6, but at shorter distances, and even shorter over Cat-5e. Also if you have an old school 10gig NIC, not a multi-gig one, it'll often just not work if there's a run that is too long/degraded for it. It won't step down the speed, it just won't link unless you manually set it to 1gig. So not great for business. You could spend a lot getting 10gig switches and NICs, only to find out that very few of your cables are actually good enough to link up at 10gig, the rest get no improvement. Then you either have to eat it, or recable the whole damn building.

Well multi-gig offers a solution. Here you have something that can step down speeds, and will do what it can based on the run. So you can have something that'll do 10/5/2.5/1. Generally speaking nearly all runs that worked at 1gig will work at 2.5, so you get at least a 2.5x performance improvement in all cases. Plenty of the more mid range stuff probably can link up at 5, and then the closest stuff at 10. You get pretty much an across the board improvement, and if something needs more speed you can always re run just that specific cable.

Also there's the advantage that you don't have to do the full 10gig speed when you make a NIC (or switch), like this one. 10gig still is costly for an ASIC, and they use more power. If you look at 10gig NICs you'll see they almost always have heatsinks, 1gig ones rarely do. A 2.5gig NIC isn't as intense, either cost or power requirement wise. So it can be good for a situation where you want to keep the cost down like an integrated motherboard chip, or consumer switch/router. It is still over twice as fast, which is not an improvement to sneeze at, yet costs less.

It just makes the adoption much easier for end-user devices. For servers sure, 10gig or more isn't an issue, you usually go with a twinax cable or fiber anyhow. Cost isn't such a concern. But for desktops, there's a reason to want the lower/more flexible speeds.
Bingo. This guy totally gets it :)

DC space in the US especially (less so in Europe, oddly) standardized on SFP+ and fibre - it's only really the home side that seems to use 10GBASE-T here, or multi-gig, and it just hasn't had a compelling reason to change yet. I run almost all 10G at home, and it's a PITA sometimes for this reason.
 
Yes. I'm an IT guy (my title is technically Information Technology Architect III) for a big university. We've been dealing with the whole "when do we upgrade the cables" thing since long before this as we were doing networking back in the 10mb/Cat-3 days. Rewiring a building costs a buttload of money. It is a particularly hard sell if the building is new-ish and already has decent wiring. Labor costs aside (which are the biggest part) Cat-6a sucks. S/FTP is a pain to work with, and the UTP stuff is MASSIVE, like bigger than coax, and stiff. Both are a much bigger PITA than regular Cat-6.

The other thing is that switches are a good deal more expensive for 10gig than 1gig. It has gotten better, but it is still a big price delta. So you aren't going to be real happy to spend that kind of money, only to have it work on like 30% of your connections or the like.

Multigig really does help this, a lot. It can get you more speed, over existing wires, at longer distances. Particularly 2.5gig it really seems to work in most cases that 1gig does. It is also good for WiFi. High end WiFi 6 stuff is to the point where you can do more than a gig of traffic, and of course more shit is on WiFi than ever. So, upgrade your APs, and then run a 2.5g connection to them, and there you go, more WiFi bandwidth. You see some switches designed with that kind of thing in mind like the Juniper EX4100 multigig. It has 32 1gig ports, and 16 2.5gig ports. The idea being it keeps cost and power requirements down, you use the 1gig ports for most of your wired stuff, but 2.5gig for where you need it, and it does PoE because APs might be where you are using it. Likewise they have ones that have some 10/5/2.5/1 ports, and more 2.5/1 ports because again, cost, power, etc.

The higher the signaling rate, the more it costs both for the PHY and the ASIC.


The cost isn't the cables, it is the labor. If you are doing things at home, sure, just get whatever and run it, no problem. However, if you have a building with 1000 jacks with cables through the wall, you are paying a bunch of dudes to come in and redo that. That is a 6 figure kind of job, maybe more. You'd really rather not do that, if possible. Eventually it has to be done, but doing it every time a new cable standard comes out is not something you want.

Also size matters in these cases. Sometimes even Cat-6 is an issue since it is thicker than Cat-5e, though not a ton. When you are running hundreds of cables through conduit, there is limited room. If you run out what do you do? Remove some jacks? Tear out the conduit and replace it, at a MASSIVE cost?


Again, easy at home, or in a datacenter, not so easy in a building. You'd have to re-run everything. Fiber also has the issue that not everything supports it, and that it can't carry power. If you built a building and did fiber to all the desks, you'd need to buy NICs for every single computer, no desktop is shipping with that built in. Laptops or other devices that wanted to hook up wired are screwed, they don't have fiber and while you can get USB fiber NICs, they aren't the kind of thing anyone has normally. APs can be had with fiber, but then you are running power to them, no PoE.


New nodes are expensive, and in demand for other shit. Hence NICs get made on older stuff. But it isn't just the NICs that are an issue, it is the other end. The more traffic you need to push, the bigger, costlier, and more power hungry the ASIC that is needed to support that gets. 10gig switches are a good bit more expensive than 1gig, particularly enterprise ones that have a lot of features. Even on just the consumer level, though: if you go and look at a basic-ass, unmanaged, 10gig switch, TP-LINK has a 5-port for $270, Netgear has a 10-port with 8x1g 2x10gig for $220. A TP-LINK 1gig 5 port? Shit, that's $15, a Netgear 8-port is $25. Those ASICs (and PHYs) on the other end cost a lot more for the switch.

2.5g is still more expensive than 1g, and not by a little bit, but better. A Trendnet 5-port 2.5g switch is $100.

Multi-gig is a good stepping stone with less cost, the same way 25gig is on fiber in data centers. 2.5gig gets you 2.5x the speed of 1gig, in the same way that 25gig gets you 2.5x the speed of 10gig, and you don't have to replace much, if any, cabling. That's part of 25gig's appeal: if you have servers that need more bandwidth, but not a ton more, you can replace the switch, NICs, and SFPs in the systems that need it but everything else is the same, and you could keep the 10gig SFPs for the servers that don't need the extra speed yet.

I hear you, labor to re-run cables is expensive.


It was expensive when we moved from 10base5 and 10base2 to Cat3 for 10baseT as well, but that didn't stop us.

It also didn't stop us moving to 100baseT which required at least Cat4 cable and then again to 1000baseT which required cat5e.

10baseT was introduced in 1990. 100baseTX was introduced five years later in 1995 and the industry embraced it

Then 1000baseT was introduced 4 years after that in 1999 and the industry again embraced it and re-ran cabling.

It's been 24 years since 1000baseT required running Cat5e. It doesn't seem unreasonable to have to re-run it now. Heck, if we followed the early cadence, the industry should have been starting to run Cat6a no later than 2004, but I guess we collectively decided we no longer give a shit and did nothing for 20 more years, instead throwing a hissy fit about the prospect of improving cabling runs.

Had we kept up with the cadence we should have been at 10G by 2005, 100G by 2010, 1T by 2015 and 10T by 2020, each time requiring new cabling. :p

Old Cat5e runs won't last forever. Going 2.5 to keep old cabling is just kicking the can down the road at this point.
 
I hear you, labor to re-run cables is expensive.


It was expensive when we moved from 10base5 and 10base2 to Cat3 for 10baseT as well, but that didn't stop us.

It also didn't stop us moving to 100baseT which required at least Cat4 cable and then again to 1000baseT which required cat5e.

10baseT was introduced in 1990. 100baseTX was introduced five years later in 1995 and the industry embraced it

Then 1000baseT was introduced 4 years after that in 1999 and the industry again embraced it and re-ran cabling.

It's been 24 years since 1000baseT required running Cat5e. It doesn't seem unreasonable to have to re-run it now. Heck, if we followed the early cadence, the industry should have been starting to run Cat6a no later than 2004, but I guess we collectively decided we no longer give a shit and did nothing for 20 more years, instead throwing a hissy fit about the prospect of improving cabling runs.

Had we kept up with the cadence we should have been at 10G by 2005, 100G by 2010, 1T by 2015 and 10T by 2020, each time requiring new cabling. :p

Old Cat5e runs won't last forever. Going 2.5 to keep old cabling is just kicking the can down the road at this point.
WiFi. So much isn’t wired in anymore.
 
This is disappointing. I have been thinking of finally updating my wired networking from the ancient 1gig that has served me well, likely to 10gig if it can be done properly. Intel having a crappy chipset is really disappointing, as it's prevalent even on new mobos, was thought of as a 'good' default, and even had native Linux support in the past. I'm about to build 2 new PCs (as soon as the 7950X3D arrives most likely), one Zen4 3D main PC and a Zen3 powered home server/NAS machine. The Zen3 mobo is going to be an ASUS X570 Dark Hero, which has a 2.5G port plus an Intel I211-AT 1G port, so hopefully that one doesn't have the same problems. The other will have an X670E Extreme, which apparently has a (non-specified, at least on the site) Marvell Aquantia 10G (which I THINK now has native Linux support or may even be in the kernel) and then an Intel 2.5G port. I've generally been running Cat6e, 7, and 8 cables when I need to buy new ones over the years, but my whole house isn't wired in the walls or anything. It will be unfortunate if Intel chipsets can't get a fix for this stuff ASAP.
 
Marvell 10G chips are pretty well supported everywhere - except the most recent in BSD and ESXi. Linux I think has full support - I can check if you’d like. Which distro? I’ve got a NUC 12 Extreme sitting here.
 
Old Cat5e runs won't last forever. Going 2.5 to keep old cabling is just kicking the can down the road at this point.
Kicking the can down the road is perfectly valid though. There is no reason to rush into spending a bunch of money if there is a valid solution to make things last longer. There's also the question of need: How fast does a desktop connection NEED to be? Just saying "faster" or "as fast as possible" isn't realistic. Gig actually still does pretty well for most desktop uses. Sure there are some cases where people need 10gig, or even more. But they are not the majority. So why spend a ton of money upgrading everything to 10gig for no reason?

I also think you misremember how fast things changed to new standards. Gig was out a LONG time before it really started getting adopted, it was expensive as fuck when it first came out. Gig NICs were like $300. Plenty of places stayed at 100 for a good while until the prices came down.

That aside, each order of magnitude matters less than the one before it. There is less and less that isn't instant, or near enough as makes no odds, that you really care about more speed. Gig is about the same speed as older magnetic drives, 2.5gig is as fast as even the fastest magnetic drives, 5gig is about on par with SATA SSDs. That means that for plenty of things they are plenty fast enough. Remote stuff feels basically as fast as local. Sure, if you are editing 4k video you want more bandwidth, possibly more than 10gig, to your server. But if you are just doing normal business documents files over a gig can well feel just as fast as files on the local drive.

Goes double with everything going to The Cloud(tm) since often as not you aren't even getting gig speeds from your cloud provider, much less 10 gig. I mean you might as a whole organization but an individual user probably sees speeds way less than that. Means there isn't as much need to upgrade their network.
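
The drive comparison a couple of paragraphs up is easy to sanity-check by converting line rates to MB/s; the drive figures below are ballpark assumptions on my part, and the line rates ignore protocol overhead.

```python
#!/usr/bin/env python3
"""Line rates in MB/s next to ballpark drive speeds, to back up the
"network vs. local disk" comparison above. Drive numbers are rough
assumptions, and line rate ignores protocol overhead."""
links  = {"1 GbE": 1, "2.5 GbE": 2.5, "5 GbE": 5, "10 GbE": 10}
drives = {"older HDD": 125, "fast HDD": 280, "SATA SSD": 550}

print("Line rates:")
for name, gbps in links.items():
    print(f"  {name:8s} ~{gbps * 1000 / 8:6.0f} MB/s")
print("Ballpark drive speeds (assumed):")
for name, mbs in drives.items():
    print(f"  {name:10s} ~{mbs} MB/s")
```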
 
WiFi. So much isn’t wired in anymore.
For any serious Wi-Fi network, you're not using some consumer router that doubles as a built-in Wi-Fi AP to cover the whole place, but rather a dedicated router (think PCs running OPNsense with a lot of Ethernet ports) running wired backhaul to distributed Wi-Fi APs around the building in order to ensure good coverage.

Emphasis on wired backhaul - you still need good cabling between the APs and your router to ensure a good Wi-Fi experience for all the clients, and at the rate it's improving, 1GbE isn't going to cut it for the APs any more.
 
For any serious Wi-Fi network, you're not using some consumer router that doubles as a built-in Wi-Fi AP to cover the whole place, but rather a dedicated router (think PCs running OPNsense with a lot of Ethernet ports) running wired backhaul to distributed Wi-Fi APs around the building in order to ensure good coverage.

Emphasis on wired backhaul - you still need good cabling between the APs and your router to ensure a good Wi-Fi experience for all the clients, and at the rate it's improving, 1GbE isn't going to cut it for the APs any more.
Oh agreed. But that’s a lot less wiring. Fewer ports, fewer lines, etc. That’s partially why we haven’t seen more adoption of high speed - both because WiFi didn’t need it till recently, with speeds too slow to push the backhaul, but also because so many things went from wired to wireless, so fewer ports driving connection needs. Lower port counts mean lower pressure to drop prices and economies of scale. I remember my early college days when we were clustered around the few network jacks that were public, and then the beginning of WiFi rolling out (Orinoco Gold cards!).
 
Wiring is just a small part of it. I built my house in 2017, all wired with Cat6a. The terminations are cat6 as cat6a terminations were x8 more expensive at that point. I've run adapter and link tests using 2 PCs equipped with 10Gbe copper cards and yes, I have no problem getting solid 10Gb connectivity to and from any point in my house. But the retail channel doesn't offer many copper 10 Gbe uplinks.
Maybe if I get desperate I'll go down the media converter path: https://www.servethehome.com/sfp-to-10gbase-t-adapter-module-buyers-guide/ and connect my SFP+ ports as 2.5Gbe.
 
Wiring is just a small part of it. I built my house in 2017, all wired with Cat6a. The terminations are cat6 as cat6a terminations were x8 more expensive at that point. I've run adapter and link tests using 2 PCs equipped with 10Gbe copper cards and yes, I have no problem getting solid 10Gb connectivity to and from any point in my house. But the retail channel doesn't offer many copper 10 Gbe uplinks.
Maybe if I get desperate I'll go down the media converter path: https://www.servethehome.com/sfp-to-10gbase-t-adapter-module-buyers-guide/ and connect my SFP+ ports as 2.5Gbe.
One note on those: they can get very very hot. Make sure you have some airflow or use one in a two port card till you do some testing. We’ve had NICs burn out from them. Switches seem to fare much much better. My company explicitly does not support them as they can and will overheat the NICs in our product (Mellanox), but we have limited airflow over that part.
 
Wiring is just a small part of it. I built my house in 2017, all wired with Cat6a. The terminations are cat6 as cat6a terminations were x8 more expensive at that point. I've run adapter and link tests using 2 PCs equipped with 10Gbe copper cards and yes, I have no problem getting solid 10Gb connectivity to and from any point in my house. But the retail channel doesn't offer many copper 10 Gbe uplinks.
Maybe if I get desperate I'll go down the media converter path: https://www.servethehome.com/sfp-to-10gbase-t-adapter-module-buyers-guide/ and connect my SFP+ ports as 2.5Gbe.

While most of my 10gig stuff is fiber, I've used the Mikrotik branded 10gig copper SFP+ module on occasion when I need to plug something copper in. They have worked well, maintained good speeds in iperf.

I understand that with all of these adapters, however, the range will be more limited than with native copper switches, as the SFP+ standard does not supply enough power for a signal to go the full 100m/300ft. Not sure how much shorter the max run is though. I can't remember exactly, but I think I've gone up to 100ft without a problem.

As far as switches go, I've been pretty happy with my Mikrotiks. I used to use various decommed Cisco and Aruba units, but they were all loud and annoying to manage with license requirements and stupid crap like that. The Mikrotiks are great, as long as you don't need/want to do heavy Layer 3 lifting. The features are fully supported, but the CPUs in them just aren't powerful enough for anything but basic stuff in that regard. All of the Layer 2 stuff works perfectly though.

My main switch in my rack is a 16 port SFP+ model, the CRS317-1G-16S+. It uplinks via 10gig to my CSS326-24G-2S+ switches, one in the rack, one in my office, and one upstairs. I use a copper DAC cable for the one in the rack, and fiber for the others.

These have 24 copper gigabit ports and two 10gig SFP+ uplink ports, and are surprisingly affordable. I bought mine for $129 brand new, but that was before the inflation spikes, so they may be a little bit more now. Couple them with a Mikrotik branded 10gig adapter, and they can easily handle 10gig copper in their two SFP+ ports. (I tried the cheaper 10gtek ones, but they were not reliable)

The 16 port SFP+ switch was a little bit pricier at ~$600 a few years ago when I bought it, but it was a good investment. Been very happy with it.
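
For reference, the iperf checks mentioned above are easy to script. Here's a minimal sketch that assumes iperf3 is installed on both ends with a server already running (`iperf3 -s`); the SERVER address is a placeholder, and the JSON field names match current iperf3 output but are worth double-checking on your build.

```python
#!/usr/bin/env python3
"""Run a 10-second iperf3 test and print the achieved throughput.

Sketch only: assumes iperf3 is installed on both ends and a server is
already running (`iperf3 -s`) at SERVER, which is a placeholder address.
"""
import json
import subprocess

SERVER = "192.168.1.50"    # placeholder: your iperf3 server

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "-J"],   # -J = JSON output
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"Received: {bps / 1e9:.2f} Gbit/s")
```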

 