Intel I226-V 2.5GbE Ethernet Chipset Showing Connection Drop Issues (Chipset Used on Most Raptor Lake Motherboards)

Those are damned good switches I always forget about. Well priced too. Even the full 10G one is. I may finally replace my old Blade that is still in use (fucker won’t die), and I’m short on my mains (Nexus 9396 or Dell S4048).
 
This weekend I priced out hardware and cables to upgrade my network to get 4 systems on 10Gb. SFP+ and copper both will be over $1000. :cry:
 

What have you been pricing out?

Decommed Intel X520 adapters are like $70 apiece on eBay now. Some come with 10GBase-SR transceivers included, so shop wisely.

The fiber is not expensive. I usually get Amazon Basics OM3 fiber (LC-LC) on Amazon.

It's the switch that is the killer. I used to use decommed enterprise gear. I had an Aruba switch with 48 gigabit copper ports and four SFP+ ports, but it was on the loud side.

Mikrotik switches are among the most reasonable new switches for this type of thing. I filtered by "at least 4 SFP+ ports" and there are four current models (though one of them is an outdoor switch; my 16-port model must have been discontinued). Look here.

With four X520 adapters at ~$75 each (if you are smart and buy the ones with the transceivers included), a Mikrotik CRS309-1G-8S+IN, and 4x fiber patch cables for ~$50, I only get to $619.
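(Roughly: 4 × $75 = $300 for the NICs, ~$269 for the CRS309, and ~$50 of fiber gets you to $619.)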

That CRS309-1G-8S+IN has eight 10gig ports and one gigabit port (intended for management, but it can be switched too). You could either use that gigabit port for the connection to a router/WAN, or pick up a CSS326-24G-2S+RM, which has 24 gigabit copper ports and two SFP+ ports, to link it in with all of your gigabit gear. It's only $159. I've had good luck with Molex-branded DAC cables for direct SFP+ links between these switches. We are talking like $10 on eBay for the DAC cable.
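If you run the CRS309 on RouterOS (it can boot either RouterOS or SwOS), switching that gigabit port is just a matter of having it in the same bridge as the SFP+ ports. A rough sketch, with the default port names assumed:

# RouterOS sketch - default port names assumed, adjust to taste
/interface bridge add name=bridge1
# putting the 1G copper port in the same bridge as the SFP+ ports makes it
# switch traffic instead of acting as a standalone management interface
/interface bridge port add bridge=bridge1 interface=ether1
/interface bridge port add bridge=bridge1 interface=sfp-sfpplus1
/interface bridge port add bridge=bridge1 interface=sfp-sfpplus2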

DAC cables may even work for some of your PCs, if they are close enough to the switch. I believe they go up to 7m (23ft). The longer DAC cables are a bit more expensive than fiber, but then you don't need the transceivers, so it all depends on how many transceivers you can get included.

If you need extra, I have found that Finisar-branded 10GBase-SR transceivers work well in everything I have shoved them into thus far (Intel NICs, Mikrotik switches, Aruba switches). I think Intel's own branded transceivers are just relabeled Finisars.
 
10G switches can be had for cheap. I have a 48-port SFP+ 10G Arista that was $400, and there are Brocade/Ruckus switches that do 1G but also have 8x 10G SFP+ ports for $200-350. Everything in my rack uses DAC cables, but I've been contemplating changing that over to fiber. Everything outside of the rack is Cat6 for 1G, or 10G with the use of an RJ45 transceiver. But yes, Intel X520 and X540 NICs are cheap these days and work fine.
 
DAC cables may even work for some of your PCs, if they are close enough to the switch. I believe they go up to 7m (23ft).

I think Intel's own branded transceivers are just relabeled Finisars.
10m now on TwinAx. fs.com is the place to buy the right brand cheap. Also cheap for transceivers, but expect about a 5% failure rate (you still come out way ahead).

Almost every one is a rebranded Finisar - with brand coding to lock it to specific switches (dicks).
 
10m now on TwinAx. fs.com is the place to buy the right brand cheap. Also cheap for transceivers, but expect about a 5% failure rate (you still come out way ahead).

I have bought fiber from FS in the past. Call me crazy but I still have trust issues when it comes to Chinese branded stuff in my network, at least anything with logic on it.

I just go the eBay route instead and look for proper branded stuff. I haven't had a problem yet that way (knock on wood)

Finisar 10GBase-SR transceivers run for ~$7 on eBay now.

Almost every one is a rebranded Finisar - with brand coding to lock it to specific switches (dicks).

I get the impression (though I could be wrong) that the brand coding lockout stuff tends to be in the switch hardware, not in the transceiver. The transceiver just tells the switch what it is, and the switch decides whether or not to allow it.

My Mikrotik switches don't seem to block anything I have tried, but I've had nothing but problems with the cheap "10GTek"-branded gigabit and 10G copper adapters I have used. Flaky as all hell.
 
10G switches can be had for cheap. I have a 48-port SFP+ 10G Arista that was $400, and there are Brocade/Ruckus switches that do 1G but also have 8x 10G SFP+ ports for $200-350. Everything in my rack uses DAC cables, but I've been contemplating changing that over to fiber. Everything outside of the rack is Cat6 for 1G, or 10G with the use of an RJ45 transceiver. But yes, Intel X520 and X540 NICs are cheap these days and work fine.

Yeah, I did that with an old Aruba switch a while back. 48 gigabit copper ports and 4x SFP+ 10gig ports.

Surprisingly it wasn't too picky with transceivers. Both the Intel-branded and the Finisar-branded ones worked.

The switch was on the loud side for office use though. When it was hidden in the basement it was fine, but when I had to move it to my living space it suddenly wasn't. Most things could be managed from the web interface, but some things required SSH, and that interface was not terribly intuitive and very different from things I've used in the past.

Stock, the thing shipped with two SFP+ ports for networking and two for linking with other Aruba switches using some custom protocol. I had to use the SSH interface in order to convert all four to standard networking mode.

I've been much happier with my Mikrotik switches. The only issue I've had with those is that when I tried to aggregate two 10gig ports for a faster uplink between switches, it initially worked but then suddenly forgot it was aggregated, resulting in a loop that took down the network until I figured it out.

There have been several firmware updates since I had this problem, so they might have fixed it by now, but I just haven't used link aggregation since then. Part of what I disliked was that I couldn't manually configure ports for link aggregation. Automated detection was the only option, and it didn't seem 100% reliable.
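For what it's worth, a bond can be defined by hand on the RouterOS side rather than waiting for auto-detection; if I ever revisit it, it would look roughly like this (just a sketch, port and bridge names assumed):

# RouterOS sketch - manual LACP (802.3ad) bond across two SFP+ ports
/interface bonding add name=bond-uplink slaves=sfp-sfpplus1,sfp-sfpplus2 mode=802.3ad transmit-hash-policy=layer-2-and-3
# the bond then replaces the individual ports in the bridge
/interface bridge port add bridge=bridge1 interface=bond-uplink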

I don't really need link aggregation though. I use the 24-port copper switches with two 10gig uplinks because they were cheap ($129 at the time). I don't need all 24 ports. If I used 20 ports and needed non-blocking performance upstream on them all, I'd look into aggregation again, but the truth is, I really don't.
 
I get the impression (though I could be wrong) that the brand coding lockout stuff tends to be in the switch hardware, not in the transceiver. The transceiver just tells the switch what it is, and the switch decides whether or not to allow it.
Correct - mostly an issue for enterprise hardware, but there are a few exceptions. Cisco for instance allows any twinax cable (because outside of FS and people having "fun" there's no such thing as an HP->Cisco TwinAx cable), but Dell blocks QSFP cables (allows SFP+), except on 40G ports (25G/100G require branded). Mellanox requires branded cables (dicks), or did through their 40G/56G side, and often (double dicks) required it on their network cards. I believe they finally dropped the card requirement because that was stupid, but they were overly picky for a long time.
 
Correct - mostly an issue for enterprise hardware, but there are a few exceptions. Cisco for instance allows any twinax cable (because outside of FS and people having "fun" there's no such thing as an HP->Cisco TwinAx cable), but Dell blocks QSFP cables (allows SFP+), except on 40G ports (25G/100G require branded). Mellanox requires branded cables (dicks), or did through their 40G/56G side, and often (double dicks) required it on their network cards. I believe they finally dropped the card requirement because that was stupid, but they were overly picky for a long time.

Heh. DACs can be a pain in the butt if they are brand-locked on both sides. You'd have to only buy one brand of hardware...

I know they say they do this to ensure stability with their validated configuration, but if that were the case, they'd just write the validated configurations in the docs and let people do as they please. There is definitely a dick element to the brand lock-ins going on here, which IMHO is a form of market manipulation and ought to be illegal.

Luckily Intel NICs seem to take anything you stick in them. (At least everything I've tried to date.)
 
Luckily Intel NICs seem to take anything you stick in them. (At least everything I've tried to date.)
Oh yeah. Intel don't give no shits. Does care about fibre transceivers in some of them, but not 10GBASE :D
 
Oh yeah. Intel don't give no shits. Does care about fibre transceivers in some of them, but not 10GBASE :D
It feels kind of weird that Intel is the good guy in this conversation :p

Manipulating this shit seems like something that would be right up their alley.
 
Always have a few spare SFP's on hand. They are considered a consumable part now. We've had a bunch die at around 10,000 hours. Come to think of it most of those were for storage tho, the ethernet SFP's last longer. Still, have some spares handy.
 
Always have a few spare SFP's on hand. They are considered a consumable part now. We've had a bunch die at around 10,000 hours. Come to think of it most of those were for storage tho, the ethernet SFP's last longer. Still, have some spares handy.

10k hours is only just over a year.

What SFPs are you buying? Are these discount off-brand models like Fiber Store or 10GTek, or are they branded Finisar or relabeled Intel/Dell/HP/Cisco models?

What temperature are they running at? Do you allow them to get really hot? Mine seem to average just under 40C. I did briefly have a switch in the attic of my old house and that got really hot. In the summer that attic regularly got up well over 120 degrees (F). I actually modded the switch that went up there with a fan just to keep temps in check. The SFP+ module in it ran at about 65C, but even that one is still working fine.
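(If you want to check yours on a Linux box, most modules will report their own diagnostics; the interface name below is just a placeholder:)

# dump the module's digital diagnostics (temperature, voltage, TX/RX power);
# needs a module and driver that expose the EEPROM, run as root
sudo ethtool -m enp3s0
# or just the temperature line
sudo ethtool -m enp3s0 | grep -i temperature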

Granted, my use case being in a home environment is not as intense as in an enterprise environment, but still, this does not match my experience at all.

Every SFP+ module I have I bought used on eBay. Apart from the crappy off-brand 10GTek ones, they all worked out of the box, and I've never had one fail.

I started buying them about 4 years ago, and I must have about 16 of them in total between the PCs and switches.

I'll admit my 16 units are not a large sample size, so this does fall into the borderline anecdotal territory, but if they were dropping at the rate you suggest, roughly every year of use, I should have seen at least a few fail by now.
 
ehh, multigig is a different beast.

Yeah, and that is fair, we may have drifted off topic a little bit, but the comment of mine you quoted was not specifically about multigig, but rather about SFP+ compatibility between NICs and switches. It's very common for there to be vendor lockouts in enterprise gear, despite them all using the same SFP+ standard.
 
I'll admit my 16 units are not a large sample size, so this does fall into the borderline anecdotal territory, but if they were dropping at the rate you suggest, roughly every year of use, I should have seen at least a few fail by now.
I've killed a few dozen over the years, but that's generally with counts in the 1000s. FS.com/3rd party ones die more often for sure, but they're cheaper enough that it doesn't matter. Heck, I've got something like 80 in use at home and site A alone.
 
I've killed a few dozen over the years, but that's generally with counts in the 1000s. FS.com/3rd party ones die more often for sure, but they're cheaper enough that it doesn't matter. Heck, I've got something like 80 in use at home and site A alone.
I'm a fan of decommed enterprise parts. Buy the premium brands at decommed prices.

Finisar 10GBase-SR modules are - as mentioned above - going for ~$7 on eBay now... Fiberstore, for comparison, is now $20.

I've personally had better luck with eBay server pulls from good enterprise brands than with brand-new 10GTek or Fiberstore parts.
 
Yeah, and that is fair, we may have drifted off topic a little bit, but the comment of mine you quoted was not specifically about multigig, but rather about SFP+ compatibility between NICs and switches. It's very common for there to be vendor lockouts in enterprise gear, despite them all using the same SFP+ standard.
Oh yeah, my Arista has a CLI setting to turn it off. It seems the further into 10G they got, the more that stuff went away; my Brocade doesn't really care, much like my Intel NICs.
 
How does Intel manage to have what is generally considered to be one of the better 10GbE chipsets alongside Mellanox, only to screw up a 2.5GbE implementation on consumer-class boards?

Fortunately, I'm not noticing any random disconnect issues on my I225-V (I have a Z690 board) right now, but if anything happens, I won't hesitate to cram a real NIC in this system, even if I need to cough up several PCIe lanes for 10GbE or even 40GbE/FDR InfiniBand.

Perhaps part of that is because all my Ethernet switches and routers are still 1GbE at most, none of this 2.5GbE quarter-assed crap when we should all be on 10GbE across the board by now for home systems instead of 1GbE that's stagnated longer than Skylake-era Intel.
2.5G is weird; it suffers from the same problem Wireless N did, in that many vendors released 2.5G equipment before there was an actual specification in place. As of right now there are two competing 2.5G Ethernet camps, one behind the NBase-T alliance and the other behind the MGBase-T alliance, and the only stuff that works seamlessly across both follows the 802.3bz standard approved by the IEEE in 2016, which still isn't present in all 2.5G equipment. To make it more fun, if you are running 2.5G over Cat5e you can run into issues with stranded cable over longer distances that you won't have with solid cable, so if you are trying to run it in an actual work environment you still very much need to be using Cat6a at the very least; otherwise you are going to run into weird signal-degradation issues unless you can be sure your runs were done with solid and not stranded cable.

It should be noted that the Intel 225 series follows the NBase-T specifications, along with Cisco, Freescale, and Xilinx, but Broadcom follows the MGBase-T specs, so if you are trying to use an Intel NIC at 2.5G with a Broadcom-based switch you are going to be in for a bad time.
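If you suspect that mismatch is what's biting you, it's worth checking what the link actually negotiated before blaming the hardware. On Linux that's a one-liner (the interface name is just a placeholder), and disabling EEE is a cheap first experiment for the flaky-link reports:

# show supported/advertised link modes and what actually negotiated
ethtool enp4s0
# Energy-Efficient Ethernet shows up as a culprit in a lot of 2.5G link-drop
# reports; turning it off costs nothing to try
sudo ethtool --set-eee enp4s0 eee off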
 
I was talking about work. I'm unsure of the count, but it's in the thousands. The storage SFP count is just under 800, and we've replaced between 50 and 100 of them. It's a pretty high percentage; I brought it up to HPE and they just downplayed it and said they are consumable and failures are expected. It's all covered under contract, but it's still scary when you go look at an enclosure and find 2 of 4 paths down because the SFPs crapped out.
 
I was talking about work. I'm unsure of the count, but it's in the thousands. The storage SFP count is just under 800, and we've replaced between 50 and 100 of them. It's a pretty high percentage; I brought it up to HPE and they just downplayed it and said they are consumable and failures are expected. It's all covered under contract, but it's still scary when you go look at an enclosure and find 2 of 4 paths down because the SFPs crapped out.
My older ProCurves did the same thing; my D-Links of the same age don't have nearly the same degree of failures. But the guys at Aruba designing my new network have assured me that those failures are a thing of the past and that I can expect my SFP+ adapters and direct attach to last the duration of the equipment, so... fingers crossed?
 
My older ProCurves did the same thing; my D-Links of the same age don't have nearly the same degree of failures. But the guys at Aruba designing my new network have assured me that those failures are a thing of the past and that I can expect my SFP+ adapters and direct attach to last the duration of the equipment, so... fingers crossed?
DACs at least won't burn out like optics do.
 
2.5G is weird; it suffers from the same problem Wireless N did, in that many vendors released 2.5G equipment before there was an actual specification in place. As of right now there are two competing 2.5G Ethernet camps, one behind the NBase-T alliance and the other behind the MGBase-T alliance.

Jesus. Sounds like the entire industry screwed up on this one.

This is like the bad old days in the 1980's when there were multiple competing standards and poor compatibility. I thought we had gotten away from that in 1990 when 10baseT was codified.
 
I was talking about work. I'm unsure of the count, but it's in the thousands. The storage SFP count is just under 800, and we've replaced between 50 and 100 of them. It's a pretty high percentage; I brought it up to HPE and they just downplayed it and said they are consumable and failures are expected. It's all covered under contract, but it's still scary when you go look at an enclosure and find 2 of 4 paths down because the SFPs crapped out.

That is nuts. And your sample sizes are much larger than my meager 16, but it still suggests to me that it might be an HP problem, not an "all SFP+ modules" problem.
 
My older ProCurves did the same thing; my D-Links of the same age don't have nearly the same degree of failures. But the guys at Aruba designing my new network have assured me that those failures are a thing of the past and that I can expect my SFP+ adapters and direct attach to last the duration of the equipment, so... fingers crossed?

I had nothing but problems when I first tried to go 10G fiber in 2013 using Brocade BR-1020 adapters with Fiber Store transceivers.

At the time I was pretty sure I was dealing with driver problems, but who knows, maybe it was bad modules.

I wonder if it is a matter of cooling. I have seen the transceivers run really hot in some switches, but not in others.
 
DACs at least won't burn out like optics do.
Or worse, melt the fiber. I have had more than a few of my HP and Cisco SFP+ LC connectors burn out and take the first inch of fiber along with them. I really hate splicing and terminating fibre.
 
Or worse, melt the fiber. I have had more than a few of my HP and Cisco SFP+ LC connectors burn out and take the first inch of fiber along with them. I really hate splicing and terminating fibre.

Ugh yeah.

I had to splice fibers back at Raytheon for underwater cable systems for the Navy in 2005. It was a bloody nightmare to line up the fiber, splice it, and then test it, only to find out that there was too much signal degradation and have to do it again, and again, before getting it right. I understand it is easier with the equipment available these days, but still, it's not something I want to get into doing.

Rather than bulk cabling, at home I just use raceways in discreet places with fiber patch cables. If I ever have a melted fiber (haven't run into that yet, fingers crossed) I'll just replace the entire run rather than re-splicing it. That is obviously not an option with long runs inside walls and plenums.

I do like that fiber electrically isolates equipment though. Not that I have ever had an issue where one PC and/or switch took out another with an electrical fault over ethernet or DAC, so this may just be a theoretical advantage, but still!

All in all though, I am not an IT professional, and if I exclude my bad experience with Brocade in 2013 the first time I tried it, I have had many completely problem-free years. Never so much as a damaged fiber or anything. I figure if I can do it, others can too.

I did go out of my way to purchase recessed rack mounts for my 16-port SFP+ switch though, as the door to my rack interfered with the fibers coming out the front, and I had a ghost-of-Christmas-future vision of damaged fibers if I didn't do something...

[Photo: the rack, with the SFP+ switch on recessed mounts]
 
Yeah, I just hired the local ISP to come down and re-splice it. It cost like $300, but they got it right and done in like an hour, while I easily spent 3x that amount of time fucking it up over and over. The kit they used for fixing it was like $15,000, and I don't have to do it enough to warrant spending that kind of money; should it happen again I would just call them again.
Side note: your rack is much cleaner than mine. I have an order that just arrived from Patchbox, so I'm really hoping that spruces it up a little, because I am ashamed to show what it currently looks like.
 
Correct - mostly an issue for enterprise hardware, but there are a few exceptions. Cisco for instance allows any twinax cable (because outside of FS and people having "fun" there's no such thing as an HP->Cisco TwinAx cable), but Dell blocks QSFP cables (allows SFP+), except on 40G ports (25G/100G require branded). Mellanox requires branded cables (dicks), or did through their 40G/56G side, and often (double dicks) required it on their network cards. I believe they finally dropped the card requirement because that was stupid, but they were overly picky for a long time.
Heh, had some non-Aruba-branded fiber transceivers from FS.com (had originally planned for a different brand for my main switch) that my Aruba refused to work with. Had to pay for new ones, only to be greeted with a firmware update from HP a couple of months later that removed the transceiver branding limit :facepalm
 
I had nothing but problems when I first tried to go 10G fiber in 2013 using Brocade BR-1020 adapters with Fiber Store transceivers.

At the time I was pretty sure I was dealing with driver problems, but who knows, maybe it was bad modules.
Both. Those cards were finicky as hell
 
Side note: your rack is much cleaner than mine. I have an order that just arrived from Patchbox, so I'm really hoping that spruces it up a little, because I am ashamed to show what it currently looks like.

It presents itself maybe a little better in pictures than in person :p

It's also half empty, so that helps.

It's an ancient "Wright Line" rack from the late 90's when everything was still beige. The website wrightline.com is on the front of the unit, but it now redirects to Eaton, so I guess they must have been acquired by Eaton at some point.

I haven't done an exact count, but it's an odd (by modern standards) in-between size of ~34U.

Before I got it, it sat in some guy's garage for years. He had "rescued" it when it was being decommed at his job and had planned on doing something with it, but he never got around to it and posted it on Craigslist.

It had a little rust on it from that time in the garage, and it also looked as if they had sanded drywall or something like that without covering it, as it was absolutely covered in this thick white dust, but I spent some time cleaning it, and it looks sort of OK, but not great, in person. The pics hide it a bit :p

This is what it looked like when I got it out of the car in the old house and cleaned it up in the garage

The last time I moved, I had the movers put it in the basement. They brought it in through the bulkhead and it got a little beaten up during that move, because they really struggled with it. Nothing I couldn't clean up with some sandpaper and Rustoleum, but I haven't gotten around to it.

The thing weighs a bloody ton. I was able to drive it home in my Volvo station wagon when I first got it, but I made the mistake of trying to pull it out of the car and stand it upright on my own. It almost killed me, and I was a hair away from losing the battle and having it tip over onto the car and ruin it :LOL:

It was one of those where I was under it, pushing for all I was worth, and screaming to get it upright, and as I recall I almost didn't make it :p
 
Jesus. Sounds like the entire industry screwed up on this one.

This is like the bad old days in the 1980's when there were multiple competing standards and poor compatibility. I thought we had gotten away from that in 1990 when 10baseT was codified.
100%, that's why I just use 10G, and if I need a NIC that can step down between 10G and 1G I use the X550 Intel NICs.
 
100%, that's why I just use 10G, and if I need a NIC that can step down between 10G and 1G I use the X550 Intel NICs.
Which is good and all, it's a solid NIC, but if you look in the supplemental datasheet for the X550 there is this lovely nugget:
"NBASE-T as per the IEEE P802.3bz/D1.1 Draft Standard for Ethernet Amendment"
which is different from the final 802.3bz specification, which was finalized at D3.2.
So depending on what you plug that into, you could still be in for a world of WTFWITH!
 
Which is good and all, it's a solid NIC, but if you look in the supplemental datasheet for the X550 there is this lovely nugget:
"NBASE-T as per the IEEE P802.3bz/D1.1 Draft Standard for Ethernet Amendment"
which is different from the final 802.3bz specification, which was finalized at D3.2.
So depending on what you plug that into, you could still be in for a world of WTFWITH!
I don't have 2.5G or 5G, aka memeG, on my network. The only device with it is my cable modem, and it works just fine. Everything else from the firewall down is 1G/10G/40G.
 
I'll have to take some time to look up your recommendations. I confess, I'm far from a networking expert - I've never used SFP and I don't know what DAC is.

I've been looking at 2 switches with either 5 RJ45 or 8 SFP+ ports ($240-270 ea). My internet enters the house on the ground floor (1 PC near) and I have a Cat6e run to the floor above where the other PCs are. SFP will require several transceivers to connect the floors and tie into existing connections (internet, wireless router, printer, etc.). I haven't really looked at the "10Gb" switches that have 2 10Gb ports and the rest are 1Gb.

I probably need an Intel NIC for my FreeNAS system, which has PCIe x8/x16 slots available, for built-in BSD driver compatibility. I was looking at X520 or X540 cards. The other systems are Windows (maybe 1 Linux); at least 1 is going to be limited to PCIe 3.0 x4, so I was looking at Aquantia cards for those.

It would be cheaper to go 2.5Gb of course, but it kind of doesn't seem worth the effort. I guess I'd find out if the Intel 225 chip on one of my mobos is a problem.
 
I don't have 2.5G or 5G, aka memeG, on my network. The only device with it is my cable modem, and it works just fine. Everything else from the firewall down is 1G/10G/40G.

I really feel like that's the way to do it.

I don't have above gigabit speeds on the WAN here yet, but if I did, the WAN side of the router/firewall is probably the only place I'd be using 2.5G or 5G.

Everything downstream of that either already has 10gig (workstation & testbench) for high speed storage transfers OR is really not high speed critical and would be just as happy with "only" gigabit.
 
I'll have to take some time to look up your recommendations. I confess, I'm far from a networking expert - I've never used SFP and I don't know what DAC is.

Feel free to ask any questions you might have. I have learned most of this shit by trial and error over the last 10 years.

DAC - in this case - means "Direct Attach Copper". It is a copper cable which can connect two devices with SFP, SFP+ or SFP28 (etc.) ports without fiber or transceivers. You just stick one end of the cable in the SFP+ port where you would otherwise shove the transceiver.

They look like this:

[Photo: an SFP+ DAC cable]



Instead of using two transceivers:

[Photo: a 10GBase-SR 850nm SFP+ transceiver]


And then removing that black rubber cap on the back of each transceiver and connecting them to each other with your fiber:

[Photo: an LC-LC OM3 fiber patch cable]


(Those little round white plastic covers come off to expose the tip of the fiber, which goes into the transceiver.)

DAC cables can be cheaper than using fiber and transceivers since you don't need to buy the transceivers, but when we are talking about using decommed enterprise stuff from eBay, the $7 10GBase-SR transceivers aren't going to bust the bank. DACs are generally a little rarer, and thus may or may not cost more. It really depends on what you can find when you buy.

Passive DACs are usually short and used inside a rack to connect various devices together. The longest ones are 7m (~23ft), but many are 1-3ft and just used to link multiple switches or to connect servers to switches.

There are also active DACs that can be longer than 7m, but honestly, IMHO they are not worth it unless you are an enterprise user buying brand-new stuff for the datacenter. If you are, the latest spanking-new transceivers might be more expensive than an active DAC, but for us home users using older enterprise gear, almost everything else is likely going to be cheaper than an active DAC.

If you have networking gear that is vendor-locked, DACs may or may not be an option, as the vendor lock may prevent them from being used. In that case your only option is to get a compatible transceiver on each end and link them with fiber. DACs are challenging because if you have two devices, each with their own vendor locks, the DAC may have to be vendor-allowed on both devices.

From a performance perspective, the difference between transceivers with fiber, passive DACs, or active DACs is going to be negligible. There are some minor latency differences, but they are not worth writing home about. We are talking tens of microseconds (not the milliseconds that pings are usually measured in).

Also, regarding SFP's, know what you are buying.

SFP: This is gigabit. Some people still call them GBICs, but that is technically wrong; GBIC is an older standard.
SFP+: This is 10gig. SFP+ ports are almost always backwards compatible with gigabit SFPs, so if you only need gigabit speeds you can usually put an older SFP transceiver module in a device with an SFP+ port. Almost always. Sometimes there are unexpected results.
SFP28: This is the latest standard, which offers 25Gbit speeds. Again, usually backwards compatible with 10gig SFP+ and gigabit SFP, but YMMV depending on the device.

You'll also see various QSFP solutions. These combine four 10gig SFP+ lanes into a single 40gig link with a bigger plug (QSFP+). 40gig is mostly deprecated at this point though.

Then there is QSFP28: same thing, quad SFP28 in one larger connector that combines four 25GbE lanes into a single 100gig link.

The cool part about the QSFP standard is that if you don't want a single 40gig or 100gig link, you can use a breakout cable and instead use the one big port as four individual links.

As has been previously mentioned in this thread, while the SFP, SFP+ and SFP28 ports are technically standards and supposed to be compatible, many (most) enterprise networking brands are dicks and vendor-lock their ports so that they only work with their own transceiver modules (and sometimes also DACs). They do this in the claimed name of reliability - "we have tested such and such transceiver and guarantee it will work, and block everything else" - but it's really just to force you to keep going back to them and buying their overpriced shit, instead of using cheaper aftermarket stuff.
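(Worth knowing: some of the locks can be relaxed. Many Cisco IOS switches, for example, accept this hidden and officially unsupported pair of commands to let third-party optics link up - at your own risk, of course:)

! hidden/unsupported on many Cisco IOS switches - relaxes the optic vendor check
configure terminal
 service unsupported-transceiver
 no errdisable detect cause gbic-invalid
end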


I wish I had had this post when I started with fiber. Would have saved me a TON of time :p I hope it helps you.
 
Some 25/100 ports just use QSFP labeling (just to make life miserable). Dell does this. Or did.

28 back to 10/1 is also not as reliable as it should be 😁😂😂. Because dicks again.
 
Some 25/100 ports just use QSFP labeling (just to make life miserable). Dell does this. Or did.

28 back to 10/1 is also not as reliable as it should be 😁😂😂. Because dicks again.

I've considered picking up a pair of Intel XXV710-DA2s, one for my server and one for my workstation.

One of the two ports on each would still have to work in 10gig mode though as I don't have a 25gbe switch, and likely won't for some time. The other port would be a direct link between the two at 25gig for some sweet sweet storage speed :p
 
I've considered picking up a pair of Intel XXV710-DA2s, one for my server and one for my workstation.

One of the two ports on each would still have to work in 10gig mode though as I don't have a 25gbe switch, and likely won't for some time. The other port would be a direct link between the two at 25gig for some sweet sweet storage speed :p
I'm at the point of looking at 25G switches myself; I'd rather go that route than getting a 40G switch. But they are still expensive, though I'd also like to learn ONIE.
 