This weekend I priced out hardware and cables to upgrade my network and get four systems on 10Gb. Both the SFP+ and copper options come out to over $1,000.
10G switches can be had for cheap. I have a 48-port SFP+ 10G Arista that was $400, and there are Brocade/Ruckus switches that do 1G but also have 8x 10G SFP+ ports for $200-350. Everything in my rack uses DAC cables, but I've been contemplating changing that over to fiber. Everything outside of the rack is Cat6 for 1G, or 10G with the use of an RJ45 transceiver. But yes, Intel X520 and X540 NICs are cheap these days and work fine.

What have you been pricing out?
Decommed Intel X520 adapters are like $70 apiece on eBay now. Some come with 10GBase-SR transceivers included, so shop wisely.
The fiber is not expensive. I usually get Amazon Basics OM3 fiber (LC-LC) on Amazon.
It's the switch that is the killer. I used to use decommed enterprise gear. I had an Aruba switch with 48 gigabit copper ports and four SFP+ ports, but it was on the loud side.
MikroTik switches are among the most reasonable new switches for this type of thing. I filtered by "at least 4 SFP+ ports" and there are four current models (though one of them is an outdoor switch; my 16-port model must have been discontinued). Look here.
With four X520 adapters for ~$75 each (if you are smart and buy the ones with the transceivers included), a MikroTik CRS309-1G-8S+IN, and 4x fiber patch cables for ~$50, I only get to $619.
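For what it's worth, the arithmetic spelled out (the switch price is backed out from the quoted total, so treat all of these as rough eBay/street estimates, not quotes):

```python
# Rough 4-machine 10GbE build cost, using the ballpark prices above.
# The ~$269 switch figure is inferred from the $619 total, not a quote.
parts = {
    "Intel X520 NIC (used, transceivers included)": (4, 75.00),
    "MikroTik CRS309-1G-8S+IN switch (est.)":       (1, 269.00),
    "OM3 LC-LC fiber patch cables (bundle)":        (1, 50.00),
}

total = sum(qty * price for qty, price in parts.values())
for name, (qty, price) in parts.items():
    print(f"{qty} x {name}: ${qty * price:.2f}")
print(f"Total: ${total:.2f}")  # -> Total: $619.00
```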
That CRS309-1G-8S+IN has 8 10-gig ports and one gigabit port (intended for management, but it can be switched too). You could either use that gigabit port for a connection to a router/WAN, or pick up a CSS326-24G-2S+RM, which has 24 gigabit copper ports and two SFP+ ports, to link it in with all of your gigabit gear. It's only $159. I've had good luck with Molex-branded DAC cables for direct SFP+ links between these switches. We are talking like $10 on eBay for the DAC cable.
DAC cables may even work for some of your PCs, if they are close enough to the switch. I believe they go up to 7 m (23 ft). The longer DAC cables are a bit more expensive than fiber, but then you don't need the transceivers, so it all depends on how many transceivers you can get included.
If you need extras, I have found that Finisar-branded 10GBase-SR transceivers work well in everything I have shoved them into thus far (Intel NICs, MikroTik switches, Aruba switches). I think Intel's own branded transceivers are just relabeled Finisars.
10 m now on TwinAx. fs.com to buy the right brand cheap. Also cheap for transceivers, but expect about a 5% failure rate (you still come out way ahead).
Almost every one of them is a rebranded Finisar - with brand coding to lock it to switches (dicks).
I have bought fiber from FS in the past. Call me crazy, but I still have trust issues when it comes to Chinese-branded stuff in my network, at least anything with logic on it.
I just go the eBay route instead and look for properly branded stuff. I haven't had a problem yet that way (knock on wood).
Finisar 10GBase-SR transceivers run for ~$7 on eBay now.
I get the impression (though I could be wrong) that the brand-coding lockout tends to be in the switch, not on the transceiver. The transceiver just tells the switch what it is, and the switch decides whether or not to allow it.
My MikroTik switches don't seem to block anything I have tried, but I've had nothing but problems with the cheap "10GTek"-branded gigabit and 10G copper modules I have used. Flaky as all hell.
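That "the transceiver just tells the switch what it is" part can be sketched concretely. An SFP/SFP+ module exposes its identity as fixed-offset fields in its A0h EEPROM page (offsets per the SFF-8472 spec); a brand-locking switch simply reads these fields and decides whether to accept the module. The sample bytes below are fabricated for illustration:

```python
# Parse the ID fields an SFP/SFP+ module reports to its host, using the
# SFF-8472 A0h page layout: vendor name at bytes 20-35, vendor OUI at
# 37-39, part number at 40-55, serial at 68-83. A switch that enforces
# brand locks is just checking these fields after reading the EEPROM.
def parse_sfp_id(eeprom: bytes) -> dict:
    """Extract vendor identity fields from an A0h EEPROM dump."""
    return {
        "vendor_name": eeprom[20:36].decode("ascii").strip(),
        "vendor_oui":  eeprom[37:40].hex(),
        "vendor_pn":   eeprom[40:56].decode("ascii").strip(),
        "serial":      eeprom[68:84].decode("ascii").strip(),
    }

# Fabricated 96-byte dump pretending to be a Finisar 10GBASE-SR module:
dump = bytearray(96)
dump[20:36] = b"FINISAR CORP.   "
dump[37:40] = bytes.fromhex("009065")   # OUI registered to Finisar
dump[40:56] = b"FTLX8571D3BCL   "
dump[68:84] = b"ABC1234567890123"

print(parse_sfp_id(bytes(dump)))
```

This is also why relabeled modules work: the reseller just reprograms those vendor fields before shipping.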
Correct - mostly an issue for enterprise hardware, but there are a few exceptions. Cisco, for instance, allows any TwinAx cable (because outside of FS and people having "fun", there's no such thing as an HP-to-Cisco TwinAx cable), but Dell blocks QSFP cables (while allowing SFP+), except on 40G ports (25G/100G require branded). Mellanox requires branded cables (dicks), or did through their 40G/56G era, and often (double dicks) required it on their network cards. I believe they finally dropped the card requirement because that was stupid, but they were overly picky for a long time.
Heh. DACs can be a pain in the butt if they are brand-locked on both sides. You'd have to only buy one brand of hardware...
I know they say they do this to ensure stability with their validated configurations, but if that were the case, they could just write the validated configurations in the docs and let people do as they please. There is definitely a dick element of brand lock-in going on here, which IMHO is a form of market manipulation and ought to be illegal.
Luckily, Intel NICs seem to take anything you stick in them (at least everything I've tried to date).
Oh yeah, Intel don't give no shits. They do care about fibre transceivers in some of them, but not 10GBASE.
Always have a few spare SFPs on hand. They are considered a consumable part now. We've had a bunch die at around 10,000 hours. Come to think of it, most of those were for storage though; the Ethernet SFPs last longer. Still, have some spares handy.
It feels kind of weird that Intel is the good guy in this conversation.
Manipulating this shit seems like something that would be right up their alley.
Ehh, multigig is a different beast.
I've killed a few dozen over the years, but that's generally with counts in the 1000s. FS.com/third-party ones die more often for sure, but they're cheaper enough that it doesn't matter. Heck, I've got something like 80 in use at home and site A alone. And 10k hours is only just over a year.
What SFPs are you buying? Are these discount off-brand models, like from the Fiber Store or 10GTek, or are they branded Finisar or relabeled Intel/Dell/HP/Cisco models?
What temperature are they running at? Do you allow them to get really hot? Mine seem to average just under 40C. I did briefly have a switch in the attic of my old house, and that got really hot. In the summer that attic regularly got well over 120 degrees (F). I actually modded the switch that went up there with a fan just to keep temps in check. The SFP+ module ran at about 65C, but even that one is still working fine.
Granted, my use case being in a home environment is not as intense as in an enterprise environment, but still, this does not match my experience at all.
Every SFP+ module I have, I bought used on eBay. Apart from the crappy off-brand 10GTek ones, they all worked out of the box, and I've never had one fail.
I started buying them about 4 years ago, and I must have about 16 of them in total between the PCs and switches.
I'll admit my 16 units are not a large sample size, so this does fall into borderline anecdotal territory, but if they were dropping at the rate you suggest, roughly every year of use, I should have seen at least a few fail by now.
I'm a fan of decommed enterprise parts. Buy the premium brands, at decommed prices.
Oh yeah, my Arista has a CLI setting to turn it off. It seems the further into 10G they got, that stuff went away, as my Brocade doesn't really care, just like my Intel NICs.

Yeah, and that is fair. We may have drifted off topic a little bit, but the comment of mine you quoted was not specifically about multigig, but rather about SFP+ compatibility between NICs and switches. It's very common for there to be vendor lockouts in enterprise gear, despite them all using the same SFP+ standard.
How does Intel manage to have what is generally considered to be one of the better 10GbE chipsets alongside Mellanox, only to screw up a 2.5GbE implementation on consumer-class boards?
Fortunately, I'm not noticing any random disconnect issues on my I225-V (I have a Z690 board) right now, but if anything happens, I won't hesitate to cram a real NIC in this system, even if I need to cough up several PCIe lanes for 10GbE or even 40GbE/FDR InfiniBand.
Perhaps part of that is because all my Ethernet switches and routers are still 1GbE at most, none of this 2.5GbE quarter-assed crap when we should all be on 10GbE across the board by now for home systems instead of 1GbE that's stagnated longer than Skylake-era Intel.
DACs at least won't burn out like optics do.
2.5G is weird. It suffers from the same problem that Wireless N did, in that many vendors released their 2.5G equipment before there was an actual specification in place. As of right now there are two competing 2.5G Ethernet standards, one held by the NBASE-T Alliance and the other by the MGBASE-T Alliance, and the only stuff that works seamlessly across both follows the 802.3bz standard created by the Ethernet Alliance, which was approved by IEEE in 2016 but still isn't present in all 2.5G equipment. To make it more fun, if you are using 2.5G over Cat5e, you can run into issues over longer distances with stranded cable that you won't have with solid cable. So if you are trying to run it in an actual work environment, you still very much need to be using Cat6a at the very least, otherwise you are going to run into weird issues due to signal degradation, unless you can be sure your runs were done with solid and not stranded cable.
It should be noted that the Intel I225 series follows the NBASE-T specifications, along with Cisco, Freescale, and Xilinx, but Broadcom follows the MGBASE-T specs, so if you are trying to use an Intel NIC at 2.5G with a Broadcom switch, you are going to be in for a bad time.
I was talking about work. I'm unsure of the count, but it's in the thousands. The storage SFP count is just under 800, and we've replaced between 50 and 100 of them. It's a pretty high percentage. We brought it up to HPE; they just downplayed it and said they are consumable and failures are expected. It's all covered under contract, but it's still scary when you go look at an enclosure and find 2 of 4 paths are down because the SFPs crapped out.
My older ProCurves did the same thing. My D-Links of the same age don't have nearly the same degree of failures, but the guys at Aruba designing my new network have assured me that those failures are a thing of the past and I can expect my SFP+ adapters and direct attach to last the duration of the equipment, so... fingers crossed?
Or worse, melt the fiber. I have had more than a few of my HP and Cisco SFP+ LC connectors burn out and take the first inch of fiber along with them. I really hate splicing and terminating fibre.
Ugh, yeah.
I had to splice fibers back at Raytheon for underwater cable systems for the Navy in 2005. It was a bloody nightmare to line up the fiber, splice it, and then test, only to find out that there was too much signal degradation and have to do it again, and again, before getting it right. I understand it is easier with the equipment available these days, but still, it's not something I want to get into doing.
Rather than bulk cabling, at home I just use raceways in discreet places with fiber patch cables. If I ever have a melted fiber (haven't run into that yet, fingers crossed), I'll just replace the entire run rather than re-splicing it. That is obviously not an option with long runs inside walls and plenums.
I do like that fiber electrically isolates equipment though. Not that I have ever had an issue where one PC and/or switch took out another with an electrical fault over ethernet or DAC, so this may just be a theoretical advantage, but still!
All in all though, I am not an IT professional and if I exclude my bad experience with Brocade in 2013 the first time I tried it, I have had many completely problem free years. Never as much as a damaged fiber or anything. I figure if I can do it, others can too.
I did go out of my way to purchase recessed rack mounts for my 16-port SFP+ switch though, as the door to my rack interfered with the fibers coming out the front, and I had a ghost-of-Christmas-future vision of damaged fibers if I didn't do something...
(Attached: photo of my rack.)
Heh, I had some non-Aruba-branded fiber transceivers from FS.com (had originally planned a different brand for my main switch) that my Aruba refused to work with. Had to pay for new ones, only to be greeted with a firmware update from HP a couple of months later that removed the transceiver branding limit. :facepalm
Both. Those cards were finicky as hell.

I had nothing but problems when I first tried to go 10G fiber in 2013, using Brocade BR-1020 adapters with Fiber Store transceivers.
At the time I was pretty sure I was dealing with driver problems, but who knows, maybe it was bad modules.
I wonder if it is a matter of cooling. I have seen the transceivers run really hot in some switches, but not in others.
Yeah, I just hired the local ISP to come down and re-splice it. It cost like $300, but they got it right and were done in about an hour, while I easily spent 3x that amount of time fucking it up over and over. The kit they used for fixing it was like $15,000, and I don't have to do it enough to warrant spending that kind of money. Should it happen again, I would just call them again.
Side note: your rack is much cleaner than mine. I have an order that just arrived from Patchbox, so I'm really hoping that spruces it up a little, because I am ashamed to show what it currently looks like.
100%, that's why I just use 10G, and if I need a NIC that can step down between 10G and 1G, I use the Intel X550 NICs.

Jesus. Sounds like the entire industry screwed up on this one.
This is like the bad old days in the 1980s, when there were multiple competing standards and poor compatibility. I thought we had gotten away from that in 1990 when 10BASE-T was codified.
Which is good and all, it's a solid NIC, but if you look in the supplemental datasheet for the X550, there is this lovely nugget:
"NBASE-T as per the IEEE P802.3bz/D1.1 Draft Standard for Ethernet Amendment"
which is different from the final 802.3bz specification, which was finalized at draft D3.2.
So depending on what you plug that into, you could still be in for a world of WTF!
I don't have 2.5G or 5G, aka memeG, on my network. The only device that has it is my cable modem, and it works with it just fine. Everything else from the firewall down is 1G/10G/40G.
I'll have to take some time to look up your recommendations. I confess, I'm far from a networking expert - I've never used SFP and I don't know what DAC is.
I've been looking at two switches with either 5 RJ45 or 8 SFP+ ports ($240-270 ea). My internet enters the house on the ground floor (1 PC nearby), and I have a Cat6e run to the floor above where the other PCs are. SFP will require several transceivers to connect the floors and tie into existing connections (internet, wireless router, printer, etc.). I haven't really looked at the "10Gb" switches that have 2 10Gb ports while the rest are 1Gb.
I probably need an Intel NIC for my FreeNAS system, which has PCIe x8/x16 slots available, for built-in BSD driver compatibility. I was looking at X520 or X540 cards. The other systems are Windows (maybe 1 Linux); at least one is going to be limited to PCIe 3.0 x4, so I was looking at Aquantia cards for those.
It would be cheaper to go 2.5Gb of course, but it kind of doesn't seem worth the effort. I guess I'd find out if the Intel 225 chip on one of my mobos is a problem.
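On the "is 2.5Gb worth the effort" question, the raw line-rate math is easy to sketch. These are link rates only; real throughput will be somewhat lower after protocol overhead, and disks can easily become the bottleneck before the wire does:

```python
# Time to move a 100 GB dataset at various Ethernet link rates.
# Line rate only: TCP/IP framing overhead and disk speed will make
# real transfers slower, but the ratios between tiers hold.
data_gb = 100                        # dataset size in gigabytes
for name, gbps in [("1GbE", 1.0), ("2.5GbE", 2.5), ("10GbE", 10.0)]:
    seconds = data_gb * 8 / gbps     # GB -> gigabits, then divide by rate
    print(f"{name:>7}: {seconds/60:5.1f} min")
```

The jump from 1GbE to 2.5GbE saves a lot less wall-clock time than the jump to 10GbE, which is the usual argument for skipping the middle tier when the NICs and switch are affordable.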
Some 25/100 ports just use QSFP labeling (just to make life miserable). Dell does this. Or did.
SFP28 falling back to 10/1 is also not as reliable as it should be. Because dicks, again.
I'm at the point of looking at 25G switches myself. I'd rather go that route than getting a 40G switch. But they are still expensive, and I'd also like to learn ONIE.

I've considered picking up a pair of Intel XXV710-DA2s, one for my server and one for my workstation.
One of the two ports on each would still have to work in 10-gig mode though, as I don't have a 25GbE switch, and likely won't for some time. The other port would be a direct link between the two at 25 gig, for some sweet, sweet storage speed.