Zarathustra[H]

Hey everyone,

I've been searching on eBay and have seen that Intel 100Gbit QSFP28 adapters have started to become affordable, and I'm toying with the idea of upgrading.

From googling, it looks as if QSFP28 is just four SFP28 lanes, and SFP28 is backwards compatible with SFP+, so if I use a QSFP28 to 4x SFP28 breakout cable I should be able to have 4x 10gbit lanes until such time as I upgrade my switch hardware.
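Just to lay out the lane math as I understand it (nominal per-lane rates only; whether a given lane will actually negotiate down to 10gbit depends on the optics/DAC and the NIC firmware):

```python
# Nominal per-lane rates (just arithmetic, not a guarantee that any given
# NIC/optic combination will actually negotiate these speeds).
LANES = 4
SFP_PLUS_LANE_GBPS = 10   # SFP+ / QSFP+ lane rate
SFP28_LANE_GBPS = 25      # SFP28 / QSFP28 lane rate

print("QSFP+ aggregate: ", LANES * SFP_PLUS_LANE_GBPS, "Gbit/s")   # 40G
print("QSFP28 aggregate:", LANES * SFP28_LANE_GBPS, "Gbit/s")      # 100G
# Breakout to SFP+ gear: each lane drops to the SFP+ rate.
print("QSFP28 -> 4x SFP+ breakout:", LANES, "x", SFP_PLUS_LANE_GBPS, "Gbit/s")
```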

Initially I wouldn't be any better off than I would be with a 40Gbit QSFP+ adapter, but that seems like a tech that is going the way of the dodo, so I'd rather not invest in it, and instead jump straight to QSFP28.

Is my understanding accurate?


My second question is, is it possible to get 4x fiber outputs from a single QSFP28 port?

The four-way splitter cables I have seen are all for DAC use, but is there any way to split the QSFP28 port on the back of a server into - let's say - two DAC cables that use LAG to connect to two 10gig SFP+ ports on my switch, and use the other two SFP28 lanes for either SFP28 or SFP+ SR transceivers and run two sets of fiber from there to remote devices?

Appreciate any input and thoughts on this.
 
My second question is, is it possible to get 4x fiber outputs from a single QSFP28 port?

The four-way splitter cables I have seen are all for DAC use, but is there any way to split the QSFP28 port on the back of a server into - let's say - two DAC cables that use LAG to connect to two 10gig SFP+ ports on my switch, and use the other two SFP28 lanes for either SFP28 or SFP+ SR transceivers and run two sets of fiber from there to remote devices?

I may have just answered my own second question here, as I just found this:

[attached image: listing for a QSFP28-to-4x-LC fiber breakout cable]


It's a $90 cable (yikes) but it looks like it should do what I need it to do. May not reach from the back of the server up to the switch though. Going to need LC-LC female to female adapters...

Of course this particular model is a 1310nm version (presumably 25GBase-LR?) but if this exists, I'm guessing an 850nm 25GBase-SR version must exist as well...
 
Yea, I mean I'd just get a QSFP-100G-SR4 (new on FS.com for around $100; pick the type of compatibility you want), then your standard OM3/OM4 MPO breakout cable is all you'll need. Also, yes, the 1310nm version would be 25GBase-LR.

Assuming you're looking at the E810-CQDA2 cards (the E810-2CQDA2 are nice but bifurcation can be annoying with them to get the 2x100G speeds). They're fun cards, especially since they have a bit of traffic engineering you can do with them too.
 
Yea, I mean I'd just get a QSFP-100G-SR4 (new on FS.com for around $100; pick the type of compatibility you want), then your standard OM3/OM4 MPO breakout cable is all you'll need. Also, yes, the 1310nm version would be 25GBase-LR.

Assuming you're looking at the E810-CQDA2 cards (the E810-2CQDA2 are nice but bifurcation can be annoying with them to get the 2x100G speeds). They're fun cards, especially since they have a bit of traffic engineering you can do with them too.

So you trust the Fiber Store?

I haven't bought from them in years. I got some transceivers from them back when the first 10gig NICs to become affordable to home users were the Brocade ones. That setup never worked quite right, but honestly I blame that on Brocade more than I do the Fiber Store.

I am a little bit concerned when it comes to putting noname Chinese stuff in my network though.
 
Yea, I mean I'd just get a QSFP-100G-SR4 (new on FS.com for around $100; pick the type of compatibility you want), then your standard OM3/OM4 MPO breakout cable is all you'll need. Also, yes, the 1310nm version would be 25GBase-LR.

Assuming you're looking at the E810-CQDA2 cards (the E810-2CQDA2 are nice but bifurcation can be annoying with them to get the 2x100G speeds). They're fun cards, especially since they have a bit of traffic engineering you can do with them too.

And thank you for that.

How does this actually work practically?

I've never used a qsfp type module before. Do you have to configure if it presents a single 100gig interface, or four 25gig interfaces, or does it do that automatically based on what module is in the port?
 
So you trust the Fiber Store?

I haven't bought from them in years. I got some transceivers from them back when the first 10gig NICs to become affordable to home users were the Brocade ones. That setup never worked quite right, but honestly I blame that on Brocade more than I do the Fiber Store.

I am a little bit concerned when it comes to putting noname Chinese stuff in my network though.
I've used them for 4 or so years for their 40-100G modules, and we started trialing some of their QSFP-DD last year; I haven't noticed any more issues than with any other optic. I can see the concern, but you still have a lot of big telcos buying their equipment (switches, and I'm assuming transceivers as well). They're really not any less known than any of the other third-party optics guys (AddOn Networks, Promedia, etc.), and are probably even more well respected because they actually make networking equipment. Evil secret of the optics industry: there really aren't that many manufacturers of these out there (even fewer for the PHYs that go in them), so they all get made at the same places and just get each vendor's own coding and labeling applied on top.

Heck, if you look on FS's site they have a box that lets you change compatibility, upgrade firmware, and add your own serial numbers; most of the third-party guys have things like that.
And thank you for that.

How does this actually work practically?

I've never used a qsfp type module before. Do you have to configure if it presents a single 100gig interface, or four 25gig interfaces, or does it do that automatically based on what module is in the port?

Generally you'll have to configure whether it's going to run in a breakout mode (4x10 or 4x25, or 2x100, 4x50, 4x100, or 8x50 for the DD) or at the full port speed. Pretty straightforward on switches, slightly less easy for NICs depending on the OS you're running. Some are completely automatic though.
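If you want to sanity-check it from the OS side once the port is split, something like the sketch below works on Linux (the interface names are placeholders, and the split itself is configured on the NIC/switch, not by this script):

```python
#!/usr/bin/env python3
"""Check the negotiated link speed of each breakout interface on Linux.

The interface names are placeholders -- actual names depend on the NIC,
driver, and udev naming rules. This only confirms what each resulting
port negotiated; it does not perform the split.
"""
from pathlib import Path

ifaces = ["enp65s0f0", "enp65s0f1", "enp65s0f2", "enp65s0f3"]  # placeholders

for name in ifaces:
    speed_path = Path("/sys/class/net") / name / "speed"
    try:
        # sysfs reports the negotiated speed in Mbit/s (e.g. 10000, 25000);
        # -1 or a read error means no link.
        speed = int(speed_path.read_text().strip())
    except (OSError, ValueError):
        print(f"{name}: interface missing or link down")
        continue
    if speed < 0:
        print(f"{name}: link down")
    else:
        print(f"{name}: {speed / 1000:g} Gbit/s")
```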
 
Seems odd to go through all that expense and hassle just to end up with 10Gb. Not saying it won't work, but there are cheaper and easier paths to take for 10G networking. The cheapest 100G switch I know of is still well north of $7k, and that won't be changing anytime soon.

What's the actual end goal use case and timeline?
 
Seems odd to go through all that expense and hassle just to end up with 10Gb. Not saying it won't work, but there are cheaper and easier paths to take for 10G networking. The cheapest 100G switch I know of is still well north of $7k, and that won't be changing anytime soon.

What's the actual end goal use case and timeline?

The goal is to eventually upgrade the main switch in the rack. I like this one MikroTik sells: two QSFP+ ports, four SFP+ ports and 48 gigabit copper ports. It's surprisingly affordable at $499. Like all other MikroTik switches, I'm assuming it's great at Layer 2 stuff but not so much at Layer 3 due to an underpowered CPU, but that's fine; I don't do routing in my switches.

The only reason I'm not buying now is that I don't want to buy 10gig/40gig hardware today. I presume that at some point they must be planning on moving to SFP28/QSFP28 models, and that's when I'd pull the trigger and wind up with a 100gig link between the server and the main switch.
 
The goal is to eventually upgrade the main switch in the rack. I like this one MikroTik sells: two QSFP+ ports, four SFP+ ports and 48 gigabit copper ports. It's surprisingly affordable at $499. Like all other MikroTik switches, I'm assuming it's great at Layer 2 stuff but not so much at Layer 3 due to an underpowered CPU, but that's fine; I don't do routing in my switches.

The only reason I'm not buying now is that I don't want to buy 10gig/40gig hardware today. I presume that at some point they must be planning on moving to SFP28/QSFP28 models, and that's when I'd pull the trigger and wind up with a 100gig link between the server and the main switch.
That MikroTik only does 40G.
Like most things tech, I wouldn't worry about "future proofing". From what I know of the industry, QSFP+ isn't going anywhere -- QSFP28 is only used in 100G+ applications (it very much is the standard there, and has been for several years). If you're only looking at upgrading to 10G or 40G, then just roll with QSFP+ now and be happy. You can get Brocade ICX switches with two QSFP+ ports and 8x 10G SFP+ cages for like $250 all day right now. I have a 40G spine going between the floors in my home and 10G to workstations/servers, and until I can get 10G WAN at home, I'm not really needing or wanting more than that. Even my weekly differential backups, while huge, run when I'm asleep, so improving their speed 4x or 10x wouldn't even be noticeable.
 
Totally understand if you just wanna flex and have 100G at home. Just realize the switch is gonna cost ya $7k and be loud/power hungry/hot, plus getting 100G to end devices isn't trivial.
 
Kinda in agreement there: if you aren't going to care about the enhanced features of the Intel cards, don't buy them yet. Wait and buy them later when it makes sense; code-wise, the operating system support will make your life easier by then.
 
Totally understand if you just wanna flex and have 100G at home. Just realize the switch is gonna cost ya $7k and be loud/power hungry/hot, plus getting 100G to end devices isn't trivial.

Mikrotik sells 40G compatible QSFP+ switches for $499 now, and they are pretty quiet.

They don't have a 100G model available yet, but it's only a matter of time.

If I needed full 100G layer3 capability, sure, it would have to be a screamer, but layer2 is comparatively easy.

Kinda in agreement there: if you aren't going to care about the enhanced features of the Intel cards, don't buy them yet. Wait and buy them later when it makes sense; code-wise, the operating system support will make your life easier by then.

I usually buy Intel, not for their features, but because they work. My experience with other brands has been far from stellar, both in the consumer space (Realtek) and in the enterprise space (Brocade). Back when the BR1020s were the first 10gig adapters enthusiasts could afford, they were absolute turds, half the time not working at all and the other half barely hitting 2gbit. Swapped them out for Intel X520s and like magic everything suddenly "just worked". Because of this, I no longer buy anything but Intel NICs.

That MikroTik only does 40G.
Like most things tech, I wouldn't worry about "future proofing". From what I know of the industry, QSFP+ isn't going anywhere -- QSFP28 is only used in 100G+ applications (it very much is the standard there, and has been for several years). If you're only looking at upgrading to 10G or 40G, then just roll with QSFP+ now and be happy. You can get Brocade ICX switches with two QSFP+ ports and 8x 10G SFP+ cages for like $250 all day right now. I have a 40G spine going between the floors in my home and 10G to workstations/servers, and until I can get 10G WAN at home, I'm not really needing or wanting more than that. Even my weekly differential backups, while huge, run when I'm asleep, so improving their speed 4x or 10x wouldn't even be noticeable.

I just did some calculations, and I may stick to 40G after all.

I only have 8x PCIe gen3 to work with in the server. Everything else is taken, and 8x gen3 just doesn't have sufficient bandwidth for 100gig.
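Rough numbers behind that (theoretical rate after line encoding; real-world PCIe protocol overhead takes a bit more off the top):

```python
# PCIe gen3 x8 vs. 100G / 40G Ethernet, rough theoretical numbers.
lanes = 8
gen3_gt_per_s = 8.0        # PCIe gen3: 8 GT/s per lane
encoding = 128 / 130       # gen3 uses 128b/130b line encoding

usable_gbps = lanes * gen3_gt_per_s * encoding   # ~63 Gbit/s before TLP overhead
print(f"PCIe gen3 x8: ~{usable_gbps:.0f} Gbit/s usable")
print("Enough for 40G: ", usable_gbps >= 40)     # True
print("Enough for 100G:", usable_gbps >= 100)    # False
```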

My main motivation was to eliminate a choke point between my VM/NAS server and my main switch where 10gig currently is not enough.

Everyone had told me that 10gig/40gig is a dead end, and that 25gig/100gig is the future. I just didn't want to buy into a dead standard with 40gig.
 
Swapped them out for Intel X520s and like magic everything suddenly "just worked". Because of this, I no longer buy anything but Intel NICs.

Everyone had told me that 10gig/40gig is a dead end, and that 25gig/100gig is the future. I just didn't want to buy into a dead standard with 40gig.
And as I'm sure you know, every scammer and their mother knows this now, so it's nearly impossible to find a genuine used Intel card anymore. Even the Dell and HP branded ones have lots of fakes. :(

I love it when someone says something is a dead end. If it works for my use case, that means I can get it cheap. :D
 
Everyone had told me that 10gig/40gig is a dead end, and that 25gig/100gig is the future. I just didn't want to buy into a dead standard with 40gig.
Dunno who's telling you that, but either you're taking it out of context or they're wrong.
I'm truly surprised 10G isn't enough for normal (even high enthusiast) home use. What's your workload?
 
Just did a bit of synthetic testing on my 24-disk array + hefty L2ARC + ZIL (each sitting on their own 250GB NVMe). Dual Intel 24-core, 256GB RAM system.
I can hit juuuust about 1500MB/s read and juuuust about 800MB/s write. That's a pube hair over 10G for read and well under for write. Gotta have some insane storage if you're saturating and maintaining a bottleneck over 10G!
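(Converting those for comparison; 10GbE tops out around 1.25GB/s before protocol overhead:)

```python
# MB/s -> Gbit/s, to line the array numbers up against a 10GbE link.
for label, mb_s in [("read", 1500), ("write", 800)]:
    print(f"{label}: {mb_s} MB/s ~= {mb_s * 8 / 1000:.1f} Gbit/s")
# 10GbE line rate is 10 Gbit/s ~= 1250 MB/s before protocol overhead.
```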

Now I'm wondering where my storage bottleneck is, but, really don't have it in me to go digging tonight :D.
 
Just did a bit of synthetic testing on my 24-disk array + hefty L2ARC + ZIL (each sitting on their own 250GB NVMe). Dual Intel 24-core, 256GB RAM system.
I can hit juuuust about 1500MB/s read and juuuust about 800MB/s write. That's a pube hair over 10G for read and well under for write. Gotta have some insane storage if you're saturating and maintaining a bottleneck over 10G!

Now I'm wondering where my storage bottleneck is, but, really don't have it in me to go digging tonight :D.

I can just about max out my NAS at 10gig, roughly 1.2GB/s over NFS. I have several VMs that run on that box as well, so the intent is that the NAS doesn't dominate the interface and hurt everything else. The biggest of these is probably MythTV. It would suck if a backup to the NAS used up all of the bandwidth and caused TV content to skip or time out.

I mean, 40gig would probably be more than sufficient, but do I really want to buy into an end-of-life interface standard? This is why I am thinking 100gig, or at least 25gig. SFP28 seems to have more of a future.

It hasn't been a problem yet, but I am thinking ahead. Usually I plan and scheme for several months before actually buying stuff, so consider this conversation "early stage", targeted at sussing out the possibilities that are out there, not something I am planning on doing tomorrow.

Essentially I am plotting out the next phase of my network for the next 2-5 years or so. Anything resulting from this thread likely won't become reality until 2025 or after.
 
If 40G (or 25G, or nG) is sufficient, and will be for the likely life of the system, it doesn't much matter if the hardware is EoL (note: QSFP+ isn't EoL as an interface, nor is 40G in general).
Consider this: Mellanox just last year released brand-new ConnectX-5 cards. Those will be considered current until they hit end of sales, which then puts a 5-year EoL tag on them. So, at absolute bare minimum, you'll have 5 years of official support for 40G. Probably more like 8-10 years at a minimum, I'd wager. Intel keeps things around even longer than that (the XL710 will likely not be EoL'd for another 5-6 years). And just because something falls off official support doesn't mean the driver is suddenly going to stop working. If I recall past posts by you, you intentionally run software that's as old as possible (I think I recall you saying you're on Ubuntu 16.04 still?), so that reduces the concern even further, imo.

Once either 40G isn't enough bandwidth or you are wholesale rebuilding the system, you can reevaluate your needs. For now, you might as well just get what you need, with some padding (i.e. it sounds like 40G is the ticket), which will be 1/100th the cost and headache of 100G.

I'd really just get an ICX-6610 (2x 40G ports - one to the server, the other open for whatever you want later) and a Mellanox ConnectX-4 BCAT. The switch will be ~$250, the NIC ~$150. So for $400 you get a 40G switch and a 40G interface in your server, plus an available 40G port to use in the future.
 
If 40G (or 25G, or nG) is sufficient, and will be for the likely life of the system, it doesn't much matter if the hardware is EoL (note: QSFP+ isn't EoL as an interface, nor is 40G in general).
Consider this: Mellanox just last year released brand-new ConnectX-5 cards. Those will be considered current until they hit end of sales, which then puts a 5-year EoL tag on them. So, at absolute bare minimum, you'll have 5 years of official support for 40G. Probably more like 8-10 years at a minimum, I'd wager. Intel keeps things around even longer than that (the XL710 will likely not be EoL'd for another 5-6 years). And just because something falls off official support doesn't mean the driver is suddenly going to stop working. If I recall past posts by you, you intentionally run software that's as old as possible (I think I recall you saying you're on Ubuntu 16.04 still?), so that reduces the concern even further, imo.

Once either 40G isn't enough bandwidth or you are wholesale rebuilding the system, you can reevaluate your needs. For now, you might as well just get what you need, with some padding (i.e. it sounds like 40G is the ticket), which will be 1/100th the cost and headache of 100G.

I'd really just get an ICX-6610 (2x 40G ports - one to the server, the other open for whatever you want later) and a Mellanox ConnectX-4 BCAT. The switch will be ~$250, the NIC ~$150. So for $400 you get a 40G switch and a 40G interface in your server, plus an available 40G port to use in the future.

Appreciate the recommendations.

Going with QSFP28/SFP28 would also have had the benefit of upping my direct links to my NAS to 25gbit, which would have been nice, but in the grand scheme of things it isn't necessary.

I will have to read up on that Ruckus and Mellanox gear. I haven't used either yet. I've been leaning toward Intel NICs (because they just work without problems) and have been using Mikrotik switches because they are so damn affordable.

At this point I'm leaning towards a single-port 40gig QSFP+ adapter in the server (all I have free is x8 gen3, so nothing more makes sense at this point), direct-linking that to a main switch in my rack via a DAC, and then running dual 10gig fibers from that main switch to my workstation and my test bench, as well as single 10gig fiber uplinks to smaller switches around the house.

My main inclination is to go with what I know (single SFP+ port intel 710 in the server, and a Mikrotik main switch), but the options you mention are worth looking into.

Are those Ruckus ICX switches easy to manage straight from the device, or are they like a lot of enterprise gear that requires management software that carries licensing, etc. etc?
 
Kinda in agreement there: if you aren't going to care about the enhanced features of the Intel cards, don't buy them yet. Wait and buy them later when it makes sense; code-wise, the operating system support will make your life easier by then.

Btw. What do you mean when you talk about "extra features"? What do they do beyond just being a NIC?
 
Mikrotik switches because they are so damn affordable.
Are those Ruckus ICX switches easy to manage straight from the device, or are they like a lot of enterprise gear that requires management software that carries licensing, etc. etc?
There is a reason mikrotik is so cheap.

I have an ICX6610-48P and it's a fantastic switch. It has out-of-band management and a console for managing it, and it has a web management portal too, but that's not the greatest. CLI is the best way of programming them.
As for licensing, these switches do have licenses, but most of them get you additional features, mainly L3 stuff. However, there are licenses for ports as well: by default the SFP ports are 1G and a license is required to make them 10G.

Here is the go-to guide for Brocade switches. Make sure you look at the datasheet for them, as it will answer a lot of questions. Also, the license guide is great.
https://forums.servethehome.com/ind...s-cheap-powerful-10gbe-40gbe-switching.21107/
 