NVIDIA Acquires Mellanox for $6.9 Billion

cageymaru

NVIDIA has reached a definitive agreement to acquire Mellanox for $6.9 billion. NVIDIA will acquire all of the issued and outstanding common shares of Mellanox for $125 per share in cash. NVIDIA and Mellanox are both known as high performance computing (HPC) industry leaders and their products are found in over 250 of the world's TOP500 supercomputers and in every major cloud service provider. Mellanox is known for its high-performance interconnect technology called InfiniBand and its high-speed Ethernet products. "We share the same vision for accelerated computing as NVIDIA," said Eyal Waldman, founder and CEO of Mellanox. "Combining our two companies comes as a natural extension of our longstanding partnership and is a great fit given our common performance-driven cultures. This combination will foster the creation of powerful technology and fantastic opportunities for our people."

"The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world's datacenters," said Jensen Huang, founder and CEO of NVIDIA. "Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine. "We're excited to unite NVIDIA's accelerated computing platform with Mellanox's world-renowned accelerated networking platform under one roof to create next-generation datacenter-scale computing solutions. I am particularly thrilled to work closely with the visionary leaders of Mellanox and their amazing people to invent the computers of tomorrow."
 
Hopefully Nvidia doesn't find a way to screw up things like IB (InfiniBand) and make it incompatible with other vendors and force you to buy equipment from them to support their stuff like the DGX-1.
 
Hopefully Nvidia doesn't find a way to screw up things like IB (InfiniBand) and make it incompatible with other vendors and force you to buy equipment from them to support their stuff like the DGX-1.
I can't imagine they will buy $7 billion worth of interconnect tech just to open source it. License it out, yes, and that would also fit their business model, even if it would stifle usage and development, but that's never been a problem for NV.
 
Hopefully Nvidia doesn't find a way to screw up things like IB (InfiniBand) and make it incompatible with other vendors and force you to buy equipment from them to support their stuff like the DGX-1.
Oh but they will... I think they seem to be a destructive, takeover kind of corporation.
 
I actually disagree with what he's saying, but there is no reason whatsoever to bring political ideology into this discussion.
[H]'s mods and admins' hair will turn white trying to moderate political discussion here as we get deeper into the 2020 election season.
 
Not sure how I feel about this as I am a big fan of Mellanox products for InfiniBand. They were already highly involved with Nvidia before this though. What I am most worried about is how Nvidia will handle the overall networking products outside of high end GPU computing.

I would be interested to see if Intel, Broadcom, or HP start broadening their InfiniBand products in light of this development. Largely they have left a bit of this up to Mellanox since it is such a niche market, but with the Nvidia buyout that may change.

Hopefully Nvidia doesn't find a way to screw up things like IB (InfiniBand) and make it incompatible with other vendors and force you to buy equipment from them to support their stuff like the DGX-1.

I can't imagine they will buy $7 billion worth of interconnect tech just to open source it. License it out, yes, and that would also fit their business model, even if it would stifle usage and development, but that's never been a problem for NV.

Oh but they will... I think they seem to be a destructive, takeover kind of corporation.

I am really confused about what you guys are trying to say here. Mellanox does not own InfiniBand, it is more or less an open standard. Several companies are members of the InfiniBand Trade Association that guides the specification.
 
This is sort of a good thing, it will allow nVidia to compete with Intel on a number of AI-based supercomputer projects coming up. As for the GPU side of things, Mellanox and nVidia have been cooperating and doing joint development for a long time, so there won't be much of a change there. They have been bedfellows for at least a decade at this point so it was about time this happened.
 
Not sure how I feel about this as I am a big fan of Mellanox products for InfiniBand. They were already highly involved with Nvidia before this though. What I am most worried about is how Nvidia will handle the overall networking products outside of high end GPU computing.

I would be interested to see if Intel, Broadcom, or HP start broadening their InfiniBand products in light of this development. Largely they have left a bit of this up to Mellanox since it is such a niche market, but with the Nvidia buyout that may change.

I am really confused about what you guys are trying to say here. Mellanox does not own InfiniBand, it is more or less an open standard. Several companies are members of the InfiniBand Trade Association that guides the specification.
Open standard is a little loose as there isn’t really a standard implementation outside of the start and end call for packet transmission. But yeah Infinity band is already used by a lot of companies for stuff. I have a bunch of L3 Dell switches that use it for interconnects.
 
Not sure how I feel about this as I am a big fan of Mellanox products for InfiniBand. They were already highly involved with Nvidia before this though. What I am most worried about is how Nvidia will handle the overall networking products outside of high end GPU computing.

I would be interested to see if Intel, Broadcom, or HP start broadening their InfiniBand products in light of this development. Largely they have left a bit of this up to Mellanox since it is such a niche market, but with the Nvidia buyout that may change.

I am really confused about what you guys are trying to say here. Mellanox does not own InfiniBand, it is more or less an open standard. Several companies are members of the InfiniBand Trade Association that guides the specification.

On my behalf, I am saying something about the type of 'partner' Nvidia is.
In that they are not a 'partner' at all, and I don't think they have ever embarked on a 'synergistic' kind of endeavor.
I expect whatever Mellanox does to suffer, and whatever tech they have to be closed off as much as possible in the next update or the one after.
Whatever they do will probably be contaminated with Nvidia 'AI' bullshit too.
I'm just guessing and opinionating; I don't even know what Mellanox is, it sounds like a laundry detergent.
 
Open standard is a little loose as there isn’t really a standard implementation outside of the start and end call for packet transmission. But yeah Infinity band is already used by a lot of companies for stuff. I have a bunch of L3 Dell switches that use it for interconnects.

First, it's InfiniBand not "Infinity Band" (which actually describes a ring) and yes it very much does have a specification and a control board (InfiniBand Trade Association) which I included links to in my post. Specifically,

"InfiniBand is an industry-standard specification that defines an input/output architecture used to interconnect servers, communications infrastructure equipment, storage and embedded systems."

Remember standards talk about the architecture and how things work together. That doesn't mean you have to include everything in your implementation, but it needs to operate in a standard way to communicate with other devices and work properly.

And btw, those Dell switches are actually made by Mellanox...

On my behalf, I am saying something about the type of 'partner' Nvidia is.
In that they are not a 'partner' at all, and I don't think they have ever embarked on a 'synergistic' kind of endeavor.
I expect whatever Mellanox does to suffer, and whatever tech they have to be closed off as much as possible in the next update or the one after.
Whatever they do will probably be contaminated with Nvidia 'AI' bullshit too.
I'm just guessing and opinionating; I don't even know what Mellanox is, it sounds like a laundry detergent.

This actually isn't true. Nvidia has partnered with many different companies, and those partnerships have worked out well for both sides. I have actually had far better experiences working on partnerships with Nvidia than I have with AMD. Most of that has to do with the resources Nvidia has at hand to help projects. It is also why Nvidia is one of the world leaders in high end compute projects.

As for what Mellanox is, they are the leading provider of InfiniBand equipment. InfiniBand is a networking standard that was created to greatly increase bandwidth for high end distributed computing.

In this case, Nvidia is not partnering with Mellanox, they are outright buying them. That is the concerning part for me, as they may decide to steer Mellanox products away from the more general networking market and gear them towards a much more niche GPU compute market. In that I am in complete agreement with you. Although I am not sure what Nvidia AI bullshit has to do with InfiniBand specifically. InfiniBand may be used to help push Nvidia AI in providing faster interconnect for distributed GPU compute. But that was something Mellanox was already working on with Nvidia, and it was something we were using on my last project. Nvidia was already using their own fiber technology prior to this, which was designed much like Intel's fiber technology to directly connect their cores and CPUs.
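
To give a flavor of what that distributed GPU compute over InfiniBand looks like in practice, here is a bare-bones sketch of my own (not code from that project) using a CUDA-aware MPI build, which lets you hand device pointers straight to the fabric; the buffer size and rank numbers are just placeholders:

/* Sketch only: exchanging GPU buffers between two ranks over an IB fabric
 * with a CUDA-aware MPI (e.g. Open MPI built with CUDA support).
 * Error handling omitted for brevity. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1 << 20;                 /* placeholder: 1M floats */
    float *d_buf;
    cudaMalloc((void **)&d_buf, count * sizeof(float));

    /* With a CUDA-aware MPI the device pointer goes straight into the MPI
     * call; over Mellanox HCAs the transfer can then use RDMA instead of
     * being staged through host memory. */
    if (rank == 0)
        MPI_Send(d_buf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}

Launched with something like mpirun -np 2 across two nodes, the HCAs end up doing the heavy lifting.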
 
First, it's InfiniBand not "Infinity Band" (which actually describes a ring) and yes it very much does have a specification and a control board (InfiniBand Trade Association) which I included links to in my post. Specifically,

"InfiniBand is an industry-standard specification that defines an input/output architecture used to interconnect servers, communications infrastructure equipment, storage and embedded systems."

Remember standards talk about the architecture and how things work together. That doesn't mean you have to include everything in your implementation, but it needs to operate in a standard way to communicate with other devices and work properly.

And btw, those Dell switches are actually made by Mellanox...

This actually isn't true. Nvidia has partnered with many different companies, and those partnerships have worked out well for both sides. I have actually had far better experiences working on partnerships with Nvidia than I have with AMD. Most of that has to do with the resources Nvidia has at hand to help projects. It is also why Nvidia is one of the world leaders in high end compute projects.

As for what Mellanox is, they are the leading provider of InfiniBand equipment. InfiniBand is a networking standard that was created to greatly increase bandwidth for high end distributed computing.

In this case, Nvidia is not partnering with Mellanox, they are outright buying them. That is the concerning part for me, as they may decide to steer Mellanox products away from the more general networking market and gear them towards a much more niche GPU compute market. In that I am in complete agreement with you. Although I am not sure what Nvidia AI bullshit has to do with InfiniBand specifically. InfiniBand may be used to help push Nvidia AI in providing faster interconnect for distributed GPU compute. But that was something Mellanox was already working on with Nvidia, and it was something we were using on my last project. Nvidia was already using their own fiber technology prior to this, which was designed much like Intel's fiber technology to directly connect their cores and CPUs.
Thanks for the clarification.

But I don’t think there is a reason for nVidia to make changes to infinitiband (have to fight autocorrect to get that to type) at this point, but I could see them using their new resources to create a new high speed transport specifically for distributed AI functions or god forbid a better interconnect for SLI like a control node or something.
 
Thanks for the clarification.

But I don’t think there is a reason for nVidia to make changes to infinitiband (have to fight autocorrect to get that to type) at this point, but I could see them using their new resources to create a new high speed transport specifically for distributed AI functions or god forbid a better interconnect for SLI like a control node or something.

Nvidia cannot make changes to InfiniBand. The specification is controlled by the InfiniBand Trade Association and overseen by a bunch of companies. Like I said, if you go to the link, it talks all about InfiniBand.

Nvidia already has a technology they were using for distributed compute, it is called NVLink. Its concept is similar to QPI in Intel. Both Intel and Nvidia though are limited when it comes to extending these across multiple systems, as well as increasing the bandwidth capability. This is generally where InfiniBand comes into play. The problem is that you are still dumping information out of the NVLink or QPI when you need to go through the main network stack. You lose a bit in that transition. I am curious if this is the reason Nvidia is buying Mellanox. Now Nvidia already uses Mellanox heavily in their HPC solutions. Mellanox is prominently used in most of the world's supercomputer clusters.
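
To make that "dumping out of NVLink into the network stack" point a bit more concrete, here is a minimal sketch of my own of the piece that closes that gap today, GPUDirect RDMA: a CUDA buffer gets registered directly with the HCA through the verbs API so the NIC can DMA to and from GPU memory without a host-side bounce buffer. This assumes a Mellanox HCA with the NVIDIA peer-memory kernel module loaded, and all error handling is left out:

/* Sketch only: registering GPU memory with an IB HCA for GPUDirect RDMA. */
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);  /* e.g. mlx5_0 */
    struct ibv_context *ctx  = ibv_open_device(devs[0]);
    struct ibv_pd *pd        = ibv_alloc_pd(ctx);

    size_t len = 1 << 20;                                 /* placeholder */
    void *gpu_buf = 0;
    cudaMalloc(&gpu_buf, len);

    /* With the peer-memory module in place, a device pointer can be
     * registered like any host buffer; RDMA reads and writes then hit
     * GPU memory directly instead of bouncing through system RAM. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* ... build queue pairs and post RDMA work requests using mr->lkey ... */

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}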
 
Nvidia cannot make changes to InfiniBand. The specification is controlled by the InfiniBand Trade Association and overseen by a bunch of companies. Like I said, if you go to the link, it talks all about InfiniBand.

Nvidia already has a technology they were using for distributed compute, it is called NVLink. Its concept is similar to QPI in Intel. Both Intel and Nvidia though are limited when it comes to extending these across multiple systems, as well as increasing the bandwidth capability. This is generally where InfiniBand comes into play. The problem is that you are still dumping information out of the NVLink or QPI when you need to go through the main network stack. You lose a bit in that transition. I am curious if this is the reason Nvidia is buying Mellanox. Now Nvidia already uses Mellanox heavily in their HPC solutions. Mellanox is prominently used in most of the world's supercomputer clusters.
Dedicated NVLink switches would be kind of cool
 
Nvidia cannot make changes to InfiniBand.

No, but they can make changes in the way it's implemented in their own products to force you to buy switching equipment from them or force others to buy chips from them to support their stuff.

And guess what? Nvidia is a monopoly in GPU computing so companies have no choice but to deal with them or go without it.
 
Dedicated NVLink switches would be kind of cool

For reference, the new RTX cards are using NVLink bridges instead of SLI, which cost $80. The NVLink switches were designed for two main functions. The first was to replace PCIe connections intrasystem, the second was to bridge multiple systems. Nvidia's current architecture for HPC includes servers with 8 GPUs. The NVLink switch has 18 ports. Nvidia then doubles up the server, giving you 16 interconnected GPUs. There are some applications that extend this to more systems as well, but it gets more complicated. This is most likely why Nvidia has bought Mellanox.
 
For reference, the new RTX cards are using NVLink bridges instead of SLI, which cost $80. The NVLink switches were designed for two main functions. The first was to replace PCIe connections intrasystem, the second was to bridge multiple systems. Nvidia's current architecture for HPC includes servers with 8 GPUs. The NVLink switch has 18 ports. Nvidia then doubles up the server, giving you 16 interconnected GPUs. There are some applications that extend this to more systems as well, but it gets more complicated. This is most likely why Nvidia has bought Mellanox.
The RTX cards use NVLink but I thought the consumer cards used SLI over NVLink.
 
That's more than AMD paid for ATI.

RIP ATI+Bioware. My favorite 2003 Canadian gaming duo. Fond memories of a more elegant time.
 
No, but they can make changes in the way it's implemented in their own products to force you to buy switching equipment from them or force others to buy chips from them to support their stuff.

And guess what? Nvidia is a monopoly in GPU computing so companies have no choice but to deal with them or go without it.

Nvidia is not a monopoly in GPU computing, AMD and Intel also do GPU computing. Mellanox is not currently implemented in Nvidia products, unless you mean as a component of their current GPU servers. But then again, Mellanox is a component in most of the supercomputer clusters. There are other options out there besides Mellanox for InfiniBand, so there is no "forcing" others to buy from Nvidia, but it certainly will have an impact on those currently using Mellanox. They will either have to switch over to Intel solutions or go with whatever Nvidia ends up with.
 
The RTX cards use NVLink but I thought the consumer cards used SLI over NVLink.

? RTX are consumer cards...specifically RTX 2080. The newer RTX line and presumably going forward use NVLink. Older cards use SLI.

Geforce RTX NVLink:

"GeForce RTX NVLINK™ Bridge
The GeForce RTX NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies. This means you can count on super-smooth gameplay at maximum resolutions with ultimate visual fidelity in GeForce RTX 2080 Ti and 2080 graphics cards."

EDIT: Just to be clear here, the new NVLink offers some advantages over SLI. First is speed. They are much faster than the old SLI interconnects. Second they allow for memory pooling. However, this requires the developer to actually incorporate that into their program. So current games designed for SLI will not take advantage of the memory pooling, however they should take advantage of the increased speed.
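
Since the memory pooling point keeps coming up, a quick illustration: as far as I understand it, the application has to opt in explicitly. A bare-bones sketch of my own of that opt-in with the CUDA runtime (device numbers are placeholders for two NVLink-bridged cards):

/* Sketch only: enabling peer access between two GPUs joined by an NVLink
 * bridge (falls back to PCIe peer-to-peer where supported). */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);    /* can GPU 0 map GPU 1? */
    cudaDeviceCanAccessPeer(&can10, 1, 0);

    if (can01 && can10) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);     /* flags must be 0 */
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        printf("peer access enabled; copies and kernels can now touch the other GPU's memory\n");
    } else {
        printf("no peer path between GPU 0 and GPU 1\n");
    }
    return 0;
}

This is the kind of opt-in a game or engine has to do itself, which is why current SLI titles don't get pooled memory just because an NVLink bridge is installed.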
 
? RTX are consumer cards...specifically RTX 2080. The newer RTX line and presumably going forward use NVLink. Older cards use SLI.

Geforce RTX NVLink:

"GeForce RTX NVLINK™ Bridge
The GeForce RTX NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies. This means you can count on super-smooth gameplay at maximum resolutions with ultimate visual fidelity in GeForce RTX 2080 Ti and 2080 graphics cards."
I remember reading somewhere that the NVLink used in the RTX consumer cards was different than the NVLink used for the Titan and Quadro series; the pinout was different or there were more pins or something, hence the 10x price difference between the consumer and enterprise NVLink connectors. In that article they said that even though the RTX cards were using the NVLink connectors, they were using the SLI protocol over those connectors, so there was still a master/slave relationship among the cards. Unless nVidia changed that between their announcement and launch I don't know, or I could be completely wrong in my memory, but I can find a Linus video of him stating basically the same thing, so they must have released that info somewhere at some point.
 
I can't imagine they will buy $7 billion worth of interconnect tech just to open source it. License it out, yes, and that would also fit their business model, even if it would stifle usage and development, but that's never been a problem for NV.
Um, the ConnectX driver at least is already in the Linux kernel...
 
I remember reading somewhere that the NVLink used in the RTX consumer cards was different than the NVLink used for the Titan and Quadro series; the pinout was different or there were more pins or something, hence the 10x price difference between the consumer and enterprise NVLink connectors. In that article they said that even though the RTX cards were using the NVLink connectors, they were using the SLI protocol over those connectors, so there was still a master/slave relationship among the cards. Unless nVidia changed that between their announcement and launch I don't know, or I could be completely wrong in my memory, but I can find a Linus video of him stating basically the same thing, so they must have released that info somewhere at some point.

So some of this is comparing apples to oranges. Consumer cards are not afforded all the same features as prosumer cards. There are some things as far as actually doing compute that the consumer cards will not be able to take advantage of, except perhaps for the Titan.

What you are really asking about here is peering and memory pooling. NVLink for RTX does support that, but it is up to the developer to actually include this in their program, as I said. The NVLink will also support games using SLI technology. The NVLink bridge itself actually has more bandwidth and can transfer faster than the old SLI bridges. So even if you do not include memory pooling, it should still give you better performance. You can take a look at a simple test by techpowerup and one by puget. The NVLink is mainly a technology for GPU compute, Nvidia just takes advantage of some of its benefits in the new consumer cards to provide more options. But it is up to the developer to actually include those in their program.
 
So some of this is comparing apples to oranges. Consumer cards are not afforded all the same features as prosumer cards. There are some things as far as actually doing compute that the consumer cards will not be able to take advantage of, except perhaps for the Titan.

What you are really asking about here is peering and memory pooling. NVLink for RTX does support that, but it is up to the developer to actually include this in their program, as I said. The NVLink will also support games using SLI technology. The NVLink bridge itself actually has more bandwidth and can transfer faster than the old SLI bridges. So even if you do not include memory pooling, it should still give you better performance. You can take a look at a simple test by techpowerup and one by puget. The NVLink is mainly a technology for GPU compute, Nvidia just takes advantage of some of its benefits in the new consumer cards to provide more options. But it is up to the developer to actually include those in their program.
Good clarification, I was under the mistaken interpretation then that they were just doing the same old SLI protocol over the newer links.
 
The purchase makes sense. Nvidia needs to diversify its product lineup a bit more to expand, and some of Mellanox's technology will go well together with Nvidia's.

Sucks to be Intel getting outbid by Nvidia.
 
The purchase makes sense. Nvidia needs to diversify its product lineup a bit more to expand, and some of Mellanox's technology will go well together with Nvidia's.

Sucks to be Intel getting outbid by Nvidia.

Personally I am actually glad they were outbid. Intel already has their own InfiniBand technology. If they had won the bid, they would just be consolidating technology, offering less competition.
 
Nvidia is not a monopoly in GPU computing,

Depends on how you want to define monopoly. If you look at GPU-accelerated applications, many will require CUDA and few will support OpenCL. And the ones that do typically show bad performance for various reasons. There is Vulkan Compute, which isn't used much at all currently. Perhaps Intel can change this down the road.

And my point is, in their own products such as the DGX-1, which supports InfiniBand, they could implement it in some shitty way that forces equipment manufacturers to buy chips from them to have that equipment supported when it comes to switches and such. Pretty much to lock out other vendors.

Vendor lock-in as a way to stifle competition is all too common in the enterprise segment.

For consumer GPUs and markets this is meaningless and most won't care anyhow.
 
Intel's bid enticed NVIDIA to overpay by about a billion dollars for a company that Intel didn't need to begin with. Cool beans...

It's an acquisition meant to bolster NVIDIA's stock. I wouldn't be surprised if they sell Mellanox down the road, after they get all the IP out of them that they possibly can without damaging that company's value too much in the process. This isn't an acquisition like Ageia or 3DFX. Those were small potatoes compared to this.
 
Damn, they paid more for Mellanox than AMD did for ATI. That seems nuts from a context standpoint, though with cloud computing it's probably rather lucrative today, plus it plays into Nvidia's ambitions in that area I suppose as well. Still, that's a hefty price tag, but they've got tons of cash anyway so it shouldn't financially burden them in the same way it did AMD.
 
Damn, they paid more for Mellanox than AMD did for ATI. That seems nuts from a context standpoint, though with cloud computing it's probably rather lucrative today, plus it plays into Nvidia's ambitions in that area I suppose as well. Still, that's a hefty price tag, but they've got tons of cash anyway so it shouldn't financially burden them in the same way it did AMD.

Now, if only NVIDIA could somehow get their hands on an x86 license, if only... I wonder what kind of approach they would take to CPU design. They have datacenter-class GPUs and now high-end networking equipment in their portfolio; it seems like CPUs are the last piece of the puzzle that's missing, and they would practically own the data center market.
 
Now, if only NVIDIA could somehow get their hands on an x86 license, if only... I wonder what kind of approach they would take to CPU design. They have datacenter-class GPUs and now high-end networking equipment in their portfolio; it seems like CPUs are the last piece of the puzzle that's missing, and they would practically own the data center market.

If ARM ever takes off in the server world we'll probably find out.
 