NVIDIA Acquires Mellanox for $6.9 Billion

Discussion in 'HardForum Tech News' started by cageymaru, Mar 11, 2019.

  1. cageymaru

    cageymaru [H]ard as it Gets

    Messages:
    19,814
    Joined:
    Apr 10, 2003
    NVIDIA has reached a definitive agreement to acquire Mellanox for $6.9 billion. NVIDIA will acquire all of the issued and outstanding common shares of Mellanox for $125 per share in cash. NVIDIA and Mellanox are both known as high performance computing (HPC) industry leaders and their products are found in over 250 of the world's TOP500 supercomputers and every major cloud service provider. Mellanox is known for its high-performance interconnect technology called InfiniBand and high-speed Ethernet products. "We share the same vision for accelerated computing as NVIDIA," said Eyal Waldman, founder and CEO of Mellanox. "Combining our two companies comes as a natural extension of our longstanding partnership and is a great fit given our common performance-driven cultures. This combination will foster the creation of powerful technology and fantastic opportunities for our people."

    "The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world's datacenters," said Jensen Huang, founder and CEO of NVIDIA. "Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine. We're excited to unite NVIDIA's accelerated computing platform with Mellanox's world-renowned accelerated networking platform under one roof to create next-generation datacenter-scale computing solutions. I am particularly thrilled to work closely with the visionary leaders of Mellanox and their amazing people to invent the computers of tomorrow."
     
  2. Stimpy88

    Stimpy88 [H]ard|Gawd

    Messages:
    1,271
    Joined:
    Feb 18, 2004
    Another excuse to keep consumer GPU prices high for another few years...
     
    Ironchef3500 likes this.
  3. pclov3r

    pclov3r Limp Gawd

    Messages:
    440
    Joined:
    Feb 14, 2010
    Hopefully Nvidia doesn't find a way to screw up things like IB (InfiniBand) and make it incompatible with other vendors, forcing you to buy equipment from them to support their stuff like the DGX-1.
     
  4. Stimpy88

    Stimpy88 [H]ard|Gawd

    Messages:
    1,271
    Joined:
    Feb 18, 2004
    I can't imagine they will buy $7 billion worth of interconnect tech just to open source it. License it out, yes, and that would also fit their business model, even though it would stifle usage and development, but that's never been a problem for NV.
     
    Red Falcon and Ski like this.
  5. Uvaman2

    Uvaman2 2[H]4U

    Messages:
    3,117
    Joined:
    Jan 4, 2016
    Oh, but they will... I think they're a destructive, take-over kind of corporation.
     
  6. Nukester

    Nukester [H]ard|Gawd

    Messages:
    1,428
    Joined:
    Mar 21, 2016
    Guess they are making exorbitant profits off of their overpriced video cards.
     
    Ironchef3500 and Marees like this.
  7. katanaD

    katanaD [H]ard|Gawd

    Messages:
    1,987
    Joined:
    Nov 15, 2016

    Sadly.. as witnessed by just about every website.. that seems to be the norm

    We could be discussing cupcakes.. and lo and behold.. someone will try to make it political

    :eek::oops:
     
  8. Spidey329

    Spidey329 [H]ardForum Junkie

    Messages:
    8,676
    Joined:
    Dec 15, 2003
    He added, "plus, we reeeeaaalllly love money," as he snorted a line of coke, slammed a champagne bottle against the wall, and sprinted out the door with two hookers in tow.
     
  9. Smashing Young Man

    Smashing Young Man [H]ard|Gawd

    Messages:
    1,537
    Joined:
    Sep 11, 2009
    [H]'s mods and admins' hair will turn white trying to moderate political discussion here as we get deeper into the 2020 election season.
     
  10. NoOther

    NoOther [H]ardness Supreme

    Messages:
    6,477
    Joined:
    May 14, 2008
    Not sure how I feel about this as I am a big fan of Mellanox products for InfiniBand. They were already highly involved with Nvidia before this though. What I am most worried about is how Nvidia will handle the overall networking products outside of high end GPU computing.

    I would be interested to see if Intel, Broadcom, or HP start broadening their InfiniBand products in light of this development. Largely they have left a bit of this up to Mellanox since it is such a niche market, but with the Nvidia buyout that may change.

    I am really confused about what you guys are trying to say here. Mellanox does not own InfiniBand, it is more or less an open standard. Several companies are members of the InfiniBand Trade Association that guides the specification.
     
    GoldenTiger, Revdarian and Lakados like this.
  11. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,662
    Joined:
    Feb 3, 2014
    This is sort of a good thing; it will allow nVidia to compete with Intel on a number of AI-based supercomputer projects coming up. As for the GPU side of things, Mellanox and nVidia have been cooperating and doing joint development for a long time, so there won't be much of a change there. They have been bedfellows for at least a decade at this point, so it was about time this happened.
     
  12. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,662
    Joined:
    Feb 3, 2014
    Open standard is a little loose as there isn’t really a standard implementation outside of the start and end call for packet transmission. But yeah Infinity band is already used by a lot of companies for stuff. I have a bunch of L3 Dell switches that use it for interconnects.
     
  13. Uvaman2

    Uvaman2 2[H]4U

    Messages:
    3,117
    Joined:
    Jan 4, 2016
    On my behalf, I am pointing out the type of 'partner' Nvidia is.
    In that they are not a 'partner' at all, and I don't think they have ever embarked on a 'synergistic' kind of endeavor.
    I expect whatever Mellanox does to suffer, and whatever tech they have to be closed off as much as possible in the next update or the one after that.
    Whatever they do will probably be contaminated with Nvidia 'AI' bullshit too.
    That's me guessing and opining; I don't even know what Mellanox is, sounds like a laundry detergent.
     
  14. NoOther

    NoOther [H]ardness Supreme

    Messages:
    6,477
    Joined:
    May 14, 2008
    First, it's InfiniBand not "Infinity Band" (which actually describes a ring) and yes it very much does have a specification and a control board (InfiniBand Trade Association) which I included links to in my post. Specifically,

    "InfiniBand is an industry-standard specification that defines an input/output architecture used to interconnect servers, communications infrastructure equipment, storage and embedded systems."

    Remember standards talk about the architecture and how things work together. That doesn't mean you have to include everything in your implementation, but it needs to operate in a standard way to communicate with other devices and work properly.

    And btw, those Dell switches are actually made by Mellanox...

    This actually isn't true. Nvidia has partnered with many different companies that have worked well for both companies. I have actually had far better experiences working on partnerships with Nvidia than I have with AMD. Most of that has to do with the resources Nvidia has at hand to help projects. It is also why Nvidia is one of the world leaders in high end compute projects.

    As for what Mellanox is, they are the leading provider of InfiniBand equipment. InfiniBand is a networking standard that was created to greatly increase bandwidth for high end distributed computing.

    In this case, Nvidia is not partnering with Mellanox, they are outright buying them. That is the concerning part for me, as they may decide to steer Mellanox products away from the more general networking market and gear them towards a much more niche GPU compute market. In that I am in complete agreement with you. Although I am not sure what Nvidia AI bullshit has to do with InfiniBand specifically. InfiniBand may be used to help push Nvidia AI by providing faster interconnect for distributed GPU compute, but that was something Mellanox was already working on with Nvidia, and it was something we were using on my last project. Nvidia was already using their own fiber technology prior to this, which was designed much like Intel's fiber technology to directly connect their cores and CPUs.
     
    DooKey, Red Falcon, Brian_B and 3 others like this.
  15. Krazy925

    Krazy925 2[H]4U

    Messages:
    3,318
    Joined:
    Sep 29, 2012
    They’ll just ban people.

    Problem solved.
     
    GoldenTiger likes this.
  16. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,662
    Joined:
    Feb 3, 2014
    Thanks for the clarification.

    But I don’t think there is a reason for nVidia to make changes to infinitiband (have to fight autocorrect to get that to type) at this point, but I could see them using their new resources to create a new high speed transport specifically for distributed AI functions or god forbid a better interconnect for SLI like a control node or something.
     
  17. NoOther

    NoOther [H]ardness Supreme

    Messages:
    6,477
    Joined:
    May 14, 2008
    Nvidia cannot make changes to InfiniBand. The specification is controlled by the InfiniBand Trade Association and overseen by a bunch of companies. Like I said, if you go to the link, it talks all about InfiniBand.

    Nvidia already has a technology they were using for distributed compute; it is called NVLink. Its concept is similar to Intel's QPI. Both Intel and Nvidia, though, are limited when it comes to extending these across multiple systems, as well as increasing the bandwidth capability. This is generally where InfiniBand comes into play. The problem is that you are still dumping information out of NVLink or QPI when you need to go through the main network stack, and you lose a bit in that transition. I am curious if this is the reason Nvidia is buying Mellanox. Nvidia already uses Mellanox heavily in their HPC solutions, and Mellanox is prominently used in most of the world's supercomputer clusters.
     
    Lakados likes this.
  18. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,662
    Joined:
    Feb 3, 2014
    Dedicated NVLink switches would be kind of cool
     
  19. pclov3r

    pclov3r Limp Gawd

    Messages:
    440
    Joined:
    Feb 14, 2010
    No, but they can make changes in the way it's implemented in their own products to force you to buy switching equipment from them, or force others to buy chips from them to support their stuff.

    And guess what? Nvidia is a monopoly in GPU computing so companies have no choice but to deal with them or go without it.
     
  20. NoOther

    NoOther [H]ardness Supreme

    Messages:
    6,477
    Joined:
    May 14, 2008
    For reference, the new RTX cards are using NVLink bridges instead of SLI, which cost $80. The NVLink switches were designed for two main functions: the first was to replace PCIe connections intrasystem, the second was to bridge multiple systems. Nvidia's current architecture for HPC includes servers with 8 GPUs. The NVLink switch has 18 ports. Nvidia then doubles up the server, giving you 16 interconnected GPUs. There are some applications that extend this to more systems as well, but it gets more complicated. This is most likely why Nvidia has bought Mellanox.
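    The appeal of a switched fabric over direct links can be sketched with quick arithmetic: a full point-to-point mesh needs n*(n-1)/2 links, which outruns even an 18-port switch well before 16 GPUs. (The mesh model below is an illustrative assumption for the comparison, not NVIDIA's actual DGX topology.)

```python
# Back-of-the-envelope sketch: links needed for a full point-to-point
# mesh of n GPUs versus the 18 ports on a single NVLink switch.
# Illustrative model only, not NVIDIA's actual topology.

def mesh_links(n_gpus: int) -> int:
    """Direct links for an all-to-all mesh: n * (n - 1) / 2."""
    return n_gpus * (n_gpus - 1) // 2

SWITCH_PORTS = 18  # NVLink switch port count cited above

for n in (2, 8, 16):
    verdict = "fits" if mesh_links(n) <= SWITCH_PORTS else "needs a switched fabric"
    print(f"{n:2d} GPUs -> {mesh_links(n):3d} direct links ({verdict})")
```

    Even at 8 GPUs per server the mesh already wants 28 links, which is why the port-limited switch topology (and something like InfiniBand beyond the chassis) comes into play.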
     
    joobjoob and Lakados like this.
  21. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,662
    Joined:
    Feb 3, 2014
    The RTX cards use NVLink but I thought the consumer cards used SLI over NVLink.
     
  22. joobjoob

    joobjoob Gawd

    Messages:
    543
    Joined:
    Jun 29, 2004
    That's more than AMD paid for ATI.



    RIP ATI+Bioware. My favorite 2003 Canadian gaming duo. Fond memories of a more elegant time.
     
  23. NoOther

    NoOther [H]ardness Supreme

    Messages:
    6,477
    Joined:
    May 14, 2008
    Nvidia is not a monopoly in GPU computing; AMD and Intel also do GPU computing. Mellanox is not currently implemented in Nvidia products, unless you mean as a component of their current GPU servers. But then again, Mellanox is a component in most of the supercomputer clusters. There are other options out there besides Mellanox for InfiniBand, so there is no "forcing" others to buy from Nvidia, but it certainly will have an impact on those currently using Mellanox. They will either have to switch over to Intel solutions or go with whatever Nvidia ends up with.
     
  24. NoOther

    NoOther [H]ardness Supreme

    Messages:
    6,477
    Joined:
    May 14, 2008
    ? RTX are consumer cards... specifically the RTX 2080. The newer RTX line, and presumably everything going forward, uses NVLink. Older cards use SLI.

    Geforce RTX NVLink:

    "GeForce RTX NVLINK™ Bridge
    The GeForce RTX NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies. This means you can count on super-smooth gameplay at maximum resolutions with ultimate visual fidelity in GeForce RTX 2080 Ti and 2080 graphics cards."

    EDIT: Just to be clear here, the new NVLink offers some advantages over SLI. First is speed. They are much faster than the old SLI interconnects. Second they allow for memory pooling. However, this requires the developer to actually incorporate that into their program. So current games designed for SLI will not take advantage of the memory pooling, however they should take advantage of the increased speed.
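    Taking the quoted "50X the transfer bandwidth" figure at face value, here is what that gap means for shuttling a frame between cards. (The ~1 GB/s legacy SLI bridge figure and the frame size are illustrative assumptions for the arithmetic, not official specs.)

```python
# Rough transfer-time comparison based on the "50X" claim quoted above.
# The ~1 GB/s legacy SLI bridge bandwidth is an assumed value for
# illustration, not an official spec.

LEGACY_SLI_GBPS = 1.0                 # assumed legacy SLI bridge bandwidth
NVLINK_GBPS = LEGACY_SLI_GBPS * 50.0  # per the quoted 50X claim

def transfer_ms(megabytes: float, gbps: float) -> float:
    """Milliseconds to move `megabytes` over a link of `gbps` GB/s."""
    # 1 GB/s == 1 MB/ms, so MB divided by (GB/s) comes out in ms.
    return megabytes / gbps

FRAME_MB = 3840 * 2160 * 4 / 1e6  # a 4K RGBA framebuffer, ~33 MB

print(f"legacy SLI: {transfer_ms(FRAME_MB, LEGACY_SLI_GBPS):5.1f} ms/frame")
print(f"NVLink:     {transfer_ms(FRAME_MB, NVLINK_GBPS):5.2f} ms/frame")
```

    Under these assumptions a 4K frame that would eat ~33 ms (two whole frames at 60 fps) on the old bridge moves in well under a millisecond over NVLink, which is why the faster link helps even without memory pooling.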
     
    Last edited: Mar 11, 2019
  25. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,662
    Joined:
    Feb 3, 2014
    I remember reading somewhere that the NVLink used in the RTX consumer cards was different than the NVLink used for the Titan and Quadro series; the pinout was different or there were more pins or something, hence the 10x price difference between the consumer and enterprise NVLink connectors. That article said that even though the RTX cards were using the NVLink connectors, they were running the SLI protocol over them, so there was still a master/slave relationship among the cards. Unless nVidia changed that between their announcement and launch, I don't know, or I could be completely wrong in my memory, but I can find a Linus video of him stating basically the same thing, so they must have released that info somewhere at some point.
     
  26. /dev/null

    /dev/null [H]ardForum Junkie

    Messages:
    14,202
    Joined:
    Mar 31, 2001
    Um, the ConnectX driver at least is already in the Linux kernel...
     
  27. NoOther

    NoOther [H]ardness Supreme

    Messages:
    6,477
    Joined:
    May 14, 2008
    So some of this is comparing apples to oranges. Consumer cards are not afforded all the same features as prosumer cards. There are some things, as far as actually doing compute, that the consumer cards will not be able to take advantage of, except perhaps for the Titan.

    What you are really asking about here is peering and memory pooling. NVLink for RTX does support that, but it is up to the developer to actually include this in their program, as I said. NVLink will also support games using SLI technology. The NVLink bridge itself actually has more bandwidth and can transfer faster than the old SLI bridges, so even if you do not use memory pooling, it should still give you better performance. You can take a look at a simple test by TechPowerUp and one by Puget Systems. NVLink is mainly a technology for GPU compute; Nvidia just takes advantage of some of its benefits in the new consumer cards to provide more options. But it is up to the developer to actually include those in their program.
     
    Lakados likes this.
  28. Fresch

    Fresch n00b

    Messages:
    41
    Joined:
    Mar 14, 2018
    Can I get 3dfx back?
     
  29. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,662
    Joined:
    Feb 3, 2014
    Good clarification, I was under the mistaken interpretation then that they were just doing the same old SLI protocol over the newer links.
     
  30. zehoo

    zehoo Limp Gawd

    Messages:
    253
    Joined:
    Aug 22, 2004
    The purchase makes sense. Nvidia needs to diversify its product lineup a bit more to expand, and some of Mellanox's technology will go well together with Nvidia's.

    Sucks to be Intel getting outbid by Nvidia.
     
  31. NoOther

    NoOther [H]ardness Supreme

    Messages:
    6,477
    Joined:
    May 14, 2008
    Personally, I am actually glad they were outbid. Intel already has their own InfiniBand technology. If they had won the bid, they would just be consolidating technology and offering less competition.
     
    renz496 and zehoo like this.
  32. pclov3r

    pclov3r Limp Gawd

    Messages:
    440
    Joined:
    Feb 14, 2010
    Depends on how you want to define monopoly. If you look at GPU-accelerated applications, many will require CUDA and few will support OpenCL, and the ones that do typically show bad performance for various reasons. There is Vulkan Compute, which isn't used much at all currently. Perhaps Intel can change this down the road.

    And my point is that in their own products, such as the DGX-1 which supports InfiniBand, they could implement it in some shitty way that forces equipment manufacturers to buy chips from them to have that equipment supported when it comes to switches and such. Pretty much to lock out other vendors.

    Vendor lock-in is all too common in the enterprise segment to stifle competition.

    For consumer GPUs and markets this is meaningless and most wont care anyhow.
     
    Last edited: Mar 11, 2019
  33. STEM

    STEM Gawd

    Messages:
    581
    Joined:
    Jun 7, 2007
    Intel's bid enticed NVIDIA to overpay by about a billion dollars for a company that Intel didn't need to begin with. Cool beans...

    It's an acquisition meant to bolster NVIDIA's stock. I wouldn't be surprised if they sell Mellanox down the road, after they get all the IP out of them that they possibly can without damaging that company's value too much in the process. This isn't an acquisition like Ageia or 3dfx. Those were small potatoes compared to this.
     
    Last edited: Mar 11, 2019
  34. knowom

    knowom Limp Gawd

    Messages:
    424
    Joined:
    Aug 15, 2008
    Damn, they paid more for Mellanox than AMD did for ATI. That seems nuts from a context standpoint, though with cloud computing it's probably rather lucrative today, plus it plays into Nvidia's ambitions in that area, I suppose. Still, that's a hefty price tag, but they've got tons of cash anyway, so it shouldn't financially burden them the way it did AMD.
     
  35. STEM

    STEM Gawd

    Messages:
    581
    Joined:
    Jun 7, 2007
    Now, if only NVIDIA could somehow get their hands on an x86 license, if only... I wonder what kind of approach they would take to CPU design. They have datacenter-class GPUs and now high-end networking equipment in their portfolio; it seems like CPUs are the last piece of the puzzle that's missing, and then they would practically own the datacenter market.
     
  36. Mode13

    Mode13 Gawd

    Messages:
    653
    Joined:
    Jun 11, 2018
    Haha, 6.9

    over and out
     
  37. 5150Joker

    5150Joker 2[H]4U

    Messages:
    3,146
    Joined:
    Aug 1, 2005
    If ARM ever takes off in the server world we'll probably find out.