Nvidia B100 and GB200 enter the supply chain certification stage

Lakados

[H]F Junkie
Joined
Feb 3, 2014
Messages
10,558
https://wccftech.com/nvidia-next-ge...-certification-stage-foxconn-wistron-in-race/

Basically, they are now in a stage where they can start verifying that the conceptual designs actually work for real-world manufacturing processes and that shipping the various components around for assembly isn't breaking anything along the way.

These chips look like they will undergo final packaging and assembly in NA, and not in China as they have before.
 
All the talk about yet-to-be-released products getting close to the H100 could start looking unimpressive fast, and if they hit that mid-2024 launch and follow it with a 2025 upgrade, using their giant current revenues to fund it, it could be really hard for AMD and Intel to keep up. They seem to want to quadruple everything from 2023 to 2025...

That 2024 B40 release will maybe be more interesting for us either way; whether it is the luxury version that shares the same die as the high-end gaming parts to come, or whether they decide to split that product line, both would be relevant.
 
So AI compute is so hot that Nvidia doesn't even have to wait for the competition to get near before they innovate/release faster stuff, simply due to customer demand? Customers say screw the upgrade cycle/depreciation, we need more now?
 
Basically, Nvidia’s only real competitor is themselves. The TSMC packaging and production facilities for the H100 are backlogged, and performing final packaging and assembly of a product in a country where they are not legally allowed to sell it seems like a guaranteed way to have your shit stolen right off the assembly line.
So if they have to rearrange their supply chain and move to a different node to have a hope in hell of meeting demand, then why do all that work for a product that would normally have less than a year left on it?
 
Nvidia’s only real competitor is themselves
Among direct competitors, it feels like they could achieve that (the product needs to be so much better than the H100 to be able to sell, being the only option, and they release so fast to keep their own ridiculously high quarterly numbers up, not to beat others).

But being so much better than the internal custom solutions of the giants (Facebook, Apple, Amazon, Tesla, Microsoft, Google, etc.) that they need to go, in big or small part, with some Nvidia solution to stay competitive among themselves is probably also a form of competition. They compete with internal custom solutions, and I imagine a general solution needs to be quite a bit better to beat an inferior but tailor-made-for-the-need solution.
 
Yeah, internally some of them don't need to actually compete, it just needs to work better. Meta, Google, Microsoft, etc. have internal silicon running custom LLMs designed specifically for that silicon, and their AI solutions are basically single-purpose, so that work is done already and the hardware R&D is already there; unless Nvidia can get their ecosystem to a place where it no longer makes financial sense to continue those internal solutions, the big guys will keep doing it.
That said, for any new players who don't have their own solution already in place, what are their options? Use an incomplete development environment on inferior hardware that is subject to change and costs roughly the same, and in some cases more, or go Nvidia with CUDA, which is a proven solution with a long track record of success, industry-leading features and support, and the fastest hardware.
Because remember, as expensive as the hardware Nvidia sells is, it's the cheap part; paying the developers to do their jobs and actually utilize that hardware is the expensive bit, and god, the electricity bill...
Jensen was not at all wrong with the "the more you buy" tagline, as cringy and meme-worthy as it was.
I mean, you can take the best AMD or Intel AI solution, and after you add in developer costs and incidentals, it often comes out costing more than the equivalent Nvidia hardware. The trade-off is that you are now locked into the Nvidia ecosystem, which, depending on how you view it, is or isn't a deal breaker, but it is something that needs serious consideration.
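To make the lock-in point a bit more concrete, here's a minimal toy sketch (my own illustration, nothing from the article): even a trivial vector add written against the CUDA runtime API depends on cudaMalloc, cudaMemcpy and the <<<>>> launch syntax, all of which only exist on Nvidia's stack, so every line of real application code your developers write this way has to be ported before it can run anywhere else.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial element-wise add: each thread handles one element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers -- these calls are CUDA-runtime specific
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // The <<<grid, block>>> launch syntax only compiles with nvcc for Nvidia GPUs
    const int block = 256, grid = (n + block - 1) / block;
    vecAdd<<<grid, block>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The kernel is ten lines; the rest is Nvidia-specific plumbing, and in a real codebase that plumbing is the part you end up paying developers to write and maintain.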
 
Obviously Meta/Google/Microsoft are huge, but are they huge enough to match the compute of, say, H100s? Isn't Microsoft using Nvidia for their efforts, well, with GPT-4 anyhow? I know these guys have custom AI compute, but isn't it VERY specialized and task-specific?
 
Yeah, from what I recall on the subject, Meta's internal LLM is specifically aimed at their algorithms for content feeds and for sorting and aggregating user data. It's doing the job well enough and fast enough that their hardware and language have not become a bottleneck, so there is no need for them to migrate away; similar with Google.
I have no clue what Microsoft does, and half the time when I do, I find myself asking why more often than not, so I can honestly say I don't know what they are using their own in-house AI stuff for at this stage, only that they are, or were, working on it for... reasons?
However, if they could develop a platform-agnostic suite to rival CUDA, that would be a game changer, and worth billions unto itself.
 