GeForce Partner Program Impacts Consumer Choice

^It's been talked about in another thread around here, but basically this disproves GPP in this particular case.
One brand of video card disproves GPP? So you are saying that everything that I have documented and all the people I have interviewed are liars?
 
Somebody's shit is getting pushed in tonight.

And

I don't want to be here to watch it when it happens.


Kyle Owns

HardOCP
 
Good news, Kyle, I am seeing it reported that Nvidia is ENDING GPP! https://www.tomshardware.com/news/nvidia-ends-geforce-partner-program,37008.html
Same here. It's ending.

https://hothardware.com/news/nvidia-ends-geforce-partner-program



"The rumors, conjecture and mistruths go far beyond its intent," NVIDIA writes in a new blog post. "Rather than battling misinformation, we have decided to cancel the program."

With GPP, we asked our partners to brand their products in a way that would be crystal clear. The choice of GPU greatly defines a gaming platform. So, the GPU brand should be clearly transparent – no substitute GPUs hidden behind a pile of techno-jargon.

Most partners agreed. They own their brands and GPP didn’t change that. They decide how they want to convey their product promise to gamers. Still, today we are pulling the plug on GPP to avoid any distraction from the super exciting work we’re doing to bring amazing advances to PC gaming.
 
Their statement is bullshit... I have no problem with them dictating BRANDING. I couldn't care less if my card is labeled "ROG" or "AREZ". What I did take issue with was allocation and pricing penalties for those who didn't play ball!
 
Dictating the terms of independent branding is a big deal, though. Good brand names are expensive to develop, and for nVidia to swoop in and want to dictate terms on those brands couldn't have been anything OEMs and AIBs wanted. If there really were a problem with consumers getting confused over branding and not knowing what they were buying, the OEMs and AIBs would have addressed it themselves; that kind of confusion doesn't serve them well because it leads to returns.
 
It should never hit the front page because it is just a bunch of lies from a bunch of liars. novideo isn't going to drop GPP; they're just renaming it, making some tweaks, and rebranding it to avoid litigation.
 
I call BS on that damage-control-oriented, too-little-too-late, egg-on-their-face excuse NVIDIA made for canning GPP. This won't be the last of GPP, or whatever iteration of it they come up with next.
 
I would suggest to you that HH is doing nothing but putting on a pretty face for NVIDIA.
 
As plausible as this could be, if it came from an actual company that would be one thing, but from a startup whose primary goal seems to be to get enough interest and media attention that either AMD or Nvidia purchases them?

But Ford, GM, and Toyota have been accused of doing this to dealers who upset them, as have Intel and AMD. I may have run afoul of HP at one point in my career and had to wait a few extra months when there was a sudden supply constraint and allocation issue that they were somewhat apologetic for…
Pretty sure it's how the top of the food chain functions, since they can't offer bulk-order discounts anymore.
 
Old report, but still relevant:

It’s no secret that Nvidia is taking advantage of the massive demand for GPUs by using it to upsell and cross sell customers. Many sources in the supply chain tell us Nvidia is giving preferential allocation to firms based on a number of factors, including but not limited to: multi-sourcing plans, plans to make their own AI chips, buying Nvidia’s DGX, NICs, switches, and/or optics. We detailed this in the Amazon Cloud Crisis report from March.

Look how intoxicated the supply chain is on L40 and L40S GPUs. We wrote about this here, but since then we have heard more about Nvidia's allocation shenanigans.

For those OEMs to win larger H100 allocation, Nvidia is pushing the L40S. Those OEMs face pressure to buy more L40S, and in turn receive better allocations of H100. This is the same game Nvidia played in PC space where laptop makers and AIB partners had to buy larger volumes of G106/G107 (mid-range and low-end GPUs) to get good allocations for the more scarce, higher margin G102/G104 (high-end and flagship GPUs).

Many in the Taiwan supply chain are being fed the narrative that the L40S is better than an A100 due to its higher FLOPS. To be clear, those GPUs are not good for LLM inference because they have less than half the memory bandwidth of the A100 and no NVLink. This means running LLMs on them with good TCO is nigh on impossible save for very small models. High batch sizes yield unacceptable tokens/second/user, making the theoretical FLOPS useless in practice for LLMs.

OEMs are also being pressured to support Nvidia’s MGX modular server design platform. This effectively takes all the hard work out of designing a server, but at the same time, commoditizes it, creating more competition and driving down margin for the OEM. Firms like Dell, HPE, and Lenovo are obviously resistant to MGX, but the lower cost firms in Taiwan such as SuperMicro, Quanta, Asus, Gigabyte, Pegatron, and ASRock are rushing to fill in that void and commoditize low cost “enterprise AI”.

Conveniently, these OEMs/ODMs participating in L40S and MGX hype games also get much better allocations of Nvidia’s mainline GPU products.
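Rough numbers on that memory-bandwidth point, for anyone wondering why FLOPS don't save the L40S here. This is a minimal back-of-the-envelope sketch in Python; the bandwidth and model-size figures are approximate assumptions of mine, not numbers from the report.

```python
# Back-of-the-envelope: memory-bandwidth-bound LLM decode throughput.
# All figures below are illustrative assumptions, not vendor-verified specs.

def decode_tokens_per_sec(mem_bw_gb_s: float, model_bytes_gb: float) -> float:
    """Upper bound on single-stream decode rate when every generated token
    has to stream the full set of weights out of GPU memory."""
    return mem_bw_gb_s / model_bytes_gb

# Assumed: A100 80GB ~2,000 GB/s HBM, L40S ~864 GB/s GDDR6.
# Assumed model: ~13B parameters in FP16, roughly 26 GB of weights,
# small enough to fit on a single card of either type.
model_gb = 26.0
for name, bw_gb_s in [("A100 (assumed ~2000 GB/s)", 2000.0),
                      ("L40S (assumed ~864 GB/s)", 864.0)]:
    print(f"{name}: <= {decode_tokens_per_sec(bw_gb_s, model_gb):.0f} tokens/s per stream")

# Peak TFLOPS never enters this bound: at low batch sizes decode is
# dominated by reading weights, so roughly half the bandwidth means
# roughly half the tokens/s. Cranking the batch size up recovers FLOPS
# utilization but drags per-user tokens/s down, which is the report's
# point about unacceptable tokens/second/user.
```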


Old report (but still relevant now)

Ever since the early days of Nvidia, Jensen has been aggressive with his supply chain in order to fuel Nvidia’s massive growth ambitions.

Nvidia has bought up the majority of TSMC's CoWoS supply. They didn't stop there; they also went out and investigated and bought up Amkor's capacity. We detailed this here.

Supply Chain Mastery – Jensen Always Bets Big

One thing we really respect about Nvidia is that they are masters of supply chain. They have shown multiple times in the past that they can creatively ramp their supply during shortages.

Nvidia has secured immense supply by being willing to commit to non-cancellable orders or even make prepays. Nvidia has $11.15 billion of purchase commitments, capacity obligations, and obligations of inventory. Nvidia also has an additional $3.81 billion of prepaid supply agreements. No other vendor comes even close, and so they won’t be able to partake in the frenzy that is occurring.

https://www.semianalysis.com/p/nvidias-plans-to-crush-competition
 
But at the same time, is it unusual for a company of any size, big or small, to offer preferential treatment to its larger, more diverse clients?
The issue is that it's no longer a matter of big versus small, but of huge and mega companies operating as functional monopolies even if they aren't legal monopolies.

Nvidia's AI business is a functional monopoly: nobody else can do what they do, or is yet doing it. But they have plenty of incoming or potential competition, so they aren't technically a monopoly, and they aren't yet doing anything that would be legally anti-competitive, even if it is overly aggressive.

Jensen paid attention to all the crap Intel pulled, and has been on the receiving end of it more than once, so he's making sure to color inside the lines, just really aggressively.
 
It’s no secret that Nvidia is taking advantage of the massive demand for GPUs by using it to upsell and cross sell customers. Many sources in the supply chain tell us Nvidia is giving preferential allocation to firms based on a number of factors.

since then we have heard more about Nvidia’s allocation shenanigans.
That's just business. You give preferential treatment to your best customers. Every company does that.

I'm not sure that is the same as what GPP was. GPP was Nvidia not wanting their GPUs sold under the same brand names as AMD GPUs, because doing so helped their competitor. E.g., if ASUS sells Nvidia GPUs under the ASUS STRIX brand, Nvidia did not want there to also be AMD GPUs branded as STRIX. They wanted brand separation, because the brand would gain a good rep based on Nvidia chips and then, years later, turn around and help sell AMD chips. I think the negative reaction to that was overblown. The negative press caused it to stop being a thing, so there was no agreement to sign (also a big point of contention). But treating your good customers preferentially, I don't think that's the same as what GPP was supposed to be or purported to be (whatever the truth exactly is/was).
 
As I recall, GPP required that AIBs' "gaming" lines be Nvidia only. That goes beyond "brand separation".
 
It required that AMD be removed from their existing gaming brands; AMD products could be set up under different gaming brands.

It should be noted that while the GPP program is "dead and gone," some AIBs still moved AMD products to a new branding lineup.

But let's face it: for most AIBs, NVIDIA did and still does outsell AMD 10 to 1, so moving them to a different branding would have been a major blow.
 
Just as a note, the AI series gets all the fanfare for being the hottest shit since Jesus bought a glass-bottom boat, but the L-series cards are no slouch at workstation jobs and virtual desktop work.

With all the loads being blown over AI, it's easy to forget that other types of jobs exist. There was a stretch of time when the bigger workstation OEMs were prioritizing AI customers at the expense of traditional drafting, engineering, and visual work, because the AI cards had bigger margins while the rest of the components, which were also facing constraints, didn't. So they made more profit moving H100-based systems than L40-based ones, because the memory and CPUs bundled with them were often identical.
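To put some made-up numbers on that margin point: a quick sketch of why an H100 box was worth more to an OEM than an L40 box when the rest of the parts list was the same. Every price and cost below is a hypothetical placeholder, not anything from this thread.

```python
# Illustrative margin math: if the non-GPU bill of materials is roughly
# identical, the system with the higher-margin GPU gets the OEM's
# constrained CPUs and memory. All figures are made-up assumptions.

def system_profit(gpu_cost: float, gpu_price: float,
                  rest_cost: float, rest_price: float) -> float:
    """Profit on one system = GPU margin + margin on the rest of the BOM."""
    return (gpu_price - gpu_cost) + (rest_price - rest_cost)

# Same hypothetical chassis/CPU/memory either way.
rest_cost, rest_price = 15_000.0, 18_000.0

h100_box = system_profit(gpu_cost=22_000, gpu_price=30_000,
                         rest_cost=rest_cost, rest_price=rest_price)
l40_box = system_profit(gpu_cost=7_000, gpu_price=8_500,
                        rest_cost=rest_cost, rest_price=rest_price)

print(f"H100-based system profit: ${h100_box:,.0f}")  # $11,000
print(f"L40-based system profit:  ${l40_box:,.0f}")   # $4,500

# With identical, constrained CPUs and memory in both boxes, the OEM
# ships them wherever the GPU margin is biggest -- so AI customers got
# priority and workstation buyers waited.
```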
 