RTX 5xxx / RX 8xxx speculation

At this point, I'm all for some competition. Nvidia has held a GPU monopoly for far too long!
 
I think most here echo my sentiments that Nvidia has had very little competition and that lack of rivalry leads to development stagnation and high prices. I used the term "monopoly" loosely, but it fits for me since there hasn't been an attractive contender for my dollars since I owned a 3dfx Voodoo card in 1998. Sure, I've owned a couple Radeon cards since then, but I've always gone back to Nvidia because they offer the best performance. If there's a chance that Intel can put some pressure on them, I'm here for it!
 
Wccftech’s report also points to a more powerful RTX 5080 than expected. The unannounced GPU will reportedly have a 32Gbps memory speed instead of the rumored 28Gbps and a maximum bandwidth of 1024GB/s instead of the 896GB/s that was rumored.
That would be 4090 bandwidth (just a little more) in the xx80. It could make this generation's 256-bit "ok" and not limiting, if the performance is 4090-ish.
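For anyone who wants to check the rumored numbers, peak GPU memory bandwidth is just per-pin data rate times bus width over 8. A quick sketch (the 4090 comparison assumes its usual 21 Gbps GDDR6X on a 384-bit bus):

```python
# Peak memory bandwidth: per-pin data rate (Gb/s) x bus width (bits) / 8 bits per byte.

def mem_bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# Rumored 5080: 32 Gbps GDDR7 on a 256-bit bus
print(mem_bandwidth_gbps(32, 256))   # 1024.0 GB/s
# Earlier rumor: 28 Gbps on the same 256-bit bus
print(mem_bandwidth_gbps(28, 256))   # 896.0 GB/s
# 4090 for reference: 21 Gbps GDDR6X on a 384-bit bus
print(mem_bandwidth_gbps(21, 384))   # 1008.0 GB/s
```

So the 32 Gbps rumor would indeed put a 256-bit 5080 just above the 4090's 1008 GB/s.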

And that would be a nice upgrade for the 5070 as well; 4080 bandwidth for the xx70 series could be enough to keep it from being dragged down.

Last gen's total stagnation in memory bandwidth could make this gen's upgrade "free" for Nvidia outside the 5090, if they really push 512-bit (32 Gbps at 384 or 448 bits would already be quite the jump).

There are also rumors that the 5080 (5090 mobile) will have 24GB of VRAM, probably waiting on the 3GB RAM chips being ready at launch; maybe, if the 5080 won't be ready for mobile right away, they want a 5070 ready so they at least have a 5080M/5070M.

Wccftech says Nvidia’s CES keynote will also focus on next-generation AI technologies for gaming with a “major surprise” expected.
I feel this could be a trap for Nvidia; it would be a big disappointment for it not to have some DLSS 4 or some other buzzword new tech, like an in-VRAM AI texture system or some other asset generation, or dynamic frame generation with up to 3 generated frames per rendered frame that lets people use 240Hz monitors with their new open-spec G-Sync hardware, but only kicks in when the frame rate drops rather than when it's naturally high enough, with a lesser latency hit thanks to a better Reflex chain or something.

That could force them to jump the shark, as they need a "surprise" announcement every time; expecting not just a surprise but a major surprise is a dangerous place to be. But at least it creates a desperate push for innovation in a world where transistor-density gains per dollar don't seem to be happening anytime soon.
 
I think most here echo my sentiments that Nvidia has had very little competition and that lack of rivalry leads to development stagnation and high prices. I used the term "monopoly" loosely, but it fits for me since there hasn't been an attractive contender for my dollars since I owned a 3dfx Voodoo card in 1998. Sure, I've owned a couple Radeon cards since then, but I've always gone back to Nvidia because they offer the best performance. If there's a chance that Intel can put some pressure on them, I'm here for it!

Sounds like Nvidia is a victim of their own success then. It's not their fault they simply make a better product than the competition.
 
I think most here echo my sentiments that Nvidia has had very little competition and that lack of rivalry leads to development stagnation and high prices. I used the term "monopoly" loosely, but it fits for me since there hasn't been an attractive contender for my dollars since I owned a 3dfx Voodoo card in 1998. Sure, I've owned a couple Radeon cards since then, but I've always gone back to Nvidia because they offer the best performance. If there's a chance that Intel can put some pressure on them, I'm here for it!
What development stagnation? Seriously, NV has pushed graphics forward unlike anyone else in the industry. It's their fault that they are the undisputed leader in graphics, I guess...
 
What development stagnation? Seriously, NV has pushed graphics forward unlike anyone else in the industry. It's their fault that they are the undisputed leader in graphics, I guess...

Forreal. Nvidia was always the first to come up with neat things, and while some ended up being mostly gimmicks, others are complete game changers like VRR, AI upscaling, and ray tracing.
 
What development stagnation? Seriously, NV has pushed graphics forward unlike anyone else in the industry. It's their fault that they are the undisputed leader in graphics, I guess...
The R&D budget going into GPUs on their part is indeed giant.

What else progresses at even half the speed GPUs did in the last decade (especially with the software making them easy to use)?

A 4090 has 255% the performance of the larger (754mm²) 2018 2080 Ti:
https://www.techpowerup.com/review/gigabyte-geforce-rtx-4080-super-gaming-oc/32.html

That's better than a 17%-a-year improvement. AI upscaling went from game-specific and unusable to generalized training and almost a must during that time, and Nvidia's cooling design and tech alone look like they could have had more R&D going into them than most niche industries get as a whole.
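The ~17%-a-year figure checks out if you annualize the 2.55x ratio. A quick sketch, assuming the 2080 Ti launched in late 2018 and counting either to the review date (~6 years) or to the 4090's 2022 launch (~4 years):

```python
# Compound annual growth rate: total performance ratio annualized over N years.

def cagr(ratio: float, years: float) -> float:
    """Annualized growth as a fraction (0.17 == 17%/yr)."""
    return ratio ** (1 / years) - 1

ratio = 2.55  # 4090 at 255% of 2080 Ti performance per the TechPowerUp chart
print(f"{cagr(ratio, 6):.1%}")  # 2018 -> 2024: ~16.9%/yr
print(f"{cagr(ratio, 4):.1%}")  # launch to launch, 2018 -> 2022: ~26.4%/yr
```

Measured launch-to-launch it's actually well above 17% a year; spread over the full six years it lands right around that figure.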

They still have competition from the big AAA game consoles, and obviously a need to push tech so your iGPU/APU never becomes enough for gamers.

The performance rumors for the 5080-5090 are a bit hard to believe. But if, without a generous new node improvement, they made something with a 4080's power, die size, and bus that goes 5-10% faster than a 4090, that shows really good improvement and not stagnation, partly on Micron's RAM side but on Nvidia's as well. +40%? We are over the moon when CPUs do something like that.

To take just one example, there are rumors that Blackwell will use a 3D PCB with 3 different layers, making a lot of surface area for cooling and density. I am not even sure what that means, but it sounds like complicated innovation.
 
I'm still having a hard time believing the 5090 really will be 512-bit 32GB. Just seems so crazy when it doesn't need to be. They could sell a 448-bit version for whatever they plan to charge for this 512-bit version and it would sell exactly the same. Hell, even a 384-bit version would.

I can't remember the last time Nvidia went right out the gate with a new arch, a new node, and a full-fat core chip as the top gaming offering on day 1. They haven't really had to since AMD bought ATI.

These chips are going to be very expensive to make, so why not roll out with a cheaper-to-make, slightly cut-down version, sell it for the same in the face of ZERO competition, and make more profit? At the very least you could release both, with the top-tier part coming later as a 6-month refresh.
 
I can't remember the last time Nvidia went right out the gate with a new arch, a new node, and a full-fat core chip as the top gaming offering on day 1. They haven't really had to since AMD bought ATI.
Rumors are significantly below the full chip too (which could be a giant ~750mm² affair, considering its core count versus the rumored die size and core count of the 5070-5080); seems like the exact 4090 game plan.

21,760 of the 24,576 GB202 cores are expected to be enabled, or 88.5% (88.9% for the 4090); those dies will in part be ones that missed the bin for the B40/RTX 6000 card.
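The enabled-core percentages are easy to verify from the rumored counts (4090 figures are the known AD102 spec):

```python
# Fraction of the full die's cores left enabled on the shipping gaming SKU.

def enabled_fraction(active: int, total: int) -> float:
    return active / total

# Rumored 5090 on GB202 vs. the 4090 on AD102 (active cores / full-die cores)
print(f"{enabled_fraction(21_760, 24_576):.1%}")  # 88.5%
print(f"{enabled_fraction(16_384, 18_432):.1%}")  # 88.9%
```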

They could sell a 448-bit version for whatever they plan to charge for this 512-bit version and it would sell exactly the same.
Especially with the GDDR7 upgrade already giving a nice bandwidth boost. But I'm not sure about "exactly the same," considering the importance of bandwidth for AI tasks ;). 32GB/512-bit could make it quite the AI card, and that 512-bit memory controller could have been a decision made for the B40/RTX 6000 and will already be there, the question being whether it's disabled to make a cheaper PCB.

One possible reason: going Samsung 8 -> TSMC 5 warped expectations, and now they need to up the specs à la Turing to meet them. Another is that the price point people are ready to pay created a new class of GPU up the stack. And a giant performance gain on the halo product makes everything else around a launch easier to accept, vibe-wise.

If the rumors end up true, maybe it is also to have a nice "round" 32GB of VRAM instead of 28, which looks better on the box.
 
These cards are going to get the usual "oh there is short supply and supply chain issues"

You know it's going to happen. Cards will be released. Scalpers will buy all of them and they will go for 3k plus even more.

Gonna be the same crap all over again. You just know they are chomping at the bit to release the supply chain horseshit again. The first 6 months these will be flying unicorn elephants.

Good luck.
 
These cards are going to get the usual "oh there is short supply and supply chain issues"

You know it's going to happen. Cards will be released. Scalpers will buy all of them and they will go for 3k plus even more.

Gonna be the same crap all over again. You just know they are chomping at the bit to release the supply chain horseshit again. The first 6 months these will be flying unicorn elephants.

Good luck.
I doubt scalpers will be needed to sell these at $3000; Nvidia will do that themselves. A $2500 MSRP means that Asus will make their watercooled, super-ultra-badass, overclocked (by 10MHz), dick-sucking, big-tiddied RGB mega edition for like $3599.
 
I just think there is magically going to be a shortage of cards again, whatever way it happens. My bet is NV will say there is a shortage.
 
What does Nvidia possibly have to gain by not making just as many cards as they can? It is not like they can sell them higher than MSRP later on...

If there is more demand than supply there will be a shortage; that's not just something people say, that's something that simply is.
 
What does Nvidia possibly have to gain by not making just as many cards as they can? It is not like they can sell them higher than MSRP later on...

If there is more demand than supply there will be a shortage; that's not just something people say.

It's just my opinion. Take it with a grain of salt. My opinion is there will be a shortage and supply issues. That is it. My opinion only.

A company the size of nvidia should be able to meet demand, no matter what people say. My opinion again. At their worth, there should be no shortage whatsoever considering how much money they have. My opinion.
 
It's just my opinion. Take it with a grain of salt. My opinion is there will be a shortage and supply issues. That is it. My opinion only.

A company the size of nvidia should be able to meet demand, no matter what people say. My opinion again. At their worth, there should be no shortage whatsoever considering how much money they have. My opinion.
I'm ready for Out of Stock for weeks.
 
It's just my opinion. Take it with a grain of salt. My opinion is there will be a shortage and supply issues. That is it. My opinion only.

A company the size of nvidia should be able to meet demand, no matter what people say. My opinion again. At their worth, there should be no shortage whatsoever considering how much money they have. My opinion.
They can't forecast exact demand, and don't want to end up with overstock.
 
They can't forecast exact demand, and don't want to end up with overstock.
They can forecast demand well enough to not end up with overstock. They are far from stupid. Regardless, overstock would be nice so that people could actually buy their shit at non-ass-rape prices.

I personally believe it's simply a cashgrab. Nothing more.

But regardless. We all have personal opinions about the matter.
 
What does Nvidia possibly have to gain by not making just as many cards as they can? It is not like they can sell them higher than MSRP later on...

If there is more demand than supply there will be a shortage; that's not just something people say, that's something that simply is.
Not really caring about the gaming market, and using the fab time to fill the backlog of AI cards that are back-ordered a year out.
 
It's just crazy high demand, IMO. That being said, in the EU only the 3000-series cards were hard to get and scalped; none of the other gens have suffered from that.

And the USA economy has been going extremely hot so it would make sense that a lot of people can easily afford even a 4090 there, way more so than in Europe, where the prices are also higher to begin with (due to taxes, import duties, conversion rates etc.).
 
I'm still having a hard time believing the 5090 really will be 512-bit 32GB. Just seems so crazy when it doesn't need to be. They could sell a 448-bit version for whatever they plan to charge for this 512-bit version and it would sell exactly the same. Hell, even a 384-bit version would.

I can't remember the last time Nvidia went right out the gate with a new arch, a new node, and a full-fat core chip as the top gaming offering on day 1. They haven't really had to since AMD bought ATI.

These chips are going to be very expensive to make, so why not roll out with a cheaper-to-make, slightly cut-down version, sell it for the same in the face of ZERO competition, and make more profit? At the very least you could release both, with the top-tier part coming later as a 6-month refresh.
384-bit = 24GB
448-bit = 28GB
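That mapping follows directly from how GDDR is wired up: each memory chip sits on its own 32-bit channel, so bus width fixes the chip count and the chip density fixes the capacity. A quick sketch (assuming one chip per channel, no clamshell, with today's 2GB GDDR7 chips and the 3GB chips mentioned earlier in the thread):

```python
# VRAM capacity from bus width: chips = bus_width / 32, capacity = chips x density.

def vram_gb(bus_width_bits: int, chip_density_gb: int = 2) -> int:
    """VRAM in GB with one GDDR chip per 32-bit channel (no clamshell)."""
    return bus_width_bits // 32 * chip_density_gb

for bus in (384, 448, 512):
    print(f"{bus}-bit -> {vram_gb(bus)} GB (2GB chips) or {vram_gb(bus, 3)} GB (3GB chips)")
# 384-bit -> 24 or 36 GB; 448-bit -> 28 or 42 GB; 512-bit -> 32 or 48 GB
```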

B100 and B200 MCM enterprise products based on Blackwell are already out on the market. Gaming Blackwell is monolithic, not MCM, and is launching 5-6 months after the enterprise products, so it's not "day 1."
Not really caring about the gaming market, and using the fab time to fill the backlog of AI cards that are back-ordered a year out.
The backorder of enterprise Blackwell is due to the issue with TSMC's CoWoS-L packaging for its MCM design that has been posted about many times here. Gaming Blackwell is monolithic and not being produced on the same fabs. Issues with one should not affect the other.
 
will have a package size of just 29 x 29 mm
Could the test jig be larger than a GPU die, covering the VRAM or other parts of the PCB? If they look like the Google image results for test jig blocks, they seem bigger than the die you put in them.

Wouldn't the pin count be 1137 if it is a 1137 BGA? Maybe the pin count and the 0.773 would tell us the die size; according to GPT, a 250mm² die max... to be expected with those values.
 
They can forecast demand well enough to not end up with overstock. They are far from stupid. Regardless, overstock would be nice so that people could actually buy their shit at non-ass-rape prices.

I personally believe it's simply a cashgrab. Nothing more.

But regardless. We all have personal opinions about the matter.
Buy the AMD competitor to the 5090...
 
Could the test jig be larger than a GPU die, covering the VRAM or other parts of the PCB? If they look like the Google image results for test jig blocks, they seem bigger than the die you put in them.

Wouldn't the pin count be 1137 if it is a 1137 BGA? Maybe the pin count and the 0.773 would tell us the die size; according to GPT, a 250mm² die max... to be expected with those values.
Lol. Riiiiight. "Just" an 841mm2 chip you say? :p Also, that chart says that's the number of pins, not dimensions :). I think.

Was going to say. The die in the 4090 is something like 24mm x 25mm.
Is there a difference between die size & package size?


Olrak claims these are package sizes, including 8-lane PCIe 5.0 and a 128-bit bus width.
 
Is there a difference between die size & package size?
Yes. The die is normally encased in a plastic or ceramic package. Think chip on a board or dimm here, not CPU package with a PCB and heat spreader.
 
NVIDIA-Pascal-GP104-vs-GM200-vs-GM204-GPU-Comparison.jpg


Is the package the whole gray or green square, and the GPU die the black one in the middle?
 
Mobile RDNA 4 Lineup

AMD Radeon RX 8000 “RDNA 4” Mobile GPUs Include 16, 12 & 8 GB Variants, Up To 175W TGPs​




https://x.com/Olrak29_/status/1855655299484643721

https://wccftech.com/amd-radeon-rx-8000-rdna-4-mobile-gpus-16-12-8-gb-variants-up-to-175w-tgp/

AMD Ryzen AI MAX 300 "Strix Halo" APU Lineup:​

SKU Name          | Architectures    | CPU Cores / Threads | GPU Cores             | TDP
Ryzen AI Max+ 395 | Zen 5 / RDNA 3.5 | 16 / 32             | 40 CUs (Radeon 8060S) | 55-130W
Ryzen AI Max 390  | Zen 5 / RDNA 3.5 | 12 / 24             | 40 CUs (Radeon 8060S) | 55-130W
Ryzen AI Max 385  | Zen 5 / RDNA 3.5 | 8 / 16              | 32 CUs (Radeon 8050S) | 55-130W
Ryzen AI Max 380  | Zen 5 / RDNA 3.5 | 6 / 12              | 16 CUs (Radeon 8XXXS) | 55-130W

https://wccftech.com/amd-mobile-cpu...alo-fire-range-krackan-radeon-rx-8000-series/

 
Strix Halo at 130W: if that's true, with rumored performance around a 4070, you might as well get a low-power 4070.
But if it's around or faster than a full-power 4070, then it might be of interest, especially for that unified RAM.
 
Strix Halo at 130w, if that's true and rumored performance around 4070, might as well get a low power 4070.
But if it's around or faster than 4070 (full power) then it might be of interest, especially for that unified ram.
What are they going to do about memory bandwidth for it? Desktop 4070 is 504GB/s, 7700XT is 432, and 7800XT is 624. DDR5-6000 dual channel is a mere 96GB/s. Sure, they could use faster RAM, so that's just an example, but they're going to need a lot more memory bandwidth than a typical dual-channel DDR5 setup can offer to match a 4070.
 
What are they going to do about memory bandwidth for it? Desktop 4070 is 504GB/s, 7700XT is 432, and 7800XT is 624. DDR5-6000 dual channel is a mere 96GB/s. Sure, they could use faster RAM, so that's just an example, but they're going to need a lot more memory bandwidth than a typical dual-channel DDR5 setup can offer to match a 4070.
Strix Halo is rumored to up the memory bus to 256-bit LPDDR5X 8533. That's still "only" good for 273GB/s though so I guess the cache is gonna be doing some heavy lifting (there's rumored to be additional cache for the iGPU, unclear if shared L4 or something separate)
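The system-RAM numbers in this exchange come from the same formula as GPU memory: transfer rate (MT/s) times bus width over 8 gives MB/s. A quick check of both figures:

```python
# Peak DDR/LPDDR bandwidth: transfers/s (MT/s) x bus width (bits) / 8, converted to GB/s.

def sys_bandwidth_gbs(mts: int, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s (decimal) for a DDR/LPDDR memory bus."""
    return mts * bus_width_bits / 8 / 1000

print(sys_bandwidth_gbs(6000, 128))   # dual-channel DDR5-6000: 96.0 GB/s
print(sys_bandwidth_gbs(8533, 256))   # rumored 256-bit LPDDR5X-8533: ~273.1 GB/s
```

So the rumored Strix Halo bus is nearly 3x a normal desktop dual-channel setup, but still a little over half a desktop 4070's 504 GB/s, hence the speculation about cache doing the heavy lifting.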
 
Strix Halo is rumored to up the memory bus to 256-bit LPDDR5X 8533. That's still "only" good for 273GB/s though so I guess the cache is gonna be doing some heavy lifting (there's rumored to be additional cache for the iGPU, unclear if shared L4 or something separate)
That could work. Might not quite catch a 4070, but would help a lot. That's kind of like what Intel did with Lunar Lake. 8533 in the SoC package to keep up with and often beat AMD's APUs. Intel does 8533 and AMD replies with twice as much. Wish they'd give us a 256-bit bus on desktop chips.
 
Seems there's no "D" version of the 5070 Ti, which means it will be slower than the 4090.
 