RTX 4xxx / RX 7xxx speculation

Holy, it's been a while since I've visited. Looking forward to seeing what information comes out for the 4xxx series along with AMD's new 7xxx series. Hearing some rumors from kopite? that they are no longer as amped up about the RDNA 7xxx series as before. Either way, exciting times ahead; not sure if I'll need to upgrade from my 3090!
 
I was about to get a 3090 Ti (running a 2070 Super), but I'm going to try to wait a bit longer for a 4090.
 
Interesting how the 4060 will have only 8GB VRAM,
The GTX 970 had 4 GB; the GTX 1060 had 6 GB.
The GTX 1070 had 8 GB; the RTX 2060 had 6 GB.
The RTX 2070 had 8 GB; the 3060 Ti had 8 GB (GDDR6).
The 3070 had 8 GB (non-X); the 4060 would have 8 GB (but now GDDR6X).

Seems pretty much in line; it's the 3060 with 12 GB that is the really strange one in the lineup, imo.

Having the xx60 with a lower RAM amount than the previous generation's xx70 is not really out of line; equal is quite good.

I think what's interesting is the 192-bit memory bus combined with the 8GB?
 
The GTX 970 had 4 GB; the GTX 1060 had 3 GB.
The GTX 1070 had 8 GB; the RTX 2060 had 6 GB.
The RTX 2070 had 8 GB; the 3060 Ti had 8 GB (GDDR6).
The 3070 had 8 GB (non-X); the 4060 would have 8 GB (but now GDDR6X).

Seems pretty much in line; it's the 3060 with 12 GB that is the really strange one in the lineup, imo.

Having the xx60 with a lower RAM amount than the previous generation's xx70 is not really out of line; equal is quite good.
That's true. It makes more sense that it has 8; it just seems like a downgrade to the 'masses' who saw the 3060 had 12 but don't know the other factors.
I think what's interesting is the 192-bit memory bus combined with the 8GB?
I think this is what they're getting at: lower RAM with a similar memory configuration. But the CUDA core count is higher by over 25%, so that would be a boost, as would the GDDR6X over the non-X on the 3060, which should definitely give it a considerable gain. There are also 36 SMs compared to 28 on the 3060.
I think the most grumble-worthy point is that it has a higher TGP, but it's almost negligible in my eyes.
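For reference, here's a rough back-of-the-envelope on those deltas. The 3060 numbers are its shipping specs; the 4060 figures (192-bit, GDDR6X, 36 SMs) are just the rumor above, and the 21 Gbps data rate is my own assumption for the GDDR6X speed, so treat the output accordingly.

```python
# Rough bandwidth/SM comparison based on the rumored 4060 specs above.
# The 21 Gbps GDDR6X figure is an assumption, not part of the leak.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * data rate."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_3060_bw = bandwidth_gb_s(192, 15.0)      # 3060: 192-bit GDDR6 @ 15 Gbps -> 360 GB/s
rumored_4060_bw = bandwidth_gb_s(192, 21.0)  # rumored 4060: 192-bit GDDR6X, assumed 21 Gbps -> 504 GB/s

print(f"3060: {rtx_3060_bw:.0f} GB/s, rumored 4060: {rumored_4060_bw:.0f} GB/s "
      f"(+{rumored_4060_bw / rtx_3060_bw - 1:.0%})")
print(f"SMs: 36 vs 28 -> +{36 / 28 - 1:.0%}")  # roughly +29%
```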
 
The GTX 970 had 4 GB; the GTX 1060 had 3 GB.
The GTX 1070 had 8 GB; the RTX 2060 had 6 GB.
The RTX 2070 had 8 GB; the 3060 Ti had 8 GB (GDDR6).
The 3070 had 8 GB (non-X); the 4060 would have 8 GB (but now GDDR6X).

Seems pretty much in line; it's the 3060 with 12 GB that is the really strange one in the lineup, imo.

Having the xx60 with a lower RAM amount than the previous generation's xx70 is not really out of line; equal is quite good.

I think what's interesting is the 192-bit memory bus combined with the 8GB?
The 1060 had 3 and 6GB variants, with the 6GB ones being significantly more popular since they could also double as efficient mining cards.
 
I haven't done the math, but isn't it unlikely for a 192-bit memory bus card to have 8GB of memory? That was the whole reason the 3060 Ti had 8GB on a 256-bit bus while the 3060 had 12GB on a 192-bit one. 6GB would have been too little, so they bumped it to the next capacity that would work.

Either the memory amount or the bus width is wrong on the 4060.
 
I haven't done the math, but isn't it unlikely for a 192-bit memory bus card to have 8GB of memory? That was the whole reason the 3060 Ti had 8GB on a 256-bit bus while the 3060 had 12GB on a 192-bit one. 6GB would have been too little, so they bumped it to the next capacity that would work.

Either the memory amount or the bus width is wrong on the 4060.
Not necessarily. It is low for a higher end card but if the memory clock is high enough, it will balance out some.
 
Not necessarily. It is low for a higher end card but if the memory clock is high enough, it will balance out some.
How so? Doesn't the capacity on a 192-bit bus have to be a multiple of 3 to work properly? The leak says 8GB at 192 bits, which would be a very odd configuration. It would either be 8GB at 256 bits or something like 6 or 12GB at 192.
 
The 1060 had 3 and 6GB variants, with the 6GB ones being significantly more popular since they could also double as efficient mining cards.
6GB was also more popular because it wasn't castrated by Nvidia's deceptive marketing department.
 
How so? Doesn't the capacity on a 192-bit bus have to be a multiple of 3 to work properly? The leak says 8GB at 192 bits, which would be a very odd configuration. It would either be 8GB at 256 bits or something like 6 or 12GB at 192.
It does not. Generally, that would be the optimal case, as it allows each 32 bits of memory bus to connect to one package of GDDR (either one or two GB of VRAM). A card that does not follow this will access some of its memory at partial bandwidth. In the case of the 4060, 8 GB would be split as follows: 4 GB would run on their own 32-bit lanes, and the remaining 4 GB would run with 2 GB per 32-bit lane. GDDR has something called clamshell mode, which allows two packages to connect to one channel, doubling the memory. It is less efficient than if you had 8 GB on 256 bits, so it certainly isn't optimal, but it is a viable option to raise the performance of the card somewhat, rather than being forced to limit it to 6 GB of VRAM, which would be a significant step down.
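To make the arithmetic concrete, here's a small sketch of the layout described above. The even 6/12 GB options are just the standard one-package-per-channel math; the 4x1 GB + 2x2 GB split for a hypothetical 8 GB card is my reading of the description above, not a confirmed 4060 configuration.

```python
# Sketch of VRAM capacity options on a 192-bit bus (six 32-bit channels).
# The asymmetric 8 GB split below is an assumption based on the post above.

BUS_WIDTH_BITS = 192
CHANNEL_BITS = 32
channels = BUS_WIDTH_BITS // CHANNEL_BITS            # 6 channels

# "Even" configs: one GDDR6/6X package per channel, 1 GB or 2 GB densities.
even_options_gb = [channels * density for density in (1, 2)]   # [6, 12]

# Asymmetric split for 8 GB: four channels with 1 GB each, two channels with
# 2 GB each (e.g. via clamshell, two packages sharing one channel).
split_gb = [1, 1, 1, 1, 2, 2]
total_gb = sum(split_gb)                              # 8 GB

# Roughly speaking, only 1 GB per channel can be interleaved across all six
# channels at full bandwidth; the extra 2 GB sit on just two channels.
full_speed_gb = channels * min(split_gb)              # 6 GB
reduced_speed_gb = total_gb - full_speed_gb           # 2 GB

print(f"{channels} channels, even options: {even_options_gb} GB")
print(f"asymmetric split {split_gb} -> {total_gb} GB total: "
      f"{full_speed_gb} GB full bandwidth + {reduced_speed_gb} GB reduced bandwidth")
```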
 
The 60 series has never been considered a higher-end card.
I wasn't referring specifically to the 60s with this in mind. I was thinking more about the RTX line as a whole, compared to the GTX 16xx cards or other less powerful lines. Maybe I'm looking at it a little too black and white.
 
It does not. Generally, this would be the optimal case as it allows each 32 bits of memory bus to connect to one package of GDDR (being either one or two GB VRAM). A card that does not follow this will access some of the memory at a partial bandwidth. In the case of the 4060, 8 GB would be split as such: 4 GB would run on their own 32-bit lanes, the remaining 4 would run with 2 GB per 32-bit lane. GDDR has something called clamshell mode, which allow two packages to connect to one channel, doubling memory. It is less efficient than if you had 8 GB on 256 bits, so it certainly isn't optimal, but it is a viable option to raise the performance of the card somewhat than being forced to limit it to 6 GB VRAM, which would be a significant step down.
Isn’t that what they did with the 970? It was hardly a good idea
 
Isn’t that what they did with the 970? It was hardly a good idea
No, the 970 was 256-bit and 4 GB (512 MB per 32-bit lane), ostensibly. Nvidia settled a class-action lawsuit over the 970 because 512 MB was separated from the main line, so the card effectively ran with 3.5 GB. I remember getting a $30 'consolation prize' because of that.
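The numbers work out like this (a quick sketch of the figures mentioned above; the 3.5 GB behavior is the widely reported full-speed segment, not an exact model of the 970's memory crossbar):

```python
# GTX 970 memory arithmetic, per the figures above.
BUS_WIDTH_BITS = 256
CHANNEL_BITS = 32
GB_PER_CHANNEL = 0.5                        # 512 MB per 32-bit lane

channels = BUS_WIDTH_BITS // CHANNEL_BITS   # 8 channels
advertised_gb = channels * GB_PER_CHANNEL   # 8 * 0.5 = 4 GB
slow_segment_gb = 0.5                       # the partition separated from the main line
full_speed_gb = advertised_gb - slow_segment_gb   # effectively 3.5 GB at full speed

print(f"{advertised_gb} GB advertised: {full_speed_gb} GB full speed + {slow_segment_gb} GB slow segment")
```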
 
Nvidia confirmed the 40 series for Q3/Q4 during their earnings call.

"As we expect some ongoing impact as we prepare for a new architectural transition later in the year, we are projecting Gaming revenue to decline sequentially in Q2."
 
No, the 970 was 256-bit and 4 GB (512 MB per 32-bit lane), ostensibly. Nvidia settled a class-action lawsuit over the 970 because 512 MB was separated from the main line, so the card effectively ran with 3.5 GB. I remember getting a $30 'consolation prize' because of that.

I got the same. And yet the card worked fine for me up until a few weeks ago when I upgraded. Memory quantity isn't the only factor to consider as we've seen time and time again.
 
I got the same. And yet the card worked fine for me up until a few weeks ago when I upgraded. Memory quantity isn't the only factor to consider as we've seen time and time again.
I fully agree. The 970 was a fantastic card for my build several years ago. Of course the 3060 I have now is a big step up, but for its time the 970 was an excellent mid-range card. I could play almost anything maxed out on 1080p with it.
 
But if prices remain consistent then new tech is just costing more.
Price drops on previous tech just mean new tech is that much better w/o costing that much more.
Some people get really obsessed with GPU resale value and try to time it just right. Sometimes they get burned, like selling 2080 Tis for $400 only to find RTX 3000 cards nonexistent, lol.
 
Some people get really obsessed with GPU resale value and try to time it just right. Sometimes they get burned, like selling 2080 Tis for $400 only to find RTX 3000 cards nonexistent, lol.
To be fair, that happened once. Time will tell if that becomes the norm moving forward
 
To be fair, that happened once. Time will tell if that becomes the norm moving forward

Tell me again how Nvidia/AMD will self-sabotage by selling their cards at twice the MSRP.
Next-gen is going to be a paper launch for the whole of 2022.
 
Interestingly, for me this was the cheapest generation of all time. I made over $10,000 from hobby mining, actual cash out, not counting sandbagged coins held for any future worth. Of course, that was using older cards as well.

I don't see this option for the next generation, using your card for other means to reduce the total cost. It could be a very expensive increase overall for me.
 
Tell me again how Nvidia/AMD will self-sabotage by selling their cards at twice the MSRP.
Next-gen is going to be a paper launch for the whole of 2022.
To tell you again, I'd have had to tell you at least once before.
 
Interesting how the 4060 will have only 8GB VRAM, but to be fair the 3060 Ti has only 8 yet it runs considerably better than the base 3060. So I wouldn't pan it yet.
I wonder as well if Nvidia is including it as more of an entry-level card in the RTX line, possibly eliminating the chance of a 4050 or something like that.
Well, that's because the 3060 Ti uses the same chip as the 3070, while the 3060 is a tier-below chip (GA104 vs GA106).
 
So I'm just looking at the rumored specs for the 4xxx series, and one thing that struck me is the significantly reduced memory bus. The 4080 is only 256-bit, and the 4070 has a paltry 192-bit bus. Is nV relying purely on memory speed to perform at higher resolution/IQ?

AMD tried to skimp on the memory bus to save costs on the 6xxx series (purporting there was some magic that would work around the smaller width), but those cards really suffer at higher resolutions.
 
Some people get really obsessed with GPU resale value and try to time it just right. Sometimes they get burned, like selling 2080 Tis for $400 only to find RTX 3000 cards nonexistent, lol.
My system is to sell a few months prior to the next launch (and use a backup card and/or my gaming laptop in the interim), thereby maximizing resale value. People who wait until a month or two before launch to sell normally get burned because, at that point, rumors are starting to solidify on the performance and price of the next gen, and those with current-gen cards panic. The situation with 2080 Tis selling for $500 a few weeks before the 3xxx series launch (cards which were extremely hard to get) is the prime example. It's just like any investment market: you need to not be greedy holding (and be willing to go a few months without assets).

I sold my remaining 3080 Ti last week for $1000+ and am using my old 980 Ti and 2080 laptop, patiently waiting for the new cards in August. Those holding onto their cards until July are going to see a massive drop in resale value, especially this round, since miners are timing to do the same. That said, it's going to be springtime for those in the market for used cards.
 
Every generation there are tons of bullfuck rumors and everyone thinks the next card is going to be something ludicrous like 75% faster, then the new cards come out and the high end one is 15% faster than the previous generation. Every. Single. Time.
 

Every generation there are tons of bullfuck rumors and everyone thinks the next card is going to be something ludicrous like 75% faster, then the new cards come out and the high end one is 15% faster than the previous generation. Every. Single. Time.
True. If you want to see some absurd return-of-Jesus-and-Buddha optimism (some members still here I won't call out), go look up the old Vega thread.

That said, I've seen a more tempered mentality in the last two launches as people have learned to dampen their expectations. Actually, most people (here at least) were somewhat surprised by how much of an improvement Ampere, and to a lesser extent RDNA 2, brought vs. their predecessors.

My guess is AMD and nVidia both know they don't need to blow minds to sell cards this round, so improvements will be lukewarm e.g. ~30% improvement per card tier. This middling improvement will be partially due to strategy (so they can further their agenda to release incrementally-improved models), and partially due to laziness/lack of motivation because there are only two players in the market.
 
True. If you want to see some absurd return-of-Jesus-and-Buddha optimism (some members still here I won't call out), go look up the old Vega thread.

That said, I've seen a more tempered mentality in the last two launches as people have learned to dampen their expectations. Actually, most people (here at least) were somewhat surprised by how much of an improvement Ampere, and to a lesser extent RDNA 2, brought vs. their predecessors.

My guess is AMD and nVidia both know they don't need to blow minds to sell cards this round, so improvements will be lukewarm e.g. ~30% improvement per card tier. This middling improvement will be partially due to strategy (so they can further their agenda to release incrementally-improved models), and partially due to laziness/lack of motivation because there are only two players in the market.

I think even 30% is being pretty optimistic.
 
I think even 30% is being pretty optimistic.
If everything is made at TSMC, then 30% could very well be on the low side. The Samsung process seems to be quite inferior to what is available at TSMC. Also keep in mind that ray tracing is still immature, so we might see massive gains there as well.

There was finally competition again in the current gen, outside of ray tracing, so neither side can afford to hold back on their high-end cards. If one side is 15% faster and only 10% more expensive, then most will choose the faster card.
 
It's just like any investment market: you need to not be greedy holding (and be willing to go a few months without assets).
Idk. I've never viewed a GPU as an investment. I've never gimped myself down to a backup card because, well, I still want to be able to play the same stuff.

*Edit* Then again, I never buy the overly expensive models in the first place and only buy when I can get a good deal.
 
If everything is made at TSMC, then 30% could very well be on the low side. The Samsung process seems to be quite inferior to what is available at TSMC. Also keep in mind that ray tracing is still immature, so we might see massive gains there as well.

There was finally competition again in the current gen, outside of ray tracing, so neither side can afford to hold back on their high-end cards. If one side is 15% faster and only 10% more expensive, then most will choose the faster card.
And the absurdity of 30% being a "good" outcome for a revised architecture on a new node isn't lost on us old-timers. As recently as 5 years ago, a 30% uplift after two years would have been considered a middling disappointment -- but here we are.
 
And the absurdity of 30% being a "good" outcome for a revised architecture on a new node isn't lost on us old-timers. As recently as 5 years ago, a 30% uplift after two years would have been considered a middling disappointment -- but here we are.
So true. Back in the day we used to be disappointed with anything below 50% and happy when we saw a 70% or higher increase over the previous architecture's initial release. Anything in between was as expected. It used to be new architecture -> refresh -> new architecture, with 2 years between architectures and a refresh in between. Now the refreshes barely offer any increase, and the architecture-to-architecture gap has been fairly small despite there still being 2 years or so between each new architecture. E.g. 3080 -> 4080 would be an architecture jump, and way back in the day we would usually have gotten somewhere between 50 and 70% gains, if not more 15+ years ago.
 
then the new cards come out and the high end one is 15% faster than the previous generation. Every. Single. Time.
Are you using some actual pricing-tier system, or looking at 1% lows / ms per frame instead of average fps, to say that? It really depends on resolution.

From the TechPowerUp link above, at 4K:
1080: 39.4 fps -> 2080: 58.2 fps -> 3080: 97.4 fps

Ratio-wise that is 1 : 1.477 : 2.47, or a 47% jump followed by a 67% one.

If the 2080 Super is a better in-between here, that changes to 57% and 56.5%.

Are people speculating that we'll again see a much-higher-than-60% jump at 4K on the 4080? The 4090, yes, but that power envelope and those specs are quite something.
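If anyone wants to re-run the ratio math, here's a quick snippet using the 4K averages quoted above (just those three numbers, nothing else assumed):

```python
# Generational uplift from the quoted TechPowerUp 4K averages.
fps_4k = {"GTX 1080": 39.4, "RTX 2080": 58.2, "RTX 3080": 97.4}

cards = list(fps_4k)
for prev, curr in zip(cards, cards[1:]):
    gain = fps_4k[curr] / fps_4k[prev] - 1
    print(f"{prev} -> {curr}: +{gain:.1%}")        # +47.7%, then +67.4%

overall = fps_4k[cards[-1]] / fps_4k[cards[0]]
print(f"{cards[0]} -> {cards[-1]}: {overall:.2f}x overall")   # 2.47x
```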
 