RTX 4xxx / RX 7xxx speculation

3DMark scores, absolute and as a ratio to the RTX 3090:
https://videocardz.com/newz/nvidia-...cores-leaked-at-least-82-faster-than-rtx-3090

Card                           Fire Strike Ultra   Time Spy Extreme   Port Royal
NVIDIA GeForce RTX 4090 AIB    25,708 (2.04)       19,431 (1.89)      25,414 (1.86)
NVIDIA GeForce RTX 4090 Stock  25,256 (2.00)       18,892 (1.84)      24,886 (1.82)
NVIDIA GeForce RTX 4080 16GB   17,465 (1.39)       13,977 (1.36)      17,607 (1.29)
NVIDIA GeForce RTX 3090 Ti     14,007 (1.11)       10,709 (1.04)      14,851 (1.09)
NVIDIA GeForce RTX 3090        12,607 (1.00)       10,293 (1.00)      13,642 (1.00)
NVIDIA GeForce RTX 3080 Ti     12,457 (0.99)       10,042 (0.98)      13,226 (0.97)
NVIDIA GeForce RTX 3080 12GB   11,593 (0.92)        9,358 (0.91)      12,151 (0.89)
AMD Radeon RX 6950 XT          15,038 (1.19)       10,644 (1.03)      10,788 (0.79)
AMD Radeon RX 6900 XT          14,333 (1.14)        9,952 (0.97)      10,398 (0.76)
AMD Radeon RX 6800 XT          12,950 (1.03)        9,203 (0.89)       9,536 (0.70)
AMD Radeon RX 6800             10,568 (0.84)        7,648 (0.74)       7,812 (0.57)

Average ratio to the 3090:
4090 AIB     1.93
4090 Stock   1.89
4080 16GB    1.34
3090 Ti      1.08
3090         1.00
3080 Ti      0.98
3080 12GB    0.91
6950 XT      1.00
6900 XT      0.95
6800 XT      0.87
RX 6800      0.71
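
For anyone who wants to sanity-check the summary row, here is a minimal sketch of how the ratios and averages are derived from the absolute scores above, using the 4090 AIB and 3090 rows as the example (the numbers are copied from the leak, not independently measured):

```cpp
#include <cstdio>

int main() {
    // Fire Strike Ultra, Time Spy Extreme, Port Royal (leaked scores from the table above)
    const double rtx4090Aib[3] = {25708.0, 19431.0, 25414.0};
    const double rtx3090[3]    = {12607.0, 10293.0, 13642.0}; // baseline card

    double sum = 0.0;
    for (int i = 0; i < 3; ++i) {
        const double ratio = rtx4090Aib[i] / rtx3090[i]; // ~2.04, ~1.89, ~1.86
        sum += ratio;
        std::printf("benchmark %d: %.2fx the 3090\n", i + 1, ratio);
    }
    std::printf("average: %.2fx the 3090\n", sum / 3.0); // ~1.93, matching the summary above
    return 0;
}
```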
 
Nvidia is really going for the "Oh well I guess I'll just buy a 4090" crowd this time. Good lord.

41% more performance for 33% more money. On a flagship GPU... Value gets better as you go up the line... It's backwards.
 
The 4080 pricing is trash for that performance increase. Everyone is def hopping on the 4090 once more benches come out.
 
From a value standpoint the 4090 makes much more sense than the 4080 cards, but the size of the cards and the whopping 450-watt baseline TDP are a massive turn-off. Sometimes I flat out did not play demanding games just because I did not want to deal with the 350 watts on my 3080 Ti.

The 4080 12GB is the most pathetic gen-on-gen improvement ever: it's less than 20% faster than the plain 3080 yet costs 28% more. So not only is the actual performance uplift a joke, but for the first time ever a next-gen 80-class card gives you LESS performance per dollar than the previous gen.
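
To put rough numbers on that, here is a sketch of the performance-per-dollar math, assuming the $699 RTX 3080 and $899 RTX 4080 12GB launch MSRPs and taking the ~20% uplift claim at face value (both figures are assumptions from the post above, not measured data):

```cpp
#include <cstdio>

int main() {
    const double perfUplift = 1.20;          // assumed: 4080 12GB vs. plain 3080, upper bound of the claim
    const double priceRatio = 899.0 / 699.0; // assumed launch MSRPs; ~1.29, i.e. ~28-29% more money
    const double valueRatio = perfUplift / priceRatio;

    std::printf("price ratio:              %.2f\n", priceRatio);
    std::printf("perf per dollar vs. 3080: %.2f\n", valueRatio); // ~0.93, i.e. worse value than last gen
    return 0;
}
```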
 


Rumors from people who have reviewed the card, still under embargo:

- 4090 about 80% above a 3090 in pure raster, sometimes higher
- In ray tracing, often over double a 3090
- 4090 probably safe to drop into a system built around a 3090, without the power spike issues, which makes it significantly easier to drive than a 3090 Ti
- Lovelace perf/watt would be excellent, and the card would keep much of it if you cap it at lower power, which would be especially good for the laptop versions
 


Rumors from people who have reviewed the card, still under embargo:

- 4090 about 80% above a 3090 in pure raster, sometimes higher
- In ray tracing, often over double a 3090
- 4090 probably safe to drop into a system built around a 3090, without the power spike issues, which makes it significantly easier to drive than a 3090 Ti
- Lovelace perf/watt would be excellent, and the card would keep much of it if you cap it at lower power, which would be especially good for the laptop versions

  • Raster is where it's at, I think, for most people, and those numbers are looking good. Great raster performance gives more FPS for competitive gamers and better framerates for high-resolution gamers, especially VR gamers. For VR we cannot rely on DLSS (though it's good to have in the few VR titles that support it) and need good raster performance combined with OpenVR_FSR.
  • Nvidia solving much of the issues with transient spikes should give a lot of us less headache and worry. It's a pain having to account for a big headroom in PSU power just because a few microseconds of spikes can trigger OCP. With the 4XXX series it's possible to take a more realistic approach when matching a PSU to the card (see the rough sizing sketch after this list).
  • Capping the cards while keeping performance is great for those of us who value the power vs. noise ratio.
  • Ray tracing is a bonus feature at best, worthless at worst. It was highlighted as something important and worthwhile during the launch of the 2XXX series, but it has been pretty much worthless in most cases. With my 2080 Ti it has been something I have turned on just to see, and then turned off again when I wanted to play the game. The only game I have found worthwhile to play with it on is Minecraft RTX. As long as it's something that "might" be useful mostly on halo products, I don't see it becoming a "necessary to have" feature in the near future. Ray tracing is still out of reach for the masses, so developers can't make it essential in games. A mere marketing feature, like it was when first introduced with the 2XXX series, IMO. Some reviewers' focus on ray tracing performance in the previous generation made their reviews misleading. Let's hope they have a more realistic and sane approach to it with the 4XXX series.
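
As referenced in the transient-spike bullet above, here is a toy PSU-sizing sketch; the spike multiplier and the rest-of-system figure are assumptions picked for illustration, not measured values:

```cpp
#include <cstdio>

int main() {
    const double gpuBoardPowerW = 450.0; // advertised 4090 board power
    const double spikeFactor    = 1.8;   // assumed worst-case transient multiplier (Ampere-era behaviour)
    const double restOfSystemW  = 250.0; // assumed CPU, motherboard, drives, fans

    const double sizedForAverage = gpuBoardPowerW + restOfSystemW;               // ~700 W
    const double sizedForSpikes  = gpuBoardPowerW * spikeFactor + restOfSystemW; // ~1060 W to dodge OCP trips

    std::printf("PSU sized for sustained draw:    ~%.0f W\n", sizedForAverage);
    std::printf("PSU sized for worst-case spikes: ~%.0f W\n", sizedForSpikes);
    return 0;
}
```

If the 4XXX cards really do tame the spikes, the second number collapses toward the first, which is the whole point of that bullet.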
 
Imho RT performance should remain hidden in reviews, tucked away on a separate page and otherwise ignored, but only as long as Ada Lovelace has better RT performance than RDNA3.
Should AMD manage to improve RDNA3's RT performance past Nvidia's levels, then it should go into the overall performance charts and RT performance should be presented as the biggest advantage.

It is very simple. What do you not get?
You all feel the same way, no matter your GPU vendor preference. Even you Jensen lovers, silly you 😃
 
Imho RT performance should remain hidden in reviews, tucked away on a separate page and otherwise ignored, but only as long as Ada Lovelace has better RT performance than RDNA3.
Should AMD manage to improve RDNA3's RT performance past Nvidia's levels, then it should go into the overall performance charts and RT performance should be presented as the biggest advantage.

It is very simple. What do you not get?
You all feel the same way, no matter your GPU vendor preference. Even you Jensen lovers, silly you 😃
For many of us this is not AMD vs. Nvidia, but AMD/NVIDIA vs. consumers. Hardware Unboxed got a lot of trouble from Nvidia when they chose to review the cards by how they perform in most games, rather than putting a heavy focus on a limited set of RTX features in games. They stood their ground and became a more reliable source than many of the other reviewers who were RTX- and especially ray-tracing-heavy. Even though I did buy a 2080 Ti myself, it was never for the ray tracing performance, but for the VR performance coming from a GTX 1080.

It doesn't matter whether AMD or Nvidia has the best RT performance this generation when it comes to the actual use of the cards. The install base for cards that can do ray tracing is too low, and many of those that can might not be able to do so at the chosen resolution and framerate. Perhaps a halo card from Nvidia or AMD can, but I think relatively few will use Psycho RT settings in-game outside of benchmarks.

The simplicity in this is that reviews should reflect the value any feature has for consumers, not the marketing value of the feature for the GPU vendor, regardless of who it is. Anyone who bought the 2XXX series for its ray tracing performance can look back now and see how little it mattered compared to how much some of the reviews painted it out to matter.

I've had only Nvidia cards for the last 5-6 years at least, but I will have no issue going AMD either. Not all of us have some heavy preference for or loyalty to either company, and we normally see things from a consumer perspective.

Edit: I do get that your post was a bit tongue in cheek, but ray tracing and its importance have been so misleading, and a lot of reviewers and even forum members should be ashamed of pushing it as being of some deal-breaking importance with the 2XXX and even the 3XXX series. At least if they dare to look back and see how little impact it has had over the last years.
 
ray tracing and its importance have been so misleading, and a lot of reviewers and even forum members should be ashamed of pushing it as being of some deal-breaking importance with the 2XXX and even the 3XXX series. At least if they dare to look back and see how little impact it has had over the last years.
Agreed. Someone did a breakdown of it here. Of the games that have ray tracing, most are indie titles with relatively few or even poor reviews. Of the more popular titles, a lot (like WoW) slap on ray tracing mostly for bragging rights rather than a significant graphics improvement.

The number of titles that actually use ray tracing well is pretty small.
 
Agreed. Someone did a breakdown of it here. Of the games that have ray tracing, most are indie titles with relatively few or even poor reviews. Of the more popular titles, a lot (like WoW) slap on ray tracing mostly for bragging rights rather than a significant graphics improvement.

The number of titles that actually use ray tracing well is pretty small.
RT is the future, and it has to start somewhere. Pretty much how all new technologies in rendering have begun. Never great the first few years and then picks up over time.
 
RT is the future, and it has to start somewhere. Pretty much how all new technologies in rendering have begun. Never great the first few years and then picks up over time.
I get that. Invest in it when it makes sense to. I just refuse to believe the sheer number of people who say they need raytracing actually use it significantly.
 
I use it in every game that I own if it supports it.
Right, I would too if I already had the card. But I'd hardly be buying one so I can play WoW with slightly better shadows.

I hear about cyberpunk and control, but then again I've never bought a new gpu because of a single game.
 
Ray tracing is a bonus feature at best, worthless at worst. It was highlighted as something important and worthwhile during the launch of the 2XXX series, but it has been pretty much worthless in most cases. With my 2080 Ti it has been something I have turned on just to see, and then turned off again when I wanted to play the game. The only game I have found worthwhile to play with it on is Minecraft RTX. As long as it's something that "might" be useful mostly on halo products, I don't see it becoming a "necessary to have" feature in the near future. Ray tracing is still out of reach for the masses, so developers can't make it essential in games. A mere marketing feature, like it was when first introduced with the 2XXX series, IMO. Some reviewers' focus on ray tracing performance in the previous generation made their reviews misleading. Let's hope they have a more realistic and sane approach to it with the 4XXX series.
I feel like that is what it has been as of now, and many Turing buyers will change their card before having had significant benefit from it. In some titles RT has a big enough impact that you can tell whether it is on or off without having to look at the menu or be an expert who knows what to look for, say Control:


The Spider-Man remaster does give some hope (if this is really in real time and not some trick; considering it will be launched as a playable demo in November I imagine there is not too much trickery beyond lots of camera focus and blur tricks to make it possible, but still, I feel it is not far from what a mid-budget rendered movie could look like just 15 years ago):


The day game studios save a lot of dev time and assets by not having to do all the cheating to simulate light, a lot of value could be passed back to RT buyers (and by then path tracing, Pixar-like render quality in real time, will be a thing). But in the in-between, when the game has to ship with everything made for RT-off and RT-on is just a mix of RT and regular rendering, the value is less obvious. The value of DLSS was more obvious (many equivalents were already in place for that reason), and it exploded much faster.

Now it depends on what we mean by near future. Over the expected useful lifetime of a 4090 (7-8 years) maybe it will get there, the next generation of consoles having a word to say about it being my guess, if we are already here today:


We could be a single doubling of RT performance, in just 2 years, away from making it viable and potentially game-changing, with the PlayStation 6 / next Xbox having enough of that power to make it fully mainstream in 2026. The way AI progresses, what the denoising and prediction side will make possible with just 4 times the raw power available by then is hard to predict.
 
3DMark scores, absolute and as a ratio to the RTX 3090 (full table quoted above):
https://videocardz.com/newz/nvidia-...cores-leaked-at-least-82-faster-than-rtx-3090
One 12 GB SKU is conveniently absent from these tables... Everyone knows it's shit already.
 
The 4080 pricing is trash for that performance increase. Everyone is def hopping on the 4090 once more benches come out.
That is their master plan lol. The 4090 probably has higher margins anyway.
 
I feel like that is what it has been as of now, and many Turing buyers will change their card before having had significant benefit from it. In some titles RT has a big enough impact that you can tell whether it is on or off without having to look at the menu or be an expert who knows what to look for, say Control:


The Spider-Man remaster does give some hope (if this is really in real time and not some trick; considering it will be launched as a playable demo in November I imagine there is not too much trickery beyond lots of camera focus and blur tricks to make it possible, but still, I feel it is not far from what a mid-budget rendered movie could look like just 15 years ago):


The day game studios save a lot of dev time and assets by not having to do all the cheating to simulate light, a lot of value could be passed back to RT buyers (and by then path tracing, Pixar-like render quality in real time, will be a thing). But in the in-between, when the game has to ship with everything made for RT-off and RT-on is just a mix of RT and regular rendering, the value is less obvious. The value of DLSS was more obvious (many equivalents were already in place for that reason), and it exploded much faster.

Now it depends on what we mean by near future. Over the expected useful lifetime of a 4090 (7-8 years) maybe it will get there, the next generation of consoles having a word to say about it being my guess, if we are already here today:


We could be a single doubling of RT performance, in just 2 years, away from making it viable and potentially game-changing, with the PlayStation 6 / next Xbox having enough of that power to make it fully mainstream in 2026. The way AI progresses, what the denoising and prediction side will make possible with just 4 times the raw power available by then is hard to predict.

Exactly! For Turing, even though some reviewers had an extreme focus on ray tracing, it has been pretty much a sidenote. Same goes for Ampere, and this generation it's the same. Ray tracing is not there yet for it to be more than a sidenote, and reviews should reflect that for consumers' sake. Some years from now it might have more value for the general consumer, but reviewers really should tone it down so as not to give consumers the wrong impression of its value. If not, we'll at least see which reviewers consumers can trust in how they review the products, as shown earlier with Hardware Unboxed.

CDPR and Cyberpunk are a big disappointment when it comes to adding value to the game in terms of how light sources are represented. They have gone in heavy with ray tracing, and are even developing a Psycho preset with Nvidia for it, while their HDR implementation is broken. Cyberpunk could have been an HDR showcase, but instead you don't get the high dynamic range with inky blacks and bright lights with great detail in between. SDR actually has better contrast overall, while HDR only has some extra detail in the brightest highlights. A prime example of how not to do HDR. HDR, if done right, would have a much bigger impact in Cyberpunk IMHO than ray tracing, and for a much larger user base (everyone with a PS or Xbox would benefit from it if they have a decent HDR TV). But I digress.

Don't get me wrong, I look forward to what ray tracing can do in the future, but the emphasis put on it by some reviewers and also forum members has been totally misplaced in the past, and I do hope they don't repeat it with the new 7XXX and 4XXX series. It would make it harder for me, and probably many others, to trust and value their judgment.

I hope [H] at some point brings back reviews. Kyle was not afraid to stand up and call out bullshit marketing and benchmark queens (2900 XT lol) from AMD and Nvidia. It was possible to trust that those reviews were done for the benefit of the consumer, not for the reviewer and their access to hardware from vendors.
 
The simplicity in this is that reviews should reflect the value any feature has for consumers, not the marketing value of the feature for the GPU vendor, regardless of who it is. Anyone who bought the 2XXX series for its ray tracing performance can look back now and see how little it mattered compared to how much some of the reviews painted it out to matter.
Is it Turing rage all over again? 🙄
Yes, Nvidia could have just added more cores/shaders and RT could have been pushed into the future.
We would get better absolute rasterization performance per generation out of the same die space / transistor count.

Nvidia chose RT as the natural way to move graphics forward.
Is it 'needed' for games? If we want graphics to stop looking more fake than KK's butt, then yes, we need RT.

I've had only Nvidia cards for the last 5-6 years at least, but I will have no issue going AMD either. Not all of us have some heavy preference for or loyalty to either company, and we normally see things from a consumer perspective.
Yes you are.
And I am sure there will be a good reason for you to get an RTX 40x0 instead of an RDNA3 card 🙃

Edit: I do get that your post was a bit tongue in cheek, but ray tracing and its importance have been so misleading, and a lot of reviewers and even forum members should be ashamed of pushing it as being of some deal-breaking importance with the 2XXX and even the 3XXX series. At least if they dare to look back and see how little impact it has had over the last years.
I disagree.
The general consensus in the reviews was that the tech is amazing, the future of computer graphics, and the only way to simplify game development while making game lighting look consistent. Those points are still valid. We are just not there yet and won't be for years.

A proper RT card would have at least half of its die space allocated to RT and aim at full path tracing right away.
What we have now is a low-cost (transistor-wise) implementation slapped in there to allow otherwise hard-to-replicate effects to be added in a more accurate fashion, and its performance is supposed to be developed further as the need for this tech increases. So if more games have really worthwhile effects, and maybe some straight-up require RT by ditching the shader-based faking of those same effects, then we will see companies like Nvidia investing more die space in RT.

That is good. This is the way to go.
We really do not need the same fake graphics in 8K. It wouldn't really improve anything, as 4K is at the limit of human vision from normal viewing distances anyway.
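
As a back-of-the-envelope check of that last claim, here is a small sketch of the pixels-per-degree math; the panel size, viewing distance, and the ~60 ppd acuity rule of thumb are assumptions for illustration, not figures from the post:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi         = std::acos(-1.0);
    const double diagonalIn = 65.0;   // assumed 16:9 TV size
    const double distanceCm = 250.0;  // assumed viewing distance
    const double widthCm    = diagonalIn * 2.54 * 16.0 / std::sqrt(16.0 * 16.0 + 9.0 * 9.0); // ~143.9 cm

    const double fovDeg = 2.0 * std::atan(widthCm / (2.0 * distanceCm)) * 180.0 / pi; // ~32 degrees of horizontal FOV
    const double ppd    = 3840.0 / fovDeg;                                            // ~120 pixels per degree

    // ~60 ppd is the usual 20/20 acuity rule of thumb, so a 4K panel at this
    // distance already exceeds what most eyes can resolve; 8K adds nothing visible.
    std::printf("horizontal FOV: %.1f deg, pixels per degree: %.0f\n", fovDeg, ppd);
    return 0;
}
```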
 
Nvidia chose RT as the natural way to move graphics forward.
Is it 'needed' for games? If we want graphics to stop looking more fake than KK's butt, then yes, we need RT.
Why do people constantly equate "RT simply isn't as important NOW as it's made out to be" with "RT will never be important"?

Nobody is saying ray tracing isn't the future. But how many of the top played games even have it? How many have it and actually look good with it?

As usual, we'll be using RX 7xxx and RTX 4xxx cards mostly for rasterized work. I bet that will still hold true when the next generation comes out. Hence it is still by far the most important metric for most people to consider when determining what they'll get out of their card in real performance.

Reviews and marketing should reflect that.
 
Why do people constantly equate "RT simply isn't as important NOW as it's made out to be" with "RT will never be important"?

Nobody is saying ray tracing isn't the future. But how many of the top played games even have it? How many have it and actually look good with it?

As usual, we'll be using RX 7xxx and RTX 4xxx cards mostly for rasterized work. I bet that will still hold true when the next generation comes out. Hence it is still by far the most important metric for most people to consider when determining what they'll get out of their card in real performance.

Reviews and marketing should reflect that.
Reviews rarely reflect the top games played. Look at the Steam top 100 games being played and count the RT-capable ones (which does not mean RT is being used by the player even if the game has it). Those are the games the majority of Steam users, and thus PC gamers in general, want to play and are playing. Now, I was surprised as hell to see Cyberpunk at #10 when it had previously drifted off the top 100 chart, a game I got bored with and left some time ago. Anyway, a reviewer taking notice and testing what is needed to get a good gaming experience in the top 100, being in touch with what people actually want and are playing, would be very useful and would knock it out of the park. And virtually none of those games need a 40-series card, or even a 30-series card, for a great gaming experience.
 
Exactly! For Turing, even though some reviewers had an extreme focus on ray tracing, it has been pretty much a sidenote. Same goes for Ampere, and this generation it's the same. Ray tracing is not there yet for it to be more than a sidenote, and reviews should reflect that for consumers' sake. Some years from now it might have more value for the general consumer, but reviewers really should tone it down so as not to give consumers the wrong impression of its value. If not, we'll at least see which reviewers consumers can trust in how they review the products, as shown earlier with Hardware Unboxed.
It is not easy for most reviewers to make any future-type projection (even if they made their own 3D engine or played with UE to do it, the result from a real studio could end up much more optimized).

Take how much focus some reviewers put on the 16 GB of VRAM offered by AMD versus 8-10 GB on Nvidia's direct competition: as of now it has been a complete sidenote. Would that still be true in 2023? Would it have been as true without COVID and the supply issues?

That is more the job of a technologist than of the reviewers we have in mind, who usually have little more of a clue than their potential buyers. They can show how much visual quality a feature adds in current titles and at what cost, pop up a list of known upcoming games and let people decide, and that is usually what they do.

One issue is Nvidia's hold over the industry and the tight lips in general. Maybe there is a world where, when Reflex, DLSS 3, Resizable BAR, cache instead of high memory bandwidth, ray tracing, or the amount of VRAM differs significantly between two consumer options, reviewers interview some of the best people working on Frostbite, Unreal, 4A, RED, Crytek or id tech, or at Sony/Microsoft/Ubisoft, plus driver people who have had access to engineering samples, to give us their opinion on the tech and an informed view of the near and mid-term future. But usually it is a big nothing. A lot of those reviewers feel like they have never coded a shader in their life and yet review those products simply because they loved playing games and playing with computers when they were young. The conversation among them around Resizable BAR was a glaring example of that (or around DirectStorage); those things stay impressively unclear in the online world partly because of how unclear they seem to be to reviewers (and how few of them have their own little 3D engine around to try those techs and explain the differences they see).

TLDR: Turing and Ampere reviewers for the most part had no idea beyond current tech demos, current games, and announced games, and we will get the same for Lovelace.
 
Reviews rarely reflect the top games played. Look at the Steam top 100 games being played and count the RT-capable ones (which does not mean RT is being used by the player even if the game has it). Those are the games the majority of Steam users, and thus PC gamers in general, want to play and are playing. Now, I was surprised as hell to see Cyberpunk at #10 when it had previously drifted off the top 100 chart, a game I got bored with and left some time ago. Anyway, a reviewer taking notice and testing what is needed to get a good gaming experience in the top 100, being in touch with what people actually want and are playing, would be very useful and would knock it out of the park. And virtually none of those games need a 40-series card, or even a 30-series card, for a great gaming experience.
Cyberpunk got some revival from a Netflix TV show, I think.

I think it is OK for reviews to include just some of the top games played (a la GTA 5); the top games played worldwide will be significantly different from what a potential buyer (or tech-enthusiast viewer) is interested in when they look up a review of a 4090. A series on how much better a 4090 does than a 3080 Ti or 6900 XT in Dota 2, Apex, Counter-Strike, Team Fortress, Fallout 4, or Destiny 2 would get old fast.

People are probably more interested in what the hard-to-run 2022-2023 titles (Stalker 2, The Witcher 4, Ark 2, Plague Tale 2) will do than in old games, a bit like CPU reviews do not necessarily spend a lot of space on very common tasks that a mid-range 10-year-old CPU already does very well and fast enough.
 
It is not easy for most reviewers to make any future-type projection (even if they made their own 3D engine or played with UE to do it, the result from a real studio could end up much more optimized).

Take how much focus some reviewers put on the 16 GB of VRAM offered by AMD versus 8-10 GB on Nvidia's direct competition: as of now it has been a complete sidenote. Would that still be true in 2023? Would it have been as true without COVID and the supply issues?

That is more the job of a technologist than of the reviewers we have in mind, who usually have little more of a clue than their potential buyers. They can show how much visual quality a feature adds in current titles and at what cost, pop up a list of known upcoming games and let people decide, and that is usually what they do.

One issue is Nvidia's hold over the industry and the tight lips in general. Maybe there is a world where, when Reflex, DLSS 3, Resizable BAR, cache instead of high memory bandwidth, ray tracing, or the amount of VRAM differs significantly between two consumer options, reviewers interview some of the best people working on Frostbite, Unreal, 4A, RED, Crytek or id tech, or at Sony/Microsoft/Ubisoft, plus driver people who have had access to engineering samples, to give us their opinion on the tech and an informed view of the near and mid-term future. But usually it is a big nothing. A lot of those reviewers feel like they have never coded a shader in their life and yet review those products simply because they loved playing games and playing with computers when they were young. The conversation among them around Resizable BAR was a glaring example of that (or around DirectStorage); those things stay impressively unclear in the online world partly because of how unclear they seem to be to reviewers (and how few of them have their own little 3D engine around to try those techs and explain the differences they see).

TLDR: Turing and Ampere reviewers for the most part had no idea beyond current tech demos, current games, and announced games, and we will get the same for Lovelace.
So right there. It is harder for folks to see what is missing than what is right in front of them. It would not be that difficult to import million-plus-polygon models into Unreal 5 to see how a GPU handles very dense models or scenes, then add in lights and increase the ray count, then build mega shaders (multiple shaders in one, which shows how well a cache works). Today's reviewers are not pushing understanding or making the reader better except on rare occasions. Years past, when everything was new, reviewers did push readers' understanding and general ability with PC tech and building. It is totally stagnant today.
 
Cyberpunk got some revival from a Netflix TV show, I think.

I think it is OK for reviews to include just some of the top games played (a la GTA 5); the top games played worldwide will be significantly different from what a potential buyer (or tech-enthusiast viewer) is interested in when they look up a review of a 4090. A series on how much better a 4090 does than a 3080 Ti or 6900 XT in Dota 2, Apex, Counter-Strike, Team Fortress, Fallout 4, or Destiny 2 would get old fast.

People are probably more interested in what the hard-to-run 2022-2023 titles (Stalker 2, The Witcher 4, Ark 2, Plague Tale 2) will do than in old games, a bit like CPU reviews do not necessarily spend a lot of space on very common tasks that a mid-range 10-year-old CPU already does very well and fast enough.
I agree, a balance of pushing the card and showing what is actually useful would be a better approach in general. I wonder how many will buy a 4090 but predominantly play CS:GO or some other game that sits way below any usefulness of a 4090, where even a 3070 would be overkill? My view: if your GPU is rendering faster than your monitor can display, that is mostly wasted performance, unless one thinks tearing is useful and that mismatched information (one displayed frame stitched from multiple rendered frames) enhances reaction time thanks to the faster rendering (I would say it is mostly detrimental at this stage).
 
I'm sad to see PC gaming spinning completely out of whack from a value perspective. This has always been an expensive hobby; there has always been better value in gaming consoles, at least when they are somewhat new. That value lowers year over year as the price of the console does not decrease linearly compared to whatever hardware it features. But I'm an enthusiast who has always been willing to put a whole bunch of money into my computer rig to stay on top of things. I love everything new in tech, and I really dislike fiddling with optimising my settings to achieve good performance. I've always spent top dollar on whatever graphics card would give me Very High / Ultra settings at my display resolution while also giving me a stable 60 FPS. In later years my bar has moved from a rig that gives me a stable 2560x1440@60FPS experience to something that will give me a stable 3440x1440@120FPS experience, still at Very High+ settings, and if there is some ray tracing goodness available I prefer to be able to use it without my FPS plummeting.

Seems like this is growing more and more unlikely by the day. The RTX 4090 seems like my perfect graphics card. It's the very first card on the market where I can feel comfortable that I should be able to achieve 3440x1440@120FPS regardless of my in-game settings. But the price point is getting ridiculous.

I have never been about getting the most for my money. There is no such thing as being an enthusiast and getting great value for money. I've paid for the top-of-the-line cards in the past, even though the performance per dollar drops a lot when moving into the most expensive cards. The whole reason has been that this is my primary hobby and I don't mind paying extra for that little bit of extra performance that ensures I'll hit my performance goals without having to optimise a lot of settings and sacrifice visual quality in the games I'm playing.

Normally, if you were an XX80 Ti user, you would look at whatever is the new generation of XX80 Ti. It would normally be in somewhat the same ballpark in price compared to your previous XX80 Ti model, making it all about generational improvements: will this new generation of XX80 Ti, the card within my price range, offer a decent enough improvement to warrant an upgrade? And I would mostly conclude yes, yes it does. These models were basically "price brackets". And I feel it was like this forever until the RTX 3000 series, where we had a price hike and the XX80 Ti became the XX90 at a much higher cost. That felt somewhat okay to me, considering inflation and whatnot. But having yet another price hike this time around makes the RTX 4000 series feel extremely awkward to me. The performance gains are truly impressive, and efficiency is looking very good even though the max power usage of the card is kind of ridiculous on its own. But most of the generational gains go down the drain when NVIDIA yet again gives it a massive increase in price.

I could get the RTX 3090 at launch for about 15'000 NOK; the RTX 4090 is costing 21'000 NOK. This comes off as extremely strange to me. It makes the improvements from one generation to the next feel almost irrelevant when NVIDIA is also pricing the card at the level where one would expect a hypothetical RTX 3095 with almost similar performance to sit. Looking at this card as an RTX 4090, the performance uplift is very impressive. But when the card does not keep the XX90 pricing level from the previous generation, it doesn't feel that impressive anymore.

The generational leap in performance would be comparing the RTX 3090 to the RTX 4080 16GB, as they both exist within the same price bracket, and I suspect that outside of improvements to RT it won't be all that impressive. One thing is for certain: if the future puts even high-end (XX80) levels of performance at 15'000 NOK price points, my days as a PC enthusiast are for sure over.
 
I agree, a balance of pushing the card and showing what is actually useful would be a better approach in general. I wonder how many will buy a 4090 but predominantly play CS:GO or some other game that sits way below any usefulness of a 4090, where even a 3070 would be overkill? My view: if your GPU is rendering faster than your monitor can display, that is mostly wasted performance, unless one thinks tearing is useful and that mismatched information (one displayed frame stitched from multiple rendered frames) enhances reaction time thanks to the faster rendering (I would say it is mostly detrimental at this stage).

I have a hard time believing that people only playing CS:GO are considering anything like this. Sadly it's very uncommon for things to be as simple as taking one game and getting whatever graphics card will net you maximum settings at your display resolution and refresh rate. I game at 3440x1440@175Hz; I don't care for 175 FPS, I find 120 FPS to be plenty. But I still prefer a setup that lets me play at mostly maximum settings at 3440x1440 while achieving a stable 120 FPS. People claim the 3060 Ti / 3070 to be the perfect card for 1440p gaming. There is no way I would get a stable 3440x1440@120FPS in games like uncapped Elden Ring, Red Dead Redemption 2, Cyberpunk 2077, Microsoft Flight Simulator, Death Stranding, Horizon Zero Dawn, Final Fantasy VII Remake etc. At least not without having to optimise settings and/or use some kind of DLSS to render at a lower resolution for better performance, which comes with drawbacks.

In a lot of the games I play, graphics card A would be more than capable of achieving my goal of 3440x1440@120FPS with Very High+ settings. But there is always a list of games that won't manage it, and you always want some headroom if you are not planning on replacing the graphics card quickly, as there are always new games on the horizon that push even further.

EDIT:

This would normally be the benefit of being a PC gamer and enthusiast. It sure is more expensive than console gaming, but if you are willing to put down some money you will mostly be able to achieve a great overall gaming experience far outpacing what you get on consoles. But with this trend of prices going crazy I'm more tempted to just skip it altogether. Sure, some will argue that being an enthusiast will always be expensive, and I can attest to that, I've been one for the past 15 years. But we are moving into territory where being a PC enthusiast means price points that are getting silly. I've always pushed the capabilities of my gaming PC and paid a lot for it. It's not like I'm expecting things to change; I'm just saying that to stay at my current level of PC enthusiasm I would have to pay so much money that I can't really defend spending that much on my hobby. And if I'm going to lower my expectations of what my PC gaming experience should be like, lower myself from enthusiast to mid-high tier PC gamer, then my PS5 and Xbox Series X start to look more and more attractive considering what they offer at their respective price points and the number of optimisations that go into most console games.

Death Stranding Director's Cut looks better and runs better on my PC, and up until this point I've been willing to spend that extra money for what I consider a superior PC gaming experience. But with these prices the benefits are getting far too slim considering the massive uplift in cost, so I would consider connecting a PS5 to my monitor for gaming instead.
 
I'm sad to see PC gaming spinning completely out of whack from a value perspective. This has always been an expensive hobby; there has always been better value in gaming consoles, at least when they are somewhat new. That value lowers year over year as the price of the console does not decrease linearly compared to whatever hardware it features. But I'm an enthusiast who has always been willing to put a whole bunch of money into my computer rig to stay on top of things. I love everything new in tech, and I really dislike fiddling with optimising my settings to achieve good performance. I've always spent top dollar on whatever graphics card would give me Very High / Ultra settings at my display resolution while also giving me a stable 60 FPS. In later years my bar has moved from a rig that gives me a stable 2560x1440@60FPS experience to something that will give me a stable 3440x1440@120FPS experience, still at Very High+ settings, and if there is some ray tracing goodness available I prefer to be able to use it without my FPS plummeting.

Seems like this is growing more and more unlikely by the day. The RTX 4090 seems like my perfect graphics card. It's the very first card on the market where I can feel comfortable that I should be able to achieve 3440x1440@120FPS regardless of my in-game settings. But the price point is getting ridiculous.

I have never been about getting the most for my money. There is no such thing as being an enthusiast and getting great value for money. I've paid for the top-of-the-line cards in the past, even though the performance per dollar drops a lot when moving into the most expensive cards. The whole reason has been that this is my primary hobby and I don't mind paying extra for that little bit of extra performance that ensures I'll hit my performance goals without having to optimise a lot of settings and sacrifice visual quality in the games I'm playing.

Normally, if you were an XX80 Ti user, you would look at whatever is the new generation of XX80 Ti. It would normally be in somewhat the same ballpark in price compared to your previous XX80 Ti model, making it all about generational improvements: will this new generation of XX80 Ti, the card within my price range, offer a decent enough improvement to warrant an upgrade? And I would mostly conclude yes, yes it does. These models were basically "price brackets". And I feel it was like this forever until the RTX 3000 series, where we had a price hike and the XX80 Ti became the XX90 at a much higher cost. That felt somewhat okay to me, considering inflation and whatnot. But having yet another price hike this time around makes the RTX 4000 series feel extremely awkward to me. The performance gains are truly impressive, and efficiency is looking very good even though the max power usage of the card is kind of ridiculous on its own. But most of the generational gains go down the drain when NVIDIA yet again gives it a massive increase in price.

I could get the RTX 3090 at launch for about 15'000 NOK; the RTX 4090 is costing 21'000 NOK. This comes off as extremely strange to me. It makes the improvements from one generation to the next feel almost irrelevant when NVIDIA is also pricing the card at the level where one would expect a hypothetical RTX 3095 with almost similar performance to sit. Looking at this card as an RTX 4090, the performance uplift is very impressive. But when the card does not keep the XX90 pricing level from the previous generation, it doesn't feel that impressive anymore.

The generational leap in performance would be comparing the RTX 3090 to the RTX 4080 16GB, as they both exist within the same price bracket, and I suspect that outside of improvements to RT it won't be all that impressive. One thing is for certain: if the future puts even high-end (XX80) levels of performance at 15'000 NOK price points, my days as a PC enthusiast are for sure over.

Problem is that Nvidia is competing with themselves.

Because of mining and the insane demand during COVID times, they mega-boosted production, and now that mining is dead there is a ridiculous number of Ampere cards around (there is big inertia in manufacturing lines and the death of mining was extremely sudden), so they adjusted the prices, and the RTX 4090 is just the top of the ladder because, you know, they can do whatever the hell they want anyway. But it's the same price as the 3090 on release (OK, $100 more); it's just that the dollar got stronger, so outside of the USA it feels way more expensive. The latter at least is not Nvidia's fault. AMD cards (and CPUs) will also feel ridiculously expensive outside of the USA this gen. Not to mention energy costs in the EU with these very high power parts. What really hurts is not the price per se, it's making the top card the best VALUE. That never happened until now. There was always a card costing half the price of the halo card and giving 90-95% of the performance. But they decided not to do that anymore. And there's no competition out there to punish them.
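
A rough illustration of that "same USD price, weaker local currency" point, using the NOK figures quoted above and the $1,499 / $1,599 US MSRPs; the implied rates lump exchange rate, VAT and retailer margin together, so treat this as an order-of-magnitude sketch rather than real FX data:

```cpp
#include <cstdio>

int main() {
    const double nok3090 = 15000.0, usd3090 = 1499.0; // 3090 launch: NOK street price vs. US MSRP
    const double nok4090 = 21000.0, usd4090 = 1599.0; // 4090 launch: NOK street price vs. US MSRP

    std::printf("USD MSRP increase:   %.1f%%\n", (usd4090 / usd3090 - 1.0) * 100.0); // ~6.7%
    std::printf("NOK price increase:  %.1f%%\n", (nok4090 / nok3090 - 1.0) * 100.0); // ~40%
    std::printf("implied NOK per USD: %.1f -> %.1f\n", nok3090 / usd3090, nok4090 / usd4090);
    return 0;
}
```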

While the AMD cards are not necessarily far behind in raster performance and even had a bit better efficiency last gen, they fall short on all the other metrics (OK, they are better at running 1080p/1440p in some games, but those resolutions have been trivial to run for the last few generations of cards, so who cares?). Features, software support and availability of cards: Nvidia is just too far ahead. It seems they got ahead of everyone in terms of cooling for this gen too, which will hurt custom card sales and boost the real value of the FE even further; like Steve from GN said, EVGA probably got out at the right time :)

tl;dr Nvidia is too powerful
 
Last edited:
Does anyone know any rumors/speculation about the 4060 ti?

- Price?
- Release date?
- Power draw?
- Improvement % over 3060 ti?
 
Well, we shall worry about it over time then.
The pattern I typically see is:
-nvidia releases a new, proprietary technology to push GPU sales
-AMD releases a half-baked version of it in their next-gen
-eventually, 3-4 gens later it works and is open source and no one uses the nvidia solution anymore

So it's like... thanks for pushing development / new features nvidia, but since it's probably just a tech demo I won't base any purchasing decisions around it (except rare cases like minecraft/quake where the base game is so easy to run that you can actually benefit from it now)
see also: hairworks
 
The pattern I typically see is:
-nvidia releases a new, proprietary technology to push GPU sales
-AMD releases a half-baked version of it in their next-gen
-eventually, 3-4 gens later it works and is open source and no one uses the nvidia solution anymore

So it's like... thanks for pushing development / new features nvidia, but since it's probably just a tech demo I won't base any purchasing decisions around it (except rare cases like minecraft/quake where the base game is so easy to run that you can actually benefit from it now)
see also: hairworks

You would think they would learn and make their stuff open source, so it would force AMD to have to use that standard instead. Nvidia hurts themselves by being so proprietary and difficult to work with.
 
RT is the future, and it has to start somewhere. Pretty much how all new technologies in rendering have begun. Never great the first few years and then picks up over time.
Well, we shall worry about it over time then.
The pattern I typically see is:
-nvidia releases a new, proprietary technology to push GPU sales
-AMD releases a half-baked version of it in their next-gen
-eventually, 3-4 gens later it works and is open source and no one uses the nvidia solution anymore

So it's like... thanks for pushing development / new features nvidia, but since it's probably just a tech demo I won't base any purchasing decisions around it (except rare cases like minecraft/quake where the base game is so easy to run that you can actually benefit from it now)
see also: hairworks
There is nothing proprietary about ray tracing. There are vendor-agnostic extensions in Vulkan now, and it is baked into DirectX 12 Ultimate (feature level 12_2) as DXR.
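
To the Vulkan point, a minimal sketch (assuming recent Vulkan headers and that you already have a VkPhysicalDevice from a created instance) of checking for the cross-vendor KHR ray tracing extensions:

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Returns true if the device advertises the vendor-agnostic KHR ray tracing extensions.
bool supportsKhrRayTracing(VkPhysicalDevice device) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(device, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> extensions(count);
    vkEnumerateDeviceExtensionProperties(device, nullptr, &count, extensions.data());

    bool hasAccelStruct = false, hasRtPipeline = false;
    for (const VkExtensionProperties& e : extensions) {
        if (std::strcmp(e.extensionName, VK_KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME) == 0)
            hasAccelStruct = true;
        if (std::strcmp(e.extensionName, VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME) == 0)
            hasRtPipeline = true;
    }
    // The same check works on NVIDIA, AMD and Intel drivers that expose these extensions.
    return hasAccelStruct && hasRtPipeline;
}
```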
 