RTX 5090 - $2000 - 2 Slot Design - available on Jan. 30 - 12VHPWR still

So, to summarize, wait for reviews, gaming may or may not be a good uplift, depending on the settings you use, but, we expect some decent uplift in compute performance?
 
I have mixed feelings. On one hand, you're correct. Even raster itself employs many tricks to get closer to "life". Shadows, rays, etc. The question has always been about "what's the real deal?" Is raster the complete removal of any "fakery"? No, not really. Raster is a lot of fakery, too (at least afaik). Is "real life" the actual source of truth? Then neither tech is inherently wrong. AI is simply approaching it from a different angle.
The biggest issue is that raster fakery costs millions in staffing and development, while ray-traced AI fakery costs a few thousand in hardware they had to buy anyway.
Ray tracing was never about delivering the superior product; it was about delivering 90% of the best raster results for 10% of the cost.
 
It does when you put a water block on it :D
Depends what direction your radiator fans are blowing. Most manufacturers say to mount rad fans blowing in, because the outside air is going to cool better than the inside air, hence more efficient as a cooling method, so you could potentially still be blowing that heat inside the case.
 
So, to summarize, wait for reviews, gaming may or may not be a good uplift, depending on the settings you use, but, we expect some decent uplift in compute performance?
To summarize: all the fake-frame nonsense people groaned and complained about last time... well, now they're putting in even more fake frames, inflating the artificial FPS numbers even further.
 
Everyone is starting to wake up to the fact that the RTX 5090 probably isn't as powerful as everyone wants it to be. It's definitely an iterative performance increase, which is sad considering I've had my RTX 4090 since launch day over 2 years ago.
I was expecting $2,500 for a much more powerful card. I saw that the 7900 XTX could usually be OC’d to 3.2 GHz, so I thought that we would get +20-25% clock speeds in addition to higher core counts and architectural improvements. If I had known that Blackwell would have basically the same thermal limitations as Lovelace, I would have had very different expectations.

I’m happy for the people who can’t spot the deficiencies of upscaling and frame-gen, but for the kind of gaming I do, it’s just not relevant. Therefore, the performance increase is about 10-20% for my use case, and with a card that runs hotter in my SFF case. That doesn’t exactly make me jump for joy. The price reduction is welcome, of course, but it’s only a slight reduction from frankly-insulting prices. I’m the kind of gamer who likes to see a night-and-day difference when I upgrade, which usually means double the raster performance. I thought maybe the 5080 would deliver that over my 6800 XT, and that I would see if I wanted one when the 24 GB version came out, depending on pricing. Now I can see that we’re a long way away. Here’s to 2026!
 
To summarize: all the fake-frame nonsense people groaned and complained about last time... well, now they're putting in even more fake frames, inflating the artificial FPS numbers even further.
Which is fine, for those that use it. For others, not so much.
 
To summarize: all the fake-frame nonsense people groaned and complained about last time... well, now they're putting in even more fake frames, inflating the artificial FPS numbers even further.
At what point are the fake frames actually fake, though? Changes to DX12 and upcoming changes to Vulkan are going to provide the GPU with even more data with which it can generate those frames more accurately.
https://devblogs.microsoft.com/dire...rectx-cooperative-vector-support-coming-soon/
What is currently AI fakery is getting baked in at an even lower level, which will lead to better-quality results with less overhead. Nvidia has been laying the groundwork for this for years now.
 
Depends what direction your radiator fans are blowing. Most manufacturers say to mount rad fans blowing in, because the outside air is going to cool better than the inside air, hence more efficient as a cooling method, so you could potentially still be blowing that heat inside the case.

That is true, but if you have blocks on both your CPU and GPU it barely matters anymore. You can orient those fans either way you want, blowing in or blowing out, because there won't be any significant heat-generating components dumping heat in the case anymore. (Though DDR5 and some NVMe drives are sure trying to give CPUs and GPUs a run for their money :p )
 
That is true, but if you have blocks on both your CPU and GPU it barely matters anymore. You can orient those fans either way you want, blowing in or blowing out, because there won't be any significant heat-generating components dumping heat in the case anymore. (Though DDR5 and some NVMe drives are sure trying to give CPUs and GPUs a run for their money :p )
Right, they move enough air that as long as you have some decent flow in and out, the difference between them is academic.
 
The biggest issue is that raster fakery costs millions in staffing and development, while ray-traced AI fakery costs a few thousand in hardware they had to buy anyway.
Ray tracing was never about delivering the superior product; it was about delivering 90% of the best raster results for 10% of the cost.

What changes is who pays for that though.

You are cutting costs at game developers, who no longer have to hire specialized artists to recreate lighting effects and reflections with raster fakes, but you are pushing that cost onto consumers instead. Now every consumer needs to buy a far more expensive GPU than they used to, one that draws far more power and keeps drawing that power for the life of the game.

Not every game can be Minecraft or Grand Theft Auto V with their 200M to 300M sales, but if we take an example like Cyberpunk 2077 and its ~8M sales (an apt example, since it is the poster child for RT) and we compare total costs on each side, I think the costs of the 8 million users over the life of the game are going to be much higher than the costs of the - what - 10 artist-years?

So developers are shifting relatively small costs on their end into relatively large costs once added up over millions of users and the decade or so the game stays relevant and played.
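Just to make that arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it (artist cost, extra GPU cost, wattage, hours played, electricity price) is a made-up placeholder for illustration, not a sourced figure:

```python
# Rough back-of-envelope comparison of who pays for lighting "fakery".
# Every number here is an assumption picked for illustration, not a sourced figure.

ARTIST_YEARS = 10                # assumed hand-tuned-lighting effort saved by going RT
COST_PER_ARTIST_YEAR = 150_000   # assumed fully loaded cost per artist-year, USD

PLAYERS = 8_000_000              # the ~8M figure used above
EXTRA_GPU_COST = 300             # assumed extra spend per player on an RT-capable GPU, USD
EXTRA_WATTS = 150                # assumed extra power draw with RT enabled
HOURS_PLAYED = 100               # assumed hours per player over the game's life
PRICE_PER_KWH = 0.15             # assumed electricity price, USD

developer_side = ARTIST_YEARS * COST_PER_ARTIST_YEAR
consumer_energy = PLAYERS * (EXTRA_WATTS / 1000) * HOURS_PLAYED * PRICE_PER_KWH
consumer_hardware = PLAYERS * EXTRA_GPU_COST

print(f"Developer cost avoided: ${developer_side:,.0f}")      # $1,500,000
print(f"Consumer electricity:   ${consumer_energy:,.0f}")     # ~$18,000,000
print(f"Consumer hardware:      ${consumer_hardware:,.0f}")   # ~$2,400,000,000
```

With those (admittedly hand-picked) numbers, the consumer side dwarfs the developer side even before you argue about how many artist-years RT actually saves.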

Is that a fair tradeoff? The net cost (not to mention the net environmental impact of several-hundred-watt GPUs becoming a requirement, if that is your thing) would suggest otherwise.

Especially when considering that hand-optimized rasterization with "cheats" like shadow maps and manual reflection effects can look almost as good as true RT (but hasn't ever since RT became a thing, because those artists' jobs were the first to be cut, so non-RT rendering now looks like crap).

I always chafe at developers who refuse to use low-level languages and optimize their code, who say things like "whatever, CPU cycles and RAM are cheap". Yeah, well fuck you, you are spending other people's money, and it adds up over the very large number of people you sell your software to, so fucking do your job like it is 1991 and code things efficiently in C without cheat libraries like Boost.
 
I'd wait on the "2 slot" thing. Many cards are 2 slots, you just can't plug anything into the 3rd slot :)
 
Cyberpunk 2077 and its ~8M sales
https://www.pcgamesinsider.biz/news/74838/cyberpunk-2077-hits-30m-sale

~30M for the base game and ~8M for Phantom Liberty.

But yeah, why spend hundreds of millions on hand optimizations when they can just tell us to buy a new GPU.

Overall, though, Nvidia is leading the charge in changing how things are rendered as a whole. It's not there yet, but I would not be surprised if the next generation of game engines used AI generation to render frames entirely or at least partially, perhaps only rendering the wireframe with AI asset tags in place so the GPU's neural engine knows how to place the assets accordingly.
 
I'd wait on the "2 slot" thing. Many cards are 2 slots, you just can't plug anything into the 3rd slot :)
My server GPUs are all 600 W 2-slot monsters. I mean, no fans on them at all; they rely on the chassis fans to move the air out and through them, but hey, they are all 2 slot...
 
Looks pretty 2-slot to me. I'm not a Linus fan but this was a nice preview.


https://www.youtube.com/watch?v=3a8dScJg6O0

Again, emphasis, as long as you can plug something into the 3rd slot.

Let me put it this way. Today, to get a 2 slot card, we'd need someone claiming a "one slot" solution.

And with the wattages required, no vapor chamber or liquid metal is going to be really effective/enough, IMHO. So, even if it's "compact", it's gotta breathe. I'd wait before getting too excited. If "apparent" 2-slot cards start appearing en masse from the 3rd parties... then I'd say "this seems real".
 
My server GPUs are all 600 W 2-slot monsters. I mean, no fans on them at all; they rely on the chassis fans to move the air out and through them, but hey, they are all 2 slot...
:) I guess we have "a solution".
 
I talked about it here:
https://hardforum.com/threads/rtx-5...d-overclocking-thread.2038962/post-1046029115

TL;DR, I think with this gen they've given up almost entirely on raw perf improvements. They're going all in on AI, since that's where their R&D is anyway, and also because they probably can't figure out a way to keep up with game demands using raw HW perf increases alone. Most of this gen is about increased AI hardware, not raster hardware.
And I for one am perfectly content with this. After all, their raster HW is already the best in the world, and the AI fakery actually happens to.... you know... work! It looks good, and dramatically bumps frame rates on 4K+max settings content. They are doing exactly the right things, and as a 4090 owner I am rather excited by the 5090.
 
Bigger die size with almost a third more cores, though with a 4-ish percent lower official boost clock, but with a significantly wider memory bus and faster memory for a total of nearly 80% more memory bandwidth. Granted, most of that bandwidth is probably there to supply fake frames for the not-really-AI tensor cores, but I am still interested.
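For what it's worth, the ~80% figure checks out if you plug in the commonly cited specs (512-bit GDDR7 at 28 Gbps per pin for the 5090 vs. 384-bit GDDR6X at 21 Gbps per pin for the 4090); quick sanity check, treating those specs as assumptions:

```python
# Peak memory bandwidth in GB/s = bus width (bits) / 8 * per-pin data rate (Gbps).
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

bw_5090 = peak_bandwidth_gbs(512, 28.0)   # ~1792 GB/s
bw_4090 = peak_bandwidth_gbs(384, 21.0)   # ~1008 GB/s
print(f"5090: {bw_5090:.0f} GB/s, 4090: {bw_4090:.0f} GB/s, "
      f"uplift: {(bw_5090 / bw_4090 - 1) * 100:.0f}%")      # ~78%
```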


Hating frame generation and upscaling is a perfectly reasonable response, since the images being rendered aren't the real images but approximations. When you buy solid wood furniture, is MDF with a veneer that looks similar to, and sometimes even feels similar to, real hardwood actually the same as solid wood planks? No. Is it even more galling (pun intended) when the price charged for the MDF is the same as the real wood? Yes.

Alternate example, six and four cylinder cars with sporty aspirations that pump V8 noises through the stereo into the cabin. Are those really just the same as an actual performance V8 of the same vintage?

Fake frames are bad.
I respectfully disagree.

Metaphors are great and all, but they can be spun to support precisely the opposite argument. For example solid metal furniture being far more durable than real hardwood. Or six cylinder turbos that make substantially more power than their V8 NA brethren and are, for all practical purposes, better.

Frame generation and upscaling are perfectly viable technologies, particularly at the high resolutions that these GPUs target. It's just a different technique to get to a similar (or superior) end result. After all, let's all be clear that rasterization is "fakery" as well, if you consider ray tracing to be the goal post. And if we've all been happy with rasterization "fakery" for decades now, why are we suddenly offended by AI frame generation?
 
I respectfully disagree.

Metaphors are great and all, but they can be spun to support precisely the opposite argument. For example solid metal furniture being far more durable than real hardwood. Or six cylinder turbos that make substantially more power than their V8 NA brethren and are, for all practical purposes, better.

Frame generation and upscaling are perfectly viable technologies, particularly at the high resolutions that these GPUs target. It's just a different technique to get to a similar (or superior) end result. After all, let's all be clear that rasterization is "fakery" as well, if you consider ray tracing to be the goal post. And if we've all been happy with rasterization "fakery" for decades now, why are we suddenly offended by AI frame generation?
Yup, all the pixels on the screen are mathematically calculated either way. The math being used is just different.
 
Again, emphasis, as long as you can plug something into the 3rd slot.

Let me put it this way. Today, to get a 2 slot card, we'd need someone claiming a "one slot" solution.

And with the wattages required, no vapor chamber or liquid metal is going to be really effective/enough, IMHO. So, even if it's "compact", it's gotta breathe. I'd wait before getting too excited. If "apparent" 2-slot cards start appearing en masse from the 3rd parties... then I'd say "this seems real".

I don't care if it's 5 slot. I'm just saying, with my own eyes, I can see it takes up 2 slots, not 2.1 or 2.5 or whatever. Where the fans get their air has never been part of that definition; we dealt with this problem extensively back in the CrossFire/SLI days.
 
I'm still rocking my 3080ti, depending upon reviews the 5080 seems like a decent upgrade for me...maybe wait on the ti models unless the 5090's are easy to get...LOL
I doubt we will see a Ti model, but I would be interested in a 5080 Super 24GB to upgrade my 3080 Ti.
 
Frame generation and upscaling are perfectly viable technologies, particularly at the high resolutions that these GPUs target. It's just a different technique to get to a similar (or superior) end result. After all, let's all be clear that rasterization is "fakery" as well, if you consider ray tracing to be the goal post. And if we've all been happy with rasterization "fakery" for decades now, why are we suddenly offended by AI frame generation?

"to get to a similar (or superior) end result."

Aye, but there's the rub.

If you are already at 90 fps, adding frame gen to max out your 144 Hz monitor looks very smooth and doesn't suffer playability-wise. It may not add much, but it doesn't hurt.

But try using frame gen to bring a 45fps framerate up to 90fps. You aren't going to like it very much.

Sure it looks smooth to a bystander, but as soon as you grab that mouse it feels just as laggy and terrible as playing at 45fps does.
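To put rough numbers on that feeling, here's a simplified sketch. It assumes input latency tracks the rendered frame rate; real frame-gen pipelines add buffering on top, so if anything this understates the penalty:

```python
# Simplified illustration of why generated frames don't fix input latency.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for base_fps in (45, 90):
    displayed_fps = base_fps * 2   # 2x frame generation
    print(f"rendered {base_fps} fps -> displayed {displayed_fps} fps, "
          f"but input is still sampled every ~{frame_time_ms(base_fps):.1f} ms")
# rendered 45 fps -> displayed 90 fps, but input is still sampled every ~22.2 ms
# rendered 90 fps -> displayed 180 fps, but input is still sampled every ~11.1 ms
```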

If frame gen and upscaling could deliver the goods it would be one thing, but they very much can't. Upscaling is better than frame gen at lower frame rates: you get a tiny IQ hit in exchange for a pretty substantial performance increase, but there is an IQ hit.

But frame gen is really rather useless unless you already have an acceptable frame rate to begin with, and if that is the case, the motivation to even bother with it is relatively limited, because the benefits are so tiny. Going from DLSS3 to DLSS4 with multiframe seems even more useless.
 
It's hard to believe they're not going to have more issues with this new version of frame gen as they can't even fix the issues they have right now in the current version. There are still several games where it feels or acts weird when you turn it on. FFS it is broken as hell in Indiana Jones right now with tons of issues and no fix in sight. Even DLSS and DLAA still have issues in lots of games with laughable amounts of ghosting and trailing. In some games you spend more time googling how to fix issues than actually enjoying the game.
 
I can't believe it; the majority of you must have selective memory issues, because the 5090 is literally a TITAN masquerading as a gaming card. PERIOD. Meanwhile, Jen Hsun is laughing at you guys while upping the price, and some of you actually defend this price! Worst part is, you're doing it for free.

Between the CUDA core count, VRAM, and tensor cores, literally none of the gamers are going to see the benefits of this. I say this as an archviz artist myself who dabbles in both worlds, so it must be frustrating for all the fellow artists in the gaming industry to see all the hard work they put into textures, lighting, and modeling perverted by AI marketing bullshit and interpolated garbage. It's reductive and lacks innovation. I feel like I've seen this exact bullshit from 5 years ago only to be regurgitated again. Seriously, it feels like Idiocracy more and more. Feel free to downvote me, but this is the sad truth: gamers are no longer the customer base, and haven't been for a long time now.
 
It's hard to believe they're not going to have more issues with this new version of frame gen as they can't even fix the issues they have right now in the current version. There are still several games where it feels or acts weird when you turn it on. FFS it is broken as hell in Indiana Jones right now with tons of issues and no fix in sight. Even DLSS and DLAA still have issues in lots of games with laughable amounts of ghosting and trailing. In some games you spend more time googling how to fix issues than actually enjoying the game.
The Vulkan libraries for DLSS are very interesting, to say the least....
The "new version" is just a major revision of the old version. I mean, the current build of DLSS is at 3.8.10 as of Nov. 13th; it just has enough changes to warrant a 4.0.0 rather than a 3.9.0.

Many games with DLSS can get significant improvements from replacing their shipped DLSS.dll file with the new one.
The fact you can just swap out the DLL file to "upgrade" the internal DLSS version the games use is one of the features I do like about it.
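For anyone who hasn't tried the swap, it's literally just a file copy. A hypothetical sketch (the game path and download location below are placeholders; nvngx_dlss.dll is the file DLSS-enabled games typically ship):

```python
# Hypothetical sketch of swapping a game's shipped DLSS DLL for a newer build.
# The game directory and the source of the new DLL are placeholders.
import shutil
from pathlib import Path

game_dir = Path(r"C:\Games\SomeGame")            # placeholder install directory
new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")   # newer DLSS build you downloaded

for old_dll in game_dir.rglob("nvngx_dlss.dll"):
    backup = old_dll.with_name(old_dll.name + ".bak")
    if not backup.exists():
        shutil.copy2(old_dll, backup)            # keep the shipped version around
    shutil.copy2(new_dll, old_dll)               # drop in the newer build
    print(f"Replaced {old_dll}")
```

Reverting is just copying the .bak file back, which is part of why the DLL-swap approach is so low-risk.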
 
I doubt we will see a Ti model, but I would be interested in a 5080 Super 24GB to upgrade my 3080 Ti.
I think that's what the Ti model will be, IMO: a 24GB card with maybe a bump in cores, since the gap between the 80 and 90 models is so big.
 
I can't believe it; the majority of you must have selective memory issues, because the 5090 is literally a TITAN masquerading as a gaming card. PERIOD. Meanwhile, Jen Hsun is laughing at you guys while upping the price, and some of you actually defend this price! Worst part is, you're doing it for free.

Between the CUDA core count, VRAM, and tensor cores, literally none of the gamers are going to see the benefits of this. I say this as an archviz artist myself who dabbles in both worlds, so it must be frustrating for all the fellow artists in the gaming industry to see all the hard work they put into textures, lighting, and modeling perverted by AI marketing bullshit and interpolated garbage. It's reductive and lacks innovation. I feel like I've seen this exact bullshit from 5 years ago only to be regurgitated again. Seriously, it feels like Idiocracy more and more. Feel free to downvote me, but this is the sad truth: gamers are no longer the customer base, and haven't been for a long time now.
Who is defending the RTX 5090? Seems like most of this thread is people who are really skeptical about the RTX 5090's actual performance uplift over the 4090. And yes, it can be a prosumer card, but like the 4090, it is very much a gaming GPU with prosumer features on it. It offered a massive uplift in performance compared to the 3080 and 3090, and when you have a high-refresh-rate 4K monitor, especially OLED, it is very noticeable.
 
It'll be interesting to see how many people acting like they'd never get one in a million years will be posting that they bought one in a month or two. Happened with both the 3090 and 4090.
I'm one of those people that said my 3080 was good, but then the 4090 turned out to be an absolute monster, so I bought one. If the 5090 turns out the same, I will buy one.
 
They mentioned memory efficiency and CPU overhead improvements for frame generation for both the 4000 and 5000 series.

The one slide they showed was something like 0.5GB saved, which won't be enough for some games/settings. Unless I missed something else, it doesn't seem like a whole lot of savings. Better than nothing.
 
It'll be interesting to see how many people acting like they'd never get one in a million years will be posting that they bought one in a month or two. Happened with both the 3090 and 4090.
You getting one?
 
The Vulkan libraries for DLSS are very interesting, to say the least....
The "new version" is just a major revision of the old version. I mean, the current build of DLSS is at 3.8.10 as of Nov. 13th; it just has enough changes to warrant a 4.0.0 rather than a 3.9.0.

Many games with DLSS can get significant improvements from replacing their shipped DLSS.dll file with the new one.
The fact you can just swap out the DLL file to "upgrade" the internal DLSS version the games use is one of the features I do like about it.
See, that's part of what I'm talking about: I don't want to have to dick around with stuff I shouldn't have to in the first place. I mean, look at Dead Space; those lazy twats never even cared to fix the ghosting when all it took was a simple newer version. The Last of Us ghosting with DLAA still cannot be fixed no matter what version I use; it still has the issue. And in the Avatar game, when I tried to use a newer version that fixed the ghosting, it ended up causing other issues after a few minutes of playing that were fixed when I went back to the old dll. I mean, I could go on and on, but as I said, I'm just tired of having to go research what issues can and cannot be fixed.
 
See, that's part of what I'm talking about: I don't want to have to dick around with stuff I shouldn't have to in the first place. I mean, look at Dead Space; those lazy twats never even cared to fix the ghosting when all it took was a simple newer version. The Last of Us ghosting with DLAA still cannot be fixed no matter what version I use; it still has the issue. And in the Avatar game, when I tried to use a newer version that fixed the ghosting, it ended up causing other issues after a few minutes of playing that were fixed when I went back to the old dll. I mean, I could go on and on, but as I said, I'm just tired of having to go research what issues can and cannot be fixed.
But at least with DLSS we can do those things; it's not an option with FSR or XeSS.
Sadly, like all tech implementations, many developers don't take the time to properly ensure they have it working correctly. There are lots of examples where they aren't passing the motion vectors, which greatly improve the performance and results, and instead just rely on the motion analysis, which, while functional for both DLSS and FSR, ultimately ends up with a bad result; they then use other DLAA/TAA tricks elsewhere to mask it, which just makes everything look soft and fuzzy.

Bad devs doing bad jobs always make what could be good things suck.

This problem is likely why Nvidia is working so very closely with Epic to get better integration into UE5 for DLSS to further take the burden off the developers, because they keep doing hatchet jobs and it only makes Nvidia look bad.
 