RTX 5090 - $2000 - 2 Slot Design - available on Jan. 30 - 12VHPWR still

erek

It appears to only be 2 slots wide... and they're continuing with 12VHPWR??

Required Power Connectors: 4x PCIe 8-pin cables (adapter in box) OR 1x 600 W PCIe Gen 5 cable

"Availability
For desktop users, the GeForce RTX 5090 GPU with 3,352 AI TOPS and the GeForce RTX 5080 GPU with 1,801 AI TOPS will be available on Jan. 30 at $1,999 and $999, respectively.

The GeForce RTX 5070 Ti GPU with 1,406 AI TOPS and GeForce RTX 5070 GPU with 988 AI TOPS will be available starting in February at $749 and $549, respectively.

The NVIDIA Founders Editions of the GeForce RTX 5090, RTX 5080 and RTX 5070 GPUs will be available directly from nvidia.com and select retailers worldwide."

Source: https://www.twitch.tv/NVIDIA
 
At least these prices are a lot more reasonable than what we were expecting. Still high, but not egregious. Gonna nab a 5090 when I can get my hands on one.
 
WTF is an AI TOP?

I mean, I expected them to go AI crazy, as that is what they do these days, but I don't give a rat's ass about the card's AI capabilities. I don't want them.

Dual slot only makes me suspicious that this won't be a very big performance leap over last gen.

If they required that much power and heat to make the 4090 work, there is no way dual slot is going to cut it in a card that is supposed to be faster than the 4090.

Yes, next-gen fab process and all that, but the difference between TSMC's 4N process and the 4NP this is supposed to use is minor at best. A few percent. I'm not expecting that they will be able to radically improve perf/watt to the extent that this works.

Which probably means this is some "replace even more real performance with AI generative bullshit" generation, and if that is the case, I'm out. I'll keep my 4090 indefinitely. No more fake pixels and frames.
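Quick napkin math on the perf/watt point, treating performance as roughly perf-per-watt times watts. The ~5% efficiency figure and the 30% uplift target below are assumptions for illustration only; 450 W is the 4090's rated board power.

```python
# If raster performance scales roughly as (perf per watt) x (watts),
# a small efficiency gain means most of any uplift has to come from more power.
# The efficiency gain and the performance target are assumptions for illustration.

base_power_w = 450          # 4090 rated total board power
efficiency_gain = 1.05      # assume ~5% better perf/W from 4N -> 4NP plus tweaks
perf_target = 1.30          # suppose you want 30% more raster than a 4090

required_power_w = base_power_w * perf_target / efficiency_gain
print(f"power needed for +30%: ~{required_power_w:.0f} W")   # ~557 W
```

Which lands in the same neighborhood as the ~575 W board power Nvidia lists for the 5090, so a big raster jump without a lot more watts looks hard to come by.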
 

Yup. DLSS 4 has Multi Frame Generation, and it's 5000-series exclusive. So they are pushing the AI frame gen hard. The 5090 will obviously be faster in raster, but not by nearly as much as these numbers they are spouting. So definitely take it with a grain of salt.
 
He said the 5070 is equivalent to 4090 series performance to start the lineup with
 
RTX 5070ti at $750 seems like a decent card. If the leaked specs are right, it shouldn't be that much slower than a 5080. For $250 less, might be a good value seeing that the 5080 is also 16GB. I know the price of the 5090 is a lot higher than the 5080, but seems like they are wanting to push people to buy the 5090 if the 5070ti is not enough performance. I assume the 5080 will be ~20% faster than the 5070ti, but we'll have to wait and see what benchmarks show.

He said the 5070 is equivalent to 4090 series performance to start the lineup with

Doubt it, maybe with ray tracing and DLSS. For games that don't make heavy use of ray tracing, I assume any improvements in ray tracing performance will be negated. Also, it has less VRAM, which can be a problem when using frame generation, which is typically needed for good ray tracing performance. So IMO it can't really match 4090 performance.
 
https://www.nvidia.com/en-us/geforce/graphics-cards/50-series/

I'm just going to say it... I'm not trying to be anti-Nvidia here.
The 5070 isn't impressive.
We should wait for real reviews, obviously, but based on NV marketing:
Around 20% faster than a 4070 without DLSS, based on the Far Cry bump.
Around the same 20% faster with DLSS 3, based on the Plague Tale number.
Everything else they show... also 20% faster, but they posted those numbers with DLSS 4.
"2560x1440, Max Settings. DLSS SR (Quality) and DLSS RR on 40 Series and 50 Series; FG on 40 Series, MFG (4X Mode) on 50 Series. A Plague Tale: Requiem only supports DLSS 3. CPU is 9800X3D for games, 14900K for apps."

We'll have to see what 3 fake frames for every real 1 actually looks like in practice. It's not a "5070 = 4090" upgrade. lol
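For what it's worth, here's roughly how the headline multipliers stack. Every number below is a made-up placeholder; the point is only that the multipliers compound on top of whatever small raster gain is actually there.

```python
# How DLSS-style multipliers compound on top of a modest raster gain.
# All inputs are hypothetical placeholders, not NVIDIA's numbers.

base_fps            = 30     # old card, native res, no upscaling or frame gen
raster_uplift       = 1.20   # assume ~20% more raw raster, per the Far Cry-style bump
upscaling_speedup   = 2.0    # assume DLSS SR renders at lower res for ~2x frame rate
frames_per_rendered = 1 + 3  # MFG: up to 3 AI-generated frames per rendered frame

displayed_fps = base_fps * raster_uplift * upscaling_speedup * frames_per_rendered
print(f"{displayed_fps:.0f} fps displayed, "
      f"{displayed_fps / base_fps:.1f}x the old native number")  # 288 fps, 9.6x
```

Only the 1.2x of that is actual rendering; the rest is reconstruction and interpolation, which is why the raster-only lines in the charts are the ones worth watching.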
 
RTX 5070ti at $750 seems like a decent card. If the leaked specs are right, it shouldn't be that much slower than a 5080. For $250 less, might be a good value seeing that the 5080 is also 16GB. I know the price of the 5090 is a lot higher than the 5080, but seems like they are wanting to push people to buy the 5090 if the 5070ti is not enough performance. I assume the 5080 will be ~20% faster than the 5070ti, but we'll have to wait and see what benchmarks show.

Specs are on their website.
 
Specs are on their website.

For easy reference:

[Spec comparison table screenshot]


As for the multi frame DLSS, how hard will this hit VRAM? Because frame gen is already a VRAM hog, and one has to question how useful it will be in practice on a 5070 with 12GB of VRAM. Will GDDR7 make a difference here?
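As a crude floor, just holding the extra frames is the cheap part. The numbers below only count the frame buffers themselves, so whatever the rest of the frame-gen working set (motion vectors, flow data, model buffers) needs sits on top of them.

```python
# Floor estimate for buffering extra full frames at 4K.
# Real frame-gen VRAM overhead (flow fields, model buffers, etc.) comes on top of this.

width, height   = 3840, 2160
bytes_per_pixel = 8          # assume an FP16 RGBA intermediate (4 channels x 2 bytes)
extra_frames    = 3          # MFG generates up to 3 frames per rendered frame

frame_mb = width * height * bytes_per_pixel / 2**20
print(f"one 4K FP16 frame: {frame_mb:.0f} MB")                              # ~63 MB
print(f"{extra_frames} generated frames: {extra_frames * frame_mb:.0f} MB")  # ~190 MB
```

So the frame buffers themselves are small next to 12GB; the real question is how much the rest of the DLSS 4 pipeline adds on top, since that is what made frame gen a VRAM hog in the first place.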

The 5070 seems a bit underwhelming. The CUDA core count is not much higher than the 4070's, and less than the 4070 Super's. Yes, I know the two aren't directly comparable, but it still seems like a small increase.

Almost feels like they want to push 5070ti & 5090 sales over 5070 and 5080.
 
He said the 5070 is equivalent to 4090 series performance to start the lineup with

It's probably the same bullshit as when they first introduced DLSS 3 and were comparing its frame-gen performance to the old cards that couldn't use it.

I expect the same now. Comparing the new even more fake pixels and frames against the old more limited fake pixels and frames.

I'll only agree to call it a performance increase if the performance actually increases, native res to native res.

Nvidia has talented engineers and all, but they are not talented enough to violate the laws of physics.

You can't do more with less. Perf/watt is unlikely to have seen more than a 5-10% improvement generation over generation. Seeing the wimpy cooler on the 5090, I bet it's actually a decrease in real performance compared to the 4090 when you turn all the fake AI DLSS garbage off.
 
Yeah, I don't believe it for a second.

I'm guessing they are including AI fakery in that performance comparison.
He straight up followed it up with "impossible without artificial intelligence, 4 TFLOPS of AI tensor cores"... it's all DLSS. Don't go fire-selling 4090s, people... you'll be kicking yourself... lol. All marketing and DLSS.
"NVIDIA DLSS 4 Boosts Performance by Up to 8x
DLSS 4 debuts Multi Frame Generation to boost frame rates by using AI to generate up to three frames per rendered frame. It works in unison with the suite of DLSS technologies to increase performance by up to 8x over traditional rendering, while maintaining responsiveness with NVIDIA Reflex technology.

DLSS 4 also introduces the graphics industry's first real-time application of the transformer model architecture. Transformer-based DLSS Ray Reconstruction and Super Resolution models use 2x more parameters and 4x more compute to provide greater stability, reduced ghosting, higher details and enhanced anti-aliasing in game scenes. DLSS 4 will be supported on GeForce RTX 50 Series GPUs in over 75 games and applications the day of launch.

NVIDIA Reflex 2 introduces Frame Warp, an innovative technique to reduce latency in games by updating a rendered frame based on the latest mouse input just before it is sent to the display. Reflex 2 can reduce latency by up to 75%. This gives gamers a competitive edge in multiplayer games and makes single-player titles more responsive.

Blackwell Brings AI to Shaders
Twenty-five years ago, NVIDIA introduced GeForce 3 and programmable shaders, which set the stage for two decades of graphics innovation, from pixel shading to compute shading to real-time ray tracing. Alongside GeForce RTX 50 Series GPUs, NVIDIA is introducing RTX Neural Shaders, which brings small AI networks into programmable shaders, unlocking film-quality materials, lighting and more in real-time games.

Rendering game characters is one of the most challenging tasks in real-time graphics, as people are prone to notice the smallest errors or artifacts in digital humans. RTX Neural Faces takes a simple rasterized face and 3D pose data as input, and uses generative AI to render a temporally stable, high-quality digital face in real time.

RTX Neural Faces is complemented by new RTX technologies for ray-traced hair and skin. Along with the new RTX Mega Geometry, which enables up to 100x more ray-traced triangles in a scene, these advancements are poised to deliver a massive leap in realism for game characters and environments.

The power of neural rendering, DLSS 4 and the new DLSS transformer model is showcased on GeForce RTX 50 Series GPUs with Zorah, a groundbreaking new technology demo from NVIDIA."


https://www.techpowerup.com/330612/...eries-opens-new-world-of-ai-computer-graphics
 
Yeah, I don't believe it for a second.

I'm guessing they are including AI fakery in that performance comparison.

Skim the page for them. Looks like in raster it will be closer to a 30% performance increase. Ray tracing is typically where frame rates drop, and it is becoming more common, though a few questions remain:

- Are the downsides to frame gen still here? Will this new frame gen make the issues worse?
- Will the 5070 even have the VRAM for this new frame gen?

Erek, the 12V-2x6 should be what's being used. They switched to that a long time ago.
 
Same here. I'm still on a 3090. To me the only surprise is the pricing on the 5070Ti and 5070. They're $50 lower than last gen.
Based on the performance numbers they posted... I think people might feel that is correct when the benchmarks come.
I mean, based on the Far Cry and Plague Tale numbers they posted in their graph, it looks like it's 20% faster. If we assume Blackwell is 30-40% faster with RT, that means the actual raster improvement is very little, or maybe even flat.
 
"NVIDIA DLSS 4 Boosts Performance by Up to 8x
DLSS 4 debuts Multi Frame Generation to boost frame rates by using AI to generate up to three frames per rendered frame. It works in unison with the suite of DLSS technologies to increase performance by up to 8x over traditional rendering, while maintaining responsiveness with NVIDIA Reflex technology.

DLSS 4 also introduces the graphics industry's first real-time application of the transformer model architecture. Transformer-based DLSS Ray Reconstruction and Super Resolution models use 2x more parameters and 4x more compute to provide greater stability, reduced ghosting, higher details and enhanced anti-aliasing in game scenes. DLSS 4 will be supported on GeForce RTX 50 Series GPUs in over 75 games and applications the day of launch.

NVIDIA Reflex 2 introduces Frame Warp, an innovative technique to reduce latency in games by updating a rendered frame based on the latest mouse input just before it is sent to the display. Reflex 2 can reduce latency by up to 75%. This gives gamers a competitive edge in multiplayer games and makes single-player titles more responsive.

Blackwell Brings AI to Shaders
Twenty-five years ago, NVIDIA introduced GeForce 3 and programmable shaders, which set the stage for two decades of graphics innovation, from pixel shading to compute shading to real-time ray tracing. Alongside GeForce RTX 50 Series GPUs, NVIDIA is introducing RTX Neural Shaders, which brings small AI networks into programmable shaders, unlocking film-quality materials, lighting and more in real-time games.

Rendering game characters is one of the most challenging tasks in real-time graphics, as people are prone to notice the smallest errors or artifacts in digital humans. RTX Neural Faces takes a simple rasterized face and 3D pose data as input, and uses generative AI to render a temporally stable, high-quality digital face in real time.

RTX Neural Faces is complemented by new RTX technologies for ray-traced hair and skin. Along with the new RTX Mega Geometry, which enables up to 100x more ray-traced triangles in a scene, these advancements are poised to deliver a massive leap in realism for game characters and environments.

The power of neural rendering, DLSS 4 and the new DLSS transformer model is showcased on GeForce RTX 50 Series GPUs with Zorah, a groundbreaking new technology demo from NVIDIA."


https://www.techpowerup.com/330612/...eries-opens-new-world-of-ai-computer-graphics

In other words, more shitty fake pixels and fake frames.

Nvidia can take everything AI and everything DLSS and shove it

Every single frame should be real, naturally rendered at 100% resolution. It is the only thing that is acceptable.
 
Skim the page for them. Looks like in raster it will be closer to a 30% performance increase. Ray tracing is typically where frame rates drop, and it is becoming more common, though a few questions remain:

- Are the downsides to frame gen still here? Will this new frame gen make the issues worse?
- Will the 5070 even have the VRAM for this new frame gen?

Erek, the 12V-2x6 should be what's being used. They switched to that a long time ago.
The 5070 is 12GB, the 5070 Ti is 16GB... 12GB seems dangerous with games starting to push 10GB cards into junk status. 16GB is probably safe for a while.

I don't envy the reviewers this gen. They are going to be looking at FSR 4, DLSS 4, Frame Gen, and Multi Frame Gen. I hope some of the better reviewers go out of their way to find games outside the zero-day driver-optimization titles that can somehow still show off all the BS software tech.

Today was pretty unimpressive all around: AMD playing scared, Nvidia doing the absolute BS 5070 = 4090 slide. lol
 
From Nvidia's website. Notice the Far Cry 6 graph... that's the actual performance uplift without all the new stuff. That's not very impressive, IMO.

Really, it looks to me like, if we assume the RT cores are quite a bit faster, raster performance might even be a regression. (I'm sure it isn't, it's just not impressive.)
 
WTF is an AI TOP?
Someone correct my crackhead answer if it's wrong, as the following is a guess:

All data is binary and comes in various sizes: char, int, float, double, etc. (sorry, I know you all know this).

SIMD is single instruction, multiple data, as I'm sure everyone here knows. But on the metal side, what does that mean?

It means you take a register that holds multiple values of a data type, and the arithmetic logic unit (ALU) or floating point unit (FPU) will do the same op across all the values in that register in a single cycle. So if you had two ints in a pair of SIMD registers you can add them both in one step. More words per SIMD register means more math per cycle.

Big registers take a LOT of space and make a lot of heat. If your code only needs the precision of a byte, then a 64-bit register could fit 8 of those in and do SIMD on that.

So what is an AI TOP? They've dropped the word size down to 2 and 4 bits. So a 4-bit word can pack 16 values into a 64-bit register, you get it. Not sure how big GPU SIMD registers are, but they're big.

So trillions of operations per second on big types like 32-bit or 64-bit is significantly different than on a nibble or less. Thus, I think an AI TOP is the performance on 4-bit or 2-bit types. And those numbers would scale down linearly as the word size increases, since that cuts the number of things you can pack in a register.
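To put toy numbers on that scaling (the register width, unit count and clock below are invented; only the ratios matter):

```python
# Toy model: trillions of ops per second (TOPS) vs element width.
# Register width, unit count and clock are made up; only the scaling matters.

register_bits = 1024      # pretend SIMD/tensor lane width
units         = 512       # pretend number of parallel execution units
clock_hz      = 2.5e9

for element_bits in (32, 16, 8, 4):
    values_per_register = register_bits // element_bits
    tops = values_per_register * units * clock_hz / 1e12
    print(f"{element_bits:>2}-bit elements: {values_per_register:3d} per register, {tops:6.1f} TOPS")

# Every halving of the element width doubles the headline TOPS number,
# which is why quoting low-precision "AI TOPS" makes the figure look enormous.
```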

Again, just my terrible guess.
 
From Nvidia's website. Notice the Far Cry 6 graph... that's the actual performance uplift without all the new stuff. That's not very impressive, IMO.

How different is the node for the 5090 from the 4090? If it's purely architectural gains for the most part, it's kind of impressive. Still though, if I owned a 4090 I wouldn't be in a hurry to plunk down 2 grand.
 
From Nvidia's website. Notice the Far Cry 6 graph... that's the actual performance uplift without all the new stuff. That's not very impressive, IMO.

Really, it looks to me like, if we assume the RT cores are quite a bit faster, raster performance might even be a regression. (I'm sure it isn't, it's just not impressive.)
Yeah ChadD, we're still not seeing a raw raster performance comparison in this.
 
How different is the node for the 5090 from the 4090? If it's purely architectural gains for the most part, it's kind of impressive. Still though, if I owned a 4090 I wouldn't be in a hurry to plunk down 2 grand.
That's where I'm leaning, too. I was thinking "$2000 isn't as bad as I thought" but then remembered that the 4090 was like 70% faster than the 3090 in rasterization, and much faster in RT/AI. I don't think we're getting that this time.
 
That's where I'm leaning, too. I was thinking "$2000 isn't as bad as I thought" but then remembered that the 4090 was like 70% faster than the 3090 in rasterization, and much faster in RT/AI. I don't think we're getting that this time.
In raw performance, AKA comparing identical settings and using no frame gen, the 5090 is about 20-30% faster than the 4090, by Nvidia's own graphs.
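A quick perf-per-dollar sanity check on those figures; the launch MSRPs are from memory and the uplift numbers are the rough estimates being tossed around in this thread, so treat it as napkin math:

```python
# Napkin perf-per-dollar using launch MSRPs and the uplift guesses above.
# None of this is benchmarked; the relative-performance figures are estimates.

cards = {
    # name: (launch MSRP in USD, raster performance relative to a 3090 = 1.0)
    "3090": (1499, 1.00),
    "4090": (1599, 1.70),          # ~70% over the 3090, per the post above
    "5090": (1999, 1.70 * 1.25),   # assume ~25% over the 4090 (middle of 20-30%)
}

for name, (price, perf) in cards.items():
    print(f"{name}: {perf:.2f}x 3090 raster, {perf / price * 1000:.2f} per $1000")
```

If those estimates hold, raster per dollar barely moves from the 4090 to the 5090, which fits the "not a big leap" read.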
 
In raw performance, AKA comparing identical settings and using no frame gen, the 5090 is about 20-30% faster than the 4090, by Nvidia's own graphs.
I talked about it here:
https://hardforum.com/threads/rtx-5...d-overclocking-thread.2038962/post-1046029115

TL;DR, I think with this gen they've given up almost entirely on just raw perf improvements, they're just going all in on AI since that's where their R&D is anyway and also because they probably can't figure out a way to keep up with game demands just using raw HW perf increase anyway. Most of this gen is about increased AI hardware, not raster hardware.
 
I talked about it here:
https://hardforum.com/threads/rtx-5...d-overclocking-thread.2038962/post-1046029115

TL;DR, I think with this gen they've given up almost entirely on just raw perf improvements, they're just going all in on AI since that's where their R&D is anyway and also because they probably can't figure out a way to keep up with game demands just using raw HW perf increase anyway.

Games aren't getting more demanding for any good reason, though. Publishers are getting more demanding and developers are cutting corners everywhere to get games out the door as fast as possible.

Add on the fact that Nvidia is doing everything they can to make their GPUs as 'minimum viable product' as they can to prevent potential customers using them as discounted AI products, and you have GPUs progressing as slowly as possible and working harder to produce lower-quality images.
 
This concerns me; I wish they'd do what Battlemage did, but with 4x full-sized 8-pinners.

[Photo: power connector on a 4070 Super FE, showing the "++H" marking]


See the ++H? That indicates it has the revised 12V-2x6 connector. They started doing this with the 4080s, and if I recall, by the time the 4070s came out (though many used the older 8-pin), all new cards were shipping with it. The above is from a 4070 Super FE. I don't see Nvidia going back to the older 12VHPWR for obvious reasons.


As for the raster performance, the uplift seems like it will be quite low. We'll have to wait for benchmarks. Ray tracing performance might be a decent jump, as might frame generation performance, but not all games use those. And frame gen has downsides that native rendering does not.
 
So this bears out that the 5080, by core count, really is what many people expected: a "70"-class chip based on what used to be Nvidia's naming structure, with the 5070 Ti a further cut-down "70", like when certain 60 Ti or 60 Super class cards were cut-down 70s. The 5060 is barely a "50" class with its tiny piece of 205 silicon.

Pricing is not quite as gouging on the 5090 as many of us were dreading, but still bad. I'm really curious to see, well, hear, how loud the 5090's two fans will be at full load. I'm thinking worse than the Radeon 290X, or Der8auer's new video with a 4090 being cooled by a 2x 40mm radiator. Further, like at least one other poster, I am concerned about running 500-ish watts through Nvidia's badly designed 12V cable.

Pricing on the 5080 is also on the lower end of the gouging range, so I'll be more than a little surprised if it's better than 90% of the way to a 4090 in raster. It needs to be equal to a 4090D or less to ship to China without running afoul of the existing export restrictions, though a 5080D or 5090D will probably have something about as basic as a software toggle to unlock full performance once in mainland China.

As far as the 70 Ti and 70 cards go, my interest is solely based on price-to-performance compared to last gen's lackluster 40-series "70" range cards and the 7900 XT at 2024's Prime Day / Black Friday pricing (whichever was lower).

Now for AMD to quickly re-adjust the price of the badly named 9070 XT, because unless reviewer testing shows the 5070 to be merely on par with the 4070 (like the 4060 and 4060 Ti were to their 30-series equivalents), the 9070 XT is likely DOA at $499 or more.
 