RTX 3xxx performance speculation

With a conventional open fan card, the hot GPU air is mostly just recirculated inside the box. It's eventually going to heat up everything, including the CPU.

You need a better airflow design if you're just leaving GPU exhaust to recirculate in the case.
 
I'm really curious about the new PCB layout and its high density and how it will affect those folks who use a system like the Kraken G12 with an AIO cooler. My guess is that with everything stuffed so close together there may be issues.
 
And, we actually built our own equipment to test transient power response. I don't think any other review site has ever covered that, and that is what ASUS is focusing on. You can ask @[spectre]; he knows a lot more about this than I do.

Yeah. We had to go the route of building custom equipment because loads aren't static, and even active load testers cannot mimic those transient spikes of power draw. It was one of those innovations we brought to the table years ago that continues to be relevant today. For a while, power draws were not increasing as much as in the past. However, with what look to be some very power-hungry new components, those kinds of issues are likely to crop up again.
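A rough sketch of why that matters, with invented numbers rather than anything from our test gear: a meter that only reports the average never sees the millisecond-scale spikes a PSU actually has to ride through.

```python
# Toy example: a rail that averages ~220 W but spikes to ~450 W for 5 ms.
# The mean looks harmless; the peak is what trips over-current protection.
# All numbers are made up purely for illustration.
import numpy as np

t = np.arange(0, 1.0, 0.001)        # 1 s of samples at 1 kHz
power = np.full_like(t, 220.0)      # steady ~220 W baseline
power[500:505] += 230.0             # 5 ms transient spike to ~450 W

print(f"mean draw: {power.mean():.0f} W")   # ~221 W
print(f"peak draw: {power.max():.0f} W")    # 450 W
```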
 
Might be problematic for CPU temperature performance, as I'm still using old-fashioned air cooling (Noctua NH-U12S).

Good for the card temp but bad for my CPU setup. Gotta see how other AIBs are designing their cooling systems.

View attachment 275446

That's gotta be better than a lot of AIB designs. It dumps a lot of the heat outside the case. I actually really like the design. They also say it runs way cooler and quieter than the typical cooler... I'm inclined to believe it.

For me the computer I am sticking this in is an open case in the basement. I get to chuckle at heat concerns.
 
Yeah. We had to go the route of building custom equipment because loads aren't static, and even active load testers cannot mimic those transient spikes of power draw. It was one of those innovations we brought to the table years ago that continues to be relevant today. For a while, power draws were not increasing as much as in the past. However, with what look to be some very power-hungry new components, those kinds of issues are likely to crop up again.

Did you notice if parts were particularly sensitive to voltage changes?

I know in my S4, as well as my parents' Benz, that if the 12V battery even starts to go due to age or other problems, all sorts of systems in the vehicle start to behave erratically. Wondering if the same would be true for PC components?
 
How much does the game asset decompression support mitigate GPU RAM limits? At first I was uncertain about the 3080's 10 GB buffer and was wondering whether to rationalize the 3090's far higher price for a comparatively small performance increase. But it seems to me that if assets can be loaded onto the GPU much more quickly than before, that might ameliorate RAM exhaustion.
 
How much does the game asset decompression support mitigate GPU RAM limits? At first I was uncertain about the 3080's 10 GB buffer and was wondering whether to rationalize the 3090's far higher price for a comparatively small performance increase. But it seems to me that if assets can be loaded onto the GPU much more quickly than before, that might ameliorate RAM exhaustion.
Are you talking about RTX IO / DirectStorage or something else?
 
Did you notice if parts were particularly sensitive to voltage changes?

I know in my S4, as well as my parents' Benz, that if the 12V battery even starts to go due to age or other problems, all sorts of systems in the vehicle start to behave erratically. Wondering if the same would be true for PC components?

Of course certain parts are more sensitive than others. It's in the ATX 12V specification: ±5% on different rails means a different absolute range for each one. 5% of 12V is not the same as 5% of 3.3V.
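To put rough numbers on that (just the ±5% arithmetic from the post above, not the spec's actual tables):

```python
# The same +/-5% tolerance is a very different absolute window on each rail.
# Percentage and rails taken from the discussion above, not the spec itself.
for nominal in (12.0, 5.0, 3.3):
    lo, hi = nominal * 0.95, nominal * 1.05
    print(f"{nominal:>4} V rail: {lo:.3f} V to {hi:.3f} V (+/- {nominal * 0.05:.3f} V)")
```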
 
Put up the perf/Watt slide and I will explain to you why you fail physics...
Yes indeed, explain how a 220w card that performs about the same as a 250w card is 90% more efficient. Maybe it is at idle, where the 3070 shuts off its fan and the 2080Ti doesn't :D

As for physics, I took college physics in high school, in tenth grade, and got an A. I ran nuclear power plants and taught several courses at Naval Power School. Glad you know so much about me.

Oh yeah, show me a picture of yourself and I can explain to you how you fail at everything . . . :D just kidding, yeah for real.
 
the RTX 3090 because it may be the first card that truly has the power to realize this giant TV/monitor's potential. Full 10-bit 4:4:4 4K/120 Hz G-SYNC + HDR at close to max settings
So faster frames? Not really going to do anything extra for your TV. No?
 
Yes indeed, explain how a 220w card that performs about the same as a 250w card is 90% more efficient. Maybe it is at idle, where the 3070 shuts off its fan and the 2080Ti doesn't :D

As for physics, I took college physics in high school, in tenth grade, and got an A. I ran nuclear power plants and taught several courses at Naval Power School. Glad you know so much about me.

Oh yeah, show me a picture of yourself and I can explain to you how you fail at everything . . . :D just kidding, yeah for real.

First you make up claims like "serialized raytracing"...and now you don't understand a simple picture:
View attachment 275525


Very telling...

And in case you do not understand:

60 FPS on Turing would require ~240 Watt
60 FPS on Ampere requires ~130 Watt.

It is basic physics/math.
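Taking those two chart numbers at face value, the arithmetic behind the ~1.9x claim is just:

```python
# Reading the graph at its iso-performance point (60 FPS on both cards):
# perf/W ratio = (60 / 130) / (60 / 240) = 240 / 130
turing_w, ampere_w = 240.0, 130.0
print(f"perf/W improvement at 60 FPS: {turing_w / ampere_w:.2f}x")  # ~1.85x
```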
 
Seems my original 40-50% 3080Ti performance increase prediction over 2080Ti wasn't that far off.
FrgMstr should hire me as [H] analyst instead of banning me :D
No 3080Ti (yet), but the 3080 is the current flagship, and the 3090 is a rebranded Titan.
The huge gap in price means there will be a filler card sometime in the future. Looks like 3080Ti/S is coming. Does this mean Big Navi is actually competitive or faster than 3080? That's.... big.
But they are late to the party. Everyone will jump on 3080.

I'm getting both a PS5 and a 3080, unless the 3090 is 50%+ faster than the 3080, in which case I'm very tempted. But I think it would be better to just get the 3080, sell it, and get a 3080Ti.
 
First you make up claims like "serialized raytracing"...and now you don't understand a simple picture:
View attachment 275525

Very telling...

And in case you do not understand:

60 FPS on Turing would require ~240 Watt
60 FPS on Ampere requires ~130 Watt.

It is basic physics/math.

But the performance of the 3070 matches the 2080Ti, and it needs 220w to run vs. the 250w that the 2080Ti sucks up. How does that compute into the marketing slide?
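Taking the board-power numbers at face value and assuming the two cards really do trade blows, the card-level math looks nothing like 1.9x:

```python
# If a 3070 (220 W) roughly ties a 2080Ti (250 W) in frame rate, the
# whole-card perf/W gain is only the power ratio. Assumes equal FPS.
print(f"card-level perf/W gain: {250 / 220:.2f}x")  # ~1.14x
```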
 
First you make up claims like "serialized raytracing"...and now you don't understand a simple picture:
View attachment 275525

Very telling...

And in case you do not understand:

60 FPS on Turing would require ~240 Watt
60 FPS on Ampere requires ~130 Watt.

It is basic physics/math.
I guess you're saying the 3070 and 2080Ti do not follow that graph; at the same performance they are within roughly 10% of each other in power, if the ratings are correct, or maybe Jensen is lying (it would not be his first time).

More importantly, that graph is totally worthless marketing BS that means nothing. What settings, what part of the game, which GPU card, etc.?
 
Anyone else see that RTX on/off Cyberpunk 2077 comparison slider on the GeForce site? Shadowing and reflections both look noticeably better with ray tracing. EDIT: Forgot how I got to it, so no link :(.
 
Anyone else see that RTX on/off Cyberpunk 2077 comparison slider on the GeForce site? Shadowing and reflections both look noticeably better with ray tracing. EDIT: Forgot how I got to it, so no link :(.
Yes, because first gen was just an overpriced tech demo :ROFLMAO:
Where were ya AMD?
 
Yes, because first gen was just an overpriced tech demo :ROFLMAO:
Where were ya AMD?
Eh, it'll take time (years?) to be common, but major titles do seem to be starting to pick it up. Heck, even Call of Duty and World of Warcraft (very soon) have it. I know I'd turn it off for raiding if I played WoW, but that's not most of your time in game anyway.
 
Yes indeed, explain how a 220w card that performs about the same as a 250w card is 90% more efficient. Maybe it is at idle, where the 3070 shuts off its fan and the 2080Ti doesn't :D

As for physics, I took college physics in high school, in tenth grade, and got an A. I ran nuclear power plants and taught several courses at Naval Power School. Glad you know so much about me.

Oh yeah, show me a picture of yourself and I can explain to you how you fail at everything . . . :D just kidding, yeah for real.

You know, we have to wait to see detailed testing and reviews.
Nvidia claimed a 1.9x perf/watt improvement in terms of framerate, but the absolute data-crunching capabilities aren't limited or restricted to a specific fps.
It appears that there has been a large improvement in three different computing components.

Shader cores 2.7x
RT cores 1.7x
Tensor cores 2.7x

Under normal loads, the work a graphics card does is tied to the frames being rendered.
Typically, a GPU is not loaded in such a way that every possible computing resource is crunching at its absolute max; that has long been the case for graphics cards. But now ever more special-function units are being added, and those cores have become extremely powerful at their own special-purpose crunching, taking up more and more die and transistor space.

Let's look at single precision:
13.4 TFLOPS for the RTX 2080Ti
20.4 TFLOPS for the RTX 3070

Now tensor performance:
114 TFLOPS for the RTX 2080Ti
163 TFLOPS for the RTX 3070

Ray tracing (unsure on the 2080Ti):
40 TFLOPS for the 3070

The capabilities in those areas have greatly expanded, and even if in normal (most) cases the GPU load will not be crunching out every specialized TFLOP, the card still needs to be built to support unrealistically high full-load demands.

Then let's talk about memory. The controller and the memory itself account for a large portion of the power requirements. So even if your GPU architecture itself is twice as efficient, unless the memory system is somehow also twice as efficient, the card can't be twice as efficient when looking at total power draw (see the back-of-envelope numbers sketched after this post).

Lastly, although the 3070 is based on the Ampere architecture, it's built with design choices that meet goals specific to the segment they want it to compete in. Ampere, like any architecture, can be pushed to lower efficiency if they want to get more performance.

The point is that the Ampere architecture is more than any one card that happens to exploit it.
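Putting the figures above into a quick back-of-envelope comparison (official board powers, a hypothetical fixed memory-power figure, and no claim of being a measurement):

```python
# Rough numbers quoted in this thread; board powers are the official TDPs.
flops_2080ti, watts_2080ti = 13.4, 250.0   # RTX 2080Ti FP32 TFLOPS, board power
flops_3070,   watts_3070   = 20.4, 220.0   # RTX 3070 FP32 TFLOPS, board power

ratio = (flops_3070 / watts_3070) / (flops_2080ti / watts_2080ti)
print(f"FP32 TFLOPS per watt, 3070 vs 2080Ti: {ratio:.2f}x")   # ~1.73x

# Why memory caps the whole-card gain: suppose (hypothetically) the memory
# and controller eat a fixed 60 W. Even if the GPU core halves its power at
# the same performance, the card as a whole improves far less than 2x.
mem_w = 60.0                               # hypothetical fixed memory power
core_old = watts_2080ti - mem_w            # 190 W for the core
card_new = mem_w + core_old / 2            # 155 W total with a 2x-efficient core
print(f"whole-card perf/W gain with a 2x-efficient core: {watts_2080ti / card_new:.2f}x")  # ~1.61x
```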
 
I guess you're saying the 3070 and 2080Ti do not follow that graph; at the same performance they are within roughly 10% of each other in power, if the ratings are correct, or maybe Jensen is lying (it would not be his first time).

More importantly, that graph is totally worthless marketing BS that means nothing. What settings, what part of the game, which GPU card, etc.?

Your fantasy world is your own 👋
 
I guess you're saying the 3070 and 2080Ti do not follow that graph; at the same performance they are within roughly 10% of each other in power, if the ratings are correct, or maybe Jensen is lying (it would not be his first time).

More importantly, that graph is totally worthless marketing BS that means nothing. What settings, what part of the game, which GPU card, etc.?

Yes, the 90% (1.9x) perf/watt improvement stuff is useless marketing BS, you will see. It's the same as AMD's 50% perf/watt claims about RDNA2.

They are all done in a theoretical case of holding the new part back to the level of the old part under comparison, which you would never actually do. Once you run the new part at its real capability, most of those gains are gone.
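A toy model of why the headline number shrinks once the new part runs at its shipping clocks (all numbers invented; only the shape of the curve matters):

```python
# Crude stand-in for voltage/frequency scaling: power ~ performance cubed.
# Measuring perf/W at the old card's performance level runs the new chip
# slow, on the efficient part of its curve; at shipping clocks the gap shrinks.
def power(perf, k):
    return k * perf ** 3

k_old, k_new = 1.0, 0.5          # pretend the new chip needs half the power
perf_old, perf_new = 1.00, 1.25  # ...and ships clocked for 25% more performance

iso_perf = power(perf_old, k_old) / power(perf_old, k_new)
shipping = (perf_new / power(perf_new, k_new)) / (perf_old / power(perf_old, k_old))
print(f"perf/W gain measured at iso-performance: {iso_perf:.2f}x")   # 2.00x
print(f"perf/W gain at shipping operating points: {shipping:.2f}x")  # ~1.28x
```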
 
RTX. IT’S ON. | The Ultimate Ray Tracing and AI

Ray Tracing in Cyberpunk 2077 looks so good. Honestly before yesterday I wasn't sure we would be able to run RTX@4K in Cyberpunk but after seeing the 3090 and knowing how well DLSS 2.0 works, I think it is a realistic expectation this generation.
 
I have said this 1000x and I am saying it again for myself because I am irrationally aroused by the 3090.

Whenever you buy hardware based on synthetic benchmarks, relative performance to previous generations, or features alone, you are probably going to make an expensive mistake.

I have been doing this since 1998. The question is:
(1) What is the most demanding game I will be playing;
(2) What resolution will I use;
(3) What is my minimum frame rate requirement;
(4) What features/settings do I want/need;
(5) Does the potential card hit all of the above with room to spare?

So, for example, if you play Battlefield, we know Battlefield 6 is coming. We know GTA 6 is coming. We know certain games tend to push hardware (anything outside, with foliage).

Can you imagine how sick you would feel if you dropped $1,500 on an RTX 3090 and then a year or two later a next-gen game launched (we are on the cusp of a new console gen, after all) and it didn't quite hit the 4K at 120fps with HDR that you wanted. I mean, it is one thing to upgrade from an $800 card in that time, but $1,500?

I think you not only need to have the $1,500 but also be willing to risk that it will fall short when it really matters.

I mean, are we really building PCs around this Cyberpunk game?

I always look at the Frostbite Engine and Grand Theft Auto as my benchmarks because there are TREES that get rendered. Pick your killer app and then go one level ABOVE what it requires.

Or you just wait until you load a game you are dying to play and you just can’t get the FPS you need.

I bought two 980s at launch for SLI, and then right after launch they announced that Shadow of Mordor could use more VRAM than I had. That setup ended up SUCKING in Battlefield and got replaced with a single 1080Ti. That card was not THAT much faster but lasted three years because it had the bus speed and VRAM to knock everything out of the park at 1080p.

I also once bought a 590, which was touted as a giant killer. Also sucked. So hot it destabilized the whole system.

The RTX 3080ti will be the card to own, most likely, for 4K. Probably next year. There will probably be better displays as well and better benchmarks. I just have a bad feeling about the 3080 at only 10GB and 3090 at $1,500. My gut says wait it out.

And the fact that Fortnite and Minecraft are our RTX benchmarks does not bode well for the longevity of these cards.

I think NVIDIA accomplished a lot but the purchase decision is always subjective and based on the games you play and will play, and your monitor!
 