RTX 3xxx performance speculation

I don't know about anyone else, but I am shocked that the 3090 is a dual-slot design. Even with the leaf blower, I wouldn't think it could keep it cool enough.

Crazy
 
Same. I didn't think there would be any blower-style 2-slot designs. I wonder how well it performs, because that's really interesting for an SFF build.
 
A lot of people buying that card, like myself, will not be using it for gaming, but rather will be buying multiples for scientific computing (machine learning, etc.).

For that application, I NEED blower cards, and I NEED a two-slot config. I was really worried I would not be able to find a blower design, but luckily I should be able to grab three or four of these and fit them in my system. Cooling will be an issue, but my 4U case has 2x 120mm fans directly above the GPU section blowing down onto the cards, so it should be fine, and noise is not an issue at all.

I was able to keep 4x RTX 2080 Tis cool (80C @ 70% fan speed) under 100% load, so hopefully this will be doable as well.
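For anyone building a similar multi-GPU box, a minimal monitoring sketch like the one below makes it easy to watch temps and fan speeds across all the cards under load. It assumes nvidia-smi is on your PATH; the query fields are standard ones, but check `nvidia-smi --help-query-gpu` if anything errors out on your driver.

```python
# Quick-and-dirty multi-GPU temperature/fan monitor via nvidia-smi.
# Assumes nvidia-smi is installed and on PATH.
import subprocess
import time

QUERY = "index,name,temperature.gpu,fan.speed,power.draw,utilization.gpu"

def poll_gpus():
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        idx, name, temp, fan, power, util = [f.strip() for f in line.split(",")]
        print(f"GPU{idx} {name}: {temp}C, fan {fan}%, {power}W, {util}% util")

if __name__ == "__main__":
    while True:
        poll_gpus()
        time.sleep(5)
```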
 
Probably gonna end up being known as the "Throttle Edition"

But in all fairness, it will still very likely be the fastest blower card in existence. It's like a high-end laptop GPU: totally gimped, but still the best you can get within the constraints you have. I just don't want to ever get anywhere near one of those, personally :)
 
Same. I didn't think there would be any blower-style 2-slot designs. I wonder how well it performs, because that's really interesting for an SFF build.

Yeah, I had planned to build an SFF PC for the OLED TV man cave area I am building, but kind of gave up once I saw the 3090 specs being triple-slot. Might have to change my mind now... very interesting.
 
I put mine in my basement and ran extension HDMI cables. That way it's completely silent, I leave the side panel off, and it's mATX so there's no SFF "fees".
 
Hmmm, do they even make long, quality HDMI 2.1 cords? Sounds like a good idea!
 
I was wondering about that. I was going to try my 2.0 cables since there's no harm in it. Right now my length is ~50 ft; I might have to do some rearranging once I get the card and an OLED TV. I've had no issues at 50 ft, so I'd imagine ~20 ft would definitely be fine if things are linear.

I think there’s active amps for more range but haven’t looked into it.

Some rando company on amazon sells 2.1 cables that are 150ft? Ehhhh...

Overall, I think a lot of people put too much thought into HDMI cables and overdo them. Kinda like watercooling: it doesn't take much water to cool down a few hundred watts.
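For what it's worth, a rough back-of-the-envelope check (ignoring blanking intervals and link-layer overhead, so the real requirement is somewhat higher) shows why 4K120 on an OLED pushes you toward 2.1-rated cables:

```python
# Rough uncompressed video bandwidth estimate for a 4K120 OLED TV signal.
# Ignores blanking intervals and link overhead, so treat it as a lower bound.
def video_gbps(width, height, refresh_hz, bits_per_channel=10, channels=3):
    bits_per_second = width * height * refresh_hz * bits_per_channel * channels
    return bits_per_second / 1e9

hdmi_2_0_gbps = 18.0   # HDMI 2.0 maximum signaling rate
hdmi_2_1_gbps = 48.0   # HDMI 2.1 (FRL) maximum signaling rate

need = video_gbps(3840, 2160, 120)   # 4K @ 120 Hz, 10-bit RGB
print(f"4K120 10-bit needs roughly {need:.1f} Gbps (before overhead)")
print(f"HDMI 2.0 cable budget: {hdmi_2_0_gbps} Gbps -> not enough")
print(f"HDMI 2.1 cable budget: {hdmi_2_1_gbps} Gbps -> fits")
```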
 
There's still a market for blower-type GPUs, unfortunately. Crappy and expensive cases such as the Alienware Aurora are the reason: it's an mATX case the size of a full ATX tower, but it has only one 120mm intake and one 120mm exhaust fan. Even ITX cases less than half the size support more than double the fans of that atrocity.
 


The Gen 3 vs. Gen 4 video only shows gaming applications and highlights the 1-2% gain. However, it seems 3DMark shows that a bandwidth difference exists...
 


What data is actually going through the PCIe lanes? From my understanding:
1. GPU requests to the CPU
2. CPU instructions back to the GPU
3. System memory

I can't see #1 and #2 needing that much bandwidth unless the games are HIGHLY threaded, so the only advantage I see is if dynamic memory is in high demand, which could be due to a lack of VRAM, among other factors.

Is this at all accurate?
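As a point of reference for the bandwidth side of that question, the raw per-direction link rates work out as below (a simple sketch using the published per-lane rates and 128b/130b encoding; real-world throughput is a bit lower after protocol overhead):

```python
# Theoretical per-direction PCIe bandwidth for an x16 GPU link.
# Gen 3 runs 8 GT/s per lane and Gen 4 runs 16 GT/s, both with 128b/130b
# encoding; packet/protocol overhead means achievable throughput is lower.
def pcie_gb_per_s(gt_per_s, lanes=16, encoding=128 / 130):
    return gt_per_s * encoding * lanes / 8  # GB/s (8 bits per byte)

gen3 = pcie_gb_per_s(8)
gen4 = pcie_gb_per_s(16)
print(f"PCIe 3.0 x16: ~{gen3:.1f} GB/s each way")   # ~15.8 GB/s
print(f"PCIe 4.0 x16: ~{gen4:.1f} GB/s each way")   # ~31.5 GB/s
```

Either way, the video's results suggest games rarely come close to saturating even the Gen 3 figure, which lines up with the 1-2% deltas being discussed.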
 


The Gen 3 vs. Gen 4 video only shows gaming applications and highlights the 1-2% gain. However, it seems 3DMark shows that a bandwidth difference exists...


Did you actually watch the whole video? They show productivity benches near the end and there is literally no difference. They also said that the 1-2% differences in gaming were within the margin of error. The takeaway is that currently there is no difference between Gen 3 and Gen 4 for performance.
 
Let's try this again...

I said: "The Gen 3 vs. Gen 4 video only shows gaming applications and highlights the 1-2% gain. However, it seems 3DMark shows that a bandwidth difference exists..."

My post wasn't a "holy shit, there is a huge improvement"; it is the equivalent of x16 vs. x8 for gaming (a 1-2% difference in performance). So, for people who have been worried about their Gen 3 platform bottlenecking them in games, it really isn't.

Adding on to my "why buy small cases" post a few days ago. Exhibit A:

https://i.redd.it/sjgy75hlgzn51.jpg

Also, that is a nice build. How are the temps in the Define S (or is it a C)?
 
I don't know about anyone else, but I am shocked that the 3090 is a dual-slot design. Even with the leaf blower, I wouldn't think it could keep it cool enough.

Crazy

It'll just throttle like crazy, like the Titan V did on the FE cooler. On a normal 50% fan curve, the Titan V generally throttles down to the mid-1400 MHz range, which is about as good performance as you are gonna get with a cooler that can only handle about 165W on the default fan curve.
 
Yeah, I am also looking at the TUF, but also the EVGA so I can possibly step up to a 20GB model later.
 
I'm waiting for reviews of the Asus ROG STRIX and OC versions; there are none so far.

The flagship cards are going to be delayed a little bit; the AIBs didn't have enough time to redesign everything for 400W cooling. 300W vs. 400W is a substantial uptick in heat management.
 
Yeah, my card isn't set to ship until Sep 30th at the earliest.
 
From the TUF. Thought this was interesting.

https://www.techpowerup.com/review/asus-geforce-rtx-3080-tuf-gaming-oc/2.html

 
Do you guys know if there are any physical differences between the TUF and TUF OC other than GPU speeds? Also, please correct me if I'm wrong: the BIOS switch is just to change the fan profiles? It doesn't change GPU clocks, right? As usual, I will use Afterburner to control voltages and fan profiles. I will probably try to undervolt it, if possible, to reduce heat/power consumption.
 
Saw a board analysis of it, and they really focused on getting the most out of the cooler while not cutting too many corners on the power delivery.
That's the board I'm thinking of buying. If there are no physical differences, I might even go for the non-OC version.
 
IMO it doesn't matter how good the cooling is on the card if they gimp the power limit. Make sure you find out the power limit if you plan to overclock a card.
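If you want to check that on a card you already have, the driver reports the board's default and maximum power limits; a small sketch like this (assuming nvidia-smi is available and your driver supports the query fields below) prints them per card:

```python
# Print each GPU's current, default, and maximum allowed power limits.
# Assumes nvidia-smi is on PATH; raising the limit later is done with
# `nvidia-smi -pl <watts>` (requires admin rights and is capped at the max).
import subprocess

fields = "index,name,power.limit,power.default_limit,power.max_limit"
out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    idx, name, current, default, maximum = [f.strip() for f in line.split(",")]
    print(f"GPU{idx} {name}: limit {current} (default {default}, max {maximum})")
```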
 
How much power do you want your GPU to pull out of the wall? 400W? I think when it comes to efficiency, Ampere is a joke. Yes, it is 65% faster than a 2080, but we have seen much more impressive generational leaps before.
The jump we got with the GTX 1080 was also 63% compared to the GTX 980, but it had the same power draw as the GTX 980.

This time it is something like 65% more performance with 80% more power draw, if you compare the 2080 with the 3080. I bet this is because Nvidia couldn't get a place in line at TSMC or they didn't want to pay the price that TSMC wanted to manufacture the GPUs. Is there any info on whether Nvidia is going back to TSMC in the future or will they stay with Samsung?
 
... I bet this is because Nvidia couldn't get a place in line at TSMC or they didn't want to pay the price that TSMC wanted to manufacture the GPUs. Is there any info on whether Nvidia is going back to TSMC in the future or will they stay with Samsung?

Their data center top-of-the-line Ampere part, the A100, is on TSMC 7nm. Those customers demand high efficiency, so there is that.
 
How much power do you want your GPU to pull out of the wall? 400W? I think when it comes to efficiency, Ampere is a joke. Yes, it is 65% faster than a 2080, but we have seen much more impressive generational leaps before.
The jump we got with the GTX 1080 was also 63% compared to the GTX 980, but it had the same power draw as the GTX 980.

This time it is something like 65% more performance with 80% more power draw, if you compare the 2080 with the 3080. I bet this is because Nvidia couldn't get a place in line at TSMC or they didn't want to pay the price that TSMC wanted to manufacture the GPUs. Is there any info on whether Nvidia is going back to TSMC in the future or will they stay with Samsung?

If you are going to pay a $150-200 premium for better cooling and overclocking and find out you're limited to 375W, then what's the point? I mean, this is the [H] forum. You are damn right I want to pull as much power as I want to overclock. That's why you pay a premium.
 
Their data center top-of-the-line Ampere part, the A100, is on TSMC 7nm. Those customers demand high efficiency, so there is that.
Would be interesting to see how much power they would draw...

If you are going to pay a $150-200 premium for better cooling and overclocking and find out you're limited to 375W, then what's the point? I mean, this is the [H] forum. You are damn right I want to pull as much power as I want to overclock. That's why you pay a premium.
I know what you are saying.
Here comes the but: a new gen shouldn't use much more power than the last one to get a performance increase. I think Nvidia failed here. They have 57% more performance on average compared to the 2080, but also 43% more power draw.
There were much more impressive generational leaps before. Just compare the jump from Maxwell to Pascal: the GTX 1080 had 63% more performance than the GTX 980 but only a 4% higher power draw. Ampere feels like Fermi. It has the performance, but it comes at a price.
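Putting those quoted figures into perf-per-watt terms makes the comparison concrete (using the percentages above as given; reviewers' exact numbers vary):

```python
# Relative performance-per-watt from the generational figures quoted above.
# perf_gain and power_gain are fractional increases over the previous card.
def perf_per_watt_gain(perf_gain, power_gain):
    return (1 + perf_gain) / (1 + power_gain) - 1

ampere = perf_per_watt_gain(0.57, 0.43)   # 3080 vs 2080, figures quoted above
pascal = perf_per_watt_gain(0.63, 0.04)   # GTX 1080 vs GTX 980
print(f"3080 vs 2080: ~{ampere:+.0%} perf/W")   # roughly +10%
print(f"1080 vs 980:  ~{pascal:+.0%} perf/W")   # roughly +57%
```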
 
Do you guys think I would have a problem using one of these on one of the two PCIe cables from my PSU (650W) to power a 3-plug EVGA FTW 3080? Not planning to OC or anything, probably undervolt if possible... Just don't want to have to replace my PSU.

 
Not many PSUs have three 8-pin outputs though, which seems to be the main issue. So you generally have to split one of the inputs.

I had an 850 ages ago that had two 6+2 connectors on EACH cable (two cables total for PCIe).

That's why I wondered whether, if I kept it easy on the voltage, an adapter would get it done for me, or if I'd have to go buy an 800W to square it away.

This is all dependent on getting to step up to a 3080 FTW anyway, so I'm sure I have some time before it matters. With that said, I'll buy the adapter and give it a shot undervolted (that's the plan anyway; it should still be plenty of power), and if I have problems keeping the card stable, I'll do a PSU upgrade.

I wouldn't have gotten a 650 to start with if I'd thought it was going to be an issue, but with the botched launch, my best path to a 3080 was via Step-Up. I didn't know that the FTW, with its three 8-pin requirement, was going to be the only choice for Step-Up when I made my decision.

Oh well, just happy to be back to a gaming desktop. My 2080 Super will be here Tuesday; very ready to play :)
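For anyone doing the same math, here's a rough power-budget sketch. The 75W slot and 150W-per-8-pin figures are the PCIe spec limits; the card power limit and CPU/system numbers are placeholder assumptions you'd swap for your own parts, not measured values.

```python
# Rough PSU headroom estimate for a 3x 8-pin card on a 650W unit.
# Connector figures are PCIe spec maximums; load numbers are assumptions.
PCIE_SLOT_W = 75          # max power delivered through the x16 slot
EIGHT_PIN_W = 150         # spec rating per 8-pin PCIe connector

connector_budget = PCIE_SLOT_W + 3 * EIGHT_PIN_W   # what the card *could* ask for
gpu_power_limit = 400     # assumed board power limit (placeholder)
cpu_and_rest = 200        # assumed CPU + drives + fans + headroom (placeholder)
psu_rating = 650

total = gpu_power_limit + cpu_and_rest
print(f"Connector ceiling for the card: {connector_budget}W")
print(f"Estimated system load: {total}W of a {psu_rating}W PSU")
print("Headroom:", psu_rating - total, "W -> tight; undervolting helps")
```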
 