Power Efficiency of Larger vs Smaller GPUs

InquisitorDavid

So I'm currently on the fence about upgrading my aging GTX 970, which has served me well through many years of great 1080p gaming.

Games nowadays are getting more demanding (especially for VRAM), and I'm looking into upgrading. I'll definitely be waiting for NV's 11-series, but I had a question about power efficiency of large VS small GPUs.

The question is this:


Is it more power efficient to run a much larger GPU (x80/x80 Ti-class) at low utilization, or a smaller GPU that runs mostly fully utilized? Say, an x80 Ti that runs at only 20-30% utilization vs. an x70 that runs at 80-90%?


Note: this isn't about performance headroom. I know I'd love to have headroom, but that's off topic.

This may be a silly question, but I don't have concrete figures for it, so I thought I'd ask.
 
Probably the x70 card. GPUs don't easily scale down on power usage, as they are designed to be run at 100%. The most you could easily do is undervolt the heck out of the card and downclock it to keep it stable, or undervolt it and lower the power target. But does power really matter all that much? I would just buy the best card you can afford and run it until it can't easily play games, many years from now.
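If you do go the power-target route on an NVIDIA card, that part at least doesn't need an overclocking tool; nvidia-smi can usually set a lower board power limit on its own (it needs admin rights, and only accepts values inside whatever min/max range the vendor BIOS allows). A rough sketch of doing it from Python, assuming nvidia-smi is on your PATH and 150W is inside that range:

import subprocess

# Show the current power limit and the min/max enforceable range first.
subprocess.run(["nvidia-smi", "-q", "-d", "POWER"], check=True)

# Ask the driver to hold the board to ~150 W. Needs admin/root, and the value
# has to fall inside the enforceable range reported above.
subprocess.run(["nvidia-smi", "-pl", "150"], check=True)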
 
While I will definitely use the best I can afford, I try to stay away from 250+W GPUs (room temps are already hot enough as it is) and I'm gaming mainly at 1080p60. I do plan on getting 1080p144 or 1440p60+ in the somewhat near future, but I like to target performance with my purchases and duck below 200W if possible.

Now, if a suitable x80 Ti will run way below its assumed 250W TDP in my usual games, with room to go higher when playing an intensive game at the best settings, then sure, that's worth it. But if it's going to be matched at my performance target by an OC'd x70 that outputs less heat (with less performance headroom, yes), yeah, I can live with that.
 
Ah, then you may be asking a different question than you think. From what I understand, you are asking:

1070 vs 980ti at 100%

or possibly

1080 or 1080ti vs 1170 at 100%

Many of these cards should perform about the same, but it's normally more efficient to buy the latest generation's mid-to-high end than last gen's high end.


What are the rest of your computer's specs?
 
Buy the largest GPU you can afford. If you aren't playing demanding games on a regular basis, downclock the card by lowering the clock speed or power limit.

Trying to rationalize a tiny amount of "hopefully" improved efficiency is crazy.
 
Not really last-gen vs. new-gen, but more like an x80 Ti (which is usually a 250W GPU) vs. an x70 (usually 140-160W).

If the massive array of the x80 Ti will mostly be at low power (say, ~30% power) vs. an x70 running at high power (70-90% power), but both hitting my performance target, which would be more power efficient?
 
I've owned 200+W GPUs and 150W ones, and in a room with poor cooling and ventilation it makes a difference: not in performance, but for the people in the same room when it's running full-bore for hours on end.

But yeah, I've been gravitating towards the larger GPU and thinking of downclocking/power-limiting for lower-power games (of which I play a lot), while still having the oomph to play smoothly in the bigger AAA titles (I still play a few).

Thanks for your 2c.
 
GPUs really don't work like that. Using 30% of the available capacity will probably still use 80% or so of the max power. It can also be difficult to get a GPU to use only 30%.

You won't get very far without significantly dropping the voltage of the chip.
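To put very rough numbers on that (every figure below is invented purely for illustration): a decent chunk of board power is static-ish stuff like VRAM, fans, VRM losses and leakage at boost voltage, and the dynamic part scales with how busy the shaders are, but at whatever clocks and voltage the card is actually holding, which under a light load is usually still close to boost:

# Toy model only - every number here is invented purely for illustration.
P_MAX = 250.0      # board power with the shaders fully busy (W)
P_STATIC = 70.0    # VRAM, fans, VRM losses, leakage at boost voltage (W)

def board_power(utilization):
    # Static power plus a dynamic part scaled by how busy the shaders are,
    # with clocks/voltage assumed pinned at boost the whole time.
    return P_STATIC + utilization * (P_MAX - P_STATIC)

for util in (0.3, 0.5, 0.9):
    p = board_power(util)
    print(f"{util:.0%} busy -> ~{p:.0f} W ({p / P_MAX:.0%} of max)")

# 30% busy already lands around half of max board power in this toy model,
# and real cards often draw even more at light load because "utilization"
# comes in bursts at full clocks rather than as a steady partial load.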
 
Just to throw this in there... if you use VSYNC or something to cap the framerate to an arbitrary value, should you also see less heat/power usage, assuming the GPU can render more FPS than the cap you set?
 
Less, yes, but normally not all that much less. Most GPUs don't have enough states between idle and full load to properly scale power usage.
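A crude way to picture that, with invented numbers: in a game the card mostly races each frame at near-boost clocks and then waits in a still-fairly-hungry high-clock state for the next one, so a frame cap trims the busy slice of each frame rather than dropping the card into a genuinely cheap state:

# Invented numbers for a hypothetical card with basically two useful states.
BOOST_POWER = 210.0    # W while actively rendering at full clocks
IDLE_AT_BOOST = 90.0   # W while waiting for the next frame but still holding
                       # high clocks/voltage (it rarely drops to 2D idle mid-game)
MAX_FPS = 160.0        # what the card could render uncapped in this game

def capped_power(fps_cap):
    # Average power with a frame cap: full power for the busy slice of each
    # frame, then a still-fairly-hungry high-clock wait for the rest of it.
    busy = min(fps_cap / MAX_FPS, 1.0)
    return busy * BOOST_POWER + (1.0 - busy) * IDLE_AT_BOOST

for cap in (160, 120, 60):
    print(f"{cap} FPS cap -> ~{capped_power(cap):.0f} W")

# Capping 160 -> 60 FPS cuts the frame count by ~60%, but power only drops
# from ~210 W to ~135 W in this sketch - savings, just not proportional.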
 
Sorry, was on mobile so I was being a bit quick with my reply. While I still support what I was saying, I can understand your point. Here is the thing. Let's use some tasty RX Vegas, since I currently own six 56s and a single 64. Now, these cards are used to play Cryptonight EXTREME! most of the time, but I do use 1-3 of the 56s that are in my gaming rig to game every now and then. My Vegas will do a sustained ~1750MHz core/1100MHz HBM if I let them run at the insanely high default voltage of ~1.24V on games that need it. My single best one will do 1750MHz undervolted with an 80% power-limit increase. That allows me to chew through anything I throw at it (limited to 1440p 80Hz at the moment, but I am going ultrawide 1440p ~100Hz+ soon). That is the high end of things.

Now let's examine the lower end for less demanding games. I can take a 56 and run it at 0.9V, which results in ~1400MHz core/1100MHz HBM and a power draw of ~140W (peak). That gets you better than 580/1060+/Fury X performance for very little power and heat. I do not have one currently, but I am sure you can do the same with a 1070/Ti/1080, etc. That is my argument for buying the bigger GPU and playing with voltages/clocks/power limits. Buy the bigger GPU so you can play through demanding modern games (KCD, etc.) at full speed, and then downclock for the less demanding/older games. It is the best of both worlds. You do not have to spend a bunch of your gaming time messing with settings to get playable FPS in the newer games. Use that time to actually game ;).
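For what it's worth, those figures are in the same ballpark as the usual back-of-the-envelope rule that dynamic power scales roughly with V^2 x f (ignoring leakage, VRM losses and the HBM, which stays at 1100MHz in both cases). Plugging in the clocks and voltages quoted above:

# Back-of-the-envelope only: dynamic power scales roughly with V^2 * f.
# Clocks/voltages are the ones quoted above; leakage, HBM and VRM losses ignored.
V_STOCK, F_STOCK = 1.24, 1750   # roughly stock voltage (V) and core clock (MHz)
V_UV,    F_UV    = 0.90, 1400   # the undervolted operating point

power_scale = (V_UV / V_STOCK) ** 2 * (F_UV / F_STOCK)
clock_scale = F_UV / F_STOCK

print(f"dynamic power: ~{power_scale:.2f}x of stock")   # ~0.42x
print(f"core clock:    ~{clock_scale:.2f}x of stock")   # ~0.80x

# ~80% of the clock for ~40% of the dynamic power is why undervolting a big
# chip for lighter games works out so well.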
 
Bigger GPUs (from the same family/process node) are generally more efficient than smaller GPUs at the same performance level: pushing clock speed (and the voltage needed to sustain it) costs more power than adding more cores, assuming you don't have extremely high overhead.

But the memory attached could affect their actual power draw. So there's still a big "probably" attached to that statement, if you're talking "board power."
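The usual rule of thumb that dynamic power scales roughly with V^2 x f is the intuition behind that: a wide chip at modest clocks and voltage can match a narrower chip that has to clock, and volt, much higher for the same throughput. A toy comparison with made-up (but plausible-ish) numbers:

# Toy comparison with made-up numbers: throughput ~ cores * clock,
# dynamic power ~ cores * V^2 * clock.
def throughput(cores, clock_mhz, **_):
    return cores * clock_mhz

def dyn_power(cores, volts, clock_mhz):
    return cores * volts ** 2 * clock_mhz

big   = dict(cores=3584, volts=0.85, clock_mhz=1400)   # wide chip, modest clocks/voltage
small = dict(cores=1920, volts=1.05, clock_mhz=2613)   # narrow chip pushed hard

print("throughput big/small:", round(throughput(**big) / throughput(**small), 2))
print("dyn power  big/small:", round(dyn_power(**big) / dyn_power(**small), 2))

# Same throughput (~1.0x), but the wide-and-slow chip only needs about
# two-thirds of the narrow chip's dynamic power in this toy model.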
 
This is interesting data. Thank you for your input!
 