Can anyone tell me where CPU scaling ceases with a 1080 Ti?

HeadRusch

Just curious if anyone has a good web or YT link showing how a 1080 Ti handles increasingly newer or higher-core-count CPUs... I currently run an 8700K at 4.9 GHz on all cores and was honestly just wondering. Figured I'd ask, no particular reason other than to *know for sure* :)

I have no intention of upgrading my 1080 Ti anytime soon (I do very little gaming on PC currently), but seeing 8- and 10-core CPUs plummeting in price makes me think about doing some kind of small form factor build instead of the mid towers I've been using for 20... egads... no, 30 years... OK, maybe a mini tower... get some newer technology on the ports and motherboard, faster built-in WiFi, that sort of stuff.
 
Mine is paired with a 6700K and works great for the games I use it for.
But if you intend to use it with some more modern games or remakes (e.g., the latest Witcher 3 release), the 8700K might struggle.

How much CPU power is needed (whether that's IPC and clock speed and/or core count) varies from game to game, and so does the maximum framerate you'll get.
Decide based on the specific games you intend to use it for.
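
A rough back-of-the-envelope way to think about it (all numbers below are hypothetical, just to illustrate): each frame costs the CPU some milliseconds and the GPU some milliseconds, and whichever side is slower sets the framerate, so a faster CPU only helps once the CPU side is the longer of the two.

[CODE=python]
# Toy model of CPU/GPU bottlenecking: each frame costs some milliseconds
# of CPU work (game logic, draw calls) and some of GPU work (rendering).
# With the two pipelined, the slower side caps the framerate.

def fps(cpu_frame_ms: float, gpu_frame_ms: float) -> float:
    """Approximate FPS when CPU and GPU work overlap."""
    return 1000.0 / max(cpu_frame_ms, gpu_frame_ms)

# Hypothetical numbers: a CPU needing 8 ms/frame behind a GPU needing
# 12 ms/frame is GPU-limited -- a faster CPU changes nothing.
print(fps(8.0, 12.0))   # ~83 FPS, set by the GPU
print(fps(4.0, 12.0))   # still ~83 FPS despite a 2x faster CPU
print(fps(8.0, 6.0))    # ~125 FPS, now the CPU is the wall
[/CODE]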
 
Don't worry about it. Yes, there may be a point where you don't get the best bang for your buck, but so what? You won't do worse than what you have now. Get the CPU that supports the connectivity and non-game tasks you need, and it's all gravy.
 
A platform upgrade could net you some real gains in multitasking, storage, and connectivity. Depending on the game and resolution you may get a little more performance, especially if you're at a lower resolution. If the games you play are GPU-limited now, a platform upgrade won't help framerates, but it will help with running other things in the background, especially if you jump to Intel's or AMD's latest gen.

The Titan Xp is similar enough to a 1080 Ti to stand in for it in a few titles.

 
I run a 2080, so similar. It's game-by-game, and the resolution you play at matters. Way too many variables for a YT vid to give a decisive answer. I went from a Ryzen 1700X to a Ryzen 5700X and I notice some difference in most games and a HUGE difference in others. Your CPU sits roughly in the middle of those two, which makes me guess you are fine for the lifespan of your rig.
 
Thanks for the replies, appreciate it. I know it's never a black-and-white, cut-and-dried thing...
 
This 3-minute video has a 5800X vs. an 8700K with a 1080 Ti and a 2080 Ti in some games and some programs, at 1080p and 1440p, so just some rough ballpark figures if you decide to upgrade your CPU. The difference in games isn't all that great; programs are where you'd see the most improvement.

 
I loved my 8700K at the time. Great chip. It'll still fight for its life against anything new and won't be the last across the line.

However, as others in the thread have alluded to: you're due for a platform update. Question is, what's your budget if you were to drop some bucks?
 
Personal, anecdotal evidence.
With a Ryzen 1700 and a 1080 Ti, the 1700 was holding me back.
With a 3900X and a 1080 Ti, the 1080 Ti was holding me back.

I mostly play games at 1440p and I don't play any competitive FPS. Also I mostly play simulation games.

Don't know if this helps that much.
I did eventually upgrade both (5950X and a 3080 12GB) and I don't feel that restrained anymore.
 
I had a 1080 Ti in my 3770K Ivy Bridge system; when I upgraded to my Ryzen 3900X, all my framerates doubled.
 
Hmmm… what resolution? I performed literally the same upgrade: a 1080 Ti paired with a 3770K, carried over to a 3900X. Performance was certainly better, but none of my frame rates doubled, much less all of them. That's at 1440p.
 
"all my framerates doubled"

Yeah, no one is dumb enough to believe that nonsense.
 
I mostly played WoW at that time and my framerate did double in it; I went from 70 FPS on average to 150-ish after the upgrade at 1440p. I really didn't make that clear, as I was kinda half asleep when I posted. I didn't mean every single game doubled.
 
Depends on the era of the game. The 6700K was among the fastest CPUs around when the 1080 Ti was released, and it was perfectly fine for the games of its generation; in fact, even a Haswell CPU wasn't really a bottleneck back then. It depends on the game engine and how many draw calls it really throws at the CPU. In some really old games it doesn't matter if you're running a 7950X3D or a 4790K; you just aren't going to be CPU-bottlenecked.
 
It's harder to quantify these days because people have been prioritizing 60+ FPS and letting games dynamically adjust accordingly.
 
Just get the best platform you can afford; you can always upgrade the GPU later if needed, and have the headroom. The 13600K is a killer deal at $300, even on a $150 B760 DDR4 mobo.
 
I know my CPU and MB upgrade pretty much unleashed what my 1080 Ti had to offer. Now it's my bottleneck. I upgraded from a 5.0 GHz 7700K.
 
DCS (which only just recently got an open beta with proper multithreading), PlanetSide 2, ArmA 3... all of those would like a word with you about Haswell not being a bottleneck while being old enough to be more or less period-appropriate for that CPU in late 2013.

And if we go a bit older, I'm pretty sure Cortex Command technically dates back to somewhere around P4/Athlon XP era, maybe the early Athlon 64 X2/Core 2 age, and that's all just one CPU thread and no GPU whatsoever. Have fun watching the slideshow when you nuke the entire map and every pixel in it gets simulated... (Noita isn't much better about it decades later, especially since it adds fluid dynamics and fire propagation to the mix.)

The first two are a big reason why I upgraded from the ol' Kentsfield Q6600, and the Haswell 4770K was just the best thing out at the time (for all of $200 from Micro Center, to boot).

Eternal Skylake had meager gains at best, as verified with a 7700K and 9900K. The latter even had RTX 2080 SLI backing it up, and still Yet Another ArmA Benchmark had nothing to show for it.

Alder Lake with my 12700K? That's where the real gains kicked in for those old titles, something you didn't just measure in benchmarks but could actually feel. (Well, haven't tested PS2, I've not touched that in years.) This is with the same old DDR4-3600 CL16 used in the 7700K system, didn't even need to step up to DDR5 like I would for anything AM5.

The one other CPU that a lot of people reported huge gains with in similar titles while having a modest platform cost? The 5800X3D - it's all about that cache. Its Zen 4 successors are looking to continue the trend, especially once the 7800X3D finally releases.

MSFS 2020 at 3440x1440 with my 1080 Ti FTW3 and 5800X with 32GB RAM was 90-100% GPU and 30-40% CPU.
The same settings with my 4790K were 100% CPU and like 30% GPU.
Interestingly, MSFS 2020 is one of those titles that people often report being heavily bottlenecked by both GPU and CPU, in that they do get substantial gains from a 5800X3D in the 0.1% lows and overall frame time consistency (not sure if anyone benched the 7950X3D yet), but that it'll still bring even the RTX 4090 to its knees, only about 45 FPS average flying in the midst of dense cities...

...but you're playing it in pancake mode, not VR, where you really feel the frame drops (and not in a good way; it's literally discomforting). Makes me wonder how much those percentages would shift if you tried running it through something like a Valve Index, or to be extra punishing, an HP Reverb G2.

Honestly, I just think it's a case of poor optimization for today's hardware with an aging flight simulator engine, something that's been the case as far back as FSX. (Not that DCS and IL-2 Great Battles are any better in that respect!)
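
For anyone who wants to capture CPU/GPU utilization numbers like the ones quoted above, here's a minimal sketch. It assumes an NVIDIA card with nvidia-smi on the PATH and psutil installed (`pip install psutil`); the 60-sample window is arbitrary.

[CODE=python]
# Rough utilization logger: overall CPU usage via psutil, GPU usage via
# nvidia-smi, one sample per second.
import subprocess

import psutil

def gpu_util_percent() -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.strip().splitlines()[0])  # first GPU only

for _ in range(60):  # log for one minute
    cpu = psutil.cpu_percent(interval=1)  # averaged over the last second
    gpu = gpu_util_percent()
    print(f"CPU {cpu:5.1f}%  GPU {gpu:3d}%")
[/CODE]

One caveat: overall CPU percentage on a high-core-count chip can read low even when a single render thread is pegged, so a "low CPU usage" reading doesn't always rule out a CPU bottleneck.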
 
Nvidia driver overhead on the CPU may be affecting 4090 performance in MSFS 2020. Plus, settings in MSFS can really affect performance, so it's hard to make apples-to-apples comparisons. Anyway, the 7900 XTX flies through large cities rather well on the ultra preset; I've found San Francisco to be more taxing than New York. Not only that, but the DX12 beta is now faster than DX11 for AMD in this title.

Using the ultra preset, 4K resolution, no scaling tech (no FSR in this case, just TAA rendering), flying low, stock GPU, 5800X3D, system memory at 3733 (still in the process of fine-tuning): 60-78 FPS. The FPS monitor's low reading is inaccurate; every time I take a screenshot it tanks the FPS for an instant, which shows up as a low in the counter. The GPU is around 100%, meaning it's not being held up by the CPU, which amazingly shows very little usage o_O, which I find very strange. AMD did work on hardware acceleration in RDNA3 to limit CPU usage; this may be one clear case showing the fruits of it.

As for a 1080 Ti, even accounting for driver overhead on the CPU, I suspect it would be GPU-limited in most modern games on anything Zen 3 or Alder Lake and up.
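
On the unreliable FPS counter lows: the 1% / 0.1% lows discussed in this thread can be computed from a frame-time log instead of trusting an on-screen counter. A rough sketch, assuming a CSV export with a MsBetweenPresents column like PresentMon produces; the file name is a placeholder, and the exact column name may differ in your capture tool. (Definitions vary: some tools average the worst 1% of frames instead of taking the percentile value.)

[CODE=python]
# Compute average FPS and 1% / 0.1% lows from a frame-time CSV log.
import csv
import statistics

def lows(csv_path: str, column: str = "MsBetweenPresents"):
    with open(csv_path, newline="") as f:
        frame_ms = [float(row[column]) for row in csv.DictReader(f)]
    frame_ms.sort()  # ascending: the slow tail sits at the high indices
    avg_fps = 1000.0 / statistics.mean(frame_ms)
    # "1% low" here = FPS at the 99th-percentile frame time.
    p99 = frame_ms[int(len(frame_ms) * 0.99)]
    p999 = frame_ms[int(len(frame_ms) * 0.999)]
    return avg_fps, 1000.0 / p99, 1000.0 / p999

avg, low1, low01 = lows("capture.csv")  # hypothetical capture file
print(f"avg {avg:.1f} FPS, 1% low {low1:.1f}, 0.1% low {low01:.1f}")
[/CODE]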

 
The 7000X3D series seems to absolutely dominate in MSFS 2020, and I would assume we'll see similar numbers for the other major sims as testing continues.
 