My GPU is self-limiting? Nobody has been able to help... can anybody explain why my FPS is so unsteady?

Let us know how your scores on benchmarks like 3DMark or Unigine stack up against systems with specs similar to yours. I’m curious to see what results you get.
One thing I like to do is run a benchmark suite at stock settings when I build a new PC to get a performance baseline, so I can understand the impact that things like overclocking have on my system. 3DMark also stores all of my results online so I can refer to them later.

One other thing to never rule out is how some software evolves over time and how that interacts with your hardware. For example, with my previous build, I noticed over time that Battlefield 1 was degrading in performance and stuttering, and I couldn’t figure out why. Then I did some reading online and discovered that the game was being optimized for 6-core processors, and any pure 4-core processor was getting hit hard. I had an i5-4670K under the hood, checked my numbers, and yep: the CPU was pegged at 100% usage, the GPU was looking bored at 65% usage, and I wasn’t able to improve things much by adjusting graphics settings because the CPU was still getting wrecked. In your case the hardware is new, so I don’t suspect things would start to get worse via optimizations this soon; it's just something to keep in mind down the road.
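If you want to check for that kind of CPU bottleneck yourself rather than eyeballing Task Manager, something like the sketch below works: sample per-core CPU load next to GPU utilization while the game runs. This is only a rough illustration; it assumes Python with the psutil package installed and nvidia-smi available on the PATH, nothing specific to your build.

```python
# Rough CPU-vs-GPU bottleneck check: sample per-core CPU load and GPU
# utilization every couple of seconds while a game is running.
# Assumes the psutil package is installed and nvidia-smi is on the PATH.
import subprocess
import psutil

def gpu_utilization_percent():
    # nvidia-smi prints one bare integer per GPU with these query/format flags
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

for _ in range(30):                                   # roughly a minute of samples
    per_core = psutil.cpu_percent(interval=2, percpu=True)
    gpu = gpu_utilization_percent()
    busiest = max(per_core)
    print(f"GPU {gpu:3d}%  busiest core {busiest:5.1f}%  all cores {per_core}")
    if busiest > 95 and gpu < 80:
        print("  -> looks CPU-bound (a core is pegged while the GPU waits)")
```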

Another useful piece of software to download is HWInfo64, which tracks all of the sensors in your PC. You can see there if anything is getting too hot, which can result in thermal throttling as the computer slows itself down to lower temperatures and protect the system components.
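HWInfo64 can also log its sensors to a CSV file, which makes it easy to check after a gaming session whether the clocks dropped while temps were high. A rough sketch of that kind of check; the column headers below are placeholders, so match them to whatever your actual log calls the GPU temperature and core clock:

```python
# Scan an exported HWInfo64 CSV log and flag samples that look like thermal
# throttling: high temperature together with a clock well below the peak.
# The column headers below are placeholders; adjust them to your log's headers.
import csv

TEMP_COL = "GPU Temperature [C]"     # placeholder header name
CLOCK_COL = "GPU Clock [MHz]"        # placeholder header name

with open("hwinfo_log.csv", newline="", encoding="utf-8", errors="ignore") as f:
    rows = [r for r in csv.DictReader(f) if r.get(TEMP_COL) and r.get(CLOCK_COL)]

if not rows:
    raise SystemExit("no usable samples found; check the column names")

peak_clock = max(float(r[CLOCK_COL]) for r in rows)
for i, r in enumerate(rows):
    temp, clock = float(r[TEMP_COL]), float(r[CLOCK_COL])
    if temp >= 83 and clock < 0.9 * peak_clock:
        print(f"sample {i}: {temp:.0f} C at {clock:.0f} MHz (possible throttling)")
```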

What motherboard are you using?

Is this a PC you built yourself?
There was a theory once that Nvidia makes their newer drivers heavier so that it kind of forces you to buy new cards.
A guy on YouTube tested it back in the GTX 970 days, and according to what he showed, it was true.

He installed an old driver and got a better score than with the new drivers.
Just a heads up.

Hello all! So I did another UserBenchmark run, I ran MSI Kombustor (on whatever preset was labeled "1440"), and I ran Unigine Heaven at the maximum preset I could manage, also at 1440p. 3DMark was labeled 30 dollars on Steam and I could not find an independent free version that wasn't a base-application-dependent DLC benchmark. I do not know much about these numbers, and it should be mentioned that this is WITH the overclock on: I have it overclocked by +100 MHz on the core clock and +325 MHz on the VRAM. What are your takeaways here?

https://www.userbenchmark.com/UserRun/27502416
https://gpuscore.top/msi/kombustor/show.php?id=318905
The Heaven score was as follows:

FPS: 95.1
Score: 2396
min: 38.5
max: 170.9

This was with tessellation set to Extreme, Ultra quality, 8x AA, Direct3D 11.
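A side note on reading these Heaven numbers: min/avg/max alone can hide stutter. If frame times are exported (MSI Afterburner and CapFrameX can both do this), the 1% low is a more useful steadiness figure. A rough sketch of that calculation, assuming a made-up single-column frame-time file:

```python
# Compute average FPS and 1% low FPS from a list of frame times in milliseconds.
# "frametimes.csv" is a made-up single-column export; adjust to your tool's output.
frame_ms = []
with open("frametimes.csv") as f:
    for line in f:
        line = line.strip()
        if line:
            frame_ms.append(float(line))

frame_ms.sort(reverse=True)                       # slowest frames first
avg_fps = 1000.0 * len(frame_ms) / sum(frame_ms)
worst = frame_ms[: max(1, len(frame_ms) // 100)]  # slowest 1% of frames
low_1pct_fps = 1000.0 * len(worst) / sum(worst)

print(f"average FPS: {avg_fps:.1f}")
print(f"1% low FPS:  {low_1pct_fps:.1f}  (a big gap from the average means stutter)")
```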

My MOBO is an ASUS ROG Crosshair VIII Hero (Wi-Fi).
And yes I built it myself :)

Thank you all!
 
https://benchmarks.ul.com/3dmark#windows

I think the Basic Edition is free; try this link. The benefit here is that you should be able to match your result against users with the same configuration as yours. If the scores are within the margin of error, you’re fine. You likely are anyway, but it will help put your mind at ease.
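To put a rough number on "within the margin of error", it's enough to compare against a handful of results from matching systems. A tiny sketch of that; the reference scores here are made up, so substitute real ones pulled from the comparison page:

```python
# Compare one score against a handful of results from matching systems.
# The reference scores are made-up placeholders; paste in real ones from 3DMark.
import statistics

my_score = 10250                                        # hypothetical
reference_scores = [10100, 10400, 10550, 9950, 10300]   # hypothetical

median = statistics.median(reference_scores)
delta_pct = 100.0 * (my_score - median) / median
print(f"median of comparable runs: {median:.0f}")
print(f"your score vs that median: {delta_pct:+.1f}%")
print("a few percent either way is normal run-to-run and system variance")
```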
 
I heard that driver theory wasn’t actually the case. The problem lies in newer games not being optimized for previous-generation cards. For example, if you have a 1070 and play an older game like GTA V, you don’t get degraded performance, but you don’t benefit from performance enhancements in newer titles versus a 2070, since Nvidia has moved on from Pascal.
 
Here is my Time Spy score! I think it is pretty good. I did not think Fire Strike was worth it since the free version only lets me go up to 1080p and I game at 1440p. What do you guys think?

https://www.3dmark.com/3dm/46420784?

Thank you all!
 
I had the same issue with my 3800X and was able to dramatically reduce the annoyance by turning off CPU SMT in the BIOS. Most games don't utilize more than 8 cores/threads anyway, so it's unnecessary overhead for the CPU to manage the extra threads; disabling them lets the CPU reduce power consumption and boost higher, which smooths out those dips in framerate.
 
Try running your RAM at stock speeds rather than the factory-overclocked XMP settings and see if that helps. Ryzen can be finicky about RAM settings, so it's likely you may need to hand-tweak your RAM. I don't mean that your system can't handle the factory overclock; I mean there might be a couple of numbers which are simply "wrong" for Ryzen out of the factory.
 
Benchmarking in 3DMark is typically done at 1080p for comparison purposes. Great score, though.

As far as the RAM is concerned, the "stock" speeds are like half of the rated speeds. I have 3600 MHz RAM, but out of the box it ran at some ridiculously low number (compared to the rating), like 2144 or something. If THIS is what you mean by stock speeds, then I have tried it and it destroys my performance. As far as the 3800X is concerned, I have never heard of SMT in the BIOS. What is it and what does it do? Anything to smooth out the experience. Also, here is my 3DMark Fire Strike demo score:

https://www.3dmark.com/3dm/46425198?

I think this score is worrying, since when I compare results online I am below average, I think? The bar graph places me in a bar that is below what most users see.

Thank you all!
 
You can compare with other people’s results here:

https://www.3dmark.com/search#/?mode=advanced&url=/proxycon/ajax/search2/cpugpu/spy/P/2478/1290/500000?minScore=0&cpuName=AMD Ryzen 7 3800X&gpuName=NVIDIA GeForce RTX 2080 SUPER&gpuCount=0

Take note of everyone else’s settings when doing this comparison to make sure it’s apples to apples. There are a lot of things that can impact your score, in particular the temperatures you’re running at, which affect the algorithm that decides the clock speeds your processor and GPU settle on. Cooler temps will let your CPU and GPU run faster for longer; HWInfo64 will allow you to monitor those temperatures. The guys at the top are running SLI, so they have two cards in the same machine, hence the massive delta.
 
The stock speeds for RAM are the default JEDEC standard; XMP is faster than that. You want to keep XMP enabled to take advantage of your kit's rated 3600 MHz speed. It makes a massive difference, as you’ve seen.

SMT is “simultaneous multithreading”. Basically, it allows each core to work on two threads at once, which is why you have an 8-core processor but 16 threads. This is AMD’s version of Intel’s “Hyper-Threading” technology, which might sound more familiar to you. Hyperthreading sometimes led to slower gaming performance back in the day (the Intel Haswell era, roughly), but nowadays its impact on performance seems to vary from title to title:

https://www.google.ca/amp/s/www.techspot.com/amp/review/1882-ryzen-9-smt-on-vs-off/

You can try disabling it in the BIOS and see the impact, although as you can see there, on average it made about a 1% difference. If you want my opinion, I left SMT on for my Ryzen 7 3700X.
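If you ever want to confirm whether SMT is currently enabled, the core and thread counts tell you directly: with SMT on, a 3800X shows 8 physical cores but 16 logical processors. A small check, assuming Python with psutil installed:

```python
# Report physical vs logical core counts; logical == 2 * physical means
# SMT (or Hyper-Threading) is enabled. Requires the psutil package.
import psutil

physical = psutil.cpu_count(logical=False)
logical = psutil.cpu_count(logical=True)
print(f"physical cores: {physical}, logical processors: {logical}")
print("SMT/HT enabled" if logical == 2 * physical else "SMT/HT disabled or not present")
```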

Regarding the link, check the one I posted in my previous post. You need to compare against results from people whose configuration matches yours as closely as possible. The “high end gaming PC” bar on that chart uses an i9 processor, which is faster in gaming than a 3800X, so right there you can’t compare the two. You need to compare your result with others who are running a 3800X with a 2080 Super to make it fair. When doing so, pay attention to the reported clock rates, and when running the benchmark, have nothing else running in the background aside from any monitoring software. You might want to monitor your temps with HWInfo64 and check for clock-rate consistency with either that or Afterburner. If your components get too warm, they will automatically throttle down; if others are running cooler temperatures, they will have higher scores than you with the same configuration. Just something to keep in mind.

When I ran the comparison of your score against others with your same configuration, it looks to me like your results are within what you should expect for your hardware. My opinion: you’re fine, everything looks perfectly normal and within expectations, you have an awesome build that will give you years of enjoyment, so go rack up some frags in your favourite FPS and stop worrying so much :).
 
Since you're using two 8 GB memory modules, it wouldn't hurt to make sure they are in slots A2 and B2.
 
You're fine. You're less than five percent off the pace of a system with a 2080 and a 9900K, and the 9900K is faster than your CPU by several percent; 3DMark weights the CPU score pretty heavily.

Many of the scores I see using the same hardware are running their graphics cards roughly 100 MHz faster than you, which would account for any difference you see. Experience tells me they're probably using water cooling to achieve this.
 
[Attached screenshot: Clipboard01.jpg]

This is fun: the difference between a 2017 PC and a 2020 one...
 
Wow, the volume of responses that I have gotten on this forum compared to others is extremely encouraging and really helpful! You guys rock! I went and did some more digging and it turns out I may just BE worrying too much (call it first-time-build paranoia lol). My numbers ARE well within the margin of error for all of the benchmarks that I have run (I see that now thanks to your helpful links)! After some research I have concluded that Battlefield games (especially the newer ones) and Call of Duty games tend to be sorta... weird... on PC, which I think makes sense when I consider the experience I have had with them on other systems (prebuilts and laptops), but since this was my first personal custom build I was extra worried about it.

I did make sure that my chipset drivers, BIOS, and graphics drivers are all up to date and as close to defaults as possible, and they are. I settled on leaving SMT and XMP on, since most advice I saw when building said they are decent options and that whatever weirdness there is with them will likely even out as software catches up and makes fully optimized use of them. Knowing that turning them off is an option, that will definitely be on my list of potential tweaks if I ever encounter this sort of thing in the future.

The last thing I would ask for is guidance on a few other BIOS settings that I forgot to mention in the original post. I do not know anything about BIOS tweaking, to be honest, and I probably should not have messed with it when I first built the PC, but I did and I forgot. I turned "Performance Enhancer" up to its maximum, I turned "Core Performance Boost" up to its maximum, I set "Performance Bias" to whatever sounded like it would give me the most speed, and I enabled "Precision Boost Overdrive." These are all I messed with as far as I know (aside from upping the RAM to its rated speed using DOCP, of course). Sorry for the absolute brick of a paragraph lol!

Thank you all! :) My mind is eased!
 
Techpowerup has a really good article on RAM tweaking for Ryzen. It's a good place to start; it will help you understand a little about what's going on and also get you to optimal settings quickly.

Of course 3600 MHz XMP RAM is faster than stock speeds; I'm not sure why people felt the need to point that out. My point is that there may be a few non-optimal numbers in the RAM settings, which could be causing or contributing to some of your poor-performance scenarios. The point of trying the stock RAM speeds was to see whether the behavior changed (even though overall performance would go down).

Regardless, Techpowerup's RAM-tweaking articles show very clearly how a couple of non-optimal numbers can make a notable difference, even when your system seems like it should be fine. I have even experienced it on Intel systems.
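As a concrete example of why the individual numbers matter more than the headline frequency: first-word latency works out to CAS latency × 2000 ÷ transfer rate (MT/s), so a kit's timings move the real latency around quite a bit. A quick illustration; the kits below are generic examples, not the OP's exact modules:

```python
# First-word latency (ns) = CAS latency * 2000 / transfer rate (MT/s).
# The kits below are generic illustrations, not the OP's exact modules.
kits = {
    "JEDEC DDR4-2133 CL15": (2133, 15),
    "XMP   DDR4-3600 CL16": (3600, 16),
    "XMP   DDR4-3600 CL18": (3600, 18),
}

for name, (rate_mts, cas) in kits.items():
    latency_ns = cas * 2000.0 / rate_mts
    print(f"{name}: {latency_ns:.2f} ns")
```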
 
I had similar issues to what you're describing on an earlier Ryzen build. It ended up being a RAM issue (after lots of hardware trial and error). The RAM passed all tests, but I was getting random GPU crashes and poor performance in BF1, BFV, and Anthem. Since the RAM I was using at the time wasn't on the qualified list for my mobo + CPU combo, I got a set that was, and it fixed all of the issues; notably, the same set of "bad" RAM worked 100% fine in an Intel build I did. I've only bought qualified memory for my Ryzen builds ever since and haven't run into it again. Intel systems aren't as picky with memory, but Ryzen systems have been since release and still are. If you aren't using a set on the qualified list for your particular mobo (and if you're using Corsair memory it has to be the exact model and version listed), I'd try one and see where that goes. On top of that, you're pretty much guaranteed to run at the qualified speed listed without any trial and error in the BIOS.
 
The driver theory is possible, but legally it would wreck them if they were caught doing that kind of thing.

Keep in mind that games evolve and demand more out of cards and CPUs, and Nvidia has to constantly optimize drivers for hundreds of games. Over time games increase in detail and optimizations for older cards stop. And have you noticed how Nvidia drivers are always around 500 MB in size? Space in that package that used to go to old cards has to be given over to new cards.
 
OP, it could be your temps.
My 2080 Ti was crashing constantly. GPU core temps were not peaking past 55°C (water cooled), but games were constantly crashing.

I removed my water block and remounted it with new paste and thermal pads, lightly tightened down. Now the GPU rarely hits 50°C, and I have steady boost and memory clocks, steady FPS, and no more crashes.

In theory, I suppose the issue was simply overheating VRMs that weren't getting cooled properly, or maybe the VRAM was overheating.

Sometimes you have to take the card apart, re-paste, and reassemble. Give it a shot.
 
IF heat was your issue, it wasn't the core; those are good until 90°C+. The VRM or VRAM would be the issue.
 
I never said it was the core; it was either the VRM or the VRAM. I bought the EVGA card that doesn't have five billion temp sensors and didn't realize it until I needed them. My next card will be fully loaded with temp sensors.
 
Honestly, I'm trying to make sense of this thread. Is the issue that he was anecdotally noticing something like 120+ FPS a while back across 4 or 5 games with different engines, and is now anecdotally noticing something like 100+ FPS across the same 4 or 5 engines?

Has anyone else with the same setup compared a benchmark with him?
 
We checked against comparable systems on 3DMark. He’s good to go.
 
OK, so turn off the overlay and log over time across various patches, then compare.

Maybe upload the results to YouTube for entertainment purposes.
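If anyone actually does that logging, the comparison is easy to script: keep one FPS log per patch or driver version and diff the averages. A rough sketch; the filenames and the single "fps" column are made up, so adapt them to whatever the overlay or logger exports:

```python
# Compare average FPS across logs captured on different game or driver patches.
# The filenames and the single "fps" column are made-up placeholders.
import csv
import statistics

log_files = ["patch_1.04.csv", "patch_1.05.csv", "patch_1.06.csv"]  # hypothetical

averages = {}
for path in log_files:
    with open(path, newline="") as f:
        fps_values = [float(row["fps"]) for row in csv.DictReader(f)]
    averages[path] = statistics.mean(fps_values)

baseline = averages[log_files[0]]
for path in log_files:
    change = 100.0 * (averages[path] - baseline) / baseline
    print(f"{path}: {averages[path]:.1f} FPS average ({change:+.1f}% vs first log)")
```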
 