6700K paired with a 3080 (bottleneck?)

I have a 5930K at 4.45 GHz.

Everything is fine at 4K except my 4K modded Skyrim.

I want a 3090 and am debating a 144 Hz big screen or the new LG for FPS games.

Oddly, the TVs still won't do everything I want over my Samsung 4K.

Lesson: there's always something better. If you want to spend $1k+ for maximum return and have the cash, go 3090/3080.

Want a few more benchmark FPS? Get the chip.

If you came here saying you were on X58 we could say yeah, but even X99 is still rocking hard today.

I'd definitely wait for something more exciting in the mobo/chip realm... but hey, that's why Intel is now behind (kinda).
 
Hello everyone,

I usually upgrade my GPU every other generation. I currently have a 6700K (overclocked to 4.6 GHz) with a GTX 1080 and plan to game on my 4K TV (120 Hz, G-Sync) via HDMI 2.1. Do you think an RTX 3080 would be bottlenecked by my older Skylake CPU at 4K resolution? My current specs are:

i7 6700K OC'd to 4.6 GHz
GTX 1080
16 GB DDR4-3000 RAM
ASUS Hero mobo (PCI Express 3.0)
750 W PSU

I would prefer to keep the rest of my PC and replace just the GPU unless bottlenecks are inevitable. Thanks in advance.

I think you're already at the minimum comfortable spot with that card.
If you plan on gaming at 4K, even an RTX 3080 will hardly reach >60 fps at high settings; you'll need an RTX 3090 for 4K 120 Hz.
I also doubt that an RTX 3090 can average 120 fps at high settings in 4K.
 
Using a Gigabyte RTX 3080 Gaming OC here with an overclocked 6700K at 4.6 GHz. The overall performance in benchmarks matches the average. Serious bottlenecking happens only in very crowded parts of Night City, where frames can dip to 40 with the CPU at 99% and the GPU at 70%. I'm playing at 3440x1440. In other parts of the game I get 60-75 FPS; the outskirts are 80-90 FPS. All maxed out, RT on Ultra, DLSS on Balanced.
Apart from that, no game is showing this. The performance boost over my previous card (Aorus GTX 1080 Ti) is insane.
Deus Ex MD, TW3, Control, XCOM 2, Hellblade, PUBG, GTA5 are all maxed out and most are locked at 100 FPS or higher (if G-Sync is turned off). In GTA5 I set some insane MSAA settings and it's smooth as butter at all times.
Glad I decided to grab the card 3 days after release. The good old Skylake can pull it quite well. It will be replaced, but it delivers better than expected.
 
I moved from a 7700K to a 10850K and it's night and day. But I keep a lot of shit running for a long time. I had a bunch of PCIe 3.0 SSDs based on U.2 and Optane, so AMD wasn't an option, and the same went for my cooling solutions. I'm happy with it, but it's an EOL system at this point (2080 Super) and I'll camp on it till PCIe 5.0 and new things.
 
Yeah, CPU bottlenecks are real; I found out the hard way when I upgraded from a 1060 to a 2060. My 2600K @ 4.6 GHz pushed the 1060 for all it was worth at 1080p/120Hz.

But as soon as I got the 2060, I never got more than 76% usage out of it unless I ran it at 1440p. I was lucky to see more than 10-15 frames gained over the 1060. Dropped a few bones and got me a 10700K and was blown the fuck away with the performance at stock vs. the 2600K with the same card.

I still can't get over the staying power 4c/8t CPUs retained all these years, though. Who would have thought we could game on a 10-year-old CPU?
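If anyone wants to check this on their own rig, a minimal Python sketch like the one below does the job (it assumes an NVIDIA card with nvidia-smi on the PATH; the sample count and 90% threshold are arbitrary picks, not anything official). Sustained GPU utilization well under 100% while your frame rate is uncapped points at a CPU limit, exactly the 76% pattern described above.

```python
# Rough CPU-bottleneck check: sample GPU utilization while a game runs.
# Assumes an NVIDIA card with nvidia-smi on the PATH; SAMPLES and the
# 90% threshold are arbitrary picks.
import subprocess
import time

SAMPLES = 60  # one minute of 1-second samples

readings = []
for _ in range(SAMPLES):
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    # First line only, in case of multiple GPUs.
    readings.append(int(out.stdout.splitlines()[0].strip()))
    time.sleep(1)

avg = sum(readings) / len(readings)
print(f"average GPU utilization over {SAMPLES}s: {avg:.0f}%")
if avg < 90:
    print("GPU is waiting around; likely CPU (or engine) limited")
```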
 
I went from a 2500K to a 6700K and saw a huge difference on my 1080 Ti at the time.

Now I've gone from the 6700K (4.6 GHz) to a 5600X with a 3080. Other than Windows feeling a little snappier, I don't see any FPS gains at 1440p or 4K (maybe 5%?).

Maybe when more multi-threaded games come out next year (because of the new consoles) it'll make a difference, but for right now it wasn't worth it IMO.
 
After gaming at 4K with my 3090 for over a month now on my overclocked 5960X, I am 100% waiting till LGA1700 and DDR5. No reason to sink money into a new build right now with the crazy prices and somewhat dead-end platforms (after Intel 11th gen anyway). Can't say I am familiar enough with AMD these days to know if their next series will be a new socket as well.

Jumping on DDR4 when it first came out with X99 turned out to be an amazingly long-lasting move for me; maybe jumping on LGA1700 with DDR5 will prove to be the same kind of long-lasting jump.
 
It's only a problem if your goal is to game above 60 fps, I would say, in most cases. If you plan to upgrade to an HDMI 2.1 monitor in the near future, then you should do a CPU upgrade to an 8-core or higher at that point.
 
You might consider getting a 6950X off eBay to breathe more oomph into your X99 board: 10 cores/20 threads.
6950Xs are beasty CPUs and only <$300 on eBay. That'll tide you over till AM5 easily (or until Intel releases something that actually makes a wave). Or frankly -- more objectively -- until AMD or Nvidia can release a graphics card that isn't the bottleneck at 4K, because the 5th and 6th gen Intel CPUs being discussed here in this X99 thread are not the bottleneck in high-resolution gaming.

I'm pretty convinced I have zero reason to upgrade right now from my 6950X gaming at 3440x1440, except boredom. Every time I look at game benchmarks at 4K with various CPUs and a top-tier video card, there's just nothing to gain.
 
Don't forget that min FPS is massively impacted by the CPU; I saw a big jump in CP2077 when moving from a 5960X to a 5800X.
From the 5800X to the 5900X, nada, but I knew that already and wanted it anyway lol. And yes, ironic naming schemes between Haswell and Vermeer...
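For anyone wondering how "mins" like these get quantified: capture tools (CapFrameX, PresentMon, and the like) can export per-frame times, and one common metric is the "1% low", the average FPS across the slowest 1% of frames. A minimal sketch of the arithmetic, with made-up frame times standing in for a real export:

```python
# One common definition of "1% low": average FPS over the slowest 1%
# of frames. Real frame times would come from a capture tool's export;
# the numbers below are made-up placeholders.

def fps_stats(frametimes_ms):
    total_s = sum(frametimes_ms) / 1000.0
    avg_fps = len(frametimes_ms) / total_s  # frames / seconds

    # Average FPS over just the slowest 1% of frames.
    n = max(1, len(frametimes_ms) // 100)
    slowest = sorted(frametimes_ms, reverse=True)[:n]
    low_1pct = 1000.0 * n / sum(slowest)
    return avg_fps, low_1pct

# Example: mostly 60 FPS frames (16.7 ms) plus a few 30 ms hitches.
frames = [16.7] * 990 + [30.0] * 10
avg, low = fps_stats(frames)
print(f"avg: {avg:.1f} FPS, 1% low: {low:.1f} FPS")  # ~59.4 / ~33.3
```

A handful of hitches barely move the average but drag the 1% low way down, which is why a CPU upgrade can feel smoother even when average FPS barely changes.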
 
Don't forget that min FPS is massively impacted by the CPU; I saw a big jump in CP2077 when moving from a 5960X to a 5800X.
From the 5800X to the 5900X, nada, but I knew that already and wanted it anyway lol. And yes, ironic naming schemes between Haswell and Vermeer...
What games, what mins?

I recently put 110 hours into Cyberpunk 2077 (arguably the most hardware-punishing game there is) with my Intel 6950X. I didn't see any behavior on minimums that was alarming, or even visible, and I played with the Windows 10 Xbox overlay FPS counter up the whole time. The game ran surprisingly well with all settings maxed out and ray tracing on Psycho at 3440x1440, at about mid-50s FPS average (with G-Sync). I'm not sure frame times should differ significantly between a Skylake or Broadwell 8- or 10-core and a modern 8- or 12-core CPU at 4K resolution.

3090 and various CPUs at 4K
[attached benchmark chart]

Vega 64 and various CPUs at 1080p
[attached benchmark chart]
 
Those canned tests don't capture the whole game. My frame rate on the 5960X at 4 GHz would drop down to the mid-30s in certain sections with lots of NPCs; with the same 3080 it stayed around 50 in those exact same areas on Zen 3, though overall FPS per level probably didn't change too much. Running 3840x1600 UW maxed out...
 
Those canned tests don't capture the whole game. My frame rate on the 5960X at 4 GHz would drop down to the mid-30s in certain sections with lots of NPCs; with the same 3080 it stayed around 50 in those exact same areas on Zen 3, though overall FPS per level probably didn't change too much. Running 3840x1600 UW maxed out...
The only place I saw the 30 FPS range was in the forested city park area of the Glen. You aren't getting 30 FPS in that area now with all max settings on your Ryzen 5900X?

Post 2280
https://hardforum.com/threads/cyber...ieres-june-25.1998205/page-57#post-1044895126
 
Will have to test, but the 5800X was def about 10-15 fps better for mins in that section; I expect the 5900X to be the same. I never replay a game... currently on medium, Ghostrunner was just too frustrating lol.
 
Will have to test, but the 5800X was def about 10-15 fps better for mins in that section; I expect the 5900X to be the same. I never replay a game... currently on medium, Ghostrunner was just too frustrating lol.
Do test for me please, because that's the only section of the game for me that reliably dipped below the 50 FPS range and into the 30s. If it really makes a difference there, that'll be interesting. You can see in my screenshot that this area used LESS CPU for me. I think the GPU was getting hammered by the Psycho ray tracing setting (all max settings) and all the leaf/foliage it had to render, which seems to free up the CPU load. My GPU in Cyberpunk was normally at 90-97% and my CPU at 60-80%, but in that scene (the lowest FPS in the game by far) my GPU was at 100% and my CPU at 45%.
So I'm confused about why a faster CPU would help there?
[attached screenshot]
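For anyone wanting to do this kind of scene-by-scene diagnosis themselves, a tiny side-by-side logger is enough. A sketch, assuming psutil is installed (pip install psutil) and nvidia-smi is on the PATH: GPU pegged near 100% with the CPU well under it means the GPU is the limit in that spot, and the reverse pattern points at the CPU.

```python
# Log CPU and GPU utilization side by side while roaming a heavy scene.
# Assumes psutil is installed and nvidia-smi is on the PATH; the
# 30-sample run length is arbitrary.
import subprocess
import psutil

for _ in range(30):
    cpu = psutil.cpu_percent(interval=1)  # blocks for a 1-second window
    gpu = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()[0].strip()
    print(f"CPU {cpu:5.1f}% | GPU {gpu}%")
```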
 
One thing most "benchmarks" on sites do not compare is overclocks on the CPUs they test when comparing across a line. If you can run your 5960X at 4.5 GHz+ or your 6950X at 4.3 GHz+, you can still play ball with most modern CPUs, very much so for gaming at 2K and 4K... even 1080p honestly, if you are pushing more frames than you can see anyway. When I compared my heavily OC'd 5960X to other CPUs in Cinebench, I was up there with a 3700X and an 8700K, which are very much more "modern" CPUs. The X99 platform had some weaker IMCs, but later batches of Haswell-E and Broadwell-E had some good IMCs that could do DDR4-3200 with good timings pretty easily (which is a great price/performance range for DDR4 on Intel).

I have also put in a ton of hours on CP2077 at 4K, everything maxed, Psycho RT with Balanced DLSS and Slow HDD Mode = On (removes all stuttering on SSDs and NVMe drives too); the game is smooth as butter (generally 50+ FPS). Like the last post said, it chokes only in that park within the city, and that's likely the Psycho RT at work there. In other games I play, like BFV at 4K with RT on, I'm almost always pegged at 120+ FPS, with dips into the low 100s/high 90s on heavy RT maps like Rotterdam and Solomon Islands.

Want proof of how "non-limiting" some older high end platforms can be? Check out our very own thread in these forums...

https://hardforum.com/threads/rtx-3090-h-owners-official-3dmark-time-spy-leaderboard.2003111/

I am still within the top 10 for graphics score in Time Spy with my 3090, and that's actually a 2K test, not 4K! Up there with 10-series Intels and 5xxx-series AMDs.

I do not think anyone would deny a newer platform is always better... you will eventually gain Smart Access Memory for your GPU, and maybe someday PCIe 4.0 will actually make a difference for video cards (it does not matter at all now). But for anyone with an HEDT X99 or X299 setup with a good OC, I would wait until 2H 2021 for DDR5 and maybe PCIe 5.0 at this point. As of right now, most games will run perfectly fine at 2K and 4K on these older CPUs. I would say you need 8C/16T minimum though, with a good OC.
 
A 7700K was a massive bottleneck in modern games with a 3080. I got my GPU before my CPU, so I had to run on the 7700K for a while (5 weeks delivery time on the CPU even though I ordered 2 minutes after launch). Much smoother framerates with the new CPU, and in some games a 70% increase in framerate at 1440p. Even in Cyberpunk 2077 with ray tracing on Ultra and DLSS off, the difference was about 15% with lots of people in the scene. With DLSS the difference was closer to 50%.

While the graphics card does more of the work at 4K, the CPU will still be a problem there, since the stuttering comes from the CPU not being able to process data fast enough. Higher res means more draw calls etc., and an overloaded CPU will struggle. Even when GPU-bound, the min framerates will still suffer. The problem is the lack of threads on the 6700K and 7700K. With an 8-core/16-thread you should be fine, and probably with a 6-core/12-thread even at lower clocks, but 4-core/8-thread is too low.
 
I'm also on a 6700K and will continue to be until Intel actually makes some kind of advancement on the CPU front. A 10900K's benefit would have been minimal, and this is over FOUR years after my 6700K build.
I considered AMD too for a while, but I don't want to go with AM4, which is also already 4 years old.
I'll wait until AM5, unless Intel gets their sh*t straight before that.

A 3090 will be my GPU, and I don't expect much difficulty with my current 4-year-old CPU/mobo combo. The older PCIe generation might bottleneck at very high framerates, which won't be a problem for 4K gamers.
Big Navi might be good, but they're 1.5 months late with their release, so they will lose a lot of enthusiasts' money to Nvidia.
The older PCIe generation is the LEAST of your bottlenecks
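To put rough numbers on that: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, so an x16 slot moves roughly 15.8 GB/s per direction, and each newer generation doubles it. Games transfer nowhere near that much over the bus per frame, which is why the slot generation barely shows up in GPU benchmarks today. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope PCIe bandwidth per direction for an x16 slot.
# 128b/130b encoding applies to PCIe 3.0 and newer; rate is GT/s per lane.

def pcie_x16_gb_s(rate_gt_s, lanes=16, encoding=128 / 130):
    # GT/s * encoding efficiency = Gbit/s per lane; / 8 bits = GB/s.
    return rate_gt_s * encoding * lanes / 8

for gen, rate in [("3.0", 8.0), ("4.0", 16.0), ("5.0", 32.0)]:
    print(f"PCIe {gen} x16: ~{pcie_x16_gb_s(rate):.1f} GB/s per direction")
```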
 
I worry the same after every GPU launch. I still have my trusty 8700K and doubt I'll need to upgrade for 3 years... sure, we'll probably lose about 3 fps give or take this gen, but that does not warrant a $400+ upgrade.

If you had a 2500K though, for example, you'd def have to upgrade.
I was worried my [email protected] might hold back my 3080. I'm pretty sure I won't need an upgrade for at least 3 years, maybe more. I feel like the Ryzen 5 3600X in another system with a 3080 is more in need of an upgrade.
 
I have a 6700K @ 4.6 GHz also. Still in search of that 3080/3090 unicorn. But at 1440p.
 
I replied to this thread a while ago... before upgrading my i7 6700K / RTX 2080 Ti system to an R7 5800X / RTX 3090 system... I spent some time using my RTX 3090 in my 6700K system while waiting for parts to become available for my new build...

The short answer is... it makes less of a difference at 4K than at lower resolutions, but there is a very noticeable difference in many games. I found intermittent short stutters pretty common when other things were going on in the background. That is gone on the 5800X system. It is overall a smoother experience. It is hard for me to put into words, but a lower GPU framerate feels much different than a CPU bottleneck. In some games it is hard to put your finger on, but when you upgrade... you will feel the difference, at least that is my experience. I can even tell a difference in Valheim... which isn't a particularly demanding game (from a CPU/GPU utilization standpoint), but the experience is noticeably smoother than with the same GPU on the i7 6700K system.

As many have pointed out, the difference the CPU/platform upgrade makes will vary depending on the game, but IMHO now is a great time to move on from a 6700K system. While it can technically play the games when paired with a beefy GPU, there is a difference in many games with the upgrade. If money is tight, feel free to hold on, but if you are looking for a premium gaming experience (as evidenced by selecting an RTX 3080, IMHO), it might be worth considering a CPU upgrade to go with the GPU. I know in my case, my 6700K system was 6 years old...
 
I said exactly that about moving from a 6950X to a 3800X and now a 5800X. Certainly smoother gameplay, and very stable frametime-wise.
 
I replied to this thread a while ago... before upgrading my i7 6700K / RTX 2080 Ti system to an R7 5800X / RTX 3090 system... I spent some time using my RTX 3090 in my 6700K system while waiting for parts to become available for my new build...

The short answer is... it makes less of a difference at 4K than at lower resolutions, but there is a very noticeable difference in many games. I found intermittent short stutters pretty common when other things were going on in the background. That is gone on the 5800X system. It is overall a smoother experience. It is hard for me to put into words, but a lower GPU framerate feels much different than a CPU bottleneck. In some games it is hard to put your finger on, but when you upgrade... you will feel the difference, at least that is my experience. I can even tell a difference in Valheim... which isn't a particularly demanding game (from a CPU/GPU utilization standpoint), but the experience is noticeably smoother than with the same GPU on the i7 6700K system.

As many have pointed out, the difference the CPU/platform upgrade makes will vary depending on the game, but IMHO now is a great time to move on from a 6700K system. While it can technically play the games when paired with a beefy GPU, there is a difference in many games with the upgrade. If money is tight, feel free to hold on, but if you are looking for a premium gaming experience (as evidenced by selecting an RTX 3080, IMHO), it might be worth considering a CPU upgrade to go with the GPU. I know in my case, my 6700K system was 6 years old...
Thanks for the insight. I've been looking at the 5800X.
 
I was planning to get an R9 5950X instead of an R7 5800X, but when I started looking into it, there really isn't an appreciable gaming-performance difference, especially at 4K, from going with the higher-end Ryzen 5000 series processors. Combine that with the fact that the R7 5800X is now actually available for purchase... I started collecting parts for this build at the end of November, so I was sick of waiting.

It is possible that a couple of things I plan to do with this system that are non-game-related could benefit from the higher-spec parts, but not enough to make waiting longer worth it to me at this point. The 5800X seems to get a bad rap from reviewers etc., and I can appreciate where they are coming from, but it seems like a balanced processor for the money for a high-end gaming system.

I don't regret the purchase/upgrade from my i7 6700K system. That system has served me well (and will continue to for VR), but now was a good time to upgrade to maintain that premium, smooth, high resolution/refresh rate gaming experience as games start to take advantage of the higher-end hardware that is available now.
 
I was on the holdout train for LGA1700, but an Intel 11900K is going to be very tempting as an upgrade from my 5960X. Maybe PCIe 4.0, Resizable BAR, and generally better IPC will get me through a generation or two while DDR5 and PCIe 5.0 mature after they release later this year.

This is of course assuming an 11900K can even be had before then...
 
I have a 3090. This thread is convincing me that I really gotta upgrade from my 6700K (and update my sig).
 
Currently have an RTX 2080 with a 6700K at stock speeds, and it's not that bad. Been running it for 2 years now, since the RTX 2080 released.
I'm losing around 5% in performance, which isn't that noticeable.
I think once you go to an RTX 3080, that's when you should think about upgrading, or a whole new build.

I'm still trying to get a 3080 to pair with my 8700K, but it's still out of stock everywhere.
 
Why don't you two overclock the 6700K? Overclockable, that's what the K is for, NOT for not overclocking it.
Pretty much all of these Intels can overclock to between 4.9 and 5.0 GHz on all cores with air; then again, if your cooling or thermal paste isn't up to it, why bother with an overclockable CPU?
It's like getting a Ferrari and all you're gonna do is drive it at the speed limit and never take it to 120 mph on the highway... might as well go for a Hyundai then.

Get good air cooling; even a $50 Corsair A500 can deliver, and with some Thermal Grizzly paste you can easily hit 5.0 GHz. Cryin' out loud!!!!
Because most 6700Ks can't overclock that high.
Only ~6% can clock to 4.9 GHz, and that's at a whopping 1.44 V.
https://siliconlottery.com/pages/statistics

Looks like 4.7 GHz is likely doable though (68% of chips will do it at a still-high 1.4 V).
 
What is a safe voltage on these Skylake processors? I feel uncomfortable with 1.4 V+ for long-term use. Is that being too conservative? (I actually fried a processor back in the Pentium 4 era with too much voltage, voltage I had read should be okay, so since then I've been a little gun-shy.)
I've been running my 6950X at 1.285 V, and it only hits 4.1 GHz reliably at that voltage. It is water-cooled (360 mm rad), but CPU speed hasn't really seemed to be much of a problem with 10 cores/20 threads gaming at 3440x1440. I have a 3080 and don't feel bottlenecked.
 
What is a safe voltage on these Skylake processors? I feel uncomfortable with 1.4 V+ for long-term use. Is that being too conservative? (I actually fried a processor back in the Pentium 4 era with too much voltage, voltage I had read should be okay, so since then I've been a little gun-shy.)
I've been running my 6950X at 1.285 V, and it only hits 4.1 GHz reliably at that voltage. It is water-cooled (360 mm rad), but CPU speed hasn't really seemed to be much of a problem with 10 cores/20 threads gaming at 3440x1440. I have a 3080 and don't feel bottlenecked.
I believe the 6950X is Broadwell-E and NOT Skylake-based... it still used a ring bus like Haswell-E (not mesh) for the cores, and Intel went back to that idea with the 10-series LGA1200 CPUs (lower latency). I know Haswell-E can go up to 1.4 V safely if you can keep it cool (1.35 V for 24/7 use), but I believe for Broadwell-E the consensus was 1.35 Vcore as a safe max, since those ran much hotter.

You can probably safely push that thing to at least 4.3 GHz, but I believe those Broadwell-Es were crap overclockers and would hit a wall at 4.3 or 4.4 without some crazy voltages. Maybe you got lucky! It will likely not hurt to test the waters as long as you do not go crazy. You need to keep an eye on all the other voltages though, which is why I manually set everything so the motherboard does not go crazy on "auto".

I think Broadwell-Es did have some issues with dying, but that could have been people not keeping voltages under control. A lot of people in that day added stupidly high SA or cache voltages and fried things over time.
 
You don't have to "believe" or guess what something is when Google is right in front of you. Yes, the 6950X is Broadwell-E, and all it takes is a one-second search.
 