10900k/10850k vs 5900x: Future proof and 4K gaming

But you cap out at 8 cores...
But then so does most software. Unless everyone around here is a video editor or 3D modeler now? Seems like most in this circle are gaming and browsing. FOMO and hype appear to have amped up the Zen 3 shortages, with comment sections full of bros fiending for a "sic 5950 bild" because their YouTube lords told them it's the bestest. And seeing more cores in Task Manager is no doubt psychological contentment.

I'd argue the first 8 cores of a CPU are the most important in the current landscape of multithreaded software, since meaningful scaling beyond 8 cores falls off a cliff of diminishing returns.

Give me 8 of the most powerful cores I can buy, running lockstep at 5.5GHz+ all-core, over 12-16 cores with only 2 boosted and the rest throttled. Obviously the usage scenario factors in, and it's great we have choices. If I needed to run VMs, then AMD is a no-brainer. I had bought all the parts for what I thought would be a nice 5900X system, but the closer I examined the shortcomings, and compared notes with a buddy's frustrations with a 5950X, the more I decided to wait. If Rocket Lake were another year out, sure, I'd just install the 5900X.
 
But you cap out at 8 cores...

You can pretty regularly pick up a 5800x. They were in stock for over an hour the other day at Amazon. It's the higher core counts that you can't get, and Intel has no answer for them.

So yes, AM4 is EOL essentially, but your LGA1200 alternative doesn't have what AM4 offers. Whether or not that matters to each individual's needs is another story (see also PCIe 4.0 SSDs). For the average person, 8 cores should be plenty (6 is probably plenty for average users). The 11600k vs. 5600x should be an interesting debate.

What reasonably priced consumer item is being held back by PCIe 3.0? I'm one of the people who ordered several 5800Xs from Amazon, and the estimated ship date was always 1/21 or later. I would love to have the new AMD, but the 5800X plus a comparable motherboard was $100 more than a 10850K setup. I don't really see either choice being significantly better than the other, and I grew tired of going back and forth in my mind over such a small difference in performance.
 
But then so does most software. Unless everyone around here is a video editor or 3D modeler now? Seems like most in this circle are gaming and browsing. FOMO and hype appear to have amped up the Zen 3 shortages, with comment sections full of bros fiending for a "sic 5950 bild" because their YouTube lords told them it's the bestest. And seeing more cores in Task Manager is no doubt psychological contentment.

I'd argue the first 8 cores of a CPU are the most important in the current landscape of multithreaded software, since meaningful scaling beyond 8 cores falls off a cliff of diminishing returns.

Give me 8 of the most powerful cores I can buy, running lockstep at 5.5GHz+ all-core, over 12-16 cores with only 2 boosted and the rest throttled. Obviously the usage scenario factors in, and it's great we have choices. If I needed to run VMs, then AMD is a no-brainer. I had bought all the parts for what I thought would be a nice 5900X system, but the closer I examined the shortcomings, and compared notes with a buddy's frustrations with a 5950X, the more I decided to wait. If Rocket Lake were another year out, sure, I'd just install the 5900X.
The situation will be different in a couple of years.

Compared to the 2015 situation, I'd say:
6C/12T = 4C/4T
8C/16T = 4C/8T
12C/24T = 6C/12T

In 2020, with a 4C/4T you've been suffering for a long time now, a 4C/8T is doing okay with some micro-stutters, and a 6C/12T processor will still be just fine until DDR5 time.
 
Or inadequate VRMs on an older board - that's why I keep asking what motherboard it is. :p
I didn't get a notification and missed your question.

It's an MSI Tomahawk X570 with the latest BIOS.
I answered in this topic because Google sent me here while I was researching my problem :)

I disassembled the PC 3 times.
I did about 24 hours of memtest without an error.
And I'm not saying that it has to happen, but when it happened to me I started searching.

There are a few topics about random reboots on the AMD Community forums. All involve the 5950X or 5900X, and some the 5800X.

I've also seen a topic on the MSI forum.
Google will probably redirect you to reddit.

The problem has occurred on MSI, ASUS and Gigabyte mobos. Maybe more.

So far I haven't been able to find an answer as to whether this is a BIOS, CPU, mobo or some other problem.

My RAM is HyperX Fury RGB 3600 CL17
 
I didn't get a notification and missed your question.

It's an MSI Tomahawk X570 with the latest BIOS.
I answered in this topic because Google sent me here while I was researching my problem :)

I disassembled the PC 3 times.
I did about 24 hours of memtest without an error.
And I'm not saying that it has to happen, but when it happened to me I started searching.

There are a few topics about random reboots on the AMD Community forums. All involve the 5950X or 5900X, and some the 5800X.

I've also seen a topic on the MSI forum.
Google will probably redirect you to reddit.

The problem has occurred on MSI, ASUS and Gigabyte mobos. Maybe more.

So far I haven't been able to find an answer as to whether this is a BIOS, CPU, mobo or some other problem.

My RAM is HyperX Fury RGB 3600 CL17
Does it crash at idle? I know a guy who had a 5950X crashing at idle; he got the CPU swapped and the problem was fixed.
 
I didn't get a notification and missed your question.

It's an MSI Tomahawk X570 with the latest BIOS.
I answered in this topic because Google sent me here while I was researching my problem :)

I disassembled the PC 3 times.
I did about 24 hours of memtest without an error.
And I'm not saying that it has to happen, but when it happened to me I started searching.

There are a few topics about random reboots on the AMD Community forums. All involve the 5950X or 5900X, and some the 5800X.

I've also seen a topic on the MSI forum.
Google will probably redirect you to reddit.

The problem has occurred on MSI, ASUS and Gigabyte mobos. Maybe more.

So far I haven't been able to find an answer as to whether this is a BIOS, CPU, mobo or some other problem.

My RAM is HyperX Fury RGB 3600 CL17
Still pretty new for the chip; I tend to wait ~3-6 months after release to buy CPUs for this reason (did Zen 2 TR and Coffee Lake this summer, about 3-6 months after both were released). By then, BIOS and drivers are rock solid and stable. Sucks, but there are always bugs with newer releases.
 
Does it crash at idle? I know a guy who had a 5950X crashing at idle; he got the CPU swapped and the problem was fixed.
I had a crash at very low load. I didn't notice it at idle, but I also didn't leave the PC untouched for an hour to check.
 
I had a crash at very low load. I didn't notice it at idle, but I also didn't leave the PC untouched for an hour to check.
The crashing happened after like 8 hours at idle, IIRC. But it could very well be an RMA case you have there.
 
And they don't really matter right now. GPUs still can't saturate Gen3, and Gen4 SSDs are good for a CrystalDiskMark run to gawk at the sequentials and then never think about again.


Only AM4 is EOL. Z490/LGA1200 will support Rocket Lake.

This is my hesitation with breaking the seal on the $430 Dark Hero I've got sitting here - no upgrade path. I'm regretting selling my delid/copper-IHS 9900K @ 5.2GHz all-core; I should've just kept it until the 11900K @ 5.5GHz, double-digit IPC uplift, Gen4 PCIe (FWIW), and no >4000MHz RAM overclocking headaches.

I'm going to hang back and let the herd fight and FOMO and F5 themselves stupid for an overpriced 5900X/5950X into Q2 with AMD not meeting demand, and wait until little blue Intel "11" boxes are stacked floor-to-ceiling at Micro Center next month. Z590 will allegedly be backward compatible with 10th-gen CPUs, which expands upgrade pathways if the boards become available before the 11th-gen CPUs.
GEN4 GPU. I have 2 x16 sockets for a total of 8 NVMe drives. A GEN4 GPU will run full bore on a GEN4 x8 PCIe link. On an X570 you have two x8 GEN4 sockets or one x16 socket. Using a GEN4 GPU in an x8 socket leaves you another x8 socket for drives or other devices. If you run a GEN3 GPU, well, you need all 16 CPU lanes to the PCIe sockets for one graphics card. YouTube's claim that "GEN4 is not something you'll notice in real-world use" is never backed up by test results. GEN4 helps boards with very few CPU lanes. When the GEN3 NVMe drives and GPUs are all sold, then YouTube will tell you why you want GEN4. It's twice as fast! I bet "you'll never need more than 4 cores" was something you believed when you had a 4-core Intel. My Sabrent drives are faster in all those queue depth/thread tests. Hell, a GEN4 drive in a GEN3 mobo jumps from 3300/2200 to 3500/3500. The write speed is 50% faster at every queue depth. People who own one don't quote you.
 
GEN4 GPU. I have 2 x16 sockets for a total of 8 NVMe drives. A GEN4 GPU will run full bore on a GEN4 x8 PCIe link. On an X570 you have two x8 GEN4 sockets or one x16 socket. Using a GEN4 GPU in an x8 socket leaves you another x8 socket for drives or other devices. If you run a GEN3 GPU, well, you need all 16 CPU lanes to the PCIe sockets for one graphics card. YouTube's claim that "GEN4 is not something you'll notice in real-world use" is never backed up by test results. GEN4 helps boards with very few CPU lanes. When the GEN3 NVMe drives and GPUs are all sold, then YouTube will tell you why you want GEN4. It's twice as fast! I bet "you'll never need more than 4 cores" was something you believed when you had a 4-core Intel. My Sabrent drives are faster in all those queue depth/thread tests. Hell, a GEN4 drive in a GEN3 mobo jumps from 3300/2200 to 3500/3500. The write speed is 50% faster at every queue depth. People who own one don't quote you.
Huh?

I'm really not sure what you're trying to say here.

I own PCIe 4.0 drives and PCIe 3.0 drives. For normal workloads, there's effectively no difference in performance on the same controller (the controller matters more - most Gen4 NVMe drives have better/newer controllers). Outside of benchmarks or some of the whack NVMe-oF stuff I do, you won't see it. And even then I'm limited far more by RoCE than by even a PCIe 3.0 based drive.

Folks have also benchmarked PCIE3 vs PCIE4 for GPUs, and there's really no difference there either.
 
GEN4 GPU. I have 2 x16 sockets for a total of 8 NVMe drives. A GEN4 GPU will run full bore on a GEN4 x8 PCIe link. On an X570 you have two x8 GEN4 sockets or one x16 socket. Using a GEN4 GPU in an x8 socket leaves you another x8 socket for drives or other devices. If you run a GEN3 GPU, well, you need all 16 CPU lanes to the PCIe sockets for one graphics card. YouTube's claim that "GEN4 is not something you'll notice in real-world use" is never backed up by test results. GEN4 helps boards with very few CPU lanes. When the GEN3 NVMe drives and GPUs are all sold, then YouTube will tell you why you want GEN4. It's twice as fast! I bet "you'll never need more than 4 cores" was something you believed when you had a 4-core Intel. My Sabrent drives are faster in all those queue depth/thread tests. Hell, a GEN4 drive in a GEN3 mobo jumps from 3300/2200 to 3500/3500. The write speed is 50% faster at every queue depth. People who own one don't quote you.
GPU GEN4. If all I have to work with is one x16 or two x8 sockets, a GEN4 GPU running GEN4 x8 leaves me a second socket for another device at x8. Using a GEN3 GPU takes all 16 CPU lanes, and an X570 board is out of lanes.
 
GPU GEN4. If all I have to work with is one x16 or two x8 sockets, a GEN4 GPU running GEN4 x8 leaves me a second socket for another device at x8. Using a GEN3 GPU takes all 16 CPU lanes, and an X570 board is out of lanes.

I'm still not sure what you are trying to say. My guess is you're saying that the bandwidth of PCIe 4.0 is twice that of PCIe 3.0, so you only have to run x8 on PCIe 4.0 to get the same bandwidth as PCIe 3.0 at x16?
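For what it's worth, that equivalence falls straight out of the spec numbers. A minimal sketch of the math (spec ceilings only, ignoring protocol overhead beyond line coding; not benchmark results):

```python
# Rough usable bandwidth of a PCIe link from the raw transfer rate.
# PCIe 3.0 runs 8 GT/s per lane, PCIe 4.0 runs 16 GT/s, both with 128b/130b encoding.
def link_gbps(gen: int, lanes: int) -> float:
    gt_per_s = 8.0 if gen == 3 else 16.0
    return gt_per_s * (128 / 130) * lanes / 8  # GT/s -> GB/s per lane, summed over lanes

for gen, lanes in [(3, 16), (4, 8), (3, 4), (4, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{link_gbps(gen, lanes):.2f} GB/s")
# PCIe 3.0 x16 and PCIe 4.0 x8 both land around 15.8 GB/s, and the Gen3 x4 ceiling
# of ~3.9 GB/s is why a Gen4 NVMe drive tops out near 3500 MB/s in a Gen3 slot.
```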
 
Huh?

I'm really not sure what you're trying to say here.

I own PCIe 4.0 drives and PCIe 3.0 drives. For normal workloads, there's effectively no difference in performance on the same controller (the controller matters more - most Gen4 NVMe drives have better/newer controllers). Outside of benchmarks or some of the whack NVMe-oF stuff I do, you won't see it. And even then I'm limited far more by RoCE than by even a PCIe 3.0 based drive.

Folks have also benchmarked PCIE3 vs PCIE4 for GPUs, and there's really no difference there either.
Nvidia didn't go to Gen4 GPUs for no reason. The Radeon VII didn't go to a GEN4 Radeon Pro VII for no reason. DDR/CPU/GPU/NVMe updates for different components may come in stages, and the real difference may not show to the user until all the parts have come up to the new generation. I don't throw money at Gen3 drives if my next purchase will be a mobo/CPU with Gen4 lanes. If I bought a new sTRX4 socket board tomorrow, everything in my old sTR4 board is ready to move up to all Gen4 CPU lanes. My case/power/fans/pumps/blocks/RAM/drives are all good from a 1900X to a 3970X. I don't have the money for the Radeon Pro VII. $1900. AMD's AM4 socket doesn't have very many lanes, but a Gen4 5700 XT gives me 8 Gen4 lanes for something else. The bonus is I need fewer lanes for each device.
 
Nvidia didn't go to Gen4 GPUs for no reason. The Radeon VII didn't go to a GEN4 Radeon Pro VII for no reason. DDR/CPU/GPU/NVMe updates for different components may come in stages, and the real difference may not show to the user until all the parts have come up to the new generation. I don't throw money at Gen3 drives if my next purchase will be a mobo/CPU with Gen4 lanes. If I bought a new sTRX4 socket board tomorrow, everything in my old sTR4 board is ready to move up to all Gen4 CPU lanes. My case/power/fans/pumps/blocks/RAM/drives are all good from a 1900X to a 3970X. I don't have the money for the Radeon Pro VII. $1900. AMD's AM4 socket doesn't have very many lanes, but a Gen4 5700 XT gives me 8 Gen4 lanes for something else. The bonus is I need fewer lanes for each device.
For the future. Not right now. Also, still not how that works.

Any x16 device (generalizing here) will work at x8, GPUs especially. This was benchmarked thoroughly when X570 came out. There was no performance difference - heck, there wasn't much of one even at PCIe 2.0. GPUs don't push much across the bus, as games are currently designed.
Don't get me wrong - I future-proof purchases too, but it doesn't make a lick of difference right now - especially for games.
 
For the future. Not right now. Also, still not how that works.

Any x16 device (generalizing here) will work at x8, GPUs especially. This was benchmarked thoroughly when X570 came out. There was no performance difference - heck, there wasn't much of one even at PCIe 2.0. GPUs don't push much across the bus, as games are currently designed.
Don't get me wrong - I future-proof purchases too, but it doesn't make a lick of difference right now - especially for games.
Yeah, probably 1yr+ before NVMe 4.0 really starts being utilized for gaming.
 
“Yes”.
At that resolution, it almost doesn't matter, and we just don't know how much Smart Access Memory will matter. But whichever you can find or feel like building.

That's not entirely true. While being GPU bound primarily eliminates the CPU from the equation, there are differences in frame times and minimum FPS between different CPUs. Sometimes it can be bad enough to ruin the gaming experience despite what your average frame rates seem to indicate. Granted, if you stick with a 5800X, 5900X, 5950X or 10900K, 10850K, 10700K, etc., you shouldn't see any differences between them at 4K.
 
GPU GEN4. If all I have to work with is one x16 or two x8 sockets, a GEN4 GPU running GEN4 x8 leaves me a second socket for another device at x8. Using a GEN3 GPU takes all 16 CPU lanes, and an X570 board is out of lanes.
That's not how that works.

[GPU-Z screenshot]

1080Ti, X570 board, PCIe 3.0, using 8 lanes of my 24 lanes. (Which has been proven, countless times, to not be a bottleneck.)
I have 4 NVMe drives and I just double-checked: each of them, including my PCIe 4.0 NVMe drive, is using 4 lanes.
8 (GPU) + 4 + 4 + 4 + 4 = 24.

My second PCIe slot, which could run at x8, is being used at x4 with an NVMe riser. Meanwhile, my motherboard has 3x M.2 slots, which is where the other 12 lanes are coming from.
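As a quick sanity check on that kind of lane budgeting, here's a tiny sketch; the device names and negotiated widths below are just this example's numbers, not anything universal:

```python
# Toy lane-budget check: sum the link widths actually negotiated by each device
# and compare against the lanes being counted on the platform.
devices = {            # negotiated link width per device (example numbers from above)
    "gpu_1080ti": 8,
    "nvme_0": 4,
    "nvme_1": 4,
    "nvme_2": 4,
    "nvme_riser": 4,
}
available = 24         # lanes the poster is counting against (CPU + board routing)

used = sum(devices.values())
print(f"lanes used: {used} / {available}")
assert used <= available, "over-subscribed: something will train at a narrower width"
```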

-----------------
Back on topic: both should be more than good for 4K gaming. My 3900X doesn't hinder anything I'm trying to run at 4K except for the usual Planet Zoo/Coaster and Cities: Skylines processor-limited scenarios.
 
That's not entirely true. While being GPU bound primarily eliminates the CPU from the equation, there are differences in frame times and minimum FPS between different CPUs. Sometimes it can be bad enough to ruin the gaming experience despite what your average frame rates seem to indicate. Granted, if you stick with a 5800X, 5900X, 5950X or 10900K, 10850K, 10700K, etc., you shouldn't see any differences between them at 4K.
Mass generalization on my side for sure, and you'd have to go back a generation (maybe two) on Intel for there to be any difference (not touching Zen 1 there).
 
Yeah, probably 1yr+ before NVMe 4.0 really starts being utilized for gaming.
If even then. Someone did a benchmark of game loading times between a SATA SSD and NVMe; almost no difference. Loading seems to be more filesystem- and IOPS-bound than throughput-bound for many games (there are outliers, and they're getting more common). Boot times and video rendering saw the most benefit.
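For anyone curious how the IOPS-vs-throughput distinction shows up, here's a rough, hypothetical sketch of the comparison (crude timing against a scratch file; the OS page cache will flatter the numbers, so treat it as an illustration rather than a benchmark):

```python
import os
import random
import time

# Crude illustration of throughput-bound vs IOPS-bound access patterns on a drive.
# PATH and sizes are hypothetical; a real benchmark would drop caches or use direct I/O.
PATH, SIZE, CHUNK = "scratch.bin", 256 * 1024 * 1024, 4096

with open(PATH, "wb") as f:                 # create a 256 MiB scratch file
    f.write(os.urandom(SIZE))

t0 = time.perf_counter()
with open(PATH, "rb") as f:                 # sequential read: stresses raw throughput
    while f.read(1024 * 1024):
        pass
seq_s = time.perf_counter() - t0

t0 = time.perf_counter()
with open(PATH, "rb") as f:                 # small random reads: stresses IOPS/latency
    for _ in range(20_000):
        f.seek(random.randrange(0, SIZE - CHUNK))
        f.read(CHUNK)
rand_s = time.perf_counter() - t0

print(f"sequential: {SIZE / seq_s / 1e6:.0f} MB/s, random 4K: {20_000 / rand_s:.0f} IOPS")
os.remove(PATH)
```

Game loading tends to look like the second pattern (lots of small reads plus CPU-side decompression), which is why the headline sequential numbers on Gen4 drives rarely translate into shorter load times.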
 
If even then. Someone did a benchmark of game loading times between a SATA SSD and NVMe; almost no difference. Loading seems to be more filesystem- and IOPS-bound than throughput-bound for many games (there are outliers, and they're getting more common). Boot times and video rendering saw the most benefit.
Yeah, we'll see if devs start bringing the consoles' suspend-game feature over to PC, along with the faster load times.
 
If even then. Someone did a benchmark of game loading times between a SATA SSD and NVMe; almost no difference. Loading seems to be more filesystem- and IOPS-bound than throughput-bound for many games (there are outliers, and they're getting more common). Boot times and video rendering saw the most benefit.
This. I have the 980 Pro on an X570 AM4 board. No crazy significant difference in gaming loads (currently) compared to my buddy's rig that I built (Z490 + Sabrent Rocket Q), and not a night/day difference compared to my old setup, which uses a Samsung SATA SSD.
 
Mass generalization on my side for sure, and you'd have to go back a generation (maybe two) on Intel for there to be any difference (not touching Zen 1 there).

You aren't wrong generally speaking. But I ended up going down a rabbit hole of discovery when it turned out my Threadripper 2920X was absolute shit for gaming at 4K.
 
You aren't wrong generally speaking. But I ended up going down a rabbit hole of discovery when it turned out my Threadripper 2920X was absolute shit for gaming at 4K.
Yeah, I’m finding that a bit with my 1950X and 2080TI. But I’m also playing older games at 4K; newer stuff is 1440P on my 3080- want every feature turned on for those.
 
I'd separate workloads when your resource specs dilute your bullet pts.

Leave the HEDT platform as a standalone and build out a 6c gaming box.

We are starting to see this in AWS, where people are ordering Outposts to feed with datacenter gear.

Old ideas are new again.
 
I'd separate workloads when your resource specs dilute your bullet pts.

Leave the HEDT platform as a standalone and build out a 6c gaming box.

We are starting to see this in AWS, where people are ordering Outposts to feed with datacenter gear.

Old ideas are new again.
At least for me, every system but the gaming one has multiple jobs. The 1950X doubles as an HTPC, Plex server, and VM farm when not playing some games at 4K. Etc. HEDT gives you the flexibility to do both, with some minor trade-offs.
 
At least for me, every system but the gaming one has multiple jobs. The 1950X doubles as an HTPC, Plex server, and VM farm when not playing some games at 4K. Etc. HEDT gives you the flexibility to do both, with some minor trade-offs.
Jack of most trades is not the master of 1.
 
Yeah, I’m finding that a bit with my 1950X and 2080TI. But I’m also playing older games at 4K; newer stuff is 1440P on my 3080- want every feature turned on for those.

The issue is that the NUMA architecture and internal latency of the 1000 and 2000 series Threadrippers make them less than ideal for gaming. Your minimum FPS and frame times end up being less than ideal. This shouldn't be an issue with the 3000 series, as many of those issues were addressed in Zen 2.

I'd separate workloads when your resource specs dilute your bullet pts.

Leave the HEDT platform as a standalone and build out a 6c gaming box.

We are starting to see this in AWS, where people are ordering Outposts to feed with datacenter gear.

Old ideas are new again.
This is simply unnecessary. The Threadripper 1000 and 2000 series CPUs are a special case. I've done game testing on the 1950X, 2920X, 2990WX, etc. and Intel's 10980XE. While I didn't do the testing myself, benchmarks look good for the Threadripper 3000 series. HEDT has also been well suited for gaming in the past with Intel's previous offerings. The Threadripper 3000 series is fine for gaming, excluding the 3990X. The 3990X isn't ideal because its clocks are too low. Now, if you are going to need persistent VMs and things like that 24/7, then yes, I'd agree. But if you are a content creator or something and can devote time to playing games without a bunch of demanding shit running in the background, the HEDT platform is perfectly fine for gaming.
 
Jack of most trades is not the master of 1.
You fail to recognize that HEDT is a trade, and with the right gear it's a master of all purposes. Not everyone is just a toyboy. The "personal computer" was in every office in the country before anyone had a toy box PC at home. My budget HEDT has more compute power than the IBM 360 that drove the Raytheon digital radar scopes at America's Air Route Traffic Control Centers in the 1970s. I'm using 44 of the 48 CPU lanes on my PCIe sockets. You run games with a game engine that only uses 4 cores with only one x16 socket.
 
Heck, even with persistent VMs? I've got 40G/14c assigned to always-running VMs. That leaves me 24G/10c for games or other apps, which is more than enough.
 
I'm not saying HEDT isn't a capable gaming platform.
I'm saying that when a workstation's component selection doesn't perform under a person's ideal gaming settings, I'd separate the workloads.
Especially when I'm dealing with a commodity resource build for a VM host.



You fail to recognize that HEDT is a trade, and with the right gear it's a master of all purposes. Not everyone is just a toyboy. The "personal computer" was in every office in the country before anyone had a toy box PC at home. My budget HEDT has more compute power than the IBM 360 that drove the Raytheon digital radar scopes at America's Air Route Traffic Control Centers in the 1970s. I'm using 44 of the 48 CPU lanes on my PCIe sockets. You run games with a game engine that only uses 4 cores with only one x16 socket.
I was running an X299 system to model VMware multi-datacenter workflow moves to AWS, hybrid connections, and Kubernetes lab work.

I have built X399 systems as local Kubernetes warm environments because of environment spin-up time for build/test/deployment, because that's become an essential component of CI/CD now.

I did get a ton of pushback from vendors in 2013 when I needed flash tiers and they wanted to sell FC, so we built out our own, because I actually needed 100k burstable IOPS in my key-value stores.

I have built out Dell rack workstations fully populated with Quadros as render hosts.

But yeah, I did play games on the 7820X & X299 box before it rotated out.
It was fine; X399 wasn't, but I'm frame-pace sensitive, so I'd just keep a dedicated gaming box for that purpose.
 
I suspect that matters more if you need Quadro/FirePro cards for rendering than the rest of the HEDT setup does. I don't, so I use 5700 XT/6800 XT cards (Tuesday!) for mine - I just need the cores, RAM, and PCIe lanes. That way it's a perfectly good gaming box (X399 aside; waiting for Zen 3 TR and I'll upgrade that machine to TRX40 as well), and it can do the other stuff I need :)
 
You fail to recognize that HEDT is a trade, and with the right gear it's a master of all purposes. Not everyone is just a toyboy. The "personal computer" was in every office in the country before anyone had a toy box PC at home. My budget HEDT has more compute power than the IBM 360 that drove the Raytheon digital radar scopes at America's Air Route Traffic Control Centers in the 1970s. I'm using 44 of the 48 CPU lanes on my PCIe sockets. You run games with a game engine that only uses 4 cores with only one x16 socket.

That's not how that works.

[GPU-Z screenshot]
1080Ti, X570 board, PCIe 3.0, using 8 lanes of my 24 lanes. (Which has been proven, countless times, to not be a bottleneck.)
I have 4 NVMe drives and I just double-checked: each of them, including my PCIe 4.0 NVMe drive, is using 4 lanes.
8 (GPU) + 4 + 4 + 4 + 4 = 24.

My second PCIe slot, which could run at x8, is being used at x4 with an NVMe riser. Meanwhile, my motherboard has 3x M.2 slots, which is where the other 12 lanes are coming from.

-----------------
Back on topic: both should be more than good for 4K gaming. My 3900X doesn't hinder anything I'm trying to run at 4K except for the usual Planet Zoo/Coaster and Cities: Skylines processor-limited scenarios.
You have one M.2 mobo socket off the CPU. AM4 has 24 lanes total: x8/x8 (or x16) hard-wired to the PCIe slots, one M.2 x4 hard-wired, and x4 to the chipset. I've seen the block diagrams for X570 boards. If your first-gen 1080 Ti is OK with GEN3 x8, great. I haven't seen that tested with a 3080/3090 or a 6800 XT/6900 XT. They should be OK with a GEN4 x8.

I run 4 drives on an x16 socket and 2 drives on an x8 socket (2 ASUS Hyper GEN4 PCIe cards). I have 8 more CPU lanes to a DIMM.2 socket. A GEN3 GPU sits in an x16 socket and a 10Gb LAN card in an x8 PCIE_4 slot. I still have 8 CPU lanes to the chipset.

Try CrystalDiskMark on all four of your drives. If they're all GEN4, the speeds on chipset lanes will go up and down; the difference shows depending on how many SATA III or USB ports are in use. On an AM4 socket with two M.2 GEN4 drives running off the chipset, they compete with each other for only 4 chipset lanes. If the specs for your board call it an "M.2 PCIe GEN4 x4", it's off chipset lanes. If the specs call it a "PCIe GEN4 x4 / SATA", it's off the chipset lanes. SATA only works through a chipset.
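To make that sharing point concrete, a rough sketch (the device list and per-device demand numbers are hypothetical; the uplink ceiling is just PCIe 4.0 x4 spec math, not a measurement):

```python
# Illustrative only: chipset-attached devices share the single x4 uplink to the CPU.
UPLINK_GBPS = 16 * (128 / 130) * 4 / 8        # ~7.9 GB/s for a Gen4 x4 chipset link

# Hypothetical devices hanging off the chipset and what each could want at full tilt:
devices = {"m2_drive_a": 7.9, "m2_drive_b": 7.9, "sata_ssd": 0.55, "usb_storage": 1.0}

total_demand = sum(devices.values())
scale = min(1.0, UPLINK_GBPS / total_demand)  # naive proportional split when saturated
for name, want in devices.items():
    print(f"{name}: wants {want:.2f} GB/s, gets roughly {want * scale:.2f} GB/s when all are busy")
```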
 
I'm the OP. I'm still running my 7820X after being unable to upgrade to a 5900X since it's OOS everywhere. I've preordered an EVGA RTX 3080 FTW3 Ultra from shopBLT.com and I'm crossing fingers I get it in the next month.

Now how about my current CPU/mobo? Looks like the Rocket Lake 11900K is a dud and hardly an upgrade from the Comet Lake 10900K, so since I haven't upgraded thus far, what's on the horizon, and should I wait for 2022 or whatever the next-gen sockets are? Keep in mind I am still gaming on a 4K/120Hz monitor, and my 1080 Ti has not been cutting it, only getting 40-50 fps even after turning settings to medium or setting resolution scaling to like 50-60% in games like RDR2, Borderlands 3, Cyberpunk, and even COD Warzone (to get >70 fps) to play a little more competitively.
 
I'm the OP. I'm still running my 7820X after being unable to upgrade to a 5900X since it's OOS everywhere. I've preordered an EVGA RTX 3080 FTW3 Ultra from shopBLT.com and I'm crossing fingers I get it in the next month.

Now how about my current CPU/mobo? Looks like the Rocket Lake 11900K is a dud and hardly an upgrade from the Comet Lake 10900K, so since I haven't upgraded thus far, what's on the horizon, and should I wait for 2022 or whatever the next-gen sockets are? Keep in mind I am still gaming on a 4K/120Hz monitor, and my 1080 Ti has not been cutting it, only getting 40-50 fps even after turning settings to medium or setting resolution scaling to like 50-60% in games like RDR2, Borderlands 3, Cyberpunk, and even COD Warzone (to get >70 fps) to play a little more competitively.
How's your system overclocked right now, like CPU multi, Mesh multi, memory type?
 
I'm at 4.8GHz, x32 mesh, 32GB DDR4-3600 CL16.
That CPU OC is pretty much the upper limit for X299. If you wanted to get more out of it you could grab one of the higher core count chips and disable Hyper-Threading; it'll boost your IPC a bit, but whether it's worth it, ehhh...
 