Should I wait to upgrade or scratch the itch?

capt_cope

I've come down with upgraditis pretty bad, but I'm not sure if I should upgrade right now or wait a few months.

At this point the primary focus of this computer is gaming - lots of Arma 3, some TF2, R6 Siege, Fallout 4, etc.
My main rig currently has:
Mobo: ASRock X79 Extreme3
CPU: 3930K @ 4.7GHz
RAM: 32GB PC3-12800 (running this at rated speeds)
GPU: Asus GTX 1080 Strix OC

I've been looking at the Z170 boards paired with an i5-6600K as my budget-friendly option (I've got a 4 1/2 week old son now, so my hobby has to take a back seat to essentials my wife finds, like a Bluetooth-enabled rocking cradle thing that plays music), since the total number of PCIe lanes really doesn't seem that important to me (I will NEVER run multiple cards again. Been there, done that, and I'm just not interested in doing it again.)

But looking at benchmarks, I'm worried it might be a lateral move or even a downgrade from my current setup. Yes, I want NVMe support and USB 3.1, but not SO badly that I'm willing to pay a few hundred dollars to get worse performance.

Which leads me to the X99 route - I could also pick up a used X99 board and an i7-5820K (or, if it's really worth the extra cost, a 6800K), but that's a lot of money (to me at least) and again I'm not sure it's a huge upgrade. I REALLY don't want to upgrade to X99 now only to have X100 (or whatever, you get my point) show up in 6 months. I don't NEED to upgrade right now, so I really don't want to blow my upgrade money on a chipset that's on its way out.


TL;DR: Is there a logical upgrade from a 3930K @ 4.7GHz that isn't likely to be yesterday's tech in the next 6 months to a year?
 
I noticed few if any changes going from a 3930K @ 4.6 to a 6700K @ 4.7, much of which could be explained by a fresh install and an M.2 SSD.
 
On the pro-upgrade side, the jump from Skylake to Kaby Lake is expected to be about the same as what was seen going from Haswell to Skylake. So next year you probably won't be kicking yourself over the ~5-10% performance difference and slight power savings.

However, similarly, the jump from Sandy Bridge-E to Skylake is not that substantial. CPU speed bumps have really slowed in the last few years. Unless you know for a fact that the titles you play are CPU-limited (which seems unlikely, given your overclock and most of the games' release dates), it doesn't seem worth the $600-1000 an upgrade would cost just to have things launch maybe a couple seconds faster.
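
Before spending anything, it's worth actually checking whether your games are CPU-limited. One quick-and-dirty way is to log per-core CPU load against GPU load while you play; here's a rough Python sketch of the idea (it assumes an Nvidia card with nvidia-smi on the PATH and the psutil package, and the thresholds in the comments are just rules of thumb):

```python
# Rough bottleneck check: run this in a console while the game is running.
import subprocess

import psutil  # third-party: pip install psutil

def gpu_utilization():
    """Current GPU load (%) as reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"]
    )
    return int(out.decode().strip().splitlines()[0])

while True:
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # one % per core
    gpu = gpu_utilization()
    # One or two cores pegged near 100% while the GPU sits well below ~95%
    # points to a CPU limit; a GPU at ~100% means a faster CPU won't help much.
    print(f"busiest core: {max(per_core):5.1f}%   GPU: {gpu:3d}%")
```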

NVMe/PCIe drives are certainly nice, but outside of artificial benchmarks they aren't noticeably faster in real life for most uses. If you needed a new system for other reasons I'd say go for one, but on their own they're not a reason to upgrade. A quality SATA SSD is fine. Also, the price premium on NVMe drives is still very high.

USB 3.1, while also nice, is still very new. Little makes use of it yet. Outside of laptops, there's currently not much to recommend it over 3.0.

As for Haswell-E/X99, the advantages for that platform don't readily translate to a gaming system. You don't need additional cores or PCIe lanes, and the additional memory bandwidth will go unused.

TL;DR: Stick with what you have.
 
I came from a 3930K @ 4.3GHz on an X79 Dark with 2133MHz RAM.

Then I had a Xeon E5-1650 v2 @ 4.5GHz (essentially a 4930K) with the same motherboard and RAM.

Then I got my 5820K @4.4GHz with an X99 Classified and 3200MHz RAM.

The difference is minimal: a couple hundred points on my Physics score in 3DMark, and average FPS that's ever so slightly higher in Valley and Heaven.

Where is the real difference? In minimum FPS in CPU-bound games. That's it. And I run at 1080p; if I jumped to 1440p or even 4K, even that wouldn't be as big a deal, since the CPU wouldn't be holding things up as much.

Your 1600MHz RAM is probably slowing things down in CPU bound games like Fallout 4, so getting 2133 or 2400 working would help out quite a bit there.
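
For a rough sense of why: the theoretical peak bandwidth gap between those speeds isn't small, even if games only ever use a fraction of it. Quick back-of-the-envelope math, assuming quad-channel operation on X79:

```python
# Peak theoretical DDR3 bandwidth = transfer rate (MT/s) x 8 bytes per
# 64-bit channel x number of channels. X79 boards run quad channel.
def peak_bandwidth_gb_s(mt_per_s, channels=4, bytes_per_transfer=8):
    return mt_per_s * bytes_per_transfer * channels / 1000.0

for speed in (1600, 2133, 2400):
    print(f"DDR3-{speed}: {peak_bandwidth_gb_s(speed):.1f} GB/s")

# DDR3-1600: 51.2 GB/s
# DDR3-2133: 68.3 GB/s
# DDR3-2400: 76.8 GB/s
```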

As for NVMe? It's a joke. I've got a Samsung 950 Pro in a PCIe 3.0 expansion card (with a heatsink). My file transfers to and from my RAID 6 array are really fast. Game load times and Windows boot times are. The. Exact. Same. As my old Samsung 850 EVO SATA III drive.

As evidenced here:
Samsung's 950 Pro 512GB SSD reviewed

And here:
Samsung SSD 950 Pro review | PC Gamer
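
If you'd rather see it on your own drives than take the reviews' word for it, a crude sketch like this gets at why the big sequential numbers don't show up in load times: game loading leans heavily on smaller, scattered reads. (The path is a placeholder, and the OS file cache will inflate repeat runs, so use a file larger than your RAM or test right after a reboot.)

```python
# Crude drive comparison: one big sequential pass vs. scattered 4 KiB reads.
import os
import random
import time

TEST_FILE = r"D:\games\big_archive.pak"  # placeholder: any large file on the drive

def sequential_read(path, chunk=1 << 20):
    """Read the whole file front to back; returns MB/s."""
    start, total = time.perf_counter(), 0
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    return total / (time.perf_counter() - start) / 1e6

def random_read(path, reads=5000, block=4096):
    """Read small blocks at random offsets; returns MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(reads):
            f.seek(random.randrange(0, max(size - block, 1)))
            f.read(block)
    return reads * block / (time.perf_counter() - start) / 1e6

print(f"sequential: {sequential_read(TEST_FILE):.0f} MB/s")
print(f"4K random:  {random_read(TEST_FILE):.0f} MB/s")
```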
 
Your 1600MHz RAM is probably slowing things down in CPU bound games like Fallout 4, so getting 2133 or 2400 working would help out quite a bit there.

Unlikely, at least for Fallout 4. I'm running a mildly overclocked i7-3770K with DDR3-1600, plus a GTX 1070 @ 1920x1200. Even with all graphics settings on max, Steam's FPS meter is pinned at 60 FPS. Everything is very smooth.
 
I don't see the point of upgrading your CPU. It's fine, and there's nothing groundbreaking at the moment. My buddy recently got a 5820K with a D15 heatsink, and I'm really surprised that it actually runs hotter than my CPU and doesn't overclock well :( If I were in your shoes, I'd just stick with what you've got.

The only thing I've upgraded in the past 4 years is the video card.
 
Wait if you can. Hopefully AMD brings some competition back to the desktop market, and either things will get cheaper or Intel will come out with a new Conroe or Sandy Bridge.
 
Yeah, don't upgrade the CPU. You have what counts: a GTX 1080.

I just got a 6700K and I'm meh about the real-world gains over my overclocked 2500K, and playing at 4K makes me feel like the CPU is even less important than it was gaming at 1080p. I thought that with Hyper-Threading I'd at least notice a nice jump in video rendering speed, but it's not impressing me at all. If a video render finishes 30 seconds faster than on my last CPU, was it really worth $500+ for CPU/mobo/RAM for those few home videos I edit and render? Nope. I could wait another 30 seconds with my old CPU. A benchmark score will turn that small difference into a jillion-point gap and make it seem better than it actually is. I expected a large jump in video rendering time, but without timing it with a stopwatch you'd be hard pressed to notice any seat-of-the-pants difference.

Yeah, someone can cite three games where the 6700K is 30 FPS faster on minimums, but those are few and far between IMO, and I don't think I even own any of the games that show those gains. Even this 16GB of RAM seems like a huge waste over my 8GB: the last game I played (at 4K, mind you) used only 3.7GB of system RAM and about the same in video RAM (maybe a tad over 4GB).

I really want a faster GPU for 4K gaming, or full SLI support in every game, not more unused RAM and lackluster jumps in CPU performance! To me it feels like displays and video cards are the only things making real gains, but I still want something faster than a GTX 1080 for 4K gaming.
 
Something seems odd about those tests. I don't know WTF they were doing to get such low frame rates out of upper-level GPUs, even at higher RAM clock rates. It's certainly not what I'm seeing in that particular game.

Load up Ultra settings with the Nvidia driver version and game version that they used, and test it out with a 980 Ti if you think their results look funky. Yes, I know you won't, and that's why I'm pointing it out to you: there is hard evidence, with their results and configuration settings listed for all to see, and you call it funky.


You can lower your RAM speed in the BIOS and test out the results with your current configuration if you want to see "hard" (aka your own) evidence.

Also, you do know that a minimum frame rate is a minimum. It's a dip, not a constant. And if it happens often enough laymen call it "stuttering." Different areas of different games tax the CPU and GPU differently, hence the inclusion of minimum, average, and max FPS in many reviews.
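
For anyone wondering where those numbers actually come from, here's a rough Python sketch of how a frame-time capture turns into min / average / max FPS, plus 1% lows (which describe stutter better than a single worst frame does). It assumes you've already dumped frame times as one millisecond value per line; the file name is just a placeholder:

```python
# Turn a frame-time log (one frame duration in ms per line) into the
# min / average / max FPS figures reviews quote, plus 1% lows.
def fps_summary(frametimes_ms):
    times = [ms for ms in frametimes_ms if ms > 0]
    fps = sorted(1000.0 / ms for ms in times)        # slowest frames first
    slowest = fps[: max(1, len(fps) // 100)]         # the worst 1% of frames
    return {
        "min":    fps[0],                            # single worst frame
        "avg":    1000.0 * len(times) / sum(times),  # frames / total time
        "max":    fps[-1],
        "1% low": sum(slowest) / len(slowest),
    }

with open("fallout4_frametimes.csv") as f:           # placeholder capture file
    samples = [float(line) for line in f if line.strip()]

for name, value in fps_summary(samples).items():
    print(f"{name:>7}: {value:6.1f} FPS")
```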
 
Load up Ultra settings with the Nvidia driver version and game version that they used, and test it out with a 980 Ti if you think their results look funky. Yes, I know you won't, and that's why I'm pointing it out to you: there is hard evidence, with their results and configuration settings listed for all to see, and you call it funky.


You can lower your RAM speed in the BIOS and test out the results with your current configuration if you want to see "hard" (aka your own) evidence.

Also, you do know that a minimum frame rate is a minimum. It's a dip, not a constant. And if it happens often enough laymen call it "stuttering." Different areas of different games tax the CPU and GPU differently, hence the inclusion of minimum, average, and max FPS in many reviews.

Well, you're right, I won't do that. I'm not going to drop $300-400 on a GPU just to test it out for myself. However, as stated, I'm on a GTX 1070, which is roughly equivalent, on a display that's slightly larger (1920x1200). The RAM is clocked at the stock 1600MHz, and the i7-3770K is mildly overclocked (~4.1-4.2GHz, off the top of my head). Overall, my system is pretty close to what they tested; if anything, it's at a disadvantage (e.g., the display resolution). I don't need to lower my RAM settings; they're already there. And from what I've experienced, RAM speed is not a factor in Fallout 4, as I'm getting a solid 60 FPS with all graphics settings at their max level (even those that aren't set that high by the "ultra" defaults).

I know what stutter and other issues look like in this game. Until recently I had a GTX 670 instead. With that card, it typically ran anywhere between 40-60 FPS, depending on the area. Now, with a roughly equivalent GPU and the same RAM settings, the system no longer has those issues, despite what those tests would suggest.

Maybe there are other factors at play here. Between then and now, Nvidia could have updated the drivers to better optimize for Fallout 4. Or maybe Bethesda was able to optimize some code. Maybe at the point those tests were run, RAM speed did somehow make a difference. From what I've experienced, I don't believe it does now.

And I wasn't even really looking at the minimum. I was looking at the average. My general feeling is that, unless you're using a predefined run that's consistent every time, minimum FPS is somewhat useless. It's too easy to throw off by, for instance, changing viewpoints at different rates across runs.
 
And I wasn't even really looking at the minimum. I was looking at the average. My general feeling is that, unless you're using a predefined run that's consistent every time, minimum FPS is somewhat useless. It's too easy to throw off by, for instance, changing viewpoints at different rates across runs.

And that's where you fail to grasp what the reviews and I were attempting to convey.

We're talking in English, yet you're deciding to read it in Portuguese. Not our fault there.
 
If you do a lot of single-threaded work, it can really help. If I were you, I'd wait until Kaby Lake if Optane is something you want. It isn't that much further away.
 