Hold out for the 10900K, grab a 10700, or just go with a discounted 9900K?

The Ventus runs hot because it's a compact two-slot card and they put a small heatsink on those. What you're experiencing temp-wise sounds normal for that card. You can always convert it to a hybrid or get an Arctic Accelero.
 
Since your CPU temp is pretty good, you might look at adding an extra intake/exhaust fan or something, if you have the space/header. It might help with the GPU heat, but what you're running now doesn't strike me as abnormal; it might just end up being more noise.

I may switch out the default case fans. TBH, they don't seem to be that powerful. Maybe swap them out for some Noctuas, even for the 212 fans.
 
Rocket Lake is coming out in late 2020...
They can't get 10900Ks and 10980XEs out the door, and you think they'll be able to get two desktop launches done in six months? LOL, no way in hell. I know the rumors. It's not happening.
 
They can't get 10900Ks and 10980XEs out the door, and you think they'll be able to get two desktop launches done in six months? LOL, no way in hell. I know the rumors. It's not happening.
Why don’t they preorder these things just like other industries do? This would help with any out of stock issues, no?
 
They can't get 10900Ks and 10980XEs out the door, and you think they'll be able to get two desktop launches done in six months? LOL, no way in hell. I know the rumors. It's not happening.
The 10900K and 10980XE are vying for 14nm production, which is tapped out -- as Intel gets 10nm / 7nm production moving, those production bottlenecks will likely ease.
 
Why don’t they preorder these things just like other industries do? This would help with any out of stock issues, no?
Intel can't make enough to fill orders/demand. They just can't make enough chips no matter what. As idiotincharge said above, unless they can move more products to smaller nodes (or dedicate more foundries to 14nm, which they won't do), they are going to continue to have poor availability. Which is exactly why Rocket Lake on desktop won't happen in late 2020.
 
10 cores sounds juicy. I'd wait for the 10-Series.
I'd say... sort of.

We're at that 'diminishing returns' period where >8 cores results in a negative ROI unless you can actually assign some value to those extra two cores. Gaming won't use them, for example, and much of your content creation either won't use them or needs as many cores as you can give it, so you should be running Threadripper instead :).
 
I'd say... sort of.

We're at that 'diminishing returns' period where >8 cores results in a negative ROI unless you can actually assign some value to those extra two cores. Gaming won't use them, for example, and much of your content creation either won't use them or needs as many cores as you can give it, so you should be running Threadripper instead :).

I couldn’t agree more. I compiled some software this morning, 2 seconds, do I need more cores for that? Nope.
 
I'd say... sort of.

We're at that 'diminishing returns' period where >8 cores results in a negative ROI unless you can actually assign some value to those extra two cores. Gaming won't use them, for example, and much of your content creation either won't use them or needs as many cores as you can give it, so you should be running Threadripper instead :).

You are 100% correct, but I'd still wait. I like shiny new things.
 
Why don’t they preorder these things just like other industries do? This would help with any out of stock issues, no?
Intel has their own fabs; they have to balance the more profitable server chips, keeping desktop/mobile OEMs happy, and the DIY market, while also producing everything else they make. This is one of the downsides to sticking with a monolithic CPU design: there's legitimately no way for them to produce chips any faster than they are. Hopefully it's something they learn from after seeing what AMD's been able to do with chiplets that are 100% identical from the EPYC 7H12 all the way down to the R3 3100. It would be nice to see them finally admit they screwed up, scrap their future plans for monolithic CPUs, and start pushing the MCM designs they've talked about, but that's probably wishful thinking.
 
It would be nice to see them finally admit they screwed up, scrap their future plans for monolithic CPUs, and start pushing the MCM designs they've talked about, but that's probably wishful thinking.
Their screwups were at the fab level, though -- they have no problem producing monolithic dies. They have a lot of experience doing it as well.

What AMD has done works well for AMD given that they're fabless; they actually went backward in design and would have been better off going monolithic themselves, if they could. They have to dedicate more die space to cache and more power to interconnects, and they're still just barely reaching parity with the single-core performance standard Intel set ~5 years ago with Skylake.
 
Their screwups were at the fab level, though -- they have no problem producing monolithic dies. They have a lot of experience doing it as well.

What AMD has done works well for AMD given that they're fabless; they actually went backward in design and would have been better off going monolithic themselves, if they could. They have to dedicate more die space to cache and more power to interconnects, and they're still just barely reaching parity with the single-core performance standard Intel set ~5 years ago with Skylake.

In fairness, Intel is still where they were 5 years ago with Skylake just with more clockspeed and more cores.
 
In fairness, Intel is still where they were 5 years ago with Skylake just with more clockspeed and more cores.

No they're not.

It's just that they haven't fundamentally changed the architecture since Haswell.

Ryzen borrows very heavily from Bulldozer/Piledriver etc.

Skylake came out with heaps of bugs.

Kaby Lake improved upon Skylake by fixing some of these bugs and adding a better form of HWP and HDC - specifically HDC on the CPU rather than just the GPU - albeit while adding the random thermal spike bug.

Coffee Lake improved upon Kaby Lake with more cores and more cache, as well as a smaller feature size - not to mention more HDMI/DP support and better QSV - and fixed the thermal spike bug.

Coffee Lake Refresh brought 8 cores and TVB (though the latter is not really that relevant for most of the desktop SKUs out there) and fixed security vulnerabilities.

Comet Lake brought 10 cores (with low inter-core latency) and desktop-worthy TVB.

Each of these came with a speed bump too.

To say that Intel "is still where they were 5 years ago" is to discount each of these milestones.
 
I'd say... sort of.

We're at that 'diminishing returns' period where >8 cores results in a negative ROI unless you can actually assign some value to those extra two cores. Gaming won't use them, for example, and much of your content creation either won't use them or needs as many cores as you can give it, so you should be running Threadripper instead :).

That's why I'm still debating between TR and the 3950X for my workstation. I can use the cores, but I'm debating whether it makes enough of a difference. This is gaming, though.
No they're not.

It's just that they haven't fundamentally changed the architecture since Haswell.

Ryzen borrows very heavily from Bulldozer/Piledriver etc.

Skylake came out with heaps of bugs.

Kaby Lake improved upon Skylake by fixing some of these bugs and adding a better form of HWP and HDC - specifically HDC on the CPU rather than just the GPU - albeit while adding the random thermal spike bug.

Coffee Lake improved upon Kaby Lake with more cores and more cache, as well as a smaller feature size - not to mention more HDMI/DP support and better QSV - and fixed the thermal spike bug.

Coffee Lake Refresh brought 8 cores and TVB (though the latter is not really that relevant for most of the desktop SKUs out there) and fixed security vulnerabilities.

Comet Lake brought 10 cores (with low inter-core latency) and desktop-worthy TVB.

Each of these came with a speed bump too.

To say that Intel "is still where they were 5 years ago" is to discount each of these milestones.

Those are minor improvements when compared to the major jumps AMD made in the same timeframe. That's the issue. One side is coming up with neat stuff, the other is just ... iterating.
 
Those are minor improvements when compared to the major jumps AMD made in the same timeframe. That's the issue. One side is coming up with neat stuff, the other is just ... iterating.

Err, no. A major jump? Zen 1 -> Zen 2 is not a major jump; it's significant, but it's not major. Skylake to Comet Lake is 1.1GHz (4.2 turbo to 5.3 turbo).

Bulldozer was released October 2011
Zen 1 was released in April 2017 (6 years, major jump)
Zen 2 was released July 2019

C2D was released July 27, 2006
Nehalem was released early 2009 (not a huge jump, but significant)
Sandy Bridge January 2011 (significant jump) - 5 years
Haswell June 2013
Skylake was released in September 2015
Comet Lake 2020 (add all the differences; it's significantly faster/better than Skylake)

We're due for another significant jump at the end of this year or mid next, based on history.
 
In fairness, Intel is still where they were 5 years ago with Skylake just with more clockspeed and more cores.
Can't fundamentally disagree. They've massaged Skylake significantly and made it work in a broad range of scenarios from ultra-mobile to HPC, but it's still basically Skylake.
To say that intel "is still where they were 5 years ago" is to discount each of these milestones.
Relative to, say, Lynnfield --> Sandy Bridge, the milestones seem pretty small, especially when each one can be resolutely described in terms of its predecessor plus a few footnotes. Granted, at its base Skylake is just damn good; Intel could have done much worse in terms of picking an architecture to get 'stuck' on.
That's why I'm still debating between TR and 3950X for my workstation. I can use the cores, but debating if it makes enough for that. This is gaming though.
For gaming... honestly it's a matter of baseline + excess budget for... excesses. After eight cores, you're pretty much just pissing in the wind. Either you could use significantly more cores or you could probably run with six and not notice the difference. Eight is a nice comfortable 'safe' spot.

For me, I'd just grab a 3950X. Mostly because it's not slower than a 3700X at gaming, and partially because the price delta is minuscule relative to overall system cost if you're not skimping elsewhere. And a little bit of Pinky Freedman's 'Why the hell not'.

Those are minor improvements when compared to the major jumps AMD made in the same timeframe. That's the issue. One side is coming up with neat stuff, the other is just ... iterating.
AMD's mostly out of tricks too, though. They're basically tapped out with respect to TSMC; if TSMC falters, so follows AMD, just as Intel is sitting on architectural advancements that they can't yet fab.

Note also that AMD chose to run with Bulldozer; they put the gun to their own head and pulled the trigger on a decade of misadventure. Also note that until TSMC started shipping their "16nm" node, AMD would not have stood any better with Zen. Ryzen 1000 and 2000 were decidedly slower than Skylake, and Ryzen 3000 still can't clock high enough.
 
Err, no. A major jump? Zen 1 -> Zen 2 is not a major jump; it's significant, but it's not major. Skylake to Comet Lake is 1.1GHz (4.2 turbo to 5.3 turbo).

Bulldozer was released October 2011
Zen 1 was released in April 2017 (6 years, major jump)
Zen 2 was released July 2019

C2D was released July 27, 2006
Nehalem was released early 2009 (not a huge jump, but significant)
Sandy Bridge January 2011 (significant jump) - 5 years
Haswell June 2013
Skylake was released in September 2015
Comet Lake 2020 (add all the differences; it's significantly faster/better than Skylake)

We're due for another significant jump at the end of this year or mid next, based on history.

Clockspeed isn't a jump, at least to me. That's just... well, meh. We've played that game over and over again throughout the years. Let me know when we hit 10GHz and I'll pay attention; if ever.

Nehalem and Westmere were a massive jump in architecture, at least in what they enabled - the consumer side didn't see as much as the server side did, but there's a reason those chips are still in use today (and the Merom-generation Xeons aren't). Same with the original C2D Merom/etc. architecture.
Sandy Bridge was another major jump, this time more so on consumer, but also massive on enterprise - those are ALSO still in use today.
Haswell was meh, same for Broadwell - bigger and faster, but that's really it. Skylake was meh, but it enabled the Intel Scalable architecture on the server side, and that's been a MASSIVE jump - huge consolidation gains there, especially with Cascade Lake R.

Now AMD? Bulldozer was meh at best and then a disaster. Zen 1 was a MAJOR jump. Zen 2 wasn't as major to YOU (although the massively improved IMC and major increase in core counts were big), but server-side that architecture is cleaning house - EPYC is nuts.

Not every major architectural change shows up on the consumer side as much as the enterprise side - I deal in both worlds constantly; I see what feeds back and forth. Skylake was what I bought only because Sandy Bridge was OLD as shit by then and I wanted an upgrade. There hasn't been anything since from Intel that interests me, since the ICA hasn't really trickled down to consumer yet outside of the W-series chips and the XEs, which are overpriced for what you get (compared to AMD Zen2/TR2). I'm looking at an upgrade now only because my wife's Bulldozer/Piledriver-era system is finally too old to do what she wants, and I'll give her my parts.

I guess it all depends on what you see as major. I'm looking at the whole architecture - consumer, mobile, server, vs just consumer.
 
I guess it all depends on what you see as major. I'm looking at the whole architecture - consumer, mobile, server, vs just consumer.
Well, there's the core architecture, there are the actual dies using it and the associated uncore, there's the packaging, the chipset, the firmware, the drivers, the motherboards, the integrators...

It's a long list. Even Kaby Lake, a non-event on desktop, made a splash elsewhere.

Overall, AMD is 'ahead' because TSMC managed to move to sub-10nm. They still can't make enough, which is why Intel can still command the pricing that they do, and by the time TSMC is in a position to 'catch up', Intel will likely have recovered. It's that, or Intel will get bought out. I wouldn't bet on the second option.
 
Well, there's the core architecture, there are the actual dies using it and the associated uncore, there's the packaging, the chipset, the firmware, the drivers, the motherboards, the integrators...

It's a long list. Even Kaby Lake, a non-event on desktop, made a splash elsewhere.

Overall, AMD is 'ahead' because TSMC managed to move to sub-10nm. They still can't make enough, which is why Intel can still command the pricing that they do, and by the time TSMC is in a position to 'catch up', Intel will likely have recovered. It's that, or Intel will get bought out. I wouldn't bet on the second option.

I'd argue that the drastic increase in core counts, the better IMC, PCIe 4.0, etc. make the Zen 2 architecture pretty significant - although a lot of that (especially the PCIe stuff) doesn't matter as much when you're on desktop, like I said. Then again, I just shipped systems with 20 NVMe drives and dual PCIe Optane cards, so the needs I look at are different than others' too - also like I said :)

Lemme put it this way. On a dual 6248R Xeon, I can generally run 140-145 VDI desktops at a conservative consolidation ratio. The 6258R doesn't buy me enough extra cores to justify the cost.

On a dual EPYC, I can run 380-385, on otherwise identical hardware (more RAM, of course). Same thing for containers. Or VM counts. That's less than half the floor footprint, and often less than half the socket licenses, etc. That's a major change. Millions of dollars of change, in fact.

But I'm talking enterprise, because that's what's trickled back and forth to enable the consumer side :)
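A back-of-the-envelope sketch of that consolidation math, for anyone curious. The per-host desktop counts are the ones quoted above; the 1000-desktop deployment size is purely hypothetical:

```python
# Back-of-the-envelope sketch of the VDI consolidation math above.
# Per-host counts are the quoted figures (~140 desktops per dual
# 6248R, ~380 per dual EPYC); the 1000-desktop total is hypothetical.

def hosts_needed(total_desktops: int, desktops_per_host: int) -> int:
    # Ceiling division: you can't run a fraction of a host.
    return -(-total_desktops // desktops_per_host)

total = 1000
xeon_hosts = hosts_needed(total, 140)   # dual 6248R boxes
epyc_hosts = hosts_needed(total, 380)   # dual EPYC boxes

print(xeon_hosts, epyc_hosts)  # 8 3
```

That host-count gap is where the floor footprint and socket-license savings come from.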
 
I'm going to build a new system; Walmart decided to give us another bonus for this corona stuff.
I'm going to go with the 10700 non-K version, then when the next-gen chips come out for the Z490 boards I'll just upgrade to the best of that generation - unless it's going to run too hot, in which case I might hold back on getting the latest and greatest melting chip.
 
I'm going to build a new system; Walmart decided to give us another bonus for this corona stuff.
I'm going to go with the 10700 non-K version, then when the next-gen chips come out for the Z490 boards I'll just upgrade to the best of that generation - unless it's going to run too hot, in which case I might hold back on getting the latest and greatest melting chip.

I don't think that's a bad plan. After tweaking mine it's a little faster than a 3700x for a little more money. And a little more power hungry depending on what I'm doing.
 
I'm going to build a new system; Walmart decided to give us another bonus for this corona stuff.
I'm going to go with the 10700 non-K version, then when the next-gen chips come out for the Z490 boards I'll just upgrade to the best of that generation - unless it's going to run too hot, in which case I might hold back on getting the latest and greatest melting chip.

Why not look into the AMD 3600 or 3700X? The 10600K is maybe 6% better at 1080p gaming and even less at 1440p and above. Intel chips run hotter, cost more, have fewer cores, are worse at everything non-gaming related, have no PCIe 4 support, and are more vulnerable to exploits such as Spectre/Meltdown. I'm also looking into building a new system, and as a long-time Intel user there's nothing appealing about Intel right now over AMD.
 
I don't think that's a bad plan. After tweaking mine it's a little faster than a 3700x for a little more money. And a little more power hungry depending on what I'm doing.

So good results? Microcenter got the good Z390 board back in stock, but I'm still torn on things. 10700 would be a decent bump up for me, and last till whatever comes next - while my 6700K would be a MASSIVE gain from my wife's FX-8350 for VR (what she uses it for). And the 1080 is a jump from her 1060.
 
Intel is the BEST =) The non-K 10700 is only 65W; that is a lot cooler than my 8700K, which is 95W.
I know the Intel machine very well; they programmed many things into my head with the Intel Retail Edge program from 2014-2017. I upgraded every year. Too bad Walmart got shut out of the program.
 
Why not look into the AMD 3600 or 3700X? The 10600K is maybe 6% better at 1080p gaming and even less at 1440p and above. Intel chips run hotter, cost more, have fewer cores, are worse at everything non-gaming related, have no PCIe 4 support, and are more vulnerable to exploits such as Spectre/Meltdown. I'm also looking into building a new system, and as a long-time Intel user there's nothing appealing about Intel right now over AMD.

This is also true. The only reason I'm really looking at Intel is because I want to keep ~one~ intel system around for my use (I do sometimes need to actually do cross-architecture stuff), and since they're best at gaming, I'll make it the gaming machine.
 
Why not look into the AMD 3600 or 3700X? The 10600K is maybe 6% better at 1080p gaming and even less at 1440p and above. Intel chips run hotter, cost more, have fewer cores, are worse at everything non-gaming related, have no PCIe 4 support, and are more vulnerable to exploits such as Spectre/Meltdown. I'm also looking into building a new system, and as a long-time Intel user there's nothing appealing about Intel right now over AMD.
Personally, if I were still pre-Skylake when Zen 2 hit retail, I'd probably be running one of these myself. Well, probably a 3900X, but only for "reasons".
 
So good results? Microcenter got the good Z390 board back in stock, but I'm still torn on things. 10700 would be a decent bump up for me, and last till whatever comes next - while my 6700K would be a MASSIVE gain from my wife's FX-8350 for VR (what she uses it for). And the 1080 is a jump from her 1060.

You definitely need a different board though: Z490 if you want to push the memory above 2933MHz, or one of the B460/H470 boards that allow for more than stock turbo boost (Asus and ASRock do, that I know of). I think the Asus Strix B460-F allows for up to 225W, which is more than the chip will draw even under an AVX load; I can't get mine to draw more than 160W or so. Couple that with a set of cheap DDR4-3000 memory and you don't really need a Z490.

This is also true. The only reason I'm really looking at Intel is because I want to keep ~one~ intel system around for my use (I do sometimes need to actually do cross-architecture stuff), and since they're best at gaming, I'll make it the gaming machine.

I have both an R5 3600 and a 10700 in the house. For all the talk about Intel stability, the 3600 gives me no issues. The Intel system will randomly reboot, but I'm still tweaking the turbo and BCLK, so I'm chalking it up to that right now. I'm also running the QuickCPU program, which allows for even lower power draw at idle, etc.; that might be causing the instability.
 
I don't think that's a bad plan. After tweaking mine it's a little faster than a 3700x for a little more money. And a little more power hungry depending on what I'm doing.

I'm tempted to change my mind about the B460 boards and a 10700. The MSI Mortar B460 at $114.99 is a beast; I figure I'd run that setup over a 3700X drop-in. I'm still not sold on Zen 2 for gaming after owning a 1600 and a 2700X.
 
I'm tempted to change my mind about the B460 boards and a 10700. The MSI Mortar B460 at $114.99 is a beast; I figure I'd run that setup over a 3700X drop-in. I'm still not sold on Zen 2 for gaming after owning a 1600 and a 2700X.

I would probably drop in a 3700x at $260-280, personally before dropping some $450 or so on a B460 and 10700. I mean $150 is $150, plus you can sell your 2700x for $125-150. Zen 2 is significantly better at gaming than previous generations, and honestly, I couldn't tell the difference in the games I play between my 10700 and my 3800x at 1440p/144Hz Freesync. But I play a lot more single player games and nothing even remotely competitive. Put it this way, I never had a situation in gaming where I thought to myself, "If only I had an Intel chip right now I wouldn't have this problem."

Depending on what actually happens with the turbo on the Mortar board (some manufacturers were limiting the turbo to 125W instead of essentially unlimited like on Z490), you might get essentially the same performance as a 3700x.
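Rough numbers on that tradeoff, using the midpoints of the ballpark prices quoted above (none of these are actual retail quotes):

```python
# Rough net-cost comparison for the two upgrade paths above. All
# figures are midpoints of the quoted ballparks ($260-280 for a
# 3700x, ~$450 for a B460 board + 10700, $125-150 resale on the
# 2700x) - assumptions, not retail prices.

ryzen_drop_in = 270    # 3700x, dropped into the existing AM4 board
intel_platform = 450   # B460 Mortar + 10700
resale_2700x = 135     # selling the old chip applies either way

net_ryzen = ryzen_drop_in - resale_2700x
net_intel = intel_platform - resale_2700x

print(net_ryzen, net_intel)  # 135 315
```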
 
I would probably drop in a 3700x at $260-280, personally before dropping some $450 or so on a B460 and 10700. I mean $150 is $150, plus you can sell your 2700x for $125-150. Zen 2 is significantly better at gaming than previous generations, and honestly, I couldn't tell the difference in the games I play between my 10700 and my 3800x at 1440p with Freesync. But I play a lot more single player games and nothing even remotely competitive. Put it this way, I never had a situation in gaming where I thought to myself, "If only I had an Intel chip right now I wouldn't have this problem."

Single-player is fine, but when it comes to high FPS, Ryzen will always be behind Intel, and that doesn't make sense for me while keeping a 165Hz 1440p panel around.

The other option is seeing what Zen 3 holds. It's not like my 2700X and 2070 are dated lol.
 
Single-player is fine, but when it comes to high FPS, Ryzen will always be behind Intel, and that doesn't make sense for me while keeping a 165Hz 1440p panel around.

The other option is seeing what Zen 3 holds. It's not like my 2700X and 2070 are dated lol.

Fair enough, I guess when I look at something like this:
[chart: relative performance in games, 2560x1440]


I don't see the huge difference. Granted, that's a game-suite average and not the specific game(s) you play. I mean, 4% isn't worth $150 extra to me. To put it in perspective, if the Intel chip is outputting 165 FPS on the number, then the Ryzen 3700x system is at 158 FPS. The 2700x is about 8% behind at 152 FPS. Also keep in mind that without a Z motherboard you're going to be capped at 2933MHz on the memory, so you're going to be closer to "only" a 3% difference.

Now if you want to try something new or that 4% really is important to you, then go for it. I had to build a new system anyway and wanted to try something new that's why I ended up with the 10700. But if you're really looking for the max framerate, then you're probably the target market for the 10600k with an overclock.
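A quick sketch of that math, assuming the ~4% (3700x) and ~8% (2700x) deficits from the chart:

```python
# Sketch of the FPS math above, assuming the ~4% (3700x) and ~8%
# (2700x) deficits versus the Intel chip taken from the review chart.

def fps_behind(baseline_fps: float, percent_behind: float) -> float:
    """FPS of a chip trailing the baseline by the given percentage."""
    return baseline_fps * (1 - percent_behind / 100)

intel_fps = 165.0                    # pegged at a 165Hz panel's cap
r3700x = fps_behind(intel_fps, 4)
r2700x = fps_behind(intel_fps, 8)

print(round(r3700x), round(r2700x))  # 158 152
```

Single-digit FPS gaps at these refresh rates, which is the whole point.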
 
You definitely need a different board though: Z490 if you want to push the memory above 2933MHz, or one of the B460/H470 boards that allow for more than stock turbo boost (Asus and ASRock do, that I know of). I think the Asus Strix B460-F allows for up to 225W, which is more than the chip will draw even under an AVX load; I can't get mine to draw more than 160W or so. Couple that with a set of cheap DDR4-3000 memory and you don't really need a Z490.



I have both an R5 3600 and a 10700 in the house. For all the talk about Intel stability, the 3600 gives me no issues. The Intel system will randomly reboot, but I'm still tweaking the turbo and BCLK, so I'm chalking it up to that right now. I'm also running the QuickCPU program, which allows for even lower power draw at idle, etc.; that might be causing the instability.

I'd go Z490 just because that's more future-proof - if Rocket Lake etc. is worth it, I might upgrade - or at least it's a better board for future uses (if I turn it into a server, etc.) :)
Fair enough, I guess when I look at something like this:


I don't see the huge difference. Granted, that's a game-suite average and not the specific game(s) you play. I mean, 4% isn't worth $150 extra to me. To put it in perspective, if the Intel chip is outputting 165 FPS on the number, then the Ryzen 3700x system is at 158 FPS. The 2700x is about 8% behind at 152 FPS. Also keep in mind that without a Z motherboard you're going to be capped at 2933MHz on the memory, so you're going to be closer to "only" a 3% difference.

Now if you want to try something new or that 4% really is important to you, then go for it. I had to build a new system anyway and wanted to try something new that's why I ended up with the 10700.

And this is why I'm looking at the 10700. If I've got to build one, might as well be current hardware and newest, rather than already last-gen - even though last-gen might be marginally better right now, it's end of the road.

But debating.
 
I don't see the huge difference. Granted that's a game suite average and not the specific game(s) you play. I mean 4% isn't worth $150 extra to me. To put it in perspective, if the Intel chip is outputting 165 FPS on the number, then the Ryzen 3700x system is at 158 FPS. The 2700x is about 8% behind at 152 FPS.
This is why I've stopped recommending Intel, unless the requestor actually has a use case where Intel is consistently faster in a scenario where it would actually make a real-world difference.
 
I'd go Z490 just because that's more future-proof - if Rocket Lake etc. is worth it, I might upgrade - or at least it's a better board for future uses (if I turn it into a server, etc.) :)


And this is why I'm looking at the 10700. If I've got to build one, might as well be current hardware and newest, rather than already last-gen - even though last-gen might be marginally better right now, it's end of the road.

But debating.

That was my thinking also. I had a Z370 board that I could have easily used for an upgrade with anything from a 8700k to a 9900k, but I couldn't find the deal I wanted, and the 10700 had decent price performance as long as I upgraded the motherboard. Plus, I have the option for Rocket Lake. If Rocket Lake is a turd, and I move off this platform, I can always upgrade my father's 5960x with the 10700 and my motherboard and he'll be happy. I figured the newer platform gives me more options than the older one.
 
This is why I've stopped recommending Intel, unless the requestor actually has a use case where Intel is consistently faster in a scenario where it would actually make a real-world difference.

The biggest advantage of the 10700 over the 3700x is the IGP, but you're essentially paying $50 for it. You could always buy a $50 card that will likely outperform the UHD 630 (or whatever it is). The biggest disadvantage is the fact that you need to spend at minimum $150 for a motherboard (e.g. a cheap Z490 board) to really take maximum advantage of the 10700, with higher memory speeds and higher boost. Obviously, you can buy an expensive B550/X570 board also, but you don't have to if you don't want to.

As much as I hate to admit it, I think there is some merit to Intel's claim that "Benchmarks don't matter." For the first time in a long time there is almost a parity between the companies with slight tradeoffs either way. I think it would take a very discerning person (or running very specific tasks) to be able to tell the difference without running CPU-Z, and the average consumer wouldn't be able to tell.
 
Fair enough, I guess when I look at something like this:
[chart: relative performance in games, 2560x1440]

I don't see the huge difference. Granted, that's a game-suite average and not the specific game(s) you play. I mean, 4% isn't worth $150 extra to me. To put it in perspective, if the Intel chip is outputting 165 FPS on the number, then the Ryzen 3700x system is at 158 FPS. The 2700x is about 8% behind at 152 FPS. Also keep in mind that without a Z motherboard you're going to be capped at 2933MHz on the memory, so you're going to be closer to "only" a 3% difference.

Now if you want to try something new or that 4% really is important to you, then go for it. I had to build a new system anyway and wanted to try something new that's why I ended up with the 10700. But if you're really looking for the max framerate, then you're probably the target market for the 10600k with an overclock.

The perfect budget gaming PC would be a 10600KF with an MSI Z490 Pro board, but in the current environment it's hard finding a 10600K, period, while 10700s are widely available and close to MSRP.
 
The perfect budget gaming PC would be a 10600KF
So, I have a 9100F -- used it in a Node 202 for my wife's computer (she didn't need much and I already had half the parts) -- and, well, I'm not buying another 'F'. She's started doing a lot of image stacking and then work in Photoshop, product and food photography, and, well, she needed 'more'. So I grabbed a 9400 to toss in her machine, which works great, but now I have to burn my spare 1050 Ti just to give the 9100F a video output. This is for a system that's built up out of ancient parts, some over a decade old, just to run a domain controller locally.

Of course, AMD has the same problem... and yeah, while Intel's current IGPs are nothing to get excited about with respect to 3D rendering, they do work extremely well otherwise and are beasts when it comes to transcoding. Something even AMD GPUs fall short at, somehow.
 
So, I have a 9100F -- used it in a Node 202 for my wife's computer (she didn't need much and I already had half the parts) -- and, well, I'm not buying another 'F'. She's started doing a lot of image stacking and then work in Photoshop, product and food photography, and, well, she needed 'more'. So I grabbed a 9400 to toss in her machine, which works great, but now I have to burn my spare 1050 Ti just to give the 9100F a video output. This is for a system that's built up out of ancient parts, some over a decade old, just to run a domain controller locally.

Of course, AMD has the same problem... and yeah, while Intel's current IGPs are nothing to get excited about with respect to 3D rendering, they do work extremely well otherwise and are beasts when it comes to transcoding. Something even AMD GPUs fall short at, somehow.

I haven't used an IGP in probably 10 years; I have a 290 and a 1070 sitting around... but I get it.
 