No excitement about Intel's latest offering??

Were you excited about the Pentium 4? Because that's basically where Intel is at again.
Perhaps the early Northwood days, but certainly not Prescott if we are talking about single-core absolutes. If we toss multicore performance into the mix, then it's very much the Prescott days again for Intel: hot, comparatively slow, and late.
 
Going through some of the published results, the common trend is that the 10900K and 10700K are power-hungry versions of the 3950X and 3900X, which offer far better efficiency and still remain good gaming options, despite none of the tests being done overclocked. I would still say the 3950X is well worth its price point and is still the best all-round option for the high end.

Intel does score a win with the 10600K, which is definitely faster than the 3600. However, the 3600 has been sold to every man and his dog over the past year, and not many people will change platform now, given Zen 3 is close.

The big caveat is that all these CPUs need to run much higher clock speeds, run hot, and use a boatload of power compared to AMD. In most benches the gap is 10-15 FPS versus a max-OC 10th gen, while pretty much all these CPUs are in the 160-180 FPS range, which is more than enough. I'll take efficiency over a gimmick any time.
Yeah, similar outcome. The only thing I wanted to point out was: the 10600K does beat the 3600, but really that's not its competition.
10600K = $262 MSRP (if you can find one)
3600 = $170 shipped with a cooler
3600X = $200 with cooler
3700X = $290 with cooler

So really it should be compared to the 3600X or 3700X (closer to the former). And these parts are going to have to compete with Zen 3 as well for some time, so if they're not holding up well now, I feel they aren't going to magically get better. The 10600K does still outpace AMD's offerings in games, so if that's your primary focus, then Intel still has some relevance there.
 
Yeah, similar outcome. The only thing I wanted to point out was: the 10600K does beat the 3600, but really that's not its competition.
10600K = $262 MSRP (if you can find one)
3600 = $170 shipped with a cooler
3600X = $200 with cooler
3700X = $290 with cooler

So really it should be compared to the 3600X or 3700X (closer to the former). And these parts are going to have to compete with Zen 3 as well for some time, so if they're not holding up well now, I feel they aren't going to magically get better. The 10600K does still outpace AMD's offerings in games, so if that's your primary focus, then Intel still has some relevance there.
You get even better deals if you have a Microcenter near you!
 
Or NOT running simultaneous renders, which then favors the Intel fanboy tests... No offense, but you can play these games all day, which is why you pick the best CPU for your use case...

Conversely, no doubt you run renders all day. Perhaps show us some of your great 3D render work.
 
Intel is at the point where AMD was with Thuban. They set themselves up for this when they decided to sit on their laurels starting with Sandy Bridge, just like AMD did with Athlon 64. Both companies made the same mistake when their rival was down and they were on top.
Perhaps the early Northwood days, but certainly not Prescott if we are talking about single-core absolutes. If we toss multicore performance into the mix, then it's very much the Prescott days again for Intel: hot, comparatively slow, and late.
At least with Prescott (or PresHOT), Intel managed to shrink the process. If they can't manage to do that soon, it will be Athlon 64 vs Pentium 4 again.

Also funny to see fanboys and fanboyism haven't changed much in 10 years, lol.
 
Yeah, similar outcome. The only thing I wanted to point out was: the 10600K does beat the 3600, but really that's not its competition.
10600K = $262 MSRP (if you can find one)
3600 = $170 shipped with a cooler
3600X = $200 with cooler
3700X = $290 with cooler

So really it should be compared to the 3600X or 3700X (closer to the former). And these parts are going to have to compete with Zen 3 as well for some time, so if they're not holding up well now, I feel they aren't going to magically get better. The 10600K does still outpace AMD's offerings in games, so if that's your primary focus, then Intel still has some relevance there.

Ditto, that is basically my point: AMD out of the box offers really good performance and can run the stock coolers when provided. At all levels AMD is more balanced than Intel and far more efficient. Zen 3 gives Intel limited time to do anything with this gen; I have a feeling Zen 3 is going to be very impressive.
 
Conversely, no doubt you run renders all day. Perhaps show us some of your great 3D render work.

Shouldn't you be focusing your angst on shaolin95, who brought up Premiere to begin with?

Ironically, you just made a value proposition implying that a simultaneous render wasn't needed and AMD has far better value when time is not a factor :D.
 
Of course, and the fact that Zen 3 is close makes it a very limited window.

I said this back on the first page of this thread. The real reason for a complete lack of excitement is that Zen 3 and Rocket Lake are coming sooner rather than later. The argument about Zen 2 vs whatever Skylake iteration we are on now has been beaten to death over the past year. Unless you have some sort of catastrophic failure and need a new build now, you're better off standing pat until then, assuming you have Zen 2 or some CFL variant at this point.
 
Intel is at the point where AMD was with Thuban. They set themselves up for this when they decided to sit on their laurels starting with Sandy Bridge, just like AMD did with Athlon 64. Both companies made the same mistake when their rival was down and they were on top.

At least with Prescott (or PresHOT), Intel managed to shrink the process. If they can't manage to do that soon, it will be Athlon 64 vs Pentium 4 again.

Also funny to see fanboys and fanboyism haven't changed much in 10 years, lol.
Not quite to that point; they are still competitive in a few workloads. The difference was closed a lot with Zen 2, but there are still a few things that do better on Intel hardware. If this evaporates (or even just gets close enough) with Zen 3 and Intel doesn't maintain their gaming performance lead, then they will truly be in a bad spot... Right now they at least have that value proposition for gamers.
 
Have you guys been watching the minimum frame rate results? When looking at average frame rate Intel is pretty underwhelming, but when looking at minimum frames Intel has a pretty significant edge in some games. I hope Zen 3 closes the gap here to make a Ryzen upgrade a no-brainer.
 
Have you guys been watching the minimum frame rate results? When looking at average frame rate Intel is pretty underwhelming, but when looking at minimum frames Intel has a pretty significant edge in some games. I hope Zen 3 closes the gap here to make a Ryzen upgrade a no-brainer.

This is true, but it's still not a worthwhile jump if you're already on an 8th or 9th gen Intel. Ryzen 2000 to Ryzen 3000 was pretty impressive, so I imagine most people would prefer to hold out.
 
Well, that just about killed ANY chance of me jumping on a 10700K.

Guru3D said:
One thing I do like to address is the life-span of this platform. Comet Lake-S and Rocket Lake-S will be the only two generations of processors that will work on LGA1200 series motherboards. That means you will not be able to upgrade far based on your motherboard. Next year, in 2021, architectures like Alder Lake-S have already been scheduled; it's mentioned that the pin count will once again change to 1700, a socket LGA1700. So while LGA1200 has just been introduced, it will last roughly one year before becoming obsolete as an upgrade path.
 
Not only that, but you have to give it a full core load AND overclock it. Here is a screen grab of the 10900K's power usage 5 minutes into a Blender render.

Note that at stock speeds, the 10900K uses slightly less power than a Ryzen 3900X:

View attachment 246805

It only goes nuts on power when you give it a big all-core overclock with an all-core workload.

That's only after Tau expires. On shorter workloads, or with MCE enabled, it is far higher.
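For reference, here's a minimal sketch of how the stock limits play out, assuming Intel's published defaults for the 10900K (PL1 = 125 W, PL2 = 250 W, Tau = 56 s; board vendors often override these, and MCE effectively removes the limit entirely):

[code]
# Simplified step model of Intel's stock power limits for a 10900K.
# The real algorithm budgets against an exponentially weighted average
# of package power, but the step version shows the behaviour well enough.
PL1_W = 125.0   # sustained ("long duration") package power limit
PL2_W = 250.0   # boost ("short duration") package power limit
TAU_S = 56.0    # roughly how long the chip may sit at PL2 under load

def package_cap(seconds_into_all_core_load: float) -> float:
    """Power cap in effect at a given point in a sustained all-core run."""
    return PL2_W if seconds_into_all_core_load < TAU_S else PL1_W

for t in (10, 56, 300):
    print(f"{t:>4} s into a render -> cap = {package_cap(t):.0f} W")
# Five minutes into a Blender render the chip is held to 125 W, which is
# why the screenshot above looks tame; a short burst would show ~250 W.
[/code]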
 
Well, that just about killed ANY chance of me jumping on a 10700k

Hasn't that been the case for a long time, that Intel boards work for two generations of CPUs? So this isn't any different.

It would surprise me if Gen 11 and Gen 12 were still on track for whatever their original release windows were meant to be. People figured Gen 10 would be out Q4 of last year, but instead it's out today, more than six months later. That should theoretically push the next couple gens by similar time frames as well, meaning Rocket Lake (LGA1200) this time next year and Alder Lake (LGA1700) this time in 2022. Releasing Rocket Lake at the end of this year, so close to Comet Lake's launch, wouldn't make sense from a business standpoint.*

*I could be totally wrong, I'm just hoping it plays out like I said so I didn't waste my money on the 10700k I just bought. :(
 
I do wonder what the power vs. frequency curve looks like; this feels like one of those cases where the processor got pushed too far, and backing off a couple hundred MHz will really bring power consumption down. A 4.6 or 4.7 GHz 10700K seems like it might sit in a sweet spot of price vs performance - it will more or less match the 3800X at the same price and will probably have a high-but-manageable TDP, at least low enough to work with an AIO. I think it should even be possible to tune the all-core turbos on most boards so you can keep that delicious 5.3 GHz favored core and enjoy better-than-stock multithreaded performance; it might even be possible to simply set the turbo power limit to something lower than 250 W and let the processor take care of its own voltages.

I think most of the issues with high power consumption lie in the fact that almost every motherboard throws Intel's recommended power limits out the window, resulting in a thermally unmanageable mess. Ryzen is really no better when run at its limits, but AMD has a really slick implementation of boost that most boards stick to, which gives you balanced performance at reasonable power levels. If you actually try to run a 3800X anywhere close to its single-core boost frequencies, it too becomes a hot mess.

Overall, I think the 10700K is a solid contender against AMD's offerings with a little bit of tuning. The 10900K is a whole different question, because you do give up some multithreaded performance to the 3900X, and because of the core deficit even getting close requires a huge amount of power.
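On that last point, here's a minimal sketch of capping the turbo power limit from software, using the Linux kernel's intel_rapl powercap interface (needs root, and the domain path can vary by system; on Windows you'd do this from the BIOS or Intel XTU instead):

[code]
# Cap the package power limits via the powercap sysfs interface (Linux,
# run as root). On a typical system constraint_0 is the "long_term"
# limit (PL1) and constraint_1 the "short_term" limit (PL2).
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain

def set_limit(constraint: int, watts: float) -> None:
    """Write a power limit in microwatts, as the interface expects."""
    path = RAPL / f"constraint_{constraint}_power_limit_uw"
    path.write_text(str(int(watts * 1_000_000)))

print("domain:", (RAPL / "name").read_text().strip())  # e.g. "package-0"
set_limit(1, 180.0)  # pull PL2 well under the 250 W default
set_limit(0, 125.0)  # keep PL1 at Intel's stock 125 W
[/code]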
 
Hasn't that been the case for a long time, that Intel boards work for two generations of CPUs? So this isn't any different.

It would surprise me if Gen 11 and Gen 12 were still on track for whatever their original release windows were meant to be. People figured Gen 10 would be out Q4 of last year, but instead it's out today, more than six months later. That should theoretically push the next couple gens by similar time frames as well, meaning Rocket Lake (LGA1200) this time next year and Alder Lake (LGA1700) this time in 2022. Releasing Rocket Lake at the end of this year, so close to Comet Lake's launch, wouldn't make sense from a business standpoint.*

*I could be totally wrong, I'm just hoping it plays out like I said so I didn't waste my money on the 10700k I just bought. :(

Z370/Z390 have been around for a while :) also, we're talking about this platform only lasting a year, so it's a short generation.
 
Z370/Z390 have been around for a while :) also, we're talking about this platform only lasting a year, so it's a short generation.

But Z370/Z390 only support 8th gen and 9th gen. If Z490 supports 10th gen and 11th gen (which it's supposed to), that's the same as ever, and it'd last two years if 12th gen isn't out until 2022. But I guess nobody knows for certain right now exactly when things are going to be releasing.

If 12th gen somehow comes out next year, I'll be right there with you being upset about Z490 lasting for such a short amount of time.
 
When you try to do a HandBrake video encode, it ramps right up to 180W+. Meanwhile, I throw the same encode on my R5 3600 and it runs with less than half the power. Even sitting here typing this, my 9900K is around 15-25W, not 2-3W like you're claiming in another thread. Gaming is a mixed bag. I don't have AC: Odyssey installed to check your numbers, but in COD: WW2 it would fluctuate between 50-110W depending on the situation, but definitely not 30-40W.

My experience is nowhere near as rosy a picture as you're claiming. You're complaining about having to tweak the AMD system, but then you're saying that you have to tweak the Intel system to get it to work at lower power (I still haven't found a BIOS setting for core duty cycling in my ASRock Taichi board). I would never make a claim about "net power usage" like that.

Is it surprising that an overclocker's board (Taichi) hides those settings?

Re HDC: it's in QuickCPU, and you can add it to the Windows power plan from that; same for Speed Shift. Alternatively, you can fiddle with the power plan in PowerShell.

I would be happy to provide you with some screenshots to prove my points if you prefer.

I'm not complaining about having to tweak anything. I find it a bit painful that you have to do it on Intel to get low power consumption on the desktop. Maybe that is why AMD releases their own power plans?

Regarding encodes, QSV works, as does NVENC; using either would use less power than software encoding.
 
Is it surprising that an overclocker's board (Taichi) hides those settings?

Re HDC: it's in QuickCPU, and you can add it to the Windows power plan from that; same for Speed Shift. Alternatively, you can fiddle with the power plan in PowerShell.

I would be happy to provide you with some screenshots to prove my points if you prefer.

I'm not complaining about having to tweak anything. I find it a bit painful that you have to do it on Intel to get low power consumption on the desktop. Maybe that is why AMD releases their own power plans?

Regarding encodes, QSV works, as does NVENC; using either would use less power than software encoding.

I saw that in the other thread. I'll have to check out that QuickCPU. What do you use to measure your power consumption? Maybe that's the best place to start.

As for encoding, I'm aware of QSV and NVENC. Generally, they were considered inferior solutions and were prone to distortion, but I haven't looked into it in a while. Maybe it's improved.
 
As for encoding, I'm aware of QSV and NVENC. Generally, they were considered inferior solutions and were prone to distortion, but I haven't looked into it in a while. Maybe it's improved.

Years ago. NVENC is now considered superior to software x264 for streaming.
 
Years ago. NVENC is now considered superior to software x264 for streaming.

I'm converting a variety of video to x265, not necessarily for streaming. I have a 1060 and a 1070 available, so obviously not the Turing version of NVENC. I also have a spare i5 8400 sitting around that I used for a while, but the R5 3600 is somewhat faster than that when not using QSV.
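For what it's worth, a GPU-offloaded conversion like that can be driven through ffmpeg's hevc_nvenc encoder (Pascal cards like the 1060/1070 do support HEVC encode). A minimal sketch, with the file names and quality target as placeholders:

[code]
# Re-encode a file to HEVC on the GPU via ffmpeg's hevc_nvenc encoder.
# File names and the -cq quality target are placeholders.
import subprocess

def nvenc_hevc(src: str, dst: str, cq: int = 28) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c:v", "hevc_nvenc",          # NVENC HEVC encoder
        "-preset", "slow",             # favour quality over encode speed
        "-rc", "vbr", "-cq", str(cq),  # constant-quality-style rate control
        "-c:a", "copy",                # leave the audio stream untouched
        dst,
    ], check=True)

nvenc_hevc("input.mkv", "output_hevc.mkv")
[/code]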
 
I just have a windowed copy of Intel XTU showing temps, power consumption, clock speeds, etc.

This is just after I booted up, idling at the desktop. I have seen the CPU cores/package go as low as 0.67W/1.5W in QuickCPU when it settles down.

Just after boot.JPG
 
5.67 W with 50% of cores parked, and at what speed? I don't see the current speed listed anywhere.
Edit: is that 4 GHz supposed to be the current speed?

Correct, 4.10 GHz. Note Speed Shift is enabled, so that is only a snapshot; it can flick up in 15 ms. Max turbo is set to 5 GHz (1-7 cores) and 4.8 GHz for 8 cores, with no AVX offset.
 
Correct, 4.10 GHz. Note Speed Shift is enabled, so that is only a snapshot; it can flick up in 15 ms.
So what is the actual speed when SpeedStep kicks in? Because I'm guessing that 4.06 is there because that's what you have the default speed set at, not what it's currently sitting at. CPU-Z does the same thing with my system: if it's idling low at like 3 GHz, it still shows the 3.6 default.
 
So what is the actual speed when SpeedStep kicks in? Because I'm guessing that 4.06 is there because that's what you have the default speed set at, not what it's currently sitting at. CPU-Z does the same thing with my system: if it's idling low at like 3 GHz, it still shows the 3.6 default.

There is no "actual speed"; that's not how Speed Shift works. It can be as high as 5 GHz or as low as 800 MHz. The key is that in 15 ms it can be either. There could be a spike up (e.g. pressing a key), but that spike could last only a fraction of a second; you wouldn't see it.
1590013936186.png

1590014235526.png


Note the following is 6xxx-related; the newer version (as above) executes the transitions in less than 15 ms, but this illustrates how the processor doesn't need to spend as much time in between frequencies and can complete tasks faster:
1590014571065.png


Images taken from here: https://www.anandtech.com/show/1095...ration-kaby-lake-i7-7700k-i5-7600k-i3-7350k/3

[edit] I think I might need to write a guide about this stuff [/edit]
 
There is no "actual speed"; that's not how Speed Shift works. It can be as high as 5 GHz or as low as 800 MHz. The key is that in 15 ms it can be either. There could be a spike up (e.g. pressing a key), but that spike could last only a fraction of a second; you wouldn't see it.
View attachment 246918
View attachment 246920

Images taken from here: https://www.anandtech.com/show/1095...ration-kaby-lake-i7-7700k-i5-7600k-i3-7350k/3
The actual speed is the speed it's at when you took the shot. I understand how it works, thanks. And two different programs may show totally different speeds.
PS: when I mentioned CPU-Z I may have been thinking of HWiNFO; this is what I see with [H] open and YT playing

1590014546459.png
 
The actual speed is the speed it's at when you took the shot. I understand how it works, thanks. And two different programs may show totally different speeds.
PS: when I mentioned CPU-Z I may have been thinking of HWiNFO; this is what I see with [H] open and YT playing

I'd be happy to take a shot with CPU-Z/Task Manager, but it'd be 4.1 GHz or so as in the above - system as per sig.

See attached - playing a YouTube video with [H] open.
 

Attachments: Capture.JPG (285.7 KB)
The actual speed is the speed it's at when you took the shot. I understand how it works, thanks. And two different programs may show totally different speeds.
PS: when I mentioned CPU-Z I may have been thinking of HWiNFO; this is what I see with [H] open and YT playing

View attachment 246926

So playing around with XTU: yes, the Max Core Frequency is the speed of the highest core at the time of the shot. It is roughly equivalent to Windows' "Speed" in Task Manager (not Base Speed). If you look, it will also tell you the number of cores that are active.
 
I'd be happy to take a shot with cpuz/task manager - but it'd be 4.1ghz or so in the above - system as per sig

See attached - playing a Youtube video with H open.
Yeah, see, all three have different speeds. Which is correct?! 4 cores at 3.5 is gonna drop wattage waaaay down.
 
Yeah, see, all three have different speeds. Which is correct?! 4 cores at 3.5 is gonna drop wattage waaaay down.

I also see 3 different speeds the more I look at it. I'm not sure which is the most accurate (but then again, I'm here to figure out the 3W power draw ;) ).
 
Yeah, see, all three have different speeds. Which is correct?! 4 cores at 3.5 is gonna drop wattage waaaay down.

Does it matter, when it can ramp up fast if required? Clearly it is hopping around, and all the pieces of software are sampling at different intervals.
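A minimal sketch of why the tools disagree, assuming psutil is installed: each monitor takes point samples of a clock that can move every ~15 ms, so what you see depends entirely on when, and how often, you look (on some platforms cpu_freq() reports a coarse or nominal value, so treat this as illustrative rather than a precise measurement):

[code]
# Poll the reported CPU frequency at two different rates to show how
# tools sampling the same moving value can disagree. Needs psutil.
import time
import psutil

def sample_clock(interval_s: float, n: int = 5) -> list[float]:
    """Take n point samples of the reported current frequency (MHz)."""
    readings = []
    for _ in range(n):
        readings.append(psutil.cpu_freq().current)
        time.sleep(interval_s)
    return readings

print("fast poll (50 ms):", sample_clock(0.05))
print("slow poll (1 s):  ", sample_clock(1.0))
[/code]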
 
Does it matter, when it can ramp up fast if required? Clearly it is hopping around, and all the pieces of software are sampling at different intervals.
Just trying to make sense of what you posted; it ain't working, nm.
 