Retail 7700K Not Up to 5GHz - 3600MHz

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,601
This is our first retail purchased Intel Core i7-7700K processor that we sourced through Amazon. This one makes four that we have had hands-on and three that we still have here in our possession. To my chagrin, it does not look like 5GHz/3600MHz is in the cards for this one when it comes to running stability tests. I have run the vCore up to an actual 1.38v, and I think I might call it there. That makes our 5GHz rate 25% so far.

The black marks on the IHS are in preparation for delidding.
 
I guess we are going to see golden samples on eBay now, much like we did with SB.
 
I really wonder if 5GHz will be more common on the 7600Ks and 7350Ks. Seems like ditching the extra cores or HT would have a positive impact on OC.
 
This chip is seeming less and less impressive to me as days go by. I'll stick with my 4790K at 4.8GHz and 1.32v.

That is a fine voltage vs clock setting. Yea, not much reason to move on except for the new trimmings the platform comes with.
 
I'm impressed that you guys are taking it the extra step and getting an - albeit small - statistical sample of actual retail parts to see how they fare.

No one else (to my knowledge) does this. Major kudos.

(If you need any assistance crunching the numbers on the statistics, let me know, that's kind of my bag, baby)
 
Just as there is the theoretical island of stability with transuranic elements that get heavy enough, maybe there is an overclocking window of stability with enough voltage? MSI seems confident going to about 1.5v. What's the worst that could happen besides burning your house down?
 
This chip is seeming less and less impressive to me as days go by. I'll stick with my 4790K at 4.8GHz and 1.32v.
That is a fine voltage vs clock setting. Yea, not much reason to move on except for the new trimmings the platform comes with.

And I'll stay with my 4930K at 4.7GHz with 1.325v. (This is with SpeedStep and other power-saving settings enabled.)

I think I could get to 4.8 IF I had a water cooling setup.
 
Damn Kyle, you really are trying to get that magical 5GHz/3600MHz chip. Good luck!

Do you plan to sell the extra CPUs? Or maybe have a raffle or something?
 
I'm impressed that you guys are taking it the extra step and getting an - albeit small - statistical sample of actual retail parts to see how they fare.

No one else (to my knowledge) does this. Major kudos.

(If you need any assistance crunching the numbers on the statistics, let me know, that's kind of my bag, baby)

Check out the Silicon Lottery guy; he's claiming 56% of 7700Ks can hit 5.0GHz, 26% can hit 5.1GHz, and 5% can hit 5.2GHz.
 
Turn off HT and temps drop greatly, and it is stable at 5GHz.....for about 20 minutes now. No BSOD instadeath as before.

For TIM testing, however, I still need to find the settings where it runs hottest under full load while remaining stable.
 
This chip is seeming less and less impressive to me as days go by. I'll stick with my 4790K at 4.8GHz and 1.32v.
Ohhh come on, you die-hard gamer. You're just chomping at the bit to get that extra frame or two past 200 playing Road Rash. :)
 
Check out the Silicon Lottery guy; he's claiming 56% of 7700Ks can hit 5.0GHz, 26% can hit 5.1GHz, and 5% can hit 5.2GHz.
Down to 56% already?

He was claiming something like 65% last week.
 
Interesting. I wonder if we really need HT after all.

Found this on TPU from early 2016: gaming testing on a Skylake 6700K. They deemed HT to be fairly useless in games, and in many cases it hurts performance. That is some counter-intuitive thinking...

https://www.techpowerup.com/forums/...rks-core-i7-6700k-hyperthreading-test.219417/

That is interesting...I knew that back in the i7 920 days this was the case, didn't realize that it still was. Pretty crazy that games still don't really benefit from HT.
 
I'm considering buying the i5 for 5+GHz. Want to get the professional delidder you mentioned and some liquid metal. Want to build a lean, mean gaming machine: MicroATX, smaller footprint this time around, with a single 1080 Ti or possibly a 2080 toward the end of the year or early 2018.
 
If anyone is in SoCal, Newport area, I'll delid it for ya. I have the relidder too.
 
That is interesting...I knew that back in the i7 920 days this was the case, didn't realize that it still was. Pretty crazy that games still don't really benefit from HT.

I am guessing it has to do with the CPU cache. When you have HT on, it effectively halves the cache available to each thread compared to having HT off.

This is going to lead to more system RAM access instead of being able to pull the data straight from cache.

If programmed with HT in mind, with the data set sized specifically to keep the needed data in cache, you can gain about 20% in speed in RAM-intensive programs. If you have data sets that can stay completely in L1 cache with HT enabled, you should be able to gain a lot more.

I highly doubt ANY game engine programmers would even bother trying to do this, as dynamically setting the data set size (different CPUs have different L1, L2, and L3 cache sizes) and trying to keep track of it in the code is going to be nigh impossible and not worth the time to even attempt.

There is just waaaaayyy too much going on in a game engine to even attempt that kind of control.
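For what it's worth, the "size the data set to stay in cache" idea above boils down to chunked (cache-blocked) processing: pick a block no larger than your per-thread cache budget and stream through the data block by block, so any repeated passes hit resident data. A minimal sketch; the 16 KiB per-thread budget is an assumption (half of a typical 32 KiB L1d shared by two HT siblings), and Python only illustrates the chunking pattern, not real cache timing:

```python
# Sketch of cache-blocked processing: work on the data in chunks
# sized to an ASSUMED per-thread cache budget, so each chunk can be
# touched repeatedly while it is still cache-resident.
PER_THREAD_CACHE = 16 * 1024      # assumption: half of a 32 KiB L1d shared by two HT siblings
BLOCK = PER_THREAD_CACHE // 8     # elements per chunk, at 8 bytes per float

def blocked_sum_of_squares(data):
    total = 0.0
    for start in range(0, len(data), BLOCK):
        chunk = data[start:start + BLOCK]       # small enough to stay resident
        total += sum(x * x for x in chunk)      # extra passes over `chunk` would now be cheap
    return total

data = [float(i) for i in range(10_000)]
print(blocked_sum_of_squares(data))
```

In a real engine this is exactly the per-CPU tuning the post calls impractical: BLOCK would have to be derived from the actual cache topology at runtime rather than hard-coded.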
 
Last time I checked, ASUS RealBench did a lot of single-threaded stuff; hardly what I would use for stability testing. It might have changed by now, but I would check it before using it as a tool for stability measuring.
 
So basically a more realistic OC for KL is ~4.8 to 4.9GHz...

And when you compare that to a typical Skylake OC, we're gaining like 100MHz extra.

Or in the grand scheme of multiple Ghz clocks, that's like a 2% clock speed advantage.

Oh boy. How exciting.
 
What did you get it up to?

I went with a 6850K about two months ago. The built-in overclocking said it could do 4.4 on all cores except core 5, which it said could do 4.6. It wasn't stable there. I brought it down to a manual 4.2, then finally settled at 4.0. I left the CPU core voltage on auto, since the built-in overclocking seemed to be moving it around on its own.

Think I could get more out of it with more voltage? What would be safe, or expected vCore values on a 6850k, on air cooling? (Reeven Justice)
 
Interesting. I wonder if we really need HT after all.

Found this on TPU from early 2016: gaming testing on a Skylake 6700K. They deemed HT to be fairly useless in games, and in many cases it hurts performance. That is some counter-intuitive thinking...

https://www.techpowerup.com/forums/...rks-core-i7-6700k-hyperthreading-test.219417/

To be fair, those tests were done on Windows 7, where the scheduler treated all 'cores' identically, whether an actual core or an SMT core. Windows 10 has a greatly improved scheduler, AFAIK.
 
Last time I checked, ASUS RealBench did a lot of single-threaded stuff; hardly what I would use for stability testing. It might have changed by now, but I would check it before using it as a tool for stability measuring.
When was the last time you sat there and watched it on a per-core load basis for a couple of hours? You might update your experience before passing judgment. Then please chime in.
 
What did you get it up to?

I went with a 6850K about two months ago. The built-in overclocking said it could do 4.4 on all cores except core 5, which it said could do 4.6. It wasn't stable there. I brought it down to a manual 4.2, then finally settled at 4.0. I left the CPU core voltage on auto, since the built-in overclocking seemed to be moving it around on its own.

Think I could get more out of it with more voltage? What would be safe, or expected vCore values on a 6850k, on air cooling? (Reeven Justice)
Been stable under full load now for 168 minutes at 4.9GHz at ~1.35v vCore. That said, I did NOT tune it for the least voltage possible. I tuned it to run hot for TIM testing.
 
Last time I checked, ASUS RealBench did a lot of single-threaded stuff; hardly what I would use for stability testing. It might have changed by now, but I would check it before using it as a tool for stability measuring.
Pulled a screenshot for you using ASUS RealBench v2.44 from the system as it currently runs, and put core utilization on the graph; the graph is from the last hour.

core usage.png

The heavy line across the top of that graph is four lines of core usage, so it just looks like one big line. You can see where one core MAY have fallen out for a bit, but from my experience I see graphing "weirdness" like that happen with the graphing utility in Intel XTU. Intel XTU is a great tool for monitoring, by the way. You can customize all kinds of things easily to cover what you want to see.
 
Do you have a feel for real-world performance with HT on and HT off in games running on Win 10? Is it still like the TechPowerUp forum thread mentioned above (where all the charts and graphs seem to be missing)?
No. We will cover some of that in a future article. I am going to have Brent_Justice handle that because he is just worlds better than I am at making sure real-world gaming data is CORRECT, simply by experience.
 
Can I suggest doing the HT test with and without core parking? That extra information would be nice for a lot of users. But I can also see that the extra run of everything might take up too much time.
That will not happen.
 
02-08-2016 16:44:52 was the time I reviewed ASUS RealBench as a stability program. Please note this is an EU date, so almost a year ago. Also, I didn't pass judgment; I clearly stated that it might have been updated. I simply noted a behavior I had witnessed with RealBench before.

Personally, I don't use a live monitoring program like that during a stress test, because it eases the load on the system (more CPU quanta have to go to something other than the stress test). I simply take the process/thread cycle times before and at the end of the run, then calculate the average usage.

Of course, I don't get a nice-looking graph with live information, and voltage monitoring is lacking as well. But you get a more exact calculation of the average CPU usage, and a minutely harder stress test (hardly measurable).

Pluses and minuses for both approaches.


https://msdn.microsoft.com/en-us/library/windows/desktop/ms683223(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/desktop/ms683237(v=vs.85).aspx
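The "cycle times before and at the end of the run" method those MSDN links describe (GetProcessTimes/GetThreadTimes on Windows) can be approximated portably: sample process CPU time and wall-clock time around the run, then divide the deltas. A minimal sketch; the half-second busy loop is just a stand-in workload, not part of the method:

```python
import time

def average_cpu_usage(workload):
    # Same idea as sampling GetProcessTimes before and after a run:
    # (delta CPU time) / (delta wall time) = average cores in use.
    wall0, cpu0 = time.perf_counter(), time.process_time()
    workload()
    wall1, cpu1 = time.perf_counter(), time.process_time()
    return (cpu1 - cpu0) / (wall1 - wall0)

def busy_loop(seconds=0.3):
    # stand-in workload: spin on the CPU for a fixed wall time
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

usage = average_cpu_usage(busy_loop)
print(f"average CPU usage: {usage:.2f} core(s)")
```

A single-threaded busy loop should report close to 1.0 on an idle machine; a stress test that keeps falling off the cores would report noticeably less.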
I look forward to seeing YOUR full review of what you obviously know more about than I do. Send me the link when it is done, please.
 
At this point you might as well get an old Sandy i7 or Westmere Xeon and pull it up to 5. You could probably bin them for the same money as a new CPU.
 
I've found the best program for beating the snot out of your CPU is to run some folding. That tends to trip your system pretty quickly given the extreme stresses it puts on the entire system.
 
I've found the best program for beating the snot out of your CPU is to run some folding. That tends to trip your system pretty quickly given the extreme stresses it puts on the entire system.
Yes, but workloads differ greatly, as do the loads within those workloads. There are a whole lot of variables to take into consideration. I have been talking to ASUS and asked them to make some specific changes that would make RealBench a better tool for us.

That said, I moved to HandBrake as my primary stability tool a couple of years ago, IIRC. ASUS RealBench's backbone is centered around a HandBrake encode. We share a lot of cross-information with ASUS during mainboard testing. :)
 
Well, if Franky Ped is not happy with [H] drawing conclusions from only 3 or 4 CPUs, he should send you a box to test... that is, sealed-box processors, not cherry-picked engineering samples.
 
Interesting. I wonder if we really need HT after all.

Found this on TPU from early 2016: gaming testing on a Skylake 6700K. They deemed HT to be fairly useless in games, and in many cases it hurts performance. That is some counter-intuitive thinking...

https://www.techpowerup.com/forums/...rks-core-i7-6700k-hyperthreading-test.219417/

That is interesting...I knew that back in the i7 920 days this was the case, didn't realize that it still was. Pretty crazy that games still don't really benefit from HT.
That is flat-out nonsense for some games. Both Watch Dogs games make heavy use of HT, and if you disable it the game will slow down quite a bit in CPU-heavy areas and have some occasional hitches. Heck, the first game is a stutterfest in some areas if I disable HT on my 4770K.

And here is a post I made in another thread where some people were still delusional thinking an i7 gives nothing over an i5 in any games.

I get 25% better average and 43% better minimums with HT on than with it off in Mafia 3. This was tested with FRAPS just driving a loop with zero action going on so as to be repeatable. And even though the fps numbers look ok with HT off, the actual game plays like jittery shit as the cpu is pegged basically the whole time. It is perfectly smooth with HT on though. Bottom line is Mafia 3 needs an i7 to maintain 60 fps and run perfectly smooth.


GTX1080 and 4770k @ 4.3 with HT on

Min, Max, Avg
70, 94, 81.966


GTX1080 and 4770k @ 4.3 with HT off

Min, Max, Avg
49, 83, 66.119
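Working the quoted FRAPS numbers back through the arithmetic: the 43% minimum-fps gain checks out exactly, and the "25% better average" comes out to about 24% from the raw figures. A quick check:

```python
def pct_gain(ht_on, ht_off):
    # relative improvement of the HT-on figure over HT-off, in percent
    return (ht_on - ht_off) / ht_off * 100

# FRAPS figures from the post (Mafia 3, GTX 1080, 4770K @ 4.3)
min_gain = pct_gain(70, 49)          # minimum fps
avg_gain = pct_gain(81.966, 66.119)  # average fps
print(f"minimums: +{min_gain:.0f}%, averages: +{avg_gain:.0f}%")
```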
 
...

I highly doubt ANY game engine programmers would even bother trying to do this, as dynamically setting the data set size (different CPUs have different L1, L2, and L3 cache sizes) and trying to keep track of it in the code is going to be nigh impossible and not worth the time to even attempt.

There is just waaaaayyy too much going on in a game engine to even attempt that kind of control.

This is what was called an ARB path render. Nvidia used an ABBR syntax and AMD used an ARBA syntax, and when Vanguard came out, Brad McQuaid decided to use the AMD syntax for some reason that was never disclosed. The game failed because the beta testers noticed that it ran slower on Nvidia machines no matter what was done. Most people kept playing the game because it was fun, until the Nvidia 8800 GTX came out using the baseline ARB path and the only things that rendered on the hardware were the eyeballs, hair, and boots. It was really weird looking. After the testers saw that and said, "I am not trusting my credit card info to a developer who outright lied to the testers," most developers chose to stick to using an API or a programming interface that does what you are talking about with the caches.

DirectX could absolutely create instruction sets to target different cache sizes, because it already does; that is why you have what is called a device ID, which is used by most video games.

As far as hyper-threading goes, you still need relatively long data chains to make it work, since the processor needs enough data on hand to make branching logic profitable. Hyper-threading is a way to use registers that are not in use when the branching logic slows down. So if you want to test hyper-threading at its most efficient, you need something that can only be solved with long data chains, like Prime95, but with simpler tasks running at the same time, like comparing random numbers. Rendering a lighting setup and having it occlude objects is a good test: take a bunch of flat-shaded objects and render them with a single 256-bit color so they occlude an object in the path of ray tracing, or set up a complex render with objects sitting halfway through a clip plane.

The idea is that hyper-threading in software that is hand-coded at the assembler level is very labor intensive. When you use a programming interface or a compiler that can take the hardware specifics into account, you can get far more performance out of the same hardware, but debugging it is harder. You have to compare what you expected to what happened, then see if the compiled code is targeting the actual hardware or the projected hardware. Meaning you have to figure out whether the compiler is misidentifying the hardware, or worse, whether the hardware cannot keep up with the rate data is changed in memory.
 