Kaby Lake 7700K vs Sandy Bridge 2600K IPC Review @ [H]

Sure, you could say that, but if you haven't tested it in another system to see if it's the CPU, you can't make that claim. It's super rare for a CPU to be that bad, and Intel will not ship a CPU with a stock cooler that can't keep it cool. That's why I keep saying there is something else wrong if you can't even run it at stock.

Try it in another system and see if you have the same issue. If it still runs hot in a different setup, then I could agree with you.

Unfortunately, I don't have an extra Sandy Bridge motherboard lying around. This is the second motherboard I have had that CPU in, though. My original motherboard was an ASUS P8Z68-V PRO (non-GEN3) which had faulty USB 3.0 ports, so I returned that one and purchased the one I am using now. I got the same CPU temps on both motherboards. This PC is also on its second case, since I didn't like the cable routing of my original case and I wanted native mounting slots for my SSDs. Changing cases also did not make a difference in my CPU temps.
 
I've been building PCs since I was 8 years old. I know how to properly seat a CPU cooler.

Don't know if it makes a difference, but I live in South Florida, and right now the ambient temperature of my home office, where the computer is, is around 83 degrees F. It doesn't help that I have 2 desktops and 2 servers running in that room.

I live in Australia. I had a 2500K at 4.5GHz for a long while, with temps around 70 degrees using a Noctua U9B SE2 92mm cooler. The 2600K, I believe, adds about 10 degrees due to HT, ish. What were your voltages like? From memory, 1.35V would almost guarantee stability at 4.5GHz, but I had mine at 1.29V from what I can remember. Some better chips could do 1.25V. But with HT you usually need a touch more, so 1.3-1.32V for a 2600K on a 212 Evo should be doable? Temps should sit around 75-80 degrees under Prime. But YMMV.

Either way, you can make a rough assumption: a 2600K, which had a stock boost of 3.8GHz, versus a 4.5GHz OC is what, an 18% clock increase? Given that doesn't equate to exactly the same linear performance increase, let's throw out a rough figure of a 10% increase across the board in actual translated performance.

So for the 7700K, where STOCK is a 4.5GHz turbo, which is a ~25% improvement, we can assume roughly a 30-35% improvement for you going from a stock 2600K to a 7700K. Obviously this is an assumption/guesstimate and shouldn't be taken as an exact measure.
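Purely to illustrate that guesstimate, here's a minimal Python sketch of the clock-delta reasoning above; the 0.55 "real-world" scaling factor is just an assumption standing in for the "18% clock ≈ ~10% actual performance" rule of thumb, not a measured value.

```python
# Back-of-the-envelope sketch of the reasoning above.
# The scaling factor is an assumption ("18% clock ~ 10% real-world"), not a measurement.

def clock_gain_pct(base_ghz: float, target_ghz: float) -> float:
    """Percentage clock increase going from base_ghz to target_ghz."""
    return (target_ghz / base_ghz - 1.0) * 100.0

def rough_perf_gain_pct(clock_pct: float, scaling: float = 0.55) -> float:
    """Very rough translated performance gain, assuming performance
    scales at only a fraction of the clock increase."""
    return clock_pct * scaling

oc = clock_gain_pct(3.8, 4.5)  # 2600K stock boost -> 4.5GHz OC, ~18%
print(f"2600K 3.8 -> 4.5GHz: {oc:.0f}% clock, ~{rough_perf_gain_pct(oc):.0f}% real-world (guess)")
```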

If you see no issue with your current gear now, however, then unless you require the new features (USB, M.2 NVMe support), eh, I'd keep it as is and work on that OC. I think even pushing for 4.2-4.4GHz would give a reasonable bump. A clean-up/remount may help a few degrees too if it hasn't been done for a while.
 
Thank you.

The cooler has actually been remounted recently since I changed cases. I'm interested in upgrading since I am starting to see frame rate drops in some games because of the CPU.
 
Don't know what to tell ya, but there is something seriously wrong.

It isn't Kyle's fault you can't clock your CPU to 4.5GHz when 99.99999999999999% of Sandy Bridge CPUs do, so it is a totally valid comparison. I am sorry you are having heating issues, but it sounds more and more like it is not the CPU that is the issue.

Good luck

Silicon lottery. It happens.
 
This guy is a very effective troll. He's done a good job of working up everyone here - it's clear he's not interested in a productive conversation. Just ban him.
 
Kyle, how are you going to handle an IPC test of Intel vs AMD when RyZen becomes available? Clock for clock, core count the same, SMT off/on, etc...

I will tell you more when/if we get the parts in our hands.

I was just taken aback by his snarky comment to what I thought was a legitimate question.

So now I was insulting and unprofessional up the thread? You are going to have to make up your mind there, cowboy.
 
Maybe somebody should resurrect Cyrix. They could compete with Intel and AMD. Their MediaGX was awesome. Kidding, of course. I remember trying to talk friends out of buying PCs based on Cyrix crap.

But it would be cool to have a third player. I can't see that happening, ever.
 
That would be great, lol. I think there are too many patents and shiz out there now for that to happen without huge costs, though.
 
I thought Athlon XP (Thoroughbred, Barton...) was only neck and neck with Pentium 4 (Willamette, Northwood), and AMD was only able to surpass Intel when they released the ClawHammer/Newcastle Athlon 64 and Intel released the Prescott P4 dud? Correct me if I'm wrong.

Clock for clock, the P4 was worse than the P3.

The Athlon was already slightly competitive against the P2/P3 processors... so when the P4 came out, AMD was pulling ahead. The P4 may have been able to reach higher clock speeds, but the AMD processors were absolutely destroying it in efficiency at the time. This period is right about when raw clock speed started becoming irrelevant in terms of actual performance, which is why AMD had to start with the 1800+ style model numbering, since Intel was still advertising its processors as faster... purely on clock speed.
 
I'm still running my 2600K at 5GHz and it's still fast. I'm more amazed that the four Force 3 SSDs in RAID 0 haven't had a failure yet. Only the video card has been changed to stay current.
 
I've been using an i7-3770 for the last 3 years and still see no need to upgrade. Like you, the only thing I have been updating is video cards. Back in the day I'd upgrade my CPU every year. This is the longest I have ever kept a CPU.
 
Fair enough. I guess [H] will just have to stand proudly as the only site left in 2017 with the 640x480 benches. And everyone else interested in CPU impact on gaming with resolutions relevant to them (1080p) will have to look to the myriad of other sites that provide their answer.

I don't think you understand what those benchmarks represent. At higher resolutions you are GPU limited more than CPU limited. There are few situations where a higher end CPU, or clock speeds make a significant impact on gaming. Most of the time you need either ultra-high resolution single displays or multi-monitor arrays (NVSurround / Eyefinity) to see any significant improvement in frame rates due to CPU changes. This isn't new information either. Take Lost Planet for example. The engine scales all the way up to 8 threads and did so back in late 2007. The performance difference at lower resolution is significant and as the resolution increases you quickly become GPU limited. We see the same thing today with modern games. In an article about IPC you have to isolate the difference between the CPUs you are comparing. To do that you have to drop the resolution in the gaming benchmarks to see that difference.
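To make that concrete, here's a toy frame-time model in Python with made-up numbers purely for illustration: each frame costs roughly max(CPU time, GPU time), so once the GPU term dominates at high resolution, the CPU difference stops showing up in the FPS number.

```python
# Toy model: frame time ~ max(CPU time, GPU time). GPU time grows with resolution,
# so at high resolution the GPU term dominates and the CPU difference disappears.
# All numbers below are hypothetical, chosen only to illustrate the effect.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

fast_cpu, slow_cpu = 5.0, 7.0  # hypothetical ms of CPU work per frame
for res, gpu_ms in [("640x480", 2.0), ("1080p", 8.0), ("4K", 25.0)]:
    print(f"{res}: fast CPU {fps(fast_cpu, gpu_ms):.0f} fps, slow CPU {fps(slow_cpu, gpu_ms):.0f} fps")
# At 640x480 the CPUs differ (200 vs ~143 fps); at 4K both read 40 fps.
```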

You also seem to have missed the part where Kyle stated that additional information was coming at a later date that would show real-world examples at actual gaming resolution.

I thought Athlon XP (Thoroughbred, Barton...) was only neck and neck with Pentium 4 (Willamette, Northwood), and AMD was only able to surpass Intel when they released the ClawHammer/Newcastle Athlon 64 and Intel released the Prescott P4 dud? Correct me if I'm wrong.

Not quite. AMD's original Athlon was often faster than its Pentium II/III counterparts. In general the Pentium IV always lagged behind AMD's Athlon XP and Athlon 64. Intel and AMD basically slugged it out, with Intel playing catch-up, only barely achieving parity in some cases and still losing out in others. Every time Intel released a CPU it would close some of the performance gap, only to lose it again to the subsequently released AMD CPU. There was roughly a five-year period where AMD was often the performance winner in most cases.

The landscape would get a little more interesting when Intel introduced Hyper-Threading technology. In the applications that supported it, as well as in video editing / encoding tasks, the Intel CPUs were usually if not always significantly faster. Whether due to the platform or the performance, Intel remained the better choice for serious workstation builds. This was due to professional and content creation applications working very well with the Pentium IV's excellent cache design and, by their nature, lending themselves to being somewhat multithreaded or being patched to do so. Professional applications like Photoshop, Adobe Premiere, AutoCAD, and 3D Studio Max were already multithreaded as they were intended to be used on dual-processor workstations as it was. The argument could also easily be made for the Intel chipset and drivers being superior to AMD's offerings at the time. Many people also opted for Intel for the better drivers and support in professional usage scenarios. Intel became the multitasking build while the Athlon 64 was what gamers and many enthusiasts used. AMD and Intel would have this lead / catch up / lead / catch up dynamic all the way up until the release of the Core 2 Duo.

This is in many ways similar to what we saw with Bulldozer. It was faster than, or nearly as fast as, many Intel offerings in professional applications or video encoding but faltered everywhere else. Even when Intel's CPUs were faster in those areas you could make a value argument for Bulldozer / Vishera etc. It wasn't quite the same, but clearly Bulldozer was AMD's Pentium IV. Of course, if you go back through history you'll see that AMD was just returning to business as usual. For most of the time Intel and AMD have been rivals, the former has generally held a performance advantage. Usually a significant one.
 
Intel and AMD have done this before. The original Pentiums were fast until AMD built the T-Birds and then the Barton-era chips. Then AMD slacked off because they had the working x64 licensing, and Intel had the branching logic of the P4, which was a joke at doing actual work because it could not predict far enough out to justify the length of the pipeline. Then people started laughing at Intel, so they made the Core 2 Duo, which was very fast. The old i7-920s were the evolution of those. The problem is that cache memory is really expensive, but if they want to move most of the tasks on-die then it all needs to go through the scheduler. This slows down the whole chip but allows some tasks to happen faster. What Nvidia did, and what Intel is trying to figure out how to duplicate without paying Nvidia for it, is having a multiple-clock scheduler that can talk to itself. I am waiting to see the chips once they figure that out.
 
You have to understand how AMD got where it did with K7 and K8. Essentially they bought out NexGen Systems and hired the engineers behind the DEC Alpha processors as they were let go by Compaq after its acquisition of Digital Equipment Corporation. This is why those processors were as good as they were. The reason they "slacked" off is because AMD couldn't keep the talent in house due to its management style and internal politics. Intel worked hard to bring about the Core 2 Duo while AMD struggled internally to bring about a successor architecture to K8. Their new direction was based around a prediction for how the industry would be a few years down the road and sadly AMD miscalculated this direction badly.
 
It wasn't quite the same, but clearly Bulldozer was AMD's Pentium IV.
I found that to be the biggest irony. They made the same miscalculation with BD that Intel made with the P4 and basically flipped places. After the success of Athlon, how anyone at AMD thought it was a good idea to go down the P4 route is beyond me.
 
It wasn't AMD's intention, I'm sure. AMD had to regroup when it lost much of its K7/K8 design team. The new team chose a direction based on their prediction of market trends and where computing was going. They predicted greater parallelism in application development. In the server market this made sense. In the desktop market, we've always been held back by legacy applications. Many of the applications we use on the desktop, such as games, do not benefit from parallelism the way other applications can. It isn't simply a matter of application developers making the choice to utilize more CPU cores, as is widely believed. It is a matter of some tasks simply not lending themselves well to parallel processing. Many armchair game developers used to talk about running physics on one core, AI on another, audio on another, etc. The reality is that even if these tasks all worked in a way where they could each benefit from their own core, the workloads aren't equal. Different tasks require different amounts of resources. Some tasks simply get processed faster than others. You will always end up with some cores being more idle than others when it comes to gaming. Unless programming, and thus how games run at the CPU level, fundamentally changes, this will always be the case.
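One way to put a rough number on that idea is Amdahl's law; here's a minimal sketch, assuming purely for illustration that half the per-frame work is effectively serial.

```python
# Amdahl's law: if only a fraction p of the per-frame work can run in parallel,
# extra cores quickly stop helping. p = 0.5 below is purely illustrative.

def speedup(p: float, cores: int) -> float:
    """Ideal speedup with parallel fraction p on the given core count."""
    return 1.0 / ((1.0 - p) + p / cores)

for cores in (2, 4, 8, 16):
    print(f"{cores:>2} cores: {speedup(0.5, cores):.2f}x")
# With half the work effectively serial, even 16 cores only reach ~1.88x.
```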

On the server side it was a good bet for AMD to go the route they did with Bulldozer. Where they fucked up wasn't even the lower IPC but the simple fact that the CPUs didn't have the performance per watt that Intel's offerings did. If AMD had had a performance-per-watt advantage with Bulldozer, the IPC wouldn't have been as detrimental, because the price of the processors and platform could have offered more cores for greater parallelism in the same space, within the same power and thermal envelopes. Applications like VMware would have benefited from greater numbers of cores in the same space at a lower cost, but Bulldozer utterly failed here. To make matters worse, the extra power and heat didn't equate to better performance. That's basically the nail in the coffin, as the only thing those CPUs had going for them was price. Even that wasn't enough of a benefit, because datacenters do not look only at processor and motherboard price but at overall operating costs. If you have a processor with a TDP of 200 watts compared to 145 watts (or less) and less performance, suddenly that cheaper 200-watt-TDP processor may not seem like such a good deal. It's even worse if you need a certain level of performance and have to go to a dual-socket system, or more physical servers, to achieve the same level of performance.

The IT industry has a long-standing bias against non-Intel x86 processors. AMD needs to do anything it can to overcome that. When AMD delivers a server processor that eats more power and delivers less performance, without a solid reputation to help drive sales, it simply cannot succeed.
 
Was it explained what went wrong with the design that led to such high power draw?
 
I don't know if anyone ever figured that out. I doubt AMD ever disclosed what went wrong with the design to cause such massive power draw and heat production. You probably need to know more about semi-conductor design than you or I ever will to really understand whatever AMD could tell us were they so inclined. My best guess is that the IPC performance was so disappointing that AMD had no choice but to run Bulldozer at the bleeding edge of what those CPUs were physically capable of. Unfortunately, running those CPUs at such speeds probably resulted in a massive amount of transistor leakage and inefficiency. This can be evidenced by relatively poor overclocking and the fact that the high end CPUs such as the FX-9590 were largely incapable of running much if any faster than their stock turbo frequencies.

Basically AMD gave us all those chips were good for out of the box. If you ran those CPUs at lower frequencies I suspect you might discover a sweet spot where they run at comparable wattage to Intel's offerings given the same core count. I think you would find that those CPUs would produce very mild temperatures. Unfortunately, I'd bet the clocks are so low at that point as to be virtually unusable when compared to the preceding Phenom II processors and anything Intel had offered from the Core 2 Duo onward.
 
Just some random IPC stuff I have recently run across for us to mull over..

In its 7700K review, Eurogamer.net had some info on "clock-for-clock" comparisons of 3770K, 4790K, 6700K and 7700K CPUs, all at 4.5GHz. Excel and I crunched out the percentages based on Eurogamer's numbers for the seven games they tested with the 4.5GHz overclock..

7700K   6700K   4790K   3770K
------------------------------------------------
 108%    108%    105%    100%  - Assassin's Creed Unity
 130%    127%    107%    100%  - Ashes of the Singularity
 118%    117%    109%    100%  - Crysis 3
 103%    104%    103%    100%  - The Division
 124%    124%    109%    100%  - Far Cry Primal
 130%    132%    110%    100%  - Rise of the Tomb Raider
 117%    115%    101%    100%  - The Witcher 3
------------------------------------------------
 119%    118%    106%    100%  - Average %
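Just to show the arithmetic, here is a small Python sketch that reproduces the Average row from the per-game percentages above (everything relative to the 3770K at 100%):

```python
# Average of the seven per-game percentages from the Eurogamer table above,
# each CPU expressed relative to the 3770K at 100%.

games = {
    "Assassin's Creed Unity":   (108, 108, 105, 100),
    "Ashes of the Singularity": (130, 127, 107, 100),
    "Crysis 3":                 (118, 117, 109, 100),
    "The Division":             (103, 104, 103, 100),
    "Far Cry Primal":           (124, 124, 109, 100),
    "Rise of the Tomb Raider":  (130, 132, 110, 100),
    "The Witcher 3":            (117, 115, 101, 100),
}
for i, cpu in enumerate(("7700K", "6700K", "4790K", "3770K")):
    avg = sum(row[i] for row in games.values()) / len(games)
    print(f"{cpu}: {avg:.0f}%")   # ~119%, 118%, 106%, 100% -- matches the Average row
```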


TechReport also did a small Sandy Bridge/Kaby Lake comparison although not exactly "clock-for-clock" since the 7700K was at 4.8GHz compared to the 2600K's 4.5GHz, a 7% overclock advantage for the 7700K.

Raw (7700K @ 4.8GHz vs 2600K @ 4.5GHz)  -  With the 7% overclock advantage removed
--------------------------------------------------------------------------------------------------------
128% - 120% - 7-zip compression
111% - 104% - 7-zip decompression
146% - 136% - Blender Cycles
129% - 121% - Cinebench R15 - 1 thread
131% - 122% - Cinebench R15 - all threads
145% - 136% - Handbrake transcoding
136% - 127% - JetStream
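For clarity, here's a small sketch of how the right-hand column follows from the left: divide out the 7700K's stated 7% clock advantage (4.8GHz / 4.5GHz ≈ 1.07) to approximate a clock-for-clock comparison. The numbers are the ones from the table above.

```python
# Remove the 7700K's ~7% clock advantage (4.8 / 4.5 ~= 1.07) from the raw results
# to approximate a clock-for-clock comparison, per the TechReport numbers above.

clock_advantage = 1.07  # 4.8GHz vs 4.5GHz

raw = {  # 7700K @ 4.8GHz as a percentage of the 2600K @ 4.5GHz
    "7-zip compression": 128,
    "7-zip decompression": 111,
    "Blender Cycles": 146,
    "Cinebench R15 - 1 thread": 129,
    "Cinebench R15 - all threads": 131,
    "Handbrake transcoding": 145,
    "JetStream": 136,
}
for test, pct in raw.items():
    print(f"{test}: {pct}% raw -> ~{pct / clock_advantage:.0f}% clock-for-clock")
```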

Long-time buddy of [H] Gordon Mah Ung posted this IPC chart in his 7700K review with the comment "it’s a pretty sobering wakeup call to see just how slowly IPC is inching along in modern CPUs."

[Chart: Kaby Lake Cinebench IPC comparison, all CPUs locked at 2.5GHz]


This chart shows Kaby Lake having a 20% IPC increase over Sandy Bridge. Gordon went on to write: "The good news for modern processors is IPC isn’t the only place you can pick up performance. Clock speed, core count and ability to hold Turbo Boost speeds longer (thanks to improved manufacturing) have all added up to better performance."
 
Lynch me for it, but I don't really see that as terribly negative. At least my wallet can be happy.
 
Great info Kyle, thanks! Still running a Sandy Bridge-E (3930K) OC'd to 4.2GHz. I was really on the fence about jumping to Broadwell-E but held off; I think Q2, with the apparent release of Skylake-X / Kaby Lake-X, might finally be the time for an upgrade.
 
Yay, absolutely no reason to "upgrade" from an X79 setup.

And about a week ago I upgraded my wife's computer to an X79 setup (E5-1650 v1) for less than $200, because all I needed was the CPU and RAM.

Rig in sig is not going anywhere soon unless Ryzen is a whole lot better, but with only dual-channel RAM I am not expecting it to get anywhere near the RAM throughput I already have.

Aren't you required to run ECC RAM with Xeon setups? What mobo do you use?
 
Nope, not required to run ECC.

The motherboard being used is an ASRock X79 Extreme6.

Pretty much any X79 board should support the Xeon CPUs.
 
Sandy Bridge-EP and Ivy Bridge-EP (v2) Xeons do not require ECC DRAM modules. You are only forced to use ECC modules from Haswell-EP Xeons (v3) onwards.
 
This review is one of the reasons I want to go to X99 or keep my [email protected]. My 2600K does require a lot of vcore now to hold 4.8GHz though, around 1.48, but it stays cool with my water loop.
 
Playing at 4K or even 2K there is no difference between a 2600K and [email protected], for example... fuck, even a 14-core/28-thread Xeon at 3.0GHz will give you the same performance.
 
Any chance these future VR Sandy Bridge vs Kaby Lake tests could be run with HT both enabled and disabled? I'd love to see a real-world performance comparison of i5 and i7 between both generations.
 
With rumours of Intel's 10nm Cannonlake being delayed to mid-2018, and then only for ultra-low-power parts, and 14nm Coffee Lake confirmed for 2018 for desktops and notebooks, we are going to see 10nm for desktops in 2019. We are probably going to see the next big architectural improvement (tock) from Intel by late 2019.

Coffee Lake-S is Q1-2018 according to the latest leaks. Ice Lake for desktops likely follows 1 year later, in early 2019. So you're off by nearly a year.


AMD should use this golden opportunity to catch up with and overtake Intel. Mark Papermaster has stated that we will see tock, tock, tock after Zen, so there should be good IPC improvements in 2018 and 2019 along with higher frequencies due to process maturity. Intel is going to have a tough time after being complacent and downright indifferent towards the enthusiast market.

Every single post from you sounds like viral marketing from an AMD employee. Skylake-X with new cache structure + possible 12C/24T SKU and an insanely clocked Kaby Lake-X are not 'complacent and indifferent' towards enthusiasts.
 
Lynch me for it, but I don't really see that as terribly negative. At least my wallet can be happy.

The positive way to look at it is that those of us with 2/3/4-series Intel CPUs have gotten incredible value for money over the years.
 
Every single post from you sounds like viral marketing from an AMD employee. Skylake-X with new cache structure + possible 12C/24T SKU and an insanely clocked Kaby Lake-X are not 'complacent and indifferent' towards enthusiasts.

It's getting tiresome :(
 
I don't know enough about server-type CPUs to comment, but at least on the enthusiast/home/gaming side it seems pretty obvious that we're on a 3-5 year cycle for any truly significant performance gains. It really seems like a greater effort is put into 'features' being added annually, and it's up to the user to figure out if they actually need them. I'd say this is true on both sides of the fence.

Outside of USB 3 / SATA III from 4 years ago, I haven't experienced anything dramatic enough to get my attention. I think Thunderbolt is a pretty cool evolution, but that's about the only new thing in the last couple of years. Clock speed, TDP, and temperature improvements are usually in the single-digit percentages per year, which means it usually takes about 5 years to add up to roughly a 25% performance increase.

I'm not planning on upgrading my CPUs/mobos till they basically stop working. I'm already considering a 1080 Ti / whatever for my 2600K this summer.
 
Playing at 4K or even 2K there is no difference between a 2600K and [email protected], for example... fuck, even a 14-core/28-thread Xeon at 3.0GHz will give you the same performance.

Not quite. At 3.0GHz the difference is pronounced at higher resolutions with more powerful GPUs. That's been the case for a number of years across tests done on multiple sites. The size of the performance gap varies somewhat based on the game in question, but there is little substitute for clock speed given the small IPC differences between everything from Sandy Bridge to Kaby Lake.
 
Looking at the recent articles on delidding, TIM, and direct-die cooling, clearly he has been sitting around on his ass, don't you think?
 
I upgraded from a truly golden Sandy that died (it wasn't the board) back in November, and since then I have tried 4 different boards and 3 CPUs to find a good and, above all, reliable replacement. I have to say, I have worked as a freelance IT admin for almost 20 years and know my way around assembling and using PCs, but the journey I am about to describe was a hell of a ride, RMA-wise.

I actually wanted the Asus Formula VIII as my replacement board, along with a 6700K and 32GB of 3200MHz Corsair XMP RAM; all the rest of the system was to remain the same: Watercool Heatkiller IV waterblock, MoRa3 external radiator, watercooled GTX 980 Poseidon, AX1200i, 2 x 850 Pro, 2 x HDD, Corsair Carbide 500R ATX tower (it says E-ATX on their page as well, forget that!!!).

Well, nobody sold that board anywhere in Europe anymore in Nov. 2016, so I took the step and ordered the MSI counterpart, also watercooled, the MSI Z170A M9 Gaming ACK, with a 6700K and said RAM. What an epic fail that was. First off, the board wouldn't fit the E-ATX Corsair tower; it was too wide. So I put it in an old Chieftec Mesh big tower to start with and put an Evolv on my shopping list. The board didn't like my RAM (no XMP), booting was a lot of stress, and when it finally made it to the OS, overclocking was not as nice as on the Asus boards I work with. All-core OC failed miserably; it couldn't keep 4.5 in Prime and dropped to 4.3, ALWAYS.
Any OC beyond 4.6 was unstable, volts up and down, etc... to no glory. The board had lots of issues and finally bricked 4 weeks later without ever really being any fun. It was damn fast at 4.5 compared to my usual 4.8 on the 2600K, even compared to 5 on the 2600K I would say; any GUI stuff in Win10 was snappier (RAM at 3000 manual). Gaming at 4.5 was great. I only play one sim, DCS (Digital Combat Simulator), which is highly IPC dependent; you must reach 2600+ points in the Passmark IPC test to fly somewhat fluidly at high fps (Asus 144Hz G-Sync 27"). It really did that.

I RMA'ed the board and got myself the Asus Maximus VIII Extreme. Nice board, LOTS of gimmicks.. and that is it!! The board killed itself 3 times, lemme tell you how. It did OC that same CPU to 4.8 on all cores, which the MSI never achieved.

The first incident: after a reboot it got stuck, telling me it had lost its PXE ROM... ahhh, OK.. went to Intel and wanted to reflash the firmware... no luck... unsupported. Well, I put in an Intel Gbit PCIe NIC and moved on; screw that LOM, I said. The bad thing was, about 3 hours later the board bricked completely and would not boot at all, just like the MSI.

Called Asus and worked with their tech guy for half an hour, after flashing various BIOSes in various ways on and off the bricked board. Nothing helped. I then RMA'ed the board again at my dealer and sent the CPU directly to Intel for inspection under an RMA case.

Both items got replaced, but Intel did not state any reason why they exchanged the 6700K; I just got sent a new one with greetings from Intel, along with a new Maximus VIII Extreme. Well... I thought that should be IT and all should be good... far from it!!

The board worked like a charm, with the new CPU as said too. I installed the OS, tools & drivers, went to see how far the journey might go, and hit the TPU button..... BIG FAILURE!!! When the PC went through the auto-overclocking it reached the 4700MHz stability test and clipped out when it hit 100%.

I initially thought that was intentional, but that was the last sign of life that board showed me; the rest was Q-code 16...... with a NEW CPU. This was almost 3 months after my 2600K died and I still had no board to fly DCS on :(


I called Mindfactory, told them the ultra-long story, and sent it ALL back, the replacement CPU, the 3rd board, and the RAM, and got refunded.


The lesson learned was: don't overdo it! Less is often more.

During the disaster I just told you about, I also built a server based on an Asus X99-WS/IPMI board, a very nice board for SOHO DIY servers. With an i7-5820K this thing is really nice. Well... if it weren't for the IPMI firmware..... which said goodbye when I flashed it.

I mean, you press a button when you flash a firmware; there is no MAGIC involved. I've done it a thousand times. In 99.5% of cases it works flawlessly; the other 0.5% gets you grey hair.

The board killed said IPMI firmware but remained operational; only the IPMI function fell away. It took an estimated 15 minutes to get past the self-test timeout and boot the server, unless you disabled the chip altogether, in which case it booted fast and normally.

Asus sent me a new board the same day, excellent WS service they have. That one worked, but I didn't flash the IPMI firmware.. it worked... connected... and is now in production... "F&%$&/ DON'T touch it!!!" has once again shown its truth.



I ordered a new Z270 system based on the SIMPLE Asus Prime Z270-A board, a 7700K, and Corsair 3600 XMP CL16...... and it DAMN ROCKS YOUR SOX OFF :D


This simple board WORKS, which 3 other highest-end boards couldn't!

It clocks the 7700K straight to 5GHz with 1 click (thanks to the Asus 5G one-click option.. it actually works), runs my DDR4 at the 3600 XMP profile, and doesn't cause any trouble with stupid gimmicks.


Forget that damn LED BS. Where are the boards that deliver what they promise for money my wife almost kills me over? Hey Asus, hey MSI?



The MSI board was my 2nd or 3rd MSI board in all those years... and definitely the very last one I'll buy. Looks over everything. It looked darn good... and that's about it. The voltages applied by the MSI auto-tool probably killed the 1st 6700K.

Asus seems to have some issues too; 3 broken boards in a row is BAD.

Yes, 2 of them failed while OC'ing.. but hey... this is a PREMIUM board.. labelled "EXTREME"... it's extreme frustration when you have to take your all-custom-watercooled rig apart a dozen times, insert RAM till your fingers bleed, flash BIOSes while you toss up a prayer, etc.. Hell NO!!!



What I actually wanted to say: yes, the 7700K @ 5GHz outclasses the 2600K @ 5GHz in every aspect, imho. It's more the overall step forward, with the RAM speed (my old DDR3 wasn't slow either, but this 3600 rocks hard), the fine-tuned 7th-gen i7 architecture, etc.

For example, in DiRT Rally the menu just FLIES, not a split second of delay. Performance at 1440p, all maxed, with my GPU does not drop below 100fps in any scene; I tested the rig that way last night with my pal for a few hours.

Flying DCS is different; it's a brutal, hardcore flight simulator that taxes IPC and VRAM and is latency critical, etc.... DCS is a diva, as I'd put it.

Well, I flew a few rounds over Krasnodar airport and :) ALL HAPPY! This rig is solely for DCS; that's the reason it exists at all. For anything else I use Mac or Linux and avoid Redmond.



Yes, if you have a 2600K and get a 5GHz-capable 7700K, it's well worth it. Overall it's faster; it scores about 100 more in IPC (2900 vs 2800 in the Passmark bench, both at 5GHz).


Would I buy it again? Maybe!


I hate the TIM Intel uses, and I still refuse to delid my finally working CPU (and thus complete system).

Right now, I would wait for Ryzen and have a look at how they score in IPC... and how they OC :)



Let's kill a CPU
Bit

P.S.
That "cheap" Prime board also fits my 500r Tower again. There was no need to cut that mobo tray loose as Idid to fit ANY E-ATX board...LOL never thought would need automotive size tools to work a board in...but I did. You CAN fit an Asus Maximus VIII Extreme into a Corsair Carbide 500r tower, YES...but get some steel cutting tools ready :D BAAAAHHHHH

ALSO.. NOTE... NO board with a watercooled backplate (MSI or Asus) fits said Corsair tower, regardless of form factor!!! DON'T TRY!!!
The tower's 3 uppermost threads for your mobo standoffs are RISERS; you cannot screw in the usual sleek, tiny standoffs.... and those DOMES interfere with the backplate!!! It WON'T SETTLE DOWN!!!

You either CUT the backplate to fit... or use a Chieftec monster from the 90s that's standing around.

BTW.. the Chieftec also makes a very nice emergency stool or seat, whereas if you settle yourself on the 500R you will see how an empty coke can folds itself into a tiny piece of cheap steel in seconds :D

I wish Chieftec would make a proper gaming tower; unsurpassed steel quality.
 
Another negative (if you're a Win 7 fan) is having to switch to Win 10 if you want a Z270 chipset along with the 7700K.

*8 Windows® 8.1 64-bit and Windows® 7 32/64-bit are only supported when using 6th Generation Intel® Processors
 