Skylake-X (Core i9) - Lineup, Specifications and Reviews!

Nice showing from the i9-7900X in the SweClockers review: real CPU-limited gaming tests at 720p, and Skylake-X comes out on top most of the time compared to Broadwell-E.

http://cdn.sweclockers.com/artikel/diagram/13539?key=6c21255207d30f3f2af0beb82de116d5

Faster than Broadwell-E in all CPU-bound gaming tests except Civ 6.

Close to the 7700K (±7%) in four games.


juanrga, do SA mods also issue infractions if you don't agree with them, even when they are wrong?
EuphoricRage470 beat me to it.
 
If you OC without an AVX clock offset, you are instantly running AVX-512 loads in Prime95 way above stock speed, since they run at the full core clock. That happens even if you just have MCE enabled.

At 4.5 GHz in AVX-512 loads, the FP output of a 7900X is higher than that of 32, and maybe up to 48, SB/IB/Zen cores at 4 GHz.
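That claim can be sanity-checked with peak-FLOPS arithmetic. A rough sketch, using my own assumed per-core figures (dual 512-bit FMA for Skylake-X; a 256-bit add + mul pair treated as one FMA-equivalent for SB/IB), not numbers from this thread:

```python
# Rough peak double-precision GFLOP/s estimate (illustrative figures, assumed).
def peak_dp_gflops(cores, ghz, simd_doubles, fma_units):
    # Each FMA counts as 2 FLOPs per SIMD lane per cycle.
    return cores * ghz * simd_doubles * 2 * fma_units

skx = peak_dp_gflops(cores=10, ghz=4.5, simd_doubles=8, fma_units=2)  # 7900X, AVX-512
snb = peak_dp_gflops(cores=1, ghz=4.0, simd_doubles=4, fma_units=1)   # one SB/IB core, AVX

print(skx, snb, skx / snb)  # 1440.0 32.0 45.0
```

By this estimate, one 4.5 GHz 7900X running AVX-512 code matches roughly 45 older 4 GHz cores, which lands inside the 32-48 range claimed above.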

He also recorded a 40 W difference between four boards, on the VRM alone.

He also states the CPU itself at 4.6 GHz pulls 270 W, with the Gigabyte board being the exception at 290 W.

So no, it's not 400 W.

270-290 W for the overclocked CPU falls in the expected range.

140W * (4.6GHz / 3.3GHz)^2 = 272W

On the other hand, 400 W must be measuring other components beyond the CPU. Bit-tech got a bit less than 400 W for the total system, with the CPU at 4.6 GHz.
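The scaling estimate above can be written out as a tiny helper; the quadratic exponent assumes voltage rises roughly in step with frequency, which is a simplification:

```python
# P ≈ P_base * (f / f_base)^2 -- crude dynamic-power scaling, ignoring
# discrete voltage steps and static leakage.
def scaled_power(base_w, base_ghz, target_ghz):
    return base_w * (target_ghz / base_ghz) ** 2

print(round(scaled_power(140, 3.3, 4.6)))  # 272
```

That lands right on the 270-290 W range the boards actually report, so the measured numbers are plausible for the clocks involved.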
 
It's so funny. When AVX-512 uses up a lot of power, people complain about lack of efficiency. And yet, AVX-512 capability isn't mentioned in discussing Ryzen vs Core X.

Hmm...

At the 15:00 mark, der8auer says he has used it extensively for testing systems for years, as they have to pass all tests. I would assume that "all systems" includes Ryzen? This isn't an AMD vs Intel thing; this is an Intel issue with their motherboard manufacturers.

OC3D found the exact same issue at the same 500 W system load. Watch at least through the 16:00 mark; it throttles to 700 MHz according to him.


 
If you OC without an AVX clock offset, you are instantly running AVX-512 loads in Prime95 way above stock speed, since they run at the full core clock. That happens even if you just have MCE enabled.

At 4.5 GHz in AVX-512 loads, the FP output of a 7900X is higher than that of 32, and maybe up to 48, SB/IB/Zen cores at 4 GHz.

He also recorded a 40 W difference between four boards, on the VRM alone.

He also states the CPU itself at 4.6 GHz pulls 270 W, with the Gigabyte board being the exception at 290 W.

So no, it's not 400 W.

270-290 W for the overclocked CPU falls in the expected range.

140W * (4.6GHz / 3.3GHz)^2 = 272W

On the other hand, 400 W must be measuring other components beyond the CPU. Bit-tech got a bit less than 400 W for the total system, with the CPU at 4.6 GHz.

If you set the BIOS to allow the CPU to draw as much power as it wants, it will reach 500 W total system power draw (about 350-400 W at the CPU) at 4.5 GHz. However, current motherboards can only sustain 250-290 W through the VRM before it overheats, causing the CPU to throttle to 1.2 GHz. 250 W is only just enough to sustain a 4.5 GHz overclock under load with CPU Current Capability left at the default 100%.
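To see why the VRM, rather than the CPU, becomes the bottleneck, here is a back-of-envelope estimate of the heat dissipated inside the VRM itself; the 90% efficiency figure is my assumption, not a measured value:

```python
# Heat dissipated inside the VRM while delivering a given CPU power
# (assumed conversion efficiency; real phases vary with load and temperature).
def vrm_heat_w(cpu_power_w, vrm_efficiency):
    input_w = cpu_power_w / vrm_efficiency
    return input_w - cpu_power_w

print(round(vrm_heat_w(250, 0.90), 1))  # ~28 W at the low end of the range
print(round(vrm_heat_w(400, 0.90), 1))  # ~44 W at the high end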
 
So what is the final verdict on the VRM issue? Is it a real thing or blown out of proportion? I am about to pull the trigger on a 7820X this weekend.
 
If you set the BIOS to allow the CPU to draw as much power as it wants, it will reach 500 W total system power draw (about 350-400 W at the CPU) at 4.5 GHz. However, current motherboards can only sustain 250-290 W through the VRM before it overheats, causing the CPU to throttle to 1.2 GHz. 250 W is only just enough to sustain a 4.5 GHz overclock under load with CPU Current Capability left at the default 100%.

Please show where the CPU draws 350-400 W. And yes, I want to see CPU package power.
 
PurePC at 4.5 GHz:
https://www.purepc.pl/procesory/test_procesora_intel_core_i97900x_skylakex_witaj_lga_2066?page=0,41

upload_2017-7-7_10-23-40.png
 
So what is the final verdict on the VRM issue? Is it a real thing or blown out of proportion? I am about to pull the trigger on a 7820X this weekend.

It's fine as long as you realize that it is wise to place a fan near the VRMs if overclocking. That seems to completely cure the issue, as it will lower the VRM temps 40 degrees or more. I've been doing this on various systems since the 1990s.

Now, if you are into hardcore benchmark runs where you pull ridiculous amounts of power, like the 500 W system loads in the videos, as some of us on [H]ardOCP like to do for fun, then you need a motherboard with more than a single 8-pin power connector; 8+4 or better is preferred. Asus told OC3D that the VRMs on their motherboards are rated to handle the load just fine. The silly cute RGB and other crap heatsinks on the VRMs are the only issue. Once again, a simple fan cures it if overclocking.

Of course, if you aren't going balls to the wall, then everything is peachy. ;)
 
Sounds good. Yeah, going to do some overclocking, but not balls-to-the-wall benching other than for stress testing.
 
I think the VRM issue is interesting if your passion is bringing CPUs to the brink in testing, and learning how they work and what manufacturing practices have changed.

To call a video "VRM X299 DISASTER" and then, 15 minutes into a rebuttal video, claim that end users will literally experience no issues -- well, that's sensationalism. And as a result, we all have to hear about how terrible X299 is, because the word is already out and misconstrued.
 

Next time don't prioritize Christmas lights over functionality?
 
I think the VRM issue is interesting if your passion is bringing CPUs to the brink in testing, and learning how they work and what manufacturing practices have changed.

To call a video "VRM X299 DISASTER" and then, 15 minutes into a rebuttal video, claim that end users will literally experience no issues -- well, that's sensationalism. And as a result, we all have to hear about how terrible X299 is, because the word is already out and misconstrued.
It is a little sensational, but I agree with cageymaru on the below point.
Next time don't prioritize Christmas lights over functionality?
It seems that, for the most part, aesthetics were given priority over cooling performance when it came to the VRM covers. I would like to see some temps from the Gigabyte boards using heatpipes compared to the others that are not, though.

We need to go back to the days of using actual, functional heatsinks. I had this board back in the Kentsfield days with a QX6700, and I think it's still very pleasing to the eyes.
upload_2017-7-7_11-14-46.png
 
So what is the final verdict on the VRM issue? Is it a real thing or blown out of proportion? I am about to pull the trigger on a 7820X this weekend.

The only way you should see uncomfortable temps on the VRM or CPU is if you turn Current Protection up to the max and run Prime95. That is the only way I or anyone else has been able to come close to replicating der8auer's results.

That being said, why would anyone need or want to do that?
 
They should revisit this idea from the last decade: Gigabyte's Dual Power System (DPS), where half of the power phases are on the mobo and the rest on an extension card:
GAk8NNAP_mobo.jpg

Good times... when AMD was actually faster than Intel.
 
Not just TIM, BAD TIM.

Here is a review of the 7740X on Guru3D:
http://www.guru3d.com/articles_pages/intel_core_i7_7740x_processor_review,24.html

I know not many are interested in this processor, but it does allow comparison to the Kaby Lake 7700k:
http://www.guru3d.com/articles_pages/core_i7_7700k_processor_review_desktop_kaby_lake,22.html

The 7740X managed 5.2 GHz while the 7700K managed 5.0 GHz.

HOWEVER, temps reached 90°C on the 7740X while the 7700K hit only 75°C. That is a huge difference. Did they use toothpaste on these new processors??

Diminishing returns as well: the 7740X only gained about 10% performance from this overclock, while the 7700K gained closer to 15% on average despite a smaller clock boost.
Also, the 7740X was pulling about 175 W overclocked while the 7700K was about 150 W; both numbers are 50 W over stock.

The CPU is using a lot less than 175 W; that was just the measurement at the wall. The AIO cooler should not have a problem dissipating that heat, but it clearly does.
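A quick sketch of why wall numbers overstate CPU draw, using an assumed PSU efficiency (the 175 W and 150 W figures above were measured at the wall):

```python
# Convert a wall-power delta to DC-side power using an assumed PSU efficiency.
def dc_power_w(wall_w, psu_efficiency=0.90):
    return wall_w * psu_efficiency

# The 50 W overclocking delta at the wall is only about 45 W of extra DC draw,
# and less still at the package once VRM losses come out of it.
print(dc_power_w(50))  # 45.0
```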
 
Alright. I can call my 7820X completely stable at 4.7 GHz on 1.225 V. Cooling is an H115i with push/pull Noiseblocker 140 mm fans. RAM is Corsair 3200 MHz running stock 16-18-18-36 timings. Mesh was set to 3200 MHz.

Here are the real world numbers...


In Cinebench it scored 2035 cb multi-threaded and 204 cb single-core. The hottest core was #1, hitting 72°C.

AvdyvQu.jpg


In Geekbench it scored 33189 multi-core and 5831 single-core. The hottest cores were #3 and #7, both hitting 67°C.

lhM2zTX.jpg


In a 15-minute RealBench test, the hottest core was #3. It hit 78°C.

yXuu6pA.jpg


In a 20-minute Prime95 run with AVX, the hottest core was #2. It hit 88°C (AVX offset was -500 MHz). Personally, I consider Prime95 with AVX a very unrealistic usage scenario for me, so I am OK with this.

XY6B9fp.jpg


AIDA Cache and Memory looks like this...

li3ffC2.jpg




Just for shits and gigs, here are some 3D benches with a 1080 Ti FTW3...

http://www.3dmark.com/fs/13034422
uaqJJAv.jpg


http://www.3dmark.com/spy/2015990
XTHgrHC.jpg


tBIOcPL.png

Wow. Not much of an improvement in the CPU score in Time Spy, and your Fire Strike result appears to have been deleted from the website. 11% faster at 4.7 GHz than my 4.3 GHz 1660 v3 (5960X equivalent). Looks like I'll be sitting on this chipset for a while, just like some folks did with their 2500K/2600Ks.

http://www.3dmark.com/compare/spy/2015990/spy/1584689#

Yes, I know there have been other improvements, but they don't have much, if any, effect on what I use my PC for. Although it is really nice to see an Intel 8C/16T CPU for $600 - and I'm hoping that Threadripper will force Intel to drop prices more.
 
Maybe. But I know for sure that Coffee Lake will make Ryzen drop their prices and not the other way around.
 
Maybe. But I know for sure that Coffee Lake will make Ryzen drop their prices and not the other way around.

I'm hoping more for the reason that many of Intel's HEDT chips now have only 28 PCIe lanes, whereas the higher-end chips in previous generations had 40. It doesn't affect everyone, but I am actually using 36 with my 16x GPU, 8x RAID controller, 8x dual 10-gig NIC, and 4x NVMe drive.

I could drop back down to 28 if HEDT motherboard manufacturers start putting dual 10 gig NICs on their boards.

Hell, that's one of the biggest reasons why I am really considering Threadripper in the future - PCIE lanes for days!
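The lane math in the post, written out (the device list comes from the post above; the dictionary keys are my own labels):

```python
# Hypothetical PCIe lane budget for the build described above.
lanes = {"GPU x16": 16, "RAID controller x8": 8, "dual 10GbE NIC x8": 8, "NVMe x4": 4}
total = sum(lanes.values())
print(total)  # 36 -- over a 28-lane CPU's budget, under a 40-lane one
```

Dropping the NIC to onboard would bring it to 28, which is exactly the squeeze being described.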
 
Intel has always been better than AMD. If Intel wanted, they would've buried AMD once and for all.
But where's the fun in that!

Intel is shitting the bed right now. Even if they have the ability to bury AMD, they sure aren't showing it right now.
 
250 W (VRM side) for 4.5 GHz with no AVX-512 offset? Not bad :)

But mobo makers need to step up on VRM cooling instead of bling-bling.
 
Wow, that is really bad: it's getting that hot, and at 300 W of CPU load the VRM overheats and the system throttles itself. That is just horrible design work; these should never have been released, unless Intel would like to update their logo to a fireball.
 

There is nothing bad here; it's just the result of pushing 10 cores × 512 bits at 4.5 GHz. That huge amount of performance requires lots of power. The design can be improved, sure, but it is not as "horrible" as you pretend.

Maybe a graph can help show the huge performance that AVX-512 brings:

intel_sklx_cpu_mm.png
 
The only way you should see uncomfortable temps on the VRM or CPU is if you turn Current Protection up to the max and run Prime95. That is the only way I or anyone else has been able to come close to replicating der8auer's results.

That being said, why would anyone need or want to do that?
I think it also needs to be Prime95 with the option for maximum heat/power consumption selected on top of that.
That is a fair number of options that need to be set in the motherboard and in Prime95, far from real world, even in terms of stability with such options turned on.
That said, I am still not entirely sure AVX is behaving correctly yet with regard to performance-envelope management.

Still, it would be nice if there were a selection of boards that did not put bling over performance; it feels like they went for "cheaper" bling, because a proper cooling solution would also look good and bring benefits, even if it had to be kept within constraints such as sizing.

Cheers
 
There is nothing bad here; it's just the result of pushing 10 cores × 512 bits at 4.5 GHz. That huge amount of performance requires lots of power. The design can be improved, sure, but it is not as "horrible" as you pretend.

Maybe a graph can help show the huge performance that AVX-512 brings:

intel_sklx_cpu_mm.png
The problem with AVX/FMA/AVX-512 is that while they help generate amazing synthetic scores, they aren't very useful in practice. It's not even that applications haven't been recoded to take advantage of the new extensions; rather, FP throughput is, in almost all real-world cases, limited by bandwidth and/or latency, not by raw FPU throughput. Linpack benefits (large amounts of data reuse, easy caching), but nothing real uses a single giant LU decomposition, limiting its use mostly to nation-scale benchmarking. Signal-processing problems using Fourier transforms, similar to Prime95's small-FFT test, could benefit, but I have yet to see signal-processing code that benefits from AVX (CERN's benchmarking a few years ago with their own collider-analysis codes showed little speedup beyond the 128-bit vector units that SSE provides).

Conventional engineering problems certainly do not benefit from anything newer than SSE, if that. FEA typically uses iterative sparse-matrix solvers, and the fundamental structure of a sparse matrix (you don't know which elements are nonzero or where they are) makes it hard to cache. Renderers don't seem to take much advantage of AVX-512 either (we don't see the 7900X being 2x faster than the 6900K despite having 25% more cores and twice the FMA width).

Now, this would all be fine and dandy if AVX were free. Quite frankly, on a stock-clocked processor it is: the extra power the FP units need is offset by lower AVX clocks, and within the same thermal envelope you get almost twice the FP throughput in the best case and minimal loss in the worst case (with an overly aggressive AVX offset, some applications that contain AVX code but do not benefit from it will see slightly decreased performance).

Unfortunately, from an enthusiast's point of view, AVX is a pain in the ass when it comes to overclocking. Nothing remotely consumer-facing benefits from it (and this includes "workstation" apps such as Photoshop, Blender, KeyShot, Handbrake, etc.), but you can't very well have a computer that crashes under heavy AVX load, even if you don't expect such a load. An AVX offset solves that problem but makes stability testing hard: most of the existing power viruses used to test for stability and thermal headroom trigger the offset, making it difficult to find a representative heavy workload for accelerated stability testing.
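The bandwidth-vs-throughput point can be made concrete with a roofline-style estimate; the peak and bandwidth numbers below are my placeholders, not measurements:

```python
# Roofline model: attainable GFLOP/s = min(peak, arithmetic intensity * bandwidth).
def attainable_gflops(intensity_flops_per_byte, peak_gflops, mem_bw_gbs):
    return min(peak_gflops, intensity_flops_per_byte * mem_bw_gbs)

PEAK, BW = 1440.0, 80.0  # assumed AVX-512 peak and DRAM bandwidth

saxpy = 2 / 24   # y = a*x + y: 2 FLOPs per 24 bytes of double traffic
gemm = 50.0      # large blocked matmul: heavy cache reuse, high intensity

print(attainable_gflops(saxpy, PEAK, BW))  # ~6.7 -- hopelessly bandwidth-bound
print(attainable_gflops(gemm, PEAK, BW))   # 1440.0 -- the kind of code that hits peak
```

Streaming kernels sit so far below the machine-balance point that doubling FMA width changes nothing; only cache-blocked dense linear algebra (i.e., Linpack-style code) lives on the flat part of the roof.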
 
Everyone's reporting on VRM heat issues, and I'm over here pulling 230-260 W at the package continuously with VRMs only in the 60°C range.

Would be nice to see someone do one of these reviews with what are evidently the only decent X299 motherboards out there currently, the Gigabyte Aorus Gaming 7 and 9, which have dual VRM heatsinks connected via heatpipe.

Instead, every single one of these "the VRMs are overheating!" reviews is using motherboards with a single VRM heatsink covered in stupid plastic bling.
 
Another review, for the X299 Taichi:


VRM temps are awesome even when the CPU is clocked very high. Kudos to ASRock for not using nonsense plastic covers on their heatsink.

Still... I have yet to receive my 7820X, LOL.
Anyone have news on why the 7820X is so low on stock?
 


I prefer ASRock to many partner boards; I find Asus excessively overrated given the number of RMAs on dead-out-of-the-box boards.
 