Intel launches its 10th gen Comet Lake CPUs - breaks world record in soldier numbers (possibly)

zehoo

Limp Gawd
Joined
Aug 22, 2004
Messages
347
Am I reading this wrong or does this have more pcie lanes than their previous desktop consumer chips?
 

DejaWiz

Fully [H]
Joined
Apr 15, 2005
Messages
19,959
That's a nice offering, but Intel is still way behind the 8-ball compared to the current Ryzen 3000 + B550/X570 series platform:

PCIe 4.0
More PCIe lanes
Higher base clock
Oftentimes, more cores/threads for the same (or less) money
Higher RAM speed support
7nm vs 14nm+

...I will say that these new Comet Lake processors will be Business/Enterprise monsters: relatively cheap OEM systems with a jump in cores/threads for the same money as the previous-gen Intel offerings.
 

kirbyrj

Fully [H]
Joined
Feb 1, 2005
Messages
26,459
I'm guessing this was a paper launch as I can't find anything about availability.
 

Meeho

Supreme [H]ardness
Joined
Aug 16, 2010
Messages
5,047
Am I reading this wrong or does this have more pcie lanes than their previous desktop consumer chips?
Still only 16 usable from the CPU, but the socket is prepared for 20 for the next gen.
 

1_rick

[H]ard|Gawd
Joined
Feb 7, 2017
Messages
1,330
Am I reading this wrong or does this have more pcie lanes than their previous desktop consumer chips?
I haven't seen reviews spell this out explicitly, but I think that for Intel, HSIO lanes are roughly equivalent to PCIe lanes. So you see those charts that show 16 PCIe lanes plus 24 HSIO lanes, and the latter are what can be grouped in various ways to provide network interfaces, USB ports, SATA ports, etc.

Oh, here's a nice explanation:
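The "grouped in various ways" part can be pictured with a toy allocation model. Everything below is illustrative: the lane counts and feature mix are assumptions for the sketch, not Intel's actual flex-I/O mux tables.

```python
# Toy model of chipset HSIO (flexible high-speed I/O) lane allocation.
# Lane counts and features are illustrative assumptions, not Intel's real mux tables.
HSIO_TOTAL = 24  # flexible lanes hanging off the chipset

# Hypothetical features a board vendor might wire up, and lanes each consumes.
REQUESTS = {
    "x4 NVMe (M.2)": 4,
    "6x SATA": 6,
    "GbE NIC": 1,
    "USB 3.x ports": 6,
    "x1 PCIe slots": 4,
}

def allocate(requests, total=HSIO_TOTAL):
    """Check a feature set against the lane budget; return (used, free)."""
    used = sum(requests.values())
    if used > total:
        raise ValueError(f"over-subscribed: need {used} lanes, have {total}")
    return used, total - used

used, free = allocate(REQUESTS)
print(f"{used} HSIO lanes used, {free} free for extra ports")
```

The point is just that the chipset lanes are a shared budget: each vendor picks a mix of SATA/USB/NIC/slots that fits, which is why feature lists vary so much between boards on the same chipset.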
 

Red Falcon

[H]F Junkie
Joined
May 7, 2007
Messages
10,622
From the article:
Now it should be stated that for the motherboards that do support PCIe 4.0, they only support it on the PCIe slots and some (very few) on the first M.2 storage slot. This is because the motherboard vendors have had to add in PCIe 4.0 timers, drivers, and redrivers in order to enable future support. The extra cost of this hardware, along with the extra engineering/low loss PCB, means on average an extra $10 cost to the end-user for this feature that they cannot use yet. The motherboard vendors have told us that their designs conform to PCIe 4.0 specifications, but until Intel starts distributing samples of Rocket Lake CPUs, they cannot validate it except to the strict specification. (This also means that Intel has not distributed early Rocket Lake silicon to the MB vendors yet.)

So purchasing a Z490 motherboard with PCIe 4.0 costs users more money, and they cannot use it at this time. It essentially means that the user is committing to upgrading to Rocket Lake in the future.
Great, more artificial segmentation - outside of vendor lock-in, why anyone continues to invest in this menagerie of a shit show that Intel has made for themselves, and their unwitting customers, is beyond me.
At least they finally implemented some hardware-level fixes for Meltdown V4.
 

DanNeely

2[H]4U
Joined
Aug 26, 2005
Messages
3,671
From the article:

Great, more artificial segmentation - outside of vendor lock-in, why anyone continues to invest in this menagerie of a shit show that Intel has made for themselves, and their unwitting customers, is beyond me.
At least they finally implemented some hardware-level fixes for Meltdown V4.
I'm not sure what you were expecting. The Z490 chipset is PCIe 3.0. That means at most, when Rocket Lake is out, only the lanes connected directly to the CPU can support PCIe 4.0. PCIe 4.0 signals can't cross the entire length of a mobo without extra hardware in the form of signal boosters; as a result, extending beyond the top x16 slot would add useless cost for the vast majority of current customers who will never upgrade their CPU. The same is true for adding a PCIe switch so the top M.2 slot can be connected to either the Z490 or the extra 4 CPU lanes that Rocket Lake will bring to the table.

And of course this is assuming that Intel doesn't pull an AMD next year and decide avoiding customer confusion is more important than other considerations and ban turning on 4.0 support on any current boards because some of the cheaper ones can't do it.

If you just want to engage in fanboi flaming, the Wccftech comment threads are over thataway. You'll fit in just fine over there, and as a bonus they've gone full meme for extra fun.
 

Red Falcon

[H]F Junkie
Joined
May 7, 2007
Messages
10,622
I'm not sure what you were expecting. The Z490 chipset is PCIe 3.0. That means at most, when Rocket Lake is out, only the lanes connected directly to the CPU can support PCIe 4.0. PCIe 4.0 signals can't cross the entire length of a mobo without extra hardware in the form of signal boosters; as a result, extending beyond the top x16 slot would add useless cost for the vast majority of current customers who will never upgrade their CPU. The same is true for adding a PCIe switch so the top M.2 slot can be connected to either the Z490 or the extra 4 CPU lanes that Rocket Lake will bring to the table.
So it's another useless stop-gap to continue filling Intel's coffers while they still struggle to move beyond 14nm+++.
Is this like Z270, and what a pointless endeavor both it and Kaby Lake were?

Why would Intel not wait and just develop a CPU capable of PCI-E 4.0?
Oh right, to fuck more loyal customers over by offering a single generation of "alleged" backwards compatibility, which Intel has walked back quite a few times before, so, grain of salt and all that.

And of course this is assuming that Intel doesn't pull an AMD next year and decide avoiding customer confusion is more important than other considerations and ban turning on 4.0 support on any current boards because some of the cheaper ones can't do it.
In the link you posted, it showed that the X470 and X370 (and lower) platforms will remain PCI-E 3.0 - it had nothing to do with the "cheaper" platforms.
AMD was always up front that their Zen 2 CPUs, while remaining backwards compatible, would only utilize PCI-E 3.0 on the older platforms.

This was mainly due to the timings on the older boards not being fully to spec. I remember reading from multiple sources that AMD could have enabled PCI-E 4.0 on those older boards, but since it didn't hold to spec, they obviously didn't want things to break or perform worse. If Intel is doing this now, that's fine, but again, grain of salt.
So, what confusion are you talking about?

If you just want to engage in fanboi flaming, the Wccftech comment threads are over thataway. You'll fit in just fine over there, and as a bonus they've gone full meme for extra fun.
Do you not remember the 35+ years of Intel's illegal anti-competitive and anti-consumer practices against their customers, vendors, and AMD?
How about the 50+ hardware-level exploits that destroyed performance and value from Sandy Bridge to Kaby Lake, or how they effectively killed off anything pre-Sandy Bridge due to unpatchable security exploits, all the way from Nehalem back to the Pentium Pro?

Let me tell you, it's fun paying for multiple $400 Intel CPUs with $400 worth of performance, only to see the performance drop 20-60% on various workloads within only a few years of ownership, not to mention having to replace $$$$ worth of equipment which has become unpatchable and insecure at the hardware level thanks to Intel cutting corners and fucking their loyal customers over.
Did Intel ever refund customers 20-60% of the cost and value lost in their products from their proven corner-cutting and lack of security? (hint: the answer is a resounding 'no')

In numerous threads, I have also stated that Intel needs to become competitive again, lest we see AMD begin to price gouge and go back to what they did before the Conroe Core 2 Duo release in the mid-2000s.
But please, continue to tell me what a fanboi I am. :meh:
 
Last edited:

5150Joker

Supreme [H]ardness
Joined
Aug 1, 2005
Messages
4,272
More 14nm? 😴 Did TSMC steal all of Intel's fab engineers? Even Samsung has them beat. I guess American engineering has really fallen behind. At least we lead the world in social media output. 🤣
 

defaultluser

[H]F Junkie
Joined
Jan 14, 2006
Messages
13,636
More 14nm? 😴 Did TSMC steal all of Intel's fab engineers? Even Samsung has them beat. I guess American engineering has really fallen behind. At least we lead the world in social media output. 🤣

Supposedly, they're banking on 38-core Ice Lake server chips. But as you can see, they're not available yet! Once those are ready, they'll have all their 10nm process capacity dedicated to those monsters!

Expect to see the server chips launch...sometime next year?
 

Dopamin3

Limp Gawd
Joined
Jul 3, 2009
Messages
165
Taken from the article at GN:
First, Intel claimed that, quote, “around 60% of games are optimized for single core.” We asked Intel in the press call to clarify this phrasing. Namely, we wanted to know from what year this approximation was calculated. Intel answered this by suggesting that we take the question “offline” and follow-up, then failed to follow-up for about 24 hours. If Intel is referring to all games made ever in all time, including Space War, which was made for the PDP-1 in 1962, then 60% is probably undershooting it a little bit.

You can't make this stuff up.

P.S. I wonder if this is the continuation of Ryan's quality work.
Of course it is. He's really doing a great job as Chief Performance Strategist.
 

Meeho

Supreme [H]ardness
Joined
Aug 16, 2010
Messages
5,047
Taken from the article at GN:
First, Intel claimed that, quote, “around 60% of games are optimized for single core.” We asked Intel in the press call to clarify this phrasing. Namely, we wanted to know from what year this approximation was calculated. Intel answered this by suggesting that we take the question “offline” and follow-up, then failed to follow-up for about 24 hours. If Intel is referring to all games made ever in all time, including Space War, which was made for the PDP-1 in 1962, then 60% is probably undershooting it a little bit.

You can't make this stuff up.
And when Intel got called out for that BS, it resulted in this gem:

Intel later did follow-up and noted this:

“Slide 3 in the file shared yesterday had a version error. The correct information is reflected in Slide 3 of the PDF deck saved to this folder. The bullet that was removed, pertaining to the 60% of games are optimized for single core, was meant to be an internal guide only. We regret the error, and appreciate your understanding.”
 

Red Falcon

[H]F Junkie
Joined
May 7, 2007
Messages
10,622
I'm trying to remember the last time a game was truly single-threaded and non-SMP.
The first game with full SMP functionality (2 threads) was Quake 4, and it had to be run with the optional -smp flag in order to enable the feature on the then-new dual-core x86-64 CPUs - that was in late 2005.

The 360 and PS3 both ran multi-threaded games and software, and games that were single-threaded ran terribly on both of those consoles - that was in 2005 and 2006, respectively.
Not counting legacy or mobile games, the last time I remember seeing any x86-based PC game that was single-threaded was maybe 2007 or 2008?

I don't know if we should speak out against Intel's logic on this one, though, otherwise DanNeely might show us the door to Wccftech. :ROFLMAO:
 

defaultluser

[H]F Junkie
Joined
Jan 14, 2006
Messages
13,636
I'm trying to remember the last time a game was truly single-threaded and non-SMP.
The first game with full SMP functionality (2 threads) was Quake 4, and it had to be run with the optional -smp flag in order to enable the feature on the then-new dual-core x86-64 CPUs - that was in late 2005.

The 360 and PS3 both ran multi-threaded games and software, and games that were single-threaded ran terribly on both of those consoles - that was in 2005 and 2006, respectively.
Not counting legacy or mobile games, the last time I remember seeing any x86-based PC game that was single-threaded was maybe 2007 or 2008?

I don't know if we should speak out against Intel's logic on this one, though, otherwise DanNeely might show us the door to Wccftech. :ROFLMAO:

Yeah, even Skyrim makes use of a second core to do high-end shadow effects. Since the release of Fallout 4, they've scaled their engine up to 6 cores.
 

EniGmA1987

Limp Gawd
Joined
May 2, 2017
Messages
429
So Intel has started losing in enough areas that they now advertise edge cases in specific video games they're good at when launching a CPU? Reminds me of AMD's tactics during the Bulldozer era.
 

Red Falcon

[H]F Junkie
Joined
May 7, 2007
Messages
10,622
So Intel has started losing in enough areas that they now advertise edge cases in specific video games they're good at when launching a CPU? Reminds me of AMD's tactics during the Bulldozer era.
...and Intel during the Netburst era.
Even though, in that era, Intel lost in every single category except for a few-second gain on a large Microsoft Outlook mail merge. :D
 

DanNeely

2[H]4U
Joined
Aug 26, 2005
Messages
3,671
In defense of Ryan, he has very few positives to work with. Intel is so far behind the curve it's not even funny. I'll probably sell my Z390/9900K setup and go Zen 3 in the beginning of 2021.
Are you doing something that's highly CPU bound and would benefit from significantly more cores, or just have rapid upgrade fever?

I'm hoping to delay replacing my 4790K until at least 2022, which'd give me 7 years on it, the same as my previous i7-920/930 setup. Mostly I'm looking for DDR5, PCIe 5.0 (or confirmation that it's too expensive to build a supporting PCB for the consumer market), and multi-gig Ethernet. Secondary considerations would be significant USB-C penetration (at least 2 front ports and 2 rear ports); the need for an extra chip to support reversibility is a frustrating drag here. USB4 would be nice, but is a much lower priority because things needing it are really niche, and anything longer than about a foot is expected to require expensive active cables.

2022/23 would also align with my roughly every-2-year GPU upgrade cycle; having passed on the 2080 last year, I'm probably going to upgrade my 1080 to a 3080 when it's out in a few months.

Sooner would probably mean either a hardware failure, or some game I really fall in love with needing more CPU cores badly.
 

Mav451

Supreme [H]ardness
Joined
Jul 23, 2004
Messages
4,657
Krazy925 My college roommate was the ultimate Intel fanboy during the Netburst era. Then the following year, I remember visiting him at his new off-campus place and he was proudly showing off an A64 rig. It was a s940 FX system too - he didn't even wait for the mainstream s754/939 gear to make the change.

der8auer's video yesterday also brought up the soldiers lmao - Meeho just posted it while I was typing this...

In all seriousness, the only thing that would make this somewhat interesting is if MC brought back real, substantial Intel discounts. I'm talking the return of $149 i5 or $229 i7.
 

ManofGod

[H]F Junkie
Joined
Oct 4, 2007
Messages
12,005
View attachment 241585

Truly captivating stuff from Intel. AMD has their work cut out for them.

More info and real life marketing at:
https://www.gamersnexus.net/news-pc/3576-intel-gen-10-cpu-specs-10900k-delid-oc-support


P.S. I wonder if this is the continuation of Ryan's quality work.
So, it is true after all: stuff was optimized for Intel over the years and AMD processors lost out, at least in part, because of it. :D I will have to check out this video; this appears good for competition, even though it is not a game I have played and probably never will play myself.
 

EniGmA1987

Limp Gawd
Joined
May 2, 2017
Messages
429

In the system configurations slide at the back of the latest press deck Intel says it set the PL2, the short-term power limit, at 250W. That's twice the base TDP of the Comet Lake chip, almost twice the base TDP of the 9900KS, and nearly 100W higher than that Coffee Lake processor's suggested PL2 rating.


Intel also set the Tau, the set amount of time the chip will draw that much power, at 56 seconds just so it could beat the rest. Given that there's no information offered as to what the 9900KS was set at, we can assume it was running at the recommended 159W PL2 level and 28-second Tau.


With all that taken into consideration, you can see how Intel is touting the 10900K as 'the world's fastest gaming processor' even though you've got to put in a bit of effort, and a whole lot of power, to get it running like that.
So for a while now Intel has been basing its performance ratings on the default turbo speed, which is over the TDP they claim. But now Intel has moved beyond that, modifying settings to specifically overclock the CPU past default turbo settings and claiming that as their normal performance? lol
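The PL1/PL2/Tau mechanics being criticized can be sketched with a toy model. This is a deliberate simplification (an exponentially weighted moving-average power budget, which only approximates Intel's real running-average power limiting); the wattages and 56-second Tau are the figures quoted above.

```python
import math

def simulate(seconds, pl1=125.0, pl2=250.0, tau=56.0, dt=1.0):
    """Per-second power draw under a simplified PL1/PL2/Tau model.

    The chip draws PL2 while the exponentially weighted moving average
    of its power stays below PL1, then is clamped to PL1. This is only
    an approximation of Intel's real power-limit behavior.
    """
    ewma = 0.0                       # start from idle
    alpha = 1 - math.exp(-dt / tau)  # EWMA step weight for time constant tau
    draws = []
    for _ in range(int(seconds / dt)):
        draw = pl2 if ewma < pl1 else pl1
        ewma += alpha * (draw - ewma)
        draws.append(draw)
    return draws

draws = simulate(120)
boost = sum(1 for d in draws if d > 125.0)
print(f"sustained {max(draws):.0f} W for ~{boost} s before throttling to 125 W")
```

From idle, this sketch holds 250 W for roughly tau·ln 2 ≈ 39 s before settling at 125 W, which illustrates the gap between a short benchmark window run at PL2 and the steady-state behavior the rated TDP describes.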
 

Red Falcon

[H]F Junkie
Joined
May 7, 2007
Messages
10,622
I don't remember the original release of Quake 3 allowing SMP; not saying you are wrong, but was this a thing back in 1999?
I know other software was, but games from that era (not counting the Saturn) were not SMP, and this is the first I have heard of that.
 
Last edited:

Red Falcon

[H]F Junkie
Joined
May 7, 2007
Messages
10,622
Just got out my copy (the installer doesn't like win10, but the game runs)
View attachment 242181
Damn, I stand corrected, and that was from 1999, wow!
Also found this from the r_smp option in your screenshot:

https://www.anandtech.com/show/368/33


That is awesome, thank you for correcting me on this, and for letting me know! (y)
I actually have a Pentium II 2P system as well - going to need to fire that up and set up a Quake 3 server now - much thanks for the inspiration!!!
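For anyone else wanting to try this on an old 2P box: going from memory (so treat the exact spellings as assumptions), the dual-threaded renderer is controlled by the `r_smp` cvar shown in that screenshot, settable at launch or from the console:

```
quake3 +set r_smp 1      // launch with the dual-threaded renderer enabled
/r_smp 1                 // or set it in the in-game console, then /vid_restart
```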
 