Intel Allegedly Readying 10-Core Comet Lake-S Processors

You need to fix your quote tag.



This is probably what I will wind up doing when I finally replace the old i7-3930k, unless things have changed by then.

It's still problematic, though, as it messes with the binning. The many-core chips are binned for low power use at lower clocks. This isn't the same binning as the one used to hit max clocks, so you wind up playing the silicon lottery more than usual.

You are also paying for cores you'll never use, which irks me.

I agree. I wouldn't go back to quad cores, but I would appreciate a 6-8C/12-16T chip binned for the maximum possible clocks rather than a multicore thermal envelope, while still getting the full complement of PCIe lanes of a HEDT system.

I'd pay for this. Generously. It's just that no one is making it.

My philosophy when it comes to HEDT is that I don't mind it, as long as it comes without sacrifices. If my cores are clocked as high, and I can get the same number of PCIe lanes, I won't complain about a couple of extra cores thrown in for good measure, but as soon as there are sacrifices, I'd rather keep the fewer cores, as long as there are 6+C/12+T of them.

What about the Skylake-X refresh? The 9800X is 8C with 44 lanes and 16.5 MB of L3. Seems like it matches your request.
 
I hope they lock the TDP on this thing because it will melt people's systems.
 
To those saying this would be glued silicon, this is talking about 2 ring buses, not dies. Like the X299 chips on the HCC die.
 
If EITHER of you had read the article you are posting about in this forum, you would have seen the poll in it about Intel releasing a 10-core mainstream CPU. The fact that the poll was so lopsided shows how far AMD has come and how much excitement there is around the Zen 2 architecture compared to the tiptoe releases from Intel at the moment.

Now if Intel were announcing a 16-core 10nm chip at under $400, you would have seen the poll flipped, but since they've essentially given up on progress for the last 5 years, this is where we're at.


Lol. "If either of you did something, my rant that doesn't address your comments that I quoted would make sense!"
 
Lol. "If either of you did something, my rant that doesn't address your comments that I quoted would make sense!"

So, one guy posts a poll from the linked article.

You attribute it to Reddit. Which is not where it came from.

Another guy calls you out on it.

And then you circle back around to something making sense?

Now I'M confused.
 
So, one guy posts a poll from the linked article.

You attribute it to Reddit. Which is not where it came from.

Another guy calls you out on it.

And then you circle back around to something making sense?

Now I'M confused.


Mission accomplished!
 
Must admit Zen 2 has piqued my interest, but I am waiting patiently for my "maor coahrs" upgrade
 
I am the all powerful Intel 10 core CPU! Bring me AMD's Infinity Fabric and I will grant your wish for CPU innovation. Pay no attention to the 14 nanometer man behind the curtain.
 
Anyone know of a resource that shows the top 100 games, top 100 productivity applications and how many threads they support?
 
I find that we have gotten to a sad state in CPU design.

It used to be only AMD trying to make up for lack of per core performance with core count.

Now everyone is doing it.

Let me be clear here. Unless you do rendering, encoding, multiple desktop VMs, or some sort of scientific application, there is no need for more than ~6C/12T.

The current core craze is nuts, wasteful and selling people stuff they don't need.

Meanwhile I simply can't find a CPU with 6-8 cores that also has enough PCIe lanes for my expansion. If I want those lanes, I have to buy into stupid core count territory.

It's so annoying.


Yeah, if you hadn't noticed, the big pile of necessary bloatware required to do anything on a Windows machine keeps growing, so yeah, those extra cores are gonna be handy soon.

example - Let's say you wanna play Assassin's Creed: gotta have Steam up; oh, it's a Ubisoft game, gotta have their portal crap too. Game runs like crap, gotta have Special K on to fix that. Still running slow, so fire up Afterburner to O/C your GPU... One game, 5 simultaneous applications needed to get it to run correctly....
 
Yeah, if you hadn't noticed, the big pile of necessary bloatware required to do anything on a Windows machine keeps growing, so yeah, those extra cores are gonna be handy soon.

example - Let's say you wanna play Assassin's Creed: gotta have Steam up; oh, it's a Ubisoft game, gotta have their portal crap too. Game runs like crap, gotta have Special K on to fix that. Still running slow, so fire up Afterburner to O/C your GPU... One game, 5 simultaneous applications needed to get it to run correctly....

Special K?

 
What about the Skylake-X refresh? The 9800X is 8C with 44 lanes and 16.5 MB of L3. Seems like it matches your request.

That does look nice. I hadn't seen it before. It's not listed on the Wikipedia LGA2066 CPU list. Has it only very recently been announced?

Still, according to Intel's Ark page it has a 600 MHz lower max turbo clock than the 9900K for some reason. That's 13.6 percent. Not insignificant. Hopefully with good cooling it can be overclocked to match the 9900K, but now the silicon lottery comes into play.

I don't understand why they can't just make a LGA2066 version of the 9900k and call it a day. Does the quad channel IMC and the real estate for the extra PCIe lanes really cause that much more heat? Or is it just that the binning process is prioritizing high clocking parts for mainstream socket platforms?

(I don't even know if these parts have shared silicon, or if they are made on different lines.)
 
Hah, nah it's actually a dang life saver for a lot of bad PC port titles. Fixes stuttering, frame pacing, audio and a host of other things broken in a bunch of titles.

Huh. I had never heard of it before. What kinds of games does it fix? Maybe I've just been lucky, but I've never noticed problems in the titles I've played. At least not since I swore off multi-GPU for good.

Any links? I'm trying to google it and all I find are Kellogg's products :p
 
If EITHER of you had read the article you are posting about in this forum, you would have seen the poll in it about Intel releasing a 10-core mainstream CPU. The fact that the poll was so lopsided shows how far AMD has come and how much excitement there is around the Zen 2 architecture compared to the tiptoe releases from Intel at the moment.

Now if Intel were announcing a 16-core 10nm chip at under $400, you would have seen the poll flipped, but since they've essentially given up on progress for the last 5 years, this is where we're at.

No, I didn't read the wccftech article. And yes, I am an AMD fan. Can't wait for Zen 2 / Ryzen 3 but such polls mean absolutely nothing. What counts is throwing money their way which I already did for a laptop.
 
Huh. I had never heard of it before. What kinds of games does it fix? Maybe I've just been lucky, but I've never noticed problems in the titles I've played. At least not since I swore off multi-GPU for good.

Any links? I'm trying to google it and all I find are Kellogg's products :p

A lot of RPG ports, namely the "Tales"-series type games, but also some bigger titles, like Assassin's Creed and FFXV

Should be a list of some of the games on this first link:

https://steamcommunity.com/groups/SpecialK_Mods

https://github.com/Kaldaien/SpecialK/releases
 
I'm sure this will be an i9 which for me is a bit out of reach, but maybe that means 8c/16t i7 next year.

I'm upgrading in 2019 after 5 years with a 4790k. Whether it's AMD or Intel doesn't matter to me, whichever has higher IPC if priced similarly or whichever is cheaper if there is a large price gap. Right now 2700x is a no brainer over a 9900k for me due to the price differences.
 
That does look nice. I hadn't seen it before. It's not listed on the Wikipedia LGA2066 CPU list. Has it only very recently been announced?

Still, according to Intel's Ark page it has a 600 MHz lower max turbo clock than the 9900K for some reason. That's 13.6 percent. Not insignificant. Hopefully with good cooling it can be overclocked to match the 9900K, but now the silicon lottery comes into play.

I don't understand why they can't just make a LGA2066 version of the 9900k and call it a day. Does the quad channel IMC and the real estate for the extra PCIe lanes really cause that much more heat? Or is it just that the binning process is prioritizing high clocking parts for mainstream socket platforms?

(I don't even know if these parts have shared silicon, or if they are made on different lines.)

So the 9800X is based on the Skylake-X refresh silicon, and they are using the HCC die for all of the parts this time (the 7000-series Skylake-X HEDT used the LCC and HCC dies depending on core count; this time they all use HCC), so it's a pretty big die, ~480-500mm^2. It probably won't overclock as well as a 9900K, but it should be able to do OK; I'd imagine you could get ~4.8 GHz all-core turbo and probably 5 GHz on 2-4 cores. Maybe. They have been announced just recently, yes, and I don't think they are for sale yet, but soon. Anandtech has some good articles on them: here's one, and here is another focusing on the 18-core part.
 
So the 9800X is based on the Skylake-X refresh silicon, and they are using the HCC die for all of the parts this time (the 7000-series Skylake-X HEDT used the LCC and HCC dies depending on core count; this time they all use HCC), so it's a pretty big die, ~480-500mm^2. It probably won't overclock as well as a 9900K, but it should be able to do OK; I'd imagine you could get ~4.8 GHz all-core turbo and probably 5 GHz on 2-4 cores. Maybe. They have been announced just recently, yes, and I don't think they are for sale yet, but soon. Anandtech has some good articles on them: here's one, and here is another focusing on the 18-core part.


Is the size of the die really relevant to heat production if the unused cores are gated, and thus completely turned off?
 
I'm not entirely sure. I mean, it uses the mesh, so some parts will be powered up all over it, plus there is probably some leakage even when power gated, plus it has a pile of cache, more memory controllers, more PCIe lanes, etc.
 
Regardless of how brilliant software devs are, they can't violate the basic laws of computer science.

Only a small percentage of code will ever lend itself to increased performance through multithreading. That is a fact. It is a fact today, and it will be a fact in 100 years. Nothing anyone can do can change this. Pretty much at this point, what can be done to take advantage of many cores already has been. There is no great improvement just around the corner.
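The ceiling described above is Amdahl's Law. As an illustration (not from the thread), a minimal Python sketch of the formula shows why piling on cores stops helping once the serial fraction dominates:

```python
# Amdahl's Law: speedup from n cores when only a fraction p of the
# work can run in parallel. Illustrative sketch, not from the thread.

def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A 50%-parallel workload tops out below 2x no matter the core count:
for cores in (2, 4, 8, 16, 1000):
    print(cores, round(amdahl_speedup(0.5, cores), 2))
```

Even at 1000 cores the 50%-serial workload never reaches a 2x speedup, which is the point the post is making about diminishing returns.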

Software developers can make better programs today by simply making more of their programs multi-threaded. I can't stand programs that can only do one task at a time.
 
Software developers can make better programs today by simply making more of their programs multi-threaded. I can't stand programs that can only do one task at a time.

There is only so much that can benefit from being multi-threaded. It really isn't an issue of 'developers catching up' -- it's an issue of the workloads, and the fact that most of them cannot scale like that. Amdahl's Law explains this, which was already brought up.
 
Software developers can make better programs today by simply making more of their programs multi-threaded. I can't stand programs that can only do one task at a time.

Again, most computing tasks (maybe 80%?) depend on state. So something has to complete before something else can happen. If you try to multithread code like this, you just wind up with locked threads waiting for each other to complete. There is nothing a software developer can do to "make this better". Whenever you have a logical progression, "If A, then B", A has to complete before B. If you break this into multiple threads, you'll have the B thread waiting for the A thread to complete and then executing, and you won't get any performance improvement out of it anyway.

A stunningly small proportion of tasks are possible to multi-thread. Those have mostly already been multi-threaded today, and the ones that benefit from it most are probably better suited to GPUs than to CPUs anyway.

It does not matter how good of a software developer you have, or how many of them you have. There are some things that are just logically impossible, and thus cannot ever be multi-threaded, no matter what you do.
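The "If A, then B" point can be sketched in Python (a hypothetical illustration, not from the thread): putting a dependent step on its own thread just makes it block until the first step finishes, so nothing actually runs in parallel.

```python
# Illustrative sketch: step B needs step A's result, so splitting them
# across threads only makes B block on A -- no parallel speedup.
import threading
import queue

def step_a(out: queue.Queue) -> None:
    # "A": produce a result that B depends on
    out.put(sum(range(100_000)))

def step_b(inp: queue.Queue) -> int:
    # "B": blocks here until A has finished
    a_result = inp.get()
    return a_result * 2

q = queue.Queue()
t = threading.Thread(target=step_a, args=(q,))
t.start()
result = step_b(q)  # waits on A despite the extra thread
t.join()
```

The second thread buys nothing here: total wall-clock time is still A's time plus B's time, which is exactly the serial-dependency argument in the post.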
 