Intel Allegedly Readying 10-Core Comet Lake-S Processors

Megalith · 24-bit/48kHz · Staff member · Joined: Aug 20, 2006 · Messages: 13,000
Wccftech has spotted a rumor on a Taiwanese forum claiming Intel will again raise the core count of its mainstream processors with its next CPU family, which is being built on the 14nm process node. Allegedly called Comet Lake, one of the chips will supposedly feature 10 cores and use a dual, rather than single, ring-bus interconnect.

…a single ring bus can definitely handle 10 cores…but this would be an interesting change if Intel does go with the dual ring design, considering their core-to-core latency has been on point for a while on the mainstream side compared to the competitors who rely more aggressively on interconnects due to cores allocated to dual dies, hence slightly affecting the latency speeds.
 
The poll at the bottom speaks volumes.
[Attachment: poll screenshot, 2018-11-25]
 
Considering how hot the 9900K runs with 8 cores, this will either be substantially clocked down, or you will need their famous chiller cooling solution to run it, at least until they get off the 14nm process and get 10nm working.
My first thoughts too.
Also, the quip about latency being 'on point' is pretty questionable, because for their higher core count parts the latency between Zen 1 and their 14nm 79-whatever is pretty much the same, while Zen is better within a CCX. Zen+ improved on that further.
 
I find that we have gotten to a sad state in CPU design.

It used to be only AMD trying to make up for lack of per core performance with core count.

Now everyone is doing it.

Let me be clear here. Unless you do rendering, encoding, multiple desktop VMs, or some sort of scientific application, there is no need for more than ~6C/12T.

The current core craze is nuts, wasteful and selling people stuff they don't need.

Meanwhile I simply can't find a CPU with 6-8 cores that also has enough PCIe lanes for my expansion. If I want those lanes, I have to buy into stupid core count territory.

It's so annoying.
 
I find that we have gotten to a sad state in CPU design.

It used to be only AMD trying to make up for lack of per core performance with core count.

Now everyone is doing it.

Let me be clear here. Unless you do rendering, encoding, multiple desktop VMs, or some sort of scientific application, there is no need for more than ~6C/12T.

The current core craze is nuts, wasteful and selling people stuff they don't need.

Meanwhile I simply can't find a CPU with 6-8 cores that also has enough PCIe lanes for my expansion. If I want those lanes, I have to buy into stupid core count territory.

It's so annoying.

Sure, let's stop innovation because nothing can use it. But then again, why develop software that could use it if the hardware doesn't exist? Something has to come first... They said the same thing about 1080p, 4K, HDR, HT, 8C, 16GB RAM, cellphones, and the list goes on.
Innovation needs to keep going, and if you're not happy with the offering, don't buy.
 
Meanwhile I simply can't find a CPU with 6-8 cores that also has enough PCIe lanes for my expansion. If I want those lanes, I have to buy into stupid core count territory.

It's so annoying.

AMD makes an 8-core Threadripper CPU just for people in your situation. It gets you all the PCIe lanes & I/O options of the Threadripper line-up at a somewhat reasonable price. And honestly, video editing is a common enough use case these days (look how many people put stuff on YouTube) that many people will benefit from all the cores they can get. And for the majority of people whose need for PCIe lanes ends at a video card & a single NVMe drive, there are plenty of lower core count options from both AMD & Intel that are quite affordable. A quad-core Ryzen 2200G is only $90 right now, for example, and is more than enough for any casual user.
 
Sure, let's stop innovation because nothing can use it. But then again, why develop software that could use it if the hardware doesn't exist? Something has to come first... They said the same thing about 1080p, 4K, HDR, HT, 8C, 16GB RAM, cellphones, and the list goes on.
Innovation needs to keep going, and if you're not happy with the offering, don't buy.


This is not innovation. It's just throwing more cores at a problem out of desperation to have something to one-up the competition with.

We already know for a fact that multithreading is not the solution, and never will be, regardless of how much innovation there is. It's one of the fundamental laws of computer science.

We know that only a relatively small proportion of code is a candidate for performance improvement through multithreading, and unless a very large proportion of your code can be multithreaded it just doesn't scale. This is not going to change. We aren't just waiting for a breakthrough here. These are the fundamentals of software design, and they cannot get any better unless you have code that can time travel to avoid thread locks.

[Attachment: Amdahl's law scaling chart]
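The scaling limit being described is Amdahl's law: with a parallelizable fraction p of the work and n cores, speedup = 1 / ((1 - p) + p / n). A minimal sketch, using an illustrative 75%-parallel workload (the fraction is an assumption for demonstration, not a measurement):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup from n cores when fraction p of the work
    is parallelizable and the remaining (1 - p) stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 75%-parallel workload gains little past a handful of cores:
# doubling from 8 to 16 cores adds well under half a core's worth of speedup.
for cores in (2, 4, 8, 10, 16):
    print(f"{cores:2d} cores: {amdahl_speedup(0.75, cores):.2f}x")
```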


There are applications that are well suited to high core counts, but they are typically not things that average users, or even enthusiasts, are doing, and even when you want to do them, they are probably better done on a GPU than on a CPU.

So this - for most end users - is just a useless marketing number, like megapixels, which after about ~3.5 became pointless for the typical casual picture taker.
 
I love the competition with CPUs right now. I've started acquiring parts for a new build (shooting for spring or summer - not really in a hurry). I'm strongly leaning towards Ryzen right now; Microcenter had a 2700X for about $250 this weekend. I was tempted. I'm trying to decide if I want to go Threadripper or not.
Now it'd be nice if AMD (or another company) could kick up some dust in the GPU world. Nvidia has a stranglehold on that market.
 
AMD makes an 8-core Threadripper CPU just for people in your situation. It gets you all the PCIe lanes & I/O options of the Threadripper line-up at a somewhat reasonable price. And honestly, video editing is a common enough use case these days (look how many people put stuff on YouTube) that many people will benefit from all the cores they can get. And for the majority of people whose need for PCIe lanes ends at a video card & a single NVMe drive, there are plenty of lower core count options from both AMD & Intel that are quite affordable. A quad-core Ryzen 2200G is only $90 right now, for example, and is more than enough for any casual user.

The 8-core Threadripper was first gen only. If you want to benefit from the higher clocks and improved IPC of Zen+, the minimum core count is 12, in the 2920X, and then you are dealing with the whole Infinity Fabric latency issue, with half the cores always being further away from some of their RAM, unless you are in "gaming mode," at which point you are paying $650 for what is essentially a hexacore :p

The key here is that when you grow core count but keep the same power and thermal envelopes, something has to give, and usually that something is core clocks. Once you hit about 6 cores, 99% of users would be better off with fewer cores clocked higher than with more cores clocked lower. The remaining 1% will do very well with these CPUs, but for most they are a silly waste. People are going to keep buying them, though, because the larger number is always better, right? :p

It reminds me of a certain 2011 review that got Kyle cut off from Intel, when he spoke this truth...
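The clocks-versus-cores tradeoff can be made concrete with a toy model: assume single-thread speed scales with clock and multi-thread scaling follows Amdahl's law. The clock speeds and parallel fractions below are hypothetical, chosen only to show where the crossover sits:

```python
def throughput(clock_ghz: float, cores: int, p: float) -> float:
    """Toy model of application throughput: clock speed times the
    Amdahl speedup for `cores` cores, where p is the workload's
    parallelizable fraction. Purely illustrative, not a benchmark."""
    speedup = 1.0 / ((1.0 - p) + p / cores)
    return clock_ghz * speedup

# Hypothetical parts: 8 cores at 4.7 GHz vs 10 cores at 4.3 GHz.
for p in (0.30, 0.75, 0.95):
    eight = throughput(4.7, 8, p)
    ten = throughput(4.3, 10, p)
    winner = "8C @ 4.7" if eight > ten else "10C @ 4.3"
    print(f"p={p:.2f}: 8C={eight:.1f}  10C={ten:.1f}  -> {winner}")
```

Under this toy model, mostly-serial workloads favor the higher-clocked part; only heavily parallel ones come out ahead on the extra cores.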
 
The 8-core Threadripper was first gen only. If you want to benefit from the higher clocks and improved IPC of Zen+, the minimum core count is 12, in the 2920X, and then you are dealing with the whole Infinity Fabric latency issue, with half the cores always being further away from some of their RAM, unless you are in "gaming mode," at which point you are paying $650 for what is essentially a hexacore :p

The key here is that when you grow core count but keep the same power and thermal envelopes, something has to give, and usually that something is core clocks. Once you hit about 6 cores, 99% of users would be better off with fewer cores clocked higher than with more cores clocked lower. The remaining 1% will do very well with these CPUs, but for most they are a silly waste. People are going to keep buying them, though, because the larger number is always better, right? :p

It reminds me of a certain 2011 review that got Kyle cut off from Intel, when he spoke this truth...

Core clocks aren't going up anytime soon, so more cores it is. Software devs need to catch up to hardware.
 
I find that we have gotten to a sad state in CPU design.

It used to be only AMD trying to make up for lack of per core performance with core count.

Now everyone is doing it.

Let me be clear here. Unless you do rendering, encoding, multiple desktop VMs, or some sort of scientific application, there is no need for more than ~6C/12T.

The current core craze is nuts, wasteful and selling people stuff they don't need.

Meanwhile I simply can't find a CPU with 6-8 cores that also has enough PCIe lanes for my expansion. If I want those lanes, I have to buy into stupid core count territory.

It's so annoying.


I don't know that I agree.

Recently started playing WoW again. Playing at 4K with all in-game settings at max quality, including AA/AF.

Just logged in to check Task Manager and happened to land in the middle of an Alliance incursion into Orgrimmar - maybe not raid-level hammering of the GPU/CPU, but up there.

WoW peaked at 49 threads. None of the 16 logical CPUs listed in my Task Manager was being overly hammered. CPU 0 was a little higher than the rest, and still was after exiting WoW.

I've seen the same kind of thing in ESO.

The whole move to x64 and high core counts has been good for MMO-type games that are coded to take advantage of them.

The Secret World Legends, on the other hand - wow, I am quite surprised - is using 90 threads, with cores 0, 2, and 5 being hit hard, though all cores are doing some work.

I'd say the high core counts really help spread the work around. With fewer than 8 cores (a guess on my part; it would be nice to test), I could see core 0 sitting at 100% rather than spiking up into the 80s from time to time.

Maybe ripping, VMs, and all that lead the way in terms of real need, but I'd rather be on an 8/16 or higher than a 4/8 (or 6/12) when a poorly optimized, CPU-intensive, high-thread-count game is making my box work hard.
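The pattern described above (dozens of threads, no single core saturated) is easy to reproduce: processes routinely create far more threads than there are logical CPUs, because most of those threads spend their time blocked, not computing. A minimal stdlib sketch (the 49-thread figure just mirrors the WoW observation):

```python
import os
import threading

def idle_worker(stop: threading.Event) -> None:
    # Simulate a mostly-idle game worker thread (audio, netcode, asset
    # streaming): it spends nearly all its time waiting, not computing.
    stop.wait()

stop = threading.Event()
workers = [threading.Thread(target=idle_worker, args=(stop,)) for _ in range(49)]
for t in workers:
    t.start()

# Thread count can far exceed logical CPU count without saturating any core,
# because most threads are blocked waiting rather than runnable.
print(f"threads: {threading.active_count()}, logical CPUs: {os.cpu_count()}")

stop.set()
for t in workers:
    t.join()
```

Spreading mostly-idle threads across many cores is exactly the case where extra cores smooth out load spikes rather than raise peak throughput.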
 
I find that we have gotten to a sad state in CPU design.

It used to be only AMD trying to make up for lack of per core performance with core count.

Now everyone is doing it.

Let me be clear here. Unless you do rendering, encoding, multiple desktop VMs, or some sort of scientific application, there is no need for more than ~6C/12T.

The current core craze is nuts, wasteful and selling people stuff they don't need.

Meanwhile I simply can't find a CPU with 6-8 cores that also has enough PCIe lanes for my expansion. If I want those lanes, I have to buy into stupid core count territory.

It's so annoying.

https://www.amd.com/en/products/cpu/amd-ryzen-threadripper-1900x
8 cores, 64 lanes. Or is that not enough lanes for you? (Yes, sure, it's two dies, but... you said cores and PCIe lanes, nothing else.)
 
This is not innovation. It's just throwing more cores at a problem out of desperation to have something to one-up the competition with.

We already know for a fact that multithreading is not the solution, and never will be, regardless of how much innovation there is. It's one of the fundamental laws of computer science.

We know that only a relatively small proportion of code is a candidate for performance improvement through multithreading, and unless a very large proportion of your code can be multithreaded it just doesn't scale. This is not going to change. We aren't just waiting for a breakthrough here. These are the fundamentals of software design, and they cannot get any better unless you have code that can time travel to avoid thread locks.

[Attachment: Amdahl's law scaling chart]

There are applications that are well suited to high core counts, but they are typically not things that average users, or even enthusiasts, are doing, and even when you want to do them, they are probably better done on a GPU than on a CPU.

So this - for most end users - is just a useless marketing number, like megapixels, which after about ~3.5 became pointless for the typical casual picture taker.

Yeah, well, this is based on today's HW/SW. So yeah, let's stop trying to solve the problem / innovate, call it whatever you like. If nobody tries, nobody will make it happen.
I do agree that it's overkill, but you have sooooo many choices today; pick the one best suited to your needs. Nobody is forcing 8C/16T onto my mom's laptop... it's the end user's choice, and I think it's "our" duty to guide those who can't choose properly.

Anyway, you can't say with certainty that it won't scale with tomorrow's solutions. (Nothing is 100% certain, and I'm pretty sure you know that after reading so many of your posts.) Tomorrow's HW/SW/engineers may solve the scaling issue we're facing now, so why wait for that solution before building the better HW it would run on?
Again, I can't stress it enough: there are many choices available, and lower core counts are among them... let's not forget that.
 
Software devs need to catch up to hardware.

Regardless of how brilliant software devs are, they can't violate the basic laws of computer science.

Only a small percentage of code will ever lend itself to increased performance through multithreading. That is a fact today, and it will be a fact in 100 years. Nothing anyone can do can change this. Pretty much everything that can be done to take advantage of many cores already has been. There is no great improvement just around the corner.
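That claim has a hard numeric ceiling: as core count goes to infinity, Amdahl's law caps speedup at 1/(1 - p), where p is the parallelizable fraction. A quick illustration (the fractions are chosen arbitrarily):

```python
def max_speedup(p: float) -> float:
    """Amdahl's law ceiling as core count -> infinity: only the serial
    fraction (1 - p) remains, so speedup can never exceed 1 / (1 - p)."""
    return 1.0 / (1.0 - p)

# Even 95%-parallel code tops out at 20x, no matter how many cores exist.
for p in (0.50, 0.90, 0.95, 0.99):
    print(f"p={p:.2f}: ceiling {max_speedup(p):.0f}x")
```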
 
https://www.amd.com/en/products/cpu/amd-ryzen-threadripper-1900x
8 cores, 64 lanes. Or is that not enough lanes for you? (Yes, sure, it's two dies, but... you said cores and PCIe lanes, nothing else.)

See previous post. The issue is sacrificing per-core performance for many cores. The 1900X is a generation old and has much lower clocks and IPC than current gen. The lowest core count you can get in a current-gen Threadripper is 12, in the 2920X.

If we are going to use old-gen CPUs as an option, sure, there are plenty of LGA2011 CPUs that fit the bill too.

I have no problem with buying a CPU that has more cores, as long as it does not detract from per-core performance. As soon as they start lowering core clocks to fit within the power or heat envelope, I call shenanigans. This is great for rendering farms, servers, special-effects video renderers, and home Blu-ray rippers or transcoders, but for everyone else it is not an appropriate solution.
 
Did you get that from unbiasedpolls.amd.com?

No doubt dragged from a /AMD Reddit thing. Probably flip flops in an Intel Reddit thing.

I don't use the Reddit

If EITHER of you had read the article that you are posting about in this forum, you would have seen the poll in the article talking about Intel releasing a 10-core mainstream CPU. The fact that this poll was so lopsided shows how far AMD has come and how much excitement there is around the Zen 2 architecture compared to the tiptoe releases from Intel at the moment.

Now if Intel were announcing a 16-core 10nm chip at under $400, you would have seen the poll flipped, but since they have essentially given up on progress for the last 5 years, this is where we're at.
 
See previous post. The issue is sacrificing per-core performance for many cores. The 1900X is a generation old and has much lower clocks and IPC than current gen. The lowest core count you can get in a current-gen Threadripper is 12, in the 2920X.

If we are going to use old-gen CPUs as an option, sure, there are plenty of LGA2011 CPUs that fit the bill too.

I have no problem with buying a CPU that has more cores, as long as it does not detract from per-core performance. As soon as they start lowering core clocks to fit within the power or heat envelope, I call shenanigans. This is great for rendering farms, servers, special-effects video renderers, and home Blu-ray rippers or transcoders, but for everyone else it is not an appropriate solution.[/QUOTE
So just turn off half the cores and clock it a bit higher? I would rather have more cores just a tad slower, by 100-200MHz, than go back to quad cores.
 
I find that we have gotten to a sad state in CPU design.

It used to be only AMD trying to make up for lack of per core performance with core count.

Now everyone is doing it.

Let me be clear here. Unless you do rendering, encoding, multiple desktop VMs, or some sort of scientific application, there is no need for more than ~6C/12T.

The current core craze is nuts, wasteful and selling people stuff they don't need.

Meanwhile I simply can't find a CPU with 6-8 cores that also has enough PCIe lanes for my expansion. If I want those lanes, I have to buy into stupid core count territory.

It's so annoying.

You’re ignoring the huge benefits of e-peen.
 
You need to fix your quote tag.

So just turn off half the cores and clock it a bit higher? I would rather have more cores just a tad slower, by 100-200MHz, than go back to quad cores.

This is probably what I will wind up doing when I finally replace the old i7-3930K, unless things have changed by then.

It's still problematic, though, as it messes with the binning. The many-core chips are binned for low power use at lower clocks. This isn't the same binning as the chips meant to hit max clocks, so you wind up playing the silicon lottery more than usual.

You are also paying for cores you'll never use, which irks me.

I agree. I wouldn't go back to quad cores, but I would appreciate a 6-8C/12-16T chip binned for the maximum possible clocks, above the multi-core thermal envelope, while still getting the full number of PCIe lanes of an HEDT system.

I'd pay for this. Generously. It's just that no one is making it.

My philosophy when it comes to HEDT is that I don't mind it, as long as it comes without sacrifices. If my cores are clocked as high, and I can get the same number of PCIe lanes, I won't complain about a couple of extra cores thrown in for good measure, but as soon as there are sacrifices, I'd rather keep the fewer cores, as long as there are 6+C/12+T of them.
 
You’re ignoring the huge benefits of e-peen.

Hehe, good one. You hit the nail on the head of the epeen. (Ouch)

Most people buying these will see little to no benefit from them, and may even see worse performance in their applications, but they WILL have the epeen...
 
Haha, I'm scared to build a PC currently, with the way the market is moving. It looks like parallelism is going to become increasingly important. These core wars will drive software innovation... eventually. Even on the gaming front, my 1660 v2 (4960X) is only now being fully utilized. Took the mainstream 5 years to get to this point, but I'm guessing this development path is going to continue now. Mainstream 16/32 inbound! Now we just need to up the bandwidth to feed 'em! DDR5 and PCIe 5.0, hurry your asses up!
 
Just waiting with bated breath for AMD to catch up on IPC; otherwise I'll stick with my 5820K or upgrade to Intel when it dies.

A bit of sarcasm there.
 