The Desktop CPU Isn’t Dead, It Just Needs A Swift Kick In The Butt

Megalith

24-bit/48kHz
Staff member
Joined
Aug 20, 2006
Messages
13,000
This author thinks that the answer to making desktop processors exciting again is to add more cores, like Intel has done for their enthusiast line. The trouble with that is cost, but he hopes that potential competition from AMD will ultimately help with that.

…adding more cores is probably the easiest and best way to boost performance at the high-end and convince consumers to replace that three-to-five-year-old PC. Enough apps are built to take advantage of multiple cores that users would see benefits immediately. Intel’s Turbo Boost tech (particularly Turbo Boost 3.0, introduced in the aforementioned Broadwell-E CPUs but not available anywhere else yet) can maintain good performance for single-threaded or lightly threaded tasks. If the leaked roadmaps we cited before are to be believed, Intel may be planning to do this when the “Coffee Lake” processors are released in 2018.
 
The real problem with this is: why? I can understand it at the enthusiast line. But if you exclude people doing high-end gaming or running specialized programs with heavy CPU needs, why does anyone else really need this? Your normal office drone is fine with today's computers for running some spreadsheets and surfing the web; memory is probably an issue long before they max out the CPU. Most home users don't need 32 cores, as again they are surfing the internet or doing basic things. It used to be that about every 2-3 years a computer needed to be replaced because there was something much better out there that made the change worthwhile for everyone. Now, unless it is dead, most people probably have no need to change out their computer, and when they do they don't need to buy anything extreme CPU-wise.
 
CPUs these days are usually more powerful than the software demands. So why more cores?

I'm running 6c 12t @4.5ghz and I feel like it's overkill.
 
I wouldn't mind faster CPUs, but more cores is not the solution.

We've already hit a wall in multithreading. Most of what is able to be multithreaded already has been. More cores is never going to be the solution unless you do rendering, encoding or scientific computation type of work.

Increases in per-core performance are what matter.

There is no point in just adding cores that 99.999% of people will never use.
 
I'm fine with only one Opteron 2.5GHz 8c processor on a dual-socket mobo.

But my ex-roomie had a lust for more speed and more cores. I helped him build a brand-new PC with a 4GHz 8c AM3 chip on a full-sized MSI ATX board and a 256GB M.2 SSD, and it installed Win 10 LTSB in 5 minutes.

Yet my little cube (another PC), sporting only a Pentium 3.1GHz 2c/4t on an H67M-ITX with 2 Crucial 64GB drives in RAID 0, installed Win 10 LTSB in 4 minutes!

And the install? Both from a USB 3.0 drive.

I told him that having more cores/higher GHz is overkill just for surfing the net. His answer? Don't like waiting - needed it right there, just like you flick a light switch ON. :rolleyes:
 
Always ready for more power! But honestly, I don't do much video encoding these days and games are for the most part GPU-limited, so meh. Not to say I'm not excited about a more competitive market in the near future. I'm kicking myself for not getting the i7 4790K when I spec'd out my current system a few years back. I don't think the price has changed at all, still going for ~$300 used on eBay; maybe if enough people jump to AMD with Ryzen the price will come down enough to justify it.
 
I agree with the author that competition is important, but my concern is price, not performance.

When Intel and NVIDIA can set new records for rent-seeking/price gouging (Founder's Edition, I'm looking at you), it means the market's dysfunctional.

Hopefully Ryzen/Vega will fix that.

Remember the 939 days when Intel was running scared?
 
The real problem with this is: why? I can understand it at the enthusiast line. But if you exclude people doing high-end gaming or running specialized programs with heavy CPU needs, why does anyone else really need this? Your normal office drone is fine with today's computers for running some spreadsheets and surfing the web; memory is probably an issue long before they max out the CPU. Most home users don't need 32 cores, as again they are surfing the internet or doing basic things. It used to be that about every 2-3 years a computer needed to be replaced because there was something much better out there that made the change worthwhile for everyone. Now, unless it is dead, most people probably have no need to change out their computer, and when they do they don't need to buy anything extreme CPU-wise.

That is not the way it will always be, however. Things will advance, including basic functions, which means more resources will be needed. Otherwise, we could still be on a single-core CPU from the 1990s. This is not a put-down of what you are saying so much as a different point of view on the same point.
 
By the time I am ready to update my Skylake-based system Ryzen just might be the AMD architecture that can finally lure me away from my last 15 years of expensive Intel builds.
 
I love how so many people who have probably never done any kind of low-level code optimization can suddenly call out all devs for being lazy, as if multi-threading were just some magic wand you wave and then BOOM, performance.

Some tasks need information from the previous calculations before they can do the next one; such tasks can't be multi-threaded. Not every task can easily be split and calculated out of order.
Another example of this is dictionary compression. A program like 7-Zip has to "cheat" by dividing the data, at a cost in efficiency, to be able to multi-thread it. This increases memory usage since you are basically running the program twice.
It's also only possible because we don't care about the order of the results.
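The chunking trade-off described above can be sketched in Python (a toy illustration using zlib, not 7-Zip's actual LZMA implementation): each chunk is compressed independently so the work can run in parallel, but every chunk restarts its dictionary, so the combined output comes out larger than compressing the stream whole.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

data = b"the quick brown fox jumps over the lazy dog " * 5000

# one pass over the whole stream: the dictionary warms up once
whole = zlib.compress(data)

# split into 4 independent chunks so they can be compressed in parallel
# (zlib releases the GIL, so the threads genuinely overlap)
n = 4
size = len(data) // n
chunks = [data[i * size:(i + 1) * size] for i in range(n)]
with ThreadPoolExecutor(max_workers=n) as pool:
    parts = list(pool.map(zlib.compress, chunks))

# every chunk restarts its dictionary, so the combined output is larger
chunked_total = sum(len(p) for p in parts)
print(len(whole), chunked_total)
```

With highly repetitive input the gap is mostly per-stream overhead and dictionary warm-up; real compressors tune chunk sizes so the ratio loss stays small.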

In games we very much do care about the order of the results, and there are a lot of tasks that need the results from previous operations, so multi-threading becomes harder than it would be with, e.g., a video encoder that can easily divide the work into multiple tasks.
So please, if you haven't touched deep-level optimization or multi-threaded coding, don't insult the devs.

But hey, that way you can't look all smug on a forum...



Also, stating CPUs are fast enough only counts for you. You are not the entire world, and there is plenty of software that can use and needs more cores and more performance. Games are not the only thing computers are for.
 
I love how so many people who have probably never done any kind of low-level code optimization can suddenly call out all devs for being lazy, as if multi-threading were just some magic wand you wave and then BOOM, performance.

Some tasks need information from the previous calculations before they can do the next one; such tasks can't be multi-threaded. Not every task can easily be split and calculated out of order.
Another example of this is dictionary compression. A program like 7-Zip has to "cheat" by dividing the data, at a cost in efficiency, to be able to multi-thread it. This increases memory usage since you are basically running the program twice.
It's also only possible because we don't care about the order of the results.

In games we very much do care about the order of the results, and there are a lot of tasks that need the results from previous operations, so multi-threading becomes harder than it would be with, e.g., a video encoder that can easily divide the work into multiple tasks.
So please, if you haven't touched deep-level optimization or multi-threaded coding, don't insult the devs.

But hey, that way you can't look all smug on a forum...

Also, stating CPUs are fast enough only counts for you. You are not the entire world, and there is plenty of software that can use and needs more cores and more performance. Games are not the only thing computers are for.


Exactly. The whole "lazy devs" argument is only made by people who don't have a freaking clue about software development.

There are some kinds of tasks (maybe even most kinds) that can NEVER be properly multithreaded, because of how the data depends on the order of things.

Game engines are notorious for this. Apart from the rendering portion, which threads pretty well, if you try to multithread them you will just have thread locking and all kinds of other problems slowing things down rather than speeding them up.

Most games "cheat" and split the game into multiple threads (one for game engine logic, one for physics, one for sound, etc.) and do get the game to utilize multiple cores, but this isn't even true multithreading; it's just many separate single-threaded parts.

The main game engine is usually still the beefiest, and the least able to be threaded, which is why you usually have one core maxed and some smaller stuff going on on the other cores.

I guess, long story short, in 2017, if something isn't threaded, more likely than not it's because it CAN'T be threaded. It's against the computer science equivalents of the laws of physics, NOT because inadequate time and effort has been spent on the problem.
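That "many separate single-threaded parts" structure can be sketched roughly like this (hypothetical subsystem names; Python threads standing in for engine subsystems): the main game-logic thread stays serial and only fans frame ticks out to dedicated worker threads.

```python
import threading
import queue

def subsystem_worker(name, inbox, results):
    # each subsystem is plain single-threaded code running on its own thread,
    # consuming frame ticks from its queue
    count = 0
    while True:
        frame = inbox.get()
        if frame is None:          # shutdown sentinel
            break
        count += 1                 # stand-in for physics/audio/AI work
    results[name] = count

FRAMES = 100
subsystems = ["physics", "audio", "ai"]   # hypothetical subsystem split
inboxes = {s: queue.Queue() for s in subsystems}
results = {}
threads = [threading.Thread(target=subsystem_worker, args=(s, inboxes[s], results))
           for s in subsystems]
for t in threads:
    t.start()

# the main game-logic thread stays serial; it only fans out frame ticks
for frame in range(FRAMES):
    for s in subsystems:
        inboxes[s].put(frame)
for s in subsystems:
    inboxes[s].put(None)
for t in threads:
    t.join()
print(results)
```

The parallelism here comes only from running the subsystems side by side, not from threading any single subsystem, which is the post's point.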
 
I love how so many people who have probably never done any kind of low-level code optimization can suddenly call out all devs for being lazy, as if multi-threading were just some magic wand you wave and then BOOM, performance.
Also, stating CPUs are fast enough only counts for you. You are not the entire world, and there is plenty of software that can use and needs more cores and more performance. Games are not the only thing computers are for.

I was only commenting on the average desktop cpu user. Obviously there will always be a market for more performance for professionals and enthusiasts.
 
Only so much code can be multithreaded. Though we should always have the mentality of doing more with less, it is far easier said than done.
 
I was only commenting on the average desktop cpu user. Obviously there will always be a market for more performance for professionals and enthusiasts.

I totally agree. Basic office/internet/e-mail use hasn't had any reason to upgrade for many, many years.
Gamers' need for CPU power has also dwindled to a slower pace than in the past.

But there is still a bunch of CPU-power-hungry software out there. I have tasks that still run for weeks at 96-100% CPU usage non-stop. I have to split them over multiple computers to get somewhat decent performance.
 
Only so much code can be multithreaded. Though we should always have the mentality of doing more with less, it is far easier said than done.

Over the last 30 years of software development, code has only been getting more and more complex. The ability to multithread becomes more and more difficult, while at the same time more and more necessary. Modern web servers really could not function without at least a thread handling each socket connection. The same goes for database servers; both of those are heavily multithreaded. In the world of applications, there is always a need for multithreading. Any time a lengthy task needs to be done, common programming practice is to spawn a thread for it and notify the main thread when it is done. When data needs to be accessed across threads, more often than not the shared data is protected by semaphores or locks (at least by any programmer worthy of the title).
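The thread-per-connection model described above is what Python's standard-library `socketserver.ThreadingTCPServer` implements; a minimal echo-server sketch, with a lock-guarded shared counter as an example of the semaphore/lock point:

```python
import socket
import socketserver
import threading

handled = 0
lock = threading.Lock()   # shared counter guarded by a lock, as described above

class EchoHandler(socketserver.BaseRequestHandler):
    # ThreadingTCPServer spawns one handler thread per accepted connection
    def handle(self):
        global handled
        with lock:
            handled += 1
        data = self.request.recv(1024)
        self.request.sendall(data)

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
port = server.server_address[1]   # port 0 above means "pick a free port"
threading.Thread(target=server.serve_forever, daemon=True).start()

replies = []
for msg in (b"hello", b"world"):
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(msg)
        replies.append(conn.recv(1024))
server.shutdown()
print(replies, handled)
```

Each accepted connection gets its own handler thread; the lock keeps the shared counter consistent across them.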

Sure there are a bazillion things devs can do to write tighter code, but when you have worked in the modern software development world, the onus is on the developer to get their code out there fast, no matter how they can, because development cycles are getting shorter and shorter. When that happens the bosses don't care if the code is efficient or tight. They only care that it works and you are moving on to your next line item.
 
Bring back NetBurst IMHO.


Who needs cores, bleh!

Poor comparison, though.

NetBurst failed not because it was a single-core design (they could have easily added more cores) but because its very long pipeline resulted in a ton of branch-misprediction stalls, dropping the effective IPC into the toilet, to the point where even the relatively high clocks it achieved were unable to overcome the deficit.
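The pipeline argument can be put in rough numbers (purely illustrative figures, not measured NetBurst data): a misprediction flushes the pipeline, so the stall penalty grows with pipeline depth, dragging down effective cycles per instruction (CPI).

```python
def effective_cpi(base_cpi, branch_frac, mispredict_rate, penalty_cycles):
    # average cycles per instruction once misprediction stalls are charged
    return base_cpi + branch_frac * mispredict_rate * penalty_cycles

# illustrative assumptions: 20% of instructions are branches, 10% mispredicted
short_pipe = effective_cpi(1.0, 0.20, 0.10, 12)   # ~12-stage pipeline flush
long_pipe = effective_cpi(1.0, 0.20, 0.10, 30)    # ~30-stage pipeline flush
print(short_pipe, long_pipe)
```

On these made-up numbers the deeper pipeline needs roughly 29% higher clocks (1.6 / 1.24) just to break even.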
 
What the desktop CPU needs is fewer lazy devs that don't use all cores.
That's not a problem with laziness, but with how much things can be made to run in parallel (see Amdahl's Law), and also how easy it is to make code closer to bug-free. Making things needlessly complex to extract another 5% in performance isn't worth it from a development standpoint. The bigger problem is that even if you optimize the hell out of most software, it's still not going to keep all cores busy.
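Amdahl's Law puts numbers on this; a quick worked example (the 95% parallel fraction is an assumption for illustration):

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's Law: the serial fraction caps the achievable speedup
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# assume 95% of the workload parallelizes (illustrative figure)
for cores in (4, 8, 16, 64):
    print(cores, round(amdahl_speedup(0.95, cores), 2))
```

Even with 95% of the work parallel, the speedup can never exceed 20x no matter how many cores you add, which is why the extra cores end up sitting idle.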
 
That is not the way it will always be, however. Things will advance, including basic functions, which means more resources will be needed. Otherwise, we could still be on a single core cpu from the 1990's. This is not a put down of what you are saying so much as a different point of view on the same point.

Do you also argue against virtualization because one day you might need all the power of one physical machine for a single server?

Don't get me wrong, I am not saying that they should stop working on new technology. I am simply saying that we don't need to find a way to make the baseline CPUs outperform the current top-level CPUs. At the low end, 4 cores is fine. We don't need 16 cores to be the standard for a low-end computer in 2017 while trying to get 32-64 cores to gamers. You are looking more at there needing to be a split like there is for desktop vs. laptop vs. server, with the average PC added in as a separate line of CPUs.

The reason people are at 3-5 years for upgrades has nothing to do with CPU speeds slowing down. People go 3-5 years because they aren't maxing out their PCs anymore, so they aren't pushed to upgrade every 1-2 years.
 
The reason people are at 3-5 years for upgrades has nothing to do with CPU speeds slowing down. People go 3-5 years because they aren't maxing out their PCs anymore, so they aren't pushed to upgrade every 1-2 years.

No, the reason I still have people at the office using 5-year-old laptops is because the new laptops were only 5-10% faster.
There is no reason to spend $1,500 for a CPU that is only 5% faster.
 
The real problem with this is: why? I can understand it at the enthusiast line. But if you exclude people doing high-end gaming or running specialized programs with heavy CPU needs, why does anyone else really need this? Your normal office drone is fine with today's computers for running some spreadsheets and surfing the web; memory is probably an issue long before they max out the CPU. Most home users don't need 32 cores, as again they are surfing the internet or doing basic things. It used to be that about every 2-3 years a computer needed to be replaced because there was something much better out there that made the change worthwhile for everyone. Now, unless it is dead, most people probably have no need to change out their computer, and when they do they don't need to buy anything extreme CPU-wise.

Try opening 20 tabs in Chrome, each running some type of active animation.

Or more specifically in my case, try compiling 500,000 lines of source code in release mode, which takes 2+ hours on i7s.

Then there's ripping DVDs or Blu-rays at the highest quality settings. Let's not forget re-encodes on Plex server streams.

Then there's rendering software.

Database servers, etc., etc. Entity Framework can be slow; I've seen queries to Entity take over a second per query sometimes. (They are complex queries on extremely large datasets, but when you're running web apps and you get concurrent hits...)

Go [H]ard or go home.
 
By the time I am ready to update my Skylake-based system Ryzen just might be the AMD architecture that can finally lure me away from my last 15 years of expensive Intel builds.


Try paying $600 for a PII-450MHz back in the day. And you think $330 is expensive?
 
What CPUs need is higher clocks. As others have mentioned, not every operation can be multithreaded, but all software benefits from higher clocks, multithreaded or not.
 
Try paying $600 for a PII-450MHz back in the day. And you think $330 is expensive?

But in reverse: back then you could get a Celeron 300A and overclock it to better performance for a lot cheaper. I have no recollection of the price, though I remember running 2 of those for around the price of a PII-450MHz, but I might be wrong.
 
Try paying $600 for a PII-450MHz back in the day. And you think $330 is expensive?
I paid closer to $900 for a PII-400 back in 1998.
Throw in a Canopus Spectra 2500, Canopus Pure3D II, miro DC30, and 3x 9GB Seagate Cheetahs and I could buy a sweet X99 dual TitanXP setup now.
 
Poor comparison though.

NetBurst failed not because it was a single-core design (they could have easily added more cores) but because its very long pipeline resulted in a ton of branch-misprediction stalls, dropping the effective IPC into the toilet, to the point where even the relatively high clocks it achieved were unable to overcome the deficit.

I thought this is why it failed:

[attached image: hot-computer2.jpg]
 
If more people had CAT6 in every room, I think a single powerful desktop that integrates with their smart home, with "dummy" devices everywhere else linked to it, would make sense as a home for a powerful processor. That way you don't need multiple consoles or cable boxes or home camera hubs and so forth; just have one powerful desktop that runs the TV in one room and the music server for another, transcoding for tablets while streaming a video game to the living room.

But otherwise, who needs that much processing power?

What are they using it for?
 
If more people had CAT6 in every room, I think a single powerful desktop that integrates with their smart home, with "dummy" devices everywhere else linked to it, would make sense as a home for a powerful processor. That way you don't need multiple consoles or cable boxes or home camera hubs and so forth; just have one powerful desktop that runs the TV in one room and the music server for another, transcoding for tablets while streaming a video game to the living room.

But otherwise, who needs that much processing power?

What are they using it for?

This is today's thinking with millennials... basically, enough to get the job done is enough. The guys at ExtremeSystems are rolling around in their virtual graves. :(
 
But in reverse: back then you could get a Celeron 300A and overclock it to better performance for a lot cheaper. I have no recollection of the price, though I remember running 2 of those for around the price of a PII-450MHz, but I might be wrong.

300A @ 464-504 was great for gaming, even with the smaller 128k cache vs the P2's 512k. Just like today with cores, cpu speed outweighs threading/cores most of the time. Things never change lol.
 
I 'think' the secret sauce to making "desktop processors exciting again" is making people feel like they are getting something for nothing. AMD is on the right page making their entire line of Ryzen CPUs overclockable. My first unshared CPU that was all my own was a K6 266 that I managed to OC all the way to 400 for almost 6 years, until a power surge killed it.
 
Agreed. I think Intel really dropped the ball with too much focus on cores and silly stuff, along with too many variants (some non-overclockable) and pricing. The old Intel was just better for the enthusiast, if you ask me. Maybe not so much for business, though.
 
I 'think' the secret sauce to making "desktop processors exciting again" is making people feel like they are getting something for nothing. AMD is on the right page making their entire line of Ryzen CPUs overclockable. My first unshared CPU that was all my own was a K6 266 that I managed to OC all the way to 400 for almost 6 years, until a power surge killed it.


Maybe you are on to something. They could make CPUs that have working but un-utilized features that would get turned on by moving some small resistors. Remember how excited people were to do that stuff back in the day? It would have to be undocumented, of course, so that people think they are making out on the deal. But it could be like a multiplier bump or a frequency bump, or maybe unlocking additional pipelines to increase IPC, or even a whole 'nother core.

Or maybe shorting a pin or something on the motherboard. You know, something that people "shouldn't" be doing, that is easy to do, but has just enough fear of damage that you feel risky doing it.
 
What CPUs need is higher clocks. As others have mentioned, not every operation can be multithreaded, but all software benefits from higher clocks, multithreaded or not.

When companies strive for higher clocks it tends to come at the expense of IPC, and we get things like NetBurst and Bulldozer, with promises of super-high clock speeds (10GHz in 10 years, anyone?) to make up the difference in IPC.

What we need is more equivalent work executed per unit of time. That's getting harder to achieve with how processors are currently designed and manufactured, which is why Intel abandoned the tick-tock strategy.
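That "work per unit of time" framing boils down to performance ≈ IPC × clock; a toy comparison with made-up numbers (not real product figures):

```python
def throughput_gips(ipc, ghz):
    # billions of instructions retired per second = IPC x clock
    return ipc * ghz

# made-up illustrative numbers, not measured product figures:
deep_pipeline = throughput_gips(0.7, 3.8)   # high clock, low IPC
wide_core = throughput_gips(1.5, 2.4)       # lower clock, higher IPC

# clock the deep-pipeline design would need just to match the wide core
breakeven_ghz = wide_core / 0.7
print(deep_pipeline, wide_core, round(breakeven_ghz, 2))
```

On these assumed figures, the deep-pipeline design would need over 5GHz just to match the wider core running at 2.4GHz, which is the trade-off both NetBurst and Bulldozer ran into.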
 