AMD CTO Mark Papermaster: More Cores Coming in the 'Era of a Slowed Moore's Law'

He says this knowing full well that, outside of some rendering and encoding apps most people never use, practical use cases for anything beyond 6 cores are VERY limited :p
I remember people saying that we wouldn't need quad cores and 2 cores was all that was needed for gaming. Now quad cores are not enough and I'm pretty sure it's going to keep going that way. I'll get back to your comment in 4-5 years when 6 cores aren't enough.
 
I remember people saying that we wouldn't need quad cores and 2 cores was all that was needed for gaming. Now quad cores are not enough and I'm pretty sure it's going to keep going that way. I'll get back to your comment in 4-5 years when 6 cores aren't enough.

I am a little torn on this issue.

I too probably wouldn't go with fewer than 8 cores today, just to have a little future-proofing, but as it stands right now quads do great for most titles and some benefit from hexacores.

The exceptions are for streaming purposes where having an extra core or two for all the shit running in the background is helpful. That's not something I spend a whole lot of time thinking about though, as overall I find streaming to be kind of dumb.

All that said, I'm really not expecting huge leaps in multithreading going forward. Multithreading is easy on tasks which can be broken into smaller portions that all need the same thing done to them over and over again. Examples being rendering, encoding, etc.

The problems arise when you have interdependencies and time states, like a game engine does. In loads like these, when you try to multithread them you usually just get thread locks, and either crashes or slowdowns due to one thread waiting for another to finish before it can move on.
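
To make that concrete, here's a rough C++ sketch (made-up workloads, not from any real engine or encoder). The encode-style loop splits cleanly across cores because the chunks don't care about each other; the simulation-style loop can't, because each tick needs the previous tick's result.

Code:
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Embarrassingly parallel: every frame gets the same independent transform,
// so the work divides cleanly across however many cores are available.
void encode_frames(std::vector<int>& frames) {
    const std::size_t n_threads = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (frames.size() + n_threads - 1) / n_threads;
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < n_threads; ++t) {
        workers.emplace_back([&frames, t, chunk] {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(begin + chunk, frames.size());
            for (std::size_t i = begin; i < end; ++i)
                frames[i] *= 2;          // stand-in for per-frame encode work
        });
    }
    for (auto& w : workers) w.join();
}

// Time-stepped "game logic": tick N needs the result of tick N-1,
// so no number of threads lets the ticks run side by side.
int simulate(int state, int ticks) {
    for (int i = 0; i < ticks; ++i)
        state = state * 3 + 1;           // stand-in for "advance the world one step"
    return state;
}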

In fact, if you are a student of computer science, one of the things that is taught is that a very large proportion of code CANNOT be multithreaded. The underlying logic of the task that needs to be completed just doesn't support it.

This is why most gaming applications don't use true multithreading. Instead they split off various parts of the game engine into independent, but still single-threaded, tasks. This allows you to offload work from the main game engine thread onto some of the other cores, but it is not true multithreading that can just spread out over all the cores.

This is why for gaming loads you usually see something like this:

  • One main game engine thread that pins, or heavily loads one core on the CPU. This thread handles all the game engine logic and time states and really cannot be multithreaded at all. Historically everything went into this one thread which is why old games tend to pin one core and touch nothing else, but over time more and more subparts have been able to be broken out of it to take advantage of additional cores.
  • 2-3 cores that are still noticeably loaded, but much less than the main game engine core. These are the things the game engine devs have been able to break out: the sound processing thread, the physics thread, a thread for the in-game voice comms, etc. Some are more independent than others, but in general they can run mostly on their own and just check in with the main game thread every now and then. Some of these are more threadable than others (physics, for instance) but still difficult to fully multithread due to the time state issue.
  • The rest of the cores on the CPU (if available), which have some very light load on them: this is where the truly threadable part of the game engine spreads out to. This is usually limited to modern 3D API calls (DX, Vulkan, etc.). DX11 did some threading; DX12 does it much, much better. These are the only types of tasks in game workloads that can be truly threaded. (A rough code sketch of this layout follows below.)
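
Here's a very rough sketch of that layout in C++ (hypothetical thread names, no particular engine): one serial main loop, a couple of broken-out subsystem threads it just publishes state to, and a fan-out step for the one part that genuinely scales.

Code:
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

std::atomic<bool> running{true};

// Broken-out subsystems: mostly independent, they just poll whatever state the
// main thread publishes rather than running any of the core game logic.
void audio_thread()   { while (running) std::this_thread::sleep_for(std::chrono::milliseconds(5));  /* mix + submit audio  */ }
void physics_thread() { while (running) std::this_thread::sleep_for(std::chrono::milliseconds(8));  /* step the simulation */ }

// The genuinely threadable piece: recording draw/command lists is independent
// per chunk of the scene, so it can fan out over whatever cores are left.
void record_commands_parallel(int chunks) {
    std::vector<std::thread> recorders;
    for (int c = 0; c < chunks; ++c)
        recorders.emplace_back([c] { /* record draw calls for chunk c */ (void)c; });
    for (auto& t : recorders) t.join();
}

int main() {
    std::thread audio(audio_thread), physics(physics_thread);

    for (int frame = 0; frame < 600; ++frame) {
        // Main game thread: input, AI, scripting, world state. Inherently serial,
        // which is why this loop pins one core no matter how many you have.
        record_commands_parallel(4);   // the only part that truly spreads out
    }

    running = false;
    audio.join();
    physics.join();
}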


There are obviously some titles which still suck here, but the above more or less represents the state of the art, and the part I question is whether things are really going to improve much in this area. The main game thread is still the largest load in a game, and at its core it just cannot be threaded. No way, will never happen. The logic is too time dependent. Just cannot be done. It violates the computer science equivalent of the fundamental laws of physics.

The parts that can be truly multithreaded (DX calls etc) have by and large already been done.

This leaves one area for potential improvement, and that is continuing to whittle away at the main game engine thread. What other parts can we break off and turn into separate threads? I don't have the answer to how much can still be done here, but my sense is that all the low hanging fruit has already been tackled at this point, meaning we have already hit the point of diminishing returns. We may see some gradual improvement over time, but I'm not expecting a whole lot.
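
That intuition is basically Amdahl's law. A quick back-of-the-envelope in C++ (the 40% serial share is a made-up number, purely for illustration): once the un-threadable main-thread work dominates, adding cores flattens out fast.

Code:
#include <cstdio>

// Amdahl's law: overall speedup = 1 / (serial_fraction + parallel_fraction / cores).
double speedup(double serial_fraction, int cores) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}

int main() {
    const double serial = 0.40;   // made-up: say 40% of a frame can't be threaded
    const int core_counts[] = {2, 4, 6, 8, 12, 64};
    for (int cores : core_counts)
        std::printf("%2d cores -> %.2fx\n", cores, speedup(serial, cores));
    // Caps out at 1 / 0.40 = 2.5x no matter how many cores you add.
}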

So, all that said, we are finally at the point where six cores can show a benefit over fewer than six cores in some cases. It took a while to get here. I'm not convinced we are really going to get to the point where, if running a game alone (without a bunch of streaming and other garbage running in the background), we are going to see 8 cores have a clear benefit over 6 cores.

It may eventually come, which is why, for future-proofing, I would buy at least an 8-core if shopping today. (That said, with how cheap some of these CPUs are, and with AMD's AM4 socket compatibility, one could easily buy a 6-core chip and drop in an 8-core at relatively low expense if it gets to the point where it can't keep up.)

I just don't think the narrative in gaming and hardware circles that we have been hearing ever since AMD's ill-fated Bulldozer launched, that if "those lazy devs would just get their shit together and code the engine right, we'd see games spread out over as many cores as a system has and never see a CPU core bottleneck again, because as many cores as possible is the future!", is anywhere near accurate. Multithreading is much more difficult than the layman seems to think, and in many cases is not possible at all.

Just think about it logically. If one calculation depends on the output of the calculation that came just before it, you can never spread those two calculations over two separate cores and run them at the same time. This is a limit of logic. No innovation in code or otherwise can solve this dilemma, unless someone invents a time machine.

Which brings us back to the age old conclusion. More cores benefit some workloads (and some of them, like rendering and encoding very well). Higher clocks or better IPC (greater per core performance) benefits ALL workloads.

Where I think the true benefit lies in more cores is worrying less about background tasks. I'm still old school. When I start any game, I make sure everything that is not relevant to the game is not running at all. These days kids have all kinds of software running in the background. 12 fucking game stores, multiple chat clients, streaming/encoding apps, realtime antivirus and anti-malware software, windows update downloads, maybe even a web browser with a gazillion open tabs all running little scripts consuming CPU cycles. With more cores you reduce the chance of these things impacting your game. Or you could just shut everything that isn't strictly necessary down and not worry about it.
 
Last edited:
I am a little torn on this issue.

<snip>
So, all that said, we are finally at the point where six cores can show a benefit over fewer than six cores in some cases. It took a while to get here. I'm not convinced we are really going to get to the point where, if running a game alone (without a bunch of streaming and other garbage running in the background), we are going to see 8 cores have a clear benefit over 6 cores.
<snip>

What about physics in games? Wasn't the whole reason video cards were better at it than CPUs that they basically use tons of tiny cores? You would think we could use all of this extra power for that if nothing else. I feel like games haven't really evolved at all since the hardware-accelerated PhysX days and it's kind of sad and boring.
 
What about physics in games? Wasn't the whole reason video cards were better at it than CPUs that they basically use tons of tiny cores? You would think we could use all of this extra power for that if nothing else. I feel like games haven't really evolved at all since the hardware-accelerated PhysX days and it's kind of sad and boring.

Physics has some ability to multithread, which is why GPUs were pretty good for offloading this from the CPU, but even in those calculations there is some time dependency. Let's say you have a ragdoll effect with a body falling from a height. You need to complete the calculations for position, speed and direction of travel for the previous frame before you can calculate where it will move in the next frame.

Physics scales better with multiple threads than other time-dependent things, though, because there are many parallel calculations: many bullets flying through the air, multiple flags flapping in the wind, simultaneous calculations for different parts of the same flag, etc. You still can't completely get away from the time state problem, though.
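
Rough sketch of what I mean (toy integrator, not a real physics engine): the bodies inside one tick fan out across cores just fine, but the ticks themselves still have to run in order.

Code:
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Body { float pos = 0.0f, vel = 1.0f; };

// Within a single tick the bodies are independent of each other,
// so they can be striped across all available cores.
void step_bodies(std::vector<Body>& bodies, float dt) {
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n_threads; ++t) {
        workers.emplace_back([&bodies, t, n_threads, dt] {
            for (std::size_t i = t; i < bodies.size(); i += n_threads)
                bodies[i].pos += bodies[i].vel * dt;   // each body only touches its own state
        });
    }
    for (auto& w : workers) w.join();
}

// The time-state problem lives here: tick N needs the positions produced by
// tick N-1, so this outer loop stays serial no matter the core count.
void simulate(std::vector<Body>& bodies, int ticks, float dt) {
    for (int i = 0; i < ticks; ++i)
        step_bodies(bodies, dt);
}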
 
Last edited:
Maybe we will eventually get to the point where consumer-level processors have so many "spare" cores and so much fast memory that some kind of abstraction layer doing the work of translating single-threaded code to multithreaded becomes feasible without a huge performance penalty.

I guess it makes more sense to have this done at the API level than per game engine or game (or OS, heaven forbid), and I am not up on DX12 and Vulkan, so maybe this is already being done to an extent.

Still, kind of a laughable comment from Papermaster, not quite Nvidia levels of marketing spin but close. Diminishing returns on more cores is pretty obviously going to be reached for gaming before it will be on IPC.
 
I am a little torn on this issue.

<snip>

I am SW dev, and I approve this message. This gets it just about 100% correct.
 
More cores now seem to make a difference in Photoshop tho, previously the domain of Intel and fast chips.

https://www.pugetsystems.com/labs/a...9th-Gen-Intel-X-series-1529/#BenchmarkResults

ps1.png
 
I chose a Wolfdale over the Q6600 back in the day. Same reason I chose the i5 over the i7 (yes, I know it's threads over cores) in my last purchase as well.
It's hard for me to 'overbuy' on cores or threads unless I see it as an absolute necessity.

Considering I only really follow Blizzard or Source games, I'm still not in a 100% rush to upgrade just yet.

RE: Photoshop - AMD Ryzen 3000-series chips are clocked to the gills off-the-shelf. Great for end-users, less so for overclockers. Notice the big split between 3000-series and 2000-series?
The cheapest 6-core 3600 beating a 16-core 2000-series Threadripper is a testament to that.
 
Future proofing. Sandy Bridge and Ivy Bridge hex core buyers for instance. 7-8 years on and those machines are arguably still adequate for a gamer. How long until they aren't viable for the non-gamers in the household? Another 7-8 years?
 
This is more about how much Zen 2 improved than about high core counts.

2700X vs 3800X, same core count, 26% perf boost.

3800X 8 core vs 3900X, 12 core. 50% more cores. 1.3% performance gain. It's really leveraging those extra cores. ;)

I see your point. It just seems that, at least in PS, core speed alone ain't the biggest thing anymore.

ps2.png
 
More cores now seem to make a difference in Photoshop tho, previously the domain of Intel and fast chips.

https://www.pugetsystems.com/labs/a...9th-Gen-Intel-X-series-1529/#BenchmarkResults

View attachment 205052


This makes perfect sense to me. Image processing seems like a perfect example of the type of thing that can benefit very nicely from multithreading.

That said, I really don't understand what people are doing in Photoshop that requires this type of processing power to the point where it makes a practical difference. My experience with image editing has been that any CPU manufactured since the '90s has more than enough power to do anything you'd need Photoshop for in a matter of seconds.

Video I understand. Hours of high resolution content at a minimum of 24 frames per second is a lot of data to tear through. Pictures though? Even really high resolution ones from some of the top cameras in the world should easily be slain by any CPU on the market.

Is it just large batch jobs of thousands of images that are the problem?
 
This makes perfect sense to me. Image processing seems like a perfect example of the type of thing that can benefit very nicely from multithreading.

That said, I really don't understand what people are doing in Photoshop that requires this type of processing power to the point where it makes a practical difference. My experience with image editing has been that any CPU manufactured since the '90s has more than enough power to do anything you'd need Photoshop for in a matter of seconds.

Video I understand. Hours of high resolution content at a minimum of 24 frames per second is a lot of data to tear through. Pictures though? Even really high resolution ones from some of the top cameras in the world should easily be slain by any CPU on the market.

Is it just large batch jobs of thousands of images that are the problem?

Batch scripts for sure. And applying various filters on 50MB+ images, etc. Even browsing large catalogs of images can tax things if the images are large enough, especially in PS's cousin, Lightroom.
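
Which fits the pattern from earlier in the thread: a filter pass is mostly per-pixel work with no ordering between rows or tiles, so it scales with cores. A toy sketch (simple gain filter, illustrative only, nothing to do with how Photoshop is actually written):

Code:
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Toy "filter" over a grayscale image: rows don't depend on each other,
// so they can be handed out to as many cores as are available.
void apply_gain(std::vector<float>& pixels, int width, int height, float gain) {
    const int n_threads = static_cast<int>(std::max(1u, std::thread::hardware_concurrency()));
    const int rows_per_thread = (height + n_threads - 1) / n_threads;
    std::vector<std::thread> workers;
    for (int t = 0; t < n_threads; ++t) {
        workers.emplace_back([&pixels, width, height, gain, t, rows_per_thread] {
            const int first = t * rows_per_thread;
            const int last = std::min(height, first + rows_per_thread);
            for (int y = first; y < last; ++y)
                for (int x = 0; x < width; ++x)
                    pixels[static_cast<std::size_t>(y) * width + x] *= gain;  // no cross-row dependency
        });
    }
    for (auto& w : workers) w.join();
}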
 
Future proofing. Sandy Bridge and Ivy Bridge hex core buyers for instance. 7-8 years on and those machines are arguably still adequate for a gamer. How long until they aren't viable for the non-gamers in the household? Another 7-8 years?
Sandy Bridge CPUs are struggling to keep up with the current top AAA game titles, and it's not due to lack of cores.

I have a 2600K overclocked to 4.5GHz. I play Battlefield V a lot. At 1080p, I can barely get 60-80 fps using an Nvidia 2070, whereas people with better CPUs and the same GPU can push 120+ at the same res. I can crank the graphics to the highest settings to help with the bottlenecking, but it still will only show 25-50% usage. The game is still playable, but only just barely, and I do experience quite a few slowdowns.

For everything non-gaming related, they are excellent CPUs that will last for quite some time. For low-end games like Fortnite or Minecraft, they will still be relevant for quite some time. For the high end, they are seeing their last days of relevancy.
 
For those who care (and have a copy of PS), Puget's PS bench is available free here: https://www.pugetsystems.com/labs/articles/Puget-Systems-Adobe-Photoshop-CC-Benchmark-1132/


I haven't used any Adobe products at home in ages. I will not subscribe to software. No way, no how.

I have an old version of Nikon's Capture NX2 software that I bought a license for almost a decade ago for about $100, and it still does the trick for my RAW camera processing and small photo editing jobs. Everything else I use Gimp for. Adobe offers me nothing that warrants their crazy pricing.

I'd buy a one-time Photoshop license with 5 years of free upgrades if it cost me less than $100. That's what I think it's worth, not $20.99 a month for eternity.
 
Sandy Bridge CPUs are struggling to keep up with the current top AAA game titles, and it's not due to lack of cores.

I have a 2600K overclocked to 4.5GHz. I play Battlefield V a lot. At 1080p, I can barely get 60-80 fps using an Nvidia 2070, whereas people with better CPUs and the same GPU can push 120+ at the same res. I can crank the graphics to the highest settings to help with the bottlenecking, but it still will only show 25-50% usage. The game is still playable, but only just barely, and I do experience quite a few slowdowns.

For everything non-gaming related, they are excellent CPUs that will last for quite some time. For low-end games like Fortnite or Minecraft, they will still be relevant for quite some time. For the high end, they are seeing their last days of relevancy.


My 3930k is still viable for me, but that is because I am at 4k in games, so I was never going to be seeing frame rates much above 60fps anyway...
 
It's funny, because being an enthusiast is a lot about balls-to-the-wall performance, as much as possible out of any and all parts, overkill builds.

Now that AMD has an advantage in design and core counts, we should be careful about "paying for extra cores"? More cores bring more performance in the majority of workloads; this is a fact. What kind of enthusiast refuses more performance?
 
I am SW dev, and I approve this message. This gets it just about 100% correct.
I am an ex-SW dev and current data guy, and I (sadly) approve only the portion of this message that highlights that we need to change how we educate our brilliant young minds. Stop teaching them single-threaded first and über alles. Teach the teachers. Help them help the next generation. In high school, the word "thread" didn't exist. In uni, we were taught single-threaded until third year. That is simply not right.

<3

Edit: P.S. The work I do now, the more cores the better. Our little setup here runs on 72 cores and we would like more. We're soon migrating to *jazz hands* "the cloud" and all indications point to a nice increase in performance.

Edit 2:
I have a 2600K overclocked to 4.5GHz. I play Battlefield V a lot. At 1080p, I can barely get 60-80 fps using an Nvidia 2070
I have an i7-6700(non-K) and an RX470 and while I run BF1 at 60FPS+ at 1080p no probleemo, I struggle with Cities:Skylines. :-( I want a 3950X...
 
Last edited:
It's funny, because being an enthusiast is a lot about balls-to-the-wall performance, as much as possible out of any and all parts, overkill builds.

Now that AMD has an advantage in design and core counts, we should be careful about "paying for extra cores"? More cores bring more performance in the majority of workloads; this is a fact. What kind of enthusiast refuses more performance?

You should clarify that to "more cores at the same IPC and clock speed bring more performance"; then I would agree with you. It's rarely the case, though, that there is no trade-off.

I went 9900KF over a 3900x since the extra cores of a 3900x would not be utilized and I'd be sacrificing single thread for nothing.
 
My 3930k is still viable for me, but that is because I am at 4k in games, so I was never going to be seeing frame rates much above 60fps anyway...
Yes. But not much longer as I said. It likely won't be the "another 7-8 years" like the person I responded to suggested. Ivy Bridge cores got another 5% on Sandy's, so that will help.
I have an i7-6700(non-K) and an RX470 and while I run BF1 at 60FPS+ at 1080p no probleemo, I struggle with Cities:Skylines. :-( I want a 3950X...
BFV is not BF1. As with every Battlefield game on the Frostbite engine, DICE seems obsessed with evolving it more than necessary. With BFV, I'm not sure whether that is due to actual improvements, sugar-coating it with more graphics, or their usual code bloat for minor features that often results in even more bugs than before. It seems that it's not a whole helluva lot different from BF1 or even BF4 when left in DX11 mode. Of course, DX12 is a whole other animal and I will leave it out of this comparison.

I've experienced this drop off in performance with every one of them since Bad Company 2. The least dramatic performance drop off so far was from BF4 to BF1.
 
Last edited:
You should clarify that to "more cores at the same IPC and clock speed bring more performance"; then I would agree with you. It's rarely the case, though, that there is no trade-off.

I went 9900KF over a 3900x since the extra cores of a 3900x would not be utilized and I'd be sacrificing single thread for nothing.

Interesting. I just ordered a 3900x after debating the same thing in my head. My current CPU is 4 years old now, so I based my decision on what I think might run games ported from PS5/XBone 2 better, a few years from now. Not what benefits me the most today.

Of course, I'm just guessing. Not sure how that is going to turn out, but it is nice to finally have some competitive options in CPUs.
 
Interesting. I just ordered a 3900x after debating the same thing in my head. My current CPU is 4 years old now, so I based my decision on what I think might run games ported from PS5/XBone 2 better, a few years from now. Not what benefits me the most today.

Of course, I'm just guessing. Not sure how that is going to turn out, but it is nice to finally have some competitive options in CPUs.

I have no problem buying another cpu in 3-4 years if something is worth it (doubtful).

At the end of the day, both the 9900K and the 3900X are great processors. I have two AMD rigs and one Intel. I wanted Intel for my VR rig due to slightly higher IPC and minimums. If it drops below 90Hz I get motion sickness. Also, the Vive wireless adapter had some compatibility issues with the 2000-series Ryzen. Not sure if that's been addressed or not. I also had weird spikes in my frametimes with my 2700X. I think it was related to the USB controller. I gave the Vive the USB 3.1 straight to the CPU and it resolved it mostly... with my 9900K I have no spikes at all.

One thing is for sure - AMD did a great job competing with Intel and the Threadripper platform is very impressive. It tops the charts in a lot of games.
 
Sandy Bridge CPUs are struggling to keep up with the current top AAA game titles, and it's not due to lack of cores.

I have a 2600K overclocked to 4.5GHz. I play Battlefield V a lot. At 1080p, I can barely get 60-80 fps using an Nvidia 2070, whereas people with better CPUs and the same GPU can push 120+ at the same res. I can crank the graphics to the highest settings to help with the bottlenecking, but it still will only show 25-50% usage. The game is still playable, but only just barely, and I do experience quite a few slowdowns.

For everything non-gaming related, they are excellent CPUs that will last for quite some time. For low-end games like Fortnite or Minecraft, they will still be relevant for quite some time. For the high end, they are seeing their last days of relevancy.

Spot on.

Sandy bottlenecks on high-end GPUs/games. Absolutely needs more single-core performance. In well-optimized games (think Vulkan) it still does well at 1080p @ 60fps, e.g. Doom (2016).
 
Only looking at games is kind of myopic.

Higher core counts are GREAT if you're doing more than one thing at once. My work PC is an i7-8700, and 12 threads means I can run SQL Server on my own desktop instead of needing a dedicated machine. But I can also have Visual Studio, several non-MSSQL database servers, any number of browser tabs, and so on, all open at once.
 
I have no problem buying another cpu in 3-4 years if something is worth it (doubtful).

At the end of the day, both the 9900K and the 3900X are great processors. I have two AMD rigs and one Intel. I wanted Intel for my VR rig due to slightly higher IPC and minimums. If it drops below 90Hz I get motion sickness. Also, the Vive wireless adapter had some compatibility issues with the 2000-series Ryzen. Not sure if that's been addressed or not. I also had weird spikes in my frametimes with my 2700X. I think it was related to the USB controller. I gave the Vive the USB 3.1 straight to the CPU and it resolved it mostly... with my 9900K I have no spikes at all.

One thing is for sure - AMD did a great job competing with Intel and the Threadripper platform is very impressive. It tops the charts in a lot of games.

The 9900K is a beast, no doubt about it. Built a 9900K/2080 Gaming X rig for my buddy. Easily outpaces my 6700K/1080Ti. I feel bad for people that get motion sick in VR. I am one of the few that never has an issue, no matter what game or framerate. (conversely and ironically, I can't handle amusement park rides at all). That must be awful.

The one thing that ticks me off with the AMD platform is the fans on the X570 boards. Ordered an X470 Gaming 7 instead, because I've never had a small fan that still works well after a few years.
 
I am a little torn on this issue.

<snip>

This leaves one area for potential improvement, and that is continuing to whittle away at the main game engine thread. What other parts can we break off and turn into separate threads? I don't have the answer to how much can still be done here, but my sense is that all the low hanging fruit has already been tackled at this point, meaning we have already hit the point of diminishing returns. We may see some gradual improvement over time, but I'm not expecting a whole lot.

<snip>

Today, yes; tomorrow, no. It will not remain this way, and hardware has to lead the way.
 
Today, yes; tomorrow, no. It will not remain this way, and hardware has to lead the way.

Did you read what I wrote?

There are many tasks which CANNOT be multithreaded no matter how badly you want them to be. This represents a pretty sizeable percentage of all code. There is an argument to be made that everything that can easily be multithreaded has already been done, and that we have entered the period of diminishing returns. The passage of time will not change this, no matter how much hardware leads the way. Things will improve slightly, but it will likely be marginal.
 
Did you read what I wrote?

There are many tasks which CANNOT be multithreaded no matter how badly you want them to be. This represents a pretty sizeable percentage of all code. There is an argument to be made that everything that can easily be multithreaded has already been done, and that we have entered the period of diminishing returns. The passage of time will not change this, no matter how much hardware leads the way. Things will improve slightly, but it will likely be marginal.

Cannot today; tomorrow is different. Things will not remain the same. We will not be using single-core programming in 2119, or heck, even 2039; things will change. You need to think outside the box, and hardware needs to lead the way. I have read your post and many like it in the past, and they all tend to assume nothing will change and that programming will always remain the same, which is not going to be the case.

Oh, and a sizeable percentage of present day code.
 
Cannot today; tomorrow is different. Things will not remain the same. We will not be using single-core programming in 2119, or heck, even 2039; things will change. You need to think outside the box, and hardware needs to lead the way. I have read your post and many like it in the past, and they all tend to assume nothing will change and that programming will always remain the same, which is not going to be the case.

Oh, and a sizeable percentage of present day code.

If C needs the output of B to run, and B needs the output of A to run, there is no way you can ever run these three at the same time.

Absolutely no change to computer technology, programming techniques or languages can change this fact. It is a limit of logic.
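
To put it in code form (toy functions named just to mirror the example): the data dependency itself fixes the order, and no threading construct changes that.

Code:
// Each call needs the previous call's result, so the three steps can only ever
// run one after another. Extra cores just give you more places to wait.
int stepA()      { return 1; }
int stepB(int a) { return a + 1; }
int stepC(int b) { return b * 2; }

int run_chain() {
    const int a = stepA();
    const int b = stepB(a);   // cannot start until stepA() has finished
    const int c = stepC(b);   // cannot start until stepB() has finished
    return c;
}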

Unless you believe in magic...

1395107690-0~2.jpg
 
If C needs the output of B to run, and B needs the output of A to run, there is no way you can ever run these three at the same time.

Absolutely no change to computer technology, programming techniques or languages can change this fact. It is a limit of logic.

The only thing that could ever change this is magic.

View attachment 205129

I do not agree with that at all, and that mindset appears to be extremely short-sighted. Might as well just say that quantum computing is impossible because, you know, we do not have it yet.
 
Even quantum computing does not support time travel.

LOL! :D ;) Look, things are going to change, that is just the facts; computing and programming will change as well. However, if hardware does not lead the way, why bother? Which is why hardware must lead the way.
 