AMD Ryzen 9 3000 is a 16-core Socket AM4 Beast

I guess with 10-core parts on the horizon, AMD has no choice but to whip out the big guns at launch rather than wait for a refresh to do 16 cores. At least the 16-core 135 W TDP doesn't look too crazy when you realize how many watts the 9900K draws.

Give me the 8-core version at slightly lower boost clocks, and you have a deal. The number of applications scaling efficiently beyond 16 threads is still pretty small, and will take some time to grow.

I'll take the best performance for my dollar TODAY, please :D
With the CCX being 8 cores now instead of 4, I think the sweet spot will be the 8-core version. No need to worry about Windows putting related threads on different CCX blocks.
 
With the CCX being 8 cores now instead of 4, I think the sweet spot will be the 8-core version. No need to worry about Windows putting related threads on different CCX blocks.
The CCX is still 4 cores, and there are still two per chiplet (8 cores total), but the chiplets are smaller, so you can have twice as many. Epyc Rome has 8 CPU chiplets, whereas the previous gen had four CPU dies.
 
LOL! Keeping everything the same as it always was? Not going to happen, regardless of what you want to think or believe. You see, I am not an either/or type of person; I understand that things need to change, even if it will not be today. AMD also understands the need for change, which is one of the reasons they are at the forefront of those changes. If no changes were needed, why has Intel been in full-on panic mode ever since AMD laid the hammer down in March 2017? Oh well, I prefer not having quad-core refresh after quad-core refresh after quad-core refresh...... :)
You are arguing that the laws of computer science must change. Might as well ask the sun not to Ryzen... ;)

There is no magic compiler flag that will suddenly make all software benefit from more cores. The last major attempt at that was EPIC / Intel Itanium.

The next big gains in performance will come from driving memory and storage closer to the compute resources. I have no idea if 3D XPoint is the winning product, but I think it is a glimpse of the future.
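
To put a number on the "laws of computer science" point: here is a back-of-the-envelope Python sketch of Amdahl's law. The parallel fractions are illustrative assumptions, not measurements of any particular program.

Code:
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work and n is the core count.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):       # assumed parallel fractions
    for n in (4, 8, 16):           # core counts
        print(f"p={p:.2f}, {n:2d} cores -> {amdahl_speedup(p, n):5.2f}x")

# Even at p=0.90, 16 cores only buy about 6.4x; the serial 10% dominates.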
 
You are arguing that the laws of computer science must change. Might as well ask the sun not to Ryzen... ;)
[...]
The next big gains in performance will come from driving memory and storage closer to the compute resources. I have no idea if 3D XPoint is the winning product, but I think it is a glimpse of the future.

There will come a time, and it is probably already happening in one fashion or another, when things will be built again from the ground up. The foundation of what computer science is today is going to change, and it must: single-core computing is a complete dead end. In fact, nowadays it holds us back from the possibilities of what could be.

As I said, living in the past and remaining there is just not going to happen; things will move forward. RAM and storage tech have a long, long way to go as well, and in fact pricing will play a big part in that.
 
There will come a time, and it is probably already happening in one fashion or another, when things will be built again from the ground up. [...]
As I said, living in the past and remaining there is just not going to happen; things will move forward. RAM and storage tech have a long, long way to go as well, and in fact pricing will play a big part in that.

Certain tasks, like games, are going to have processes that are not scalable. That is just a fact. The office and entertainment requirements of most home users can be met by a modern mid-range Android tablet, let alone a mid-range laptop. That leaves gaming as one of the primary drivers of high-end sales to home users, and that means a continued focus on single-core performance.

Intel is in full panic mode not because of AMD, but primarily because their 10 nm process is not performing as expected. They should have had 10 nm at least two years ago, and had it panned out, it would have kept them more than competitive with AMD even with all the recent security vulnerabilities. There is only so much they can extract out of their 14 nm process without a ground-up CPU redesign. Had AMD not put out a competitive product, Intel would still be scrambling, because they need to convince people to keep upgrading. Instead of just being in the frying pan with 10 nm troubles, they're in the fire with concurrent 10 nm troubles and AMD competition. Bad either way, just slightly worse because of AMD.
 
OMFG the amount of FUD in this thread.

Parallel computing/threading is very useful for (est >60% multithreaded code) - Class 1:
  • Rendering (in CAD)
  • 3D design (Maya, 3ds Max, etc.)
  • Compressing videos
  • Rendering video (in NLE suites, including filters)
  • Heavy numerical analysis in Matlab and Excel (Monte Carlo analysis, for instance)
  • Scientific computing
  • Simulation (including stress analysis)
  • Compiling software (variable though; depends on the compilation target, platform, and compiler)
  • Heavy multitasking (doing any of the above while gaming, or doing several items from the "Class 2" list below at the same time)
Parallel computing/threading is somewhat useful for (est 40-60% multithreaded code) - Class 2:
  • Gaming (6-8 threads typically sufficient; comparatively light loads on secondary threads, usually non-critical background computation/physics)
  • Design work (CAD) - note: a heavy, predominantly single-threaded workload
  • Emulation
  • Image editing (filters) where neural chips/DSPs aren't available
  • Running concurrent tasks on the same machine while doing other stuff (e.g., compressing a video while web browsing)
Parallel computing/threading is minimally useful for (est 0-40% multithreaded code) - Class 3:
  • Desktop publishing (Word/InDesign)
  • Light image editing
  • Web browsing
  • Basic-to-moderate Excel work
  • PowerPoint
  • Notepad
  • Circuit design in KiCad
  • Photoshop/drawing work

If 50% or more of your time with a computer falls in Class 1: more cores = better - 32 GB or more of RAM
If 30-50% of your time falls in Class 1 and the rest in Class 2/3: 8 cores is sufficient (e.g. i7-9700K/i9-9900K/Ryzen 2700X) - 16/32 GB of RAM
If 20-30% of your time falls in Class 1 and the rest in Class 2/3: 6 cores will be enough (e.g. i5-9600K/i7-8700K/Ryzen 2600X) - 16 GB of RAM
If 0-20% of your time falls in Class 1 and the rest in Class 2/3: 4 cores will be enough - 8/16 GB of RAM

You'll notice I said "time with a computer", not "core task".

The point I'm trying to get across is that the vast majority of people do not spend 50% or more of their time on Class 1 tasks, so the "more COARS is better, mkay!" mantra does not fly. I would hazard a guess that the majority of people fall in the 20-30% group, at best. Even with things like "VISC", the bottleneck is no longer the computer; it's the user.

I am a heavy user of CAD, NLE, compiling, and running things at the same time, and as much as I like to delude myself and buy the fastest stuff, I typically fall into the 20-30% group.
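
To see why I land there, here's a toy Python estimate that weights Amdahl's-law speedups by the usage split above. Every fraction in it is an illustrative guess, not a benchmark.

Code:
# Weight Amdahl's-law speedups by how a user's time splits across the
# three classes above. All fractions here are illustrative guesses.
def amdahl(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

usage = {"class1": 0.25, "class2": 0.35, "class3": 0.40}  # hypothetical split
par   = {"class1": 0.80, "class2": 0.50, "class3": 0.20}  # assumed fractions

for cores in (4, 6, 8, 12, 16):
    # Each class's share of baseline time shrinks by its own speedup.
    time_left = sum(share / amdahl(par[c], cores) for c, share in usage.items())
    print(f"{cores:2d} cores -> {1.0 / time_left:4.2f}x overall")

# The overall gain flattens out fast once Class 2/3 time dominates.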
 
I am more interested in IPC gains than in more cores past 8. You can put 30 slow cores on a die, but most applications are single-threaded. Anyone who primarily uses their computer for games should be more interested in IPC too. 8 cores is enough for games; IPC is where the improvement needs to happen.

This sounds more and more as if it is coming from the same people who were spamming the crap out of the AMD CPU forum (around the time of Bulldozer), when they said the i3 was the answer to everything.

All of the improvement in gaming these days comes from better multithreaded code rather than from IPC improvements.
To put this in perspective, check this thread:
https://hardforum.com/threads/fx-8320-still-doesnt-suck.1981613/#post-1044196210

Yep, that is the cold hard truth: improvements in APIs are to blame for this. If only we could jump back and have every game just scale off a single CPU core :).

Games are limited by draw calls, or by the way they're structured in terms of how much time is spent in the game engine. So far we have seen limited progression in this area; games stay about the same even though the API can handle way more than it is doing now. The Star Swarm demo from Oxide's Nitrous engine showed this back in 2013.

If the footprint of the game engine stays the same, let's say due to a lack of cores in customers' computers, then there is little incentive to build engines for people with higher-core-count machines. That is why certain applications thrive on higher core counts while others are simply made for the lowest common denominator (gaming).

This is in no way a hard limit for multi-core machines, since this was already shown on Mantle in 2013:
https://en.wikipedia.org/wiki/Mantle_(API)
CPU-bound scenarios:
With a basic implementation, Mantle was designed to improve performance in scenarios where the CPU is the limiting factor:
  • Low-overhead validation and processing of API commands[8][9]
  • Explicit command buffer control[8]
  • Close to linear performance scaling from reordering command buffers onto multiple CPU cores[8]
  • Reduced runtime shader compilation overhead[8]
  • AMD claims that Mantle can generate up to 9 times more draw calls per second than comparable APIs by reducing CPU overhead.[10]
  • Multithreaded parallel CPU rendering support for at least 8 cores.[11]
If these were the goals already met in 2013, does all progression suddenly stop dead in its tracks beyond 2019?

I would like to think not; there is still progress to be made.
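
The structural idea behind those Mantle bullet points is simple: let every core record its own command buffer, then submit them in order. Here is a toy Python sketch of the shape of it; the function names are made up, and Python's GIL means this only illustrates the structure, not the scaling you would get in native engine code.

Code:
from concurrent.futures import ThreadPoolExecutor

def record_command_buffer(draw_calls):
    # Stand-in for per-thread command buffer recording in a real engine.
    return [f"draw({d})" for d in draw_calls]

draws = list(range(10_000))
workers = 8
step = len(draws) // workers
chunks = [draws[i * step:(i + 1) * step] for i in range(workers)]

# Record the eight command buffers in parallel...
with ThreadPoolExecutor(max_workers=workers) as ex:
    buffers = list(ex.map(record_command_buffer, chunks))

# ...then submit them in order on a single thread.
frame = [cmd for buf in buffers for cmd in buf]
print(len(frame), "draw calls recorded across", workers, "buffers")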
 
Certain tasks, like games, are going to have processes that are not scalable. That is just a fact. [...]

Intel is in full panic mode not because of AMD, but primarily because their 10 nm process is not performing as expected. [...]


Certain tasks, like games, have processes that are not scalable. The key word is "have", not "going to have", for even games will eventually change; that is a fact. As for Intel being in full panic mode, their issues with their 10 nm process would not matter to them nearly as much if they had no threat to their dominance.
 
OMFG the amount of FUD in this thread.
[...]
I am a heavy user of CAD, NLE, compiling, and running things at the same time, and as much as I like to delude myself and buy the fastest stuff, I typically fall into the 20-30% group.

Even those Class 3 tasks will change; that is just a simple fact. Not today, but eventually they will. Also, having more cores means better multitasking for the base OS, regardless of what the applications themselves do or do not do.
 
Even those Class 3 tasks will change; that is just a simple fact. Not today, but eventually they will. Also, having more cores means better multitasking for the base OS, regardless of what the applications themselves do or do not do.

Even if they did, there is a limit to how much you can do with, for example, a text editor when it comes to multithreading. The bottleneck is still the human (and has been since the 8086 days).
 
Even those Class 3 tasks will change; that is just a simple fact. Not today, but eventually they will. Also, having more cores means better multitasking for the base OS, regardless of what the applications themselves do or do not do.

Change is good. Meanwhile, using the best tool for the job atm is good too.
 
OMFG the amount of FUD in this thread.
[...]
I am a heavy user of CAD, NLE, compiling, and running things at the same time, and as much as I like to delude myself and buy the fastest stuff, I typically fall into the 20-30% group.

I'd say that my PC spends more time compressing pictures and videos than playing games, but that's mostly because it does it while I'm not present. Does it count as "time with a computer" if I'm not there?
My PC has been doing something in the background while I play games ever since I got a 1700 lol!
So much Powah! (At least to me)
 
Just remember everything is impossible until it's not, otherwise we would never have giant leaps forward.

A variation on this meaningless drivel has been stated by the ignorant in this thread multiple times.
Previously answered here:
AMD Ryzen 9 3000 is a 16-core Socket AM4 Beast

But I will continue. What was the last "impossible" thing that was made possible?

Last I checked, the laws of gravity and thermodynamics are still in effect, the speed of light remains unbroken, there are no perpetual motion machines, and 2+2 still equals 4.

Are you expecting any of these to change someday?
 
A variation on this meaningless drivel has been stated by the ignorant in this thread multiple times. [...] What was the last "impossible" thing that was made possible?

Last I checked, the laws of gravity and thermodynamics are still in effect, the speed of light remains unbroken, there are no perpetual motion machines, and 2+2 still equals 4. Are you expecting any of these to change someday?

*Sigh* Moving on........ 16 cores, 32 threads on an AM4 platform with 64 GB of RAM would be a seriously great system, with only a Threadripper system being better.

Edit: I know, it is impossible for all this computing power to just be sitting on my desk.
 
A variation on this meaningless drivel has been stated by the ignorant in this thread multiple times. [...] What was the last "impossible" thing that was made possible?

Last I checked, the laws of gravity and thermodynamics are still in effect, the speed of light remains unbroken, there are no perpetual motion machines, and 2+2 still equals 4. Are you expecting any of these to change someday?

More cores and processing power allow us to do more things at once.
Sure, Amdahl's law is true, but only if the CPU is doing one single task... What user here is running one single process on their PC at a time??? Amdahl's law is not as relevant as you think.
We are going to get more cores regardless... but mostly because of how bloated OSes are getting lol! (XP would be lightning on a 5 GHz quad.)
Just check how many processes are running on Windows 10, not on a bare install but on a normal user's install.
 
You didn't read my entire post. You only read the parts you wanted to read, and then chose to overreact to that one part, out of context of the rest.

You're overreacting like a CHILD would.

I'm not suggesting that you ONLY need 2c/4t for the rest of all days, INCLUDING GAMING AND WORKSTATION TASKS; I'm saying that 2c/4t is a GOOD ESTIMATE OF *MAXIMUM OS BACKGROUND LOAD FOR THE NEAR FUTURE*.

SO 8 CORES / 16 THREADS SHOULD BE PLENTY FOR GAMES OR WORKSTATION TASKS, BECAUSE THAT LEAVES YOU TWO CORES FREE FOR THE OS (and all your preferred monitoring and voice chat programs) to allow for smooth multitasking, worst-case.

Six major threads is the current sweet spot for all major console ports, and we can expect that to hold for several more years.


Next time, try reading first, then posting. I'm simply saying this is not the time or the place for most gamers to jump above 16 threads, unless you're made of money. Especially since this new core design will have about 20% better performance vs the old one.


But I think we've hit the wall at 4 threads being the maximum required to multitask smoothly in Windows as a web/browser/office editor.


You didn't read your own post. Before sperging out at me and calling me a child, maybe read the post of yours that I was referring to? I ran four threads in Windows for "office editor" functions, among others, with an i3; it sucked ass with documents containing images and was not sufficient for the job. That's the point I tried to make: your assertion is incorrect in my experience.

But yes, I agree with you that gamers don't need 16+ threads. 8 is enough for now. In the future, maybe not.

Edit to add: the i3 I used was the same speed clock-wise as my 2600K at stock. The 2600K has a small OC, around 4.2 GHz or so, for stability. The biggest difference was threads (maybe cache?) and it was night and day.
 
OMFG the amount of FUD in this thread.
[...]
I am a heavy user of CAD, NLE, compiling, and running things at the same time, and as much as I like to delude myself and buy the fastest stuff, I typically fall into the 20-30% group.

Good post.

I will add that much of the Class 1 category is embarrassingly parallel, so typically well above 90% parallel in implementation, which means you get a very good speedup per core. This is where they like to focus when benchmarking new high-core-count CPUs. Cinebench is a favorite, even though almost no home users do 3D modelling, so to most people it is more like a synthetic benchmark. You also need to pay attention to the quality of GPU encoding/rendering, as big speedups are possible there regardless of core count.
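
For anyone wondering what "embarrassingly parallel" looks like in practice, here's a minimal Monte Carlo pi estimate with multiprocessing. Every worker is fully independent, which is exactly why this kind of load scales almost linearly with core count; the sample and worker counts are just for illustration.

Code:
import random
from multiprocessing import Pool

def hits(n_samples: int) -> int:
    # Count random points that land inside the unit quarter-circle.
    rng = random.Random()
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))

if __name__ == "__main__":
    total, workers = 4_000_000, 8          # pick workers = core count
    with Pool(workers) as pool:
        inside = sum(pool.map(hits, [total // workers] * workers))
    print("pi is roughly", 4.0 * inside / total)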
 
More cores and processing power allow us to do more things at once.
Sure, Amdahl's law is true, but only if the CPU is doing one single task... What user here is running one single process on their PC at a time???
We are going to get more cores regardless... but mostly because of how bloated OSes are getting lol! (XP would be lightning on a 5 GHz quad.)
Just check how many processes are running on Windows 10, not on a bare install but on a normal user's install.

I am well aware that massive numbers of processes are running, and I have pointed this out in the past to people who get the misguided idea that "we are coding for 4 cores or 8 threads today". Now look at what % of CPU all those processes are using: it's negligible. Typically not even 5% of a single core.

It's yet another ignorant claim that OS bloat requires many more cores to handle, when in reality the OS doesn't use more than a tiny fraction of one core.

CPUs are ridiculously powerful today. A smartphone has more CPU power than the multi-user (over a hundred users) supercomputer my university had in the 1980s. Some background OS work is not going to make a modern CPU struggle.
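
Anyone can check this themselves. Here is a quick sketch using the psutil package (a third-party install) that sums what every running process is actually using over a two-second window:

Code:
import time
import psutil

procs = list(psutil.process_iter())
for p in procs:
    try:
        p.cpu_percent(None)          # prime the per-process counters
    except psutil.Error:
        pass

time.sleep(2.0)                      # measurement window

usage = []
for p in procs:
    try:
        usage.append((p.cpu_percent(None), p.name()))
    except psutil.Error:
        pass

# For psutil, 100% means one fully loaded core, so totals can exceed 100.
print(f"all processes combined: {sum(u for u, _ in usage):.1f}% of one core")
for cpu, name in sorted(usage, reverse=True)[:10]:
    print(f"{cpu:6.1f}%  {name}")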
 
With the CCX being 8 cores now instead of 4, I think the sweet spot will be the 8-core version. No need to worry about Windows putting related threads on different CCX blocks.

If only there had been a solution for that for years, free and made by a forum member...


https://hardforum.com/threads/proje...booster-for-heavy-multitaskers.1858462/page-4
https://hardforum.com/threads/cs-go-9-more-fps-and-25-reudction-in-varians-for-free.1975925/
https://community.amd.com/thread/215217
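
For anyone curious what those tools boil down to: it's basically setting CPU affinity so a game's threads all stay on one CCX and share an L3. A minimal sketch with psutil, assuming a first-gen-Ryzen-style layout where logical CPUs 0-7 are CCX0 with SMT; check your actual topology before copying the mask.

Code:
import psutil

CCX0 = [0, 1, 2, 3, 4, 5, 6, 7]      # assumption: CCX0's logical CPUs

def pin_to_ccx0(pid: int) -> None:
    proc = psutil.Process(pid)
    proc.cpu_affinity(CCX0)          # restrict scheduling to one CCX
    print(proc.name(), "now pinned to", proc.cpu_affinity())

# Usage (hypothetical pid of your game):
# pin_to_ccx0(1234)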
 

Sounds like something AMD could work with Microsoft to build into Windows, or failing that, maybe the GPU makers could, since they tend to ship game profiles with their drivers.

I really can't wait for the new generation to drop. Really curious to see how the chiplet design works out.
 
I am well aware that massive numbers of processes are running, and I have pointed this out in the past to people who get the misguided idea that "we are coding for 4 cores or 8 threads today". Now look at what % of CPU all those processes are using: it's negligible. Typically not even 5% of a single core.
[...]
A faster CPU can finish thousands of tasks in a second, but it cannot complete them simultaneously. That's part of the reason you still experience microstutter on the desktop when you only have a four-thread CPU and multiple programs chugging away. The other reason is poor coding practices causing the UI to wait on the main thread.
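
The main-thread point is easy to demonstrate without any GUI toolkit. Here's a tiny sketch where a blocking call stalls the "UI" loop, while handing the same work to a thread keeps it ticking; the task is a stand-in sleep, not real work.

Code:
import threading
import time

def heavy_task():
    time.sleep(3)                    # stand-in for real work
    print("  background task done")

def event_loop(offload: bool):
    for tick in range(5):
        if tick == 1:
            if offload:
                threading.Thread(target=heavy_task).start()
            else:
                heavy_task()         # blocks: the loop freezes for 3 s
        print("ui tick", tick)
        time.sleep(0.5)

event_loop(offload=False)   # stutters at tick 1
event_loop(offload=True)    # stays responsive while the work runs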
 
A variation on this meaningless drivel has been stated by the ignorant in this thread multiple times. [...] What was the last "impossible" thing that was made possible?

Last I checked, the laws of gravity and thermodynamics are still in effect, the speed of light remains unbroken, there are no perpetual motion machines, and 2+2 still equals 4. Are you expecting any of these to change someday?

So I am guessing you haven't been in this world very long, to not notice things. Breaking the sound barrier was considered impossible, yet we did it. Splitting the atom was considered impossible, until we did it. You also notice you don't hear much about AIDS anymore, and that is mainly due to the treatments and experimental vaccines that now pretty much keep people from dying. Our limited understanding of the universe is our biggest limitation, and man-made laws only serve to limit our vision of possibilities; even Einstein and Hawking got some things wrong. The Hadron Collider produced results that broke rules, so yeah, computers are just waiting for a guy who can see outside the box to make far more use of those wasted CPU cycles.
 
A faster CPU can finish thousands of tasks in a second, but it cannot complete them simultaneously.

If you experience that (I don't), it is much more likely that you have resource contention for RAM and are swapping programs in and out of storage.

Background tasks on a modern CPU will happen simultaneously from a human perspective.
 
If you experience that (I don't), it is much more likely that you have resource contention for RAM and are swapping programs in and out of storage.

Background tasks on a modern CPU will happen simultaneously from a human perspective.
Dunno. My Sempron system exhibited it quite well with 8 GB of RAM until I switched to a multi-core Athlon. I will admit that an SSD improved things even more beyond that. I also experienced a similar issue with an Intel system that has 16 GB, but that may have been a driver issue.
 
So I am guessing you haven't been in this world very long, to not notice things. Breaking the sound barrier was considered impossible, yet we did it. Splitting the atom was considered impossible, until we did it. [...]

I think you are confusing difficulty with impossibility. For example, scientists didn't consider the sound barrier an actual barrier. Bullets had been supersonic for more than a century before airplanes went supersonic. It was just an engineering problem involving strength of materials, aerodynamics, and thrust.

Now compare that with actual scientific and mathematical laws. Unlike objects travelling faster than sound, the speed of light actually is considered a hard scientific limit. You expect that to change?

If you want to contribute to a discussion of multi-processor theory, at least develop some basic understanding of the issues.

You need to parse Amdahl's law until you really get it. It's as solid as 2+2 = 4.

All the wishful thinking and silly platitudes about the impossible becoming possible aren't going to change that.

It will hold unless there is some complete change in how we fundamentally approach computation; some possibilities might be massive AIs for everything, or quantum computing. Actually, it would still hold even then, though there might be some addendums. But I will repeat what I said before about that still-science-fiction future of computation:

That has nothing to do with arguments about current technology and core counts for home computer users.

Current and near-future reality is subject to Amdahl's law. If the day comes that it isn't a significant factor, we won't be talking about conventional CPU core counts at all.

If you buy a 16-core CPU today, it will be subject to Amdahl's law for as long as you can keep it running, regardless of whether that is 5 years or 500 years.
 
when in reality the OS doesn't use more than a tiny fraction of one core.
Some background OS work is not going to make a modern CPU struggle.

Lol!
You call me ignorant...
Windows 10 would like a word with you.... (updates, Windows Defender/other AV, OneDrive starting up, who knows what else).

My personal system doesn't have this issue, mostly because I minimize/get rid of that crap. Work systems, however...
I always marvel, when imaging a new PC with Win10, at how updates, OneDrive updates, and Windows Defender can take so much CPU power as to almost cripple a dual-core and make a quad-core i5 struggle. PCs with really fast SSDs love to have CPU power, as storage has always been far and away the most bottlenecking component for user-perceived responsiveness.
 
I think you are confusing difficulty with impossibility. For example, scientists didn't consider the sound barrier an actual barrier. [...] Unlike objects travelling faster than sound, the speed of light actually is considered a hard scientific limit. You expect that to change?
[...]
If you buy a 16-core CPU today, it will be subject to Amdahl's law for as long as you can keep it running, regardless of whether that is 5 years or 500 years.

So can you clarify for me: what, regarding computing, are you saying was and is impossible for computers? I don't understand what point you were originally trying to make.
 
So can you clarify for me: what, regarding computing, are you saying was and is impossible for computers? I don't understand what point you were originally trying to make.

I never said impossible.

I said multi-core CPUs are subject to Amdahl's Law (significant diminishing returns as more cores are added for all but Embarrassingly Parallel problems).

The "impossible becoming possible" argument comes from wishful thinkers wanting Amdahl's Law to go away.
 
OMFG the amount of FUD in this thread.
[...]
I am a heavy user of CAD, NLE, compiling, and running things at the same time, and as much as I like to delude myself and buy the fastest stuff, I typically fall into the 20-30% group.


You forgot the biggest one. Who cares how people spend their own money.
 
I think you are confusing difficulty with impossibility. For example, scientists didn't consider the sound barrier an actual barrier. [...] Unlike objects travelling faster than sound, the speed of light actually is considered a hard scientific limit. You expect that to change?
[...]
If you buy a 16-core CPU today, it will be subject to Amdahl's law for as long as you can keep it running, regardless of whether that is 5 years or 500 years.

Oh look, you were wrong; limits and laws are made to be broken. https://www.reuters.com/article/us-...to-break-speed-of-light-idUSTRE78L4FH20110922
 

:D Nice Try.

I remember this well. I remember when it was announced. I shrugged and figured they had failed to account for something, and sure enough, they had equipment/calibration errors:

https://www.reuters.com/article/us-science-neutrinos-idUSBRE82T0IP20120330

ROME (Reuters) - The Italian professor who led an experiment which initially appeared to challenge one of the fundaments of modern physics by showing particles moving faster than the speed of light, has resigned after the finding was overturned earlier this month.
 
I said multi-core CPUs are subject to Amdahl's Law (significant diminishing returns as more cores are added for all but Embarrassingly Parallel problems).

More like SINGLE programs running on multicore CPUs are subject to Amdahl's law, to clarify the point you are trying to make... What PC runs a single process/program on Windows 10?

You make it sound like multicore CPUs won't improve much of anything because of Amdahl's law.
 
More like SINGLE programs running on multicore CPUs are subject to Amdahl's law, to clarify the point you are trying to make... What PC runs a single process/program on Windows 10?

You make it sound like multicore CPUs won't improve much of anything because of Amdahl's law.

I am not suggesting a return to single cores, just that home users will face significant diminishing returns going beyond 6-8 cores unless they are doing a large amount of embarrassingly parallel work (typically encoding/rendering).

The typical desktop user is not running multiple CPU-intensive programs simultaneously, and generally can only interact with one at a time; typically the ones in the background aren't doing much at all.

Realistically, going beyond 6-8 cores is all about doing a lot of "Class 1" work, as Keljian defined it in a previous post.
 
Sounds like something AMD could work with Microsoft to build into Windows, or failing that, maybe the GPU makers could, since they tend to ship game profiles with their drivers.

I agree; an SMT- and CCX-aware thread scheduler would be really nice in Windows.
 
Guys, can we get back on topic, please? Go and argue about single-threaded performance and the laws of physics in a new thread. This thread is about the AMD Ryzen 9 16 core CPU.
 
:D Nice Try.

I remember this well. I remember when it was announced. I shrugged and figured they had failed to account for something, and sure enough, they had equipment/calibration errors:

https://www.reuters.com/article/us-science-neutrinos-idUSBRE82T0IP20120330

Meh, I didn't hear about that. Still, you can't assume things will stay the same, and I doubt single-threaded performance will be as important over the next 10 years. Also, the average user leaves tons of programs open and running in the background, so higher core counts will be important for them as well, and no one will complain about having more than they need. I look forward to seeing what the 16-core and 12-core can offer and what speeds they will run at.
 
Don't worry, AMD will have a lower-core-count CPU that will be better for gaming than this 16-core. Don't hate on a 16-core CPU just because "average home users" can't use its full potential; that's dumb. AMD has a CPU for them too!
I welcome our new 16-core, on a mainstream socket, overlords....... I can sure put it to use!
 