AMD Ryzen 9 3000 is a 16-core Socket AM4 Beast

I'll buy 16c AMD for my productivity rig and keep rocking Intel parts for gaming. Win-win for both companies.

Depends on what type of productivity/creative work too. For Adobe products, single-core performance still matters.
Photoshop, Lightroom, etc.
 
I am more interested in IPC gains than more cores past 8. You can put 30 slow cores on a die, but most applications are single-threaded. Anyone who primarily uses their computer for games should feel the same: 8 cores is enough for games, and IPC is where the improvement needs to happen.

For now..................

We all said the same for dual cores, quad cores...
I get your point though
 
Depends on what type of productivity/creative work too. For Adobe products, single-core performance still matters.
Photoshop, Lightroom, etc.

What if Adobe finally adds proper multithreading, then what?
For all-around productivity I'd take a Threadripper over a 10 GHz single core lol
 
Kyle is smiling upon us and hopes this chip bombs.

I'm pretty sure Kyle would want this chip to beat all expectations and succeed, because of the enthusiast in him.
But of course he wouldn't dare say that "officially" because he works at Intel now.......But we know better....

Contrary to popular belief, companies one-upping each other often benefits us consumers very much.............
Just take a look at what Bulldozer did to Intel... several years of marginal improvements...
Now with Ryzen, all of a sudden we have 6-8 cores being mainstream in less than a year, soon to be 10-16 cores....in less than 3 YEARS!
 
What if Adobe finally adds proper multithreading, then what?
For all-around productivity I'd take a Threadripper over a 10 GHz single core lol

I'm no expert so no clue as to why Adobe has not done more multithreading.

As it stands though, a 9600F/K seems about right for that environment.

Looking forward to how the new Ryzens will be received amongst the Adobe faithful.

I'm still using a problem-free Xeon 1620 for that purpose; pretty good lifespan on that one.
 
Sure, in some SciFi Fantasy.

But in the foreseeable future, nothing is going to break up serially dependent code to run on multiple processors.

There seems to be a massive misunderstanding of how multiprocessing works among non-programmers. Every time there is talk of a core count increase, people jump in and claim that now everything can be coded for 8 cores, or coded for 16 cores, etc...

That isn't how it works at all.

Code has serially dependent sections and potentially parallel sections. Once the work has been done to split off the potentially parallel sections for multiprocessing, it is done. It doesn't matter how many processor cores were present when the work was done; those now-parallel sections will scale (within reason) to n cores. If you make a loop parallel, you don't spawn 4 threads just because 4-core machines are most prevalent; you spawn n threads, where n is a value returned by a system call telling you how many cores you have, so that the parallel code keeps scaling automatically the more cores you add.
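
To make that concrete, here's a minimal, hypothetical sketch (not any particular product's code) of a parallel sum that asks the system how many hardware threads it has and splits the work accordingly:

```cpp
// Minimal sketch: split a loop across however many hardware threads the
// machine reports, rather than hard-coding a thread count.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t data_size = 1'000'000;
    std::vector<double> data(data_size, 1.0);

    // Ask the system how many hardware threads are available.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;  // hardware_concurrency() may return 0 if unknown

    std::vector<double> partial(n, 0.0);
    std::vector<std::thread> workers;

    // Each worker sums its own contiguous chunk of the data.
    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([&, t] {
            std::size_t begin = data_size * t / n;
            std::size_t end   = data_size * (t + 1) / n;
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "Sum computed on " << n << " threads: " << total << "\n";
}
```

The same binary scales whether it runs on 4 cores or 64; nothing has to be recompiled "for 16 cores".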

Find a rendering/encoding program from 10 years ago and run it on a 16- or even 32-core machine, and it will almost certainly scale to use all those cores.

People might like everything to respond to more cores the way rendering/encoding does, but it won't. Rendering and encoding are known as embarrassingly parallel problems.

But hardly anything else that matters to home computer users is. Everything else is a combination of parallel and serial code, and then Amdahl's law kicks the crap out of core scaling. Even if your code is 80% parallel, you quickly hit diminishing returns on current home CPUs.

[Attached chart: Amdahl's Law speedup vs. processor count for several parallel fractions]

80% parallel is the "20% serial" line represented by the triangles. Diminishing returns hit early. You get about a 2X speedup with 3 processors, 3X with 6 processors, but 4X takes 16 processors. No amount of processor cores will get you a 5X speedup.
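
For reference, those figures fall straight out of Amdahl's formula, with p the parallel fraction and n the processor count:

$$ S(n) = \frac{1}{(1-p) + p/n}, \qquad p = 0.8:\quad S(3) \approx 2.1,\quad S(6) = 3,\quad S(16) = 4,\quad \lim_{n \to \infty} S(n) = \frac{1}{1-p} = 5 $$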

Having more CPU cores at home won't change Amdahl's Law, and it won't enable developers to somehow turn serially dependent code into parallel code.

Well you can run 2 instances of the same program.....4...16.....32 instances of the same program. But of course that doesn't qualify for consumer workloads...
 
HA! Adobe won't fix bugs in their software from years ago, no way are they going to do something that requires them to expend actual effort.

Probably the real reason behind no multithreading is that there are no real benefits to doing so atm.

In my own experience most issues with Adobe software are hardware compatibility ones; most people who use their stuff for a living pick their gear accordingly.
 
Probably the real reason behind no multithreading is that there are no real benefits to doing so atm.

In my own experience most issues with Adobe software are hardware compatibility ones; most people who use their stuff for a living pick their gear accordingly.
Most of the individual processes in Adobe don't scale with multiple threads because of the individual algorithms involved in their effects. More threads let you run more effects simultaneously, but there is no benefit to running one effect across multiple threads at this stage. At this point Adobe needs to be spending more time getting their software ready for the Apple ARM rollout, tightening up their algorithms for higher efficiency, and widening their supported GPU list than anything else.

More cores let you run more instances of Adobe apps or work with more imported objects simultaneously, which is good, but there is little more cores will do for single loads without a complete rework of a lot of their effects and processes, and that wouldn't really yield any tangible benefits at this point, I think.
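
A rough sketch of that distinction, with hypothetical stand-in effects (not Adobe's actual code): two independent effects can run as separate tasks, but each effect is still serial inside, so extra cores only help when there is more independent work to hand out:

```cpp
// Sketch of the distinction (hypothetical stand-in effects, not Adobe's code):
// two independent effects run concurrently as separate tasks, but each
// individual effect below is a serial algorithm internally.
#include <cstddef>
#include <future>
#include <vector>

using Image = std::vector<float>;

// 1-D IIR-style smoothing: each pixel depends on the previous *result*,
// so this inner loop cannot simply be split across threads.
Image smooth(Image img) {
    for (std::size_t i = 1; i < img.size(); ++i)
        img[i] = 0.5f * img[i] + 0.5f * img[i - 1];
    return img;
}

// Same dependency pattern, different coefficients.
Image trail(Image img) {
    for (std::size_t i = 1; i < img.size(); ++i)
        img[i] = 0.8f * img[i] + 0.2f * img[i - 1];
    return img;
}

int main() {
    Image layer_a(1 << 20, 0.50f);
    Image layer_b(1 << 20, 0.25f);

    // Task-level parallelism: unrelated layers/effects on separate threads.
    auto fa = std::async(std::launch::async, smooth, layer_a);
    auto fb = std::async(std::launch::async, trail, layer_b);

    Image result_a = fa.get();
    Image result_b = fb.get();

    // More cores help when there are more independent effects/layers like
    // these; they do nothing for a single smooth() call without reworking
    // the algorithm itself.
    return 0;
}
```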
 
Well you can run 2 instances of the same program.....4...16.....32 instances of the same program. But of course that doesn't qualify for consumer workloads...


Right, the reason we started needing more than a single core, and then more than two, to get good productivity desktop performance is that once we got the extra cores, we started using them.

All those extra layers of protection added since Windows XP (user-space drivers, a built-in virus scanner with advanced AI pattern matching, and sandboxing, just to name a few) have added additional processing overhead. It's split out into multiple low-priority threads (so it doesn't bring your system to a crawl), but that means all those extra threads can cause stutter when another application is suddenly loading down the machine. Fewer threads = a higher penalty for switching between all those background tasks.

But I think we've hit the wall at 4 threads being the maximum required to multitask smoothly in Windows as a web-browsing/office-editing machine. There's a point now where additional security layers just look like wasted effort, and the new exploit mitigations for Intel and AMD are sapping single-threaded performance. Researchers are trying to find more efficient ways of doing the same task, rather than senselessly add more layers to your system.

So I see 2 cores with 4 threads being the bare maximum you need to leave free in the future for OS smoothness. That leaves you 6c/12t untapped on an 8-core processor, which is exactly the same as the PS5 will give. So for pure gamers, buying a Ryzen 3000-series 8-core today will be good enough for the next three to five years, and by the time you need more cores, the costs will fall again. It will also give you the sweet spot for occasional encoding and compression tasks, because Amdahl.
 
I find this all very interesting. Another very interesting point to add, though not completely applicable: many pre-built VMs have a minimum requirement of GHz they are allocated. Not sockets, not cores, but MHz. So let's say a pre-built VM needs 3.2 GHz of CPU, and the cores in your host are 2.4 GHz. You just assign two cores to meet the need and the new VM appliance is as happy as pie. It's strange... in my opinion. But that is the way many do things.
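
For what it's worth, the arithmetic in that example is just aggregate clock, using the numbers above:

$$ 2\ \text{cores} \times 2.4\ \text{GHz/core} = 4.8\ \text{GHz} \;\geq\; 3.2\ \text{GHz reservation} $$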
 
While I presently favor AMD on a price/core/performance basis, I'm really waiting to see which company mitigates the Meltdown and Spectre flaws first and best. Intel has always been the single-core performance king, but the best Meltdown/Spectre mitigation presently seems to be AMD's (just due to differences in design).
 
Yes, really, what has AMD officially stated on paper regarding the 3000 series in terms of launch time, availability, core counts or clock speeds?
Well... if you don't limit yourself to "stated on paper"........ AMD "officially" showcased a demo of an 8-core outperforming a 9900K in Cinebench while using significantly less power......
From that it's not hard to deduce that AMD increased performance on this engineering sample by at least 15% compared to the 2700X, while using less power....
So it's possible that single-core performance is on par with a 9900K for this engineering sample.

Clocks are all but guaranteed to be higher than on the 2700X, since it's the same general design (Zen) on a better process node.

Also, Lisa Su herself showed off an 8-core Ryzen chip and basically confirmed that there will be a 16-core AM4 chip, thanks to the extra space.
 
Well... if you don't limit yourself to "stated on paper"........ AMD "officially" showcased a demo of an 8-core outperforming a 9900K in Cinebench while using significantly less power......
From that it's not hard to deduce that AMD increased performance on this engineering sample by at least 15% compared to the 2700X, while using less power....
So it's possible that single-core performance is on par with a 9900K for this engineering sample.

Clocks are all but guaranteed to be higher than on the 2700X, since it's the same general design (Zen) on a better process node.

Also, Lisa Su herself showed off an 8-core Ryzen chip and basically confirmed that there will be a 16-core AM4 chip, thanks to the extra space.
A demo of unknown clocks and unknown configuration, running a single application. Don't get me wrong, I am impressed by it, but they have been very tight-lipped on this launch, and supposed leaks make up 90+% of our knowledge of this lineup. It frustrates me: they have been good at alluding to future configurations, approximate clocks and estimated IPC, but they have been very tight-lipped. Not that that is all that unusual, just that I have budget to spend and it needs to be in my hands before the end of June or it has to count against next year's budget. I have systems that have to be built, and I really don't want to use a 2000-series chip if I can avoid it.
 
Right, the reason we started needing more than a single core, and then more than two, to get good productivity desktop performance is that once we got the extra cores, we started using them.

All those extra layers of protection added since Windows XP (user-space drivers, a built-in virus scanner with advanced AI pattern matching, and sandboxing, just to name a few) have added additional processing overhead. It's split out into multiple low-priority threads (so it doesn't bring your system to a crawl), but that means all those extra threads can cause stutter when another application is suddenly loading down the machine. Fewer threads = a higher penalty for switching between all those background tasks.

But I think we've hit the wall at 4 threads being the maximum required to multitask smoothly in Windows as a web-browsing/office-editing machine. There's a point now where additional security layers just look like wasted effort, and the new exploit mitigations for Intel and AMD are sapping single-threaded performance. Researchers are trying to find more efficient ways of doing the same task, rather than senselessly add more layers to your system.

So I see 2 cores with 4 threads being the bare maximum you need to leave free in the future for OS smoothness. That leaves you 6c/12t untapped on an 8-core processor, which is exactly the same as the PS5 will give. So for pure gamers, buying a Ryzen 3000-series 8-core today will be good enough for the next three to five years, and by the time you need more cores, the costs will fall again. It will also give you the sweet spot for occasional encoding and compression tasks, because Amdahl.

Much wrong in there. You're falling into the trap of thinking core counts are somehow fixed in multiprocessing code (e.g. thinking the same count as the PS5 is something to aim for). If you have stutter when loading some big new application, it is from swapping memory around, nothing to do with core counts. OS overhead might be 5% of a single core. You certainly don't need to reserve 2 full cores with SMT for that.

All you really need is what your applications need, and that changes with every application.
 
What if Adobe finally adds proper multithreading, then what?

Most of the individual processes in Adobe don't scale with multiple threads because of the individual algorithms involved in their effects. More threads let you run more effects simultaneously, but there is no benefit to running one effect across multiple threads at this stage.

Serially dependent code means that threading is useless in some circumstances.

You need the result of x + y = z before you can do z + a = b; there is no way of multithreading that. You could have both of them running on separate threads, but the latter would have to be blocked until the solution of the former (z) was available. Same reason that if you have 9 newly pregnant women, you won't get a baby in one month.

Sometimes, just sometimes, you can rethink the code to figure out z and b in the same equation, but rarely can you parallelise the flow.

Now if you have multiple x + y = z then z + a = b situations, that's a different story. Most of the time when you have 9 pregnant women, in ~9 months you have 9 babies, barring complications and life choices.
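
A tiny sketch of that dependency (a generic example, nothing to do with any specific application): even if the second step is handed to another thread, it just blocks until z is ready, so nothing actually overlaps:

```cpp
// Sketch: the second computation depends on the result (z) of the first,
// so even with two threads there is no overlap -- the second thread just waits.
#include <future>
#include <iostream>

int main() {
    int x = 2, y = 3, a = 4;

    // Step 1: z = x + y
    std::future<int> fz = std::async(std::launch::async, [=] { return x + y; });

    // Step 2: b = z + a. It *must* block on fz.get(); putting it on another
    // thread buys nothing because the dependency is serial.
    std::future<int> fb = std::async(std::launch::async,
                                     [&fz, a] { return fz.get() + a; });

    std::cout << "b = " << fb.get() << "\n";  // prints b = 9
    return 0;
}
```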
 
Serially dependent code means that threading is useless in some circumstances.

You need the result of x + y = z before you can do z + a = b; there is no way of multithreading that. You could have both of them running on separate threads, but the latter would have to be blocked until the solution of the former (z) was available. Same reason that if you have 9 newly pregnant women, you won't get a baby in one month.

Sometimes, just sometimes, you can rethink the code to figure out z and b in the same equation, but rarely can you parallelise the flow.

Now if you have multiple x + y = z then z + a = b situations, that's a different story. Most of the time when you have 9 pregnant women, in ~9 months you have 9 babies, barring complications and life choices.

To add to that, the more parallel you make your code, the harder it becomes to add an additional parallel thread without performance detriments, unless your workload is inherently parallel (rendering is a great example of an inherently parallel workload). The more cores you have, the quicker you run into the diminishing returns of adding more cores. 1 to 2 cores was a huge benefit, because it was the first step. 2 to 4 was less so but still significant; 4 to 8 has become only somewhat significant for certain games and workloads. 8 to 16 will be of almost no benefit for anything not inherently parallel, as long as we are talking about what most home users (including enthusiasts) are likely to do.
 
Yep.

https://siliconangle.com/2016/09/09...y-bought-chip-startup-soft-machines-for-250m/
"The system supposedly requires so little computing capacity to perform the parallelization that it can squeeze out to four times more performance per watt than traditional CPUs. Additionally, Soft Machines’ architecture also simplifies application design since developers don’t need to bother with splitting up their software across multiple threads."

And there are many, many companies working on this.
 
Yep.

https://siliconangle.com/2016/09/09...y-bought-chip-startup-soft-machines-for-250m/
"The system supposedly requires so little computing capacity to perform the parallelization that it can squeeze out to four times more performance per watt than traditional CPUs. Additionally, Soft Machines’ architecture also simplifies application design since developers don’t need to bother with splitting up their software across multiple threads."

And there are many, many companies working on this.


Yep... and when one of them puts out a chip for consumers, then we can move forward.
 
Well... if you don't limit yourself to "stated on paper"........ AMD "officially" showcased a demo of an 8-core outperforming a 9900K in Cinebench while using significantly less power......
From that it's not hard to deduce that AMD increased performance on this engineering sample by at least 15% compared to the 2700X, while using less power....
So it's possible that single-core performance is on par with a 9900K for this engineering sample.

The 2700X CB15 MT score is ~90% of the 9900K's, but this doesn't mean CB15 ST is ~90% of the 9900K's. The ST gap is bigger due to clocks and because CB15 has abnormally high SMT yields.
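
A back-of-the-envelope way to see it (an illustrative model, not measured data): for two 8C/16T chips, an MT score is roughly per-core ST throughput at all-core clocks, times 8, times (1 + SMT yield), so the MT ratio folds together three separate factors:

$$ \frac{MT_{2700X}}{MT_{9900K}} \;\approx\; \frac{ST_{2700X}}{ST_{9900K}} \times \frac{\left(f_{\text{all}}/f_{\text{1T}}\right)_{2700X}}{\left(f_{\text{all}}/f_{\text{1T}}\right)_{9900K}} \times \frac{1+y_{2700X}}{1+y_{9900K}} $$

A ~0.9 MT ratio only implies a ~0.9 ST ratio if the clock-scaling and SMT-yield terms are both 1, which they aren't.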
 
The 2700X CB15 MT score is ~90% of the 9900K's, but this doesn't mean CB15 ST is ~90% of the 9900K's. The ST gap is bigger due to clocks and because CB15 has abnormally high SMT yields.
Yep (useful word apparently), who didn't know this?
 
You didn't take time to look :)

A better IMC won't make up for gaping bandwidth needs (see the differences between Ryzen 1xxx and Ryzen 2xxx, for example).

DDR4, on release, was not faster than DDR3 - it started at about 2133-2400. In fact DDR3 was faster.

I've seen reports of DDR5 being released towards the end of this year at 4800 MT/s, and of this being about 1.87x the speed of 3200 MT/s DDR4.

I've also seen reports of DDR5 being postponed to 2020

I would say that DDR5 is due on platforms about the middle of next year, based on these reports - it would make sense for AMD to pursue a new socket for this...

I do remember the DDR4 release; what a sham that was.
The IMC will help with reducing latency and increasing bandwidth if the memory is there to support it.

The way I look at it is this: the 2990 has 4 channels for 32 cores, and that has little to no impact on workload performance vs the 2950. That's the same ratio as 16 cores on two channels...
With that in mind I don't see this being an issue for most users with Zen 2, let alone with IMC improvements. Biostar's release yesterday showed DDR4-4000 compatibility... this will help further. Only very specific workloads (e.g. some scientific ones) will be impacted by the bandwidth of dual channel.
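
Back-of-the-envelope on that ratio (assuming DDR4-3200, i.e. ~25.6 GB/s per channel, just for illustration):

$$ \frac{32\ \text{cores}}{4\ \text{channels}} = \frac{16\ \text{cores}}{2\ \text{channels}} = 8\ \text{cores/channel} \quad\Rightarrow\quad \frac{25.6\ \text{GB/s}}{8\ \text{cores}} = 3.2\ \text{GB/s per core either way} $$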

You're probably right re: DDR5. Zen2+ might have it, or wait for Zen3.
 
Depends on workflow.

Bandwidth doesn't mean the amount of RAM consumed.

We are getting to the point where you'll pop an extra 8 cores on there, get a 2% improvement, and think it is the best thing in the world.
 
So I see 2 cores with 4 threads being the bare maximum you need to leave free in the future for OS smoothness. That leaves you 6c/12t untapped on an 8-core processor, which is exactly the same as the PS5 will give.
Unlike most people on [H] I actually spent half a year working on an i3. 2C4T was an utter dogshit experience after a 2600K - forget video editing or anything too intensive. Gaming anything modern? Lol, good luck. It sucks ass and is a very noticeable step down in speed, especially if multitasking.
So 4C8T is my minimum. I didn't notice a huge day-to-day jump from the 2600K to the 2600X (6C12T), but it helped for rendering and some MT tasks; a few things are a little snappier here and there though.
 
Yep.

https://siliconangle.com/2016/09/09...y-bought-chip-startup-soft-machines-for-250m/
"The system supposedly requires so little computing capacity to perform the parallelization that it can squeeze out to four times more performance per watt than traditional CPUs. Additionally, Soft Machines’ architecture also simplifies application design since developers don’t need to bother with splitting up their software across multiple threads."

And there are many, many companies working on this.

More like there have been many claims, followed by failures, in this area. In 10 years, Soft Machines only ever demoed one easy-to-optimize benchmark. That's it. There have been previous claims of concurrency extraction that have fallen by the wayside, just as this one apparently has.

Given the years they were in business (founded in 2006), the failure to license or sell any product, and the price it sold for ($250 million), it is pretty clear that their original claims flopped in any kind of real-world testing. Key investors were Samsung and AMD, and they bailed out, so the claims were a bust. Intel probably purchased the skeleton for some patents and to pick up some talented people.

Also note that for concurrency extraction to work, the software has to be amenable to parallel operation in the first place. If, hypothetically, this actually worked, it wouldn't make all software into the equivalent of an embarrassingly parallel problem. It would just find whatever elements of parallel work already exist.

You would still be subject to Amdahl's law. It also strikes me that this system is more like slightly expanded speculative execution. It wouldn't be able to extract large-scale concurrency, as it would mainly be looking at a small set of localized instructions in flight.

IMO, a much better approach to concurrency extraction is source code analysis tools, as those can work over the entire source code base and are not limited to small-scale localized concurrency extraction; you are also not spending run-time resources searching for concurrency, and you can spend as much time as needed in deep analysis of the source code.

Short version: Nope.
 
Depends on workflow.

Bandwidth doesn't mean the amount of RAM consumed.

We are getting to the point where you'll pop an extra 8 cores on there, get a 2% improvement, and think it is the best thing in the world.
If you can use those cores, they sure do; ask the more [H] Threadripper and Xeon users what such systems have done for their workflows (as you said). Not everything is single-thread reliant; that's for people who have too much time to game.

By memory there to support it, I mean faster RAM that is compatible with the IMC.


Snowdog, I understand the challenges at a basic level, but I bet there is a way around it eventually. I've also been told 'x is impossible' by established industry leaders and proven them wrong. It just takes the right solution, right thinking and right approach, albeit with programming it will be even harder. It may come in the form of hardware via quantum solutions one day.. who knows, the 'quantum will save us all' approach ^_~. But never say never.
 
If you can use those cores, they sure do; ask the more [H] Threadripper and Xeon users what such systems have done for their workflows (as you said). Not everything is single-thread reliant; that's for people who have too much time to game.

By memory there to support it, I mean faster RAM that is compatible with the IMC.


Snowdog, I understand the challenges at a basic level, but I bet there is a way around it eventually. I've also been told 'x is impossible' by established industry leaders and proven them wrong. It just takes the right solution, right thinking and right approach, albeit with programming it will be even harder. It may come in the form of hardware via quantum solutions one day.. who knows, the 'quantum will save us all' approach ^_~. But never say never.

Graphic design, Desktop publishing, Photography....?
 
Snowdog, I understand the challenges at a basic level, but I bet there is a way around it eventually. I've also been told 'x is impossible' by established industry leaders and proven them wrong. It just takes the right solution, right thinking and right approach, albeit with programming it will be even harder. It may come in the form of hardware via quantum solutions one day.. who knows, the 'quantum will save us all' approach ^_~. But never say never.

The mathematics of "Amdahl's Law" are so simple as to be unassailable. You may as well argue that someday 2+2 = 3.

Some unforeseeable time in the future we may have fundamentally different technology, with a fundamentally different approach to computation.

But that has nothing to do with arguments about current technology and core counts for home computer users.
 
The mathematics of "Amdahl's Law" are so simple as to be unassailable. You may as well argue that someday 2+2 = 3.

Some unforeseeable time in the future we may have fundamentally different technology, with a fundamentally different approach to computation.

But that has nothing to do with arguments about current technology and core counts for home computer users.

I say we take this "Amdahl's Law" and blow it to bits......
Who's with me?
 
Unlike most people on [H] I actually spent half a year working on an i3. 2C4T was an utter dogshit experience after a 2600K - forget video editing or anything too intensive. Gaming anything modern? Lol, good luck. It sucks ass and is a very noticeable step down in speed, especially if multitasking.
So 4C8T is my minimum. I didn't notice a huge day-to-day jump from the 2600K to the 2600X (6C12T), but it helped for rendering and some MT tasks; a few things are a little snappier here and there though.

You didn't read my entire post. You only read the parts you wanted to read, and then chose to overreact to that one part, out of context of the rest.

You're overreacting like a CHILD would.

I'm not suggesting that you ONLY need 2c/4t for the rest of all days here, INCLUDING GAMING AND WORKSTATION TASKS. I'm saying that 2c/4t is a GOOD ESTIMATE OF *MAXIMUM OS BACKGROUND LOAD FOR THE NEAR FUTURE*.

SO 8 CORES / 16 THREADS SHOULD BE PLENTY FOR GAMES OR WORKSTATION TASKS BECAUSE THAT LEAVES YOU TWO CORES FREE FOR THE OS (and all your preferred monitoring and voice chat programs), to allow for smooth multitasking, worst-case.

6 major threads is the current sweet spot for all major console ports, and we can continue to expect it for several more years.


Next time try reading first, then posting. I'm simply saying this is not the time or the place for most gamers to jump above 16 threads, unless you're made of money. Especially since this new core design will be about 20% faster than the old one.
 
Going back to the original topic/post, I hope that they're able to get clocks up to or near 5 GHz on boost. That'll really help their single-threaded performance.

I'm so glad that AMD is back. Even if they aren't exactly the fastest right now (I mean... within a couple of FPS for games at 1080p and virtually the same as Intel at higher resolution... who cares. Close enough.), I'm glad AMD has pushed Intel. For a decade we've had 4c/8t parts at the top end of the consumer stack for no other reason than Intel knew they could make money that way without breaking a sweat and/or really trying, workstation CPUs notwithstanding. AMD pushed Intel to compete again, which is fantastic. I'm not a fanboy of anyone and will usually go with whatever the best bang for my buck is based on my needs. AMD right now has that advantage. I can't wait for Ryzen 3000. I hope it's even more amazing than we are all expecting.
 