AMD Ryzen 9 3000 is a 16-core Socket AM4 Beast

Don't worry, AMD will have a lower core count CPU that will be better for gaming than this 16 core. Don't hate on a 16 core CPU just because "average home users" can't use the full potential of it, that's dumb. AMD has a CPU for them too!
I welcome our new 16 core, on a mainstream socket, overlords.......I can sure put it to use!

Pointing out the diminishing returns is not "hate".

Bring on the core wars, as it should help lower the price of the 6-8 core CPUs I want, though not as fast as many would expect.
 
Wouldn't this prove beneficial if you were playing a game and streaming to Twitch and YouTube at the same time? If so, this would save you the cost of having to set up a dedicated streaming PC. You'd likely have Discord, OBS, and browsers open as well, among other things.
 
Pointing out the diminishing returns is not "hate".

Bring on the core wars, as it should help lower the price of the 6-8 core CPUs I want, though not as fast as many would expect.

Bring on the core wars so I can get my Intel fix quicker. :D Intel? Lower Prices? Not going to happen. Also, if you are waiting on AMD, do you expect a 6 core for $50?

Edit: Obviously, I am referring to mainstream desktop parts, I just had to clarify that.
 
Now if we could only get games to be better optimized for more cores!

More than likely they will just put in more anti-cheat protection (that sort of works) and more DRM (that sort of doesn't work) instead of actually improving the game. Then a game that could be run on 2 cores now needs to be run on 8.
 
Now if we could only get games to be better optimized for more cores!

Like Crysis 3, that works great with 8 cores like the FX had. :) Oh well, things will eventually change, just going to have to give it time.
 
Lol!
You call me ignorant...
Windows 10 would like a word with you....(updates, Windows Defender/other AV, OneDrive starting up, who knows what else.)

My personal system doesn't have this issue, mostly since I minimize/get rid of that crap. Work systems, however...
I always marvel, when imaging a new PC with Win10, at how updates, OneDrive updates, and Windows Defender can take so much CPU power as to almost cripple a dual core and make an i5 quad struggle. PCs with really fast SSDs love to have CPU power, as storage has always been far and away the most bottlenecking component as far as user-perceived responsiveness goes.

Windows 10 on an i7-2640M, which is a dual-core hyperthreaded Sandy Bridge mobile CPU maxing out at 2.8 GHz, with a 480GB SSD and 8GB RAM. With Kaspersky, Skype, TeamViewer, Discord, Steam, and so on. Takes about 1 minute from cold boot to desktop, and maybe another 30 seconds to opening Chrome. Used primarily for web browsing, YouTube, and some lightweight gaming. I have not experienced background tasks causing my games to lag, so I highly doubt a modern desktop quad-core is going to get bogged down by background tasks on a home system. Business is a different story entirely; they still haven't figured out how to put SSDs in most of their laptops.

Like Crysis 3, that works great with 8 cores like the FX had. :) Oh well, things will eventually change, just going to have to give it time.

It's easy to fill out a core when you're doing unoptimized code that doesn't add much to the experience. I can max things out easily with PhysX calculated on the CPU.

Things can change, but as stated, I don't see programming fundamentally changing unless our processors fundamentally change, i.e. switch to quantum computing. The same old silicon design is going to have the same old restrictions.
 
Windows 10 on an i7-2640M, which is a dual-core hyperthreaded Sandy Bridge mobile CPU maxing out at 2.8 GHz, with a 480GB SSD and 8GB RAM. With Kaspersky, Skype, TeamViewer, Discord, Steam, and so on. Takes about 1 minute from cold boot to desktop, and maybe another 30 seconds to opening Chrome. Used primarily for web browsing, YouTube, and some lightweight gaming. I have not experienced background tasks causing my games to lag, so I highly doubt a modern desktop quad-core is going to get bogged down by background tasks on a home system. Business is a different story entirely; they still haven't figured out how to put SSDs in most of their laptops.



It's easy to fill out a core when you're doing unoptimized code that doesn't add much to the experience. I can max things out easily with PhysX calculated on the CPU.

Things can change, but as stated, I don't see programming fundamentally changing unless our processors fundamentally change, i.e. switch to quantum computing. The same old silicon design is going to have the same old restrictions.

However, Crysis 3 added considerably to the experience and from what I could tell, maxed out the FX8350, all 8 cores, more or less. It was one of the games where the FX processors shined the most, unlike the games that still think we all run single core processors from 2004.
 
I'd say that my PC spends more time compressing pictures and videos than playing games, but that's mostly because it does it while I'm not present. Does it count as "time with a computer" if I'm not there?

No, because if it's doing it in the background or when you're not present, it doesn't need to be fast; overnight compression etc. does not need to be fast, as it's not time limited. 6 cores would adequately service your needs.
 
Memory bandwidth will double, power consumption will drop, and latencies will rise, making the bandwidth gained a moot point in the short term.

To expand, as I haven't seen anyone explore DDR4 vs. DDR5 yet, and absolutely not attempting to correct Dan:

Should specify that we're talking about access latency, which really hasn't changed much over time while bandwidth has jumped orders of magnitude.

Beyond that- 'latency' depends on what you're testing. Access latency is important for IPC, but when doing work, higher bandwidth drops latency as data targets increase in size.

To apply that, DDR5 won't help (and could hurt!) per-core performance for the hypothetical Ryzen 9 3000 16-core CPU, but it will help keep all of those cores fed if you have them all crunching away at one or more things where DDR4 could be limiting.
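To put rough numbers on that (illustrative figures, not measurements): the effective time to pull a working set is roughly access latency plus size over bandwidth, so latency dominates a single cache line while bandwidth dominates anything big.

```python
# Rough sketch with illustrative numbers (not measured from any real kit):
# fetch time = access latency + size / bandwidth.
# Handy coincidence: bytes divided by GB/s comes out directly in nanoseconds.
def transfer_time_ns(size_bytes, access_latency_ns, bandwidth_gb_s):
    return access_latency_ns + size_bytes / bandwidth_gb_s

print(transfer_time_ns(64, 70, 50))        # one cache line: ~71 ns, latency-bound
print(transfer_time_ns(1 << 20, 70, 50))   # 1 MB working set: ~21,000 ns, bandwidth-bound
print(transfer_time_ns(1 << 20, 80, 100))  # ~10,600 ns: more bandwidth wins despite worse latency
```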
 
To expand, as I haven't seen anyone explore DDR4 vs. DDR5 yet, and absolutely not attempting to correct Dan:

My earlier post suggested a 1.87x bandwidth difference between release DDR5 and DDR4-3200.
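That 1.87x works out if release DDR5 lands around 6000 MT/s against DDR4-3200; the 6000 figure is my assumption, not anything announced. Quick math:

```python
# Assumption: "release DDR5" ~ DDR5-6000, compared against DDR4-3200.
# Peak bandwidth per 64-bit channel = transfers/sec * 8 bytes.
ddr4_gb_s = 3200e6 * 8 / 1e9  # 25.6 GB/s per channel
ddr5_gb_s = 6000e6 * 8 / 1e9  # 48.0 GB/s per channel
print(ddr5_gb_s / ddr4_gb_s)  # 1.875
```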

Do I think latencies will rise? yerp, do I think that will make a difference? nope.

Modern CPUs have a lot of prefetching and large caches which is why latencies don't matter a whole hell of a lot.
 
Do I think latencies will rise? yerp, do I think that will make a difference? nope.

Well, clock latency will rise, but since clocks will go up, absolute latency as measured in nanoseconds shouldn't drift too much in either direction, while bandwidth increases will drop latency on larger working sets.

That's what I'm getting at. So single-core workloads likely won't improve, but loading the whole CPU down could.
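The quick math, using hypothetical DDR5 timings since nothing is released yet: first-word latency in nanoseconds is CAS cycles divided by the memory clock, and the memory clock is half the data rate.

```python
# First-word latency in ns = CAS cycles * 2000 / data rate (MT/s),
# since the memory clock is half the data rate. DDR5 timings below are
# hypothetical round numbers, not announced parts.
def cas_ns(cas_cycles, data_rate_mts):
    return cas_cycles * 2000 / data_rate_mts

print(cas_ns(16, 3200))  # DDR4-3200 CL16: 10.0 ns
print(cas_ns(40, 6400))  # hypothetical DDR5-6400 CL40: 12.5 ns
print(cas_ns(32, 6400))  # tuned DDR5-6400 CL32: back to 10.0 ns
```

More cycles of latency, roughly the same nanoseconds.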
 
Anandtech just did a good comparison of the 2600k vs modern quad cores and an eight core 9700k. Further reinforces my view that CPU core counts above 8 may go mostly unused for years.

https://www.anandtech.com/show/1404...el-core-i7-2600k-testing-sandy-bridge-in-2019

While a single application may not use that many cores in most cases, you still have the ability to run more than one. I have a Threadripper 2950X and I can max it out quite easily.

I hate when people say x program cannot use all of the cores, because if that is the case you are doing it wrong....
 
No, because if it's doing it in the background or when you're not present, it doesn't need to be fast; overnight compression etc. does not need to be fast, as it's not time limited. 6 cores would adequately service your needs.

When my "overnight" encoding tasks equal 12-24 hours and I only have 8 hours in a night to get it done, yes, I do need faster because it is time limited.
 
I can't believe people here are arguing about having too many cores. Intel fanboys will grasp at straws to find ANYTHING they can attach their hate to.
 
I can't believe people here are arguing about having too many cores. Intel fanboys will grasp at straws to find ANYTHING they can attach their hate to.

It drops the max core speed. That's not an 'Intel fanboy' thing, that's a weakness versus Ryzen 7. We're still stuck on this curve where single-core performance starts dropping at eight cores.
 
I can't believe people here are arguing about having too many cores. Intel fanboys will grasp at straws to find ANYTHING they can attach their hate to.

It's not that we don't want more cores, but with gaming being the most intensive thing I typically do with my computer, I would rather have a focus on improved single core performance rather than cores I would almost never use. My 4930k rarely ever sees utilization past 35%, which corresponds to approximately 5 almost fully loaded cores, and Starcraft 2 still lags on maps with lots of units with typical utilization around 25%. And the reality is that there are still lots of people playing Starcraft 2 and older games that don't scale well with more cores, with even some modern games being this way.

More cores is great, but it's best when it comes with improved single core performance as well. More cores is also good if it pushes the price of lower core count processors down. More cores at a higher price... well, it'll fulfill a niche, but it's not what most of us want.
 
When my "overnight" encoding tasks equal 12-24 hours and I only have 8 hours in a night to get it done, yes, I do need faster because it is time limited.

Right - fringe case. 6 vs 8 cores, assuming everything is equal, for encoding you'll end up with ~33% more performance. 8 vs 24 hours is 3x.

What I'm seeing from here: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/8 is that for encoding, AVX (2/512) is king, and even 50% more cores (threadripper/12 core) doesn't beat clock speed.

So takeaways from the 1080p HEVC chart:

1. The 8086k and 8700k beat out the 2700x and threadripper, with fewer cores.
2. 6->8 cores doesn't scale linearly on AMD (see 2600x to 2700x)
3. 6->8 cores on Intel does scale linearly.
4. Hyperthreading gives you 10%.
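Point 2 is roughly what Amdahl's law predicts if some slice of the encode is serial. A sketch with a made-up 10% serial fraction (illustrative, not measured from the chart):

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s = serial fraction.
# The 10% below is a made-up illustration, not measured from any benchmark.
def speedup(n_cores, s=0.10):
    return 1 / (s + (1 - s) / n_cores)

print(speedup(8) / speedup(6))   # ~1.18x going 6 -> 8 cores, not the ideal 1.33x
print(speedup(16) / speedup(8))  # ~1.36x going 8 -> 16, still diminishing returns
```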
 
I can't believe people here are arguing about having too many cores. Intel fanboys will grasp at straws to find ANYTHING they can attach their hate to.
The "fanboy hate" hyperbole is total nonsense. The only true brand loyalty anyone rational has is to performance, as it relates to their type of usage. Video transcoders are outliers.

AMD releasing CPUs with more and more 8-core chips glued together is impressive, but if they ever release something that beats Intel's single-threaded IPC performance then we'll see perceived Intel loyalties quickly abandoned.
 
Right - fringe case. 6 vs 8 cores, assuming everything is equal, for encoding you'll end up with ~33% more performance. 8 vs 24 hours is 3x.

What I'm seeing from here: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/8 is that for encoding, AVX (2/512) is king, and even 50% more cores (threadripper/12 core) doesn't beat clock speed.

So takeaways from the 1080p HEVC chart:

1. The 8086k and 8700k beat out the 2700x and threadripper, with fewer cores.
2. 6->8 cores doesn't scale linearly on AMD (see 2600x to 2700x)
3. 6->8 cores on Intel does scale linearly.
4. Hyperthreading gives you 10%.

6 vs 8 vs 1,050,667,889,999 cores. What's the damn difference if all you guys are only using software that supports 6 cores max, like Handbrake?


The link to Anand's is using what was free and easy. Handbrake, LMAO.

I'm just saying, eat a box of rock salt when you reference Handbrake as the de facto standard of encoding performance.

H.265 can support gobs of cores if it's utilized properly. Not just 6.
 
My earlier post suggested a 1.87x bandwidth difference between release DDR5 and DDR4-3200.

Do I think latencies will rise? yerp, do I think that will make a difference? nope.

Modern CPUs have a lot of prefetching and large caches which is why latencies don't matter a whole hell of a lot.

It depends on what CPU. Intel is rather insensitive to memory latencies. AMD is very sensitive, due to microarchitectural choices. That is why you see AMD releasing latency-optimized AGESAs, and Ryzen users looking for the highest-speed memory that is stable in their systems. Zen2 brings moar cores at the expense of a three-die configuration that will increase latencies compared to a single-die configuration. It remains to be seen if, with microarchitectural optimizations and higher speeds, the engineers have reduced latencies compared to current Zen.
 
6 vs 8 vs 1,050,667,889,999 cores. What's the damn difference if all you guys are only using software that supports 6 cores max, like Handbrake?


The link to Anand's is using what was free and easy. Handbrake, LMAO.

I'm just saying, eat a box of rock salt when you reference Handbrake as the de facto standard of encoding performance.

H.265 can support gobs of cores if it's utilized properly. Not just 6.

Wrong.

Handbrake supports what the underlying encoder supports. Nearly all non-commercial programs use the open-source x264 and x265 encoders, and they support a lot more than 6 cores. Though, due to the nature of how encoders work, there is a minimum block size passed to each thread, so if you want to fully utilize more than 32 threads, you should benchmark a high-resolution file like 4K. The higher the resolution, the greater the number of blocks to be processed, and the more threads fully utilized. If you can't fully utilize cores with one high-resolution file, then start two copies and run them concurrently.
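If you want to see why resolution caps threads, here's one concrete mechanism, simplified: x265's wavefront parallelism hands out work in rows of 64x64-pixel CTUs, so usable threads roughly track frame height (the real scheduling is messier than one thread per row).

```python
import math

# Simplified sketch of x265 wavefront parallelism: work is dealt out in
# rows of 64x64-pixel CTUs, so usable threads roughly track frame height.
def ctu_rows(height, ctu_size=64):
    return math.ceil(height / ctu_size)

print(ctu_rows(480))   # ~8 rows:  SD starves a big CPU
print(ctu_rows(1080))  # ~17 rows: enough for 8C/16T, not much beyond
print(ctu_rows(2160))  # ~34 rows: 4K can keep 32 threads busy
```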

Also, the original point remains: core for core and clock for clock, Intel CPUs are better at encoding video; add in the clock speed advantage, and they can beat CPUs with higher core counts.

But really we are splitting hairs over an activity that is usually batched off to when we aren't using the PC, so small margins of victory are somewhat meaningless outside of bragging rights.
 
The "fanboy hate" hyperbole is total nonsense. The only true brand loyalty anyone rational has is to performance, as it relates to their type of usage. Video transcoders are outliers.

AMD releasing CPUs with more and more 8-core chips glued together is impressive, but if they ever release something that beats Intel's single-threaded IPC performance then we'll see perceived Intel loyalties quickly abandoned.

No, they are not "perceived Intel Loyalties" but real ones. There are folks here that will not abandon Intel no matter the reason. Now, I prefer AMD and will always use them in MY systems but that does not mean I suggest AMD for all use cases and everyone else. I am an AMD fan and will stick with them, regardless of what is happening in the CPU arena.

I am just glad we are no longer stuck on the 4 core, 8 thread hurr durr Intel train. That one has finally reached the end of the line.
 
I can't believe people here are arguing about having too many cores. Intel fanboys will grasp at straws to find ANYTHING they can attach their hate to.

Not jumping on the 12-16 core hype bandwagon says nothing about which brand CPU I will buy. Unless there are big price changes in Intel's favor, I am most likely going to buy a 6 or 8 core Ryzen 3000 for my next PC. Which might be delayed if a tariff war drives up component prices by a lot.
 
6 vs 8 vs 1,050,667,889,999 cores. What's the damn difference if all you guys are only using software that supports 6 cores max, like Handbrake?


The link to Anand's is using what was free and easy. Handbrake, LMAO.

I'm just saying, eat a box of rock salt when you reference Handbrake as the de facto standard of encoding performance.

H.265 can support gobs of cores if it's utilized properly. Not just 6.

An even easier way is to open another instance of Handbrake... Yes, you can do that!
I use VidCoder since it does that automatically.
With filters and such I find I have to open 3-4 x264 encodes to max out my 1700, and 2 for x265 encodes.
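If you'd rather script it than click, something like this works; the file names are placeholders and you should double-check the HandBrakeCLI flags against your install, this is just a sketch:

```python
import subprocess

# Sketch: launch two HandBrakeCLI encodes at once to soak up the cores a
# single instance leaves idle. Input/output names are placeholders.
jobs = [
    ["HandBrakeCLI", "-i", "disc1.mkv", "-o", "out1.mkv", "-e", "x265"],
    ["HandBrakeCLI", "-i", "disc2.mkv", "-o", "out2.mkv", "-e", "x265"],
]
procs = [subprocess.Popen(cmd) for cmd in jobs]
for p in procs:
    p.wait()  # block until both encodes finish
```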
 
An even easier way is to open another instance of Handbrake... Yes, you can do that!
I use VidCoder since it does that automatically.
With filters and such I find I have to open 3-4 x264 encodes to max out my 1700, and 2 for x265 encodes.

You must be targeting fairly low resolution for your encodes.

As I explained above (#224), it's the nature of video encoders to have a minimum block size. If you encode to a lower resolution, you get fewer blocks, and you can utilize fewer threads.

Benchmarking encoders should really use 4K files these days.
 
No, because if it's doing it in the background or when you're not present, it doesn't need to be fast; overnight compression etc. does not need to be fast, as it's not time limited. 6 cores would adequately service your needs.

lol...........

Right - fringe case. 6 vs 8 cores, assuming everything is equal, for encoding you'll end up with ~33% more performance. 8 vs 24 hours is 3x.

What I'm seeing from here: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/8 is that for encoding, AVX (2/512) is king, and even 50% more cores (threadripper/12 core) doesn't beat clock speed.

So takeaways from the 1080p HEVC chart:

1. The 8086k and 8700k beat out the 2700x and threadripper, with fewer cores.
2. 6->8 cores doesn't scale linearly on AMD (see 2600x to 2700x)
3. 6->8 cores on Intel does scale linearly.
4. Hyperthreading gives you 10%.

Well, it scales pretty linearly when I can run more than one instance. Why wouldn't I want to finish my encodes faster??? (Makes no sense.) Faster = better. ~33% is pretty significant...
Also, I can encode at a respectable speed while gaming...

....I just realized that x265 now has AVX stuff in it and Intel is wiping the floor right now. o_0
I thought my Ryzen was an encoding beast when I got it, lol! (It was much faster than a Haswell quad and the FX-8350.)
I'm still pretty new to x265 encoding, I just started using it (works out pretty well with Ryzen 3000 around the corner!)

I've been enjoying my Ryzen since April 2017... If I had gotten an 8700k later that year when they launched, I would be stuck with it until I saved up enough for a new mobo and CPU.
Good thing I can upgrade without changing motherboards!

Zen 2 with doubled AVX will be frickin' awesome with 16 cores!
 
You must be targeting fairly low resolution for your encodes.

As I explained above (#224), it's the nature of video encoders to have a minimum block size. If you encode to a lower resolution, you get fewer blocks, and you can utilize fewer threads.

Benchmarking encoders should really use 4K files these days.

1080p CRF22 Blu-ray for x265 and whatever DVD is for x264 (480p?).
1080p must be too low of a rez to use 100% of 8 cores. (It uses about 90%.)
 
1080p CRF22 Blu-ray for x265 and whatever DVD is for x264 (480p?).
1080p must be too low of a rez to use 100% of 8 cores. (It uses about 90%.)

8 cores and 16 threads, but when it is up around 90%+, it just may be that the code has some serial sections in it, spent recombining the work of the threads and splitting up the next frame to pass to them.

But Standard Def like DVD is really only about 1/6 the pixels of 1080p, so only a fraction of the blocks, and a fraction of the potential threads, so it's definitely hitting a thread limit there.
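The pixel math, for anyone checking (plain arithmetic, nothing encoder-specific):

```python
# Frame sizes in pixels: NTSC DVD vs 1080p vs 4K.
dvd = 720 * 480     # 345,600
hd  = 1920 * 1080   # 2,073,600 (~6x DVD)
uhd = 3840 * 2160   # 8,294,400 (4x 1080p)
print(hd / dvd, uhd / hd)  # 6.0 4.0
```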

Back to the Anandtech benchmarks: they should really include 4K files in their encoder tests. Fine to keep a lower resolution for historical purposes, but with the high thread counts of CPUs today, they need max resolution, or concurrent instances, to better utilize the available threads.
 
So is this AMD Ryzen 9 16 core CPU (and really, all upcoming Zen2 CPUs) going to have really poor latency to memory, thus causing an overall slowdown for tasks that do not fit into L1/2/3 cache, as compared to Zen+?

I can see how Cinebench R15 and R20 might allow AMD Zen2 to look super fast and efficient, such as what we saw at CES, but now I'm beginning to worry that in actual real-world performance, Zen2 8C/16T CPUs will be slower than the 2700X in certain tasks simply due to the multi-die architecture and the memory latencies introduced within. The faster per-core speed and IPC boosts that Zen2 offers may allow for an overall speed boost compared to the 2700X, but if the Zen2 were artificially clocked at 2700X speeds and compared to the 2700X, it would be slower. Plausible?
 
So is this AMD Ryzen 9 16 core CPU (and really, all upcoming Zen2 CPUs) going to have really poor latency to memory, thus causing an overall slowdown for tasks that do not fit into L1/2/3 cache, as compared to Zen+?

I can see how Cinebench R15 and R20 might allow AMD Zen2 to look super fast and efficient, such as what we saw at CES, but now I'm beginning to worry that in actual real-world performance, Zen2 8C/16T CPUs will be slower than the 2700X in certain tasks simply due to the multi-die architecture and the memory latencies introduced within. The faster per-core speed and IPC boosts that Zen2 offers may allow for an overall speed boost compared to the 2700X, but if the Zen2 were artificially clocked at 2700X speeds and compared to the 2700X, it would be slower. Plausible?
Unlikely; they would get slaughtered if they did that. It is more likely that they will just increase the amount of memory available internally to make up for any speed shortfalls, to act as a sort of buffer. The Threadripper 16C/32T didn't really show any problems in that department, and I think it unlikely they would introduce one in a further iteration. I do think, though, that they are not actually going to get the leaked clock speeds out of the gate, and we will see those in later product refreshes.
 
So is this AMD Ryzen 9 16 core CPU (and really, all upcoming Zen2 CPUs) going to have really poor latency to memory, thus causing an overall slowdown for tasks that do not fit into L1/2/3 cache, as compared to Zen+?

I can see how Cinebench R15 and R20 might allow AMD Zen2 to look super fast and efficient, such as what we saw at CES, but now I'm beginning to worry that in actual real-world performance, Zen2 8C/16T CPUs will be slower than the 2700X in certain tasks simply due to the multi-die architecture and the memory latencies introduced within. The faster per-core speed and IPC boosts that Zen2 offers may allow for an overall speed boost compared to the 2700X, but if the Zen2 were artificially clocked at 2700X speeds and compared to the 2700X, it would be slower. Plausible?

It's an unknown, though it makes some sense that AMD wouldn't have used chiplets on the desktop if it really introduced a significant latency. For server type workloads it hardly matters, but for real time desktop usage (games) it can matter quite a bit.

AMD will have a relatively large cache in the I/O chip that should help a fair bit. It will really be interesting to see how this design works out.
 
It's an unknown, though it makes some sense that AMD wouldn't have used chiplets on the desktop if it really introduced a significant latency. For server type workloads it hardly matters, but for real time desktop usage (games) it can matter quite a bit.

AMD will have a relatively large cache in the I/O chip that should help a fair bit. It will really be interesting to see how this design works out.

Yeah I was about to say...
AMD wouldn't release a chiplet CPU for home users if it wasn't better than their previous CPU...
 
You didn't read your own post. Before sperging out at me and calling me a child, maybe read your own post first, the one I was referring to? I operated with four threads in Windows for 'office editor' functions, amongst others, with an i3; it sucked ass with documents containing images and was not sufficient for the job. That's the point I tried to make: your assertion is incorrect in my experience.

But yes, I agree with you saying that gamers don't need 16+ threads. 8 is enough for now. In the future, maybe not, though.

Edit to add: the i3 I used was the same speed clock-wise as my 2600k at stock. The 2600k has a small OC, around 4.2 or so, for stability. The biggest difference was threads (maybe cache?) and it was night and day.

I'll add a different hot take: I used a 7500U, a 2C4T ultrabook CPU, in an ultrabook with an NVMe drive and 16GB of RAM. For any common desktop task, this machine ran at the same speed as the 4.7GHz 8700k with 32GB of faster RAM.

Now I have a folding ultrabook with an 8550U, which is materially faster for strenuous photo tasks like multi-shot merges of different types, but is otherwise indistinguishable. I just happen to both a) have a use for tablet mode and b) have the workload to appreciate extra grunt for certain types of photo editing while on the go. My girlfriend now rocks the 7500U-based laptop, an XPS 13, for school, and it's still as blazing fast as the day I bought it.


I'll also add that I do not intend for my anecdotal experience to negate yours, N4CR.
 
Wow, now Intel is affected by some newfound vulnerabilities and AMD is not affected by them. In fact, Intel is recommending turning off Hyper-Threading on any processor older than the 8000 series. Seriously, you cannot make this stuff up. This is another plus for AMD, but they cannot sit on their laurels; they need to take advantage of this and not stop kicking just because Intel is on the ground. This 3000 series from AMD may be an even bigger deal because of this issue. :)
 
No, they are not "perceived Intel Loyalties" but real ones. There are folks here that will not abandon Intel no matter the reason.

Look, if someone refuses to use the best tool for the job- all technical variables objectively considered- then I wouldn't consider them [H]. That's an unsupportable subjective bias and should be confronted when expressed. It hurts the community and disincentivizes innovation.

If Ryzen were superior for my purposes, I'd be running a Ryzen CPU right now. If a Radeon were superior for my purposes (when I last purchased a GPU), I'd be running a Radeon GPU right now. Neither was true when I made my last few purchases, neither is true today, and neither is likely to be true in the near future.

Yet I still respect AMD's work, I respect that they've innovated to address niches that the likes of Intel and Nvidia have focused less on, and I respect that they do represent a significant value for a widening range of customer computing workloads and experiences.
 
Wow, now Intel is affected by some newfound vulnerabilities and AMD is not affected by them. In fact, Intel is recommending turning off Hyper-Threading on any processor older than the 8000 series. Seriously, you cannot make this stuff up. This is another plus for AMD, but they cannot sit on their laurels; they need to take advantage of this and not stop kicking just because Intel is on the ground. This 3000 series from AMD may be an even bigger deal because of this issue. :)

Older than the 8000-series? That's still older than Ryzen, and in single-core workloads, those CPUs are still faster than Ryzen too.

Keep on keepin' on.
 
Surprised no one is talking about the AMD X570 chipset, which will be running very hot; it has a fan. So if you buy a low-end board that has no fan, it's going to throttle. My guess is they didn't die-shrink the chipset.
 