AMD Ryzen 9 3000 is a 16-core Socket AM4 Beast

sknight
AMD is putting the finishing touches on its 3rd-generation Ryzen socket AM4 processor family, which is slated for a Computex 2019 unveiling followed by possible market availability around E3. Based on the "Matisse" multi-chip module that combines up to two 8-core "Zen 2" chiplets with a 14 nm I/O controller die, these processors bring a 50-100 percent increase in core counts over the current generation. The Ryzen 5 series now includes 8-core/16-thread parts, the Ryzen 7 series chips are 12-core/24-thread, and the newly created Ryzen 9 series (designed to rival Intel's Core i9 on LGA115x) will include 16-core/32-thread chips.

Thai PC enthusiast TUM_APISAK confirmed the existence of the Ryzen 9 series, having gotten hold of an engineering sample of the 16-core/32-thread chip that ticks at 3.30 GHz with a 4.30 GHz Precision Boost frequency. The infamous AdoredTV leaks that sketched out AMD's 3rd-generation Ryzen roadmap referenced two desktop Ryzen 9 parts, the Ryzen 9 3800X and the Ryzen 9 3850X. The 3800X is supposedly clocked at 3.90 GHz with a 4.70 GHz boost and a 125 W TDP, while the 3850X tops the charts at 4.30 GHz base and a staggering 5.10 GHz boost, with the rated TDP shooting up to 135 W. We can now imagine why some motherboard vendors are being selective with BIOS updates on their lower-end boards: AMD is probably maximizing the clock-speed headroom of these chips out of the box to preempt Intel's "Comet Lake" 10-core/20-thread processor.


 
I guess with 10 cores on the horizon, AMD has no choice but to whip out the big guns at launch rather than wait for a refresh to do 16 cores. At least the 16-core 135 W TDP doesn't look too crazy when you realize how many watts the 9900K draws.

Give me the 8-core version at slightly lower boost clocks, and you have a deal. The number of applications scaling efficiently beyond 16 threads is still pretty small, and will take some time to grow.

I'll take the best performance for my dollar TODAY, please :D
 
Memory bandwidth has always had a negligible impact on most real-world applications, so I wouldn't be worried about that. Upgrading memory bandwidth is the biggest non-upgrade one can make.

For desktop gaming and general pottering around, maybe so, but with 16 real cores, when you really need to crunch (e.g. Adobe apps, 64-bit Excel, or compiling big programs), getting significantly more than 46 GB per second is important. That just won't happen with two memory channels of DDR4.
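For reference, a quick back-of-the-envelope sketch of what two DDR4 channels can theoretically deliver (assuming DDR4-3200; other speeds scale linearly):

```python
# Peak theoretical bandwidth of dual-channel DDR4 (assuming DDR4-3200).
# Each DDR4 channel is 64 bits (8 bytes) wide, moving one transfer per MT.
transfers_per_sec = 3200e6   # DDR4-3200 = 3200 MT/s (assumed speed)
bytes_per_transfer = 8       # 64-bit channel width
channels = 2                 # dual channel on AM4

peak_gbps = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(f"Peak dual-channel bandwidth: {peak_gbps:.1f} GB/s")  # 51.2 GB/s

cores = 16
print(f"Shared across {cores} cores: {peak_gbps / cores:.1f} GB/s each")  # 3.2 GB/s
```

So the ~46 GB/s figure above is already close to the theoretical ceiling of a dual-channel DDR4-3200 setup, and real sustained numbers come in lower than the peak.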

I swear that 12-16 cores for the mainstream/mid-high end at the moment sounds like a solution looking for a problem.
 
The Ryzen 5 series now includes 8-core/16-thread parts, the Ryzen 7 series chips are 12-core/24-thread, and the newly created Ryzen 9 series (designed to rival Intel's Core i9 on LGA115x) will include 16-core/32-thread chips.

This is just hearsay.

The infamous AdoredTV leaks that sketched out AMD's 3rd-generation Ryzen roadmap referenced two desktop Ryzen 9 parts, the Ryzen 9 3800X and the Ryzen 9 3850X. The 3800X is supposedly clocked at 3.90 GHz with a 4.70 GHz boost and a 125 W TDP, while the 3850X tops the charts at 4.30 GHz base and a staggering 5.10 GHz boost, with the rated TDP shooting up to 135 W. We can now imagine why some motherboard vendors are being selective with BIOS updates on their lower-end boards: AMD is probably maximizing the clock-speed headroom of these chips out of the box to preempt Intel's "Comet Lake" 10-core/20-thread processor.

...more hearsay.
 
2950x is so nice.

Here is a typical scenario for me.

Handbrake running H.265 re-encodes
A second Handbrake instance doing H.265 as well
Maybe some StaxRip running too, though that's naturally heavy on the CPU.

Play Titanfall 2 pegged at 144 fps (the game engine's max speed)

The processor still has 50% headroom, so I can run something else at the same time as well.

Though I'm skeptical about the performance of 16 cores fed only by dual-channel RAM.

We'll have to see. No, this isn't a debate about dual vs. quad channel. The fact is we haven't seen a desktop chip with 16 cores/32 threads, and we haven't seen how one will fare on only two RAM channels. That's a lot of cores to feed with only two channels.
 
As a gamer, I've yet to see my quad core left wanting. As an IT guy, I've seen very few workloads for office machines requiring more than four cores. A few 2D CAD machines and extreme Excel users get hex-core i7-8700 CPUs, and those users are happy.

I'll go 16 cores just because I want to but I laugh at the thought of it. One of my servers runs 16 cores and can support 44 VMs and serve data to 1100+ employees. Oh what glorious excess we have access to now.
 
If it can do 4.8+ GHz sustained on all cores (watercooled, obviously), this may very well be my next system upgrade. The i7-4930K is getting a little long in the tooth, but then again I hardly ever push this system to its max...
 
I want a high-clocked 8-core. I can't ever see needing more, but it's nice that they are coming out with these.
640kbytes is enough for everyone :D

Personally, I do stuff that can eat an almost infinite number of cores, and a project can take weeks at 100% CPU load, so I would love a cheap 16-core system that also has enough per-core performance for games.
 
640kbytes is enough for everyone :D

Personally, I do stuff that can eat an almost infinite number of cores, and a project can take weeks at 100% CPU load, so I would love a cheap 16-core system that also has enough per-core performance for games.

Yeah, it will be nice for those who can use it. I bought a 1700 when they first came out and I've never gotten it over 50 percent most of the time; the only time was ripping a few videos from disc.
 
Yeah, it will be nice for those who can use it. I bought a 1700 when they first came out and I've never gotten it over 50 percent most of the time; the only time was ripping a few videos from disc.

I work a lot with brute-force compression, which basically means running the same "task", just slightly different, on each CPU core.
Hundreds of iterations that are each done individually make good use of any cores available.
I/O can become a bottleneck, though I haven't run into it yet.
 
No use for 16 cores?

I absolutely hammer my 2950X. It sits at 100% load about 80% of the week.

It's a video compressing monster.

So a 16-core can be fully utilized. But the mainstream doesn't do work outside of Word with their PCs. They just game, browse the internet, and play music, whatever.
 
For desktop gaming and general pottering around, maybe so, but with 16 real cores, when you really need to crunch (e.g. Adobe apps, 64-bit Excel, or compiling big programs), getting significantly more than 46 GB per second is important. That just won't happen with two memory channels of DDR4.

I swear that 12-16 cores for the mainstream/mid-high end at the moment sounds like a solution looking for a problem.
x265 encoding disagrees with you!!
 
Time to upgrade the i7-7700K Plex server to 16 cores! Now I need gigabit internet so 15 of my friends can watch 4K streams at the same time.
 
x265 encoding disagrees with you!!

But, unless you are doing a lot of encoding/rendering then very high core counts are likely a waste. Note I am talking about home users, not server farms.

Encoding/rendering makes for great benchmarks, but these aren't real-time activities, so how much impact does doing them faster have for a home user? Hopefully no one is staring at the screen for 30 minutes waiting for an encode to finish. It's a background or batch activity, done either completely away from your computer or while doing unrelated foreground tasks.

Moving your encode from 28 minutes to 18 minutes won't really change much, unless you do so much that it can't run in a batch overnight, or you are on some kind of deadline, which in either case likely means it's a paid gig.

Which is why I actually think gaming is one of the most important home CPU loads. It's real time, and its impact is large and can be felt immediately if your CPU isn't fast enough. Come back in five years and I bet you will still be hard-pressed to find more than marginal gains for gaming beyond 8 cores.
 
And they will be starved of memory bandwidth

Memory bandwidth has always had a negligible impact on most real-world applications, so I wouldn't be worried about that. Upgrading memory bandwidth is the biggest non-upgrade one can make.

For applications that do not benefit from memory bandwidth, this isn't a big deal. However, there are many applications where this is the case. And of course, the applications that fall into this category are typically used by the HEDT crowd. The very people who would want a 16c/32t CPU on the cheap.

They will be starved for memory bandwidth in many applications that can actually make use of that many cores. We already see this with the Threadripper 2990WX. It performs great in specific applications but falls woefully short when memory bandwidth is needed. You have to keep in mind that the 2990WX has half the bandwidth of its Epyc counterparts and a Ryzen 9 series CPU with 16c/32t will have half as much bandwidth as any Threadripper CPU does. The latency issues we see with Threadripper parts in gaming may also rear their ugly head if the chip has two 8c/16t CPUs connected in the same package as the article claims. AMD would have to have some massive IPC gains to offset this in games.

Though truthfully, we've never seen processors with this many cores outside the HEDT space, which has always had more bandwidth than the standard mainstream parts. It will be interesting to see what those numbers look like.
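To put the bandwidth-per-core argument in rough numbers, here's a sketch comparing the three platforms mentioned. Channel counts match each platform; DDR4-3200 is an assumed common speed (Epyc's officially supported speed is lower, so treat these as upper bounds):

```python
# Rough peak memory bandwidth per core, assuming DDR4-3200 on all platforms.
per_channel_gbps = 3200e6 * 8 / 1e9   # 25.6 GB/s per 64-bit DDR4-3200 channel

platforms = {
    # name: (memory channels, cores)
    "Epyc (32 cores, 8 channels)":                (8, 32),
    "Threadripper 2990WX (32 cores, 4 channels)": (4, 32),
    "Ryzen 9 on AM4 (16 cores, 2 channels)":      (2, 16),
}

for name, (channels, cores) in platforms.items():
    per_core = per_channel_gbps * channels / cores
    print(f"{name}: {per_core:.1f} GB/s per core")
# Epyc: 6.4, 2990WX: 3.2, Ryzen 9: 3.2 -- a hypothetical 16-core AM4 chip
# lands at the same per-core bandwidth as the 2990WX, half of Epyc's.
```

Which lines up with the point above: per core, a 16c/32t AM4 part would sit right where the 2990WX does, and the 2990WX already struggles in bandwidth-bound workloads.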
 
For applications that do not benefit from memory bandwidth, this isn't a big deal. However, there are many applications where this is the case. And of course, the applications that fall into this category are typically used by the HEDT crowd. The very people who would want a 16c/32t CPU on the cheap.

They will be starved for memory bandwidth in many applications that can actually make use of that many cores. We already see this with the Threadripper 2990WX. It performs great in specific applications but falls woefully short when memory bandwidth is needed. You have to keep in mind that the 2990WX has half the bandwidth of its Epyc counterparts and a Ryzen 9 series CPU with 16c/32t will have half as much bandwidth as any Threadripper CPU does. The latency issues we see with Threadripper parts in gaming may also rear their ugly head if the chip has two 8c/16t CPUs connected in the same package as the article claims. AMD would have to have some massive IPC gains to offset this in games.

Though truthfully, we've never seen processors with this many cores outside the HEDT space, which has always had more bandwidth than the standard mainstream parts. It will be interesting to see what those numbers look like.

What if they pair this with DDR5 when it comes out soon?
 
So you were the guy that bought those 3-core Athlons?
Ha, I still have one running, but it's unlocked to a quad :)
But, unless you are doing a lot of encoding/rendering then very high core counts are likely a waste. Note I am talking about home users, not server farms.

Encoding/rendering makes for great benchmarks, but these aren't real-time activities, so how much impact does doing them faster have for a home user? Hopefully no one is staring at the screen for 30 minutes waiting for an encode to finish. It's a background or batch activity, done either completely away from your computer or while doing unrelated foreground tasks.

Moving your encode from 28 minutes to 18 minutes won't really change much, unless you do so much that it can't run in a batch overnight, or you are on some kind of deadline, which in either case likely means it's a paid gig.

Which is why I actually think gaming is one of the most important home CPU loads. It's real time, and its impact is large and can be felt immediately if your CPU isn't fast enough. Come back in five years and I bet you will still be hard-pressed to find more than marginal gains for gaming beyond 8 cores.
I do mine in batches, but more cores means less time. Of course there is an advantage: I am less likely to have to fiddle with the settings to reduce encode time, and faster encodes free up my server for other tasks.

I am not sure how anyone could be complaining about AMD offering these chips on AM4. If you don't have a use, that's fine. But I say bring it on.
 
What if they pair this with DDR5 when it comes out soon?

Admittedly, I hadn't considered that. For X570, that's certainly a mitigating factor; for X470, not so much. The same thing that always happens when new memory comes out will probably happen again: memory bandwidth will double, power consumption will drop, and latencies will rise, making the bandwidth gained a moot point in the short term.
 
For applications that do not benefit from memory bandwidth, this isn't a big deal. However, there are many applications where this is the case. And of course, the applications that fall into this category are typically used by the HEDT crowd. The very people who would want a 16c/32t CPU on the cheap.

They will be starved for memory bandwidth in many applications that can actually make use of that many cores. We already see this with the Threadripper 2990WX. It performs great in specific applications but falls woefully short when memory bandwidth is needed. You have to keep in mind that the 2990WX has half the bandwidth of its Epyc counterparts and a Ryzen 9 series CPU with 16c/32t will have half as much bandwidth as any Threadripper CPU does. The latency issues we see with Threadripper parts in gaming may also rear their ugly head if the chip has two 8c/16t CPUs connected in the same package as the article claims. AMD would have to have some massive IPC gains to offset this in games.

Though truthfully, we've never seen processors with this many cores outside the HEDT space, which has always had more bandwidth than the standard mainstream parts. It will be interesting to see what those numbers look like.
Well, luckily for me, I'm not interested in applications that are memory-bandwidth constrained. As for games, I don't think many use even half of 16 cores. I'm not familiar with the Ryzen architecture; isn't the bandwidth shared? Meaning if half of the 16 cores are idle, can't the remaining 8 use all the bandwidth? If that's not the case, then yeah, I revise my statement; it could suck.
 
I'm interested in the cherry-picking of cores, e.g. a 16-core with 4 active cores on each die that run the highest overclock for gaming and general use. Maybe the active cores can be spread out thermally this way, which could increase maximum clocks.

For brute-force video encoding you have all 16 going instead. We've seen something a little like this already, but without the other cores being disabled or heavily downclocked/deactivated.
 
I do mine in batches, but more cores means less time. Of course there is an advantage: I am less likely to have to fiddle with the settings to reduce encode time, and faster encodes free up my server for other tasks.

I am not sure how anyone could be complaining about AMD offering these chips on AM4. If you don't have a use, that's fine. But I say bring it on.

Who's complaining?

I was pointing out that just because someone does encoding, doesn't mean they need 16 cores, or that the cost/tradeoffs of 16 cores are worth it.

Sure I'll take 16 cores if they are free and have no trade-offs, but that isn't the case.

If I have to pay an extra $150-$200 to double the core count so I can decrease a batch job from 2 hours to 1 hour, when it runs while I sleep, then it isn't worth the money.

If I am trading off real time performance for a decrease in that same batch job, then again it isn't worth the trade-off.
 
Well, luckily for me, I'm not interested in applications that are memory-bandwidth constrained. As for games, I don't think many use even half of 16 cores. I'm not familiar with the Ryzen architecture; isn't the bandwidth shared? Meaning if half of the 16 cores are idle, can't the remaining 8 use all the bandwidth? If that's not the case, then yeah, I revise my statement; it could suck.

The problem is that sharing bandwidth across CCX complexes, or in this case dies, generally comes with a latency penalty.
 