AMD Ryzen 9 3000 is a 16-core Socket AM4 Beast

I don't really need an upgrade per se, and I really don't have the time, but I've been looking forward to building something new. The most current CPU in my house is a low-power-variant Ivy Bridge processor in my pfSense box.

+1, same boat exactly. Sandy Bridge in my main rig; I do have a $50 old Dell laptop with an Ivy Bridge mobile. My PC isn't holding me back from anything, but it just feels like time for a refresh. A couple of SATA ports and hard drives have died over the years, so I've got all my cold storage on externals; the boot drive is almost full and I don't want to spend a bunch of time and money replacing it with a bigger SSD and redoing Windows and all my programs; and something has recently started making a faint whining noise. The hype for Zen 2 feels like the Sandy Bridge hype all over again, and Sandy Bridge was worth the hype. If you told me in January 2011 that the chip I was buying on release day for $180 at Microcenter was going to still be running fine in my main rig 8.5 years later, I would have been shocked. In the 8.5 years preceding that, I went from a P4 1.6 to a P4 2.4 to an Athlon X2 to a Core 2 Duo to a Core 2 Quad, and each upgrade was a huge improvement.
 
...it doesn't actually represent modern rendering applications either. CB15 represents CB15; that's the point.

No application or benchmark represents any other, as they all behave differently, modern or not. If both camps are still using CB15 and reviewers are still using it, then the results must be relevant. So the point you are trying to make seems to be pointless from that perspective.

Anyhow, I can't wait till we have full reviews with a slew of different performance results... I hope we are not disappointed.
 
+1, same boat exactly. Sandy Bridge in my main rig; I do have a $50 old Dell laptop with an Ivy Bridge mobile. My PC isn't holding me back from anything, but it just feels like time for a refresh. A couple of SATA ports and hard drives have died over the years, so I've got all my cold storage on externals; the boot drive is almost full and I don't want to spend a bunch of time and money replacing it with a bigger SSD and redoing Windows and all my programs; and something has recently started making a faint whining noise. The hype for Zen 2 feels like the Sandy Bridge hype all over again, and Sandy Bridge was worth the hype. If you told me in January 2011 that the chip I was buying on release day for $180 at Microcenter was going to still be running fine in my main rig 8.5 years later, I would have been shocked. In the 8.5 years preceding that, I went from a P4 1.6 to a P4 2.4 to an Athlon X2 to a Core 2 Duo to a Core 2 Quad, and each upgrade was a huge improvement.
Those were fun times. I bought the same chip on launch day. Mine is long gone but SB surely did deliver. Hopefully Ryzen delivers here as well.
 
If both camps are still using CB15 and reviewers are still using it, then the results must be relevant. So the point you are trying to make seems to be pointless from that perspective.

That's the thing: they are relevant for comparing with previous CB15 runs. That's useful for the type of processing that this specific benchmark does, and was more useful when it was first implemented, and is far less useful now because not only is it a rendering benchmark, but it is also old enough that it no longer tracks with modern rendering workloads.

These results are cool, and I also hope that they do track with the rest of the results, but not only do we know that they may not, we also know that AMD has made changes to the architecture that could adversely affect other workloads vs. Ryzen 2.
 
In for Premiere Pro performance equal to a 16-core Threadripper. I think it will be an unmatchable value for video editors.
 
That's the thing: they are relevant for comparing with previous CB15 runs. That's useful for the type of processing that this specific benchmark does, and was more useful when it was first implemented, and is far less useful now because not only is it a rendering benchmark, but it is also old enough that it no longer tracks with modern rendering workloads.

Please show them not tracking.

Results of Blender track exactly with CB15.

https://www.techpowerup.com/reviews/AMD/Ryzen_Threadripper_2970WX/9.html

Unless there is some outlier (ironic), rendering hasn't evolved much in the past 10 years.
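To put a number on "track exactly", here's a minimal Python sketch that computes the correlation between two benchmarks across a handful of CPUs. The scores are made-up placeholders for illustration, not the TechPowerUp data linked above.

```python
# Rough check of whether benchmark A "tracks" benchmark B across CPUs.
# All scores below are hypothetical placeholders, NOT real review data.
from math import sqrt

cb15_nt = {"CPU A": 1000, "CPU B": 1800, "CPU C": 2700, "CPU D": 4100}   # CB15 nT points
blender = {"CPU A": 102,  "CPU B": 185,  "CPU C": 268,  "CPU D": 415}    # e.g. samples/min

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cpus = sorted(cb15_nt)
r = pearson([cb15_nt[c] for c in cpus], [blender[c] for c in cpus])
print(f"Pearson r between CB15 and Blender scores: {r:.3f}")  # close to 1.0 = they track
```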
 
That's one example, and I'll add that it even depends on which particular workload you're pushing through a particular application, even for rendering. Step outside of rendering and it's all over the place, which is the point.

POV-Ray, Blender, Corona, and CB all generally fall into the same performance ladder. PCMark isn't a rendering test.

Now you're moving the goalposts. Either it's a valid rendering metric or it isn't.
 
It's valid for comparing CB15 :)

You still haven't shown an outlier for rendering. Either back up your point or retract it.

Edit:

...it doesn't actually represent modern rendering applications either. CB15 represents CB15; that's the point.

CB15 represents one rendering path for Maxon Cinema 4D.

Your whole argument is a fallacy.
 
One great score in one single application does not a great CPU make. Wait for the release to see the true metrics.
 
I thought Cinebench was good for comparing relative performance within Cinebench. Nothing more, nothing less.

I don't think it was ever meant to establish itself as a comparative benchmark for formal professional rendering performance. It was just another performance benchmark that lets you see how processor A compares to processor B in a very repeatable and predictable way each and every time. It is a very consistent benchmark each time you run it. I always thought that was the point. I think too many people do not understand this and make it out to be something it isn't. Possible... yes/no?
 
ANY time AMD does well in a benchmark, this conversation happens. But when Intel does well, everyone's excited about how fast the chips will be.

You should scroll back to the Athlon days: the last, and only, time AMD was as fast as or ahead of Intel more or less across the board. You could make the same claim about Intel in those days.

AMD is catching up, but that's the key: they're still behind on a per-core basis. So yes, if there's an outlier, it'll be called an outlier, and in this case, where the outlier is being portrayed as an across-the-board increase, that's going to be called out too.
 
Windows has problems (scheduler???) with more than 16 cores / 32 threads...
Look at Linux benches vs. Windows and then come back... (not gaming benches)
It's not so much a scheduler problem as it is a design problem. Note that Windows has no problems with Intel chips at greater than 16C/32T, and has no problems with multi-CPU EPYCs at far greater core counts. The issue seems to be in how the current-gen AMD chips present those cores to Windows. I am sure there are ways that Microsoft can fix it, but I am not sure they will have to, as the new design should be presenting itself in a much different manner.
 
Why do you keep categorizing Zen as throughput-optimized?

If you take a fixed set of data and measure how long it takes, it is a response metric.

If you take a fixed time period and measure how much data is processed, it is a throughput workload.

Since CB15 takes a fixed set of data and times how long it takes, it is thus the former.
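A minimal Python sketch of the two measurement styles just described, using a stand-in do_work() task (the task and numbers are purely illustrative):

```python
# Two ways to measure the same hypothetical workload, do_work():
#  - response:   fix the amount of work, measure how long it takes (what CB15 does)
#  - throughput: fix a time budget, count how much work gets done
import time

def do_work(n=50_000):
    # stand-in CPU-bound task; any fixed unit of work would do
    return sum(i * i for i in range(n))

def response_metric(units=200):
    start = time.perf_counter()
    for _ in range(units):                 # fixed set of data...
        do_work()
    return time.perf_counter() - start     # ...time how long it takes

def throughput_metric(budget_s=2.0):
    done = 0
    deadline = time.perf_counter() + budget_s
    while time.perf_counter() < deadline:  # fixed time period...
        do_work()
        done += 1
    return done / budget_s                 # ...how much work per second

print(f"response:   {response_metric():.2f} s for a fixed job")
print(f"throughput: {throughput_metric():.1f} work units/s")
```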

By the above logic and your own categorization, Zen is optimized for response, not throughput.

No processor is optimized for a certain frame of reference.

Ironic that you pull out your GROMACS whistle when complaining about an outlier.

Zen is optimized for throughput, and that is because it shines in the rendering/encoding class of applications but doesn't in latency-sensitive applications such as games, since latency is the biggest defect of the µarch. AMD has released latency-optimized AGESAs to reduce latency, has released new chipsets with improved memory support for higher-clocked modules that reduce latency, virtually every review of Zen is using an OC memory/IF configuration to reduce latency, and users in forums are asking how to get the highest stable memory OC to reduce latency.

CB15 doesn't represent non-rendering applications? I swear I saw a statement that said basically the same thing. Now where did I see that... hmm, oh wait, silly me! I said it in the very response you replied to (did you fully read what I said?). I think you just confirmed the point I was trying to make. You even took it so far as to give examples, and basically, using your logic, you invalidated every benchmark/application used to judge performance, because no single benchmark/application is capable of demonstrating relative performance in every situation for every workload category, be it rendering, compression algorithms, gaming, etc. So basically, with your logic, 7-Zip benchmarks are invalid because they don't represent rendering performance. Do you see how silly your argument is now?

BTW, how is a rendering benchmark not a representation of rendering? I get that a benchmark is going to behave differently than an actual rendering application; that is a given. Just as gaming benchmarks behave differently than actual gameplay. But they are still tools that give us indicators of how a piece of hardware will perform doing a particular workload, and a way to judge performance between different manufacturers/architectures, etc.

I wrote "The problem here isn't that CB15 doesn't represent non-rendering applications [...] The problem is that CB15 is an outlier (it doesn't represent rendering, because Blender, Corona,... behave differently)" You can keep ignoring my point about outliers, but it will not go away. Also your claim "So basically with your logic, 7-Zip benchmarks are invalid because it doesn't represent rendering performance", not only is ridiculous, but it has zero relation to what I am saying.
 
If we take average performance over a range of benchmarks that, say, represent >95% of applicable workloads, and there's this one benchmark that really stands out one way or another, we can call it an outlier. CB15 more or less is that.

Indeed. As I wrote above in #304

The Stilt said:
Cinebench R15 is some sort of a best case benchmark for AMD, that's why it's an outlier.
The IPC difference is abnormally low (5.6% vs. 14.4% average) and the SMT yield is abnormally high (41.6% vs. 28.7% average).
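To illustrate what "outlier" means here (this is not The Stilt's data set), a short Python sketch: take per-benchmark IPC-difference figures, compare each one against the mean and spread of all the others, and flag anything far outside. Only the 5.6% CB15 figure and the ~41.6% SMT yield come from the quote above; every other number is a hypothetical placeholder.

```python
# Sketch of flagging an outlier among per-benchmark IPC differences.
# Only the 5.6% CB15 figure is from the quote above; the rest are hypothetical.
from statistics import mean, stdev

ipc_diff_pct = {
    "CB15":    5.6,    # quoted figure
    "bench_a": 13.9,   # hypothetical
    "bench_b": 15.2,   # hypothetical
    "bench_c": 14.8,   # hypothetical
    "bench_d": 13.6,   # hypothetical
}

def leave_one_out_z(name):
    # how far this benchmark sits from the mean/spread of all the others
    rest = [v for k, v in ipc_diff_pct.items() if k != name]
    return (ipc_diff_pct[name] - mean(rest)) / stdev(rest)

for name, val in ipc_diff_pct.items():
    z = leave_one_out_z(name)
    flag = "  <-- outlier" if abs(z) > 3 else ""
    print(f"{name:8s} {val:5.1f}%  z={z:+5.1f}{flag}")

# SMT yield is simply the gain from enabling SMT on the same chip:
#   yield = score_with_smt / score_without_smt - 1
print(f"SMT yield example: {1416 / 1000 - 1:.1%}")  # hypothetical scores giving ~41.6%
```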
 
ahhhh the forces of both sides come out to battle.... I will give the edge to the Evil side though so far.
 
ANY time AMD does well in a benchmark, this conversation happens. But when Intel does well, everyone's excited about how fast the chips will be.

Remember what happened before Zen launched. Before the Zen launch, people leaked CPU-Z benches and other irrelevant benches, and people in forums started hyping the thing. Higher IPC than Broadwell! It was later shown not only that CPU-Z was an outlier, but that the tested version had a bug that favored Zen scores due to its 512KB L2. When Zen launched, reviews showed that Zen's IPC in real-world applications and games was 10-15% behind Broadwell's.
 
ANY time AMD does well in a benchmark, this conversation happens. But when Intel does well, everyone's excited about how fast the chips will be.

Today's top consumer CPUs are already overkill for the average buyer, and probably for most of us here if we're completely honest.

If we are getting to the point of splitting hairs on performance, there likely won't be "one best CPU", and you should look at benchmarks that fit your personal usage.

Cinebench is probably fairly representative of 3D rendering, but how many people are actually doing 3D rendering on their home computers? It's more like a synthetic benchmark for most people.

Video encoding will be a far more common use case. It would make sense for most people to focus on that, and not 3D rendering, or to focus on some other use case that you actually do.
 
Remember: when Intel dominates a benchmark, it's because it's a normal task. When AMD dominates: it's because of a freak accident.

We're probably going to see a lot of freak accidents soon...
Less pouting, more benching.
 
Zen is optimized for throughput, and that is because it shines in the rendering/encoding class of applications but doesn't in latency-sensitive applications such as games, since latency is the biggest defect of the µarch. AMD has released latency-optimized AGESAs to reduce latency, has released new chipsets with improved memory support for higher-clocked modules that reduce latency, virtually every review of Zen is using an OC memory/IF configuration to reduce latency, and users in forums are asking how to get the highest stable memory OC to reduce latency.



I wrote "The problem here isn't that CB15 doesn't represent non-rendering applications [...] The problem is that CB15 is an outlier (it doesn't represent rendering, because Blender, Corona,... behave differently)" You can keep ignoring my point about outliers, but it will not go away. Also your claim "So basically with your logic, 7-Zip benchmarks are invalid because it doesn't represent rendering performance", not only is ridiculous, but it has zero relation to what I am saying.

Your argument, and the extremely "loose" definition of outlier you are trying to use, places EVERY benchmark used to compare performance in the category of an outlier, because NO benchmark will EVER give you real-world, accurate numbers relative to other applications, or even to its own application outside of benchmarking (game benchmarks are a great example of this), as those numbers will never represent real-world performance and are only comparable to their own results and to no other application's/benchmark's results because... you guessed it, they all behave differently. So it's not that I am ignoring you outright; it's the fact that either all benchmarks are invalid if we use your "loose" definition of outlier, not just CB15, or your argument is invalid. Obviously all benchmarks are not invalid, as they are tools needed to compare and track performance differences, which means your argument is. So I have chosen not to argue something that is not a valid argument.
 
the extreme "loose" term of the definition of outlier you are trying to use, places EVERY benchmark used to compare performance under the category of an outlier because NO benchmark will EVER give you real world,accurate numbers relative to other applications or even it's own application outside of benchmarking (game benchmarks are a great example of this), as those number will never represent real world performance and are only comparable to to their own results and no other applications/benchmarks results because... you guessed it, they all behave differently.

CB15 is an outlier because it doesn't track with the average of all other benchmarks for the architectures in question. Please stop dancing around this. It's an outlier. It's of limited usefulness. When it is used on its own, it can only be seen as being used to put the referenced AMD part in the best light possible, and not to present a realistic perspective that might be applicable to a broad swath of workloads.
 
CB15 is an outlier because it doesn't track with the average of all other benchmarks for the architectures in question. Please stop dancing around this. It's an outlier. It's of limited usefulness. When it is used on its own, it can only be seen as being used to put the referenced AMD part in the best light possible, and not to present a realistic perspective that might be applicable to a broad swath of workloads.

The majority of benchmarks and applications are coded/optimized for Intel, which shouldn't be surprising, as Intel has had 75% or more of the market for the past 10 years. So any that don't, and that put AMD in a good light, are considered outliers because they don't conform with other benchmark results? That is false and short-sighted, and really just biased thinking. It is the same argument that Nvidia uses when AMD does well in games. All benchmarks have limited usefulness, and that is why you can only compare a benchmark's results with its own results and not with other benchmarks.

Your argument, and Juanrga's, is just an attempt to invalidate AMD's accomplishment and downplay the results because they don't conform to the mythical and unrealistic idea that all results must be the same.
 
The majority of benchmarks and applications are coded/optimized for Intel, which shouldn't be surprising, as Intel has had 75% or more of the market for the past 10 years. So any that don't, and that put AMD in a good light, are considered outliers because they don't conform with other benchmark results? That is false and short-sighted, and really just biased thinking. It is the same argument that Nvidia uses when AMD does well in games. All benchmarks have limited usefulness, and that is why you can only compare a benchmark's results with its own results and not with other benchmarks.

My God the bias.

Please, please note that I am using the terms 'average' and 'outlier' here. If CB15 tracked with the average for Ryzen up until now, it wouldn't be an 'outlier' and it'd be a better indicator of overall performance.

But it isn't. Trying to fit results from CB15 or any outlier to potential general performance of a new architecture is entirely improper and unscientific, and doing it on purpose is a clear indicator of bias.
 
It's not so much a scheduler problem as it is a design problem. Note that Windows has no problems with Intel chips at greater than 16C/32T, and has no problems with multi-CPU EPYCs at far greater core counts. The issue seems to be in how the current-gen AMD chips present those cores to Windows. I am sure there are ways that Microsoft can fix it, but I am not sure they will have to, as the new design should be presenting itself in a much different manner.

Well, they sure have no problem presenting those cores to Linux...
Under the same benchmark:
All else the same:
2990WX performance under Windows = not as great as it could be (doesn't scale with core count)
2990WX performance under Linux = great! (scales with core count)

Simple deduction will tell you that Windows is not handling Threadripper's cores like it should, and that a free and open-source OS does much better in the same benchmark...
How does that NOT point the issue to Windows???
Surely MS has competent programmers... oh wait...

Also, EPYC is indeed faster on Linux, just not as pronounced, because each die basically has its own DDR4 controller.
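For anyone who wants to sanity-check the scaling claim on their own box, here's a minimal, OS-agnostic Python sketch of the usual approach: run the same fixed CPU-bound job at increasing worker counts and compare the speedup against the ideal. The workload and worker counts are placeholders, not the benchmark from those reviews.

```python
# Minimal scaling check: same fixed job, increasing worker counts.
# If speedup flattens well below the worker count, the scheduler or
# memory topology is leaving cores underused.
import time
from multiprocessing import Pool

def work(_):
    return sum(i * i for i in range(2_000_000))  # placeholder CPU-bound task

def run(workers, jobs=64):
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(work, range(jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    base = run(1)
    for n in (2, 4, 8, 16, 32):
        t = run(n)
        print(f"{n:2d} workers: {t:6.2f} s  speedup {base / t:4.1f}x (ideal {n}x)")
```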
 
Well, they sure have no problem presenting those cores to Linux...
Under the same benchmark:
All else the same:
2990WX performance under Windows = not as great as it could be (doesn't scale with core count)
2990WX performance under Linux = great! (scales with core count)

Simple deduction will tell you that Windows is not handling Threadripper's cores like it should, and that a free and open-source OS does much better in the same benchmark...
How does that NOT point the issue to Windows???
Surely MS has competent programmers... oh wait...

The Windows Desktop OS was never really designed to work with that many cores and threads. However, they are working on it at a kernel level, from what I understand.

Edit: What about the Server OS?
 
The Windows Desktop OS was never really designed to work with that many cores and threads.

Well, that and with a horrific NUMA layer thrown in too. The Linux kernel has pretty much been designed from the beginning to deal with such deficiencies, while Threadripper is really the first example of such a product, with so many orphaned cores, for desktop use.
 
Well, they sure have no problem presenting those cores to Linux...
Under the same benchmark:
All else the same:
2990WX performance under Windows = not as great as it could be (doesn't scale with core count)
2990WX performance under Linux = great! (scales with core count)

Simple deduction will tell you that Windows is not handling Threadripper's cores like it should, and that a free and open-source OS does much better in the same benchmark...
How does that NOT point the issue to Windows???
Surely MS has competent programmers... oh wait...

Also, EPYC is indeed faster on Linux, just not as pronounced, because each die basically has its own DDR4 controller.
It certainly is a Windows problem, but not solely one; if it were strictly a scheduling issue then it would affect Intel as well. I was just trying to say that there is plenty of blame to go around on this issue.
 
The Windows Desktop OS was never really designed to work with that many cores and threads. However, they are working on it at a kernel level, from what I understand.

Edit: What about the Server OS?
I don't see any strange issues on my EPYC servers, which are running Server 2019 Datacenter. Not saying there isn't one, just that my use cases aren't encountering it if there is.
 
My God the bias.

Please, please note that I am using the terms 'average' and 'outlier' here. If CB15 tracked with the average for Ryzen up until now, it wouldn't be an 'outlier' and it'd be a better indicator of overall performance.

But it isn't. Trying to fit results from CB15 or any outlier to potential general performance of a new architecture is entirely improper and unscientific, and doing it on purpose is a clear indicator of bias.

You are so far off base it isn't funny anymore. Benchmarks are just tools, or ONE step in gathering and collecting data to form a conclusion; the benchmark and its results are not the conclusion, nor does that make the benchmark an outlier or an invalid result, it is just one result out of many. No benchmark tracks averages "up till now"; they only track the performance of that run, and even benchmark suites are just tracking the results from that run-through. Some benchmarks will give positive results, some will give negative results. Gathering data and tracking averages is the job of the people collecting the data. In this case, CB15 gives positive results. It is only one of the many results that we will use to determine the overall performance. What you are saying places ALL benchmarks in the category of being outliers, which is not valid and is not accurate.

The reason you are trying to define CB15 as an outlier is that its results differ from other benchmarks, but at the same time, while you are trying to define CB15 that way, you are ignoring that the majority of benchmarks are designed for Intel architecture, which invalidates your claim that CB15 is an outlier. It has nothing to do with being biased; it has to do with fact. Just as most games are designed and optimized for Nvidia: both Intel and Nvidia have the majority of the market share, and since the architectures are different, developers are going to focus, and do focus, on the majority, not the minority. And as much as we want to believe that benchmarks are not biased and all tests are on fair, even ground, they are not. To rightfully determine whether any benchmark is an outlier, we have to have an unbiased control point to judge from, and right now we do not have that, because nearly all reference points are designed for Intel architecture and are not unbiased.
 
2990WX performance under Windows = not as great as it could be (doesn't scale with core count)
2990WX performance under Linux = great! (scales with core count)
How does that NOT point the issue to Windows???
Surely MS has competent programmers... oh wait...
Stop it, I just snorted Dr Pepper and a bite of turkey sandwich out my nose. :D
 
What you are saying places ALL benchmarks in the category of being outliers, which is not valid and is not accurate.

No, what I will keep repeating is that CB15 is an outlier from the average. There are benchmarks that track closer to the average that would be more appropriate, and further, that would be more likely to enlighten us on the effects of AMD's architectural reorganization.

Given both the outlier status and that AMD reorganized their architecture, CB15 is likely to be an extremely poor measure of Ryzen 3's overall performance.
 
No, what I will keep repeating is that CB15 is an outlier from the average. There are benchmarks that track closer to the average that would be more appropriate, and further, that would be more likely to enlighten us on the effects of AMD's architectural reorganization.

Given both the outlier status and that AMD reorganized their architecture, CB15 is likely to be an extremely poor measure of Ryzen 3's overall performance.

Stop saying "Ryzen 3"! I keep thinking of the Ryzen 3 1200, 1300, etc.

How about Zen 2... or Ryzen 3000 series...

/rant
 
No, what I will keep repeating is that CB15 is an outlier from the average. There are benchmarks that track closer to the average that would be more appropriate, and further, that would be more likely to enlighten us on the effects of AMD's architectural reorganization.

Given both the outlier status and that AMD reorganized their architecture, CB15 is likely to be an extremely poor measure of Ryzen 3's overall performance.

The problem with this is that we don't have a proper unbiased control point to start from, because, like it or not, deny it or not, the majority of tests are built on the foundation of Intel architecture, not AMD's Ryzen architecture. (This is not a biased statement; it is fact.) That's why, in the world of benchmarking and performance testing, you can't label any benchmark as an outlier: most are designed to work better on one or the other architecture, with the majority of them geared towards Intel, but generally not both properly. Some are continually modifying their code to properly recognize and use the Ryzen architecture, but are basically still being run on top of a foundation that was built for testing Intel's architecture. Not to mention the underlying influences caused by the OS.
 
I don't see any strange issues on my EPYC servers, which are running Server 2019 Datacenter. Not saying there isn't one, just that my use cases aren't encountering it if there is.

They're not really issues, more like "Linux is faster in benchmarks".
Nothing you would really see in real-world use.
I want MS to catch up to Linux in that regard (high core counts, e.g. 2990WX perf), to at least be within 5%.
All my PCs run Windows and I want moar powah!
 
Your argument, and the extremely "loose" definition of outlier you are trying to use, places EVERY benchmark used to compare performance in the category of an outlier, because NO benchmark will EVER give you real-world, accurate numbers relative to other applications, or even to its own application outside of benchmarking (game benchmarks are a great example of this), as those numbers will never represent real-world performance and are only comparable to their own results and to no other application's/benchmark's results because... you guessed it, they all behave differently. So it's not that I am ignoring you outright; it's the fact that either all benchmarks are invalid if we use your "loose" definition of outlier, not just CB15, or your argument is invalid. Obviously all benchmarks are not invalid, as they are tools needed to compare and track performance differences, which means your argument is. So I have chosen not to argue something that is not a valid argument.

First, it seems you don't understand the concept of an outlier. Second, you make it sound as if I am alone in this, but I have quoted two other people who claim CB15, and only CB15, is an outlier. And one of the people I quoted ran several dozen different benches on Ryzen systems.

Your argument, and Juanrga's, is just an attempt to invalidate AMD's accomplishment and downplay the results because they don't conform to the mythical and unrealistic idea that all results must be the same.

And again you completely misinterpret my point. My point is not that "all results must be the same". Basically, you have no idea what I am talking about.
 