AMD Ryzen 9 3000 is a 16-core Socket AM4 Beast

Stop saying "Ryzen 3"! I keep thinking of the Ryzen 3 1200, 1300, etc.

How about Zen 2... or Ryzen 3000 series...

/rant

Yeah... I don't like it either. Ryzen 3000? Probably. Too much ratscrewery in naming conventions.
 
Like it or not, deny it or not, the majority of tests are built on the foundation of Intel architecture, not AMD's Ryzen architecture.

...and Zen has been built to run software that has been developed for the premiere x86 platform, so?

It's up to AMD to build a CPU for the software out there, not vice-versa.

That's why, in the world of benchmarking and performance testing, you can't label any benchmark as an outlier.

...and that's why this is wrong. You take the average, and any benchmark that departs from that average is an outlier, regardless of architecture. Further, you cannot then take said outlier and use it to make predictions that would apply to other applications. The outlier is only useful for benchmarking itself, again, regardless of architecture.
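To make the statistical point concrete, here is a minimal sketch of that "departs from the average" test in C. The benchmark names and relative scores are invented purely for illustration; a real analysis would use far more data points.

```c
/* Minimal sketch of the "departs from the average" outlier test.
 * The relative scores below are made-up numbers, purely for
 * illustration. Build: cc -O2 -o outlier outlier.c -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical relative scores of one CPU across benchmarks. */
    const char *bench[] = { "CB15", "POV-Ray", "Corona", "7-Zip", "x264" };
    double score[]      = { 1.30,   1.05,      1.02,     0.98,    1.01 };
    int n = 5;

    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += score[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (score[i] - mean) * (score[i] - mean);
    double sd = sqrt(var / (n - 1));    /* sample standard deviation */

    /* Flag anything more than 1.5 sample standard deviations from the
     * mean. (The cut is loose because with only five points a single
     * extreme value inflates sd; a 2-sigma cut could never fire.) */
    for (int i = 0; i < n; i++) {
        double z = (score[i] - mean) / sd;
        printf("%-8s z = %+5.2f %s\n", bench[i], z,
               fabs(z) > 1.5 ? "<- outlier" : "");
    }
    return 0;
}
```

Note that the test says nothing about *why* a point departs from the rest; that is exactly what the thread is arguing over.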
 
Zen is optimized for throughput, which is why it shines in the rendering/encoding class of applications but not in latency-sensitive applications such as games; latency is the biggest defect of the µarch. AMD has released latency-optimized AGESAs to reduce latency, has released new chipsets with improved memory support for higher-clocked modules that reduce latency, virtually every review of Zen uses an OC memory/IF configuration to reduce latency, and users in forums are asking how to get the highest stable memory OC to reduce latency.

Whatever weight you place on a term, it does not define the architecture the way you use it.

Latency is just the time it takes to do an operation.

Throughput is the number of operations that can be done in a discrete period.

The memory optimization you cite has as much to do with frequency and width (throughput) as it does with wait states (latency). However, this latency should not be confused with instruction latency.

Memory latency is the amount of time it takes to move to a position. It varies with respect to row/column, read/write, and turnaround.

Instruction latency is the time it takes an instruction to move through the pipeline, typically 10-15 cycles.

Instruction throughput is the number of operations moving through the stages and pipelines.
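Since the thread keeps tripping over these two terms, a toy microbenchmark separates them nicely. This is a hedged sketch, not anyone's posted code: the loop count and constants are arbitrary, and results will vary by compiler and CPU.

```c
/* Toy illustration of latency vs. throughput. The first loop chains
 * dependent adds, so it advances one add per FP-add latency. The
 * second runs four independent chains, so it is paced by issue
 * throughput instead. Build: cc -O2 -o latvstp latvstp.c */
#include <stdio.h>
#include <time.h>

#define N 200000000UL

static double secs(void) {
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec + t.tv_nsec / 1e9;
}

int main(void) {
    volatile double sink;
    double t0, t1;

    /* Latency-bound: every add depends on the previous result. */
    double a = 1.0;
    t0 = secs();
    for (unsigned long i = 0; i < N; i++) a += 1e-9;
    t1 = secs();
    sink = a;
    printf("1 dependent chain   : %.3f s\n", t1 - t0);

    /* Throughput-bound: four independent chains can be in flight
     * at once on a superscalar, out-of-order core. */
    double b = 1.0, c = 1.0, d = 1.0, e = 1.0;
    t0 = secs();
    for (unsigned long i = 0; i < N; i++) {
        b += 1e-9; c += 1e-9; d += 1e-9; e += 1e-9;
    }
    t1 = secs();
    sink = b + c + d + e;
    printf("4 independent chains: %.3f s\n", t1 - t0);
    (void)sink;
    return 0;
}
```

On a typical out-of-order core the second loop does four times the work in roughly the same wall time, because FP-add latency (a few cycles) exceeds the issue rate of one add per cycle or better; that gap is the whole latency/throughput distinction in two loops.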

In general, from Agner's CPU blog, the long and short: Intel has better trimmings and retires data at twice the rate of AMD. AMD's decoders and micro-ops shine in tight loops, allowing for up to a 6-vs-4 advantage over Intel. Of course Intel's FP unit shines in almost every way, although with 128-bit SSE AMD can have more instructions in flight (4 vs 2).

I wrote "The problem here isn't that CB15 doesn't represent non-rendering applications [...] The problem is that CB15 is an outlier (it doesn't represent rendering, because Blender, Corona,... behave differently)" You can keep ignoring my point about outliers, but it will not go away. Also your claim "So basically with your logic, 7-Zip benchmarks are invalid because it doesn't represent rendering performance", not only is ridiculous, but it has zero relation to what I am saying.

The problem is that both you and the idiot refuse to acknowledge that most rendering and ray-tracing programs mirror each other.

From the idiot's link, the reduced order is:

CB15 - TR i9 R7 i7
POV - TR i9 R7 i7
Corona - TR i9 R7 i7

See a pattern here? Only an idiot could miss it.

Sure, the 9900K would fit in there in a unique way, but that's not enough to declare CB15 an outlier.

Ray tracing is unique. It's a lot of housekeeping with a fair amount of FP. This housekeeping (hierarchical ordering) probably favors AMD's ability to keep more instructions in flight.
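To picture that housekeeping, here is a small runnable C sketch of a bounding-volume-hierarchy walk, the "hierarchical ordering" mentioned above. The three-node tree and the ray are invented; the takeaway is the pattern, pointer-chasing and stack bookkeeping wrapped around short bursts of FP math, which is what lets a core that keeps more instructions in flight hide some of the chasing.

```c
/* Compressed, runnable sketch of ray-tracing "housekeeping":
 * traversing a bounding-volume hierarchy (BVH). The tree and the
 * ray are made up. Build: cc -O2 -o bvh bvh.c */
#include <stdio.h>

struct aabb { float lo[3], hi[3]; };
struct node { struct aabb box; struct node *left, *right; const char *name; };

/* Slab test: the "fair amount of FP" part. */
static int hits(const float o[3], const float inv_d[3], const struct aabb *b) {
    float tmin = -1e30f, tmax = 1e30f;
    for (int i = 0; i < 3; i++) {
        float t0 = (b->lo[i] - o[i]) * inv_d[i];
        float t1 = (b->hi[i] - o[i]) * inv_d[i];
        if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
    }
    return tmin <= tmax && tmax >= 0.0f;
}

int main(void) {
    /* Tiny hand-built hierarchy: one root, two leaf children. */
    struct node l = {{{-1,-1,4},{0,1,6}}, 0, 0, "left leaf"};
    struct node r = {{{ 0,-1,4},{1,1,6}}, 0, 0, "right leaf"};
    struct node root = {{{-1,-1,4},{1,1,6}}, &l, &r, "root"};

    float o[3]     = {0.5f, 0.0f, 0.0f};   /* ray origin */
    float inv_d[3] = {1e30f, 1e30f, 1.0f}; /* 1/dir for a ray along +z;
                                              huge values stand in for
                                              infinity on zero components */

    /* The "housekeeping": a manual stack, pops, and branches. */
    struct node *stack[8]; int sp = 0;
    stack[sp++] = &root;
    while (sp > 0) {
        struct node *n = stack[--sp];
        if (!hits(o, inv_d, &n->box)) continue;   /* branchy bookkeeping */
        if (!n->left) { printf("hit %s\n", n->name); continue; }
        stack[sp++] = n->left;                    /* more pointer-chasing */
        stack[sp++] = n->right;
    }
    return 0;
}
```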

Encoding is not rendering; it's basically compression. Huffman coding would slightly favor AMD, while Intel blindly wins DCT processing.
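For the DCT half of that claim, here is a plain 8-point DCT-II in C, the transform family at the heart of JPEG/MPEG-style codecs; the input samples are arbitrary. Note the shape of the work: a branch-free double loop of multiply-accumulates that vectorizes well and is throughput-bound, whereas Huffman coding is a serial, branchy, bit-by-bit loop and therefore latency-bound.

```c
/* Plain (unnormalized) 8-point DCT-II:
 *   X[k] = sum_{n=0}^{N-1} x[n] * cos(pi/N * (n + 0.5) * k)
 * Input samples are arbitrary. Build: cc -O2 -o dct dct.c -lm */
#include <math.h>
#include <stdio.h>

#define N 8
static const double PI = 3.14159265358979323846;

int main(void) {
    double x[N] = { 52, 55, 61, 66, 70, 61, 64, 73 };
    double X[N];

    /* No branches, and no loop-carried dependence beyond the
     * accumulator: dense FP work a compiler can vectorize. */
    for (int k = 0; k < N; k++) {
        double s = 0.0;
        for (int n = 0; n < N; n++)
            s += x[n] * cos(PI / N * (n + 0.5) * k);
        X[k] = s;
    }

    for (int k = 0; k < N; k++)
        printf("X[%d] = %8.2f\n", k, X[k]);
    return 0;
}
```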

Regardless, while Intel is ahead in the majority of benchmarks (per core x frequency), AMD's margins are respectable enough.

Obviously the Intel side would win *convolution.

* Convolution is both the processing of two functions together and the confusing of the issues (double entendre).
 
would scale at near linear.

The work to be done scales linearly, which is why it works so well on GPUs (which are throughput monsters), but performance on CPUs with varying architectures may not, and may not scale linearly / proportionally with anything else.
 
The work to be done scales linearly, which is why it works so well on GPUs (which are throughput monsters), but performance on CPUs with varying architectures may not, and may not scale linearly / proportionally with anything else.

Ray tracing is horrible on GPUs. nVidia's RTX ray engine is pre-programmed by supercomputers. It's basically a neural network with said pre-programmed weights determining light mapping in real time. In the true sense of ray tracing, it's more hybrid than true.

Edit: I am not insulting nVidia's implementation; it's as good as it gets. It's just a distant relative of standard ray tracing.
 
Today's top consumer CPUs are already overkill for the average buyer, and probably most of us here, if we're completely honest.

If we are getting to the point of splitting hairs on performance, there likely won't be "one best CPU", and you should look at benchmarks that fit your personal usage.

Cinebench is probably fairly representative of 3D rendering, but how many people are actually doing 3D rendering on their home computers? It's more like a synthetic benchmark for most people.

Video encoding is a much more common use case. It would make sense for most people to focus on that, not 3D rendering, or to focus on some other use case that they actually have.

Would you mind sharing your GitHub repo?
 
First, it seems you don't understand the concept of an outlier. Second, you make it sound as if I am alone in this, but I have quoted two other people who claim CB15, and only CB15, is an outlier. And one of the persons I quoted ran several dozen different benches on Ryzen systems.



And again you completely misinterpret my point. My point is not that "all results must be the same". Basically you have no idea what I am talking about.

I completely understand the definition of an outlier, and the concept. I also understand there is one major flaw in defining CB15, or any other benchmark, as such. See post #358. Defining CB15 as an outlier allows you and others to ignore or invalidate the results, when in fact they could very well be the most accurate results when it comes to properly using Ryzen's architecture. But it appears that is a concept you won't/can't consider.

As for having no idea what you are talking about, well, I am sorry. I just can't slow my brain down enough, or allow myself to be so short-sighted and/or narrow-minded, as to understand something so shallow and flawed.

...and Zen has been built to run software that has been developed for the premiere x86 platform, so?

It's up to AMD to build a CPU for the software out there, not vice-versa.



...and that's why this is wrong. You take the average, and any benchmark that departs from that average is an outlier, regardless of architecture. Further, you cannot then take said outlier and use it to make predictions that would apply to other applications. The outlier is only useful for benchmarking itself, again, regardless of architecture.


NOPE! Which came first, the chicken or the egg? Software is coded for the hardware, not the other way around. All AMD is required to do is be backwards compatible so current software can run on its chips, which does not translate to efficiency or optimal performance. That is why software has to be coded properly to use the new architecture. The very fact that you don't understand this, and believe that architecture doesn't matter, when in fact it plays the biggest role in the process, demonstrates why you don't understand why CB15, or any other benchmark, isn't an outlier. You also don't seem to realize that all benchmarks are only useful in testing against themselves, or comparing results with their own past results, and no others.
 
The majority of benchmarks and applications are coded/optimized for Intel, which shouldn't be surprising, as Intel has had 75% or more of the market for the past 10 years. Any that aren't, and that put AMD in a good light, are considered outliers because they don't conform with other benchmark results? That is false and short-sighted, and really just biased thinking. It is the same argument that Nvidia uses when AMD does well in games. All benchmarks have limited usefulness, and that is why you can only compare a benchmark's results with its own results and not with other benchmarks.

This is just as biased, if not more so. You insinuate that the reason Intel wins benchmarks is that they are optimized for Intel, and presumably against AMD.

The reality is that Intel is usually winning by clock speed, not favorable optimization.

If you look at an equal-clock-speed, equal-core-count comparison, there is negligible difference in performance outside of games.

In an equal-clock, equal-core comparison Intel is still ahead in games, and this really doesn't seem to be about optimizing for Intel, or even IPC.

It does seem somewhat related to cache behavior or inter-core communication, as Intel's own mesh CPUs fall back as well.

So where is the Intel optimization you seem to worry about? Winning by higher clock speed is not evidence of favorable optimization.
 
This is just as biased, if not more so. You insinuate that the reason Intel wins benchmarks is that they are optimized for Intel, and presumably against AMD.

That is not what I said. It has nothing to do with who wins, but it does influence the results, and it plays a part in why benchmarks don't have similar results. This is all about why CB15 cannot be classified as an outlier and/or its results invalidated.
 
Edit: I am not insulting nVidia's implementation; it's as good as it gets. It's just a distant relative of standard ray tracing.

Well, yeah. We're not shrinking server farms onto single ASICs yet.

Of course, we're not demanding cinematic effects either. We've already seen very good implementations and they're just going to get better.

Software is coded for the hardware, not the other way around.

And if you're a second mover, you're serving the current install base. Ergo, either AMD is building CPUs for the software out there, or they're failing.
 
I don't know about you guys, but I will be processing all fifteen of my CinnaBons soon enough regardless of how many neural nuggets I can have pre-programmed per space invader.

We've seen the results of the redesigned 1700X/1800X and then 2700X/2800X, which were good-enough steps in the right direction. Third time's a charm?

I'm sure we are all interested to see how far they have improved Infinity Fabric, cache, memory controller, instruction sets, prediction, prefetch, floating point, IPC, scalability, etc. AMD is trying; yeah, it took them 13-15 years, but hey, it's time to support healthy competition.

One of my personal benchmarks will be the RPCS3 emulator with Tekken Tag Tournament 2, which takes a 9900K to run smoothly, and even then the frame rate still dips during loading, the character select screen, etc. Want to push it harder? Use an open-world third-person game such as Red Dead Redemption.

Computex, E3, side-by-side comparisons, fancy charts and graphs: just give it to me already, as I'm in full "shut up and take my money" mode. Edit: corrected all fifteen of my CinnaBons, per space invader.
 
We've seen the results of the redesigned 1700X/1800X and then 2700X/2800X, which were good-enough steps in the right direction. Third time's a charm?

The big issue I see is that they separated the memory controller, so it's a bit up in the air. From a performance perspective, that's a regressive move; it's going backwards, given that AMD moved their memory controllers on-die with the Athlon 64 and Intel did with the Core i7 (etc.).

So while we can absolutely expect architectural improvements, we could also see stagnation or even regression; and that's why the use of CB15 is really not relevant except for CB15.
 
AMD aren’t dumb. They would have done it with good reason.

It's far easier to manufacture and far more flexible than, say, what Intel does with monolithic dies, especially since AMD, unlike Intel, doesn't control its own production.

So it's a compromise. Maybe they've mitigated the potential negative effects; maybe they've even overcome them and made across-the-board gains. We'll see.
 
I am guessing that latency will be a bit higher vs. Ryzen 1000/2000 (Zen/Zen+) but will be more consistent across all cores/dies/chiplets. Moving it to the I/O die is also preferable to the situation with Threadripper, where dies had no direct connection to memory and had to route through a neighboring die. It's certainly a compromise, but it may well be worth it for AMD to do things that way.
 
And if you're a second mover, you're serving the current install base. Ergo, either AMD is building CPUs for the software out there, or they're failing.

False! Winning the speed crown has always been second to innovation for AMD; that is true for their CPUs as well as their GPUs. If they were building CPUs for the software out there, we wouldn't have Threadripper, EPYC, or the upcoming Rome CPUs, much less Ryzen. They would have stayed with what had worked since the Athlon days, and be like Intel and just recycle the same old architecture.
 
I am surprised no one has posted this yet; it indicates that the new gen will be 30% faster compared to second-gen TR. If true, that is quite an increase.

[Image: 3rd-gen-ryzen-performance.jpg]


Coming from this article:
https://www.forbes.com/sites/antony...st-exciting-processor-launch-in-a-decade/amp/
 
Analyzing the graph alone, and without any further speculation:

It's implied that it will surpass some of the lower-core-count TR CPUs, given that stock CB15 numbers for a 2700X are between 1700 and 1800.

Assuming it's a 16C/32T part at the same IPC/frequencies without any additional CCX overhead, said part should do between 3400 and 3600 (strictly 2x the score).

The chart score of 4200 indicates that it's roughly 17% faster than double the 2700X: (4200 - 3600)/3600 ≈ 0.17.

I'm guessing the TR4 parts have always had difficulty maintaining full turbo speeds due to heat, and that the 7nm Ryzen somewhat solves that issue.
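The arithmetic in the post above is easy to sanity-check with a trivial C back-of-envelope, using only the numbers quoted there (the 1700-1800 stock range and the 4200 chart score):

```c
/* Back-of-envelope check of the scaling estimate above: double the
 * stock 2700X CB15 range for a hypothetical 16C/32T part at the same
 * IPC/clocks, then compare against the 4200 chart figure. */
#include <stdio.h>

int main(void) {
    double lo = 1700, hi = 1800;   /* stock 2700X CB15 range (quoted) */
    double chart = 4200;           /* score read off the chart */

    printf("strict 2x scaling : %.0f - %.0f\n", 2 * lo, 2 * hi);
    printf("uplift vs 2x high : %.1f%%\n", 100 * (chart - 2 * hi) / (2 * hi));
    printf("uplift vs 2x low  : %.1f%%\n", 100 * (chart - 2 * lo) / (2 * lo));
    return 0;
}
```

This prints a 16.7% uplift against the optimistic 3600 figure and 23.5% against 3400, which brackets the "roughly 17%" above.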
 
Whatever weight you place on a term, it does not define the architecture the way you use it.

Latency is just the time it takes to do an operation.

Throughput is the number of operations that can be done in a discrete period.

The memory optimization you cite has as much to do with frequency and width (throughput) as it does with wait states (latency). However, this latency should not be confused with instruction latency.

Memory latency is the amount of time it takes to move to a position. It varies with respect to row/column, read/write, and turnaround.

Instruction latency is the time it takes an instruction to move through the pipeline, typically 10-15 cycles.

Instruction throughput is the number of operations moving through the stages and pipelines.

In general, from Agner's CPU blog, the long and short: Intel has better trimmings and retires data at twice the rate of AMD. AMD's decoders and micro-ops shine in tight loops, allowing for up to a 6-vs-4 advantage over Intel. Of course Intel's FP unit shines in almost every way, although with 128-bit SSE AMD can have more instructions in flight (4 vs 2).

You don't know what latency is. No one here mentioned instruction latency. In fact, we are discussing workloads such as games, so we are talking at a much more coarse-grained level.

You don't know what memory latency is.

Agner mixes uops and c-uops. The Zen frontend supplies up to six uops per cycle. Since Haswell, the Intel frontend supplies up to eight uops per cycle (joined into four c-uops). It doesn't matter that AMD breaks FMAs into two pipes (FMUL and FADD) in the Zen core, because the core cannot provide four 128-bit loads per cycle.

Ray tracing is horrible on GPUs.

ROFL

I completely understand the definition of an outlier, and the concept. I also understand there is one major flaw in defining CB15, or any other benchmark, as such. See post #358. Defining CB15 as an outlier allows you and others to ignore or invalidate the results, when in fact they could very well be the most accurate results when it comes to properly using Ryzen's architecture. But it appears that is a concept you won't/can't consider.

As for having no idea what you are talking about, well, I am sorry. I just can't slow my brain down enough, or allow myself to be so short-sighted and/or narrow-minded, as to understand something so shallow and flawed.

NOPE! Which came first, the chicken or the egg? Software is coded for the hardware, not the other way around. All AMD is required to do is be backwards compatible so current software can run on its chips, which does not translate to efficiency or optimal performance. That is why software has to be coded properly to use the new architecture. The very fact that you don't understand this, and believe that architecture doesn't matter, when in fact it plays the biggest role in the process, demonstrates why you don't understand why CB15, or any other benchmark, isn't an outlier. You also don't seem to realize that all benchmarks are only useful in testing against themselves, or comparing results with their own past results, and no others.

No. You don't understand the concept of an outlier. That is why in #358 you write the nonsense "you can't label any benchmark as an outlier".

A CPU is an LCU (latency-optimized compute unit). A GPU is a TCU (throughput-optimized compute unit), so your suggestion that a throughput workload such as CB15 would be the proper way of using Ryzen's architecture is more nonsense. In fact, I already mentioned how AMD has been working on correcting the latency deficit of the Zen µarch (higher clocks, optimized AGESAs, ...) because they know a CPU is an LCU.

The latency deficit of the Zen µarch doesn't have anything to do with how software is coded. It has to do with details of the microarchitecture (instruction latencies, front-end, issue, caches, interconnects, MC, ...). That is why AMD Zen shows the same latency deficit when one takes source code and compiles binaries optimized for the Zen µarch using the znver1 or znver2 flags.
 
You don't know what latency is. No one here mentioned instruction latency. In fact, we are discussing workloads such as games, so we are talking at a much more coarse-grained level.

You don't know what memory latency is.

Agner mixes uops and c-uops. The Zen frontend supplies up to six uops per cycle. Since Haswell, the Intel frontend supplies up to eight uops per cycle (joined into four c-uops). It doesn't matter that AMD breaks FMAs into two pipes (FMUL and FADD) in the Zen core, because the core cannot provide four 128-bit loads per cycle.



ROFL



No. You don't understand the concept of an outlier. That is why in #358 you write the nonsense "you can't label any benchmark as an outlier".

A CPU is an LCU (latency-optimized compute unit). A GPU is a TCU (throughput-optimized compute unit), so your suggestion that a throughput workload such as CB15 would be the proper way of using Ryzen's architecture is more nonsense. In fact, I already mentioned how AMD has been working on correcting the latency deficit of the Zen µarch (higher clocks, optimized AGESAs, ...) because they know a CPU is an LCU.

The latency deficit of the Zen µarch doesn't have anything to do with how software is coded. It has to do with details of the microarchitecture (instruction latencies, front-end, issue, caches, interconnects, MC, ...). That is why AMD Zen shows the same latency deficit when one takes source code and compiles binaries optimized for the Zen µarch using the znver1 or znver2 flags.

You are ignoring why I said you can't label it an outlier, or label any benchmark an outlier: because you don't have a proper, non-biased control point to start from. In the world of science, your opinion would be laughed out the door, because it would be deemed an invalid testing method; the improper control point makes the indicators you are basing your decision on inaccurate.

You already stated that CB15 does not test for latency; it tests for throughput. So what does latency have to do with this particular benchmark that you already said doesn't test for it??? NOTHING!! That is why we have various different benchmarks: because they all test different areas of performance. This one just so happens to be throughput (your own words).

You previously implied architecture doesn't matter, but now you are saying it does. Which is it? Do benchmarks matter regardless of architecture or not? The fact that you don't understand the role of software, coding, and what is needed on the software side (this includes the OS) for different architectures, which includes microarchitecture, indicates you don't have a clue about anything you are trying to defend, or the examples you are trying to use to defend your flawed opinion. It appears that you believe that software just has to be written in a generic format and that's it. Don't get me wrong, I am not saying that software is the only factor, but you have to have software properly written to fully and correctly utilize a given architecture. (The OS has to properly support the new architecture first.)

If this is true, and software doesn't have to be written to work with the architecture, or CPUs are built for the software as you believe is AMD's responsibility, how did we ever get away from 4-bit processors and make it to 64-bit processors? How did we ever get software that would run on 64-bit processors or utilize 64-bit processing? (A very, very broad, generic, low-scope example.) The problem is you somehow believe that software is generic and it is all on the hardware side, and that code has nothing to do with software/hardware interaction with a different architecture.

Let's put it this way: why can't you take the software that is written for a car's computer (any car) and download it into the computer of ANY other car, regardless of manufacturer or model? Since your argument is that software doesn't have to be written for the architecture being used? I mean, by your thought process, this should be possible, and the car should run perfectly without any modifications to the software, since software is not coded for the architecture or hardware in question.
 
Groundhog Day in the AMD Ryzen 9 3000 is a 16-core Socket AM4 Beast thread.
sigh
 
You are ignoring why I said you can't label it an outlier, or label any benchmark an outlier: because you don't have a proper, non-biased control point to start from. In the world of science, your opinion would be laughed out the door, because it would be deemed an invalid testing method; the improper control point makes the indicators you are basing your decision on inaccurate.

You already stated that CB15 does not test for latency; it tests for throughput. So what does latency have to do with this particular benchmark that you already said doesn't test for it??? NOTHING!! That is why we have various different benchmarks: because they all test different areas of performance. This one just so happens to be throughput (your own words).

You previously implied architecture doesn't matter, but now you are saying it does. Which is it? Do benchmarks matter regardless of architecture or not? The fact that you don't understand the role of software, coding, and what is needed on the software side (this includes the OS) for different architectures, which includes microarchitecture, indicates you don't have a clue about anything you are trying to defend, or the examples you are trying to use to defend your flawed opinion. It appears that you believe that software just has to be written in a generic format and that's it. Don't get me wrong, I am not saying that software is the only factor, but you have to have software properly written to fully and correctly utilize a given architecture. (The OS has to properly support the new architecture first.)

If this is true, and software doesn't have to be written to work with the architecture, or CPUs are built for the software as you believe is AMD's responsibility, how did we ever get away from 4-bit processors and make it to 64-bit processors? How did we ever get software that would run on 64-bit processors or utilize 64-bit processing? (A very, very broad, generic, low-scope example.) The problem is you somehow believe that software is generic and it is all on the hardware side, and that code has nothing to do with software/hardware interaction with a different architecture.

Let's put it this way: why can't you take the software that is written for a car's computer (any car) and download it into the computer of ANY other car, regardless of manufacturer or model? Since your argument is that software doesn't have to be written for the architecture being used? I mean, by your thought process, this should be possible, and the car should run perfectly without any modifications to the software, since software is not coded for the architecture or hardware in question.

I am not ignoring what you said about outliers. I am saying you are wrong.

CB15 is a throughput bench, so it cannot be used to infer how Zen 2 will perform in latency-sensitive workloads such as games. We have to wait for reviews to test games.

I didn't imply "architecture doesn't matter"; in fact I have been saying the contrary since the beginning, but I am using the correct technical term: microarchitecture. I understand rather well how software works; that is why I stated your viewpoint is BS and mentioned custom binaries to reinforce that your viewpoint is BS.

Any bets you will return with some new misunderstanding of my point?
 
Ryzen 3600 samples are out to reviewers it seems. Read the comments.

Do not read the article and then ask me why I posted it. :(
https://www.pugetsystems.com/labs/a...9900K-in-Pix4D-Metashape-RealityCapture-1461/

The mod William M George lamented being sent AMD Ryzen 3600 samples instead of faster chips because it didn't fit their viewership. He also said AMD wasn't sending out the enthusiast chips to reviewers yet. He edited his comments so...

If someone knows how to get a cached version of the page without the edited comments it would be awesome!
 
I am not ignoring what you said about outliers. I am saying you are wrong.

CB15 is a throughput bench, so it cannot be used to infer how Zen 2 will perform in latency-sensitive workloads such as games. We have to wait for reviews to test games.

I didn't imply "architecture doesn't matter"; in fact I have been saying the contrary since the beginning, but I am using the correct technical term: microarchitecture. I understand rather well how software works; that is why I stated your viewpoint is BS and mentioned custom binaries to reinforce that your viewpoint is BS.

Any bets you will return with some new misunderstanding of my point?

My apologies, it was Idiotincharge who implied that architecture doesn't matter when he said "You take the average, and any benchmark that departs from that average is an outlier, regardless of architecture", as if architecture, and having proper support for such architecture, doesn't play a role in the accuracy of the results. It wasn't you, sorry. It's difficult not to get the two of you confused. :D

As for being wrong, well, it's my opinion that you are wrong, because you don't have the proper control points to form a proper conclusion, because you are dead set on believing that benchmarks are not geared towards Intel architecture. Up till Ryzen (well, Bulldozer, but its failure didn't force any changes), everything was based on nearly identical architectures, be it Intel or AMD, which changed substantially with the release of Ryzen. Windows 10 is still having issues getting all of the proper support for Ryzen architecture in place and/or working correctly, so how can you expect benchmarks not to be affected, or assume that all benchmarks have proper code to support Ryzen properly?

Also, since the release of the CB15 results for the Ryzen 3000, you keep arguing latency in connection with CB15, yet you already said CB15 is a throughput benchmark and not a latency benchmark, so why are you arguing latency points about a benchmark that is not designed to test it? Shouldn't you be saving that for a benchmark that is designed to test for such a thing? It's like trying to argue a car's top speed using fuel-economy arguments. They don't mix, and it makes it all a confusing mess.

Here is the question: what other benchmarks test throughput only? Do those benchmarks show different averages than CB15? What it appears is that you are trying to take a throughput benchmark and throw it into the mix with latency benchmarks, or benchmarks that test more than just throughput, which are different metrics and can't be compared.

Even if you take away all other points, regardless of who is right or wrong, you have to first compare benchmarks that test identical performance metrics to determine if something is an outlier.
 
It's going to depend on what the VRM requirements are like for the 16-core. I could see ASRock making a 16-core-supported ITX board. Outside of that, it's hard to say.
The Asus B450 and X470 ITX boards have a good VRM setup. It's just a matter of whether the 16-core CPUs are going to be backwards compatible. The higher TDP may be an issue.
 
You don't know what latency is. No one here mentioned instruction latency. In fact, we are discussing workloads such as games, so we are talking at a much more coarse-grained level.

You don't know what memory latency is.

Well, the big difference is that you just throw out buzzwords without references, often just to confuse.

Latency has different meanings in different contexts. You expect context to be known and ubiquitous.

Fine, explain your use of the word latency, because so far I do not believe you know what you're talking about.

If your posts are here to confuse, you win.

Agner mixes uops and c-uops. The Zen frontend supplies up to six uops per cycle. Since Haswell, the Intel frontend supplies up to eight uops per cycle (joined into four c-uops). It doesn't matter that AMD breaks FMAs into two pipes (FMUL and FADD) in the Zen core, because the core cannot provide four 128-bit loads per cycle.

Ehh, fused macro-ops are generally limited to conditional-jump operations, and only one can be encoded per cycle. Any more than one will be queued unfused. I trust Agner.


Is that a ROFL? You don't understand how ray tracing works, so you just laugh it off.
 
False! Winning the speed crown has always been second to innovation for AMD; that is true for their CPUs as well as their GPUs. If they were building CPUs for the software out there, we wouldn't have Threadripper, EPYC, or the upcoming Rome CPUs, much less Ryzen. They would have stayed with what had worked since the Athlon days, and be like Intel and just recycle the same old architecture.

Outside of the Athlon line (and Intel's diversion to NetBurst), they've been playing catch-up for their entire existence, so I guess you have that going for you.

Just to note: the Athlon was faster at stuff that was, from your perspective, 'optimized for Intel'. The sheer fact that you're making excuses for AMD's shortcomings is hilarious.
 
My go-to guy has instructed me to get an ASRock X570 Taichi Ultimate board; look for it in ITX form factor.

I can't imagine that not hurting... I run mostly ASRock myself, simply because they tend to have the features I'm looking for or are significantly cheaper (or both), but they do charge for their nice stuff!
 
Ryzen 3600 samples are out to reviewers it seems. Read the comments.

Do not read the article and then ask me why I posted it. :(
https://www.pugetsystems.com/labs/a...9900K-in-Pix4D-Metashape-RealityCapture-1461/

The mod William M George lamented being sent AMD Ryzen 3600 samples instead of faster chips because it didn't fit their viewership. He also said AMD wasn't sending out the enthusiast chips to reviewers yet. He edited his comments so...

If someone knows how to get a cached version of the page without the edited comments it would be awesome!



Someone on Reddit saw it also!


[Image: quote amd.png]
 