AMD Ryzen R9 3900X Review Round Up

I think you should wait to get your board until a new revision comes out. What if they add a bunch of RGB to the board in the next revision? You wouldn't want to miss out on that, right?

I can't wait for the day RGB PCB becomes a thing. I mean seriously why not, we already have red, orange (think ribbon cable color), green, blue as standard colors, so just gotta throw in yellow, indigo, and purple and we're done. And then we can throw in black and white on top to truly complete the color spectrum. Can't believe no manufacturer has thought of this, maybe I should patent this idea...
 
I don't think aesthetics has any place in technical hobbies, as long as things don't look like a mess. I hate how this hobby has turned into "pimp my computer" more than focusing on the basics, the most possible performance with the least possible noise.




I consider a lighted keyboard a functional thing, not an aesthetic thing. It's great for those of us who have self-taught "bad" typing methods and need to see the keyboard every now and then in the dark.

I got myself a Ducky One with white LEDs a while back. I'm not crazy about the Cherry MX switches, but I am starting to get used to them. I keep the backlighting down to one of the lowest settings so it doesn't reflect off of my screen or distract when it is dark. Just enough subtle light to see the lettering. During daylight you can barely tell it is lit up.

I am a function over form kind of guy, but I still like it when things look good. RGB isn't always the answer, but I am fine with accent lighting personally. That said, it's a good thing your mindset isn't all that prevalent, because we'd all still be running hideous beige boxes with primer gray interiors in our cases.
 
As far as milking the mainstream with low core counts and new sockets/chipsets goes, Intel definitely deserves all the criticism they get.

However, regarding single-digit performance gains, since AMD hasn't actually surpassed Intel yet, I'm not yet convinced that Intel hasn't wrung x86 for almost everything they can get. We need to wait and see if AMD can actually surpass Intel and kick off some real performance leapfrogging to know if they were holding back.

I don't entirely agree with this. Intel took a different approach to performance, going for clock speed. At 14nm++++++++++++++ it can't simply add cores and maintain clocks at 5.0GHz. Intel knows that on the desktop, you aren't generally going to use more than 8c/16t. I agree they've kept the core count limited in the past, but I don't think that's the case now. As for the sockets and chipsets, I again only partly agree with you. While Intel is certainly guilty of placing artificial limitations on what you can use at times, it has also avoided some of the issues that we see on the AMD side. Not having to shove three generations of processors and five processor families' worth of support code into BIOS chips that are entirely too small is an example of this. Having motherboards that can universally support any CPU that will ever exist in a given socket is nice. On the AMD side, you have motherboards that can't necessarily handle the power requirements of all CPUs that exist at a given time.

Yes, Intel takes things too far, but on the other hand, they do spare us from some of the BS you have to go through when maintaining sockets for too long. I think somewhere in between what AMD and Intel do is the actual sweet spot for this.
 
I don't entirely agree with this. Intel took a different approach to performance, going for clock speed. At 14nm++++++++++++++ it can't simply add cores and maintain clocks at 5.0GHz. Intel knows that on the desktop, you aren't generally going to use more than 8c/16t. I agree they've kept the core count limited in the past, but I don't think that's the case now. As for the sockets and chipsets, I again only partly agree with you. While Intel is certainly guilty of placing artificial limitations on what you can use at times, it has also avoided some of the issues that we see on the AMD side. Not having to shove three generations of processors and five processor families' worth of support code into BIOS chips that are entirely too small is an example of this. Having motherboards that can universally support any CPU that will ever exist in a given socket is nice. On the AMD side, you have motherboards that can't necessarily handle the power requirements of all CPUs that exist at a given time.

Yes, Intel takes things too far, but on the other hand, they do spare us from some of the BS you have to go through when maintaining sockets for too long. I think somewhere in between what AMD and Intel do is the actual sweet spot for this.

It's not AMD's fault that some motherboard makers cheaped out and put in a smaller BIOS chip. You also can't buy the cheaper boards as a consumer and expect the same level of support as a higher-end board. I'd rather have options than just be forced to buy new like Intel makes you.
 
It's not AMD's fault that some motherboard makers cheaped out and put in a smaller BIOS chip. You also can't buy the cheaper boards as a consumer and expect the same level of support as a higher-end board. I'd rather have options than just be forced to buy new like Intel makes you.

No, it isn't AMD's fault, but AMD's lack of control over motherboard partners and the sheer variation in quality we see from one extreme to the other is part of the problem on that side of the fence. Now, what is AMD's fault is the massive amount of AGESA code work that it has to do to support multiple generations of CPU on a single socket. They may put a lot of this on the motherboard vendors, but the fact is that it impacts the customer experience on the AMD side, and AMD knows that. These upgrades aren't always easy, especially if you went cheap on the board. The choices that AMD has made aren't new. They've been doing this for well over a decade now. AMD damn well knew exactly what could happen in these cases. Many of the issues I described above don't just apply to AM4. AM3 and AM3+ were often a nightmare to deal with, especially in the beginning. Intel avoids all of that by forcing new chipsets and sockets on people. I think they do it too much, but again, you do avoid certain issues with that approach. On the subject of BIOS flashback / blind flashing, that's been a known issue for years. If you are on the AMD side and going cheap on that, then that's on you.

Look, I've been reviewing these things for 14 years. I've seen the spec information on the UEFI BIOS chips during all that time. While I saw 128Mbit and 256Mbit specs, there was never any reason to think that the smaller sizes would be an issue. You typically saw the smaller BIOS ROMs on less complicated motherboards, which reinforced the thinking that those designs weren't complicated enough to need the larger chips. If you could make an educated decision about these things in advance, then sure, I'd agree. Unfortunately, people have purchased these motherboards without knowing what they were in for down the line. Even hardcore enthusiasts and reviewers didn't know this was going to be a problem. How could your average enthusiast who builds once every five years be expected to have made an informed decision on that one item?

I'm just pointing out that there are benefits to the way Intel does certain things. While I think some of the criticism they receive is fair, it's only right to point out that there are downsides to maintaining a socket for as long as AMD does.
 
I know you are all getting burnt out on Ryzen 3000 reviews, but this is the mother of them all:



Just 6% short of the 9900k. Granted, the spread would open up with next gen cards and/or 720p, but these are respectable results.
 
While I saw 128Mbit and 256Mbit specs, there was never any reason to think that the smaller sizes would be an issue.
You, me, other enthusiasts, regular users and the MB makers themselves, too, apparently.

We all know why MB manufacturers chose the 16 MB chips: BOM (bill of materials) costs. Every penny saved on parts is potentially hundreds of thousands of dollars saved across a motherboard's production run.
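To put a rough number on that, here's a back-of-the-envelope sketch. The chip prices and run size below are made-up placeholders, not real BOM figures; the point is just how a sub-dollar part delta scales.

```python
# Purely illustrative: hypothetical chip prices and run size, not real BOM data.
flash_16mb_cost = 0.80     # assumed unit cost of a 16 MB (128 Mbit) SPI flash, USD
flash_32mb_cost = 1.30     # assumed unit cost of a 32 MB (256 Mbit) SPI flash, USD
production_run  = 500_000  # assumed number of boards in a production run

savings = (flash_32mb_cost - flash_16mb_cost) * production_run
print(f"Picking the smaller chip saves roughly ${savings:,.0f} across the run")
# -> Picking the smaller chip saves roughly $250,000 across the run
```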

I lucked out: The C7H wifi has 32 MB flash. Why Asus decided to use 32 MB I'll never know. Maybe the stock "Asus" portion of the BIOS and UEFI interface was already close to 16 MB in size and the size of AGESA made the total a little too close to 16 MB for the engineers to feel comfortable with choosing 16 MB flash. But then, why didn't Asus use 32 MB flash on all of their AM4 MBs?

I'd like to know how close to the 16 MB limit the UEFI BIOS footprint was when first gen Zen was released along with first gen chipsets (X370, etc.). Was it 10 MB? Was it 12 MB? Was it 14 MB? Second, how large is the AGESA portion of that BIOS? 5%? 20%? If the BIOS was 12 MB with Zen and AGESA was 20% of that (2.4MB) (I'm totally making this up as an example), then it would be reasonable to expect AGESA to at least double in size for Zen+ making the BIOS 12 MB + 2.4 MB = 14.4 MB (still under the 16 MB limit). And then Zen2 AGESA grows it another 2.4 MB making the BIOS now 16.8 MB which is over the 16 MB limit. In this scenario I would blame the MB manufacturers for not having the foresight to realize that 16 MB would be too small.

On the other hand, if AGESA grew exponentially rather than linearly with each update to Zen, then we cannot expect MB manufacturers to know how much space would be required for future BIOS revisions. That would be on AMD for not strongly suggesting (or demanding) that MB manufacturers plan on using 32 MB flash chips back when Zen was originally being designed and introduced to MB manufacturers. Or maybe AMD didn't even know how large AGESA updates would become way back when Zen was being designed and AGESA implemented.
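For the curious, here's a quick sketch of those two scenarios using the same made-up numbers from my example above (12 MB total Zen-era BIOS, ~2.4 MB of it AGESA). None of these sizes are measured values; it just shows how fast either growth curve blows past a 16 MB chip.

```python
# Hypothetical sizes in MB -- mirrors the made-up example above (12 MB total
# Zen-era image, of which ~2.4 MB is AGESA); these are not measured values.
NON_AGESA    = 12.0 - 2.4   # stock vendor portion of the UEFI image
AGESA_AT_ZEN = 2.4          # AGESA size when Zen launched
GENERATIONS  = ["Zen", "Zen+", "Zen 2", "Zen 3"]
FLASH_LIMIT  = 16.0         # 128 Mbit chip

def project(next_agesa):
    """Yield (generation, total image size) under a given AGESA growth rule."""
    agesa = AGESA_AT_ZEN
    for gen in GENERATIONS:
        yield gen, NON_AGESA + agesa
        agesa = next_agesa(agesa)

linear   = list(project(lambda a: a + AGESA_AT_ZEN))  # +2.4 MB per generation
doubling = list(project(lambda a: a * 2))             # doubles per generation

for label, rows in (("linear", linear), ("doubling", doubling)):
    for gen, total in rows:
        note = "  <-- over the 16 MB chip" if total > FLASH_LIMIT else ""
        print(f"{label:9s}{gen:6s}{total:5.1f} MB{note}")
```

Either way, the image crosses 16 MB around Zen 2, which is roughly where the real-world flashing pain started.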
 
My 6600K still plays everything just fine @ 1440p. I really wish I needed to upgrade.

There is no way that is not a stutter fest, even with a 4.6-4.8GHz OC. With all the security patches in place, it gets even worse.

I noticed my 5.2GHz all-core OC'd 3770K with DDR3-2600 (which is on a par with or faster than your 6600K) took a huge hit in minimum FPS and frame times vs. my 1600 @ 4.1GHz (3200C16)... that was over a year ago, and Intel has had even more security mitigations since.

The older Intel CPUs such as yours took a much bigger hit when the Spectre/Meltdown mitigations were applied than the newer Intel CPUs do. It's just a few percent.

Then you just have some very low standards and/or are just oblivious to hitching and stuttering. Your CPU has absolutely zero chance of maintaining a smooth frame rate in some of the newer games regardless of what the actual FPS numbers show. And in some games you most certainly cannot maintain 60fps. Most of the culprits are Ubisoft games, of course. Mafia 3 is another one that will eat 4 cores for lunch and is not remotely smooth if I disable hyper-threading. But again, like I said, pretty much most modern games will have 4 cores pegged most of the time and in some cases all of the time. And even just leaving your browser open in the background can cause some additional hitching because you don't have any CPU power to spare. Heck, even all 8 threads of my CPU get fully pegged at times in many if not most newer games.

I believe you are mistaken; plenty of us running a couple-year-old Intel CPU game just fine. My i7-6850K is quite adequate. I probably will not need to upgrade for 2 years or more, by which time there will hopefully be some faster CPUs with PCIe Gen 5.
 
The older Intel CPUs such as yours took a much bigger hit when the Spectre/Meltdown mitigations were applied than the newer Intel CPUs do. It's just a few percent.



I believe you are mistaken; plenty of us running a couple-year-old Intel CPU game just fine. My i7-6850K is quite adequate. I probably will not need to upgrade for 2 years or more, by which time there will hopefully be some faster CPUs with PCIe Gen 5.


I agree it hit the older platforms harder, but my brother just went from a 6700K @ 4.5GHz to a 3700X (default PBO) and hasn't stopped thanking me for the recommendation yet lol.

I guess it really depends on what you do/play, but to say a 4/4 SKL quad-core is perfect is a bit of a broad stroke. (Referring to the guy I quoted.)

In the end, if they're happy, then they're happy.
 
Yeah, I wasn't going to come back and argue with people about how all of my games run fine lol. The fact of the matter is I have friends still playing on 2500Ks, 4770s, even one on a Phenom II X6. They are all adequate for 60Hz 1080p gaming. PUBG, Division 2, Deus Ex MD, etc.

I haven't needed a PC upgrade since I stopped using Eyefinity/Surround and moved to 1440p. It's kind of depressing as I like upgrading, but I am also at the point in life where I would rather put an extra 3k into retirement every year than put that money towards something I already have that works fine.

I'm sure I could increase my frames with a new CPU, but smooth is smooth. Turn off your FPS counters and go enjoy your games lol.
 
Yeah, I wasn't going to come back and argue with people about how all of my games run fine lol. The fact of the matter is I have friends still playing on 2500Ks, 4770s, even one on a Phenom II X6. They are all adequate for 60Hz 1080p gaming. PUBG, Division 2, Deus Ex MD, etc.

I haven't needed a PC upgrade since I stopped using Eyefinity/Surround and moved to 1440p. It's kind of depressing as I like upgrading, but I am also at the point in life where I would rather put an extra 3k into retirement every year than put that money towards something I already have that works fine.

I'm sure I could increase my frames with a new CPU, but smooth is smooth. Turn off your FPS counters and go enjoy your games lol.

It was Arma 3 that finally pushed me off my Phenom II X4 975 and Radeon 6950. I stretched that machine through 7 years of problem-free gaming.
 
That said, I may still end up with a 3700X if a great MC deal comes around. I have the itch, just not the need. (Or maybe a game like The Outer Worlds will be poorly optimized and finally give me the need lol.)
 
The older Intel CPUs such as yours took a much bigger hit when the Spectre/Meltdown mitigations were applied than the newer Intel CPUs do. It's just a few percent.



I believe you are mistaken; plenty of us running a couple-year-old Intel CPU game just fine. My i7-6850K is quite adequate. I probably will not need to upgrade for 2 years or more, by which time there will hopefully be some faster CPUs with PCIe Gen 5.
What does your i7 have to do with anything I said about a 4-core/4-thread CPU? What I stated are actual facts about a 4-core/4-thread CPU not being enough to maintain 60 FPS in some games, and about it being fully pegged nearly the whole time, if not the whole time, in most modern games.
 
Yeah, I wasn't going to come back and argue with people about how all of my games run fine lol. The fact of the matter is I have friends still playing on 2500Ks, 4770s, even one on a Phenom II X6. They are all adequate for 60Hz 1080p gaming. PUBG, Division 2, Deus Ex MD, etc.

I haven't needed a PC upgrade since I stopped using Eyefinity/Surround and moved to 1440p. It's kind of depressing as I like upgrading, but I am also at the point in life where I would rather put an extra 3k into retirement every year than put that money towards something I already have that works fine.

I'm sure I could increase my frames with a new CPU, but smooth is smooth. Turn off your FPS counters and go enjoy your games lol.
You are delusional if you think a 2500K or especially an older Phenom CPU can maintain 60fps in all modern games. And again, like I said, a 4-core/4-thread CPU will be pegged in many modern games most of the time. So either you're playing older games or you just aren't the least bit aware of stutter and hitches or actual frame rate.
 
We need more online multiplayer reviews. While it's impossible to do a side-by-side comparison, the experience can still be reviewed and the data collected to provide a good review.

The world is mostly multiplayer now. The top 10 list is like 9:1 multiplayer vs. single player now. I played some Metro Exodus; it got really boring fast, and the linear gameplay is terrible.

I read about the STALKER 2 game. If it is STALKER but online survival with NPCs like the original, then it is going to be epic, unless they drop a Fallout 76 where people wanted a modded Fallout 4 with online players and got some arcade non-Fallout type game. Seriously, Fallout with no Radscorpions is not Fallout.
 
We need more online multiplayer reviews. While it's impossible to do a side-by-side comparison, the experience can still be reviewed and the data collected to provide a good review.

The world is mostly multiplayer now. The top 10 list is like 9:1 multiplayer vs. single player now. I played some Metro Exodus; it got really boring fast, and the linear gameplay is terrible.

I read about the STALKER 2 game. If it is STALKER but online survival with NPCs like the original, then it is going to be epic, unless they drop a Fallout 76 where people wanted a modded Fallout 4 with online players and got some arcade non-Fallout type game. Seriously, Fallout with no Radscorpions is not Fallout.

The problem is inconsistency in multiplayer games from one session to the next.
 
The problem is inconsistency in multiplayer games from one session to the next.

It is easy to know where the inconsistency is from. You can quickly rule out network issues; the normal hitches are either storage device related, graphics card related, memory related, or CPU related. The last two are the most common, though over 16GB the chance of a cache dump is low, especially with an SSD. CPU hitching is prominent; I had that issue with the 4790K.
 
It is easy to know where the inconsistency is from. You can quickly rule out network issues; the normal hitches are either storage device related, graphics card related, memory related, or CPU related. The last two are the most common, though over 16GB the chance of a cache dump is low, especially with an SSD. CPU hitching is prominent; I had that issue with the 4790K.

It isn't quite that simple. You end up with the same problems in multiplayer games that you have in single-player games when benchmarking actual gameplay. The issue you run into is that not everyone will do the same actions every time. Enemies don't always do the same things either. If I'm playing Destiny 2, and a bunch of random, low-health enemies end up in my face one run and not another, I may or may not see dips in FPS. If I don't get that occasional dip into the 45 FPS range and run at a constant 60FPS, then the results will look very different from the runs where I did see a dip in FPS. The key is to try and minimize the differences between runs in a given game. This is much harder when you can't control other players' behavior.
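To illustrate what I mean by minimizing differences between runs, here's roughly the sanity check you could apply to repeated passes of any benchmark. The FPS numbers and the 3% threshold are made-up placeholders for the sake of the sketch, not data from my testing.

```python
import statistics

def run_spread(avg_fps_per_run, tolerance=0.03):
    """Report run-to-run spread and flag a pass as too noisy past the tolerance."""
    mean = statistics.mean(avg_fps_per_run)
    spread = (max(avg_fps_per_run) - min(avg_fps_per_run)) / mean
    verdict = "repeatable" if spread <= tolerance else "too noisy to compare"
    return mean, spread, verdict

# Made-up numbers: a scripted single-player pass vs. a multiplayer session.
scripted    = [142.1, 140.8, 141.6, 142.4]   # tight spread between runs
multiplayer = [131.0, 118.7, 144.2, 126.5]   # other players change everything

for name, runs in (("scripted", scripted), ("multiplayer", multiplayer)):
    mean, spread, verdict = run_spread(runs)
    print(f"{name:11s} mean {mean:6.1f} FPS, spread {spread:5.1%} -> {verdict}")
```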
 
Different player numbers, actions, and therefore animations. A lot of people shooting at each other is going to be a different GPU load than a bunch of explosions, for example.

Throw in the inconsistency of network lag, both on your side and on 30 other sides... It's basically impossible to get consistency when it comes to framerates. Not to mention how different the frames are if you're respawning vs constantly playing, etc.

Now, I wish something like Battlefield had a dedicated benchmark mode. That would be sweet.
 
We can complain about consistency when benching multiplayer games, yet that's exactly what [H] has done in the past. It takes more time and really deserves a review all its own, but it's worthwhile information that almost no one is doing.
 
We can complain about consistency when benching multiplayer games, yet that's exactly what [H] has done in the past. It takes more time and really deserves a review all its own, but it's worthwhile information that almost no one is doing.

You can't properly benchmark multiplayer games as people have already told you. The exact same things are not going to happen over and over. Different people will be in different matches. Even the exact same people are going to do different things.

Nothing matches up. It simply doesn't. Because of that the results are literally useless because you cannot repeat them. If you cannot reliably repeat results you have no test.
 
You can't properly benchmark multiplayer games as people have already told you. The exact same things are not going to happen over and over. Different people will be in different matches. Even the exact same people are going to do different things.

Nothing matches up. It simply doesn't. Because of that the results are literally useless because you cannot repeat them. If you cannot reliably repeat results you have no test.

While I wish we could just get duration averages (say, 5 games of PUBG: average FPS * [big asterisk of caution]), I understand why no high-quality reviewer would touch it. The issue is that now we turn to low-quality YouTube channels where the system specs aren't adequately described. I do think an average FPS over a duration of time, with one CPU getting 200FPS and the other getting 150FPS, would be reasonable enough to conclude which CPU the person who focuses on that game *may* want to buy (no guarantee, blah blah, fine print).

Obviously, the solution would be for the developers behind these games (PUBG, Apex Legends, Fortnite, etc.) to put a repeatable benchmark in the game. Intel could fund them to do so ;) ;)
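Something like this is all I'm really asking for: a minimal sketch of the "average over a handful of matches, big asterisk attached" idea. The numbers are invented for illustration.

```python
import statistics

# Invented per-match average FPS for two CPUs over five multiplayer sessions.
sessions = {
    "CPU A": [196, 208, 187, 214, 201],
    "CPU B": [149, 158, 142, 155, 151],
}

for cpu, fps in sessions.items():
    mean  = statistics.mean(fps)
    noise = statistics.stdev(fps)   # the "big asterisk": session-to-session noise
    print(f"{cpu}: {mean:.0f} FPS average over {len(fps)} matches "
          f"(+/- {noise:.0f} FPS of session noise)")
```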
 
We can complain about consistency when benching multiplayer games, yet that's exactly what [H] has done in the past. It takes more time and really deserves a review all its own, but it's worthwhile information that almost no one is doing.

I certainly think that kind of information has value, but I probably won't do it for CPU or motherboard reviews.
While I wish we could just get duration averages (say, 5 games of PUBG: average FPS * [big asterisk of caution]), I understand why no high-quality reviewer would touch it. The issue is that now we turn to low-quality YouTube channels where the system specs aren't adequately described. I do think an average FPS over a duration of time, with one CPU getting 200FPS and the other getting 150FPS, would be reasonable enough to conclude which CPU the person who focuses on that game *may* want to buy (no guarantee, blah blah, fine print).

Obviously, the solution would be for the developers behind these games (PUBG, Apex Legends, Fortnite, etc.) to put a repeatable benchmark in the game. Intel could fund them to do so ;) ;)

Well, I thought for about a 50th of a second that aggregate numbers could work, but ultimately, I don't think they do. Game performance is too situational within the same game to use aggregate numbers and trust in them. For example: when computers got to where they could run the original Crysis well, the last level of that game took everything you thought you knew about the game's performance and chucked it out the window. At some point the game ran fine 95% of the time, and then that last bit ran horribly. So if I gave you aggregate numbers for a GPU throughout a small section of the game, you could think that the game will run awesome throughout and not realize that there are still potential chunks of the game where that's not true. The only saving grace in that scenario is that the CPU that tests worse will probably be worse even in the part of the game we didn't test.

Even built-in benchmarks are problematic. Sometimes they are representative of actual in-game performance and sometimes they are not. Take two games I've used in my articles for comparison: I haven't a clue how representative the benchmarks are for Hitman 2 and Shadow of the Tomb Raider. I haven't actually played them. I don't have any intention of playing Hitman 2. I do have a desire to play the other one, but I'm too busy trying to do reviews to get into it right now. I was going to use Destiny 2 in my articles, but it wouldn't run on the Ryzen 3000 series, so that was out for this review.

Ultimately, I think it boils down to this: if I think about it too hard, I can probably pick apart most tests for a CPU review. However, using the tests I've done and the ones the more reputable sites used, I think in general the CPU that wins in most of those cases is the faster one. In games, if your CPU wins 7 out of 10 gaming tests, it's probably safe to assume that it's faster at gaming, even if that isn't true 100% of the time. The best thing to do is try and find tests that cover at least some of the games you play. Luckily for us, the gap between AMD and Intel is much smaller than it once was, or even was in the last generation.
 
FrgMstr benched multi-player games just fine, iirc. He gave avg fps, but the emphasis was on the graphs and variation in frame times they showed (where greater variance and sudden spikes were worse than a smooth graph, even if the average ended up higher on the messy graph), as that's more meaningful, and better represents actual gameplay experience.
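To show why the frame-time graphs matter more than the average, here's a toy example with two made-up frame-time traces. The "spiky" one wins on average FPS but loses badly on the 99th-percentile frame time, which is the stutter you actually feel. Everything in it is invented for illustration.

```python
import statistics

def summarize(frametimes_ms):
    """Average FPS plus the 99th-percentile frame time from a frame-time log."""
    avg_fps = 1000.0 / statistics.mean(frametimes_ms)
    p99 = sorted(frametimes_ms)[int(0.99 * len(frametimes_ms)) - 1]
    return avg_fps, p99

# Made-up traces: "spiky" wins the average but hitches hard every so often.
smooth = [11.0] * 990 + [13.0] * 10   # steady ~90 FPS
spiky  = [9.5] * 980 + [45.0] * 20    # ~98 FPS average, visible stutter

for name, trace in (("smooth", smooth), ("spiky", spiky)):
    avg_fps, p99 = summarize(trace)
    print(f"{name:6s} avg {avg_fps:5.1f} FPS, 99th-percentile frame {p99:4.1f} ms")
```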
 
FrgMstr benched multi-player games just fine, iirc. He gave avg fps, but the emphasis was on the graphs and variation in frame times they showed (where greater variance and sudden spikes were worse than a smooth graph, even if the average ended up higher on the messy graph), as that's more meaningful, and better represents actual gameplay experience.

But does it? That spike dropping to the bottom may never happen again. There may very well have been a confluence of events which brought about that spike which can't be reproduced. At this point your determination of the performance of something in that game is on a one off event that you'll never see again. Even worse is the fact that you can't reproduce the same event with competing hardware to see how it handles the same event. Since you cannot reproduce the event at will you can't even figure out what caused the spike. It may not be anything to do with the hardware tested but an issue with the game engine where it hiccuped. This is the very reason why you need reproducible tests and why multiplayer games are horrible testing grounds for benchmarking.
 
FrgMstr benched multi-player games just fine, iirc. He gave avg fps, but the emphasis was on the graphs and variation in frame times they showed (where greater variance and sudden spikes were worse than a smooth graph, even if the average ended up higher on the messy graph), as that's more meaningful, and better represents actual gameplay experience.

Actually, no, he didn't. If you're referring to GPU reviews, that was done by Brent, not Kyle. The CPU used on his testing rig didn't change that often. In a GPU review, that's fine. When we are doing things to isolate the CPU and not really have the GPU factor in, I don't necessarily think that's the way to go. What we need are repeatable tests with little variation from run to run. This is getting harder as GPU-style boost clocks like the ones we see on Ryzen 3000 series CPUs can be inconsistent as it is.
 
But does it? That spike dropping to the bottom may never happen again. There may very well have been a confluence of events which brought about that spike which can't be reproduced. At this point your determination of the performance of something in that game is on a one off event that you'll never see again. Even worse is the fact that you can't reproduce the same event with competing hardware to see how it handles the same event. Since you cannot reproduce the event at will you can't even figure out what caused the spike. It may not be anything to do with the hardware tested but an issue with the game engine where it hiccuped. This is the very reason why you need reproducible tests and why multiplayer games are horrible testing grounds for benchmarking.
Over the course of a playthrough, if it happens repeatedly on one setup but not another, and the only difference is the CPU, then there's no reason to discard it imho. I was talking about general consistency though, although you can see spikes that way as well.

Actually, no, he didn't. If you're referring to GPU reviews, that was done by Brent, not Kyle. The CPU used on his testing rig didn't change that often. In a GPU review, that's fine. When we are doing things to isolate the CPU and not really have the GPU factor in, I don't necessarily think that's the way to go. What we need are repeatable tests with little variation from run to run. This is getting harder as GPU-style boost clocks like the ones we see on Ryzen 3000 series CPUs can be inconsistent as it is.
True, but if the setup is correct for isolating the CPU, and there are clear differences even beyond just the normal variance from differences in the game itself, then I believe the results are valid and useful data. Obviously, if it just looks exactly the same except for the graph shifting a few fps higher or lower, it's not very useful. But is that actually the case?

Edit: as far as boosts being inconsistent goes, that'd likely show up as a drop and then a spike in the framerate, if you've properly isolated the CPU. A CPU which doesn't boost as quickly, or which can't clock as high on one or two busy cores because the CPU doesn't idle enough to lower local temperatures, would struggle with maximum framerate but might have fewer spikes. If you recognize these differences in architecture and bring them to our attention, it would make the results that much more meaningful.
 
Over the course of a playthrough, if it happens repeatedly on one setup but not another, and the only difference is the CPU, then there's no reason to discard it imho. I was talking about general consistency though, although you can see spikes that way as well.


True, but if the setup is correct for isolating the CPU, and there are clear differences even beyond just the normal variance from differences in the game itself, then I believe the results are valid and useful data. Obviously, if it just looks exactly the same except for the graph shifting a few fps higher or lower, it's not very useful. But is that actually the case?

Edit: as far as boosts being inconsistent goes, that'd likely show up as a drop and then a spike in the framerate, if you've properly isolated the CPU. A CPU which doesn't boost as quickly, or which can't clock as high on one or two busy cores because the CPU doesn't idle enough to lower local temperatures, would struggle with maximum framerate but might have fewer spikes. If you recognize these differences in architecture and bring them to our attention, it would make the results that much more meaningful.

Let me clarify what I mean by inconsistencies in boost clocks. It isn't necessarily inconsistent within a given workload. It can be inconsistent from day to day, or from run to run. Meaning, your 3900X might boost at 4448MHz on one day or on one run, and then 4523MHz the next. Or, later in the week, you'll get yet another result. However, in a given workload the boosts will remain consistent at the time. If I run Cinebench R20 single-thread, I'll get the same boost clock at the time, and something similar every time, but ambient temps and other factors can alter these values slightly each run. If I run a multi-threaded application, the boost clock results tend to be more consistent every time I run it; even on different days or under slightly different conditions, the results are pretty much the same. I'm telling you from experience, boost clocks suck ass for benchmarking and reviewing CPUs.

For example, I have seen boost clocks on my 3900X sample range from 4.3GHz to 4.573GHz and a whole lot in between. This even changes with different AGESA code versions, hence why I did an entire update on the testing from this change alone. In multi-threaded workloads, the thing normally boosts around 4.1GHz on all cores. As far as gaming goes, I haven't seen that much variance in the tests even with the clock speeds changing. As I pointed out in my update article, the clocks didn't really change the results a whole lot.
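Just to quantify that spread with the clocks I mentioned above (a quick sketch, nothing more):

```python
# The clock figures are the ones quoted above for my 3900X sample; the point
# is just how much room that spread leaves for run-to-run noise.
low_boost_ghz  = 4.300
high_boost_ghz = 4.573

spread = (high_boost_ghz - low_boost_ghz) / low_boost_ghz
print(f"Single-thread boost spread on this sample: {spread:.1%}")
# ~6.3%, which is bigger than the 3-4% gaming gap being argued about elsewhere
# in this thread -- hence logging clocks alongside every run.
```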
 
FrgMstr benched multi-player games just fine, iirc. He gave avg fps, but the emphasis was on the graphs and variation in frame times they showed (where greater variance and sudden spikes were worse than a smooth graph, even if the average ended up higher on the messy graph), as that's more meaningful, and better represents actual gameplay experience.

Piling on to this: the only multiplayer benchmarks I recall seeing on [H] over the course of the past 7 years have been for Battlefield series games. Everything else has been single player. Battlefield games aren't too bad for benchmarking, as if you work in the same map doing the same general things (i.e. kamikaze jeep shenanigans around the same base flag in Conquest in a full 64-player server) then you can get consistent enough results. When maps are dynamically generated, the comparison becomes near impossible.
 
No real surprises.

These destroy Intel in highly parallel workloads. Much better perf/watt than Intel.

Intel is still top dog in gaming.

OC seems to top out at 4.2-4.3GHz.
3 to 4% difference overall for gaming. That is really small potatoes, hardly enough to have a negative impact on gameplay. It totally destroys Intel in streaming, encoding, and every other productivity task. This will ALL tilt in AMD's favor as all games on console will be Zen 2 based next year and will have to provide AMD optimizations for multicore. It will ONLY get better with time. When big Navi comes along in the first quarter on next-generation RDNA 2.0, AMD will at least equal 2080 Ti performance. That will substantially improve over time as RDNA is still in its very early life. More and more features will be added to the mix, which will mean even the 5700 and 5700 XT will be substantially improving over time with new drivers.
 
This will ALL tilt in AMD's favor as all games on console will be Zen 2 based next year and will have to provide AMD optimizations for multicore. It will ONLY get better with time.
Oh boy, this again. An AMD chip of one sort or another has been in the consoles - at least 6 years now? - and the long-running fantasy has been that it would somehow translate to gains on the PC side for AMD. It still hasn't materialized.

Because of the way console dev works, and with the hardware-level stuff abstracted away, game devs DNGAF about 'optimizing for AMD' or multicore or any particular hardware. There's no crossover. AMD drivers on PC may improve over time, but don't wait for consoles as some savior or multiplier, or you're in for disappointment.

AMD will have to do the heavy lifting, it won't be done for them. Where I'd like to see them lean heavy is optimizing for and becoming the go-to GPUs for Vulkan.
 
What it comes down to in a computer enthusiast environment, when you look at what CPU you should buy, is what display you are running, at what resolution, at what expected frame rates, and where you become GPU limited. And that is in a gaming-only environment.

This of course is a totally different argument if you are a 1080p gamer, and depends on what your needs are there.
 
Despite Zen 2 seeming like a solid improvement over Zen+, I'm pretty sure I'm going to keep my 2700X until there's a good deal on a 3600X/3700X or something. I'm playing at 1440p ultrawide (120Hz) anyway, so I'm GPU bottlenecked long before the CPU matters much.
 
Speaking to Dan_D, I'm wondering if it isn't possible to narrow the criteria a bit, and possibly bring in a different tester to do the actual gameplay needed to produce results.

Might even just be 'one day of multiplayer testing' with caveats up front.

The basic idea is to offer objective data and some subjective impressions, based both on user experience and a best-effort correlation with frame times.
 