Reviews for AMD’s Ryzen 2400G APU are in.

ASRock has been really good to me lately: Z170/270/370, X99, X370, X399, all great boards. That said, I picked up a Gigabyte ITX instead because the ASRock ITX fucked up with no DisplayPort and no USB 3.1 (WTF, it comes with the chipset!). I don't like the Realtek LAN on the Gigabyte, but the x8 lanes don't need a GPU so...

Actually, all the AM4 ITX and µATX boards are a big compromise. ASUS didn't even put fucking video ports on theirs, WTH?! It's like they put the rejects on the design team; they clearly know better, going by their Intel Z#70 ITX and AM4 ATX configurations.
 
"Extremetech confirms that now AMD uses "nonmetallic TIM"."


The word "now" you used puts the conversation in a negative tone like AMD made a quick change to TIM going forward. The truth is, they always have for APUs

Facepalm at Extremetech. That's some stupid sensationalism on their part.
 
I am assuming you mean the gpu portion of the faster cpu.

Yes, this really shows when TechSpot overclocked. The 2400G was some 30% faster than the 2200G at stock speeds. When overclocking both, the 2400G was less than 10% faster, as if it hit a big wall and the only way through was with faster memory.

It looks like the ~35 GB/s is enough bandwidth for Vega 8, even overclocked, but it looks like Vega 11 wants a bit more.
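For reference, a quick back-of-envelope on peak dual-channel DDR4 bandwidth (Python; the speeds are just common JEDEC/XMP grades, and real-world throughput lands well under these theoretical peaks):

```python
# Peak theoretical dual-channel DDR4 bandwidth: MT/s x 8 bytes/transfer x 2 channels.
def ddr4_dual_channel_gbs(mt_per_s: int) -> float:
    bytes_per_transfer = 8   # 64-bit channel
    channels = 2
    return mt_per_s * bytes_per_transfer * channels / 1000  # GB/s

for speed in (2400, 2933, 3200):
    print(f"DDR4-{speed}: {ddr4_dual_channel_gbs(speed):.1f} GB/s peak")
# DDR4-2400: 38.4 GB/s, DDR4-2933: 46.9 GB/s, DDR4-3200: 51.2 GB/s.
# Measured copy bandwidth is typically ~70-80% of peak, which is roughly
# where a ~35 GB/s figure on DDR4-3200 would land.
```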
Nope. I meant the CPU. An 8-thread Zen CCX will be more bandwidth-hungry than a 4-thread Excavator proc since it's getting a lot more work done. So overall bandwidth demand goes up, but as a practical matter, bandwidth to the GPU doesn't go up as much as you'd think for a mixed workload.

There is a lot of give and take/mutual dependency. That's why 1080p and lower-res benches are used for CPU gaming benchmarks. In an APU, hitting the CPU's bounds also lowers the GPU's bounds.
 
"Extremetech confirms that now AMD uses "nonmetallic TIM"."


The word "now" you used puts the conversation in a negative tone like AMD made a quick change to TIM going forward. The truth is, they always have for APUs

The "now" I posted is a shorthand for the "AMD also notes it has transitioned to [...] a traditional nonmetallic TIM" quote that you find in the ExtremeTech review. Both my posts #16 #20 were about APUs and only about APUs. Everything else, including your pretension that I was talking about Pinnacle Rige (I was not), is only in your imagination.
 
TweakTown's preliminary OC results were impressive.

Reviews are getting up to 4.2GHz in golden samples, which agrees with the 4.2GHz demoed by AMD at CES.

Those overclocks are about 200MHz above Summit Ridge, which agrees with the stock clocks of Raven Ridge being 100--300MHz higher than the stock clocks of the corresponding Summit Ridge quad-cores.

1500X: 3.5/3.7GHz <--> 2400G: 3.6/3.9GHz

1400: 3.2/3.4GHz <--> 2200G: 3.5/3.7GHz


They didn't use the sleep BCLK bug; they hit 4.175GHz using the multiplier, but temps quickly got too high to be usable on the Wraith Stealth. With a decent cooler, things really bode well for Zen+ overclocking, especially if that PCGamesN BCLK OC ends up being real and not a sleep timer bug. Hopefully somebody starts doing some OC with a decent cooler so we can see what Zen+ is capable of. Still, with all cores at 3.9GHz, Vega at 1500MHz (using the stock cooler), and 3200 RAM, overall performance was amazing for an IGP.

Those 4.56GHz are non-existent. It is the old sleep timer Ryzen bug again, faking overclock readings and scores.

https://www.pcgamesn.com/amd-raven-ridge-overclocking
 
Reviews are getting up to 4.2GHz in golden samples, which agrees with the 4.2GHz demoed by AMD at CES.

Those overclocks are about 200MHz above Summit Ridge, which agrees with the stock clocks of Raven Ridge being 100--300MHz higher than the stock clocks of the corresponding Summit Ridge quad-cores.

1500X: 3.5/3.7GHz <--> 2400G: 3.6/3.9GHz

1400: 3.2/3.4GHz <--> 2200G: 3.5/3.7GHz




Those 4.56GHz are non-existent. It is the old sleep timer Ryzen bug again, faking overclock readings and scores.

https://www.pcgamesn.com/amd-raven-ridge-overclocking

Yeah, that's exactly what I posted: the BCLK bug. The 4.175GHz OC was NOT due to the bug; that was pure multiplier, and on the stock Wraith Stealth cooler. I still haven't seen any tests with aftermarket (good) coolers.
 
Yeah, that's exactly what I posted: the BCLK bug. The 4.175GHz OC was NOT due to the bug; that was pure multiplier, and on the stock Wraith Stealth cooler. I still haven't seen any tests with aftermarket (good) coolers.

The bug generated those fake 4.56GHz. The 4.175GHz you mention is real and corresponds to the 4.2GHz that I mention in my post.

As mentioned in one of my links, AMD demoed the 2400G with the Wraith Max (a 140W cooler): 4.2GHz was the maximum overclock with that cooler.

The Stilt has delidded the APU, replaced the TIM, and reduced temps by 12°C. The max frequency overclock remained unchanged after the delid.
 
Nope. I meant the CPU. An 8-thread Zen CCX will be more bandwidth-hungry than a 4-thread Excavator proc since it's getting a lot more work done. So overall bandwidth demand goes up, but as a practical matter, bandwidth to the GPU doesn't go up as much as you'd think for a mixed workload.

There is a lot of give and take/mutual dependency. That's why 1080p and lower-res benches are used for CPU gaming benchmarks. In an APU, hitting the CPU's bounds also lowers the GPU's bounds.

What?! In no goddam way is the 3.7GHz 4C/8T CPU portion of the APU the bottleneck in ANY of these tests. MAYBE with older games at 480p on low, running 150+ fps, would you see any CPU bottleneck. What in the world are you talking about?
 
What?! In no goddam way is the 3.7GHz 4C/8T CPU portion of the APU the bottleneck in ANY of these tests. MAYBE with older games at 480p on low, running 150+ fps, would you see any CPU bottleneck. What in the world are you talking about?
I'm not saying that happened; I'm just illustrating that on an APU the CPU and GPU compete for resources. Last I checked, they don't share a contiguous memory space for game assets. So, for instance, any texture might have a copy for the CPU and another for the GPU, and each has to be loaded from memory separately, which means 2x the bandwidth is used, on average.
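To put a toy number on the double-copy point (Python; the asset size and shared fraction are made-up illustrative values, not measurements):

```python
# Toy model: without a truly unified game-asset memory space, an asset
# read by both the CPU and the GPU gets pulled from DRAM twice.
def memory_traffic_gb(assets_gb: float, shared_fraction: float) -> float:
    # shared_fraction: portion of assets that both CPU and GPU touch
    return assets_gb * (1 + shared_fraction)

assets = 2.0  # GB of assets streamed for a scene (illustrative)
for shared in (0.0, 0.5, 1.0):
    print(f"{shared:.0%} shared -> {memory_traffic_gb(assets, shared):.1f} GB of traffic")
# 0% shared -> 2.0 GB, 50% -> 3.0 GB, 100% -> 4.0 GB: the more data both
# sides touch, the more of the single DDR4 pool the duplication eats.
```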
 
Has anyone reviewed with Meltdown patches installed?

There is no Meltdown patch for AMD and never will be, since it does not have that flaw. As for Spectre patches, I have no idea whether they are out yet for AMD consumer parts.
 
Hmmmm...seems the 2400G has an interesting throttle mode when stressing the IGP.

Our testing did uncover an interesting phenomenon that we'll refer to as a self-preservation measure (rather than thermal throttling). We only observed it using the Ryzen 5 2400G. When the CPU cores exceed 94.5°C, their clock rates, power consumption, and waste heat are reduced. The working hypothesis is that this is a voltage and current limit. It's different from the throttling behavior you might expect to encounter based on temperature alone, and we tested that theory by cooling our 2400G to 60°C. Power consumption pulled back sharply anyway, despite low temperatures.

Separately, if the integrated graphics engine is stressed beyond its limits for several minutes, then both the GPU and CPU are throttled equally. A moderate clock rate reduction helps limit waste heat, ameliorating the issue. But this doesn't just affect stress testing. It can also be triggered by extensive GPU-accelerated video encoding and compute applications. If you push the GPU hard enough, the throttling can kick in with total power consumption under 50W and a sub-50°C temperature!

All of this wouldn’t really be worth writing about if it wasn’t for the fact that Ryzen keeps on throttling, even after the conditions that caused throttling in the first place are relaxed. A reboot is necessary to reset the chip's operating parameters. There’s no rhyme or reason to this, and AMD can't explain it.

http://www.tomshardware.com/reviews/amd-raven-ridge-thermal-power-benchmarking,5464.html
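If that report is accurate, the behavior amounts to a limit that latches until reboot. A toy sketch of that state machine (Python; the 94.5°C threshold comes from the quote above, while the "several minutes" stress window is my assumption):

```python
# Toy model of the 2400G throttle behavior Tom's describes: once tripped,
# the limit stays engaged even after temperature and load drop.
class ThrottleLatch:
    CPU_TEMP_LIMIT_C = 94.5    # threshold quoted by Tom's Hardware
    GPU_STRESS_LIMIT_S = 180   # "several minutes" -- assumed value

    def __init__(self):
        self.latched = False
        self.gpu_stress_s = 0.0

    def update(self, cpu_temp_c: float, gpu_stressed: bool, dt_s: float) -> None:
        self.gpu_stress_s = self.gpu_stress_s + dt_s if gpu_stressed else 0.0
        if cpu_temp_c > self.CPU_TEMP_LIMIT_C or self.gpu_stress_s >= self.GPU_STRESS_LIMIT_S:
            self.latched = True  # clocks and power pulled back
        # Note: nothing here ever clears self.latched -- per the review,
        # only a reboot resets the chip's operating parameters.

latch = ThrottleLatch()
latch.update(cpu_temp_c=96.0, gpu_stressed=False, dt_s=1.0)  # trips the limit
latch.update(cpu_temp_c=60.0, gpu_stressed=False, dt_s=1.0)  # cooled back down
print(latch.latched)  # True: still throttled, matching the observed behavior
```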
 
Reminds me of how GPUs crash or near-crash, crashing the driver to the point it can't recover, thus requiring a hard reset.
 
Hmmmm...seems the 2400G has an interesting throttle mode when stressing the IGP.



http://www.tomshardware.com/reviews/amd-raven-ridge-thermal-power-benchmarking,5464.html

Sounds more like a driver or maybe a BIOS issue, similar to what thesmokingman mentioned. I remember Nvidia had a similar issue with the 8/9/2x0 series where the cards would throttle and permanently stay that way until a hard reset; with the 400 and later series cards, it would automatically crash the drivers to reset clock speeds.
 
I spent time gauging prices locally, and the 2200G makes a good buy even for discrete gaming. You get 1400/1500 performance, sometimes better, for about 1.7K less. An absolute steal of a deal for a budget-oriented build.
 
There is no Meltdown patch for AMD and never will be, since it does not have that flaw. As for Spectre patches, I have no idea whether they are out yet for AMD consumer parts.

It is true that there is no current Meltdown patch for AMD hardware. But never say "never". Lisa Su claims they "believe" their hardware is not vulnerable to Meltdown, and the official Meltdown site continues labeling the vulnerability of AMD hardware as "unclear" at this moment.

Spectre patches are ready on Windows, but only one of them is active because the other requires a microcode update, which AMD still has not issued.
 
It is true that there is no current Meltdown patch for AMD hardware. But never say "never". Lisa Su claims they "believe" their hardware is not vulnerable to Meltdown, and the official Meltdown site continues labeling the vulnerability of AMD hardware as "unclear" at this moment.

Spectre patches are ready on Windows, but only one of them is active because the other requires a microcode update, which AMD still has not issued.

How many of the Intel patches were crashing rigs?
 
I want this APU for a custom emulation rig I'm planning to build. The problem is that the cooler for this beast is the size of the case I want to use. Are there any low-profile AM4 coolers that would fit a mini-ITX case and this board? The slimmer/smaller, the better.
 
Newegg is apparently up to their shenanigans again:

As of right now @ Newegg:

Ryzen 3 2200G: BACKORDER status and back up to $129.99
Ryzen 5 2400G: IN STOCK and back up to $189.99
 
Does anybody know if 'old board + BIOS update + new CPU' gives us HDMI 2.0 output, i.e. UHD at 60Hz?

List is being compiled: https://smallformfactor.net/forum/t...am4-motherboard-test-request-megathread.6709/

Will be testing with my pair of 2400Gs this weekend on an Asus Prime Pro X370 ("extra" from the Ryzen launch) and a Gigabyte B350 ITX.

To be honest, I'm not sure why this shouldn't be the case for all boards; the HDMI is logically wired pretty much straight to the CPU socket. There is a bit in between (voltages and all that), but they designed those boards in late 2016 at the earliest.

I am prepared if they fail to do good 4K over HDMI; I tend to make setups with a dual-screen 4K monitor + 4K TV (though not always both on), which is why I made sure both boards have DP 1.2 ports.
 
[Image: keep-calm-and-lets-get-back-to-the-topic.png]
 
The 35W versions (GE) have been getting some nice buzz the last day or two.
 
Had the unused cooler lying around from my R7 1700. The 2400G cooler feels hefty compared to the old APU coolers, but man... thinking the kids can flip for who gets the nice cooler now.
 

[Attachment: 20180214_182039.jpg]
It's fine for someone who wants a decent rig now and can drop in a discrete GPU whenever (if ever, lol) they can afford one.

Also would be able to upgrade to Zen2 or later procs down the road.

What's not to like?
Crapping on AMD gets clicks, I guess.
 
What's not to like?

Money's better spent (long-term) on a faster Intel or AMD CPU (well, one with more cores), because the iGPU just isn't that fast due to being strapped to a CPU memory controller.

However, if AMD were able to solve that problem in the same socket (shoehorn in HBM?), there's potentially some future-proofing. Just not enough to make it a 'sound' investment unless going dGPU-less is being forced.
 
Most of the complaints seem to be coming from people who don't want to accept that these are for people who don't want a dGPU at all.

No, it's not. They're being entirely ridiculous. There are enough lanes there for one discrete add-in GPU. What they're complaining about is that they can't run multiple GPUs, roflmao.
 
Playing with my 2400G
https://imgur.com/a/acqeV

Was able to get the GPU to 1600MHz. Pretty awesome little chip.

Very cool! You're seeing decent (linearly scaled?) improvements in synthetics from the increased clock speed, which allays one of my concerns: that memory speed would hamper overclock performance.

However, in the first benchmark you're seeing a lower minimum FPS at 1600MHz vs. 1250MHz. Is that repeatable, or is it an outlier? I'd hate to improve average framerates only for performance to be choppy!
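If you want to check the "linearly scaled" part, compare the score gain to the clock gain. A quick Python helper (the scores below are placeholders for illustration, not your actual Heaven results):

```python
# Scaling efficiency: what fraction of the clock increase shows up in the score.
def scaling_efficiency(score_base: float, score_oc: float,
                       clk_base_mhz: float, clk_oc_mhz: float) -> float:
    score_gain = score_oc / score_base - 1
    clock_gain = clk_oc_mhz / clk_base_mhz - 1
    return score_gain / clock_gain

# 1250 -> 1600 MHz is a 28% clock bump; placeholder scores for illustration.
eff = scaling_efficiency(score_base=1000, score_oc=1250,
                         clk_base_mhz=1250, clk_oc_mhz=1600)
print(f"{eff:.0%} of the clock gain realized")  # ~89% here; 100% would be perfectly linear
```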
 
Very cool! You're seeing decent (linearly scaled?) improvements in synthetics from the increased clock speed, which allays one of my concerns: that memory speed would hamper overclock performance.

However, in the first benchmark you're seeing a lower minimum FPS at 1600MHz vs. 1250MHz. Is that repeatable, or is it an outlier? I'd hate to improve average framerates only for performance to be choppy!

Good observation. At the beginning of Heaven, when you click the benchmark button, it sometimes hitches and the first FPS reading is low. I think that is the only time it will go that low. I could run it a few more times, but I don't think it's always reproducible.
 
Good observation. At the beginning of Heaven, when you click the benchmark button, it sometimes hitches and the first FPS reading is low. I think that is the only time it will go that low. I could run it a few more times, but I don't think it's always reproducible.

Let the bench run a quick loop by pushing the Enter key so it loops around once, then run the benchmark.
 
Re: Heaven, when I see that happen I hit Esc, then hit F9 to restart the benchmark. There are also a couple of other points during the run where the FPS dips drastically, on everything I've ever run it on.
 
Well, that behavior highlights the issue with minimum FPS as a concluding data point: we really need frametimes (and a graph!), which would isolate such issues and present a good overview of realized performance.

Still, it's nice to see the iGPU responding well to overclocking!
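Something like this is all I mean; a minimal sketch assuming you've logged per-frame times in milliseconds (e.g. from PresentMon/OCAT):

```python
# Summarize frametimes instead of min/avg FPS: percentiles expose hitches
# (like Heaven's first-frame stutter) without one outlier defining "min FPS".
def frametime_summary(frametimes_ms: list) -> dict:
    xs = sorted(frametimes_ms)
    def percentile(p: float) -> float:
        return xs[min(len(xs) - 1, int(p / 100 * len(xs)))]
    avg_fps = 1000 * len(xs) / sum(xs)
    return {
        "avg_fps": round(avg_fps, 1),
        "p99_frametime_ms": percentile(99),  # the "1% low" view
        "worst_frametime_ms": xs[-1],
    }

# Illustrative run: steady ~60 fps (16.7 ms frames) plus one 120 ms loading hitch.
sample = [16.7] * 500 + [120.0]
print(frametime_summary(sample))
# "Min FPS" would report ~8 fps off that single spike; the percentile view
# shows the run was smooth apart from one isolated hitch.
```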
 
Well, that behavior highlights the issue with minimum FPS as a concluding data point: we really need frametimes (and a graph!), which would isolate such issues and present a good overview of realized performance.

Still, it's nice to see the iGPU responding well to overclocking!

It's a quirk with the Unigine engine. Usually everyone knows to ignore the obvious anomalous low FPS in this bench. Or, like seasoned users, let it loop at least once and then run it.
 