AMD Announces New Ryzen Laptop APUs

AlphaAtlas

On their website, AMD officially revealed the Ryzen 7 2800H and Ryzen 5 2600H. The 2800H is the fastest version of Raven Ridge destined for laptops so far, with a 3.3GHz base clock, a 3.8GHz turbo clock, and a Vega IGP with 11 CUs running at 1300MHz. The 2600H has a 3.2GHz base clock, a 3.6GHz boost clock, and a Vega 8 IGP running at 1100MHz. Both chips feature a 35W-54W configurable TDP, which is a considerable rise from the 15W TDP of previous Raven Ridge laptop parts, and support for DDR4 memory at speeds of up to 3200MHz. Some performance specifications for the 2800H showed up in an AMD presentation some time ago, but it's not clear when laptops with these parts will launch.
 
Considering how long it took to get the first round of Raven Ridge into laptops, I'm not optimistic. Maybe by 1H 2019 we'll get a 15-inch plastic notebook with single-channel 8GB RAM and a poor cooling solution. Ugh.
 
I'd love to see some performance laptop builds with Ryzen chips but I don't think that is the market they are targeting.... sadly.
 
Considering how long it took to get the first round of Raven Ridge into laptops, I'm not optimistic. Maybe by 1H 2019 we'll get a 15-inch plastic notebook with single-channel 8GB RAM and a poor cooling solution. Ugh.

Yeah :(

The 15W Raven Ridge laptop launches were still an improvement over previous APU launches, though. I'm typing on a Llano laptop right now, and I remember OEM designs being nearly nonexistent back when I was shopping for it.
 
Looks like these will compare favorably against the i5-8300H. I do hope we see these in laptops soon with options for a dGPU solution as well.
 
I'd love to see some performance laptop builds with Ryzen chips but I don't think that is the market they are targeting.... sadly.

Things change. AMD is coming after all segments with a vengeance. Most laptops have had plenty of CPU power for a long time now. However, AMD always consumed a lot more power with the Bulldozer->Piledriver architectures. I think they have found a balance between energy consumption and CPU/GPU performance that gives them an edge over Intel. Being able to play basic esports games on a laptop is a pretty big deal. No more spending $1000 on an Intel with third-party graphics add-ons.
 
So, the 2800H is a slightly slower version of the 2400G, 45W vs 65W. It doesn't sound impressive at first, but it's far better than the Intel integrated graphics.
 
Considering how long it took to get the first round of Raven Ridge into laptops, I'm not optimistic. Maybe by 1H 2019 we'll get a 15-inch plastic notebook with single-channel 8GB RAM and a poor cooling solution. Ugh.

I gave up after the 8800P 15W vs. 25/35W fiasco. OEMs refuse to offer a good AMD laptop with respectable cooling/hardware.
 
I gave up after the 8800P 15W vs. 25/35W fiasco. OEMs refuse to offer a good AMD laptop with respectable cooling/hardware.

The problem was that the additional cooling hardware and additional battery + heat made them no longer competitive on cost or performance. Hence it was hard to find OEM laptop designs in all but extremely niche markets.

Ryzen APUs are a whole new ballgame energy-wise.
 
So, the 2800H is a slightly slower version of the 2400G, 45W vs 65W. It doesn't sound impressive at first, but it's far better than the Intel integrated graphics.

The 2400G is a "desktop" APU and the 2800H is a "laptop" APU, so beyond clock speed and the fact that both are built on the Ryzen core, the 2800H is likely more "tweaked" and should be able to use its power envelope at least that much better before throttling. Laptop-grade chips (mobile in general) usually seem to handle the power available to them more effectively, because they basically have no choice.

From what I read, the 2400G has throttling problems compared to the 2200G because the TDP headroom is very limited when it has to feed both the CPU cores AND the Vega core. In this case, yes, it's clocked a bit lower (CPU-wise), but maybe they did their homework and KNOW it is very unlikely to throttle at all (CPU or IGP). That could be exactly why they tuned the CPU clock down a touch and opted to increase the IGP clock, so it can "easily" stay within its TDP numbers. Reviews/time will tell, I suppose.
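
Rough back-of-the-envelope of what I mean by a shared budget; every wattage figure in this little sketch is a made-up placeholder, not a measured or published AMD number:

```python
# Toy model of a CPU + iGPU sharing one TDP budget, just to illustrate the
# throttling argument above. Every wattage figure here is an illustrative
# placeholder, not a measured or published number.

def sustained_state(tdp_w, cpu_w_per_ghz, gpu_w_full, cpu_target_ghz):
    """Return (cpu_ghz, gpu_fraction) that fit inside the shared budget."""
    cpu_demand_w = cpu_target_ghz * cpu_w_per_ghz
    if cpu_demand_w + gpu_w_full <= tdp_w:
        return cpu_target_ghz, 1.0                 # everything fits, no throttling
    gpu_budget_w = max(tdp_w - cpu_demand_w, 0.0)  # GPU gets the leftovers
    if gpu_budget_w > 0:
        return cpu_target_ghz, gpu_budget_w / gpu_w_full
    return tdp_w / cpu_w_per_ghz, 0.0              # CPU alone blows the budget

# Hypothetical numbers: ~9 W per GHz across four loaded cores, iGPU
# wanting ~20 W flat out.
print(sustained_state(45, 9.0, 20.0, 3.3))  # tight budget: GPU only gets ~77% of its power
print(sustained_state(65, 9.0, 20.0, 3.6))  # desktop-style budget: nothing has to give
```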

Either AMD was waiting for enough dies to make it happen, or they took their time to do various tweaks to ensure it works as well as possible, comparatively speaking (better binning, lower thermals, etc.). Because it is classed as a "high performance gaming mobile" part, turning the CPU clock down just a touch to increase the GPU clock absolutely makes sense IMO: 4c/8t at a base clock of 3.3GHz is not THAT bad. Hell, many folks are using similar (but overall slower) aged CPUs to this day.

Like me, LOL... a Phenom II 955 (980 clock speed) at 3.7-and-a-touch GHz all-core, and this at 3.3GHz still takes its lunch money ^.^

Ryzen overall has been, IMHO, a massive blessing to AMD and consumers everywhere, and I do not see this being any different. To put it another way: you get at least very similar performance to what the 2400G can give you in a lower TDP (power consumption) envelope, which is a sweet deal for laptops. As long as the vendors (such as HP) do not botch them with stupid RAM amounts or a vastly reduced mAh battery, they are likely to be quite good, though I can picture Acer, HP, and Lenovo plopping them into crazy expensive laptops because "best spec" (as well as screwing something up spec-wise so they don't show at their best).
 
Looking forward to these chips being able to show the actual potential of the Ryzen/Vega APU. Just hope the coolers spec'd are up for gaming duty. RAM speeds as well, at least CL14 2800 to open that Vega core up! Love my little 2200G in the living room PC, 1080p medium settings all day long. Would love to see on-chip HBM at some point, but DDR5 and PCIe 4.0/5.0 should help traditional layouts as well. Bright future for AMD for the next couple of years, methinks. So much market share to capture.
 
So what does this compare to? A 6700HQ?

Probably gets its ass kicked by the 8300H i5.
 
Can’t wait to see how the laptop vendors will fuck these up like they do every other AMD laptop.

Single channel RAM with a USB 1.0 connected thumb drive for storage...
 
So what does this compare to? A 6700HQ?

Probably gets its ass kicked by the 8300H i5.

Nope. But that all depends on what benchmark you measure it by. Intel might win on CPU-intensive tasks, but AMD would win on graphics performance. But as I stated, most users can get by just fine with middle-tier CPUs in today's laptops. There's been more than enough power for years for things like Excel, Word, YouTube, etc. Video games, however, are now within grasp at reasonable price points thanks to AMD.
 
Being able to play basic esports games on a laptop is a pretty big deal.

I've been doing this for years on Intel-based ultrabooks. Intel's GPU is plenty fast for many of these, and without discrete graphics memory, the AMD solution is only a tad faster. In either case you'd want discrete for competitive, and then Intel is the faster overall solution. But for something like League of Legends? That's been at 1080p60 for a while already.

But as I stated, most users can get by just fine with middle-tier CPUs in today's laptops.

This is absolutely true; I upgraded to an 8550U ultrabook because I like to do photo editing on the road and HDR blends were just too slow with the 7500U. The 8550U hits 4.0GHz easily under load; it's damn hard to complain about it.


And the real kicker in all of this is going to be battery life. I have no doubt that AMD can hit the performance needed, but being able to keep going when unplugged is what will really matter.
 
These would make great laptop chips if someone would make one that was affordable and not cheap and shitty
 
So what does this compare to? A 6700HQ?

Probably gets its ass kicked by the 8300H i5.
The base frequency of the i5 8300H is only 2.3GHz, so the Ryzen 2800H's 3.3GHz base frequency would trounce it in CPU power. The 8300H has a small edge in turbo speeds, but Intel is much more miserly with turbo on laptop chips, so that won't account for much. Extremely short jobs might get ahead of the 2800H, but anything longer than 5 seconds would be dropped down to 2.3GHz and therefore lose to the 2800H. All around, Intel's mild clock-for-clock advantage can't make up for the roughly 30% lower clock speed of the i5 8300H.

As for the graphics, the Ryzen 2400G has nearly the same graphics configuration as the 2800H. The 2800H might be a little slower due to power savings, but it can't be by much, considering they both have 11 compute units. So it would absolutely trounce the Intel graphics in any benchmark. Intel has absolutely sucky graphics in any case, and they haven't improved their integrated graphics since the Broadwell series. My i5 5200U laptop has the same graphics as the i5 8300H, except for the 4K support, and I know for certain that graphics subsystem sucks.

The 2800H should trounce the 8300H in every conceivable way.
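
The back-of-the-envelope version of that clock argument, treating throughput as roughly clock × IPC; the 10% clock-for-clock edge for Intel here is an assumed illustration, not a benchmark result, and it assumes the 8300H really does fall back to base clock under sustained load:

```python
# Crude throughput comparison: sustained clock x assumed relative IPC.
# The 10% clock-for-clock edge for the Intel part is an assumption for
# illustration, not a benchmark result.

ipc_edge_intel = 1.10   # assumed Coffee Lake-vs-Zen IPC advantage

r7_2800h_base = 3.3     # GHz, announced base clock
i5_8300h_base = 2.3     # GHz, Intel's listed base clock

amd_score   = r7_2800h_base * 1.0
intel_score = i5_8300h_base * ipc_edge_intel

print(f"2800H relative throughput: {amd_score:.2f}")
print(f"8300H relative throughput: {intel_score:.2f}")
print(f"8300H base-clock deficit:  {1 - i5_8300h_base / r7_2800h_base:.0%}")
# If the 8300H really does sit at 2.3GHz under sustained load, a ~10% IPC
# edge doesn't come close to covering a ~30% clock deficit.
```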
 
The base frequency of the i5 8300H is only 2.3GHz, so the Ryzen 2800H's 3.3GHz base frequency would trounce it in CPU power. The 8300H has a small edge in turbo speeds, but Intel is much more miserly with turbo on laptop chips, so that won't account for much. Extremely short jobs might get ahead of the 2800H, but anything longer than 5 seconds would be dropped down to 2.3GHz and therefore lose to the 2800H. All around, Intel's mild clock-for-clock advantage can't make up for the roughly 30% lower clock speed of the i5 8300H.

You can't just compare numbers from Intel's Ark directly; the system the chip goes into has a huge effect, and you can't take base speeds for granted at all.

To make a determination, you're going to need to compare these two in the same enclosure.
 
Yeah it will be nice when AMD hits all of the OEM lineups, so direct comparisons can be made. I think they have finally woken up and realized they need to build some decent laptops with AMD chips. No more bottom feeding/outdated designs.
 
I'd love to see AMD provide a dGPU equivalent to the IGP portion in both of these new mobile Ryzen processors so laptop OEMs can make a Crossfire enabled laptop that'll still be more power efficient than many laptops equipped with a dedicated 1050 or 1060.
 
I'd love to see AMD provide a dGPU equivalent to the IGP portion in both of these new mobile Ryzen processors so laptop OEMs can make a Crossfire enabled laptop that'll still be more power efficient than many laptops equipped with a dedicated 1050 or 1060.

That would be terrible?

The dGPU would have access to VRAM, while the iGPU would be fighting with the CPU for access to main memory... and then you want to Crossfire that?

Be better off just running the dGPU.
 
That would be terrible?

The dGPU would have access to VRAM, while the iGPU would be fighting with the CPU for access to main memory... and then you want to Crossfire that?

Be better off just running the dGPU.

Not if the dGPU has its own VRAM; then Crossfire it with the IGP in hybrid mode, just like older APUs were capable of. It would likely make for a very capable entry-level gaming laptop that would probably blow away the graphics performance of the competition in the same price brackets.
 
Not if the dGPU has its own VRAM; then Crossfire it with the IGP in hybrid mode, just like older APUs were capable of. It would likely make for a very capable entry-level gaming laptop that would probably blow away the graphics performance of the competition in the same price brackets.

But now you're into a different price bracket: you have to put eDRAM on package/on die.

Now, I'm not discounting the potential performance or how cool it would be, just that it would likely not be cost competitive :).
 
The base frequency of the i5 8300H is only 2.3GHz, so the Ryzen 2800H's 3.3GHz base frequency would trounce it in CPU power. The 8300H has a small edge in turbo speeds, but Intel is much more miserly with turbo on laptop chips, so that won't account for much. Extremely short jobs might get ahead of the 2800H, but anything longer than 5 seconds would be dropped down to 2.3GHz and therefore lose to the 2800H. All around, Intel's mild clock-for-clock advantage can't make up for the roughly 30% lower clock speed of the i5 8300H.

As for the graphics, the Ryzen 2400G has nearly the same graphics configuration as the 2800H. The 2800H might be a little slower due to power savings, but it can't be by much, considering they both have 11 compute units. So it would absolutely trounce the Intel graphics in any benchmark. Intel has absolutely sucky graphics in any case, and they haven't improved their integrated graphics since the Broadwell series. My i5 5200U laptop has the same graphics as the i5 8300H, except for the 4K support, and I know for certain that graphics subsystem sucks.

The 2800H should trounce the 8300H in every conceivable way.

It doesn't drop down to 2.3.

Actually, Intel does a pretty good job of holding turbo.

My 6700HQ holds max turbo on all 4 cores through an entire encoding project, and other reviews of newer Core i7s show that turbo holds for a long time too, except when there's a shitty Apple-like cooling solution.
It actually never dips close to base clocks, ever.
 
... Crossfire enabled laptop...

Them is fightin' words right there! My experience with Crossfire using an iGPU + a dedicated GPU is that it doesn't work right. It sounds neat in theory, but in real life, with the quality of AMD/ATI's drivers, it's not worth the effort unless you're benching.

I guess there are the DX12 multi-GPU options, but has anyone seen any real gains with that? Even on desktop hardware / dedicated GPUs?
 
But now you're into a different price bracket: you have to put eDRAM on package/on die.

Now, I'm not discounting the potential performance or how cool it would be, just that it would likely not be cost competitive :).

I disagree. It would probably only add about $50-75 at the cash register for the second 11 CU dGPU with 2GB RAM.
 
Them is fightin' words right there! My experience with Crossfire using an iGPU + a dedicated GPU is that it doesn't work right. It sounds neat in theory, but in real life, with the quality of AMD/ATI's drivers, it's not worth the effort unless you're benching.

I guess there are the DX12 multi-GPU options, but has anyone seen any real gains with that? Even on desktop hardware / dedicated GPUs?


Well, that would be the real trick: AMD would need to dedicate some effort in bringing stable drivers to such a setup.
 
It doesn't drop down to 2.3.

Actually, Intel does a pretty good job of holding turbo.

My 6700HQ holds max turbo on all 4 cores through an entire encoding project, and other reviews of newer Core i7s show that turbo holds for a long time too, except when there's a shitty Apple-like cooling solution.
It actually never dips close to base clocks, ever.

My dad and I have different versions of the same laptop, a Dell Inspiron 15" convertible. His is a Core i5 7200U (model 7579) and mine is a 5200U (model 7558), each with its proper chipset and memory, but otherwise they're the same, including the same cooling solution, single-channel memory, and battery. They both sit at base clocks except for some spurts up to the turbo clocks that last just a couple of seconds. I've monitored that through games and encoding repeatedly. Granted, that's basically just one model, so I can't say for certain about other models, but I have read about the same experiences on many other laptops and prebuilt desktops.
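
For anyone who wants to check their own machine, this is roughly the kind of logging I did; a minimal sketch assuming the psutil package, and how faithfully the reported frequency tracks the real clocks varies by OS and driver:

```python
# Minimal clock logger: samples the reported CPU frequency and load once a
# second while a game or encode runs in the background. Requires the psutil
# package; how accurately the OS reports the live frequency varies a lot by
# platform and driver.
import csv
import time

import psutil

with open("clock_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "freq_mhz", "cpu_util_pct"])
    start = time.time()
    for _ in range(300):                       # roughly five minutes of samples
        freq = psutil.cpu_freq()               # reported current/min/max in MHz
        util = psutil.cpu_percent(interval=1)  # blocks ~1 s between samples
        writer.writerow([round(time.time() - start, 1),
                         round(freq.current, 1), util])
```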
 
Toss this in with a 15.6" 1080p screen. No optical drive needed. An NVMe M.2 OS drive. Some other modern stuff.

Then tell me what games play at what FPS on that. Next, how long will it last on the battery? Give me 12 hours of productivity (typing, document work, email, etc.) and 4+ hours of gaming. Win.
 
I disagree. It would probably only add about $50-75 at the cash register for the second 11 CU dGPU with 2GB RAM.

Your opinion is touching, but seems quite naive.

Look at what Intel has charged for parts with eDRAM, or what they're charging now for their AMD collaboration. You're not just talking extra parts but special runs and more complicated manufacturing processes: you lose economies of scale, and the price goes up more.
 
Your opinion is touching, but seems quite naive.

Look at what Intel has charged for parts with eDRAM, or what they're charging now for their AMD collaboration. You're not just talking extra parts but special runs and more complicated manufacturing processes: you lose economies of scale, and the price goes up more.

Who said anything about expensive eDRAM in an ASIC for such a solution?

It would, instead, be a cheap dGPU (matching the 8 or 11 CUs of the IGP) with a meager amount of cheap VRAM, both soldered onto the laptop motherboard...no different from how past implementations of cost-effective dGPUs with varying amounts of VRAM have been manufactured.
 
Who said anything about expensive eDRAM in an ASIC for such a solution?

It would, instead, be a cheap dGPU (matching the 8 or 11 CUs of the IGP) with a meager amount of cheap VRAM, both soldered onto the laptop motherboard...no different from how past implementations of cost-effective dGPUs with varying amounts of VRAM have been manufactured.

Why are you talking about the dGPU?

Of course it has DRAM. It has VRAM.

But the iGPU has to contend with the CPU for slower main memory. Trying to combine the two for mGPU would be hell for frame pacing. Why bother?

[unless you're going to give the iGPU some VRAM...]
 
Why are you talking about the dGPU?

Of course it has DRAM. It has VRAM.

But the iGPU has to contend with the CPU for slower main memory. Trying to combine the two for mGPU would be hell for frame pacing. Why bother?

[unless you're going to give the iGPU some VRAM...]


I'm talking about hybrid Crossfire...it would take a bit of effort on AMD's part to get the drivers where they need to be, since their past implementations have been lackluster at best.

The VRAM for the dGPU side could even be cheap DRAM, even a SODIMM. Would there be tons of memory bandwidth? No, of course not. But it *could* work as a viable solution...again, the driver support from AMD would have to be on point.
 
I'm talking about hybrid Crossfire...it would take a bit of effort on AMD's part to get the drivers where they need to be, since their past implementations have been lackluster at best.

The VRAM for the dGPU side could even be cheap DRAM, even a SODIMM. Would there be tons of memory bandwidth? No, of course not. But it *could* work as a viable solution...again, the driver support from AMD would have to be on point.

AMD could make it 'work', I agree, but it's not going to be worth it. Crossfire has a horrific track record with keeping frametimes consistent, and this is basically a worst-case scenario unless both GPU complexes have access to enough memory bandwidth, and by that time cost, power usage, and performance would all favor a larger dGPU.
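
To put the frame-pacing point in concrete numbers, here's a toy comparison of two frametime traces with the same average FPS; the millisecond values are made up purely for illustration:

```python
# Two made-up frametime traces with the same average (~16.7 ms, i.e. ~60 FPS).
# The "AFR-ish" trace alternates fast/slow frames the way badly paced
# multi-GPU setups tend to; the worst-frame numbers show why it feels worse
# even though the average FPS is identical.
from statistics import mean

single_gpu = [16.7] * 8           # steady pacing
hybrid_afr = [8.0, 25.4] * 4      # alternating fast/slow frames

def summarize(name, frametimes_ms):
    worst = max(frametimes_ms)
    print(f"{name}: avg {mean(frametimes_ms):.1f} ms, "
          f"worst {worst:.1f} ms (~{1000 / worst:.0f} FPS at the hitches)")

summarize("single dGPU", single_gpu)
summarize("hybrid AFR ", hybrid_afr)
# Same ~60 FPS average, but the alternating trace keeps dipping to ~39
# FPS-equivalent frames, which is exactly the stutter people complain about.
```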
 