AMD Announces Ryzen 7 3700X, 3800X and Ryzen 9 3900X

A 15% IPC improvement is something AMD has sorely needed for a while. How close does this get them to Intel's IPC (fully side-channel-mitigated Intel)? Once these come out, I can't wait to see some comparisons.

If the IPC gets closer to Intel's, and they are hitting >4.5GHz boost at those core counts and prices, it is really amazing! And it is good for all of us, whether you buy Intel or AMD. If you buy AMD, here is a worthy upgrade; if you buy Intel, chances are your pricing just dropped a few hundred $$.

You guys arguing that IPC isn't important are mistaken. AMD has been behind Intel in this area for years... if they catch up, it means real competition.

IPC is also more important than ever, as shrinking silicon transistors have already hit a clock-speed wall. So overall performance improvements can only come from more cores and better IPC (instructions per clock, a per-core, per-cycle metric). And we all know that many games do not benefit all that much from large core counts, so single-core performance (IPC x GHz) matters most.
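Rough back-of-envelope of that IPC x GHz point, with made-up numbers just to show how the tradeoff works (these are placeholders, not measured figures for any real chip):

```python
# Single-thread throughput scales roughly as IPC x clock.
# The IPC and clock figures below are placeholders, not measurements.

def single_thread_score(ipc, clock_ghz):
    """Unitless single-thread performance estimate."""
    return ipc * clock_ghz

chip_a = single_thread_score(ipc=1.15, clock_ghz=4.6)  # hypothetical: +15% IPC, 4.6 GHz
chip_b = single_thread_score(ipc=1.00, clock_ghz=4.9)  # hypothetical: baseline IPC, 4.9 GHz

print(f"A relative to B: {chip_a / chip_b:.1%}")  # ~108%, i.e. roughly 8% faster per thread
```

In other words, an IPC deficit or lead can outweigh a few hundred MHz of clock either way.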

So, a question about boost speeds for those of you already running the newer AMD CPUs. The 3900X is reported to have a 3.8GHz base with a 4.6GHz boost. How does that work in real-world experience? Is it only a few cores that get boosted to that speed, or could all of the cores boost to 4.6GHz simultaneously, say if you were running something on all cores?

Excited for reviews once these come out. Hopefully we can get comparisons to Intel across a variety of workloads from somewhere. The raw benchmarks aren't bad, but they do not translate directly into gaming performance.

IPC was already about the same with the Ryzen 2000 series when both are clocked at 4GHz.

It was really only in gaming that Intel still had an advantage when equally clocked. That might have more to do with some oddity of cache layout/interaction.
 
IPC was already about the same with the Ryzen 2000 series when both are clocked at 4GHz.

It was really only in gaming that Intel still had an advantage when equally clocked. That might have more to do with some oddity of cache layout/interaction.
I figured it had to do with the way the CCX modules pass data around and the relatively high latency involved. From a technical standpoint I think that is something Zen 2 brings as an improvement, even if it is just allowing the memory to work on a divider. Hopefully they can speed up the Infinity Fabric. Reviews are needed.
 
I figured it had to do with the way the CCX modules pass data around and the relatively high latency involved. From a technical standpoint I think that is something Zen 2 brings as an improvement, even if it is just allowing the memory to work on a divider. Hopefully they can speed up the Infinity Fabric. Reviews are needed.

We still don't know what inter-core (and inter-CCX and inter-die) latency looks like, but the screenshot of the memory latency looks exceedingly promising vs. what we were seeing with Zen and Zen+. That goes a long way toward easing the concerns around AMD separating all of the CPU cores from the memory controller.

[also, that was at DDR4-4000 and CAS18; while I'd expect latency to step down linearly when less boutique memory is used, that does need to be tested too; the more common DDR4-3000 CAS15 and DDR4-3200 CAS16 stuff needs to work well too]
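Quick math on the DRAM side of those numbers (first-word CAS latency only; the fabric/IMC round trip shown in the screenshot sits on top of this). The kits below are just the common examples mentioned above:

```python
# First-word CAS latency in ns = CL cycles / (data rate / 2).
def cas_ns(cl, mt_per_s):
    return cl * 2000.0 / mt_per_s

for name, cl, rate in [("DDR4-4000 CL18", 18, 4000),
                       ("DDR4-3200 CL16", 16, 3200),
                       ("DDR4-3000 CL15", 15, 3000)]:
    print(f"{name}: {cas_ns(cl, rate):.1f} ns")
# DDR4-4000 CL18: 9.0 ns
# DDR4-3200 CL16: 10.0 ns
# DDR4-3000 CL15: 10.0 ns
```

So the DRAM itself is only about 1 ns apart between the boutique kit and the common stuff; the interesting question is what the fabric adds.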
 
Seems it is being said at Chiphell that the AMD Ryzen 3000 series OC frequency limit is ~5GHz.
They report that 4.8GHz all-core is achievable.
5.0GHz single-core is doable, but it's a challenge.
5.0GHz all-core is pretty much a no-go.
1.35V for all-core 4.5GHz.
4.4GHz performs similarly to a 5GHz 9900K in Cinebench.
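For what it's worth, if that last claim were real and Cinebench scaled roughly linearly with clock, the implied per-clock advantage is easy to ballpark (purely illustrative; the Chiphell numbers are unverified):

```python
# Implied per-clock (IPC) advantage if 4.4 GHz matches a 5.0 GHz 9900K,
# assuming the score scales ~linearly with clock. Illustrative only.
implied_ipc_gain = 5.0 / 4.4 - 1
print(f"{implied_ipc_gain:.1%}")  # ~13.6%
```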
 
Can I use a rotary woofer? Even then you're going to need gigawatts to reach 182dB. Physics and the conservation of energy are a bitch.
Rotary is the shizniz, but yes, no 182dB; even SPL comp cars struggle for that.
I designed a single 21" housing that can get down to 8Hz with a Maelstrom X-21... just. 64' organ pipes are the lowest instrument I'm aware of.

Bruh............ dat latency...
[attachment 164321: memory latency screenshot]
Pretty reputable people analysed that and called it fake due to the CPU string and a few other irregularities.

I think latency will be pretty similar to Zen+, which, at the same RAM frequency as Intel, shows the same or lower latency under 8 threads. Intel's only advantage is IMC/memory speed when ramped up. AMD just needs a better memory controller with Zen 2.
 
Where I live, trading cooling spend for better RAM is something I am happy to do. Example:

A Corsair H100i RGB is 2.3K where I am, while a Trident Z RGB 3200 16GB kit is 2K. Since I know AMD's stock cooling solutions are good out of the box, I can put the cooling budget toward better RAM in my upcoming ITX build.

Just for the lols, the H115i is 2.5K, so I guess cooling is the exotic part here.

An R7 2700 is 4.3K; the i9-9900K is 9K right now, but it's on special, normal price is 11K. Intel has almost nothing to value-prop me with.
 
Since I know AMD's stock cooling solutions are good out of the box, I can put the cooling budget toward better RAM in my upcoming ITX build.

While I wouldn't hesitate to do this running an mATX system or larger at stock... in mITX I'd be worried about suffocating the stock cooler with lack of airflow, which the AIO takes care of.

And if going mITX, in general, it's worth getting the right parts. You're already making compromises.
 
Where I live, trading cooling spend for better RAM is something I am happy to do. Example:

A Corsair H100i RGB is 2.3K where I am, while a Trident Z RGB 3200 16GB kit is 2K. Since I know AMD's stock cooling solutions are good out of the box, I can put the cooling budget toward better RAM in my upcoming ITX build.

Just for the lols, the H115i is 2.5K, so I guess cooling is the exotic part here.


What kind of ridiculous pumped-up currency is that?!
 
While I wouldn't hesitate to do this running an mATX system or larger at stock... in mITX I'd be worried about suffocating the stock cooler with lack of airflow, which the AIO takes care of.

And if going mITX, in general, it's worth getting the right parts. You're already making compromises.

I have good exhaust fans; I built a 2600 ITX rig with no heat issues, which I subsequently sold when I relocated for a substantially better career move. I think the 5960X runs far hotter and it still has adequate cooling.
 
Seems it is being said at Chiphell that the AMD Ryzen 3000 series OC frequency limit is ~5GHz.
They report that 4.8GHz all-core is achievable.
5.0GHz single-core is doable, but it's a challenge.
5.0GHz all-core is pretty much a no-go.
1.35V for all-core 4.5GHz.
4.4GHz performs similarly to a 5GHz 9900K in Cinebench.

Maybe. The Reddit source for the shoddily written WCCF article can't agree on exactly what was said. So that might be the case, might not be. They can't even agree if the poster is talking about all 3000 chips or one specific model.
 
Any expectations from those more fluent with Ryzen as to which memory speed/timings one would aim for "optimally" with the upcoming Ryzen chips? The primary concern will be games, and hopefully lasting 3-5 years. I'm told DDR4-3200 (no timings given) was the sweet spot for the 2700X.

All of my non gaming compute loads (compiling, rendering, etc) will be done on a TR4 machine, so I'm strictly talking games.

I'm just going over ballpark pricing so I can get the cash together while we're still over a month out, thus avoiding any pans hitting me in the head from certain feline cohabitants.


As for inter-CCX communication, maybe there will be some sort of "L4" buffer on the IMC to help reduce the number of misses going through all the cores? Just thinking out loud, I don't know this architecture.
 
As for inter-CCX communication, maybe there will be some sort of "L4" buffer on the IMC to help reduce the number of misses going through all the cores? Just thinking out loud, I don't know this architecture.

I also thought there would be L4 cache on the I/O chip, but I think we would have heard about it by now if it existed.
 
Any expectations from those more fluent with Ryzen as to which memory speed/timings one would aim for "optimally" with the upcoming Ryzen chips? The primary concern will be games, and hopefully lasting 3-5 years. I'm told DDR4-3200 (no timings given) was the sweet spot for the 2700X.

All of my non gaming compute loads (compiling, rendering, etc) will be done on a TR4 machine, so I'm strictly talking games.

I'm just going over ballpark pricing so I can get the cash together while we're still over a month out, thus avoiding any pans hitting me in the head from certain feline cohabitants.


As for inter-CCX communication, maybe there will be some sort of "L4" buffer on the IMC to help reduce the number of misses going through all the cores? Just thinking out loud, I don't know this architecture.
Well, I got the core speeds bang on in early predictions from quite far out (simple: just look at the TSMC speed predictions, they were right on the money), so I'll take a crack at RAM for you.

Considering Zen+ runs with no problems on default settings at 3200, and Zen 2 is showing BIOS support for over 4000MHz, I would guess something in the 3600 range as an 'upper base RAM speed', with 4000 pushing it. The 4400 or whatever was shown recently I doubt, without serious tweaking, memory-controller-killing voltage, or lax timings. The other thing to realise is that AMD will build the platform for common RAM speeds, not exotic ones, and 3600 is widely available and cheap.

There is no info on an L4 or similar on the chipset. It is huge though; the biggest thing to consider for now is that the L3 cache is doubled.
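To put rough numbers on the "memory on a divider" idea from upthread: a sketch of how a fabric-clock ceiling could make a faster kit slower in practice. The ceiling and the divider penalty below are invented placeholders, not anything AMD has confirmed:

```python
# Hypothetical model: fabric clock tracks memory 1:1 up to some ceiling,
# then falls back to a 2:1 divider with an extra round-trip cost.
# FABRIC_CEILING and DIVIDER_PENALTY_NS are made-up numbers for illustration.

def cas_ns(cl, mt_per_s):
    return cl * 2000.0 / mt_per_s

FABRIC_CEILING = 3600      # assumed 1:1 limit in MT/s (guess)
DIVIDER_PENALTY_NS = 8.0   # assumed extra latency once the divider kicks in (guess)

def effective_ns(cl, rate):
    penalty = DIVIDER_PENALTY_NS if rate > FABRIC_CEILING else 0.0
    return cas_ns(cl, rate) + penalty

print(f"DDR4-3600 CL16: {effective_ns(16, 3600):.1f} ns")  # 8.9 ns, stays 1:1 in this model
print(f"DDR4-4000 CL18: {effective_ns(18, 4000):.1f} ns")  # 17.0 ns once the divider applies
```

If anything like that holds, it would explain why a common 3600 kit could end up the sweet spot over exotic 4000+ sticks.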
 
Well, I got the core speeds bang on in early predictions from quite far out (simple: just look at the TSMC speed predictions, they were right on the money), so I'll take a crack at RAM for you.

Considering Zen+ runs with no problems on default settings at 3200, and Zen 2 is showing BIOS support for over 4000MHz, I would guess something in the 3600 range as an 'upper base RAM speed', with 4000 pushing it. The 4400 or whatever was shown recently I doubt, without serious tweaking, memory-controller-killing voltage, or lax timings. The other thing to realise is that AMD will build the platform for common RAM speeds, not exotic ones, and 3600 is widely available and cheap.

There is no info on an L4 or similar on the chipset. It is huge though; the biggest thing to consider for now is that the L3 cache is doubled.
3200 is supposed to be the default on Zen 2, IIRC, straight from the horse's mouth. Higher will of course depend on the specific kit, and I expect unsupported memory to drop to JEDEC standard but still work (or else Imma be very disappoint).
 
3200 is supposed to be the default on Zen 2, IIRC, straight from the horse's mouth. Higher will of course depend on the specific kit, and I expect unsupported memory to drop to JEDEC standard but still work (or else Imma be very disappoint).
Thanks Nobu, I missed that they said 3200. But I would expect that to be the minimum, because Zen+ almost always gets there no problem if you get the right RAM sticks (a wide range of them now).

Keen to see what they can do on the new micron hotness.

Edit to add: Hynix CJR is doing damn well too
https://hardforum.com/threads/cavea...b-14-15-17-21-129-99.1973648/#post-1043987026
 
I'm mostly keen to see how Zen 2 performs with non-B-die memory, since Samsung is halting production of it soon if it hasn't already. Not being limited to B-die to get good timings would be a nice thing, saving everyone some $$$ on their builds.
 
I'm mostly keen to see how Zen 2 performs with non-B-die memory, since Samsung is halting production of it soon if it hasn't already. Not being limited to B-die to get good timings would be a nice thing, saving everyone some $$$ on their builds.
Micron E-die is supposed to do well with Ryzen; I read a Reddit post where a guy was clocking it to 3600. The stuff is cheap right now too.

 
Micron E-die is supposed to do well with Ryzen; I read a Reddit post where a guy was clocking it to 3600. The stuff is cheap right now too.

Sherloc09 on here is getting subtimings better than your average B-die on a dual-rank 32GB kit of Hynix (yes, Hynix) CJR. Enough said.
Just got 14-16-17-22 through IntelBurnTest this evening. Going for 14-15-17-22. I've personally never seen DDR4-3200 get a RAS Active (tRAS) of 22; heck, B-die is like upper 20s or low-to-mid 30s IIRC.

The sticks are Hynix C-die [CJR], dual rank, etc.

He got it to 32GB 3200 @ 14-15-17-21
[attached screenshot of the memory timings]

That's the real sauce right there folks - you all know what you are looking at. Sherloc is a trailblazer!
I have some of it in single rank here, 16GB on a 2600X, but it's a workstation so I'm not going to fuck around pushing volts much. But I will give it a tweak tomorrow just for you cats when I get the RMA'd mobo going again.
https://hardforum.com/threads/cavea...b-14-15-17-21-129-99.1973648/#post-1043987026
If 32GB works that well, I know what memory I will be running for Zen 2 as well... the cheap stuff lol

Edit to add: this reminds me of the old days. To get the best gaming and all-round performance, latency was king. You paid for latency. Back then, at one point, the hand-matched Corsair XMS 3200LL stuff was the shit, CL2 lmao. Funny how all the numbers and situations repeat in many ways; that's the lesson of the day.
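Funnily enough, the same first-word-latency math backs up the nostalgia (PC-3200 DDR was DDR-400; the two kits below are just the examples from this post):

```python
# First-word latency across generations: CL cycles / (data rate / 2).
def cas_ns(cl, mt_per_s):
    return cl * 2000.0 / mt_per_s

print(f"DDR-400 CL2 (XMS 3200LL era): {cas_ns(2, 400):.2f} ns")    # 10.00 ns
print(f"DDR4-3200 CL14:               {cas_ns(14, 3200):.2f} ns")  # 8.75 ns
```

Absolute latency has barely moved in fifteen years; bandwidth is what exploded. Hence latency still being king for gaming.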
 
'Member when Intel said 10GHz... I 'member... maybe incorrectly.

Yep, they thought NetBurst would scale up to 10GHz... whoopsie daisies!

Sherloc09 on here is getting subtimings better than your average B-die on a dual-rank 32GB kit of Hynix (yes, Hynix) CJR. Enough said.


He got it to 32GB 3200 @ 14-15-17-21

That's the real sauce right there folks - you all know what you are looking at. Sherloc is a trailblazer!
I have some of it in single rank here, 16GB on a 2600X, but it's a workstation so I'm not going to [bleep] around pushing volts much. But I will give it a tweak tomorrow just for you cats when I get the RMA'd mobo going again.
https://hardforum.com/threads/cavea...b-14-15-17-21-129-99.1973648/#post-1043987026
If 32GB works that well, I know what memory I will be running for Zen 2 as well... the cheap stuff lol

Edit to add: this reminds me of the old days. To get the best gaming and all-round performance, latency was king. You paid for latency. Back then, at one point, the hand-matched Corsair XMS 3200LL stuff was the shit, CL2 lmao. Funny how all the numbers and situations repeat in many ways; that's the lesson of the day.

Interesting. I'd love it if a few more folks could confirm he isn't just playing with top-3% silicon-lottery luck. B-die kits are pretty much out of the picture now as far as I can tell, and I need a 32GB (16GB x2) kit for my upcoming Ryzen 3000 build. I'm completely unsure of what to buy this time around. A 3600 kit of Sammy B-die at ~C17 would have suited me just fine, but I can't find 'em for sane prices.
 
Why is everyone so obsessed with 5GHz? Why is frequency always the determining factor? It's as if AMD should have taken an IPC reduction just to get the clocks higher, and then people would say it's not fast enough. Let's see what the chip does before we come to any conclusions. Looks like the 12-core will need that chipset fan on the X570, not so much the 8-core models and below. I'll likely go for a 3700X and call it a day.

Yeah, the 5GHz fawning is pretty much just because it's a nice even step in speed. Within the CPU realm it's been sort of the unattainable "next step" for nearly a decade. The P4's NetBurst architecture ramped us up in raw clock frequency so rapidly that in 2001 there was serious talk about 10GHz+ processors arriving on the market by 2011.

https://www.geek.com/chips/intel-predicts-10ghz-chips-by-2011-564808/

AMD's FX-9590 Black Edition 5GHz CPU stunt in 2013 was basically Icarus flying too close to the sun in terms of pushing the limits of their architecture (and TDP), and both companies wisely refocused on IPC and multi-core.

5GHz is a great number, but it's only one factor of many in a chip's design and in what determines the performance of a particular processor. People who want 5GHz just for the sake of 5GHz are kind of silly... so...uh... don't look at my sig.
 
Yeah, the 5GHz fawning is pretty much just because it's a nice even step in speed. Within the CPU realm it's been sort of the unattainable "next step" for nearly a decade. The P4's NetBurst architecture ramped us up in raw clock frequency so rapidly that in 2001 there was serious talk about 10GHz+ processors arriving on the market by 2011.

https://www.geek.com/chips/intel-predicts-10ghz-chips-by-2011-564808/

AMD's FX-9590 Black Edition 5GHz CPU stunt in 2013 was basically Icarus flying too close to the sun in terms of pushing the limits of their architecture (and TDP), and both companies wisely refocused on IPC and multi-core.

5GHz is a great number, but it's only one factor of many in a chip's design and in what determines the performance of a particular processor. People who want 5GHz just for the sake of 5GHz are kind of silly... so...uh... don't look at my sig.
Yeah, I always wanted 5GHz with my 3970X, but I couldn't get anything beyond 4.7GHz stable, and even that wasn't worth the heat and voltage required. I just daily-drive it at 4.4GHz. I seriously hope the rumors and leaks about Zen 2 are true. I'll be scooping one up in a heartbeat if so.
 
I guess I don't get the point. With a PBO-enabled 2700X and a single 2080 Ti I can push 60+ FPS in most modern titles @ 4K. On the gaming front, CPUs are already fast enough. For other purposes, like video editing/rendering/etc., more threads are generally going to work better, and that is what AMD is offering at a fairly low price point of entry.
 
5GHz is a great number, but it's only one factor of many in a chip's design and in what determines the performance of a particular processor. People who want 5GHz just for the sake of 5GHz are kind of silly... so...uh... don't look at my sig.

If my 5820K could do 5GHz I'd be running it. Sadly, I'm lucky if it runs stable at 4.3. I really lost the silicon lottery with this chip.
 
I guess I don't get the point. With a PBO-enabled 2700X and a single 2080 Ti I can push 60+ FPS in most modern titles @ 4K. On the gaming front, CPUs are already fast enough. For other purposes, like video editing/rendering/etc., more threads are generally going to work better, and that is what AMD is offering at a fairly low price point of entry.

The real demand now is in high-refresh monitors. I just sold off my 4K monitor because I was too used to my 144Hz 1080p panel and how smooth it was; I just can't stand 60Hz anymore. So I'm "upgrading" to a 144Hz 1440p panel instead. The jump from 60 FPS to 100-144 is massive in terms of CPU requirements; single-core performance becomes a serious limitation.

You're mostly right though; I was just watching a video showing that even a 1950X Threadripper will push quite a few AAA titles right over 100 FPS. I would REALLY like to own a 16-core TR4 setup, but ...muh frames... :p
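Just to quantify that jump in frame-time terms (simple arithmetic, no assumptions about any particular game):

```python
# Frame-time budget the CPU (and GPU) have to hit at each refresh target.
for fps in (60, 100, 144):
    print(f"{fps} FPS -> {1000.0 / fps:.1f} ms per frame")
# 60 FPS -> 16.7 ms, 100 FPS -> 10.0 ms, 144 FPS -> 6.9 ms
```

Going from 60Hz to 144Hz cuts the per-frame budget by more than half, which is exactly where single-core grunt starts to matter.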
 
If my 5820K could do 5GHz I'd be running it. Sadly, I'm lucky if it runs stable at 4.3. I really lost the silicon lottery with this chip.

Yeah, this CPU has done me well! 4.3 isn't bad on Haswell-E, though - still a solid CPU. Does it just get too hot at 4.4/4.5, or is it a clock/voltage wall?
 
Yeah, I always wanted 5GHz with my 3970X, but I couldn't get anything beyond 4.7GHz stable, and even that wasn't worth the heat and voltage required. I just daily-drive it at 4.4GHz. I seriously hope the rumors and leaks about Zen 2 are true. I'll be scooping one up in a heartbeat if so.

I'm waiting for independent benchmarks before deciding whether to pull the trigger, but I think it's going to be a pretty solid set of processors. I was really hoping to jump to a 16-core part, and may still hold off for that. What I expect is that we'll see relative parity with a stock i9-9900K from the 3800X, but I have a feeling the 3900X is going to be down slightly vs the 9900K in single/low-thread work and make it up with all-thread performance. It's going to get messy comparing the 9900KS or an otherwise overclocked all-core 5/5.1GHz 9900K vs an overclocked 3800X, though. I think the 9900K will continue to pull ahead in single/low thread counts when both are overclocked, with the 3900X again taking a healthy lead in all-thread loads and the 3800X essentially pulling even in all-thread. The wild card is memory latency, and whether there have been enough improvements in that area on the third-gen Ryzen chips.

Exciting stuff. Intel's getting a talking to and it's delightful. Competition is good.
 
Yeah, this CPU has done me well! 4.3 isn't bad on Haswell-E, though - still a solid CPU. Does it just get too hot at 4.4/4.5, or is it a clock/voltage wall?

Either a clock/voltage wall or (somewhat unlikely) a VRM limitation. I used to have it in a custom loop, by itself, with a 280mm rad. Even with the AIO it stays cool, it just won't stay stable at higher clocks.
 
The real demand now is in high-refresh monitors. I just sold off my 4K monitor because I was too used to my 144Hz 1080p panel and how smooth it was; I just can't stand 60Hz anymore. So I'm "upgrading" to a 144Hz 1440p panel instead. The jump from 60 FPS to 100-144 is massive in terms of CPU requirements; single-core performance becomes a serious limitation.

You're mostly right though; I was just watching a video showing that even a 1950X Threadripper will push quite a few AAA titles right over 100 FPS. I would REALLY like to own a 16-core TR4 setup, but ...muh frames... :p

My 2700X is pushing 100-120 FPS @ 4K just fine in the games that my 2080 Ti can keep up in. I've got an X27, so my target is 4K@120Hz.
 
My 2700X is pushing 100-120 FPS @ 4K just fine in the games that my 2080 Ti can keep up in. I've got an X27, so my target is 4K@120Hz.

Nice! Oh baby that is one sexy monitor.. One day in the distant future when those specs are available at 32" for $500 I'll catch up to you :p
 
Slightly disappointed. All this time we were led to believe a 16-core monster was going to release at $500. Instead it's the 12-core part, and even worse, there is no 16-core option right now. I still plan to go for the 3900X on X570, with the hope that we may see a 3950X or 4900X down the line as a potential upgrade path.
 
Slightly disappointed. All this time we were led to believe a 16-core monster was going to release at $500. Instead it's the 12-core part, and even worse, there is no 16-core option right now. I still plan to go for the 3900X on X570, with the hope that we may see a 3950X or 4900X down the line as a potential upgrade path.

"Lead to believe" by a bunch of rumors. This is why people should NEVER base their hype and desires off rumors. Even the most "credible" looking "leaks" can turn out to be false. Rumors like we saw leading up to the announcement should be treated as nothing more than someone's wishful thinking. The same way you'd treat some random stranger on a forum saying the same thing.
 
Yep, they thought NetBurst would scale up to 10GHz... whoopsie daisies!



Interesting. I'd love it if a few more folks could confirm he isn't just playing with top-3% silicon-lottery luck. B-die kits are pretty much out of the picture now as far as I can tell, and I need a 32GB (16GB x2) kit for my upcoming Ryzen 3000 build. I'm completely unsure of what to buy this time around. A 3600 kit of Sammy B-die at ~C17 would have suited me just fine, but I can't find 'em for sane prices.

I'm in the same boat. We probably can't and won't know for sure until the chips are out and can be tested. Maybe reviews will be out the day of preorder, like with the Radeon VII, and we can get some decent info early.
 
I'm in the same boat. We probably can't and won't know for sure until the chips are out and can be tested. Maybe reviews will be out the day of preorder, like with the Radeon VII, and we can get some decent info early.

Let's hope they found a fix for higher-clock-speed Hynix chips; maybe then the increase in options will drive down the price of the Samsung B-die stuff, because right now it just feels like they're riding the "you have no other option but to buy our shit if you want more than DDR4-3200 on Ryzen" card.
 
"Lead to believe" by a bunch of rumors. This is why people should NEVER base their hype and desires off rumors. Even the most "credible" looking "leaks" can turn out to be false. Rumors like we saw leading up to the announcement should be treated as nothing more than someone's wishful thinking. The same way you'd treat some random stranger on a forum saying the same thing.

It's just typical AMD. I get that rumors can be false, but they have to start from somewhere. Usually when it's Intel or Nvidia, the leaks turn out to be fairly accurate for the hardware we get (pricing being a different story). But with AMD we're either getting the Ti killer for half the price that never arrives, or some non-existent CPU. Like I said, I'm still planning on the 3900X since that will be the best all-purpose chip out there, but I'm just getting a little tired of how off some of the leaks always seem to be from the AMD side. I'm not sure if it starts with AMD themselves or just an internet full of hopium.
 
It's just typical AMD. I get that rumors can be false, but they have to start from somewhere. Usually when it's Intel or Nvidia, we have a good idea of what's coming and the leaks turn out to be fairly accurate for the hardware we get, with pricing expected to be high. With AMD we're either getting the Ti killer for half the price that never arrives, or some non-existent CPU. Like I said, I'm still planning on the 3900X since that will be the best all-purpose chip out, but I'm just getting a little tired of how off some of the leaks always seem to be from the AMD side.

How are a bunch of false rumors AMD's fault? The fact that fake rumors are so prevalent with AMD stuff is even more reason to never trust them.
 