Paper Launch for AMD Ryzen 9 3900X

fanboy whichever company you want.

When Intel drops its die size, it's still going to have more or less the same TDP envelopes that AMD CPUs have, depending on how they decide to market their processors. Producing 110 watts at half the size of their current chips will yield the same issue we're seeing with Zen 2: keeping the cores below thermal throttling temps. Unless they chunk their cores into even smaller groups and spread them around on a much larger area without performance losses due to the distance apart, which is an unlikely design choice.

Going to 5nm with 110-watt CPUs doesn't appear to be possible without a novel approach to moving heat out of those dies. Obviously, this is assuming we'll want ever-faster CPUs instead of just ever more efficient ones at current-day performance levels.
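
To put rough numbers on the heat density worry, here's a quick back-of-envelope sketch in Python (the die areas are made-up round figures for illustration, not actual Intel specs):

```python
# Back-of-envelope power density: same TDP on a shrunk die.
# All numbers are hypothetical round figures, not real chip specs.

def power_density(watts, die_area_mm2):
    """Watts dissipated per square millimeter of die."""
    return watts / die_area_mm2

tdp_watts = 110.0               # the ~110 W envelope discussed above
die_7nm_mm2 = 150.0             # assumed die area on the current node
die_5nm_mm2 = die_7nm_mm2 / 2   # "half the size of their current chips"

print(f"7nm-class: {power_density(tdp_watts, die_7nm_mm2):.2f} W/mm^2")
print(f"5nm-class: {power_density(tdp_watts, die_5nm_mm2):.2f} W/mm^2")
# Same 110 W in half the area doubles the heat flux the cooler has to
# pull out of the die, which is the thermal throttling problem above.
```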

Better still: don't fanboy at all.

I post a lot of shit in the AMD forums these days. Because I have an AMD CPU, and because that's where all the really wild CPU development is happening right now.

But believe me, back in 2011, I was all in on the Sandy Bridge hype. Because it was badass at the time, and it's still pretty decent today.

Sooner or later we'll see some really interesting moves on the Intel side again. Enjoy it! More good hardware for all of us.
 

Ah yeah, a throughput test. That's... not at all representative of user workloads, as nice as it is to have. I believe Anandtech has gone into detail with respect to task energy in their server reviews.

it's still going to have more or less the same TDP envelopes that AMD CPUs have

Which is because those are the TDP envelopes that OEMs have standardized around.

Producing 110 watts at half the size of their current chips will yield the same issue we're seeing with Zen 2: keeping the cores below thermal throttling temps.

Depending on how Intel addresses heat conduction.

Unless they chunk their cores into even smaller groups and spread them around on a much larger area without performance losses due to the distance apart.

Intel doesn't have AMD's fab limitations.

Going to 5nm with 110-watt CPUs doesn't appear to be possible without a novel approach to moving heat out of those dies.

This is an extreme overstatement of the problem. It's a problem, but addressing it is more a function of will than technical capability.

Obviously, this is assuming we'll want ever-faster CPUs instead of just ever more efficient ones at current-day performance levels.

Of course manufacturers are pushing for more efficiency. That's also going to be pushed harder by the software side too, from the compilers on up.
 
Ah yeah, a throughput test. That's... not at all representative of user workloads, as nice as it is to have. I believe Anandtech has gone into detail with respect to task energy in their server reviews.

It is useful for determining the max load efficiency of the uarch, as you need to really push the CPU to determine this, and a throughput workload is the best way to do that.

Anandtech's test doesn't go into that kind of detail; i.e., they give you overall efficiency in various workloads, but they don't do it by underclocking/undervolting to discover efficiency sweet spots, as The Stilt did. Nonetheless, Anandtech concluded that Zen 2 (stock) is more efficient than CFL (stock) overall, as well:

https://www.anandtech.com/show/14605/the-and-ryzen-3700x-3900x-review-raising-the-bar/19

Unfortunately, their server review doesn't give us good performance-per-watt figures measured at the wall.

There are pointed exceptions to this rule:

1. The 3900X performs worse in games and uses similar-to-more power, depending on conditions. So for gaming, this CPU is not very efficient. However, the 3700X games almost as well as the 3900X and uses significantly less power, and is thus similar-to-more efficient (albeit slower), even in gaming, against the 9900K.
2. Idle efficiency is unclear. I've read reports that indicate Intel has better idle efficiency. But then der8auer did a whole song and dance about how Zen 2 is extremely efficient due to fast voltage drops. So on that metric, pick your poison.

Overall efficiency favors Zen 2. This is almost completely due to TSMC's process advantage over Intel 14nm. 10nm Icelake is the actual efficiency king in x86 at the moment, but no desktop versions exist yet, sooo.... *shrug*
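
As an aside, for anyone unclear on "task energy" versus what a throughput test measures, here's a toy sketch of the two metrics (all numbers invented for illustration):

```python
# Two ways to measure CPU efficiency, with made-up example numbers.
# Task energy: joules to finish a fixed bursty job (closer to user
# workloads). Throughput efficiency: work per watt under sustained
# full load (what a throughput test measures).

def task_energy_joules(avg_watts, seconds_to_finish):
    return avg_watts * seconds_to_finish

def throughput_per_watt(benchmark_score, package_watts):
    return benchmark_score / package_watts

# Hypothetical chip A: faster but hungrier; chip B: slower but leaner.
print(f"Chip A task energy: {task_energy_joules(120, 50):.0f} J")   # 6000 J
print(f"Chip B task energy: {task_energy_joules(90, 70):.0f} J")    # 6300 J
print(f"Chip A throughput eff: {throughput_per_watt(1000, 120):.2f} score/W")
print(f"Chip B throughput eff: {throughput_per_watt(800, 90):.2f} score/W")
# B wins the sustained throughput metric while A wins the bursty task,
# because the workloads differ -- which is why a throughput test alone
# isn't representative of user workloads.
```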
 
How Intel addresses heat conduction is the entire point of my comments.

You're making it out to be something that relies on arch. It doesn't; it's completely independent. Of course, if they've fabbed up some nanoscale cooling solution within the die, then I guess that would be considered part of the arch. But that's exactly the kind of novel cooling solution I was talking about being needed. Most likely, though, it will be something added on top of the CPU arch, and more specifically, physically on top of the die, added after fabrication.

As for the last comment about efficiency, you miss the point.

When you drop a die size, you have a choice: use the benefits of the more efficient circuit to drive more performance, yielding the same heat output as previous-generation CPUs, or drive the CPUs at the same performance as last-gen CPUs while using far less power to do so.

You can sometimes get a bit of both when you actually improve the architecture, but in terms of pure die size improvements, those are your choices. Without a better way to cool 5nm than we have for 7nm, and with 7nm already seeming to hit a wall at around 120 watts for the die density we see with Zen 2, we're not going to be able to push the same wattage on 5nm chips that we do on 7nm. That means sticking to roughly the same performance as today while using less power is the only step forward.

Carbon nanotubes can move heat 6x better than copper, at least in the few experimental tests that have been done on the subject. Something along those lines may be plenty to get us back to a heat density that conventional coolers can handle. Maybe they'll go that route. Maybe they'll replace silicon with something that produces less heat in general.
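
The tradeoff I'm describing falls out of the usual first-order dynamic power relation, P ≈ C·V²·f. A toy sketch with assumed scaling factors (illustrative only, not real process data):

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f.
# The scaling factors below are assumptions for illustration, not
# TSMC/Intel data, and real frequency doesn't scale this cleanly.

def dynamic_power(cap_rel, volts, freq_ghz):
    return cap_rel * volts**2 * freq_ghz

base = dynamic_power(cap_rel=1.0, volts=1.3, freq_ghz=4.0)

# Option 1: spend the shrink on speed -> same power, more performance.
# Assume the new node cuts switching capacitance by ~30%.
fast = dynamic_power(cap_rel=0.7, volts=1.3, freq_ghz=4.0 / 0.7)

# Option 2: spend it on efficiency -> same clocks, lower voltage & power.
lean = dynamic_power(cap_rel=0.7, volts=1.15, freq_ghz=4.0)

print(f"old node:                   {base:.2f} (relative power)")
print(f"new node, chase clocks:     {fast:.2f} (same heat, smaller area!)")
print(f"new node, chase efficiency: {lean:.2f}")
# Option 1 keeps the same total heat but concentrates it in less area;
# option 2 cuts power ~45% at today's performance. Pick one.
```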
 
How Intel addresses heat conduction is the entire point of my comments.

You're making it out to be something that relies on arch. It doesn't; it's completely independent. Of course, if they've fabbed up some nanoscale cooling solution within the die, then I guess that would be considered part of the arch. But that's exactly the kind of novel cooling solution I was talking about being needed. Most likely, though, it will be something added on top of the CPU arch, and more specifically, physically on top of the die, added after fabrication.

As for the last comment about efficiency, you miss the point.

When you drop a die size, you have a choice: use the benefits of the more efficient circuit to drive more performance, yielding the same heat output as previous-generation CPUs, or drive the CPUs at the same performance as last-gen CPUs while using far less power to do so.

You can sometimes get a bit of both when you actually improve the architecture, but in terms of pure die size improvements, those are your choices. Without a better way to cool 5nm than we have for 7nm, and with 7nm already seeming to hit a wall at around 120 watts for the die density we see with Zen 2, we're not going to be able to push the same wattage on 5nm chips that we do on 7nm. That means sticking to roughly the same performance as today while using less power is the only step forward.

Carbon nanotubes can move heat 6x better than copper, at least in the few experimental tests that have been done on the subject. Something along those lines may be plenty to get us back to a heat density that conventional coolers can handle. Maybe they'll go that route. Maybe they'll replace silicon with something that produces less heat in general.

It's tied to the uarch because AMD decided to do a non-monolithic setup. By separating the design into tiny chiplets instead of packing it all together, AMD incurred a heat dissipation cost, one that may even be exacerbated by the fact that the chiplets aren't centered on the heatspreader.

I understand why AMD did it, and fully agree with their reasoning - and think it was a clever solution - but the solution still imposes a cost/tradeoff. No getting around that.
 
It seems unlikely that a slightly offset design significantly weakens the efficiency of the heatsink on top; they could just design the heatsink to be offset along with it to solve the problem.

They are using something called compound semiconductors, which combine gallium alloys with silicon to create the wafer in things like 5G modems and such. It would be interesting to see if that can scale up to complex CPU designs, because it's apparently many times more efficient than pure silicon at doing the job of a semiconductor. That would drop waste heat tremendously. It wouldn't necessarily solve how to get heat out of such a small area, but there'd be far less heat to remove, giving us much more headroom to boost performance without hitting whatever wattage limit we have at any given density.

Yeah, it does.

It really doesn't. It is simply a heat density problem I was pointing out, one that will exist with both manufacturers in exactly the same way, even with AMD doing a chiplet design and separating out the IO from the CPU cores, compared to Intel mixing it all together in a monolithic design. The transistor density is the cause of the problem, and it should be roughly the same once they're on similar processes.

We'll either see a new tech used to move heat faster than current solutions, or we'll see a new compound used to replace silicon, because I doubt we'll be satisfied with just making CPUs use less power... or adding more and more cores (at least for desktop use).
 
I 'fanboy' performance, and I have no qualms about calling fanboys on their BS.

"AMD will destroy Intel this time!!!1". Yeah, we've heard it. Both ways.

It takes me seconds to deconstruct those arguments :D

Strong brand loyalty has always puzzled me. These are just corporations. They deserve no loyalty they haven't earned.

Also, I am confused by people who want to make either product family out to be more than it is. Both have their strengths/weaknesses. In fact, overall, things are more equalized than I can remember in a long time.

I think for general mixed-use and value for the dollar, Zen 2 is a bit better.
For gaming, CFL is a bit better.
AMD gives you a few extra baubles like PCI Express 4.0.
Intel gives you a proven, more stable/mature platform.

Can't really go wrong either way, IMHO.
 
Can't really go wrong either way, IMHO.

Biggest concern I've had has really been the setup shuffle. People (in general and ones I knew) went through hell getting Zen 1 to work with available memory. Zen+ flew with the right memory and other stars aligning, and Zen 2 has been... cleaner.

None have approached 'plug in, turn on, go'.

That's fine for most of us. I'd have bought a 2700X if I were in the market at the time, and once that platform stabilized, I really haven't recommended anything else. But what about your average builder who just doesn't know what they don't know, or even where to look for help?

Unless you're spending up for X570, I'd recommend holding off today. And that doesn't mean 'buy something else' (unless you have to); it means wait for the platform to stabilize.
 
It seems unlikely that a slightly offset design significantly weakens the efficiency of the heatsink on top; they could just design the heatsink to be offset along with it to solve the problem.

I have no idea what you're talking about.
 
I think the limitations of frequency are also very dependent on TSMC's process.

It wouldn't surprise me. But I also suspect Zen's basic design probably contributes at some level, too. But I'm an armchair enthusiast. I'm not an engineer, and I don't work for these companies. So it's hard to say.

One thing that stands out to me, though, is the high boost clocks versus the seemingly inconsistent nature of the all-core OC potential. There's a large spread here, and the spread was much tighter with Zen+: all-core max was ~4.2, boost was 4.35. A 150 MHz spread.

3900X... 4.2 is still the typical max OC, but boost is 4.6. A 400 MHz spread.

Part of this is due to the 3900X's nature with one golden chiplet, and one "shitlet" (I'm keeping this term). The shitlet holds back all-core OCs severely. Though I've seen folks on reddit using an OC tool that allows individual CCX and CCD overclocking, and results on the golden sample are much higher.

Anyway, I'm rambling. The POINT is that there's a huge variance in quality even sometimes on the same silicon, but definitely between different chiplets. This seems to suggest an immature process producing inconsistent results.
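
Just to make the spread comparison explicit with the numbers above:

```python
# Boost-vs-all-core spread, using the rough figures quoted above.
zen_plus = {"boost_ghz": 4.35, "all_core_oc_ghz": 4.2}
zen2_3900x = {"boost_ghz": 4.6, "all_core_oc_ghz": 4.2}

for name, chip in [("Zen+", zen_plus), ("3900X", zen2_3900x)]:
    spread_mhz = (chip["boost_ghz"] - chip["all_core_oc_ghz"]) * 1000
    print(f"{name}: {spread_mhz:.0f} MHz between peak boost and all-core OC")
# 150 MHz vs 400 MHz: the wider gap hints at bigger quality variance
# between chiplets on the newer process.
```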
 
Biggest concern I've had has really been the setup shuffle. People (in general and ones I knew) went through hell getting Zen 1 to work with available memory. Zen+ flew with the right memory and other stars aligning, and Zen 2 has been... cleaner.

None have approached 'plug in, turn on, go'.

That's fine for most of us. I'd have bought a 2700X if I were in the market at the time, and once that platform stabilized, I really haven't recommended anything else. But what about your average builder who just doesn't know what they don't know, or even where to look for help?

Unless you're spending up for X570, I'd recommend holding off today. And that doesn't mean 'buy something else' (unless you have to); it means wait for the platform to stabilize.

Yeah, I agree. And I can attest to memory problems with Zen 1. Stock it was no problem, but try to get the rated OC out of your RAM... it was like pulling teeth. Zen+ was much better. I can't report on Zen 2 yet (STILL can't snag a 3900X... damnit), but reports are favorable. In fact, this launch would have been great if not for the last minute BIOS mixup re: max boost clocks, and the RDRAND bug. Sigh. It's AMD, so there's always going to be something at launch.

You pay less for equivalent performance from AMD, but don't fool yourself, there's a tradeoff here too. It's usually a tradeoff worth making if you know what you're doing, but it exists, and ought to be accounted for.
 
You pay less for equivalent performance from AMD, but don't fool yourself, there's a tradeoff here too. It's usually a tradeoff worth making if you know what you're doing, but it exists, and ought to be accounted for.

Is that tradeoff fewer architectural design shortcuts that yield new exploits every other month? ;) If only we'd hold poor design that impacts decades' worth of chips as accountable as we seem willing to hold issues that impact brand-new ones and are resolved in a couple of months or less.
 
Is that tradeoff fewer architectural design shortcuts that yield new exploits every other month? ;) If only we'd hold poor design that impacts decades' worth of chips as accountable as we seem willing to hold issues that impact brand-new ones and are resolved in a couple of months or less.

Well, Intel was innovating and improving performance during that decade that AMD was... irrelevant in the CPU space.

Improving performance without a regular succession of die shrinks is going to incur risk, and some are likely to be exploitable.

Welcome to the Information Age!
 
Yeah, I agree. And I can attest to memory problems with Zen 1. Stock it was no problem, but try to get the rated OC out of your RAM... it was like pulling teeth. Zen+ was much better. I can't report on Zen 2 yet (STILL can't snag a 3900X... damnit), but reports are favorable. In fact, this launch would have been great if not for the last minute BIOS mixup re: max boost clocks, and the RDRAND bug. Sigh. It's AMD, so there's always going to be something at launch.

You pay less for equivalent performance from AMD, but don't fool yourself, there's a tradeoff here too. It's usually a tradeoff worth making if you know what you're doing, but it exists, and ought to be accounted for.

Amazon has 3900X up for order.
 
Well, Intel was innovating and improving performance during that decade that AMD was... irrelevant in the CPU space.

Improving performance without a regular succession of die shrinks is going to incur risk, and some are likely to be exploitable.

Welcome to the Information Age!

Everything has security flaws. The longer Zen is around, the more flaws we'll be finding in it, probably. Intel hasn't done too bad. And frankly, the RDRAND bug is about on the same level in terms of screw up anyway.

Folks can spin either company out to be horrible/unreliable if they want to. The truth is that AMD is more prone to get burned from not thinking through all the edge cases because, frankly, they are always tight on money. And Intel is likely to get burned based on sheer volume and market share. More shit in the market (and for longer periods) means more time and motive to discover its flaws.

I'd buy from either company anyway. If I seem a little AMD-biased, well that's because I'm really excited about increasing core count. But if the shoe was on the other foot, I'd jump ship.
 
Keep checking. I ordered mine on the 15th from Amazon, and it's already on my desk.

Ordered mine on the 15th and it's out for delivery today. Can't wait to put it in my X370 board and see what it does.
 
Ordered mine on the 15th and it's out for delivery today. Can't wait to put it in my X370 board and see what it does.

Let me know how it goes for you. If it's the rig in your signature, then you have the same board I do. I'm curious how that works out for you.
 
Ingram Micro and SYNNEX are both telling me they hope to start shipping the 3900s shortly, but as of yet they haven't had many to send out at all.
 
Lastly, let's talk about binning. The Ryzen 9 3900X is not a pair of highly binned 3600 or 3600X chiplets. At least, we have no evidence that this is the case, since most of the cores in a Ryzen 9 3900X are incapable of a 4.4GHz or greater clock speed. Many people report 4.4GHz all-core overclocks on 3600 / 3600Xs, which isn't what we can get out of a 3900X; very few of them can achieve this.
People who disable the worst chiplet often can.
What it seems to be is two differently binned chiplets: one can usually do 4.3-4.4 on a decent chip and the other 4.0-4.2. Many single cores have been seen at 4.5-4.6 now that people have figured out how to set them up, which BIOS to use, etc.

Yes, The Stilt did this.

https://www.overclock.net/forum/10-amd-cpus/1728758-strictly-technical-matisse-not-really.html

Relevant graph:

[Attachment: The Stilt's efficiency vs. clock speed graph]

Note: he explains that on the left side of the graph (where the 9900K appears to be getting more efficient, albeit still less so than the 3900X and 3700X), he could not lower the voltage any further on the 3900X due to a motherboard limitation, so the efficiency gains flatline below 3 GHz. There are also no results for Zen 2 or the 9920X at 4.5 GHz, because neither could sustain an all-core OC at that level within any reasonable power envelope.
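
For the curious, here's a toy version of that sweet-spot sweep. The V/f curve below is fabricated for illustration, not The Stilt's data:

```python
# Toy efficiency sweep: performance scales ~linearly with frequency,
# dynamic power scales ~V^2 * f, and the required voltage climbs
# steeply near the top of the curve. Fabricated numbers, illustrative.

def required_voltage(freq_ghz):
    # Assumed V/f curve, not measured from any real chip.
    return 0.9 + 0.05 * freq_ghz + 0.02 * freq_ghz**2

for freq in [2.0, 2.5, 3.0, 3.5, 4.0, 4.3]:
    volts = required_voltage(freq)
    power = volts**2 * freq          # relative dynamic power
    perf_per_watt = freq / power     # relative efficiency
    print(f"{freq:.1f} GHz @ {volts:.3f} V -> {perf_per_watt:.3f} perf/W")
# Efficiency keeps improving as you downclock and undervolt, until
# board or silicon limits stop the voltage drop -- the flatline The
# Stilt hit below ~3 GHz on the 3900X.
```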
Thanks for posting that. It's very obvious when looking at server results, where AMD wins in practically every perf/power metric, usually by huge margins, sometimes almost double.

Intel doesn't have AMD's fab limitations.
That's why they had to outsource to TSMC? And had production shortages due to 10nm failing?
https://www.hexus.net/business/news...tel-will-outsource-14nm-chip-production-tsmc/
https://www.extremetech.com/computing/287445-intel-cpu-shortage-could-worsen-in-q2-2019-arm-amd

You also said AMD is less efficient earlier in the thread, which is blatantly incorrect; server results show large efficiency and massive price/perf gains for AMD over the best Intel can make right now. They beat Intel at every corner; even in AVX benchmarks that aren't optimized for Zen 2, they perform the same and use less power.

[Image: ServeTheHome GROMACS benchmark, AMD EPYC 7002, not Zen 2 optimized]


TDP does not equal power consumption. The Intel Xeon Platinum 8280 system was using 40% more power than the AMD EPYC 7742 system here. The Intel Xeon Scalable family is well known for pushing higher power consumption for AVX-512 heavy workloads.
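
To put numbers on the TDP-vs-consumption point, here's a quick sketch (the wall draws are hypothetical, anchored only to the "40% more power" observation above):

```python
# TDP is a cooling spec, not measured consumption. The wall figures
# below are assumptions for illustration, not STH's actual readings.

amd_wall_watts = 450.0                   # assumed EPYC 7742 system draw
intel_wall_watts = amd_wall_watts * 1.4  # "40% more power" at the wall

# If benchmark throughput is roughly equal (as in that GROMACS run):
score = 100.0
print(f"AMD:   {score / amd_wall_watts:.3f} score/W")
print(f"Intel: {score / intel_wall_watts:.3f} score/W")
# Equal work at 40% more power means ~29% worse perf/W, regardless of
# what either chip's TDP label says.
```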
 
People who disable the worst chiplet often can.
What it seems to be is two differently binned chiplets: one can usually do 4.3-4.4 on a decent chip and the other 4.0-4.2. Many single cores have been seen at 4.5-4.6 now that people have figured out how to set them up, which BIOS to use, etc.

Disabling the worst chiplet defeats the purpose of getting a 3900X in the first place. We already knew the rest. der8auer had a video talking about overclocking individual CCXs to 4.4GHz dating back to the first week or so after the Ryzen 3000 series was out. We also know that you get at least one to two cores that can reach maximum boost clocks.
 
Disabling the worst chiplet defeats the purpose of getting a 3900X in the first place. We already knew the rest. der8auer had a video talking about overclocking individual CCXs to 4.4GHz dating back to the first week or so after the Ryzen 3000 series was out. We also know that you get at least one to two cores that can reach maximum boost clocks.

However, the utility that allows clocking individual CCXs to different speeds is interesting. Your bad "shitlet's" CCXs can be clocked lower, and the golden chiplet higher, without disabling the shitlet (thus still gaining the benefit of the 3900X). Of course, it's arguable whether this is really worth anybody's time. But hey, we're all [H]ard, right?
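
Back-of-napkin on what that buys you (hypothetical clocks in the ranges people are reporting; for simplicity I'm treating each chiplet uniformly rather than per-CCX):

```python
# Rough aggregate-throughput estimate for mixed-clock OC on a 3900X.
# Clock values are hypothetical but in the ranges discussed above.

cores_per_chiplet = 6  # 3900X: two 6-core chiplets (CCDs)

all_core_oc_ghz = 4.1  # uniform OC, limited by the weaker "shitlet"
per_chiplet_ghz = {"golden_chiplet": 4.4, "shitlet": 4.1}

uniform = 2 * cores_per_chiplet * all_core_oc_ghz
mixed = sum(cores_per_chiplet * ghz for ghz in per_chiplet_ghz.values())

print(f"all-core OC: {uniform:.1f} core-GHz")
print(f"mixed OC:    {mixed:.1f} core-GHz "
      f"({(mixed / uniform - 1) * 100:.1f}% more, on paper)")
# ~3-4% in theory; whether that's worth the tuning time is the question.
```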
 
However, the utility that allows clocking individual CCXs to different speeds is interesting. Your bad "shitlet's" CCXs can be clocked lower, and the golden chiplet higher, without disabling the shitlet (thus still gaining the benefit of the 3900X). Of course, it's arguable whether this is really worth anybody's time. But hey, we're all [H]ard, right?

Indeed. I don't know if it's worth doing either. So far, 100MHz or so of clock speed by itself isn't really worth much on these chips. So my instincts tell me "probably not", but that doesn't mean there isn't a gain to be had somewhere. It's worth checking out.
 
You also said AMD is less efficient earlier in the thread, which is blatantly incorrect

Compare Ice Lake on the 10nm process you just criticized. Yes, AMD is less efficient, by a long shot, and yes, it does depend on the products being compared.
 
Yeah, no one cares about their low-clocked 10nm chip. Secondly, we have a forum if you want to talk mobile, which is all you want to bring up lately. https://hardforum.com/forums/mobile-computing.73/

It's relevant. AMD has the most efficient x86 uarch on desktop and *probably* server (generally speaking, there are specific workload exceptions). But mobile... Icelake is king. Despite low clocks, Icelake performs very well, the IPC increase compensating for the clockspeed loss, and probably then some.

It's too bad for Intel that 10nm isn't ready for desktop yet.
 
Sure. First, it's the only example we have of Intel's next-gen architecture and it's on their 10nm, and second, AMD doesn't have anything close to a competing product.

Sadly, I think AMD is perfectly capable of competing here, and it's confusing that they haven't bothered. A low-power chip with one 8-core CCD and a Navi GPU chiplet would be awesome for mobile. Still not quite as efficient as Icelake, but it would make up for that by offering high core counts on mobile and a powerful integrated GPU solution.

Why they haven't done this confuses the hell out of me. Maybe they are still developing something like that?
 
Their interest in it is probably related to how they've been consistently getting the shaft from laptop OEMs, be it from backroom deals, borderline illegal monopolistic influence, or just insufficient funds to foot the same support bill that Intel does in these agreements... It's hard to put your money in an area you know will fight you back when you can use those cores in an area you know you'll win much more easily, and then maybe carry that reputation over later to make inroads in the more difficult-to-penetrate mobile market.
 
Their interest in it is probably related to how they've been consistently getting the shaft from laptop OEMs, be it from backroom deals, borderline illegal monopolistic influence, or just insufficient funds to foot the same support bill that Intel does in these agreements... It's hard to put your money in an area you know will fight you back when you can use those cores in an area you know you'll win much more easily, and then maybe carry that reputation over later to make inroads in the more difficult-to-penetrate mobile market.

AMD is going to go where the real money is, and that is servers. I am pretty sure AMD will be content to let Intel worry about the laptop side.
 
AMD is going to go where the real money is, and that is servers. I am pretty sure AMD will be content to let Intel worry about the laptop side.

You might look into where the 'real money' is before excluding laptops from that classification.
 
The real money is in the server space. That's where the most growth is right now.

But AMD is gaining significant market share across all segments, so they're not looking bad in any market... even the very unfair mobile market, where Intel leverages its vastly greater wealth to provide incentive kickbacks to OEMs. This makes any kind of "let the best product win" capitalism a much harder battle for AMD to win, even when they do have products that are fit for a particular segment.

Pretty sure AMD is happy with where things are going in all the segments they create products for right now.

Edit: looking at just PC sales, notebooks are growing faster than desktops, but both are basically stagnant when combined. Notebook sales are just cannibalizing desktop sales.

In either case, AMD's market share is increasing in all segments. Factory-to-market is much faster on PC than it is in mobile, since mobile only comes through OEMs and needs a lengthy lead-up time and investment. AMD made a strategic decision to maximize their market presence with this new process node (something rare for a company with so much less money than Intel) while their competition is stuck on older tech. If they had leaned toward mobile, they would have lost fab capacity and potentially hurt their desktop and server launch by not being able to meet demand... a problem they're having to a slight degree as is.
 