New Zen 2 Leak

Maybe Adored was right, more or less, again: the part shown could be the $180 entry-level Ryzen 5 with a 65W TDP. If that is true, then it is very impressive to use half the power at a much lower frequency and still beat the competitor's elite part on power and performance with a chip that is a third of the price. That opens up exciting possibilities for what the 3850X on the leak sheet may be; if that is a 16C part with a 5.1GHz boost for 500 bucks, it will make the question of buying a competitor product irrelevant.

Anyway, time will tell, and it is quite exciting.
 
Meh, I don't know. AMD clearly has the room for another chiplet. We'll probably see 12 or 16 cores somewhere in the mid-to-high 4GHz range (maybe 5GHz single core turbo). But I've changed my mind on the specific leak. I think, now, that Adored was full of it and his 'source' probably just extrapolated the obvious with some fair-to-middling guesswork.

As for the demoed chip being a 65watt product... well, yeah, probably so. Cinebench shows Zen in its best light, really. So an effective tie between the 65watt AMD part and the 95 watt Intel part under favorable conditions for AMD... means the future 95 watt AMD part might actually be kinda-sorta competitive in single core workloads. Spitballing this with the new information we have, I think AMD just reached rough core-to-core parity with Intel...

...with the possible ace up the sleeve of delivering a second chiplet and more cores.

I'm sure Intel has a response in the wings, too.
 
I concur on clock speed; I don't really see these chips running high frequencies, maybe 4-4.5GHz on the highest end parts with the best silicon binning. The notion of "best light" is subjective. I use professional music software and Sony Vegas, both high-end products and both extremely parallel, and going from a 5GHz 4790K to a 2700 was notable in render times. My brother is a 3rd year cum laude electrical engineering student who has been selected for the Scuderia Ferrari intern program; they have supercomputers rendering multiple outcomes simultaneously, and a 5GHz processor with strong single thread performance means nothing when you are at the elite end of technological evolution such as Formula One motor racing. So again, if a 3.7GHz part (maybe with a 4.4GHz all-core turbo) is beating a 5GHz part (assuming MCE is on auto) while using half the power, that is pretty impressive in any language. Sure, in gaming they may offset some gains, but that itself is a very niche market, and where there are people playing those games, i.e. e-sports, the e-sports titles run well on potatoes anyway; those gamers are more likely to buy next gen APUs than 16 core processors.

So again it's subjective; it's not like Intel doesn't try to put their best foot forward, and in this case it could be that the best case was shown on a similar thread count, lower clocked, lower powered part. Cinebench is not even an arbitrary bench that shows bias; it is a very universal and linear bench that weighs up clockspeed and threads and quantifies output. Blender is similar, and when you go over to Gamers Nexus, who use extreme renders to stress test and put the CPUs under high loads, it is always AMD that shows more efficiency. When TR uses half the power the 7960X does to render the same work, that points more towards an efficient multi-scaling architecture over brute MHz, and that was 14nm vs 14nm++++++++++++.

As for Intel's response, that may come in 2020 when 10nm is maybe available, but they are at the limit of what 14nm can milk for clockspeed.
 
Across the entire Ryzen (excepting TR) line?

What do you think the underlined part means?

Our contacts at AMD also discussed the TDP range of the upcoming range of Matisse processors. Given AMD’s definition of TDP, relating to the cooling performance required of the CPU cooler, the range of TDPs for Matisse will be the same as current Ryzen 2000-series processors. This means we could see ‘E’ variants as low as 35W TDP, all the way up to the top ‘X’ processors at 105W, similar to the current Ryzen 7 2700X. We were told that the company expects the processors will fit within that range. This should be expected on some level, given the backwards compatibility with current AM4 motherboards on the market with a BIOS update.
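For context on "AMD's definition of TDP, relating to the cooling performance required of the CPU cooler": AMD has publicly described its desktop TDP as a cooler spec rather than a direct power-draw figure, roughly of the form TDP = (max case temperature - ambient temperature) / cooler thermal resistance. A small sketch of that kind of definition; the numbers below are round placeholders for illustration, not AMD's actual parameters for any SKU:

```python
# Sketch of a cooler-centric TDP definition, as AMD has described it publicly:
#   TDP (W) ~= (tCaseMax - tAmbient) / theta_ca
# where theta_ca is the required cooler thermal resistance in degC per watt.
# The values below are placeholders for illustration, not real SKU parameters.

def cooler_centric_tdp(t_case_max_c, t_ambient_c, theta_ca_c_per_w):
    return (t_case_max_c - t_ambient_c) / theta_ca_c_per_w

# Hypothetical example: 62C max case temp, 42C ambient, 0.2 C/W cooler requirement.
print(f"TDP ~= {cooler_centric_tdp(62.0, 42.0, 0.2):.0f} W")
```

Which is why "same TDP range as the 2000 series" is a statement about cooler requirements, not a promise about actual power draw.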
 
As for the demoed chip being a 65watt product... well, yeah, probably so.

The power consumption figures shared by AMD during the demo suggest it was a 65W chip. And an engineering sample with 8C, 65W, and clocks in a range that agrees with what we saw in the demo also exists: the 5D0108BBM8SH2_37.
 
I concur on clock speed; I don't really see these chips running high frequencies, maybe 4-4.5GHz on the highest end parts with the best silicon binning. The notion of "best light" is subjective. I use professional music software and Sony Vegas, both high-end products and both extremely parallel, and going from a 5GHz 4790K to a 2700 was notable in render times. My brother is a 3rd year cum laude electrical engineering student who has been selected for the Scuderia Ferrari intern program; they have supercomputers rendering multiple outcomes simultaneously, and a 5GHz processor with strong single thread performance means nothing when you are at the elite end of technological evolution such as Formula One motor racing.

That is because they are doing HPC workloads (such as fluid simulation) which are massively parallel and run better on lots of cores clocked low (as in GPUs), not because they are "at the elite end of technological evolution". 5GHz single-thread cores are still needed for other workloads.

So again, if a 3.7GHz part (maybe with a 4.4GHz all-core turbo) is beating a 5GHz part (assuming MCE is on auto) while using half the power, that is pretty impressive in any language. Sure, in gaming they may offset some gains, but that itself is a very niche market, and where there are people playing those games, i.e. e-sports, the e-sports titles run well on potatoes anyway; those gamers are more likely to buy next gen APUs than 16 core processors.

So again it's subjective; it's not like Intel doesn't try to put their best foot forward, and in this case it could be that the best case was shown on a similar thread count, lower clocked, lower powered part. Cinebench is not even an arbitrary bench that shows bias; it is a very universal and linear bench that weighs up clockspeed and threads and quantifies output. Blender is similar, and when you go over to Gamers Nexus, who use extreme renders to stress test and put the CPUs under high loads, it is always AMD that shows more efficiency. When TR uses half the power the 7960X does to render the same work, that points more towards an efficient multi-scaling architecture over brute MHz, and that was 14nm vs 14nm++++++++++++.

A 4.2-4.4GHz Zen 2 on par** with a 4.7-5GHz Coffee Lake in nT CB15 is not that impressive, because CB15 is a favorable case for AMD.

Cinebench R15 is some sort of a best case benchmark for AMD, that's why it's an outlier.
This.
The IPC difference is abnormally low (5.6% vs. 14.4% average) and the SMT yield is abnormally high (41.6% vs. 28.7% average).
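In case it's unclear how figures like that are derived, roughly speaking it's along these lines; the scores and clocks below are placeholders, not the actual CB15 numbers, and the averages quoted above were compiled across a larger benchmark set whose exact methodology may differ:

```python
# Rough sketch of how "SMT yield" and "IPC difference" figures are commonly computed.
# All scores and clocks below are placeholder values, not the actual CB15 results.

def smt_yield(score_smt_on, score_smt_off):
    """Multithreaded gain from enabling SMT/HT on the same chip at the same clocks."""
    return score_smt_on / score_smt_off - 1.0

def ipc_difference(score_a_1t, clock_a_ghz, score_b_1t, clock_b_ghz):
    """Single-thread score per GHz of chip A relative to chip B."""
    return (score_a_1t / clock_a_ghz) / (score_b_1t / clock_b_ghz) - 1.0

print(f"SMT yield:      {smt_yield(1400, 1000) * 100:.1f}%")              # placeholder scores
print(f"IPC difference: {ipc_difference(210, 5.0, 180, 4.5) * 100:.1f}%")  # placeholder scores
```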

Also, efficiency scales as ~1/f, so more cores clocked low is more efficient for throughput than fewer cores clocked high. That is why servers have lots of cores clocked at ~2GHz or lower. It doesn't have anything to do with "efficient multi scaling architecture", nor with nodes. So a TR using 'half' the power the 7960X does to render the same work shows nothing.
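To put rough numbers on that, here's a quick back-of-envelope sketch. It assumes throughput scales with cores × clock and that per-core dynamic power grows roughly with the cube of frequency once voltage has to rise with clock; the core counts and clocks are made-up illustrative configs, not real parts:

```python
# Back-of-envelope: throughput per watt for many slow cores vs few fast cores.
# Assumptions (illustrative only): throughput ~ cores * clock, and per-core
# dynamic power ~ V^2 * f with V roughly proportional to f, i.e. power ~ f^3.

def relative_core_power(freq_ghz, ref_ghz=2.0):
    """Power of one core relative to a core at ref_ghz, under the cubic model."""
    return (freq_ghz / ref_ghz) ** 3

def perf_per_watt(cores, freq_ghz):
    throughput = cores * freq_ghz                   # arbitrary units
    power = cores * relative_core_power(freq_ghz)   # arbitrary units
    return throughput / power

# Hypothetical configs: a server-style part vs a high-clocked desktop part.
for cores, freq in [(32, 2.0), (8, 5.0)]:
    print(f"{cores} cores @ {freq:.1f} GHz -> {perf_per_watt(cores, freq):.2f} relative perf/W")
```

Under those crude assumptions the low-clocked config wins on throughput per watt by a wide margin, which is the whole point about servers; it says nothing about single-thread latency, and the exact exponent depends on where you are on the V/f curve.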

NOTE:

** On par because AMD won on stage (2057 vs 2040) whereas Intel won in the pre-brief (2023 vs 2042).
 
Cinebench and Blender are not HPC loads, and they show AMD is able to produce, let's call it, parity while using less power, making it more efficient. So yes, a slower CPU (clockspeed-wise) matching a faster-clocked part, using less power and producing parity, is impressive. More so when one will cost 180USD against the other costing over 500USD.

On the other point, this was 8C/16T parity with the clockspeed advantage to Intel, so the idea of fewer threads clocked high or low is moot. So how slower cores beat or "reached" parity with a high end part running higher clocks, and did so with much lower power, is something that will be explained in due course by people that probably know more than you on the subject.
 
The power consumption figures shared by AMD during the demo suggest it was a 65W chip. And an engineering sample with 8C, 65W, and clocks in a range that agrees with what we saw in the demo also exists: the 5D0108BBM8SH2_37.

Yeah, makes sense. If the 65 watt part is similar to the 9900K's multithreaded Cinebench score, it is likely that its single core performance, and its threaded performance in many other apps (Cinebench favors Zen), is still lower. Which, in turn, suggests the possibility that the 95 watt version will achieve rough parity with the 9900K, converting the extra headroom into performance. Maybe.
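Rough sanity check on what that extra headroom might buy, assuming (big assumptions) that power scales roughly with the cube of clock and the demo chip was running somewhere around 4.0GHz all-core; both the model and that clock are guesses, not confirmed figures:

```python
# Rough sketch: extra all-core clock a 95W budget might allow over 65W,
# assuming power ~ f^3 (voltage rising with frequency). All inputs are guesses.

TDP_DEMO_W = 65.0       # speculated TDP of the demoed chip
TDP_TARGET_W = 95.0     # hypothetical higher-TDP retail SKU
DEMO_ALLCORE_GHZ = 4.0  # assumed all-core clock of the demo chip (not confirmed)

clock_ratio = (TDP_TARGET_W / TDP_DEMO_W) ** (1.0 / 3.0)
print(f"Clock headroom: ~{(clock_ratio - 1) * 100:.0f}% "
      f"(~{DEMO_ALLCORE_GHZ * clock_ratio:.1f} GHz all-core)")
```

So only low-double-digit percent more clock even in the best case, which fits "rough parity, maybe a bit more" rather than a big jump.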

If so, the question will be price point.
 
Price point is not a question, the top supposed SKU is $500 USD.
 
I think the pricing is spot on, and there really is no need to raise the prices to meet Intel pricing...

Ryzen entered the desktop CPU market with an 8 core / 16 thread high-end SKU for 500 bucks, the Ryzen 7 1800X...

At that time, Intel had the i7-7700K as their top end consumer (non-HEDT) desktop CPU, for around 350 bucks...

Now we have seen Intel pricing for their top end (non-HEDT) CPU climb to 500+ bucks...

Once Zen 2 is released to the wild, and we have 16c / 32t parts out there for the same 500 buck starting price that AMD has maintained for their top end (non-HEDT) desktop CPU all along, Intel will need to take some massive price cuts to compete on price alone...

And then when the lower cost Navi cards come out to give gamers who do not have the scratch to pay thru the nose for Nvidia offerings a solid GPU option; well, let's just say that 2019 is going to be a bad year for Intel & Nvidia...
 
Meh. I don't know that I particularly care anyway.

If I can drop a 12 or 16 core chip into this X370 board, I'll be happy. Not sure if that's in the cards, but it'd be nice.
 
Ryzen 9 3850X 16c/32t CPU, X570 ITX motherboard, & Radeon VII GPU is the high-end SFF all-AMD way to go...

At least until we get Zen 3 & Arcturus GPUs... ;^p
 
Nah. I like Zen... really do. Have a 2700X. But for GPUs, I stay Nvidia.

Mostly. Except when I was mining.
 
Ryzen 9 3850X 16c/32t CPU, X570 ITX motherboard, & Radeon VII GPU is the high-end SFF all-AMD way to go...

At least until we get Zen 3 & Arcturus GPUs... ;^p

Is ITX really the platform you expect a 135 watt CPU to thrive on? How is that going to work when you need space and cooling for the VRMs?
 
Ryzen 9 3850X 16c/32t CPU, X570 ITX motherboard, & Radeon VII GPU is the high-end SFF all-AMD way to go...

Is ITX really the platform you expect a 135 watt CPU to thrive on? How is that going to work when you need space and cooling for the VRMs?

I did specify SFF, meaning Small Form Factor, meaning a sub-20 liter chassis...

And if ASRock can make an X299 ITX motherboard, I am sure they can make an ITX motherboard for the R9 3850X CPU...

And while my above "high-end SFF all-AMD" build is a powerful build, I would be happy with a lesser build...

Ryzen 5 3600 CPU (probably what was demoed at CES)
B550 ITX motherboard
Radeon RX 3080 GPU

Or there is the forthcoming ASRock Deskmini A300 barebones unit, which is a great platform to make a nice SFF APU build; and a platform I am sure will be updated once the next gen of APUs rolls out...
 
Mmmmmm, yummy 64 core Threadripper (maybe dreaming), even though I won't buy one because they're probably gonna be 5k
 
Okay, someone deleted their comment regarding how an X570 ITX motherboard supporting the 135 watt TDP R9 3850X CPU would be a niche item...

Right when I was in the middle of quoting them...!

Yes, it would be a niche item, but then so is the ASRock X299 ITX, and that still sells...

SFF is all about making the chassis, well, small; but it is also about packing power into that diminutive envelope...

Hence the X299 ITX motherboard, so why not a proper Enthusiast AM4 / X570 ITX motherboard...?!?
 
The flipside to packing a ton of power into a SFF is removing the heat whilst also not requiring ear piercing levels of fan speed to do it. Can you do it? Sure. Should you? It really depends on why you need so much power in a SFF and if you can accept the compromises for doing so.
 
I would pay 500 for 12/24 configuration with 5Ghz single core and 4.6 all core turbo. Realistically, I probably only need 6/12 with high single core turbo.

Of the GPUs I had in the last couple of years, 1060, 1070, 1080, 1080 Ti, Fury, Vega 56 and Vega 64 (now that I typed this out I realized how many damn GPUs have gone through my system), I think the Vegas are more interesting and more fun to tweak. The GTXs were all plug and play in terms of OC. Vega OC is an art (e.g., higher clock could mean less performance), and it's more enjoyable to dial in the settings just right so it performs better than a 1080.
 
Cinebench and Blender are not HPC loads, and they show AMD is able to produce, let's call it, parity while using less power, making it more efficient. So yes, a slower CPU (clockspeed-wise) matching a faster-clocked part, using less power and producing parity, is impressive. More so when one will cost 180USD against the other costing over 500USD.

On the other point, this was 8C/16T parity with the clockspeed advantage to Intel, so the idea of fewer threads clocked high or low is moot. So how slower cores beat or "reached" parity with a high end part running higher clocks, and did so with much lower power, is something that will be explained in due course by people that probably know more than you on the subject.

We know that Cinebench and Blender are not HPC loads. We know that Cinebench is not even a real application. We also know that Cinebench and Blender are throughput loads, and much HPC code is throughput-bound. That is why a lot of HPC code can be ported to run on GPUs, like rendering...

My explanation of efficiency and of the 1/f scaling was for your argument involving TR and the 7960X. Why you mix that with the CES demo is a mystery to me. Probably because you cannot read. The lower power demoed at CES is a consequence of Zen 2 using the 7nm node, whereas Coffee Lake uses 14nm...
 
Price point is not a question, the top supposed SKU is $500 USD.

Based on what?

I speculate that the 12C/24T Ryzen 9 3800/X* will be around $500, about the same price as the Core i9-9900K, and the 16C/32T Ryzen 9 3900/X* will be $700-$800

* = hypothetical name
 
I think the pricing is spot on, and there really is no need to raise the prices to meet Intel pricing...

Ryzen entered the desktop CPU market with an 8 core / 16 thread high-end SKU for 500 bucks, the Ryzen 7 1800X...

At that time, Intel had the i7-7700K as their top end consumer (non-HEDT) desktop CPU, for around 350 bucks...

Now we have seen Intel pricing for their top end (non-HEDT) CPU climb to 500+ bucks...

Once Zen 2 is released to the wild, and we have 16c / 32t parts out there for the same 500 buck starting price that AMD has maintained for their top end (non-HEDT) desktop CPU all along, Intel will need to take some massive price cuts to compete on price alone...

And then when the lower cost Navi cards come out to give gamers who do not have the scratch to pay thru the nose for Nvidia offerings a solid GPU option; well, let's just say that 2019 is going to be a bad year for Intel & Nvidia...

8C/16T Ryzen 7 3700/X* ~$300-$400

12C/24T Ryzen 9 3800/X* ~$500-$600

16C/32T Ryzen 9 3900/X* ~$700-$800

* = hypothetical name
 
Let me make it clear why AdoredTV's list is very very likely fake. (If, for whatever reason, you are still unconvinced otherwise.)

Games, by design, are hard to parallelize across many cores.

You would see a huge benefit going from 2C to 4C, but a significant (but smaller) benefit going from 4C to 6C.

You hardly see a benefit going from 6C to 8C. (a few games can benefit, but the majority cannot)

That is the reason why the 6C Ryzen 5 is AMD's best selling product.
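A toy Amdahl's-law model shows the shape of that curve; the 70% parallel fraction below is a made-up illustrative number, not a measurement of any real game:

```python
# Toy Amdahl's-law sketch of why game scaling flattens out past ~6 cores.
# The parallel fraction is an assumption for illustration, not measured data.

PARALLEL_FRACTION = 0.70  # hypothetical share of frame work that scales with cores

def speedup(cores, p=PARALLEL_FRACTION):
    """Amdahl's law: the serial part stays fixed, the parallel part divides across cores."""
    return 1.0 / ((1.0 - p) + p / cores)

prev = None
for cores in (2, 4, 6, 8):
    s = speedup(cores)
    gain = f"+{(s / prev - 1) * 100:.0f}% over previous" if prev else "baseline"
    print(f"{cores} cores: {s:.2f}x vs 1 core ({gain})")
    prev = s
```

The exact numbers depend entirely on the parallel fraction you pick, but the diminishing returns past 6C are why extra cores matter far more for rendering than for games.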

Now, according to AdoredTV's list, AMD would cut its 6C processor's MSRP in half, from $199 to $99.

Someone would point out that Ryzen 5 2600 is selling for ~$160, which is below MSRP.

This is true, but the hypothetical Ryzen 3 3300 would likely also go on sale after being released (possibly to $80).

So let's think about that for a moment: AMD's new best selling processor is selling for $80. (down from $160)

That would significantly hurt AMD's bottom line.

_______________________________________________________________________________________

More likely than not, AMD is going to keep the number of cores in the Ryzen 3, Ryzen 5, and Ryzen 7 the same.

Ryzen 9 would target new price points.

12C/24T Ryzen 9 3800X at $500 to $600 would target Core i9-9900K

16C/32T Ryzen 9 3900X at $700 to $800 would target Intel's unreleased 10-core Coffee Lake Refresh

There's some overlap with Threadripper, but the people who buy Threadripper probably need the extra PCIe lanes that the Ryzen 9 doesn't have.
 
It is all explained here why this is not the case; you are making the same argument that others (ExtremeTech / Hardware Unboxed) made, and it is countered in detail.

Making the same argument twice does not suddenly make it true ...
 
Pass the bong. You have some sweet herb cooking as far as I can tell, Boil.

Boil was right about his recommendation to buy AMD stock though. It will see $25 a share by August.

I still want a hit though... Puff, puff, give Boil!
 
I kinda want the mods to lock this thread, TBH. The speculation is fun... I did not mean to start a flame war about a Youtube reviewer, though.
 
It is all explained here why this is not the case; you are making the same argument that others (ExtremeTech / Hardware Unboxed) made, and it is countered in detail.

Making the same argument twice does not suddenly make it true ...


Summarize the points for me.

I don't want to listen to this BS for 37 minutes.
 
We know that Cinebench and Blender are not HPC loads. We know that Cinebench is not even a real application. We also know that Cinebench and Blender are throughput loads, and much HPC code is throughput-bound. That is why a lot of HPC code can be ported to run on GPUs, like rendering...

My explanation of efficiency and of the 1/f scaling was for your argument involving TR and the 7960X. Why you mix that with the CES demo is a mystery to me. Probably because you cannot read. The lower power demoed at CES is a consequence of Zen 2 using the 7nm node, whereas Coffee Lake uses 14nm...

Cinebench is a real application and its benchmarks are very true; there are no biases to cores or clockspeed, and generally more of both means higher throughput. So when a slower clocked CPU with the same cores and threads beats a faster clocked CPU while using less power, it clearly is more than just 7nm being the difference.

As for the issue with the 7960X and TR, it was Gamers Nexus showing the 7960X hitting upward of 500W in their Blender and Cinebench extreme render tests, which was more than double Threadripper with fewer threads and on the same 14nm.

I will wait for those that actually know what they are talking about to provide insight in due course.
 
I kinda want the mods to lock this thread, TBH. The speculation is fun... I did not mean to start a flame war about a Youtube reviewer, though.
He started the chiplets thing in his masterplan series, and in a way he got it right when he was going there, and that was a good while back. In most of his speculation, where people perceive something as not viable or too costly, he uses the die-per-wafer calculator to show that certain things are more plausible than what other people in the press would have you believe.
People have been flaming him, but then again, look at the front page linked videos and check the comment that Kyle made about his original Ryzen 3000 speculation.

Would we be talking that much about this without his videos to begin with?
 
It has long been talked about on AnandTech forums before he posted the video.

He has just been grabbing things from the forums and posting them as "leaks" in his videos.
 
Cinebench is a real application and its benchmarks are very true; there are no biases to cores or clockspeed

Cinebench is not a real application, only a synthetic benchmark. And, as proven before, Cinebench is a favorable case for AMD.

As for the issue with the 7960X and TR, it was Gamers Nexus showing the 7960X hitting upward of 500W in their Blender and Cinebench extreme render tests, which was more than double Threadripper with fewer threads and on the same 14nm.

GN power measurements are incorrect. They have had issues with BIOS, MCE, and other settings.
 
https://www.maxon.net/en/products/cinebench/

- based on the award-winning Cinema 4D software

https://www.maxon.net/en/products/cinema-4d/overview/

- One of about 10 software suites covering CGI, graphic design and rendering, model creation, animation, etc.

As for Gamers Nexus, they have corrected all their problems, and Steve is not some incompetent idiot that doesn't know how to test hardware. Steve is also one of the major protagonists against MCE-based results and redid all his benches when it came to light.
 
Cinebench is not a real application, only a synthetic benchmark. And, as proven before, Cinebench is a favorable case for AMD.

By that logic any fixed benchmark that uses a product is not a real application.
 
https://www.maxon.net/en/products/cinebench/

- based on the award-winning Cinema 4D software

https://www.maxon.net/en/products/cinema-4d/overview/

- One of about 10 software suites covering CGI, graphic design and rendering, model creation, animation, etc.

As for Gamers Nexus, they have corrected all their problems, and Steve is not some incompetent idiot that doesn't know how to test hardware. Steve is also one of the major protagonists against MCE-based results and redid all his benches when it came to light.

"Based on" something else doesn't mean it is a real application. It is a synthetic bench.

https://www.techspot.com/article/1039-ten-years-intel-cpu-compared/page2.html

https://www.tomshardware.co.uk/razer-core-x-egpu,review-34456-2.html

https://www.anandtech.com/show/7963...iew-core-i7-4790-i5-4690-and-i3-4360-tested/5
 
By that logic any fixed benchmark that uses a product is not a real application.
What logic? That program was so great when AMD had Phenom II and Bulldozer that most AMD folks said the benchmark would only be any good if it wasn't compiled with ICC, as noted by some people on AnandTech.
 