New Zen 2 Leak

Status
Not open for further replies.
This is why we are here....
Not all of us have or can afford a 1080 Ti or higher. Doesn't mean folks can't tweak their systems and play the [H]ell out of a game!
Also I think there will be a legacy of 4/8 gaming with some huge titles because monie$. How big is E-Sports?
 
Elitist bleeding-nose gaming? Sure.
But I think it will be adequate for years to come.

Next set of consoles are supposedly powered by Ryzen. So for now yeah 4/8 is fine, but in another couple of years....
 
Next set of consoles are supposedly powered by Ryzen. So for now yeah 4/8 is fine, but in another couple of years....

Just because a hardware config has lots of cores doesn't automatically mean games will be heavily threaded. Coding with multiple CPU cores in mind is more complex and time-consuming, and generally hard to make good use of: just because you have, say, 8 threads at your disposal doesn't mean you'll get anywhere close to 8x the CPU processing capability. When you factor in the added complexity and development time, you often only see AAA titles with proper ~4-core(+) support.

The best outcome would be better automatic multicore code in the DirectX API that devs could make use of with very little or no extra effort. It probably wouldn't be as effective as a dev's own specialized multithreaded code for a particular game (say a very optimized engine gets a 40% improvement while the generic DirectX multithreaded path gives more like 20-25%), but that would still, IMO, be very good for PC games in general, since everyone could make use of it with little effort.
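The gap between ideal and real scaling that the posts above describe is just Amdahl's law. A minimal sketch, with made-up parallel fractions rather than numbers from any real engine:

```python
# Amdahl's law: why 8 threads never yields 8x. The parallel fractions
# below are hypothetical, not measurements from any real game engine.
def speedup(parallel_fraction: float, cores: int) -> float:
    """Overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# An engine where 60% of frame time parallelizes, on 8 cores:
print(round(speedup(0.60, 8), 2))   # 2.11 -> nowhere near 8x
# Even a heavily tuned engine (90% parallel) tops out well short of 8x:
print(round(speedup(0.90, 8), 2))   # 4.71
```

The serial fraction dominates quickly, which is one reason devs historically stopped at proper ~4-core support.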

But who knows, maybe AI will show up in game engines in the future as well (Unreal Engine 5, maybe?), deciding on the fly which tasks should go to which cores; currently the devs have to be pretty specific about which tasks run on which threads.
 
Anyone remember that early leak over a year ago, before Zen+, with 5GHz and lots of cores? I think it was AdoredTV, and it was just an image that appeared in one of his videos.
I also predicted back then that Zen 2 would be ~4.5-4.7GHz all-core, and may approach 5GHz under turbo or with a max-effort OC. Looks about right and in line with current leaks. Chalk that up with being the only guy I've seen on the net predicting the 2080 Ti would launch out of the gate and that they'd use it to shift pricing up. I've been pretty on point with major releases, if I may toot my own horn a bit ;P I guess that's what happens when you've been watching releases for 15+ years..
 
Next set of consoles are supposedly powered by Ryzen. So for now yeah 4/8 is fine, but in another couple of years....

Sony gave programmers the option of a console with 12 or 16 cores, but they rejected it and chose the 8-core option for the PS4. Games aren't throughput workloads. And how many cores will the Ryzen-powered consoles have? Eight?
 
Next set of consoles are supposedly powered by Ryzen. So for now yeah 4/8 is fine, but in another couple of years....

As has been said before:

- The PS3's Cell processor had one general-purpose core (the PPE) plus eight SPE coprocessors
- The PS4 (both models) and Xbox One (all models) currently have 8 cores

This has not impacted games on the PC.

And when the consoles get Ryzen chiplet designs, they will almost certainly be some of the lowest-binned varieties with low clocks. I mean, look at what they are using now: Jaguar cores, AMD's low-power Atom-esque cores.

Add to that that systems will always be limited by the main game engine process, which cannot now, and may never, be fully multi-threaded, because it depends on shared state. It doesn't matter how many cores Sony or Microsoft put in their consoles; they can't magically violate the fundamentals of logic and computing. The most you will get is a few more subsystems broken out into their own process on another core (audio on one, physics on another, etc.), and those can probably run just as well on one faster core.
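The "break a few subsystems out onto their own cores" pattern described above can be sketched roughly as below; the subsystem names and per-frame jobs are invented for illustration, not taken from any real engine:

```python
# Sketch of the pattern above: a serial main loop that hands
# self-contained work (audio mixing, physics stepping) to worker
# threads. Subsystem names and workloads are made up for illustration.
import queue
import threading

def worker(tasks, results, lock):
    while True:
        job = tasks.get()
        if job is None:            # sentinel: shut the worker down
            break
        with lock:
            results.append(job())  # run one subsystem step

tasks = queue.Queue()
results = []
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(2)]      # e.g. one "audio" and one "physics" core
for t in threads:
    t.start()

# The main loop stays single-threaded because it owns the game state;
# only self-contained side work is farmed out each frame.
for frame in range(3):
    tasks.put(lambda f=frame: ("audio", f))
    tasks.put(lambda f=frame: ("physics", f))

for _ in threads:
    tasks.put(None)
for t in threads:
    t.join()

print(sorted(results))  # each subsystem ran once per frame
```

Note that the main loop never shares mutable game state with the workers; that is exactly the constraint that keeps the core engine process serial.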
 
A 16-core Zen 2 may have problems making use of all its cores on Windows 10. It seems Windows 10 cannot use more than 10 cores efficiently, and it appears to come down to how the kernel works. No workaround has come along to make the Threadripper 2990WX a better chip than the 2950X on Windows 10.
Hopefully Zen 2 will work on Windows 7 Pro, which has no such limitation.
 
A 16-core Zen 2 may have problems making use of all its cores on Windows 10. It seems Windows 10 cannot use more than 10 cores efficiently, and it appears to come down to how the kernel works. No workaround has come along to make the Threadripper 2990WX a better chip than the 2950X on Windows 10.
Hopefully Zen 2 will work on Windows 7 Pro, which has no such limitation.

Not sure where you got that from, but my system does not have a problem at all using 16C/32T.

There is also a brand-new utility that may help even more, since it tries to make up for NUMA issues involving Windows 10's scheduler: https://bitsum.com/portfolio/coreprio/

It is an early version of a freeware utility by the same guys that put out Process Lasso. It has been tested to almost double the performance of a TR 2990WX in some workloads, and should dynamically help processes that need to run closer to local memory channels by introducing a Dynamic Local Mode on top of Windows' low-level scheduling.
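For the curious, the underlying mechanism (restricting which CPUs a process may run on) can be demonstrated with Python's Linux-only stdlib affinity calls. This is only a rough analogue of the idea, not Coreprio's actual Windows implementation:

```python
# Rough illustration of CPU affinity pinning, the mechanism that tools
# like Coreprio automate on Windows. Uses Linux-only stdlib calls; this
# is not Coreprio's actual implementation.
import os

original = os.sched_getaffinity(0)        # CPUs this process may run on
subset = {min(original)}                  # pin to a single CPU as a demo
os.sched_setaffinity(0, subset)
assert os.sched_getaffinity(0) == subset  # scheduler now honors the pin
os.sched_setaffinity(0, original)         # restore the original mask
print("restored", len(original), "CPUs")
```

On a NUMA system like the 2990WX, pinning a memory-hungry process to the cores with local memory channels is exactly the kind of policy such tools apply dynamically.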
 


Ronnie Morgan Cool story bro
 
If you look at the Silicon Lottery site, only 60% of TR4 chips can hit 3533; 3733 is incredible. I'm guessing, based on the 32GB capacity on both boards, that they are both running single-rank DIMMs in a 4x8GB config. That might be a perfect build scenario, but it is certainly not reasonable to expect.

From what I've seen, once you go to 8-DIMM configs, or higher-capacity DIMMs in a 4-up setup (read: double-rank), you won't hit those speeds often.

But I've only built half a dozen HEDT systems so far, and not everyone springs for B-die or E-die, so YMMV, as always.

But it isn't common.


I'm not sure what you mean by "most quad channel setups max out at like 3200"?

Here is my Intel X299:
[email protected] 3.2GHz mesh / MSI Gaming 7 ACK / 4x8GB HyperX DDR4-4000@4000-18-18-18-38-420 / Intel Quad Channel
View attachment 132352

Here is a friends TR:
TR [email protected] / ASRock X399M Taichi / 4x8GB Patriot Viper 4 DDR4-3733@3733 14-14-14-28 1N / AMD Quad Channel
View attachment 132353
 
If you look at the Silicon Lottery site, only 60% of TR4 chips can hit 3533; 3733 is incredible. I'm guessing, based on the 32GB capacity on both boards, that they are both running single-rank DIMMs in a 4x8GB config. That might be a perfect build scenario, but it is certainly not reasonable to expect.

From what I've seen, once you go to 8-DIMM configs, or higher-capacity DIMMs in a 4-up setup (read: double-rank), you won't hit those speeds often.

But I've only built half a dozen HEDT systems so far, and not everyone springs for B-die or E-die, so YMMV, as always.

But it isn't common.


I've been keeping an eye out on SL to see when they were going to offer a TR...I still do not see it.
 
If you look at the Silicon Lottery site, only 60% of TR4 chips can hit 3533; 3733 is incredible. I'm guessing, based on the 32GB capacity on both boards, that they are both running single-rank DIMMs in a 4x8GB config. That might be a perfect build scenario, but it is certainly not reasonable to expect.

From what I've seen, once you go to 8-DIMM configs, or higher-capacity DIMMs in a 4-up setup (read: double-rank), you won't hit those speeds often.

But I've only built half a dozen HEDT systems so far, and not everyone springs for B-die or E-die, so YMMV, as always.

But it isn't common.

So I'm still confused. You say 60% of TR can hit 3533, but you're comparing 2400MHz quad channel to 3866MHz dual channel? I would venture to say most AMD setups can't hit 3866MHz, and getting 64GB/s out of 3866MHz dual channel isn't possible either. Likewise, people not springing for B-die aren't going to see 3866MHz on the RAM either. You're skewing the numbers to try to make your argument valid, but that doesn't work. I can post pictures of dual channel at 4000MHz, and sadly it's still just under 60GB/s.
How much real-world benefit comes from quad vs. dual channel would be a better point to make. Sadly, that's really application-dependent. As we start to see 12+ cores on dual channel, I think the memory bandwidth issue will become more apparent.
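The channel arithmetic behind those numbers is easy to check. A quick sketch, using the transfer rates mentioned in the posts above:

```python
# Theoretical DDR4 bandwidth = transfer rate (MT/s) x 8 bytes per
# transfer per channel x channel count. Sanity check of the figures
# in the discussion above.
def ddr4_gbps(mt_per_s: int, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000  # GB/s

print(ddr4_gbps(3866, 2))  # 61.856 -> under 64 GB/s, as noted
print(ddr4_gbps(2400, 4))  # 76.8   -> quad 2400 beats dual 3866 on paper
print(ddr4_gbps(4000, 2))  # 64.0 theoretical; real copies land under 60
```

These are peak theoretical numbers; measured copy bandwidth always comes in lower, which matches the "just under 60GB/s at 4000MHz" observation.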
 
Application, OS, and build dependent, but the theoretical bandwidth isn't so different.

Real-world bandwidth is variable, and the performance differences won't be evident to most people for a long while (unless you're running quad GPUs in an intense rendering scenario, or multiple SAS RAID controllers in a heavy I/O scenario). And there's almost no effect in Windows because, well, heck, the Windows 10 scheduler can barely handle more than 12c/24t scheduling correctly anyhow, so how would you ever see the benefit of the memory bandwidth if the threads can't even perform correctly?
 
Application, OS, and build dependent, but the theoretical bandwidth isn't so different.

Real-world bandwidth is variable, and the performance differences won't be evident to most people for a long while (unless you're running quad GPUs in an intense rendering scenario, or multiple SAS RAID controllers in a heavy I/O scenario). And there's almost no effect in Windows because, well, heck, the Windows 10 scheduler can barely handle more than 12c/24t scheduling correctly anyhow, so how would you ever see the benefit of the memory bandwidth if the threads can't even perform correctly?

So far you really haven't proved your point, just a lot of tail chasing. Quad channel provides more bandwidth. Some applications take advantage of it, some do not. The majority of people don't need quad-channel memory, but then the majority of people don't need more than four cores either. I wasn't picking at your comment that quad channel isn't for everyone; it's the claim that 2400MHz quad is equal to 3866MHz dual, when neither of those is realistic (if you're going to do quad, you may as well have faster RAM; likewise, 3866MHz is likely a pipe dream for the current AMD lineup).
 
900MHz lol

This could be a low-power variant, or an engineering test sample.

Just because someone is testing an idea they had (hey Todd, let's see how low we can get the power consumption if we drop the clocks and the voltage) doesn't mean it is reflective of an actual product.

You wouldn't expect internal results like these in a public database, but oftentimes simple testing like this is given to engineering interns and kids fresh out of school, and you wouldn't believe the common sense these kids lack.
 
This could be a low-power variant, or an engineering test sample.

Just because someone is testing an idea they had (hey Todd, let's see how low we can get the power consumption if we drop the clocks and the voltage) doesn't mean it is reflective of an actual product.

You wouldn't expect internal results like these in a public database, but oftentimes simple testing like this is given to engineering interns and kids fresh out of school, and you wouldn't believe the common sense these kids lack.


Lol, I'm not expecting anything much outta something that's a qualification/ES part. I just thought it was funny, that's all.
 
Which is strange, because if you load up an all-core OC you're probably already way beyond the 135W that the 16C part is at, and current boards handle it. It might come down to a case-by-case basis with each board, where higher-end ones get support and lower-end ones don't.

One of the enduring problems on the AMD side of the motherboard business is that the vendors can do whatever they want with VRM design, so long as the board works with whatever CPUs are on the market when the board launches. Some motherboards end up with VRM designs that barely make the grade for compatibility, while others are overbuilt. On the Intel side, Intel has a lot more control over how the motherboard manufacturers do things.
 
One of the enduring problems on the AMD side of the motherboard business is that the vendors can do whatever they want with VRM design, so long as the board works with whatever CPUs are on the market when the board launches. Some motherboards end up with VRM designs that barely make the grade for compatibility, while others are overbuilt. On the Intel side, Intel has a lot more control over how the motherboard manufacturers do things.

Yeah, the MB manufacturers really need to step it up some in regards to VRMs & such if the Ryzen rumors become fact...

That 135 watt R9 3850X is going to need a good bit more power than the base 135 watts once you get all 16 cores up & running...
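A rough way to see why: dynamic power scales roughly with V^2 * f, so clock and voltage bumps compound. The voltage and clock ratios below are illustrative guesses, not leaked 3850X specs:

```python
# Back-of-envelope dynamic power scaling: P ~ C * V^2 * f. The ratios
# used below are illustrative guesses, not real 3850X numbers.
def scaled_power(base_w: float, v_ratio: float, f_ratio: float) -> float:
    """Dynamic power after scaling voltage and clock by the given ratios."""
    return base_w * v_ratio ** 2 * f_ratio

# A 135W part pushed ~10% higher clocks at ~10% more voltage:
print(round(scaled_power(135, 1.10, 1.10), 1))  # ~179.7W
```

Even modest all-core overclocks can push a 135W chip well past its rated TDP, which is exactly the VRM headroom question being debated here.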
 
One of the enduring problems on the AMD side of the motherboard business is that the vendors can do whatever they want with VRM design, so long as the board works with whatever CPUs are on the market when the board launches. Some motherboards end up with VRM designs that barely make the grade for compatibility, while others are overbuilt. On the Intel side, Intel has a lot more control over how the motherboard manufacturers do things.


I wonder why AMD doesn't set specs? Just not enough leverage given their market share?
 
One of the enduring problems on the AMD side of the motherboard business is that the vendors can do whatever they want with VRM design, so long as the board works with whatever CPUs are on the market when the board launches. Some motherboards end up with VRM designs that barely make the grade for compatibility, while others are overbuilt. On the Intel side, Intel has a lot more control over how the motherboard manufacturers do things.

Do you think that there is a minimum spec where they would be required to handle 135 Watts? Overclocking is just a bonus at that point.
 
This could be a low-power variant, or an engineering test sample.

Just because someone is testing an idea they had (hey Todd, let's see how low we can get the power consumption if we drop the clocks and the voltage) doesn't mean it is reflective of an actual product.

You wouldn't expect internal results like these in a public database, but oftentimes simple testing like this is given to engineering interns and kids fresh out of school, and you wouldn't believe the common sense these kids lack.

Pretty sure it's a 1.4 base, 2.2 boost part - or something like that - which is currently underclocked for test reasons.
 
One of the enduring problems on the AMD side of the motherboard business is that the vendors can do whatever they want with VRM design, so long as the board works with whatever CPUs are on the market when the board launches. Some motherboards end up with VRM designs that barely make the grade for compatibility, while others are overbuilt. On the Intel side, Intel has a lot more control over how the motherboard manufacturers do things.

It's kind of a catch-22: while AMD gives more leeway on the VRM side of things so that manufacturers can differentiate their products, Intel doesn't, and instead uses 30 different chipset options for board manufacturers to differentiate their product lineups. Personally, I'm OK with AMD's approach, but both options have their faults when it comes to consumers who don't have a clue what they're buying.
 
There have to be AMD specification requirements, especially with AMD keeping the same socket through 2020. Just like with Intel, lower-tier motherboards do not have VRMs of the same quality. All you have to do is look at the release of the 9900K to see that Intel suffers from the same problem: run a 9900K on a lower-tier motherboard with lower-quality VRMs, built to a lower standard, and the temperatures are a lot higher and the chip throttles. Run it on a higher-quality motherboard and the temperatures are lower and it does not throttle. It's the same situation with AMD. I am not sure where the idea came from that AMD does not have any set specification requirements and just lets motherboard manufacturers do whatever they want; it is just not a logical conclusion. It would be safe to say that AMD may have looser requirements, but they do still have requirements.
 
I wonder why AMD doesn't set specs? Just not enough leverage given their market share?

Both AMD and Intel have all kinds of white papers and design guidelines for motherboard manufacturers to follow. The difference is that AMD isn't as strict with these as Intel is. AMD lacks the ability to force motherboard manufacturers to do anything. Intel on the other hand has quite a bit of control over the motherboard makers.

Do you think that there is a minimum spec where they would be required to handle 135 Watts? Overclocking is just a bonus at that point.

Absolutely. As far as I know, every Ryzen motherboard can handle 135 watts. Going beyond that is another matter.

It's kind of a catch-22: while AMD gives more leeway on the VRM side of things so that manufacturers can differentiate their products, Intel doesn't, and instead uses 30 different chipset options for board manufacturers to differentiate their product lineups. Personally, I'm OK with AMD's approach, but both options have their faults when it comes to consumers who don't have a clue what they're buying.

I don't think it's about differentiation as much as AMD letting motherboard manufacturers do what they want in order to keep costs down. One of the things that makes AMD more attractive is the lower platform cost. You can get an AM4 motherboard for substantially less than an equivalent LGA 1151 motherboard. On the HEDT side things are closer, but that's because of the platform's complexity and feature set.

There have to be AMD specification requirements, especially with AMD keeping the same socket through 2020. Just like with Intel, lower-tier motherboards do not have VRMs of the same quality. All you have to do is look at the release of the 9900K to see that Intel suffers from the same problem: run a 9900K on a lower-tier motherboard with lower-quality VRMs, built to a lower standard, and the temperatures are a lot higher and the chip throttles. Run it on a higher-quality motherboard and the temperatures are lower and it does not throttle. It's the same situation with AMD. I am not sure where the idea came from that AMD does not have any set specification requirements and just lets motherboard manufacturers do whatever they want; it is just not a logical conclusion. It would be safe to say that AMD may have looser requirements, but they do still have requirements.

There are. However, AMD keeping a socket through a projected period of time means nothing. Back in the AM2 and AM3/AM3+ days, you had motherboards with shitty VRM implementations which couldn't handle going above 95 watts. In the future, we could see a similar dynamic with AMD. You might have some motherboards that can do 135 watts no problem, but a theoretical 145-watt or 150-watt TDP chip is a no-go. I'm not saying we'll definitely see that, but something along those lines is always a possibility when motherboard manufacturers are allowed greater latitude on VRM design.

That said, I'm seeing some things I don't like on the Intel side right now, so motherboard manufacturers may have more leeway than they've had in the past.
 
It's kind of a catch-22: while AMD gives more leeway on the VRM side of things so that manufacturers can differentiate their products, Intel doesn't, and instead uses 30 different chipset options for board manufacturers to differentiate their product lineups. Personally, I'm OK with AMD's approach, but both options have their faults when it comes to consumers who don't have a clue what they're buying.
If you look at something like the Asus X370 Crosshair VI Hero, then what is the fuss about? With its 4-pin and 8-pin CPU power connectors, it could already support such a beast of a CPU.

There is just little uniformity across X370; it is not that the platform can't handle it.
 
Ahh, Intel motherboards have had overheating VRMs too. I think it's always a bit up to the consumer to take a closer look before buying a motherboard these days. Plus, AMD is always covered by just saying they support the AM4 socket, and it's up to the motherboard manufacturer to support new AM4 processors. Sadly, I think most of it comes from being more flash than function.
 
It is not an engineering sample. It is a qualification sample. I explained the differences between both concepts here.

I am very familiar with the difference. I have worked in Engineering development and testing my entire career.

What I don't understand is how you can see a random result pop up in a results database and say with certainty which type of sample it is.
 