AMD Further Unveils Zen Processor Details

I said I'm not familiar with the Blender CPU test, meaning I don't know if it scales well with core count, clock, FP performance...

So I went and looked at the results database. Did you do that? Did you do anything at all? No. Instead you come here and claim AMD presented hard data when they did not; they did not even specify whether quad-channel memory was used on the Intel system.

Anyway, here is Deus Ex at 4K:

[attached: dem_3840_12.png, Deus Ex benchmark at 3840×2160]

You mean you are not done posting more off-topic messages in the AMD forums? Maybe next time just keep spamming Nvidia crap; the mods don't care...
 
You mean you are not done posting more off-topic messages in the AMD forums? Maybe next time just keep spamming Nvidia crap; the mods don't care...

That's not off topic, but this post of yours is.
 
How does it not mean anything? Zen was originally aimed at competing with Haswell for IPC, and now we see IPC at the same level as Broadwell-E, which is the latest die shrink/architecture from Intel.

IPC is often conflated with single-thread performance for marketing purposes.

Believe what you will, but if you go and look at the actual results for the Blender CPU test, you'll see why it means very little.

A six-year-old hexacore CPU based on Nehalem (Gulftown) is outperforming a 5960X in Blender. So while AMD has matched Broadwell, it has yet to match six-year-old Gulftown. You can't pick and choose: if you validate this benchmark as proof that Zen outperforms Broadwell, then you must also accept that it performs worse than Gulftown.
 
The hard data is that they ran both at 3 GHz.
You are funny
Why do you lack knowledge of what Blender does yet still comment on it? Because of your lack of knowledge you claim there are no performance numbers; the performance is that it beats the Intel CPU at equal cores, threads, and gigahertz in the Blender demo.
AMD did not say they're faster, they showed it, as in everyone could see it performs better in the Blender demo.

Can't resist:

Lighten up, Francis....
 
I would like someone to explain the reason for this. It does not make a lot of sense to me.
Looks to me like it doesn't scale too well with core count and frequency matters most; it also probably uses older (worse-performing) libraries for compatibility reasons.


This is just a guess though
Certainly isn't using AVX
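On the scaling guess above: Amdahl's law gives a quick sanity check for why a workload with any serial fraction stops rewarding extra cores while still rewarding clock speed. A minimal sketch in Python; the parallel fractions here are made-up for illustration, not measured from Blender:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# If only 80% of a render parallelizes, 8 cores buy far less than 8x,
# while a higher clock speeds up the serial part too.
print(round(amdahl_speedup(0.80, 8), 2))  # 3.33
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
```

This would be consistent with a Gulftown beating a 5960X when the Gulftown runs at a higher clock.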
 
Haswell-E has more cache overall and more cache per core.

But how is the latency? Not to mention L3 clock.

Skylake actually has higher cache latency than Haswell, but higher throughput. The same applies to Nehalem vs. Haswell.

Skylake (cycles): L1: 4, L2: 12, L3: 44
Haswell (cycles): L1: 4, L2: 11, L3: 34
http://www.intel.com/content/dam/ww...4-ia-32-architectures-optimization-manual.pdf

If we look at Zen, we also know that Zen has a larger L1. And we know Zen does very poorly in AVX/AVX2, so throughput isn't something it's optimized for. So latencies could be better. Uncore clocks can also affect it: a 5960X, for example, runs its uncore (which includes the L3) at 3 GHz, while a 6700K runs its uncore at 4.1 GHz.
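Latencies like the ones quoted above are usually probed with a pointer-chasing microbenchmark: walk a random cycle whose working set is sized to fit in L1, L2, or L3, and time the dependent loads. A minimal Python sketch of the idea follows; interpreter overhead dominates in Python, so treat the numbers as qualitative only (a real measurement would use C and raw memory):

```python
import random
import time

def chase(size_bytes, stride=64, iters=200_000):
    """Average time per hop through a random cycle; rises as the
    working set spills from L1 to L2 to L3 to DRAM."""
    n = max(size_bytes // stride, 2)
    order = list(range(n))
    random.shuffle(order)
    nxt = [0] * n
    for i in range(n):                 # link the shuffled order into one cycle
        nxt[order[i]] = order[(i + 1) % n]
    idx = 0
    t0 = time.perf_counter()
    for _ in range(iters):             # the dependent-load chain being timed
        idx = nxt[idx]
    return (time.perf_counter() - t0) / iters * 1e9   # ns per hop

for kb in (16, 256, 8192):             # roughly L1-, L2-, and L3-sized sets
    print(f"{kb:>5} KiB working set: {chase(kb * 1024):.1f} ns/hop")
```

The random order defeats the hardware prefetcher, so each hop pays the full load-to-use latency of whichever cache level the set fits in.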
 
I also think Zen's L3 actually consists of two linked 8 MB slices rather than a single unified cache, since the CPU contains two quad-core clusters.

AnandTech has this on the subject.

Edit 7:18am: Actually, the slide above is being slightly evasive in its description. It doesn't say how many cores the L3 cache is stretched over, or if there is a common LLC between all cores in the chip. However, we have received information from a source (which can't be confirmed via public AMD documents) that states that Zen will feature two sets of 8 MB L3 cache between two groups of four cores each, giving 16 MB of L3 total. This would mean 2 MB/core, but it also implies that there is no last-level unified cache in silicon across all cores, which Intel has. The reasons behind something like this typically have to do with modularity, and being able to scale a core design from low core counts to high core counts. But it would still leave a Zen core with the same L3 cache per core as Intel.

AMD Zen Microarchitecture: Dual Schedulers, Micro-Op Cache and Memory Hierarchy Revealed

The console SoCs have a somewhat similar configuration, but there it's a disaster for game logic, because there is a massive ~200-cycle penalty between the two clusters. With tile-based rendering and anything similarly partitioned, you obviously avoid the penalty. How Zen will turn out on this, and what penalty (if any) exists between its two clusters, we have to see.
 
The first engineering samples were 2.8 GHz with a 3.2 GHz turbo clock. This appears to be a newer engineering sample at 3 GHz, with turbo mode apparently turned off for the purpose of the demonstration. That's already a 200 MHz increase from one engineering sample to the next in a very short span of time. I don't know why you are disappointed in clock speeds when Intel doesn't offer more than 3.2 GHz on the 6900K or even on the E5-1680 v3. The reality is that more cores mean more heat and a bigger TDP in the same package.

This is a good point. Broadwell-E has lower clocks than we are used to. I still think that AMD is going to have to hit clocks equivalent to Haswell and Skylake 4C/8T parts, at least on their 4C/8T APU parts down the line.

It doesn't mean anything yet, because we have a very incomplete picture, but my concern is that they won't be able to get there. As soon as I see a demonstration at 3.8-4.0 GHz, even if just on a single thread with max single-core turbo in effect, my concerns will be allayed.
 
I think it is how AMD measures the temperatures. My old Phenom II had a limit of 60 degrees. It comes down to where the T-junction is measured and how far from the central cores it is.

And there I was thinking they used a different type of solder in their old chips. :p

Intel used to have a lower Tmax a long time ago as well. I remember being astonished when I got my first newer-design Intel chip that could go all the way up to 100C.
 
AMD also is not stupid. They are not going to price themselves out of the market before they even get started. Take some time and think that through, you will see where I am coming from.


If they have the goods, they will have the price to match. They might discount them a little bit under equivalently performing Intel chips to gain market share, but it won't be a wide margin.

The reason AMD chips are cheap now is simply because they lack the ability to compete, not because of some sort of AMD charity.

Of course, if they do have the goods, and price it a little lower than Intel to gain market share, Intel may counter, creating a bit of a price war, which would lower prices on both sides. That would be nice.


AMD already did it before; remember the $800 220W FX? Don't be naive, this is business. So yes, AMD won't be stupid and won't sell it for less than they can get.

Also if they can actually compete in server, all volume will go there first.


Exactly. I recall there being some stupidly expensive Athlon 64 X2s before Intel's C2D came out as well, like the Athlon 64 X2 3800+, which sold for over $600, if I recall.

They will want to undercut Intel a little bit to regain market share, but not by a lot.
 
I said I'm not familiar with the Blender CPU test, meaning I don't know if it scales well with core count, clock, FP performance...

So I went and looked at the results database. Did you do that? Did you do anything at all? No. Instead you come here and claim AMD presented hard data when they did not; they did not even specify whether quad-channel memory was used on the Intel system.

Anyway, here is Deus Ex at 4K:

[attached: dem_3840_12.png, Deus Ex benchmark at 3840×2160]

What does this graph of Deus Ex have to do with Zen???
 
If they have the goods, they will have the price to match. They might discount them a little bit under equivalently performing Intel chips to gain market share, but it won't be a wide margin.

The reason AMD chips are cheap now is simply because they lack the ability to compete, not because of some sort of AMD charity.

Of course, if they do have the goods, and price it a little lower than Intel to gain market share, Intel may counter, creating a bit of a price war, which would lower prices on both sides. That would be nice.





Exactly. I recall there being some stupidly expensive Athlon 64 X2s before Intel's C2D came out as well, like the Athlon 64 X2 3800+, which sold for over $600, if I recall.

They will want to undercut Intel a little bit to regain market share, but not by a lot.

That is not correct at all. For now AMD has to rebuild the user base and increase market share. Now, they will not make them super cheap, but then again, they are not going to make them stupidly expensive like Intel is doing now. What happened with the X2 processors cannot be directly compared to what is going on today. If they only undercut Intel a "little", they will only regain a little market share. Now, if they are successful, prices could be higher with future releases, but they first need to reestablish themselves.
 
That is not correct at all. For now AMD has to rebuild the user base and increase market share. Now, they will not make them super cheap, but then again, they are not going to make them stupidly expensive like Intel is doing now. What happened with the X2 processors cannot be directly compared to what is going on today. If they only undercut Intel a "little", they will only regain a little market share. Now, if they are successful, prices could be higher with future releases, but they first need to reestablish themselves.

True. I expect AMD to price a competitive part significantly cheaper than Intel. Now, I don't expect them to sell it for $300; they still have to make a profit. If they can sell a part within 10% of Intel's for $500-600 when Intel charges a grand, it will be a win. They will still make a profit. They need to gain market share, yet they need to make a profit. Believe it or not, if Zen is a winner it will make not only the CPU side competitive, it will make the GPU side more competitive as well in the long run. More budget for R&D.
 
I expect AMD to continue the same policy they have used for the last decade: pricing their CPU at a discount to what AMD considers its closest competition. The current 8-core is priced to compete with an i5. If the new 8-core/16-thread processor is competitive with Intel's 6-core/12-thread processors, I expect it to be a ~$350 CPU. If it is competitive with Intel's 8-core, I expect it to be an $800 CPU.
 
That is not correct at all. For now AMD has to rebuild the user base and increase market share. Now, they will not make them super cheap, but then again, they are not going to make them stupidly expensive like Intel is doing now. What happened with the X2 processors cannot be directly compared to what is going on today. If they only undercut Intel a "little", they will only regain a little market share. Now, if they are successful, prices could be higher with future releases, but they first need to reestablish themselves.

The problem is that they have been hemorrhaging cash for years, and the second their board smells a money-making opportunity they might be inclined to grab as much cash as possible, long-term consequences be damned.

I agree with your assessment otherwise. AMD needs to regain market share in a shrinking landscape, which will be even harder to do if they only end up price-competitive with the even more expensive alternatives. Hopefully they will have a top-to-bottom marketing strategy that addresses several key points from a pricing perspective, and there seems to be evidence of that in their video card strategy. Whether or not that carries over to the CPU/APU division is another matter.
 
Clock-for-clock comparisons don't matter when the standard clock speed of a 6900K is 3.2 GHz (turboing to 3.7 GHz under load) compared to a Zen at 3.0 GHz.

Clock-for-clock parity is fairly irrelevant if a Zen will have trouble going higher than 3.0 GHz.

Like this?
AMD FX 8150 Looks Core i7-980X and Core i7 2600K in the Eye: AMD Benchmarks

Or this?
AMD posts blatantly deceptive benchmarks on Barcelona | ZDNet



True; if they could show any higher, they would. It seems both the 4C and 8C models are stuck at the same clocks: 2.8 GHz base, 3.05 GHz all-core turbo, and 3.2 GHz peak turbo.

At this point in time, there aren't going to be any changes till release.

I don't think they would show the clock speeds "if they could".
1. They can't give us everything all at once. Bit by bit (lol) to keep the Zen talk alive.
2. It's early silicon on an early mobo, especially if they are still talking late Q4 '16.
3. Why would they show the shipping clock speeds this far from launch? Do you want to give your competitor time to blow you away before you even launch? No, you don't. You announce shipping clock speeds close to launch/availability. You want to attempt to catch your competitor with their pants down, even for just a brief moment.

I don't think you understand how prototyping and marketing work.

I have a feeling, and agree with the above, that they are best off NOT releasing all the info at once, as it might cannibalize sales of their own product.
But primarily, ever since Bulldozer it has been "ZOMG AMD doesn't compete with Intel clock for clock, they are dead", "they don't compete on die shrinks, they are going out of business", and so forth.
What they are showing is the desktop Zen, FYI; it might be a "server test board", but the 8c/16t chip is the desktop variant. Eventually we will see 4c/8t and 2c/4t as well, but the initial launch is supposed to be ONLY the 8-core version, with the server side getting 8/16, 16/32, etc.
 
I have a feeling, and agree with the above, that they are best off NOT releasing all the info at once, as it might cannibalize sales of their own product.

That's not a rational argument. Just about nobody who might be convinced to wait for a Zen CPU is going to buy one of AMD's currently available products instead. AMD has no products in the performance segment Zen is touted to be in. Only Intel does. You can't damage your sales in a product segment when you don't have any products in that segment.

And putting out information that might convince someone to not buy an Intel product and instead wait for Zen is exactly what AMD should be doing - if they can do so without running afoul of the law. So if they could credibly claim to have higher-speed Zens in the works, they'd be well-served to cherry-pick some engineering samples and show higher-speed results. But they have not done so.
 
If 8-core Zen is beating 8-core Intel in a rendering bench at the same clock speeds, I don't see this being anything but AMAZING.
Even if the AMD part tops out at 3.8 GHz, it is still a massive winner in my book over the $1000 Intel part.

Hopefully they are able to make 4C/6C chips that reach 4.5 GHz, as an 8-core with a 3.5-3.8 GHz clock is of no use to a gamer from either company. The IPC improvement looks promising here.
 
If 8-core Zen is beating 8-core Intel in a rendering bench at the same clock speeds, I don't see this being anything but AMAZING.
Even if the AMD part tops out at 3.8 GHz, it is still a massive winner in my book over the $1000 Intel part.

Hopefully they are able to make 4C/6C chips that reach 4.5 GHz, as an 8-core with a 3.5-3.8 GHz clock is of no use to a gamer from either company. The IPC improvement looks promising here.


Yeah, but as has been mentioned several times in this thread, it is a whacky benchmark, where even Intel's old Gulftown chips are beating current Broadwell-E, so it probably doesn't mean much.
 
If 8-core Zen is beating 8-core Intel in a rendering bench at the same clock speeds, I don't see this being anything but AMAZING.
Well, yeah, if that rendering bench is what you are actually buying the processor to run, and they haven't altered the code of the rendering bench to make themselves look better...

Personally, I have never run Blender. I don't even know why I would. I imagine a lot of potential customers for Zen are in the same boat.
 
We need more information about the Blender benchmark, really; a brief Google search on mobile yielded an EOL announcement because the author no longer had time to maintain it.

I also found a forum post about compiling with SSE2 in 2010, so I dunno; maybe that's a different or older one.

What does the benchmark actually test?

As usual, I'm sure some of you are thinking "ieldra is bashing anything remotely positive for AMD", but really, the benchmark results are weird; they make very little sense if you just look at them.

Combine that with a demo of Deus Ex at 4K, which I posted the results for because there was a Fury X CF result which almost certainly runs better than the Pro Duo used in the demo.

We're talking cinematic 30 fps, maybe 60 with low settings, I dunno, but still nothing that *should* really stress a CPU.

Then again, Deus Ex has some weird performance apparently; DX12 is absolutely abysmal, and even DX11 seems heavily CPU-bottlenecked, with a very small reduction in framerate moving from 1080p to 1440p.
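That resolution-scaling observation is the usual quick test for a CPU bottleneck: if pushing the GPU harder barely moves the framerate, the GPU wasn't the limiter. A minimal sketch, with made-up framerates for illustration:

```python
# If fps barely drops when resolution (GPU load) goes up, the CPU is the
# likely limiter; a big drop means the GPU was the limiter all along.
def likely_cpu_bound(fps_low_res, fps_high_res, tolerance=0.10):
    drop = (fps_low_res - fps_high_res) / fps_low_res
    return drop < tolerance

print(likely_cpu_bound(88.0, 85.0))   # True: ~3% drop, e.g. 1080p -> 1440p
print(likely_cpu_bound(120.0, 80.0))  # False: fps fell ~33%, GPU-bound
```

The 10% tolerance is an arbitrary threshold; the point is only that framerate should fall roughly with pixel count when the GPU is the bottleneck.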
 
Here's my AIDA64 cache/memory screenshot from a while back. Not sure if the info you guys are looking for is in there or not.

[attachment 6886: AIDA64 cache & memory benchmark screenshot]

Nah, we want to see the X5650/90, which are pretty old Gulftown CPUs: Nehalem hexacores, from before Sandy Bridge even; they outperform your 5960X at stock somehow.
 
OK, best that I can tell, Blender 2.77 is open source and uses Python 3.5.1 if they are being current.
Blender 2.77a - blender.org
Python Release Python 3.5.1

So, being open source, AMD can in fact influence the test by using AMD-friendly code, key word being CAN, not DID. I have no doubt they used libraries kind to them, but being a new chip, it could be completely neutral and an accurate representation as well.
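One concrete way build choices show up is in which SIMD extensions a binary targets versus what the host CPU advertises. A small sketch that parses `/proc/cpuinfo`-style text for the relevant flags; the sample string below is hypothetical, not a real dump:

```python
# Report which benchmark-relevant SIMD extensions a cpuinfo dump advertises.
# A build targeting extensions the host lacks won't run, and a build that
# ignores extensions the host has leaves performance on the table.
def simd_flags(cpuinfo_text, interesting=("sse2", "avx", "avx2", "fma")):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            host = set(line.split(":", 1)[1].split())
            return sorted(f for f in interesting if f in host)
    return []

sample = "model name\t: SomeCPU\nflags\t\t: fpu sse sse2 avx\n"
print(simd_flags(sample))  # ['avx', 'sse2']
```

On Linux you could feed it `open("/proc/cpuinfo").read()` to check the actual host; whether a given Blender build exercises those paths is a separate question.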

ieldra, as far as the benchmark list goes, I found that it is poorly structured and completely mixed up, with different render versions and such jumbled together (found as in others talking about its poor structure). And given that it hasn't been updated, and the latest version I could see and was willing to look for was 2.50 Alpha (the latest being 2.77), I don't think we can conclude much from it either.

All we can say for now is that the result looks good, but in no way can we infer actual performance, good or bad, from this alone.
 
Nah, we want to see the X5650/90, which are pretty old Gulftown CPUs: Nehalem hexacores, from before Sandy Bridge even; they outperform your 5960X at stock somehow.

Sorry, you had mentioned Haswell-E in the previous post in regards to cache; I thought we were still on that tangent.
 
Nah, we want to see the X5650/90, which are pretty old Gulftown CPUs: Nehalem hexacores, from before Sandy Bridge even; they outperform your 5960X at stock somehow.
[attached: three Blender benchmark result screenshots (upload_2016-8-19)]


That took way too long; I looked through over 1000 pics to find these. Most were OCed to 4.4 GHz and such, so finding lower clocks was a pain, but there you go.
 
If AMD were to sell a 3.0 GHz Zen for $300, I don't think anyone would complain about the clock speed.
If it only matches Broadwell in IPC but doesn't clock worth a shit, people will complain about the clock speed. If final silicon shows clocks that significantly favor Intel, IPC parity won't matter: Zen will be a loser in the desktop market. Low stock clocks also likely indicate poor overclocking headroom. If the fastest Zen CPU comes in at 3.0 GHz, I wouldn't expect it to clock as high as Broadwell or Skylake. As I've said before, IPC is only part of the equation; actual clock speeds are what we need to really get an idea of how Zen will perform against Intel's offerings.
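The point reduces to a simple back-of-envelope: single-thread performance is roughly IPC times clock, so IPC parity at a clock deficit is still a performance deficit. The numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical comparison: Broadwell-class IPC at 3.0 GHz versus a part
# with ~5% higher IPC running at 4.2 GHz.
def relative_perf(ipc, ghz):
    return ipc * ghz

zen_like = relative_perf(1.00, 3.0)
sky_like = relative_perf(1.05, 4.2)
print(round(sky_like / zen_like, 2))  # 1.47: ~47% faster single-thread
```

Matching IPC only closes half the gap; the other half is whatever clock headroom final silicon turns out to have.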
 
The "Core architecture" arrived in the Pentium Pro processor, in 1995. Pentium Pro begat Pentium II begat Pentium III begat Pentium M begat Core.

The Pentium Pro itself was derived from the BiiN joint venture between Intel and Siemens, which after it failed begat the i960MM and i960MX 32-bit RISC processors. The team that developed the i960MM and i960MX went on to develop the Pentium Pro.

Within Intel, there was a lot of heat in the engineering community over the choice to go with the Pentium 4 microarchitecture instead of improving the Pentium III. Fortunately for Intel, the Pentium III survived as Core in the laptop space, where the P4 was just too power-hungry, and so was available for reinstatement as the primary desktop microarchitecture when the P4 finally flamed out.

You knew exactly what I meant. Most everyone assumes that when someone refers to "Intel Core architecture" they are referring to 2006+. Yes, it was born from the Pentium Pro, but your post comes off like the neckbeard who says "well, actually" while everyone rolls their eyes.
 