AMD Further Unveils Zen Processor Details

If AMD were to sell a 3.0GHz Zen for $300, I don't think anyone would complain about the clock speed.

If it performs like a 5960X and uses only 95W, it's not going to be $300. It's not going to be $600 either; it's going to be more. And if they beat Intel in perf/watt in servers, then you may not even be able to buy Zen for a desktop before 2018 :p

AMD isn't a charity.
 
Tying a Broadwell-E chip at 3GHz is great. It shows that they might be over-delivering on their IPC goals.

Of course, the test may be highly selectively picked, even down to choosing a software version that gives them as much advantage as possible, as Intel recently did against Nvidia, but it still looks promising.

What is alarming to me is that it is a 3GHz part. The rumor out there is that GloFo's 14nm FinFET process can't produce parts that go much beyond 3GHz.

It's great to go head to head with Intel at 3GHz, but we are going to need parts that can hit 4GHz and beyond as well.


We also saw rumors that the Polaris chips were having problems getting past 850MHz...
 
You don't think AMD has in-house fabs for when they're prototyping new silicon?
Probably not, if only to avoid the environmental regulations. It's why you see them flying a handful of boards all over the place on the Zauba manifests. A CNC machine, maybe, but all the acids and hazardous materials used in the fabrication process are a bit more problematic.
 
Getting higher clock speeds is just a matter of time with tweaks and better manufacturing.
LOL. No. Clock speed is a function of the amount of delay between pipeline latches, among other things, and the delay between pipeline latches is a function of how the processor's architecture partitions the pipeline.

Sure, it can get a little better as your manufacturing process matures (if it is an immature process) and as you tweak driver sizes to correct for errors between your simulated delays and the ones you actually got in silicon (which won't be much if you had half-decent parametric extraction and delay simulation). But the pipeline partitioning limits what you can do - a lot.
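The limit described above can be made concrete with a toy calculation (the stage delays and latch overhead below are hypothetical illustration numbers, not real Zen figures): cycle time is bounded by the slowest latch-to-latch path, so a process-maturity speedup on every path buys you only a proportional bump until the pipeline itself is re-partitioned.

```python
def max_clock_ghz(stage_delays_ps, latch_overhead_ps=25.0):
    """Max clock is set by the slowest latch-to-latch path in the
    pipeline, plus a fixed latch/setup overhead per stage."""
    cycle_ps = max(stage_delays_ps) + latch_overhead_ps
    return 1000.0 / cycle_ps  # 1000 ps per ns -> GHz

# Hypothetical pipeline: one long stage dominates the cycle time
stages = [260.0, 275.0, 308.0, 290.0]
print(round(max_clock_ghz(stages), 2))   # prints 3.0

# A 5% across-the-board process speedup helps, but only modestly;
# the partitioning still pins the ceiling
faster = [d * 0.95 for d in stages]
print(round(max_clock_ghz(faster), 2))   # prints 3.15
```

Balancing the stages (or splitting the long one) is an architecture change, not a manufacturing tweak, which is the point being made above.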
 
If it performs like a 5960X and uses only 95W, it's not going to be $300. It's not going to be $600 either; it's going to be more. And if they beat Intel in perf/watt in servers, then you may not even be able to buy Zen for a desktop before 2018 :p

AMD isn't a charity.

AMD also is not stupid. They are not going to price themselves out of the market before they even get started. Take some time and think that through, you will see where I am coming from.
 
AMD also is not stupid. They are not going to price themselves out of the market before they even get started. Take some time and think that through, you will see where I am coming from.

AMD already did it before; remember the $800 220W FX? Don't be naive, this is business. So yes, AMD won't be stupid, and they won't sell it for less than they can get.

Also, if they can actually compete in servers, all the volume will go there first.
 
As usual, AMD's plan is to give people a glimmer of hope with no third-party verification, then tell them to wait six months for something important to happen.
 
That's not a valid comparison, because that was the initial launch price. Look at what the price of the FX-93/95 series processors is right now, and even what it was not long after launch. They are only slightly more expensive than the 8370/8350.

The price matches its performance now. AMD isn't giving you a discount. AMD is in business to make money, not to run some kind of welfare program.
 
The price matches its performance now. AMD isn't giving you a discount. AMD is in business to make money, not to run some kind of welfare program.

But they are not stupid, and they will not price themselves out of the market; you know that to be the case. Now you are arguing just for the sake of arguing. And your example is invalid, since those CPUs were not supposed to be sold that way initially but ONLY in new computer builds. It does not take a genius to figure this out, and I am no genius.
 
But they are not stupid, and they will not price themselves out of the market; you know that to be the case. Now you are arguing just for the sake of arguing. And your example is invalid, since those CPUs were not supposed to be sold that way initially but ONLY in new computer builds. It does not take a genius to figure this out, and I am no genius.
Also, AMD hadn't released an MSRP, and most of those sales were tray sales meant for OEMs.
 
Otherwise, 3.0GHz is the rated base speed for a 5960X.

Yeah, that's where things get a little trickier: the 50% overclock I mentioned. My 5960X has been running at 4.6GHz for over a year now. If Zen can do a 50% OC, then we could really have a fight on our hands.
 
Have any of you taken a look at Blender CPU render benchmarks?

[image: Blender CPU render benchmark chart]


Some odd results... Gulftown Xeons doing pretty well lol
 
I'm not pulling for one company over the other in regard to Zen, but I will add this tidbit of info to the mix. The same GlobalFoundries fabs that are going to be pumping out Zen will also be pumping out IBM's version of 14nm FinFET alongside Samsung's. They have an agreement in place to work together as strategic partners to improve each other's fab processes. IBM is still confidently aiming for POWER9 cores to be able to clock to 5GHz individually. I think AMD will benefit from this improvement in the process node if they can survive long enough. The clock speed will come eventually. Can they live long enough to be a part of it or not?
 
I see AMD is expecting to release Zen, which competes with Broadwell, in early 2017 - at about the same time Intel is expected to release Cannonlake.

No matter how cheap Zen is, Intel's response should be quick, easy and painless: reduce prices on Skylake. They won't even need to reduce prices on Kaby Lake.
 
That's not a valid comparison, because that was the initial launch price. Look at what the price of the FX-93/95 series processors is right now, and even what it was not long after launch. They are only slightly more expensive than the 8370/8350.

Nope, the FX-9590 wasn't $800, it was $900, and it was instantly a failure of a chip. It had no market, as it was too close to the $999 Intel Sandy Bridge-E and Ivy Bridge-E hexa-core CPUs, which just destroyed the FX-9590. It had no market even in the OEM segment, where it initially launched exclusively, because it was too power hungry. So system builders and some resellers were selling those chips at $350-$450, from the 9370 up to the 9590, with an AMD water-cooling kit included; only that way were they able to move some units.

Then, a couple of months after that, what did AMD do? Hit the obvious insta-panic AMD button: a price cut down to what system builders were already charging, right where they belong: $300 with a CLC for the 9370 and $370 with a CLC for the 9590. And of course they cut prices across the entire FX lineup; the FX-8350, for example, went to $190. What's worse, the rehashed FX-8370 pushed the FX-8350's price down even further, to $170.

It isn't like AMD is Santa or a non-profit organization; it's just that when they aren't selling well, they have no choice or alternative but to cut prices. Same story as their GPU division.
 
Have any of you taken a look at Blender CPU render benchmarks?

[image: Blender CPU render benchmark chart]


Some odd results... Gulftown Xeons doing pretty well lol

Yeah, those are pretty odd. Sounds more and more like Zen might have performed surprisingly well in this benchmark, and that's why they chose it.
 
I don't think they would show the clock speeds "if they could".
1. They can't give us everything all at once. Bit by bit (lol) to keep the Zen talk alive.
2. Early silicon, early mobo, especially if they are still talking late Q4/16.
3. Why would they show the shipping clock speeds so far from launch? Do you want to give your competitor time to blow you away before you even launch? No, you don't. You announce shipping clock speeds close to launch/availability. You want to try to catch your competitor with their pants down, even if just for a brief moment.

I don't think you understand how prototyping and marketing work.
That would be a change of strategy for AMD... They have been using NetBurst-type hype for years: oh my, look at the clocks and "cores", but efficiency was shit compared to Intel's, and modules aren't cores.

If this is legit, real-world performance, they should be showing off the lower clock speed. It would be the same as Intel when the Core architecture arrived (and ever since). Plus they can price the well-binned silicon MUCH higher as an extreme version with higher clocks.
 
It would be the same as Intel when the Core architecture arrived (and ever since). Plus they can price the well-binned silicon MUCH higher as an extreme version with higher clocks.
The "Core architecture" arrived in the Pentium Pro processor, in 1995. Pentium Pro begat Pentium II begat Pentium III begat Pentium M begat Core.

The Pentium Pro itself was derived from the BiiN joint venture between Intel and Siemens, which after it failed begat the i960MM and i960MX 32-bit RISC processors. The team that developed the i960MM and i960MX went on to develop the Pentium Pro.

Within Intel, there was a lot of heat in the engineering community over the choice to go with the Pentium 4 microarchitecture instead of improving the Pentium III. Fortunately for Intel, the Pentium III lineage survived as Core in the laptop space, where the P4 was just too power-hungry, and so it was available for reinstatement as the primary desktop microarchitecture when the P4 finally flamed out.
 
I'm fairly certain a 6700K at 3GHz is 45W or under.
Since power dissipation tends to scale with almost the cube of clock speed, you are probably correct.

More specifically, for CMOS, clock speed scales with Vcc-Vt, and power scales with clock speed and Vcc-squared. So, just making up some plausible numbers: if Vt is 330mV, and you can get 4GHz at 1.15V, then you should be able to get 3GHz at 0.95V, for power dissipation that is 50.6% of your 4GHz amount, which for the 65W@4GHz i7-6700K is about 33W.
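That back-of-the-envelope scaling is easy to sketch in Python. The linear f ∝ (Vcc - Vt) frequency model and the 330mV/1.15V/65W figures are the assumptions from the post above, not measured 6700K data:

```python
def scaled_power(p_ref, f_ref, f_new, vt, vcc_ref):
    """Estimate CMOS dynamic power at a new clock, assuming frequency
    scales with overdrive (Vcc - Vt) and power scales with f * Vcc^2."""
    # Voltage needed for the new frequency under the linear overdrive model
    vcc_new = vt + (vcc_ref - vt) * (f_new / f_ref)
    # Dynamic power ratio: (f_new / f_ref) * (Vcc_new / Vcc_ref)^2
    ratio = (f_new / f_ref) * (vcc_new / vcc_ref) ** 2
    return p_ref * ratio, vcc_new

# Post's hypothetical numbers: Vt = 0.33V, 4GHz at 1.15V, 65W reference
power, vcc = scaled_power(p_ref=65.0, f_ref=4.0, f_new=3.0,
                          vt=0.33, vcc_ref=1.15)
print(round(vcc, 3), round(power, 1))  # prints: 0.945 32.9
```

The 3GHz voltage comes out to 0.945V and the power ratio to about 50.6%, matching the figures quoted above.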
 
As a Team Red fan boy all I can say at this point is...¯\_(ツ)_/¯

I will be very happy if Zen pans out, but I am prepared to be underwhelmed.
This is the correct attitude.
As we used to say back in my R&D days, paraphrasing Arthur C. Clarke, "Any sufficiently advanced technology is indistinguishable from a rigged demo."
So like the Phantom.
Don't be naive. Intel knows better than we do what's coming from AMD. Engineers talk, and they move from company to company. Plus Intel has competitive analysis teams whose entire job is predicting what the competition will roll out, and when.
You'd think that team would have been retired a few years back.
 
Zen looks promising so far; it'll be interesting to see what the final clock speeds will be. I look forward to seeing what it can do at the end of this year.
 
Zen looks promising so far; it'll be interesting to see what the final clock speeds will be. I look forward to seeing what it can do at the end of this year.

The final clock speeds will be disappointing; otherwise they would have put on a better showing with their initial released benchmarks. Zen will follow the trend of AMD products that would be fantastic if they were about 15% faster on clock speed and used less power.
 
Boy I hope this thing brings back the glory days of CPU competition. I sort of miss my old Athlon XP and X2 rigs.
 
Seriously, maybe we should take a poll... Do you believe AMD will beat Intel with the release of Zen? I don't, and I can't for the life of me see why so many of the AMD-bashing brigade are posting as if someone here believes they will. I expect it to perform well, but under Intel. Maybe Haswell IPC, but I don't expect clocks above 4.0GHz, and likely less than 3.5GHz for an eight-core part. New architecture on a new process node; I will be happy if it performs well and operates within a moderate efficiency envelope for heat and power. In other words, showing some forward progress with the ability to grow with future iterations.
 
So I'm mixed about this announcement. I think everyone is thinking the same thing: 'dat 3.0GHz doe'.

An AMD CPU with 8 cores and Broadwell(ish) IPC is a damn good thing, but it's not really a great option if the maximum clock speeds are in the low 3GHz range. If the 3GHz clock is just a hard throttle to meet the 95W TDP, we could be looking at a pretty cool option, provided the CPU can be OC'd into the low 4GHz range. If 3.0GHz is a close-to-the-limit clock speed, we're looking at a CPU that can stomp a quad i7 in threaded tasks but struggle against an i5 in games. And let's face it, we ALL want more competition in games.

This raises the question of the alleged quad-core Zen part. With this level of IPC, a highly clocked quad-core could go toe-to-toe with a quad-core i7. If that '3.0GHz' is an omen of speeds to come, then we are looking at an i5 competitor at best...

I don't know, my powers of prophecy aren't giving me any great news for competition in the HEDT range, or even the high-end gaming range...
 
I think we can all pretty much agree that even if this is a selected benchmark, AMD has progressed a lot more than their previously reported 40% IPC gain would suggest. On the clock speed front, if you look at Intel's eight-core CPUs, they aren't really clocked that much faster. This is an engineering sample, not even a retail part, so it's hard to say what the final clock speeds will be. Either way, beating an Intel Broadwell-E at the same clock speed in the same benchmark is as impressive as it gets. I realize it's not the whole story, but you have to remember people expected Haswell-type performance, and here they are matching Broadwell.

If they can reach 4GHz with an overclock, I will likely buy one on day one. However, my lesson was learned the last time I was looking to upgrade, when I bought a new motherboard for the upcoming "FX" without seeing benchmarks first. This time I will definitely be waiting for benchmarks, boards, and plenty of reviews before I make my purchase. I've been thinking about going to a hex-core Broadwell-E over the 6700K. Hopefully when Zen comes out it will force Intel to be less absurd with their prices, allowing me to pick a better option in my price range from either company.
 
Broadwell-level single-thread performance would be a huge victory; I just won't believe it until I see hard numbers across a range of benchmarks.
 
This sounds great. Surely the Blender test was specifically chosen since Zen is winning, but let's take what we can get. That's certainly more than I was expecting yesterday.

Deus Ex at 4K is nice, but there are so many variables there, and for a game that no one in the public has benchmarked yet, we just don't know what that means.

Honestly, I'll be a little sad if we have to wait until 2017, but I can understand if they're just not ready. Was really itching to build something new soon.
 
This sounds great. Surely the Blender test was specifically chosen since Zen is winning, but let's take what we can get. That's certainly more than I was expecting yesterday.

Deus Ex at 4K is nice, but there are so many variables there, and for a game that no one in the public has benchmarked yet, we just don't know what that means.

Honestly, I'll be a little sad if we have to wait until 2017, but I can understand if they're just not ready. Was really itching to build something new soon.

ANY game at 4K is GPU-bottlenecked. Remember, resolution scaling does not change CPU load at all. In fact, running at a lower resolution (all else being equal) stresses the CPU MORE, because the GPU is able to complete frames faster and ask for instructions a LOT more frequently.
 
This sounds great. Surely the Blender test was specifically chosen since Zen is winning, but let's take what we can get. That's certainly more than I was expecting yesterday.

Deus Ex at 4K is nice, but there are so many variables there, and for a game that no one in the public has benchmarked yet, we just don't know what that means.

Honestly, I'll be a little sad if we have to wait until 2017, but I can understand if they're just not ready. Was really itching to build something new soon.

Deus Ex at 4K? What did I miss?
 
ANY game at 4K is GPU-bottlenecked. Remember, resolution scaling does not change CPU load at all. In fact, running at a lower resolution (all else being equal) stresses the CPU MORE, because the GPU is able to complete frames faster and ask for instructions a LOT more frequently.
Oh man, don't tell me they demoed Zen using Deus Ex at 4K.
 