AMD Zen CPUs?

AMD said they are specifically targeting the high-end market, not the mainstream market, although I'm sure they will have designs to target that market as well.

You don't scrap your entire current CPU roadmap and spend years on a new architecture, to be released in 2016, just to shoot for the mainstream market.

What do they consider high-end, and how does that differ from what Intel considers high-end?

If they aren't selling a whole family of CPUs above $320 then they don't need a separate platform, and they can't afford the support burden a second platform entails.

FM3 with dual-channel DDR4 will easily take them up to $320; are they intending to go (a lot) higher?
 
AMD has never competed in the 'E' segment. They have two server-only sockets (C32 and G34), and neither has consumer parts. AMD's AM3 is a relic that competed with 775/1156. FMx is what future chips will be released on.

I guess you don't remember the original Athlon 64 FX chips and their corresponding price tags, which are what prompted Intel to start the whole "Extreme" thing, which a lot of people at the time mockingly described as "Emergency Edition", lol. Granted, that was a decade ago, but still.
 
What do they consider high-end, and how does that differ from what Intel considers high-end?

If they aren't selling a whole family of CPUs above $320 then they don't need a separate platform, and they can't afford the support burden a second platform entails.

FM3 with dual-channel DDR4 will easily take them up to $320; are they intending to go (a lot) higher?

I wish they would just start using stacked RAM for APUs.
 
Your response is a standard troll tactic. Enjoy your short stay here.

I consider your post a troll post, so I guess we're even.


If you lived through the Athlon FX era and were old enough to buy hardware and read reviews, my point still stands:

http://www.xbitlabs.com/articles/cpu/display/athlon64-fx51_2.html

Look at the architecture... it's more of a "tick-tock" than a "brand-new core under development for years that altered computing as we know it!!!"

But people seem to like to rewrite history, and damn you if you question overhyped notions!
 
Why are people so anti-APU? The current high-end i7s are all "APUs".

- Paying for a useless portion of the die is not something I like.
- The performance is a joke; my household now runs on discrete GPUs, with not a single iGPU in use.

And I would not call socket 1150/1151 "high-end"; you're confusing midrange with the E-series.
 
- Paying for a useless portion of the die is not something I like.

By this logic the only CPUs you'd ever buy are Intel E-series. Just about every processor today has logic on its floorplan that most people won't ever make use of; this line of thinking is just silly.

But people seem to like to rewrite history, and damn you if you question overhyped notions

As if what you're doing is any better? You're literally just derailing the thread (which is about Zen) with your inane shit-posting.
 
I wish they would just start using stacked RAM for APUs.

I bet we'll see stacked RAM in specialised, completely non-modular systems by 2016, but I doubt we'll see it in normal memory sticks.
 
By this logic the only CPUs you'd ever buy are Intel E-series. Just about every processor today has logic on its floorplan that most people won't ever make use of; this line of thinking is just silly.

It seems like my daughter's i5 will be the last non-'E' CPU I've bought, yes.



As if what you're doing is any better? You're literally just derailing the thread (which is about Zen) with your inane shit-posting.

Because the facts are pouring in right now?
I will make this prediction about Zen, though:

It will be overhyped and will underperform compared to Intel's offerings.
(The overhyped part is already taking place.)

Now you can make yours. Then we can return at launch and see who was right and who was wrong.
 
It seems like my daughter's i5 will be the last non-'E' CPU I've bought, yes.

Because the facts are pouring in right now?
I will make this prediction about Zen, though:

It will be overhyped and will underperform compared to Intel's offerings.
(The overhyped part is already taking place.)

Now you can make yours. Then we can return at launch and see who was right and who was wrong.

Intel fanboy ALERT! Intel fanboy ALERT! Wooop, Wooop, Wooop, Red Alert! :D Enjoy your trolling, I will enjoy my computer. Oh crap, it has an AMD processor on a 30-year-old ISA, whatever will I do? :eek:

As far as overhyping goes, bullshit, you are clueless. Of course, you must be a processor engineer, right?
 
- Paying for a useless portion of the die is not something I like.
- The performance is a joke; my household now runs on discrete GPUs, with not a single iGPU in use.

And I would not call socket 1150/1151 "high-end"; you're confusing midrange with the E-series.

Again, 1150/1151 is fine for high-end home desktops. If you really want to split hairs: in comparison with their bigger server siblings, the E-series is also a joke.

And I have four computers in my house; only one runs a dedicated GPU, the one I game on. Why do I need discrete GPUs in machines that I DON'T game on?
 
Again, 1150/1151 is fine for high-end home desktops. If you really want to split hairs: in comparison with their bigger server siblings, the E-series is also a joke.

And I have four computers in my house; only one runs a dedicated GPU, the one I game on. Why do I need discrete GPUs in machines that I DON'T game on?

My HTPC has one in order to get better driver support, but that's it; it certainly isn't there for gaming!
 
My HTPC has a decent (if you consider a 6850 decent) card for playing controller games on. I can't wait for the day a single high-end APU can replace that card with onboard graphics, for power/heat reasons.
 
So, back on topic: expectations for Zen. Currently AMD competes on cores per dollar and memory slots per dollar.
Both are pretty small markets, but do we expect them to drop those markets? Maybe they'll push their memcached (RAM/$) customers toward the ARM platform and keep core counts high?
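To put rough numbers on that cores-per-dollar point, here's a minimal sketch; the street prices are 2014-ish figures I'm assuming purely for illustration, not quotes:

# Cores per dollar for two roughly contemporary chips (assumed prices).
chips = {
    "AMD FX-8350 (8 cores)": (8, 180.0),      # assumed ~$180 street price
    "Intel i5-4690K (4 cores)": (4, 240.0),   # assumed ~$240 street price
}

for name, (cores, price) in chips.items():
    print(f"{name}: {cores / price:.3f} cores per dollar")

Even with fuzzy prices, AMD's ratio comes out well ahead, which is exactly the niche described above.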
 
Because the facts are pouring in right now?
I will make this prediction about Zen, though:

It will be overhyped and will underperform compared to Intel's offerings.
(The overhyped part is already taking place.)

Now you can make yours. Then we can return at launch and see who was right and who was wrong.
I wouldn't bet on it being overhyped by AMD. For the last year they have managed to stay utterly silent before any release. Remember the 290X bus width? No one saw that coming. The same goes for the 295X2, with its bundled watercooling (which caught Nvidia with their pants down). Sure, there may be a few dreamers in forums who give their opinion based on their heart more than the facts, though precious few in number, but that doesn't constitute overhyping by AMD. And then there are the few who post in poor judgment and love nothing more than to tout their possessions, as if their hardware affected any part of how other people's hardware ran or the enjoyment it gave.

At any rate, I would love to see AMD bring something great and mind-blowing to the table, but being rational, it is far more likely to be a sound step in the right direction at best.
 
My HTPC has a decent (if you consider a 6850 decent) card for playing controller games on. I can't wait for the day a single high-end APU can replace that card with onboard graphics, for power/heat reasons.
There are two hurdles AMD has to overcome with this: first the memory bandwidth issue, second self-competition. I wouldn't count on AMD releasing a product that could harm one of their lucrative lines.
 
There are two hurdles AMD has to overcome with this: first the memory bandwidth issue, second self-competition. I wouldn't count on AMD releasing a product that could harm one of their lucrative lines.

Stack 2 GB of memory on the die :p?
 
Stack 2 GB of memory on the die :p?


Ideal in theory. However, HBM memory has a large footprint. Just look at the memory chips in your computer; a single one is pretty much bigger than most entire CPU dies.

I'm sure with a few more node shrinks it will be possible.

My HTPC has a decent (if you consider a 6850 decent) card for playing controller games on. I can't wait for the day a single high-end APU can replace that card with onboard graphics, for power/heat reasons.

They are getting pretty close with the 7850K. I would expect their next-gen APUs to match 6850 performance.


Also, it's hard to overhype something when AMD is being super tight-lipped. All AMD have really said is that it's coming sometime in 2016 and targeting the high-end market, and they've announced its name. Other than that, we don't have much to hype about. :confused:
 
Ideal in theory. However, HBM memory has a large footprint. Just look at the memory chips in your computer; a single one is pretty much bigger than most entire CPU dies.

I'm sure with a few more node shrinks it will be possible.



They are getting pretty close with the 7850K. I would expect their next-gen APUs to match 6850 performance.


Also, it's hard to overhype something when AMD is being super tight-lipped. All AMD have really said is that it's coming sometime in 2016 and targeting the high-end market, and they've announced its name. Other than that, we don't have much to hype about. :confused:

If it's going on a GPU, there's no reason it can't go onto an APU.
 
I think something good will come out of Zen.
I mean, AMD's engineering teams have had enough time with CMT chips to know their weaknesses and strengths. I don't know if they'll change to SMT or not, but at the least, the chip modularity for adding high core counts is something to be harvested from CMT designs.

Additionally, by 2016 AMD will be able to use much better process nodes than now.

On the other hand, ARM and x86 on the same socket? Wow.
 
Well, making something worse than the current Bulldozer/Piledriver FX chips would be a huge achievement.
 
If it's going on a GPU, there's no reason it can't go onto an APU.
I think what he meant is that the size of the memory alone would make the chip/die too large, but on smaller nodes it becomes more feasible. Although, since HBM is stacked, its footprint would be much smaller than traditional GPU RAM's, so maybe node size is less of a hurdle.
 
Well, making something worse than the current Bulldozer/Piledriver FX chips would be a huge achievement.
Neither is as terrible as many would have you believe. Everyday performance is outstanding; benchmarks tell little of real-world use. Judged by extreme parameters, a lot of the market would look inept. The funny part is that those who preach extreme also tend to push that little 2-core Intel for desktop use, which is quite hypocritical.

Honestly, I wish posters would refrain from such degrading remarks and try a bit harder to stay positive and stick to the facts rather than hearsay. Point of view is helpful, and I always advise: ask the guy who owns one before you believe any review.
 
My HTPC has a decent (if you consider a 6850 decent) card for playing controller games on. I can't wait for the day a single high-end APU can replace that card with onboard graphics, for power/heat reasons.

My overclocked Richland performs darn close to my HD 7770; I could imagine AMD's next-gen APUs performing as well at stock clocks, if Kaveri is any indication of where their integrated graphics are going.
 
Stack 2 GB of memory on the die :p?

Sure, that would fix the external bus problem, but the problem still exists internally :p. The CPU and GPU are still sharing the same memory pipeline, and it would be insane to build two separate pipelines on one die. HBM is already trying to solve that problem, but it's a band-aid; honestly, they need a better hardware solution.
 
Sure, that would fix the external bus problem, but the problem still exists internally :p. The CPU and GPU are still sharing the same memory pipeline, and it would be insane to build two separate pipelines on one die. HBM is already trying to solve that problem, but it's a band-aid; honestly, they need a better hardware solution.
Some speculate on using it like an L2/L3 cache; even something like the eSRAM in the Xbox One could be beneficial.
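For a sense of scale on that bandwidth problem, here's a minimal peak-bandwidth sketch using headline figures (ignoring real-world efficiency): the dual-channel DDR3 feeding today's APUs tops out around 34 GB/s, while even a single first-generation HBM stack is specced at 128 GB/s.

# Peak bandwidth = bus width (bits) / 8 * transfer rate (GT/s), in GB/s.
def peak_gb_per_s(bus_bits: int, gt_per_s: float) -> float:
    return bus_bits / 8 * gt_per_s

print(peak_gb_per_s(128, 2.133))   # dual-channel DDR3-2133: ~34 GB/s
print(peak_gb_per_s(1024, 1.0))    # one first-gen HBM stack: 128 GB/s

That roughly 4x gap is why an on-package stack used as a large cache or eSRAM-style scratchpad could matter so much for an iGPU.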
 
By this logic the only CPU's you'd ever buy are Intel -E series. Just about every processor today has logic on it's floorplan that most people wouldn't ever make use of, this line of thinking is just silly.

Yes..... but!

I am all for a pure APU future; I believe in HSA, and I believe that general-purpose shader cores integrated into the 'CPU' will become a normal thing.

But if I am building a high-end PC, it is going to have a high-end GPU (read: >200W), so I don't want AMD wasting half the die space (and half the power budget) on a few additional shader cores that will:

1. make no difference to graphics performance
2. seriously hold back CPU performance

Yes, an APU future, but differentiate the products:
for the low end, give me small cores and small die sizes;
for the midrange, give me big cores in a 1:2 ratio with shaders (quad-core + 1024 shaders);
at the high end, give me big cores in a 2:1 ratio with shaders (eight-core + 256 shaders).
 
If the GPU isn't being used, how is half the power budget being wasted? It doesn't work that way, lol. It's not as if the CPU will throttle itself down against your wishes because the iGPU is lying dormant. People got the Richland CPUs up to 5 GHz ON AIR, even WITH the iGPU enabled. This alone should show you why the iGPU's existence is a non-issue. The iGPU only interferes with your CPU overclocking headroom if you somehow run into a thermal issue... and given that people with the Athlon variants of those chips (which have the iGPUs physically fused off, period) didn't get any additional headroom whatsoever, there's more evidence.

Also, the existence of an iGPU doesn't impact CPU performance at all. Just because you have an iGPU on the die doesn't mean the CPU will be fighting the GPU for resources or anything outlandish like that. A Haswell i5 will have the same single-thread performance as an E-edition Haswell at the same clocks.

That being said, even for "high-end" machines the APUs would still have a purpose for gaming. I'll quote something that is possible, and something that AMD have mentioned more than once since around 2011 or so: http://www.rebellion.co.uk/blog/2014/10/2/mantle-comes-to-sniper-elite-3

The way DirectX11 handles multiple GPUs is “AFR” or Alternate Frame Rendering, which as the name suggests means if you have two comparably powered GPUs they simply take turns rendering frames. This is in many respects the easiest approach to take – and is a great way of making your game CPU bound! So possibly our Mantle version could show some big improvements when using this method.

However, with the independent control over the GPUs Mantle gives us, we could approach the problem very differently - for example one GPU could be rendering the basic geometry in the scene, while another handles lighting and shadows for the same frame, with the final image composited at the end. This may also provide a route for when GPUs aren’t of a comparable power level – for example an integrated APU motherboard coupled with a desktop GPU. It’s the potential for completely new approaches like this which excites me the most about Mantle and the APIs which will follow it.

So these "high-end" machines would get even more freed-up performance to push higher framerates, all thanks to the existence of the iGPU on an APU. I think this will fully come to fruition in 2016, when AMD has their new APUs out with even more advanced HSA features and much stronger iGPUs to go with them. And I'd like to think that the x86 performance will be "good enough" to satisfy enthusiasts, so they'd finally get over the "shame" of gaming on a chip labeled as an APU. We'll see, though.
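To make the quoted multi-GPU idea concrete, here's a toy frame-rate model of my own (not how Mantle actually schedules work, and the frame times are invented) contrasting AFR with a proportional work split between a fast dGPU and a slow iGPU:

# Toy model: two GPUs with different per-frame render times, in ms.
def afr_fps(fast_ms, slow_ms):
    # Alternate Frame Rendering: each GPU renders every other frame,
    # so delivery is capped at 2 frames per slow-GPU frame time.
    return 2000.0 / max(fast_ms, slow_ms)

def split_fps(fast_ms, slow_ms):
    # Mantle-style split: each frame's work is divided in proportion
    # to GPU speed, so the two throughputs simply add.
    return 1000.0 * (1.0 / fast_ms + 1.0 / slow_ms)

fast, slow = 10.0, 40.0  # dGPU alone: 100 fps; iGPU alone: 25 fps
print(f"AFR:   {afr_fps(fast, slow):.0f} fps")    # 50 fps, worse than the dGPU alone
print(f"Split: {split_fps(fast, slow):.0f} fps")  # 125 fps, both GPUs contribute

With mismatched GPUs, AFR actually loses to the dGPU running alone, while the split approach turns the otherwise idle iGPU into free performance, which is exactly the APU-plus-dGPU scenario the quote describes.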
 
I accept the argument that power optimisation is good these days, allowing them to push the thermal budget to whichever part of the chip needs it, but there is always a cost.

More to the point, you cannot ignore the cost and performance implications of the transistor budget being blown on shaders.

The argument for a reverse 2:1 CPU/GPU ratio on high-end products that WILL be paired with a 200W GPU is absolutely unanswerable!

Just as it makes sense to max out the CPU/GPU ratio (1:2) in a low-end product, including minimising the transistors (and power) wasted on PCIe lanes that likely won't be used (a.k.a. Carrizo).
 
The transistor budget isn't really being blown, though. The APUs would be tiny in terms of die size if they had the GPU IP removed. Die shrinks will further mitigate the inclusion of bigger iGPU sections, so it'll be a non-issue.

The argument for all these different configurations based on different ratios of CPU/GPU resources can be answered easily: practicality. Why have two or three production lines for what is essentially the same product, just with different CPU/GPU core counts, when you can have one and fuse off different chunks of the chip to suit the target market... like they've already been doing for ages now? There's simply no reason to waste money, which could be allocated elsewhere, on doing something like that.

The transistors won't use power if they're not even gated on. They're better off making one die that can easily scale upwards or downwards as needed. Intel already does this with their mainstream desktop platform and their big-core mobile platform: the i5s you see in laptops are literally the same physical silicon as the i5s and i-whatevers on the 1150/1151 platform. The only differences are yields and leakage characteristics (the parts that are too leaky are mostly the ones relegated to the desktop DIY channel).
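As a sketch of that "one die, many SKUs" idea (the unit counts are invented for illustration, not any real floorplan):

# One physical floorplan; each SKU just fuses off some units.
FULL_DIE = {"cpu_cores": 4, "gpu_cus": 8, "l2_mb": 4}

SKUS = {
    "flagship APU": {},                              # fully enabled die
    "midrange APU": {"gpu_cus": 6},                  # partly cut-down iGPU
    "budget APU":   {"cpu_cores": 2, "gpu_cus": 4},  # salvage part
    "Athlon-style": {"gpu_cus": 0},                  # iGPU fused off entirely
}

for name, fused in SKUS.items():
    config = {**FULL_DIE, **fused}   # overrides replace the full-die counts
    print(f"{name:13s} -> {config}")

One mask set and one production line cover the whole range, and imperfect dies become cheaper SKUs instead of scrap.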

AMD can further mitigate the "power usage" concern by simply designing a better architecture as well. Intel's current chips are pretty damned good on that front, and I'm sure AMD can vastly improve their TDPs and power use, as they are already showing with Carrizo.
 
The transistor budget isn't really being blown, though. The APUs would be tiny in terms of die size if they had the GPU IP removed.

The argument for all these different configurations based on different ratios of CPU/GPU resources can be answered easily: practicality.

Moar cores.

No; higher ASP.

"I'm buying a high-end PC with a £400/200W GPU; should I buy a £225 Intel i7, or should I spend £125 on a 7850K?"

Easy answer.
 
Moar cores.

No; higher ASP.

"I'm buying a high-end PC with a £400/200W GPU; should I buy a £225 Intel i7, or should I spend £125 on a 7850K?"

Easy answer.
But didn't you mention HSA and your affinity for it? The hard part going forward is changing your mindset. I agree that if you are going balls-to-the-wall dGPU, then the 7850K's iGPU does somewhat seem overdone. Maybe what would suit you and me is an APU with a smaller iGPU that is really just there for the HSA component. Of course, there is the mentioned possibility of HSA use of a dGPU, although I would wager it would be far less efficient than onboard use.
 
Moar cores.

No; higher ASP.

"I'm buying a high-end PC with a £400/200W GPU; should I buy a £225 Intel i7, or should I spend £125 on a 7850K?"

Easy answer.

Not sure what the point is here; the GPU on the 7850K has nothing to do with the performance of the CPU. On top of that, the i7 has an iGPU as well.

If the 7850K provided you with 80% of the performance of the i7, which would you buy then?
 
Also, the existence of an iGPU doesn't impact CPU performance at all.

Sure it does. A big fat GPU core is a waste of transistors that could be dedicated to bigger caches and/or more x86 cores. Look at the Kaveri chips: they waste over half the die on GPU units that are (A) too bandwidth-starved to ever work well, and (B) will go unused the moment you install a dedicated video card. Never mind the increased cost...

I'd MUCH rather AMD offered a 3- or 4-module Steamroller, perhaps fluffed out with an L3, combined with about 25% of the current shader inventory. That would make for decent x86 performance on well-threaded apps while still allowing enough GPU for HSA/OpenCL/etc. applications to run. Such a chip would give me some slim reason to ditch my Thuban and go for a newer FM2+ system.
 
There really should be two sets of die designs to bin:

One die that is CPU-geared, with 8 cores, a small L3 cache and about 20% of the die devoted to the GPU. This one wouldn't be for playing games off the iGPU; it's designed to maximise CPU power while having a 'good enough' GPU portion for HD/4K movies and a smooth desktop. It could be binned down to cheaper 6- and 4-core varieties, some with the GPU disabled completely. This chip could serve HPC products, workstations and high-end gaming machines.

The second die design should be more APU-focused, with a huge portion of the silicon devoted to the iGPU, and four cores with no L3. It could be binned down to 2 cores, with the GPU chopped down incrementally to match a price point. Obviously for mobile PCs, media centres and HSA workstations.
 
There really should be two sets of die designs to bin:

One die that is CPU-geared, with 8 cores, a small L3 cache and about 20% of the die devoted to the GPU. This one wouldn't be for playing games off the iGPU; it's designed to maximise CPU power while having a 'good enough' GPU portion for HD/4K movies and a smooth desktop. It could be binned down to cheaper 6- and 4-core varieties, some with the GPU disabled completely. This chip could serve HPC products, workstations and high-end gaming machines.

The second die design should be more APU-focused, with a huge portion of the silicon devoted to the iGPU, and four cores with no L3. It could be binned down to 2 cores, with the GPU chopped down incrementally to match a price point. Obviously for mobile PCs, media centres and HSA workstations.

So pretty much what they do now :D, minus the GPU portion on AM3+.
 