AMD Zen CPUs?

Wut.

That's because things are built with certain tolerances. Engine components, for example, are engineered to withstand a certain amount of stress for a certain length of time; that doesn't mean they can't be stressed further, just that nothing beyond that point is guaranteed. That has nothing to do with being "more than the sum of its parts" at all, nor does it have anything to do with this topic. That phrase insinuates an emotional element that we humans project onto machines. The machine will perform until it fails, no matter what we feel or think about it.

Many people, for a very, very long time, have experienced otherwise; whether that's because a human projects onto a device or perceives through a veil of emotion is functionally irrelevant.
Don't take my word for it though, look into it. Aircraft are a good, dramatic example, since when they fail you generally die. Talk to some old pilots; check out the author I mentioned. The saying "there are no atheists in foxholes" applies: when you're high in the air and the possibility of meeting the ground in dramatic fashion presents itself, you will speak to your aircraft, and it will be more than just a machine, since it's the only thing preventing you from ceasing to be.
 
The placebo effect, according to Google, is:

"A beneficial effect produced by a placebo drug or treatment, which cannot be attributed to the properties of the placebo itself, and must therefore be due to the patient's belief in that treatment."
So let's break it down. Earlier you made the claim that an AMD system is smoother/more fluid. There's no empirical data supporting that, AFAIK. What if someone actually bought an AMD system based on what you claim? They buy that AMD setup, use it, and, because they were told it was smoother, come to believe that AMD is smoother?


No, it's more that I want more solid evidence. I've been a computer enthusiast long enough to know not to fully trust the word of other hardware enthusiasts; I learned a long time ago to get more data and evidence. It's not bashing if we're asking for proof. If there's solid evidence showing AMD is the better solution, then I want to see it. I also don't like seeing unsubstantiated claims made on the forums. Stuff like that spreads before real information is found, and that continues a cycle of misinformation.

Just to be clear: I have no problem with other people recommending AMD. I do have a problem with people recommending AMD based on poor or unsubstantiated claims, stances, or information.

You are trying too hard to read into what I say rather than take it at face value. Placebo, in any case, means that a person expects a result and therefore appears to see it. Placebo doesn't always mean an untruth, and the fact that something can't be proven with charts doesn't change the outcome. Say it's 35°F (about 2°C) outside: if one person believes it to be cold and another doesn't, is the person who thinks it's cold suffering from the placebo effect simply because the majority of people believe it to be cold, and so that person now believes it too?

Look, I am not trying to grasp at straws and find irrelevant experiences to validate AMD; I was just pointing out that there are some things a review can't show. I have said here on numerous occasions that if one needs a CPU today that will still hold up some years from now, GO INTEL. But if the question is whether an AMD system can game and function as a valid machine, then I am definitely going to say yes. That doesn't mean I am saying it is better, though it seems many of you do your darnedest to infer it.
 
Yes, the Phenom I was at a wall. AMD admitted that they "screwed the pooch" on the original Phenom I. They were going from a 90nm to a 65nm process and, at the same time, from the K8 to the K10. AMD admitted that they had thought too big in making the K10 a monolithic die @ 65nm; this can be seen in the Phenom II. With a monolithic die @ 65nm, AMD had to make too many trade-offs, and it was also late to market. By the time AMD got their "native quad core" to market, Intel had already caught up and surpassed AMD.
The Phenom I did not like overclocking (it had a cold bug), it was found to have a serious erratum (the TLB bug, which put AMD back another 6 months), and it was, AFAIK, equal or inferior to the Intel quad-core CPUs (which had used "duct tape" to make quads out of two duals).
ALL of this was shown to have been corrected by the Phenom II @ 45nm.

This is what Intel does with its TICK/TOCK production schedule: take a MATURE CPU design and drop it a node (TICK), refining the new manufacturing process on the last-gen design; then develop and SHIP a NEW CPU design on the now-mature manufacturing process (TOCK) before JUMPING to the next node.

1) AMD releases a NEW CPU design and a NEW NODE @ 65nm: FAIL
2) AMD releases a NEW CPU design and a NEW NODE @ 32nm: FAIL
3) AND it looks like they are "GOING FOR IT AGAIN" with ZEN:
AMD releases a NEW CPU design and a NEW NODE @ 14nm: FAIL?

I will always believe that if they had taken a little $$$ and shrunk the Phenom II to 32nm...

Conspiracy theorist awake: what if AMD had taken a MATURE uarch (the K10 series) and produced it @ 32nm?

1) A Phenom III @ 32nm with 8 (FULL) cores and 10MB of L3 cache, along with the normal tweaks that come with a new node, could have been very competitive.
2) The X6 1100T was close to, or faster than, the FX-8150 in many benchmarks.
3) A Phenom IV @ 28nm could have had 12 cores and 20MB of L3 cache.


History: AMD was smoking the Intel Pentium 4 with the A64 (and was also FIRST with an IMC).
The A64 ran at 1GHz less than the P4 but still smoked it in 8 out of 10 tests. Intel stated that the Pentium 4 was too HOT at 3.8GHz, so they DROPPED the upcoming Tejas revision of the Prescott uarch. Both Intel and AMD switched to dual-core processors and then to quad-core.

AMD was producing the K8 dual-core processors at 90nm and NEEDED a CPU to "win back the crown" from Intel. Intel had taken a beating from AMD until they switched from the NetBurst uarch to the Core uarch (which was based off the P6/Pentium III). Intel was already making quad CPUs by joining (2) two dual-core dies together (MCM).

Very good summary! Not many remember!

It is also worth noting that AMD surpassed Intel back then, with a tiny R&D budget compared to Intel's. I'm not saying they WILL do it again, but it's not entirely IMPOSSIBLE either.

That 8GHz hypothesis about the Bulldozer architecture also makes complete sense. That's the problem with not owning your own fabs: you can't trust them as much. The GOOD thing about not owning your own fabs is that the fab can focus on making its processes work.

Maybe this time AMD will hit the nail on the head, a little like with the Phenom II architecture. And I agree with you: tweaked Phenom (III and IV) chips would have been great.

But I do believe AMD moved away from manually optimizing electrical paths on their dies? That would be part of the reason for moving toward a speed-racer type of architecture and dumping the really good Phenom?
 

I think AMD has everything they need to match Haswell, or at the very least Sandy Bridge, in terms of IPC and net throughput... they just need to put it all together into a coherent package.

They could take the very same elements in Excavator, reorganize them in the same fashion as the Phenom II, and see good boosts. With what we know, it being SMT rather than CMT, and AMD not being able to copy Hyper-Threading outright, I'd suspect a more straightforward and safe approach: lower-latency cache, three or four ALUs per core fed from two (yes, two) shorter, exclusive pipelines, each handling a thread, using the improved branch-prediction lessons learned from Excavator, and, of course, HSA features on top of it all.

This is the path of least resistance, I'd think, especially considering the rather short time frame they have to accomplish their required targets. If they want back into servers, they will have no choice but to aim for the top.
 
Q3 2016 for the performance CPU? At that point, why even bother? It'll likely be obsolete when it's launched...it'll be going up against Skylake...:(

It's already starting to look bad.
A so-called "14nm" AMD CPU (don't get me started on the fake PR around Intel's process lead)... going up against not Intel's 14nm Skylake, but Intel's 10nm Cannonlake.

My estimate is that it will change nothing about the current status: AMD behind on both SKU design and process technology.

With AMD it's been the same thing with CPU and GPU for years... "the NEXT edition will solve all the problems of the previous SKU!!!"


Many people, for a very, very long time, have experienced otherwise; whether that's because a human projects onto a device or perceives through a veil of emotion is functionally irrelevant.
Don't take my word for it though, look into it. Aircraft are a good, dramatic example, since when they fail you generally die. Talk to some old pilots; check out the author I mentioned. The saying "there are no atheists in foxholes" applies: when you're high in the air and the possibility of meeting the ground in dramatic fashion presents itself, you will speak to your aircraft, and it will be more than just a machine, since it's the only thing preventing you from ceasing to be.

Atheist in foxhole reporting in, please stop using fallacies as an argument, thanks.
 
A so called "14nm" (don't get me started on the fake-PR because of Intel's process lead) AMD CPU...going agains not Intels 14 nm Skylake, but instead Intel's 10nm Cannonlake

You're loony if you think Cannonlake will actually be out in 2016. Intel has been having issues on their 14nm node, and things will get worse with 10nm. Also, I'm not sure why you put "14nm" in quotes, when it's been known that Intel's so-called 22nm node was more like 26nm and the "22nm" was just the FinFET measurement.

And honestly, what do you expect? AMD is a gazillion times smaller than Intel, who has something like >95% of the x86 market share. You can't just expect AMD to crap out miracles every two seconds, especially with how badly mismanaged they've been over the past decade.

With AMD it's been the same thing with CPU and GPU for years... "the NEXT edition will solve all the problems of the previous SKU!!!"

They had a few CPU problems, yes, but their GPUs have been just fine for ages. Not sure why you even typed that.
 
You're loony if you think Cannonlake will actually be out in 2016. Intel has been having issues on their 14nm node, and things will get worse with 10nm. Also, I'm not sure why you put "14nm" in quotes, when it's been known that Intel's so-called 22nm node was more like 26nm and the "22nm" was just the FinFET measurement.

You do know that 14nm and 10nm are developed independently, right?
What slowed down 14nm didn't automatically push 10nm further out.
Stating that doesn't make me "loony"... it makes me informed.
Perhaps you missed something in the last 6 months?

And honestly, what do you expect? AMD is a gazillion times smaller than Intel, who has something like >95% of the x86 market share. You can't just expect AMD to crap out miracles every two seconds, especially with how badly mismanaged they've been over the past decade.

Look at the OP: people ARE expecting "miracles".
Again, you seem to be lacking information?



They had a few CPU problems, yes, but their GPUs have been just fine for ages. Not sure why you even typed that.

So that is why they are gaining market share and increased profits in that segment? :p

The facts don't align with your hopes, sorry.
 
Perhaps you missed something in the last 6 months?

http://gulfnews.com/business/technology/intel-to-launch-10nm-chips-in-early-2017-1.1443856

Taha Khalifa said:
We have been consistently pursuing Moore’s Law and this has been the core of our innovation for the last 40 years. The 10nm chips are expected to be launched early 2017

The smaller the process, the more they have to deal with parasitic capacitance, and that in and of itself becomes an even bigger problem with FinFETs. Most likely Cannonlake will end up getting pushed even further back, and then there will be a Skylake refresh for desktop DIY.
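For context, the first-order switching equations (textbook models, nothing specific to Intel's process) show why parasitic capacitance hurts on both fronts:

$$P_{\text{dyn}} = \alpha\, C\, V^{2} f \qquad\qquad t_{\text{delay}} \approx R\, C$$

Dynamic power and gate delay both scale with the capacitance C, so when the parasitic portion of C stops shrinking along with the transistors, each new node buys less speed and less power efficiency than the one before.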

Look at the OP: people ARE expecting "miracles".
Again, you seem to be lacking information?

I'm not lacking any information at all, you're just posting nonsense.

So that is why they are gaining market share and increased profits in that segment?

The facts don't align with your hopes, sorry.

Yet their GPUs are just fine. There's nothing wrong with their GPUs that requires new SKUs to "solve the problems" of previous parts. Intel has been draining tons of money into various channels with failed projects for years; should I say that they need to "solve their iGPU and CPU problems"? No, that's ridiculous logic. Stop with the FUD.
 

K10 was a very mature architecture and, frankly, had no more room for improvement. It was right for AMD to ditch it.

AMD's mistake was trading IPC for core count. Granted, Intel has a MASSIVE edge because it's way ahead of AMD when it comes to cache management and branch prediction, but a 4GHz quad with IPC similar to the Phenom II's would have been more competitive with SB/IB than BD/PD was.

SW engineers like myself have tried for decades to make software scale, and most of the time, embarrassingly parallel problems aside, it doesn't. Parallel processing was the wave of the future back in the '70s. By the end of the '80s, all those companies had gone bankrupt. Why? Because the software did not scale.
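That scaling wall has a name: Amdahl's law. Here's a minimal sketch (the 80% parallel fraction is made up purely for illustration):

```python
# Amdahl's law: ideal speedup on n cores for a workload in which
# only a fraction p of the work can run in parallel.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 80% of the work parallelized, 8 cores yield only ~3.3x
# and 16 cores only ~4x: the serial fraction dominates quickly.
for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.8, cores):.2f}x")
```

No amount of extra cores fixes the serial 20%, which is why the software never scaled the way those companies needed it to.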

AMD needs to decide where to focus its attention. It gave up servers, it was late to mobile, and it has fallen behind on desktops. It needs to pick a battleground and focus there; otherwise, they won't be around much longer.
 

In the end, all we have is parallel processing, since you will not be able to gain gigahertz beyond the point some people already reach on certain overclocks. In that sense, "your" problems with coding for x86 don't seem that important; in general the CPU hardly makes a dent, and pure processing power is done on the GPU for a good reason.

The lack of software development is mostly because of Windows. You can see that the software company does not concern itself with anything until it becomes a problem for its financial status. This piece-of-crap operating system has been screwing consumers for decades where progress on parallel computing is concerned, with no serious improvements on a scale that reflects the income it generates.

When developers are not required to do things in parallel and can just rely on IPC, you know why it is not happening. The market stays focused on IPC, and that is where some changes are appearing now, if you look at the PS4 or Xbox One. Suddenly the parallelism problem goes poof.
 
Really interested to see what Zen brings to the table. They have some great minds leading the project, and with the overdue process shrink to 14nm and DX12 coming into its own, I hope to see some great performance/watt out of AMD. They were late to the efficiency game, but being stuck @ 28nm for so long has really made them shift focus; moving forward I hope to see some great things. It doesn't have to beat Intel, but it should at least be competitive. They have the unique position to bring a game changer to the market with their CPU and GPU know-how and not being married to x86 exclusively...
 
I'll take a 1:1 core-to-FPU ratio this time, please.

Why? As I understand it, the shared FPU in Bulldozer does not affect performance all that much. Bulldozer's weak IPC comes down to its relatively small integer cores; AMD went for more small cores rather than a smaller number of big/deep cores like Intel did. Adding more FPUs would just take up valuable die space.
 

Yup, it's not the decode units, which were widened to two dedicated 4-wide decoders per module (eight slots total) on Steamroller. And the FPUs are not involved in typical operations, really only in gaming and modeling/simulations.

The integer unit is only 2 ALUs wide, which is a far cry from the older Phenom or Intel's Core architecture. It's more similar to the P6, which could only execute 2 integer ops at once.
 

Yup. To add, IIRC, K8 and K10 were 3-issue wide. Core 2 Duo/Quad onwards have been 4-issue, which is part of the reason Intel has been mopping the floor with AMD for the past 9 years or so: Intel chips get more work done per clock.
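To put rough numbers on "more work done per clock" (the IPC figures here are purely illustrative, not measurements):

$$\text{throughput} \propto \text{IPC} \times f$$

A 4-issue core sustaining, say, IPC = 1.6 at 3.4GHz retires about 5.4 billion instructions per second; a 3-issue core sustaining IPC = 1.2 would need roughly 4.5GHz just to match it. A width deficit has to be paid back in clock speed, and clock speed costs power.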
 
Well, I still want a 1:1 ratio. Every little bit helps in this business, especially when so much stock is put in gaming benchmarks, especially here.

It does matter, and if a 1:1 ratio could give an extra boost in any way, then why not? What do people want: an extra-tiny die, or more performance to be competitive?

I still think it's a bizarre design choice.
 
It is what AMD does, and most of the time it changes the environment for the better. AMD is usually the one that thinks outside the box. Not every idea will net great gains or gain traction, as we have seen with AMD's module design. On the whole, the design is a great idea, assuming the released chip was the intended design, and for the most part it was. Had the chip released at a minimum of 6GHz, with 8GHz possible on OC as was expected, then the game would have been very different. The module design was intended to allow higher clocks, but the process at GloFo would not allow it. Fortunately the design does allow some 5GHz OCs, which helps alleviate the issue to a point: not giving it any lead in IPC, but helping close the gap.

Also, after working with Kaveri a few weeks ago, I could see how it improved over my 8350, but its lack of cores let my 8350 system still be faster in general. Just my experience.
 

The person who helped bring that fairytale about GlobalFoundries being at fault into this world is long gone at AMD. The power draw would have been something not even supported on AM3+, and the people who managed 6GHz probably had to do some extra cooling on top of what is normal on a high-end motherboard.

Not a viable option as a "consumer" processor.
 

Actually, I never mentioned blame or fault, and that technicality changes nothing about the outcome. I was also stating the target, not what the actual result was. The power-draw argument you made is based upon the RESULT, not the TARGET. I was mentioning the TARGET to help understand the INTENT. The RESULT is not always indicative of INTENT, nor does it invalidate the TARGET.
 

Circular logic 101, I guess. Baseless assumptions produce baseless numbers upon declaring an intent.

And all it takes is a piece of paper to write that down, which in turn "validates" the process.
 
Let's put it this way: some people have a target of losing 50lbs in 4 months; they intend to lose all of that weight. To achieve this they plan a workout regimen and a suitable diet, and they do well to stick to it. There were some unforeseen issues (the gym lacked necessary equipment... must've been a pretty bad gym) which resulted in them not achieving their goal (they only lost, say, 35lbs).

The target was realistic, and they had intended to reach it, but things don't always go as planned. Not knowing much about computer engineering (other than basic electronics) or the process used to produce that particular processor, I couldn't say whether 6GHz was a realistic target or not. At the same time, not knowing the people behind it all, I couldn't say whether they legitimately intended to reach that target either; maybe they chose that gym because it lacked equipment... who knows? IOW, either argument is nothing more than opinion until someone provides some technical details to support it. Wasting time 101.

Sure, it's interesting to hear the opinion of random people on the internet...sometimes. It's also nice to see some technical arguments every now and then. ;)
 
Except this is an area in which they would have leapfrogged Intel by 2GHz or more, which from past experience we know is unlikely.
You could say it is the same as a toddler with a pacifier in his mouth telling Mike Tyson, "I'm going to knock you out in one punch using my pacifier."

The toddler might be sincere in his intent, but blaming his pacifier for the failure is a bit too much. :) I don't have to explain the rest, I hope?
 
Circular logic 101, I guess. Baseless assumptions produce baseless numbers upon declaring an intent.

And all it takes is a piece of paper to write that down, which in turn "validates" the process.

Wrong. You assert such an argument when it was you who couldn't comment on the subject and task at hand. Not sure why it is so hard for many in these forums to do this: we can never debate subjects without attaching emotions to them and flinging accusations at those who don't support our own view.

My comment was on INTENT, and what AMD was trying to do. Had they managed to make the chip a 6GHz-or-higher CPU, the outcome would have been different; the end result we now see shows it did not meet that target = RESULT. For a moment, pretend you are reasonable and have a fair degree of comprehension: if you look at the front end of the module design, you see that they relaxed it, which logically points to an intention to clock the processor higher, hopefully matching or even beating the other design currently in use. Again, this was the INTENT. It doesn't require feelings or negativity or blame; it is what it is. Now, granted, by your statements you are not pleased with the RESULT, and you are entitled to that, but we were commenting on the INTENT. You didn't even read my post carefully enough to understand it; so much so that you asserted I said others were running at 6GHz and drawing too much power for AM3+, when I actually commented on INTENT, where no physical existence is needed (if there were, that would be the RESULT).

I am not quite understanding your need to debate this so callously, or why the INTENT upsets you so. History is full of INTENT where the results fell short by varying margins. At least NOBU gets the point. Then you counter with a way-off-base argument rather than attempt to comprehend the reality of it.
 

It was also intended to launch the Bulldozer architecture 2 or 3 years earlier. Guess what?
I called it a (6GHz) fairytale because that is what it is. The history of Bulldozer is well known.

This is what companies employ to drive interest from the financial market: bold goals or statements of "intent" show commitment from the company to its product.

Intent, in this case, has nothing to do with realistic goals and has no bearing on what Bulldozer is or could be. AMD pumped all their resources into the new architecture for a reason.

You have trouble understanding that intent, in this case, is nothing more than something for people to hear about, rather than something that was a viable option (past, present, or future). It comes back to public relations; that is all it is. Throw intent out the window.
 

Just because you either lack the intellect or refuse to delve into the facts doesn't change the INTENT. You keep clinging to the RESULTS because you can't understand, or refuse to admit, that the INTENT was a viable idea.

Here, try this: the front end was relaxed to allow for higher clocks, at a time when clocks were between 3 and 4GHz, so aiming for 6 to 8GHz wasn't out of the realm of possibility; hence the design. Even seeing 8-core FXs clocked @ 5GHz able to compete with Intel's SB shows that, had they released @ 6GHz, we wouldn't be discussing failure.
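For what it's worth, the textbook trade-off behind a "relaxed" (deeper) front end looks like this; it's a first-order model, not AMD's actual design data:

$$f_{\max} \approx \frac{1}{t_{\text{stage}}}, \qquad t_{\text{stage}} \approx \frac{t_{\text{logic}}}{N_{\text{stages}}} + t_{\text{latch}}$$

Splitting the pipeline logic across more stages shrinks the work per stage and raises the attainable clock, but the branch-misprediction penalty grows with the stage count, so IPC falls. NetBurst hit exactly this wall, and the bet with Bulldozer was that the clock gains would outrun the IPC loss.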
 
I'd like to see 4GB of stacked memory on the motherboard, sort of like in the old K6-3 days, except this could be MUCH faster, with more bandwidth.

Yes, that was one of the first AMD boards I ever owned. I remember it had a cache slot in addition to the memory slots.
 
Just because you either lack the intellect or refuse to delve into the facts doesn't change the INTENT. You keep clinging to the RESULTS because you can't understand, or refuse to admit, that the INTENT was a viable idea.

Here, try this: the front end was relaxed to allow for higher clocks, at a time when clocks were between 3 and 4GHz, so aiming for 6 to 8GHz wasn't out of the realm of possibility; hence the design. Even seeing 8-core FXs clocked @ 5GHz able to compete with Intel's SB shows that, had they released @ 6GHz, we wouldn't be discussing failure.

I already told you: no intent whatsoever.
What is so hard to understand about that?
No to intent, no to your declaration of intent, and no to any intent in any way, shape, or form with Bulldozer. The chip was late to begin with: http://en.wikipedia.org/wiki/Bulldozer_(microarchitecture)

Bulldozer is the codename for a microprocessor microarchitecture developed by AMD for the desktop and server markets. It was released on October 12, 2011 as the successor to the K10 microarchitecture.

It was slated for release around 2010, but back then they didn't have anything working well enough to get it going; they waited until the 32nm process was available.

So what you are telling me is that back in 2005 they shot for 6GHz. Wait, no, they had the INTENT to go for 6GHz.
 

Yet you aren't proving they didn't. It is easy to show they did, however, just by looking at the architecture, which I have explained a number of times. But because, for one reason or another, you refuse to accept or even entertain the concept, you stick to the RESULTS: you don't want to believe that had it worked for AMD @ >6GHz, then Intel would have been the lesser performer.

INTENT doesn't change the facts of what the RESULTS were/are. You just need to stop arguing the RESULTS when we are trying to discuss the INTENT.
 