New Zen information, AM3+ info, APU presentation, and video card information

Man, you are just trying to be obtuse. I never said the library or the ability to execute x87 wasn't there, but that it was being blocked. FX processors weren't running x87 but were instead handling it with SSE2 (as it was explained three years ago), which explains the speed differential. It was also said the block was there to avoid paying Intel for the x87 license. Not a bad call, considering the only real issue with it was Skyrim, as it seemed to be the last software still using x87 other than HWBot. And that is how it works, by the by: if the code is x87 and the processor doesn't support it, the active code on the processor will run it as a form of legacy support, so no, it will not crash, as long as the supported code has that legacy support, which most does.

And for the love of all that is holy, try reading a post and attempt to comprehend it. It doesn't matter if one were 4-core and the other 82-core; the comparison was for IPC/single-core throughput. The number of cores is irrelevant until we start talking about multithreaded computation.

You can't run x87 code with SSE2. I think you're confusing this with AMD wanting to replace x87 with SSE2 in 64-bit mode. The Intel licensing part is also utter BS, to put it gently, since it's all part of the cross-licensing agreement. When instructions are a de facto standard, there isn't any other path around it.




Construction cores have been able to run native x87 code since day one. Anything else is utter rubbish. You have also provided absolutely no proof of your claims, and have instead rejected evidence with even more fairy-tale excuses.

http://support.amd.com/TechDocs/26569_APM_v5.pdf

At this point I think we have to agree to disagree. :)
 
Are you illiterate in English? No part of what I said was that they couldn't run it natively, as they have the library. The point is that access to the lib is blocked. Blocked means it can't run x87 code directly but must run it another way using legacy support, which has been done for a lot of other code that has disappeared from the libraries on these processors.

Instead of wasting your time and all of ours, learn to comprehend how code works and how a simple word like BLOCKED relates to this discussion.
 
I think the larger issue, aside from the blocking of the x87 code, is the SSE/SSE2 interrelation with legacy FP libraries which are still used for many FPU operations.

On the Intel side of the coin, many other libraries are blocked. For instance the y54 and trent.14 code bases which Intel has blocked- going all the way back to both 3D Now. When Alfonse Schwartzcoppin (of Cyrix fame) was trying to popularize integer decimation as an FP alternative- it was blocked by both AMD and Intel. It ended up in software form on Motorola/IBM PPC RISC processors.

And that was done, despite the fact that integer decimation was so effective with the numbers involved that the actual computation requested never executed- removing the work load from the FPU.

It was revolutionary of course.

Schwartzcoppin held on hard for support- but support never came.

So to this day- we have really crappy FPU based architectures. Because integer decimation was never allowed onto modern processors. And Intel and AMD fan boys both make swollen arguments about the largess of their favorite processor.

Now you know... the rest of the story.
 
This may not even be the point you were trying to make, but what I took away from all of that is the age-old saying that "Nobody's perfect": everyone (be it a single person or a group [see: company]) will inevitably make foolish choices, for one reason or another. Take Sony, for example... they were one of the most successful companies on all fronts at one point in time, but have fallen so far. I can't help but think it's due to a multitude of instances where they stood so firm on their own inventions that people ended up shying away from what became niche products... MiniDisc, Memory Stick Pro Duo, and multiple others over the years that don't stick out as well (note: the MD had notable benefits over CDs, but it was more expensive... too expensive, thus the low adoption rate). I'm sure there were similar reasons AMD and Intel didn't opt to use Alfonse's method, be it their own designs they were trying to push, maybe a lack of performance on non-Cyrix designs (?), or perhaps their unwillingness was just a matter of pride.

Me, I'm like JustReason. I love AMD, despite knowing they aren't top dog anymore. My first computer was a 486DX2 66MHz, no idea who made it, though I suspect it was Intel (it was a Packard Bell system). My second was a K6-2 333, which was an underdog to my friend's Celeron 300, at least as far as gaming... but that might've been due to mine using SIMMs. Mind you, I was still ignorant when it came time to build my first system, but the Athlon was about to come out and that's what I went with... Athlon 550! Thus began my journey with AMD. What's kept me with them, despite everything, has been how Intel has operated as a company and the corporate 'games' they've played to prevent AMD from regaining any sort of foothold in the OEM sector after losing the performance crown with the Phenom. Same reasons I have a huge dislike for nVidia *shrug*

In any case, when it comes down to it, I stand proud in my preference for AMD, and not because I'm supporting the underdog; it's because I'm not a blind fanboy. I know damn well that Intel and nV are dominant in performance (mainly Intel, since graphically AMD has traded blows with nV, at least until recently, heh), and when I help someone come up with a system build, I inform them of that fact. The majority of the time it doesn't matter, since for what they'll use the system for the difference is irrelevant, so it gets specced out with AMD parts: when your wallet talks, they generally offer more for less, which means being able to spend the budget on other areas like more RAM, a bigger HDD, a nicer monitor, etc.

My hopes are high for Zen... but I also carry a lot of caution with that hope, because I was excited for Bulldozer and look where that got us :( Sure, the architecture shaped up, but what we have now is what should've been released from the start. The reason I have hope, which I 'hope' is not misplaced, is purely Jim Keller, as he was the head architect behind the Athlon 64 (and I believe had a hand in the original Athlon as well). I'll admit it seems quite obvious that his only reason for coming back to AMD was a likely considerable paycheck, considering he came back, put in his time under that contract, and left... With any luck he still put his 'all' into the work on Zen and didn't half-ass it :p
 
I mean, I'm a self-admitted AMD hater, due to what I perceive as a long series of serious management blunders and missteps over the years, combined with a few outright lies ("Fury is an overclocker's dream," LOLOLOL; also the BD shilling fiasco with JF-AMD), leaving the PC market in its current state of zero competition for Intel and NVIDIA at the high end, while AMD pretty much constantly shits on its fan base with years of sub-par products. But if this is true, it isn't good for anybody. A 40% IPC boost is great, but when it comes with a 25% drop in clock speed, that's practically no gain at all. I'm wondering if AMD's initial implementation of SMT will be as bad as Intel's in the P4 series, where you were at times better off turning it off, and they ended up dropping the feature for a few generations to get it working right again.

Hey, be careful how you express your opinions here; you might be asked to leave this thread by one of the AMD fanboys on duty! If you don't like what AMD can do with its limited R&D budget, you can spend your hard-earned $ on something else! I mean, you want to tell me that a CPU designed last year, with 16 threads, does the same job as one with 12 threads designed 6 years ago?
 
This may not even be the point you were trying to make, but what I took away from all of that is the age-old saying that "Nobody's perfect": everyone (be it a single person or a group [see: company]) will inevitably make foolish choices, for one reason or another. Take Sony, for example... they were one of the most successful companies on all fronts at one point in time, but have fallen so far. I can't help but think it's due to a multitude of instances where they stood so firm on their own inventions that people ended up shying away from what became niche products... MiniDisc, Memory Stick Pro Duo, and multiple others over the years that don't stick out as well (note: the MD had notable benefits over CDs, but it was more expensive... too expensive, thus the low adoption rate). I'm sure there were similar reasons AMD and Intel didn't opt to use Alfonse's method, be it their own designs they were trying to push, maybe a lack of performance on non-Cyrix designs (?), or perhaps their unwillingness was just a matter of pride.
Sony's Pro Duo sticks were fairly high capacity, had decent access speeds, and were adopted by a number of third parties; it's just that SD cards were cheaper and evolved faster. At least there are SD-to-Pro Duo adapters. What pisses me off is stuff like the proprietary memory card for the Vita, priced sky-high, compatible with nothing, and only used by Sony... but I digress. ;)

Marstg, could you please not troll? I'm fairly certain that statement adds nothing to this thread and only invites people from both sides to flame.
 
The reason I have hope, which I 'hope' is not misplaced, is purely Jim Keller, as he was the head architect behind the Athlon 64 (and I believe had a hand in the original Athlon as well). I'll admit it seems quite obvious that his only reason for coming back to AMD was a likely considerable paycheck, considering he came back, put in his time under that contract, and left... With any luck he still put his 'all' into the work on Zen and didn't half-ass it :p

Jim Keller left AMD in 1999, and he wasn't the lead architect of the K8 that was released in 2003. Not sure why he wrongly got credited for that. Not that a single person matters anyway.
 
Hey, be careful how you express your opinions here; you might be asked to leave this thread by one of the AMD fanboys on duty! If you don't like what AMD can do with its limited R&D budget, you can spend your hard-earned $ on something else! I mean, you want to tell me that a CPU designed last year, with 16 threads, does the same job as one with 12 threads designed 6 years ago?

Why should people buy Intel CPUs? They're all crap; they get run over by ARM technology at a 28nm process while Intel is at 14nm and still can't beat ARM.

Have a good time figuring out why you just put yourself in a spot you can't get out of ...
 

ARM does not address the desktop market, last time I checked, so why the sidetracking? As for the second phrase, I honestly have no idea what you are talking about.
 

Intel does not make AMD CPUs; why are you sidetracking?
 
Jim Keller left AMD in 1999, and he wasn't the lead architect of the K8 that was released in 2003. Not sure why he wrongly got credited for that. Not that a single person matters anyway.


He was the lead for Clawhammer; yeah, it came out in 2003 and he left in 1999, but the average lifespan of a design is five-ish years, and Clawhammer also had a lot of similarities to the K7, the original Athlon. In any case, however Zen turns out, it's not fully up to Keller; AMD's IP has much more to do with it than anything else.
 
This whole launch is starting to smell like BullDozer...

Step 1: Hint at new, revolutionary tech aimed at putting AMD back in the competition with Intel

Step 2: Reveal more information about said architecture and make crazy claims about performance

Step 3: Go really quiet about architecture at big conferences, offer vague, subjective claims over said architecture at awkward points in time.

Step 4: Panic as leaked benchmarks of said architecture ES chips hit the web showing lacklustre results.

Step 5: Go into Full-Retard marketing mode and promise the world and the kitchen sink with said architecture.

Step 6: Release the product at completely non-competitive prices, with wild claims and useless buzzwords.

Step 7: Go into damage control mode and claim that the chip was never intended to meet buyer's expectations.
 

After what they did to Schwartzcoppin- I'd put nothing past them.
 
Please elaborate? This sounds juicy.

Well it can be a hard story to tell unless you were involved with the industry- but it all goes back to the blocked x87, y54, and trent.14 code bases.

But as I mentioned earlier, there was a lot of opposition to integer decimation. After the industry rejected him and his methods, the patents were raided by various governments- aided and abetted by AMD, Intel, and others.

So when you bring up AMD making false statements- it doesn't surprise me.

If you want to understand the collusion in the industry- check out Schwartzcoppin's history.
 
Zen won't beat Intel, but it will be competitive. Call me crazy, but Keller does not suck! It looks like it has balls on paper, and unless Keller smoked a lot of crack during his time with AMD, I expect it to be a decent product.
 

And we all know Shintai hates AMD.

I don't know, man. My emotional Bulldozer scar is itching pretty hard right now...
 

LOL. Have faith in Keller. I think AMD brought him back to get them back in the CPU game. They will be building off Zen for some time to come. It's not like GPUs, where they pump one out every year. I am sure they learned from the Bulldozer disaster. Zen is a ground-up design, so I am sure they won't fuck it up years after Bulldozer, lol.
 

The problem is that almost NO board members currently sitting were also sitting for the BD fiasco. There is no 'experience' to learn from.
 
He was the lead for Clawhammer; yeah, it came out in 2003 and he left in 1999, but the average lifespan of a design is five-ish years, and Clawhammer also had a lot of similarities to the K7, the original Athlon. In any case, however Zen turns out, it's not fully up to Keller; AMD's IP has much more to do with it than anything else.

Keller seems to be more glamour than substance, to put it mildly. And no, it wasn't Keller's design that got used as the K8. I am also sure there are thousands of previous/current AMD engineers who feel like all their work got downgraded so someone else could get the PR. I don't get why people think one person can have such an impact, no matter how good he is. It's simply delusional.

Keller also left AMD unexpectedly during Zen. Fired or rage quit, pick one. But as I said, one person doesn't make an MPU, and the impact is microscopic.

Also, name me the lead architects of other MPUs if you can. ;)

And in case you wonder, the guy behind the K8 is called Fred Weber. Not Jim Keller.
 
This whole launch is starting to smell like BullDozer...

Step 1: Hint at new, revolutionary tech aimed at putting AMD back in the competition with Intel

Step 2: Reveal more information about said architecture and make crazy claims about performance

Step 3: Go really quiet about architecture at big conferences, offer vague, subjective claims over said architecture at awkward points in time.

Step 4: Panic as leaked benchmarks of said architecture ES chips hit the web showing lacklustre results.

Step 5: Go into Full-Retard marketing mode and promise the world and the kitchen sink with said architecture.

Step 6: Release the product at completely non-competitive prices, with wild claims and useless buzzwords.

Step 7: Go into damage control mode and claim that the chip was never intended to meet buyer's expectations.
There is a lot more to the story than just that. Most of the hype came from the public more so than AMD. And given the start of what we now know as a DX12-like API, AMD was almost accurate about the jump in performance for the consumer. But that API was squashed for unknown reasons, so they didn't get that boost. Then add that the original clock expectations were in the realm of 6-8GHz, which obviously didn't happen, and you get a good picture of what might have been had those things transpired.

But since then AMD has done a pretty good job of not hyping, almost to the point of no information until release. I keep thinking back to the 290/X release, where no one expected a 512-bit bus, and then on launch day there it was, to the shock and amazement of the community. Let's not forget the water-cooled 295 either, which sent Nvidia scrambling to adjust clocks on the highly overpriced Titan Z.
 
Why should people buy Intel CPUs? They're all crap; they get run over by ARM technology at a 28nm process while Intel is at 14nm and still can't beat ARM.

Have a good time figuring out why you just put yourself in a spot you can't get out of ...

The best ARM doesn't have a tenth of the performance of x86. And yes, I actually tested. Point being, Intel has a lot of circuitry on their chips that raises performance but also increases power draw. As a result, x86 is simply not well suited to low-power devices, where ARM is. By contrast, ARM is absent outside of mobile because it lacks the performance of other architectures.
 
There is a lot more to the story than just that. Most of the hype came from the public more so than AMD. And given the start of what we now know as a DX12-like API, AMD was almost accurate about the jump in performance for the consumer. But that API was squashed for unknown reasons, so they didn't get that boost. Then add that the original clock expectations were in the realm of 6-8GHz, which obviously didn't happen, and you get a good picture of what might have been had those things transpired.

But since then AMD has done a pretty good job of not hyping, almost to the point of no information until release. I keep thinking back to the 290/X release, where no one expected a 512-bit bus, and then on launch day there it was, to the shock and amazement of the community. Let's not forget the water-cooled 295 either, which sent Nvidia scrambling to adjust clocks on the highly overpriced Titan Z.

Just a refresher.

Err, you don't remember the conference calls after conference calls where they told investors about the performance of Bulldozer? AMD hyped the shit out of Bulldozer.

Numerous people stated it was a 512-bit bus before the 290 came out. That was not surprising at all. There were board pictures leaked out! You don't seem to understand that AMD is a leaky faucet. I don't know if they are doing it on purpose or what, but I don't think they have had a solid non-leaky launch since the 4xxx series (and even then there were probably leaks, just that no one believed them, so they didn't post or talk about them).
 
The best ARM doesn't have a tenth of the performance of x86. And yes, I actually tested. Point being, Intel has a lot of circuitry on their chips that raises performance but also increases power draw. As a result, x86 is simply not well suited to low-power devices, where ARM is. By contrast, ARM is absent outside of mobile because it lacks the performance of other architectures.

You are missing the point, as is the poster I responded to. If I need to clarify it for you, then PM me ...
 
Keller seems to be more glamour than substance, to put it mildly. And no, it wasn't Keller's design that got used as the K8. I am also sure there are thousands of previous/current AMD engineers who feel like all their work got downgraded so someone else could get the PR. I don't get why people think one person can have such an impact, no matter how good he is. It's simply delusional.

Keller also left AMD unexpectedly during Zen. Fired or rage quit, pick one. But as I said, one person doesn't make an MPU, and the impact is microscopic.

Also, name me the lead architects of other MPUs if you can. ;)

And in case you wonder, the guy behind the K8 is called Fred Weber. Not Jim Keller.


Yeah, the K7 was heavily Fred's design as well ;) At least the ALU part.
 
Just a refresher.

Err, you don't remember the conference calls after conference calls where they told investors about the performance of Bulldozer? AMD hyped the shit out of Bulldozer.

Numerous people stated it was a 512-bit bus before the 290 came out. That was not surprising at all. There were board pictures leaked out! You don't seem to understand that AMD is a leaky faucet. I don't know if they are doing it on purpose or what, but I don't think they have had a solid non-leaky launch since the 4xxx series (and even then there were probably leaks, just that no one believed them, so they didn't post or talk about them).
As far as Bulldozer goes, I have to go by the mountains of material written after the fact, as I didn't keep up with it at the time, but it all points to the hype coming from the community, plus one guy at AMD who did post some overly zealous predictions. Either way, it doesn't change the fact that had some of the clock-speed expectations been accurate, the whole landscape would have been different. Most of the time these lofty expectations are made early, before the final facts are known; then the community keeps running with the outdated predictions even after the company states a more reasonable outcome. Bulldozer is exactly this: even after the engineers stated newer, more rational expectations, most of the community kept touting the old ones.

The 290's bus width wasn't known until the final architecture was revealed a week or two before launch, like a paper launch. Most if not all of the community was surprised, because they didn't think a 512-bit bus would fit in that space. I was there for that one, and for the whole debate over boost clocks. The water-cooled 295 was unknown until release; even Nvidia was surprised.
 
JF-AMD
 
AMD Hawaii GPU R9-290X 'Volcanic Islands' PCB Leaked - 512-Bit Interface and Massive Die Size?



This was a full month before launch.



And this is why AMD is a leaky faucet ;)

Right around the same time:

Erste Spezifikationen zu AMDs Hawaii-Grafikchip | 3DCenter.org

All the specifications were leaked by 3DCenter.

And about Bulldozer:

Internal partner performance figures that AMD gave out to partners stated Bulldozer was a great chip, lol.

AMD also said Bulldozer could keep up with Ivy Bridge.
 
10 years sounds like a good number ;)

Err, I thought Samsung was doing 10nm this coming year.
 

The best damage control I saw from JF-AMD was the quote where he tried to say that Bulldozer's IPC was higher than Thuban's because it was clocked higher.

People here were like "That's BS," and JF-AMD was like "IPC means 'instructions per core,' lol."
 
So this is turning into "Zen is another Bulldozer"? LOL, I really highly doubt it. It's hard to repeat the Bulldozer failure, and I doubt Keller fucked up that badly. Do I expect it to blow away Intel? No. So let's get real here: if it is anywhere near competitive, it's an option worth having.
 

Keller is ONE person, and ONE person can't just sh*** out a fantastic winner. Bulldozer was not an architecture failure; it was an AMD marketing failure. They took what was essentially a basic 4-core unit with SMT and advertised it as a cutting-edge 8-core monster. This '8-core monster' was slower than Intel's 4-core parts, and even slower than AMD's last-generation 6-core parts. Imagine how the reception would have been if they had advertised it as a quad-core and hinted at a 6-core part in the future.
 

True, but he was the head of this project. Everyone is already assuming Zen is a fail. If IPC is improved 35-40% over Excavator, how exactly is that a fail? Sure, it's no Intel killer, but it's better than what AMD has out now, and that is what matters most. Plus, all this doomsday scenario over one game benchmark seems a bit blown out of proportion. When it's out and it is a fail, I will believe it. But if it gets a 30-40% IPC gain over the previous AMD chip, I can't call it the fail that everyone is already claiming it to be.
 

Well, AMD is not exactly known for realistic and completely true pre-launch claims. So yeah, a 40% boost to IPC would be GREAT, but I'll believe it when I see it. Not to mention: what if there IS a 40% IPC improvement, but it can't clock higher than 2GHz? So, yeah. I used to be in LOVE with AMD. They were the Batman my Gotham deserved, but recently they are like my smelly, greasy cousin living in my basement whom I have to lend money once in a while.
 
SMT and BD modules have nothing in common, and the modules are not the reason BD was a failure. BD failed because the design copied all of the failings of the Pentium 4 and its NetBurst architecture: deeply pipelined, which takes severe misprediction hits and lowers efficiency, and built to scale to clock frequencies that weren't physically possible. When Intel tried that back in the early 2000s, nobody knew how hard that wall would be; by the time BD came out, it was well known that it would never scale to the speeds needed to compete.

SMT works by duplicating certain front-end structures of the CPU (registers and the like), but the threads on an SMT core all share the same execution units, and only one can issue at a time. The CPU intelligently alternates which thread's front end gets to send instructions to the execution units, keeping them busy and preventing a stall while waiting on memory or disk access, or while a previous instruction finishes before the next one can begin. A CPU can push one instruction in per clock, but the result may take several cycles to come out the other end; in the meantime, instructions from the other thread can execute.

BD modules, by contrast, had two complete integer cores with a shared FPU that could be split or combined depending on the precision needed. Each integer core was its own separate unit; neither could share resources or alternate to keep the internal pipelines full. Just two independent, shitty cores.

The reason Zen is looking to be another BD has nothing to do with leaks, ES samples, or any of that. Just look at the product stack. They have 4C/8T, 6C/12T, and 8C/16T parts that seem to line up against Intel's 2C/4T, 4C/4T, and 4C/8T parts, which means they're falling back to their old BD "MOAR COARS" ways: shitty single-thread performance, compensated for by saying "someday soon multi-threading will actually matter!" Except in the mobile world it really won't, and it barely matters in the desktop world. And in mobile, keeping the CPU active longer to finish a task before returning to a low-power mode won't really help battery life for tasks like web browsing, where you need to hurry up and do something, then wait a good long while before doing the next something. So yeah, I'm going to personally believe AMD screwed up again, just based on the massive core counts versus similar Intel parts. Personally I was hoping the 8C/16T part would be some kind of "Zen Pro" to compete with the i7-x8xx and i7-x9xx series processors, but that's seeming extremely unlikely at this point.
 

You assume a lot of things here.
The mobile world is already seeing some more-core products (MediaTek).
If you don't want shitty IPC, then buy Intel. How come you and others are still not getting the message that AMD can't beat today's Intel offerings? This has been mentioned god knows how many times already.

I'm really wondering about the sincerity of your "outrage". How can a company like AMD, which had to stop producing AM3+ CPUs because they weren't getting anywhere for several years, suddenly out of the blue make a chip that beats Intel on all fronts? Does that mean you think Intel has a bunch of muppets making their CPUs, or that AMD could do a lot better but chooses not to?

If it were easy to make a competing x86 CPU, then we would have a lot more active players...
 
The mobile world is already seeing some more-core products (MediaTek).

The mobile ARM world works entirely differently there. It costs pretty much nothing to add those cores, and it offers nothing either, since you disable the other clusters of cores or likewise. And ARM products for mobile are all about cost structure. It's like adding Jaguar or Atom cores and disabling them while running a proper load.
 
True, but he was the head of this project. Everyone is already assuming Zen is a fail. If IPC is improved 35-40% over Excavator, how the hell is that a fail? Sure, it's no Intel killer, but it's better than what AMD has out now, and that is what matters most. Plus, all this doomsday scenario over one game benchmark seems a little blown out of proportion. When it's out and it is a fail, I will believe it. But if it gets a 30-40% IPC gain over the previous AMD chips, I can't call it the fail everyone is already claiming it to be.

The only thing that matters for a product is how it sells, and that determines whether it's a fail or not. All construction cores so far have been a huge flop in the sales department. Why would Zen change that when the metrics being compared haven't changed? Zen, from the looks of it, is even worse against the competition than the first FX parts were.
 
Ooh, I wouldn't go that far...
 