AMD possibly going to 4 threads per core

We can leave it there: 20 years ago Intel ripped off Cyrix.

I keep seeing this assertion; I was there, never heard it. I can't really take this as anything other than blatant anti-Intel sentiment.

Even if Cyrix was on to something, their products were second or third tier at best. Perhaps Intel was inspired by Cyrix, but the accusation of ripping off is quite silly. Also, your quote that would back up such a claim is unsourced, and full of opinion.

And none of it erases the fact that AMD and Cyrix were copying Intel designs.
 
Overall, Zen 2 has higher IPC...
The only scenario where Skylake can have higher IPC is in gaming workloads. In everything else Zen 2 has higher IPC, by a measurable margin.
Intel's advantage is higher clocks, thus it has slightly higher per-core performance in some workloads, but not all.

Your comment only rings true if the only measurement you go by is gaming type workloads.

Overall, Zen 2 has lower IPC.

Or avx512 I guess...

I can only imagine the crying if someone included Intel's AVX performance in a benchmark comparing IPC :ROFLMAO:
 
I keep seeing this assertion; I was there, never heard it. I can't really take this as anything other than blatant anti-Intel sentiment.

Even if Cyrix was on to something, their products were second or third tier at best. Perhaps Intel was inspired by Cyrix, but the accusation of ripping off is quite silly. Also, your quote that would back up such a claim is unsourced- and full of opinion.

And none of it erases the fact that AMD and Cyrix were copying Intel designs.

https://patents.google.com/patent/US5630149
https://patents.google.com/patent/US5630143

Those are the Cyrix patents Intel copied... Cyrix sued them. It was expected to drag on for years and for Intel to lose eventually. Intel CHOSE to settle, paying Cyrix a bunch of millions of dollars and agreeing to give Cyrix access to ALL their patents.

Sounds like a settlement born out of pure innocence. ;)

It's hard to find filings from 20 years ago for a case that was settled before it was heard.
https://www.washingtonpost.com/arch...patents/86df89ad-70aa-4010-9344-6e496fae10cd/
There are still some old articles around from that time though. Cyrix filed in Texas... Intel settled in less than a month. They also took the time to call Digital's suit BS... but interestingly never said the same about Cyrix. On the Cyrix suit, all they ever said was they were looking into it.

Not that it matters to this thread... but it's been fun. lol o7
 
Overall, Zen 2 has lower IPC.



I can only imagine the crying if someone included Intel's AVX performance in a benchmark comparing IPC :ROFLMAO:


And?

Not really much to cry over.

"For the first time in over a decade, AMD has reached IPC parity with Intel. On average, based on the results of 32 individual workloads Zen 2 even manages to provide slightly higher average IPC than Coffee Lake-S Refresh. Thanks to its AVX-512 resources Skylake-X manages to stay ahead in this test suite however, not by a large margin."
 
And?

Not really much to cry over.

"For the first time in over a decade, AMD has reached IPC parity with Intel. On average, based on the results of 32 individual workloads Zen 2 even manages to provide slightly higher average IPC than Coffee Lake-S Refresh. Thanks to its AVX-512 resources Skylake-X manages to stay ahead in this test suite however, not by a large margin."

They caught up with Skylake. That's five years old.

It's commendable that they have a salable product, but it's also clear that it's a bit of a Pyrrhic victory.
 
Overall, Zen 2 has lower IPC.



I can only imagine the crying if someone included Intel's AVX performance in a benchmark comparing IPC :ROFLMAO:

Wow, just look at the blatant false claims here... it's like you're living under a rock.

It takes not AVX2, but AVX-512 (which hardly anything uses) to beat AMD, and that's only because AMD doesn't have AVX-512 hardware like Intel does.
 
Wow, just look at the blatant false claims here... it's like you're living under a rock.

It takes not AVX2, but AVX-512 (which hardly anything uses) to beat AMD, and that's only because AMD doesn't have AVX-512 hardware like Intel does.

I didn't mention AVX2.
 
They caught up with Skylake. That's five years old.


It's commendable that they have a salable product, but it's also clear that it's a bit of a Pyrrhic victory.

They've caught up with Skylake-X outside of AVX-512-supported applications. When you have an outlier, especially one that skews the results away from a true indication of performance, it's not atypical to drop it.

It's like claiming Navi performance in Forza Horizon 4 is representative of its overall performance.

The quote from above isn't mine; it's from Stilt, and he's done comprehensive IPC testing since Ryzen 1000, and he has a different take on it. You might have missed it the first time.

"For the first time in over a decade, AMD has reached IPC parity with Intel. On average, based on the results of 32 individual workloads Zen 2 even manages to provide slightly higher average IPC than Coffee Lake-S Refresh. Thanks to its AVX-512 resources Skylake-X manages to stay ahead in this test suite however, not by a large margin."
 
When you have an outlier, especially one that skews the results away from a true indication of performance, it's not atypical to drop it.

Outliers are expected -- if they're representative of the workload implied and are repeatable, then they're statistically relevant.
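Either way, it's easy to quantify how much one such result moves an average. A quick sketch with made-up per-test ratios (hypothetical numbers, not real benchmark data) shows how a single lopsided AVX-512 result can flip the geometric mean of an otherwise one-sided comparison:

```python
from math import prod

def geomean(xs):
    """Geometric mean, the usual way to average per-test performance ratios."""
    return prod(xs) ** (1 / len(xs))

# Hypothetical Zen 2 vs. Skylake-X per-test ratios (>1.0 = Zen 2 ahead).
# The last entry stands in for a single AVX-512-heavy test.
ratios = [1.05, 1.08, 1.02, 1.10, 1.04, 0.45]

with_outlier = geomean(ratios)
without_outlier = geomean(ratios[:-1])

print(f"with the AVX-512 test:    {with_outlier:.3f}")
print(f"without the AVX-512 test: {without_outlier:.3f}")
```

With these invented numbers, one test is enough to drag the average from a few percent ahead to several percent behind; whether that's signal or noise is exactly the disagreement here.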

The quote from above isn't mine; it's from Stilt, and he's done comprehensive IPC testing since Ryzen 1000, and he has a different take on it. You might have missed it the first time.

Didn't miss it, just consider more sources.

Also, while I wouldn't base the whole comparison on AVX code, I also wouldn't discount it. If it's relevant enough to put in benchmark suites now, then it's following the same path as SSE and SSE2 before it, and will become a deciding factor going forward. SSE2 is the reason Core 2 spanked AMD's decaying Athlon architecture when it was released. For those operations that aren't offloaded to GPGPU for whatever reason, advanced SIMD is going to be a differentiator on the CPU side.
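As a rough back-of-envelope on why each wider SIMD generation matters (ignoring memory bandwidth, clocks, and AVX frequency offsets, so this is only the best-case instruction count):

```python
from math import ceil

def vector_ops(n_elements, lanes):
    """Instructions needed to process n_elements, `lanes` elements at a time."""
    return ceil(n_elements / lanes)

n = 1024  # e.g. a buffer of single-precision floats

for name, lanes in [("scalar", 1),
                    ("SSE (128-bit, 4 floats)", 4),
                    ("AVX2 (256-bit, 8 floats)", 8),
                    ("AVX-512 (512-bit, 16 floats)", 16)]:
    print(f"{name:30s} {vector_ops(n, lanes):5d} ops")
```

Real speedups are smaller than the lane count suggests, but the instruction-count gap is why SIMD extensions keep ending up as differentiators.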
 
I think AVX512 support and impact is relevant to skylake-x buyers. Lots of the market is scientific, often with custom code and no issue with using it to get faster results. I use it when present - it is Mighty.

For consumers, yes, I’d personally toss it from consideration.
 
SSE2 is the reason Core 2 spanked AMD's decaying Athlon architecture when it was released.
Are you sure it was SSE2 and not SSSE3?
Legitimately asking, as the Athlon XP only had SSE, and Pentium 4's had SSE2.

The Socket 939 Athlon 64/X2 CPUs did have SSE3 in 2005 after Venice/San Diego were released, and the Core 2 CPUs didn't debut until Q3 2006, in which they had SSSE3 (SSE4.1 didn't arrive until Penryn).
So basically, wouldn't it have been SSSE3 that made Intel's Core 2 CPUs curb-stomp the aging Socket 939 and AM2 Athlon 64/X2 CPUs?


I think you are mistaken on this one.
Pretty sure it was SSSE3. ;)
 
If you remember, Intel didn't believe that. Intel never built a RISC processor prior to the Pentium Pro... they were planning to jump from CISC x86 chips to EPIC (Explicitly Parallel Instruction Computing). Itanium began development in 1989.

i960 from 1985 and i860 from 1989 says otherwise.

Intel's plan was very much to go from 386/486/Pentium -> Itanium down the road when 64-bit made sense in desktops. First in the server world, and years later replacing their CISC Pentium chips. Even with the Pentium Pro they still went the whole P4 route. They didn't see the light until AMD forced the issue.

Intel has been trying to kill off x86 since the 8086, which was supposed to have just been a stopgap product to get a 16-bit processor out there to keep from losing market share until their (failed) iAPX432 project could take over the market. Then it was i960... then i860... then Itanium.

The i860 was probably the only point at which Intel COULD have put an end to x86 and really changed the market. Motorola was going downhill (68k was all but dead, 88k was a flop). IBM's mindshare was gone in the PC market. The other x86 manufacturers relied on Intel's products to base theirs on, and it took them three years to clone the 486. Had Intel not released the 486, AMD, Cyrix, NexGen, etc. would have had to rely on their own enhancements to the already aged 386 architecture while Intel was busy pushing everyone onto i860. I'd surmise that had the 486 not been released, we'd be in a very different world today. But they did, and developers didn't know which way Intel wanted to go. So x86's momentum carried it forward, i860 was relegated to niche markets, and everyone else decided they had better start doing something to beat Intel.


AMD threw a wrench in the works by purchasing NexGen and then quickly getting the K7 (Athlon) out the door. Intel's plan was to run their long-pipeline semi-hybrid (CISC/RISC) P4 chips until desktops were ready for 64-bit and Itanium.

AMD has been throwing wrenches into Intel's plans for decades, starting with the Am486 when Intel wanted to take complete control of the x86 architecture, then the K6 when they wanted to move away from the less profitable Socket 7 platform. The K7 traded blows with the PIII, and the K8 tossed in 64-bit extensions that Intel did not want on x86. And now Zen... but really that's more because 10nm has really bitten Intel in the backside for the past three years and their decisions to trade security for performance caught up to them. So Intel's plan lately has just been rehashing the same slightly improved products on new sockets until they could fix all their problems.


I don't think I'm overstating things... Cyrix and NexGen made the first "modern" CPUs. No doubt Intel has refined the heck out of things 20 years later... if Intel had their way, though, we would have basically been using souped-up 486s till the 2010s. lol


You really are though. The Cyrix 486DLC, released in 1992 (3 years after the i486), was nothing more than a 386 clone with 486 instructions and its L1 cache. The Cx486, released in 1993 (a year after the P5), was a clean-room copy of the i486 that was late and slow. The Cyrix 5x86 was a cut-down 6x86 that allowed 486 users to upgrade to Pentium-like performance... in 1995. The 6x86 (still CISC, btw) was the first time a non-Intel chip held the performance crown, but then the PII dropped, and the MII, the MediaGX, and the MIII couldn't catch up.

Cyrix's problem wasn't Intel, it was themselves. They were constantly late and almost always overstated their performance (except for the 6x86), but their biggest blunder was allowing their fab partners to manufacture Cyrix's designs and sell them as their own products. IBM was known for going in behind the Cyrix sales guys and selling 6x86 processors to OEMs for less than Cyrix would need to just break even.
 
Was going to say, the only reason Intel succeeded during Netburst was due to extreme anti-consumer and anti-competitive tactics.

In the OEM market that's quite likely; among enthusiasts, well, I ran a few Prescotts due to VIA's extremely poor chipsets, up until the Athlon 64 released. They were at least competitive, unlike Bulldozer. That marketshare nosedive was all on AMD.
 
Cyrix's problem wasn't Intel, it was themselves. They were constantly late and almost always overstated their performance (except for the 6x86), but their biggest blunder was allowing their fab partners to manufacture Cyrix's designs and sell them as their own products. IBM was known for going in behind the Cyrix sales guys and selling 6x86 processors to OEMs for less than Cyrix would need to just break even.

FABS lol
Always comes down to those damn fabs and the companies running them. ;) lol Good post. And I agree with you 99%. I think for a lot of those 90s-2000s companies that came and went, like Cyrix / Transmeta and a few others, you have to judge them more by the tech they invented than their actual products and/or execution. Cyrix and Transmeta patents are used to this day in a lot of modern CPUs... the Cyrix and Transmeta patents that aged the best are the power-related ones. With the size of the big players, the Intels/IBMs etc., there were just so many hoops: legal issues... and, as you say, OEM sales to deal with, and the big boys underselling and strong-arming your customers. Never heard about IBM underselling them, but yeah, it makes sense; I know IBM sold the 5x86 and 6x86 for years after Cyrix was gone... sounds like Cyrix got a raw deal with IBM, shocking. :)
 
Here the 65W 3700X is neck and neck with the mighty 9900K lol.

Three iterations and two nodes against a five year old architecture, and 'neck and neck' is... what?

I've already repeatedly stated that I'd recommend AMD with few exceptions, but it's not because they're absolutely faster.
 
Neck and neck at lower clocks does imply higher IPC, since IPC means Instructions Per Clock, so yeah, on average it has higher IPC, even if only slightly.
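The arithmetic behind that, with hypothetical numbers (score-per-GHz is a common IPC proxy; the score and clocks below are made up for illustration):

```python
def ipc_proxy(score, clock_ghz):
    """Relative IPC estimate: benchmark score per GHz. Only ratios are meaningful."""
    return score / clock_ghz

# Hypothetical: both chips post the same score at different sustained clocks.
score = 100.0
chip_a = ipc_proxy(score, 4.4)  # lower-clocked part
chip_b = ipc_proxy(score, 5.0)  # higher-clocked part

print(f"IPC ratio (A / B): {chip_a / chip_b:.3f}")
```

Equal scores at a 12% lower clock imply roughly 14% higher IPC; the chip doing the same work in fewer cycles is, by definition, retiring more per clock.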

There's nothing to discuss there in reality, it's just denied by those unable to admit reality.

The fact that Intel has been unable to put out anything actually new isn't a negative for AMD, since how can they be at fault for Blue team's inability to get working new products out? It's hilarious to see the argument "it's against an old architecture"; so what if it's old? They don't really have anything better, do they?
 
Outliers are expected -- if they're representative of the workload implied and are repeatable, then they're statistically relevant.



Didn't miss it, just consider more sources.

Also, while I wouldn't base the whole comparison on AVX code, I also wouldn't discount it. If it's relevant enough to put in benchmark suites now, then it's following the same path as SSE and SSE2 before it, and will become a deciding factor going forward. SSE2 is the reason Core 2 spanked AMD's decaying Athlon architecture when it was released. For those operations that aren't offloaded to GPGPU for whatever reason, advanced SIMD is going to be a differentiator on the CPU side.


Those outliers are, in this case, only representative of AVX-512; the performance increase there doesn't carry over to AVX2-and-below performance.

The above, plus the extremely limited uptake of AVX-512, is why it shouldn't be lumped in for IPC calculations; it's simply not indicative of actual performance in 99% of situations.

Also, the comparison between AVX and SSE is not a great one; desktop adoption of AVX/2/512 is far lower than SSE/2/3/4, so there's simply no parallel between them in the real world.

The people/industries that will both benefit from AVX-512 and do the work required to utilize it should buy Skylake-X.


..... They were at least competitive, unlike Bulldozer. That marketshare nosedive was all on AMD.

They caught up with Skylake. That's five years old.

More doublespeak: when AMD is beaten by Intel, it's their fault for a bad design (which is correct), but when Intel is beaten by AMD, it's not their fault because they INTENDED to be on a better node, so it doesn't really count.

In addition, this nonsense about a five-year-old architecture needs to stop; if there really hasn't been an improvement in performance in five years, then we can just compare against Haswell.
But we both know that's nonsense: there was an improvement between just the last two revisions, much less going back to 2014.

Skylake-X is this year's architecture, with all the improvements that go along with it. Including the AVX-512 support that you insist is indicative of performance for the other 99% of applications.
 
Three iterations and two nodes against a five year old architecture, and 'neck and neck' is... what?

I've already repeatedly stated that I'd recommend AMD with few exceptions, but it's not because they're absolutely faster.

In fairness, it's not like Intel has done much other than add cores and slightly tweak their "five year old architecture."
 
Three iterations and two nodes against a five year old architecture, and 'neck and neck' is... what?

I've already repeatedly stated that I'd recommend AMD with few exceptions, but it's not because they're absolutely faster.

Architecture is irrelevant; the 9900K launched almost a year ago and is Intel's best at the moment (excluding HEDT, etc.).

Also, recommending AMD doesn't get you out of ignoring the obvious; it's like saying you're not racist because you have a black friend lol.
 
To me there’s two ways to look at it.

AMD would be behind if Intel didn’t misstep.

Or

Intel did not do their due diligence with security and are paying for bad engineering.

I actually think the Intel paying for their lack of QC is the better way at looking at it.

At the end of the day I suggest Intel only for high-Hz gaming, around 90Hz or higher for minimums. I also perceive Intel as being more compatible, but the flip side is there's a risk of possible unforeseen security issues. I see this risk as low for personal computing, which is what I do.
 
To me there’s two ways to look at it.

AMD would be behind if Intel didn’t misstep.

Or

Intel did not do their due diligence with security and are paying for bad engineering.

I actually think the Intel paying for their lack of QC is the better way at looking at it.

At the end of the day I suggest Intel only for high-Hz gaming, around 90Hz or higher for minimums. I also perceive Intel as being more compatible, but the flip side is there's a risk of possible unforeseen security issues. I see this risk as low for personal computing, which is what I do.

Agreed, although you would need a 2080 Ti to take advantage of the 9900K's higher gaming perf.
 
They don't really have anything better, do they?

Yes, Intel does. Quite a bit better.

Also, the comparison between AVX and SSE is not a great one; desktop adoption of AVX/2/512 is far lower than SSE/2/3/4, so there's simply no parallel between them in the real world.

The comparison made above was SSE / SSE2 at introduction on the Pentium III / IV versus now. AVX is still relatively new across the board, but it provides significant benefits and is absolutely usable in real-world applications and developers are including it in their code, so it has statistical relevance now. Going forward, AVX performance will likely be a differentiator for compute code that is run on the CPU.

More doublespeak: when AMD is beaten by Intel, it's their fault for a bad design (which is correct), but when Intel is beaten by AMD, it's not their fault because they INTENDED to be on a better node, so it doesn't really count.

If AMD could have actually beat Intel -- not be argued to have reached occasional parity -- I'd be running Zen today. I don't need more cores, nor do most consumers or enthusiasts, but I'll definitely take better single-core performance in a heartbeat. And the extra PCIe lanes on X570 are definitely a draw. But the reality is that Intel (and ASRock on the board side) provided what I was looking for first, and that product has yet to be eclipsed by AMD for my purposes.

In addition, this nonsense about a five-year-old architecture needs to stop; if there really hasn't been an improvement in performance in five years, then we can just compare against Haswell.
But we both know that's nonsense: there was an improvement between just the last two revisions, much less going back to 2014.

There were few core-level and IPC-level improvements, but Intel has made pretty large improvements to the package. Higher average overclocks for enthusiasts, more cores per socket in every market including mobile, and lower-power parts for mobile are all improvements made during the tenure of 14nm Skylake releases. Note that AMD doesn't have a competitive mobile part that could hang with a two-generation-old Skylake 15W quad-core, let alone the new 10nm 15W Ice Lake quads that have graphics that match AMD's APUs :D.

When it comes to mobile, and entry-level desktops, SMT4 might actually be pretty useful in keeping cost and power draw down while still maintaining usability for productivity users.

Skylake-X is this year's architecture, with all the improvements that go along with it.

The base architecture is Skylake. That's five years old. Yes, Intel tacked on more SIMD, and yes that's useful, but the architecture is still Skylake, and it's still 14nm, and it still performs like Skylake outside of SIMD.

Including the AVX-512 support that you insist is indicative of performance for the other 99% of applications.

99% of what applications? I draw comparisons to the uptake of SSE and SSE2 for a reason: they were out for five or six years before they really came into their own and became a deciding factor for CPU performance. We're what, a few years in for AVX, and we're seeing developer interest and commercial uptake? Yeah, that's the same path that SSE took, and we have no reason to believe that the market will not put it to use. To wit: AMD is including successive AVX improvements in Zen. AVX is statistically relevant.

In fairness, it's not like Intel has done much other than add cores and slightly tweak their "five year old architecture."

Oh, that's entirely fair. Their 10nm stumble has been frustrating all around.

AMD would be behind if Intel didn’t misstep.

Or

Intel did not do their due diligence with security and are paying for bad engineering.

It's really both. Given AMD is only approaching parity with Intel's aging 14nm Skylake architecture, and that Skylake is only still around because Intel so resolutely fumbled their 10nm node, Zen's release is very much the best-case scenario that AMD would have been foolish to hope for. AMD is extremely lucky.

Bad for Intel -- very bad! -- but great for AMD and great for competition. AMD is producing CPUs that satisfy emerging demands as well as markets where Intel literally doesn't have the capacity to serve, and that's getting AMD a toehold of marketshare that they desperately need in order to stay competitive into the next decade. They also have TSMC wholly on board as a partner, not just for the production business but also for the publicity, and that's a big deal for fab-less AMD when trying to compete with Intel, especially once Intel gets their fabrication schedule back on track.

Agreed, although you would need a 2080 Ti to take advantage of the 9900K's higher gaming perf.

And this is why I recommend AMD by default. Unless someone is just looking to burn money for gaming, or they're actually interested in competitive gaming, the most important part of gaming performance is keeping frametimes flowing consistently and quickly enough to support the responsiveness that the average gamer is looking for, and AMD does that for less.
 
No, Intel has nothing better than the 9900 for the general public; laptop chips aren't better. If they were, then we would be seeing those instead of the factory-binned 5GHz 9900 as the latest and greatest.


Thanks to Jim Keller they will get a properly good design again; I'm certain of that. Dunno if their good design will be able to debut on something other than 14nm+++++plus ultra
 
I love how gamers don't need more cores... with all of these F&*(**& launchers today I sure need more cores to offload all of the extraneous BS my computer is saddled with.

So yes, I want more cores than the 4 I have today. Do I need 12 or 16 or whatever greater number for gaming? Nope. I'd be happy with a fast 8 or 12, really, if I can get my hands on it when it is time to refresh. Other than that... meh...

And truth be told, I want AMD right now for my next refresh. I regret getting the 7700K when I did, as it was when the new AMD CPUs were launching, and the ONLY reason I went Intel was product scarcity on the AMD side. To think I would have been able to just drop in a new CPU with a BIOS update and leave everything else the same if I had gone AMD then.

*weeps*
 
Just to clear up any misinformation, Skylake-X also includes a mesh interconnect as well as a different cache arrangement, in addition to extra SIMD capability. Saying Skylake-X is the same architecture as Skylake-S is a half-truth at best.
 
The comparison made above was SSE / SSE2 at introduction on the Pentium III / IV versus now. AVX is still relatively new across the board, but it provides significant benefits and is absolutely usable in real-world applications and developers are including it in their code, so it has statistical relevance now. Going forward, AVX performance will likely be a differentiator for compute code that is run on the CPU.

A few things... first, AMD supports all the AVX crap outside of 512; it's not like it's Intel-exclusive tech. AVX-512 is the only extension set exclusive to Intel... and to be honest it's a hot fucking mess; AVX-512 includes 19 or 20 types of instructions. The core commands are common, as is conflict detection... after that, Intel has 3 generations with 512, and every single one of them supports different bits of those instructions, with Knights Landing being the only parts that support a handful of the extensions at all.
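That fragmentation can be sketched like this (the subset lists below are approximate and not exhaustive; treat them as an illustration, not a reference):

```python
# Approximate AVX-512 subset support by product line (illustrative only).
avx512_subsets = {
    "Knights Landing": {"F", "CD", "ER", "PF"},
    "Skylake-X":       {"F", "CD", "VL", "DQ", "BW"},
    "Cannon Lake":     {"F", "CD", "VL", "DQ", "BW", "IFMA", "VBMI"},
}

# Only the foundation (F) and conflict-detection (CD) subsets are common to all.
common = set.intersection(*avx512_subsets.values())
print("Common to every AVX-512 part:", sorted(common))
```

Which is exactly why software that wants to use anything beyond the foundation instructions has to check each subset's CPUID flag separately.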

The only software that really makes good use of AVX and AVX2 is a handful of scientific calculation programs... Blender and a few x264/x265 encoders. (And I would say most people really using something like Blender aren't really using their CPU with Cycles.) All work on AMD and Intel... and AMD in general is giving you more cores, which is no small thing if that type of work really matters to you.

As for AVX-512, unless you're a scientist who has use for faster fast Fourier transform libraries, there isn't too much practical use today, and likely ever, for AVX-512. It's just the way it is; it doesn't accelerate anything that is useful to 99% of the population.

At the end of the day, the AVX-512 implementation is a mess... and outside of scientific use and video encoding it's not really useful. Sure, Intel using AVX-512 may be able to encode a video a bit faster... however, is an 8-core Intel chip using AVX going to be faster than a 12-core AMD part doing that? No. If you really have a need to do science on the cheap... or you're a heavy user of video encoding, you should probably be paying more attention to your GPU... and/or buying more real cores.
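That trade-off is easy to put in numbers. The 1.4x per-core AVX-512 gain below is a made-up figure for illustration, and the model ignores clocks, memory, and scaling losses:

```python
def encode_throughput(cores, per_core_speedup=1.0):
    """Idealized throughput: cores x per-core speed, everything else equal."""
    return cores * per_core_speedup

intel_8c = encode_throughput(8, per_core_speedup=1.4)   # 8 cores with an AVX-512 bump
amd_12c  = encode_throughput(12)                        # 12 plain cores

print(f"8 cores + AVX-512: {intel_8c:.1f}   12 plain cores: {amd_12c:.1f}")
```

Under those assumptions the extra cores still win; the AVX-512 part would need a 12/8 = 1.5x per-core gain just to break even.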

AVX-512 is no trump card. The stuff it would be good at accelerating is better done by GPUs.
 