N4CR
Supreme [H]ardness
- Joined
- Oct 17, 2011
- Messages
- 4,947
Zen+ already had higher IPC, might want to take that stock portfolio out of your eyeglasses.

That'd be a nope.
Zen+ already had higher IPC, might want to take that stock portfolio out of your eyeglasses.

Zen+ most certainly does not have higher IPC than something like the 9900k.
The difference is small enough not to matter. Intel just has the clocks. It is dumb to go with anything Intel over AMD at this point. The new TR CPUs are going to crush Intel. Sure, Intel still owns that small niche of low-res, high-fps gaming, but Intel has nothing else.
That's the most concentrated nonsense I've read in a long time.
But the real question is: how long would it take Microsoft to unfuck their scheduler to actually make SMT4 work?
Guys... we haven't seen performance parity between CPU vendors like this since... when Cyrix, AMD, and Intel were all in competition? OK, AMD and Intel really. Most of you young bucks probably don't remember Cyrix at all.
Started on Cyrix, actually- but that was back when AMD and Cyrix were copying Intel's x86 CPUs wholesale.
That is 100% untrue. Cyrix didn't have a licence until Intel gave them one to shut them up about Intel's thieving ways. AMD did for sure have some licensed clones early on (286 and earlier), but started deviating around the 386 days.
In fact Intel lost EVERY lawsuit they filed against Cyrix... because no Cyrix chip was ever a copy in any way at all.
Cyrix countersued and WON... because Intel flat out ripped off Cyrix's advancements when they designed the Pentium Pro; you know, the design that they have basically just been adding to for over 20 years now. Cyrix ended that suit with a simple cross-licence deal where Intel agreed to stop trying to sue Cyrix in return for Cyrix allowing Intel's theft. Our absolutely shit legal system is responsible for the fall of Cyrix. Cyrix just wanted the legal bills to end... never mind that they were in the right every single time. They should have told Intel to pound sand and forced them to pay royalties; in hindsight Intel would have owed Cyrix a royalty on every processor they have made since.
Cyrix's contribution to CPUs has been reduced to platitudes in wikis. lol
From the Pentium Pro Wikipedia page:
"The Pentium Pro incorporated a new microarchitecture, different from the Pentium's P5 microarchitecture. It has a decoupled, 14-stage superpipelined architecture which used an instruction pool. The Pentium Pro (P6) featured many advanced concepts not found in the Pentium, although it wasn't the first or only x86 processor to implement them (see NexGen Nx586 or Cyrix 6x86). The Pentium Pro pipeline had extra decode stages to dynamically translate IA-32 instructions into buffered micro-operation sequences which could then be analysed, reordered, and renamed in order to detect parallelizable operations that may be issued to more than one execution unit at once. The Pentium Pro thus featured out of order execution, including speculative execution via register renaming. It also had a wider 36-bit address bus (usable by PAE), allowing it to access up to 64 GB of memory."
No mention of Cyrix suing Intel over their theft and Intel settling. The 6x86 wasn't just first to those ideas; Cyrix invented them. Intel is great at copying: they copied their basically complete speculative out-of-order design from Cyrix, and they were able to copy x86_64 thanks to previous court cases where AMD and Intel agreed to share. Intel surprisingly has very few original ideas... but with a massive war chest and 1000s of engineers they have been able to refine things and win the fab wars. (Well, till recently on the fab wars anyway.)
That is 100% untrue.
I started on a Cyrix 486. That's true. It was a copy of Intel's 486. That's also true. That's what I stated, and it's 100% true.
I didn't say anything about licensing.
"Let us compare the execution units of AMD's Ryzen with current Intel processors. AMD has four 128-bit units for floating point and vector operations. Two of these can do addition and two can do multiplication. Intel has two 256-bit units, both of which can do addition as well as multiplication. This means that floating point code with scalars or vectors of up to 128 bits will execute on the AMD processor at a maximum rate of four instructions per clock (two additions and two multiplications), while the Intel processor can do only two."

Chiming in a little late, but I did work a lot on SMT when I was a CPU designer (it's been a while, but the concepts are the same).
Increased scaling with increased threads / SMT does not mean that SMT is a better implementation per se. We have to remember what SMT is: a tool to increase procunit (ALU and friends) utilization.
When you add more top end threads, and it increases performance, what does that mean? Well, the simplest explanation is that the cores were not being fully utilized before. That can be due to a large number of factors, but that core (pun) point remains true. There were idle resources, else you would still not increase throughput with more threads on the top end.
I have not studied current architectures in enough depth to say what the real issues are for either one. It simply reflects that Intel's default thread-to-core ratio is more balanced in terms of execution needs. Intel may be much faster on the top end. Zen may be much faster on the bottom (and thus more starved).
It's a balancing act. If you get no increase in performance from SMT, that means your procunits are always fully saturated, and perhaps you should add more to handle per-thread ILP. On the other hand, you don't want to go nuts with procunits which are largely idle, and thus require a myriad threads feeding it to be competitive.
As always, please benchmark with the things which most closely approximate (or actually are) your expected workload.
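The utilization argument above can be sketched with a toy model. This is a made-up core, not any real microarchitecture: a fixed number of issue slots per cycle, threads that stall periodically (standing in for cache misses), and SMT letting a second thread fill the slots a stalled thread leaves idle.

```python
# Toy SMT model: a core issues up to ISSUE_WIDTH instructions per cycle.
# Each thread stalls for one cycle after every STALL_EVERY issues (a crude
# stand-in for memory waits). With more threads, one thread's stall cycles
# can be filled by another thread's ready instructions.

ISSUE_WIDTH = 4          # execution slots ("procunits") per cycle
INSTRUCTIONS = 1000      # instructions each thread must retire
STALL_EVERY = 5          # issues before a thread takes a one-cycle stall

def run(num_threads):
    """Return (cycles, utilization) to retire all threads' instructions."""
    remaining = [INSTRUCTIONS] * num_threads
    since_stall = [0] * num_threads
    stalled = [False] * num_threads
    cycles = 0
    issued_total = 0
    while any(r > 0 for r in remaining):
        cycles += 1
        slots = ISSUE_WIDTH
        for t in range(num_threads):
            if stalled[t]:
                stalled[t] = False   # stall lasts one cycle
                continue
            while slots > 0 and remaining[t] > 0:
                remaining[t] -= 1
                issued_total += 1
                slots -= 1
                since_stall[t] += 1
                if since_stall[t] == STALL_EVERY:
                    since_stall[t] = 0
                    stalled[t] = True
                    break
    return cycles, issued_total / (cycles * ISSUE_WIDTH)
```

With one thread the stalls leave slots empty; with two threads utilization rises and total cycles come in well under double, which is the whole point the post makes: SMT gains are a sign the units were idle before, not that the second thread is magic.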
Exactly, and the NEC V30 was ~18% faster than the Intel 8086, and both CPUs were easy drop-in upgrades.

Actually WAY before then, even. I had a NEC V20 that was 20% faster than the Intel 8088 it replaced.
"Let us compare the execution units of AMD's Ryzen with current Intel processors. AMD has four 128-bit units for floating point and vector operations. Two of these can do addition and two can do multiplication. Intel has two 256-bit units, both of which can do addition as well as multiplication. This means that floating point code with scalars or vectors of up to 128 bits will execute on the AMD processor at a maximum rate of four instructions per clock (two additions and two multiplications), while the Intel processor can do only two."
From my previous link: in normal (non-high-precision) loads AMD can run up to 4 simultaneous operations while Intel is limited to 2 in the best case. In higher-precision workloads they can both do 2 at a time. So (generalizing) AMD's worst case is equivalent to Intel's, and its best case is twice as good. In reality it appears to have a lead in SMT usage. Obviously there is a lot more to this, and it's very oversimplified, but keeping the instruction pipeline/buffers full and data available is the hard part.
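The quoted unit counts can be turned into a quick back-of-the-envelope calculation. This is only a sketch: it assumes one vector instruction per unit per clock, models too-wide vectors as fractional throughput, and ignores scheduling, memory, and everything else that actually matters.

```python
# Peak FP instructions per clock from the quoted unit configurations:
# Zen 1/+: four 128-bit FP units (2 add + 2 mul);
# Skylake-era Intel: two 256-bit units, each able to add or multiply.

def peak_fp_instructions_per_clock(unit_widths_bits, vector_width_bits):
    """Each unit issues one vector op per clock if the vector fits;
    a wider vector occupies a unit for extra cycles (fractional rate)."""
    return sum(min(1.0, w / vector_width_bits) for w in unit_widths_bits)

amd_units = [128, 128, 128, 128]   # 2 FADD + 2 FMUL pipes
intel_units = [256, 256]           # 2 combined add/mul pipes

# 128-bit (SSE-width) code: AMD 4/clk vs Intel 2/clk
# 256-bit (AVX-width) code: AMD 2/clk vs Intel 2/clk
```

This reproduces the post's generalization: at 128-bit width AMD's best case is twice Intel's, and at 256-bit width the two are equivalent.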
Exactly, and the NEC V30 was ~18% faster than the Intel 8086, and both CPUs were easy drop-in upgrades.
I'm still running an NEC V30 to this day, and the performance boost over the 8086 was noticeable, especially in FTP transfers.
This was waaaay before both VIA and Cyrix, back in the early 1980s.
Yes, it had the ability to translate 486 instruction calls into micro-ops their RISC chip could compute. Had they simply etched 486 instructions into their CPU, they would have lost their lawsuit against Intel.
And the AMD 486 I used?
Perhaps there is something to what you're saying -- however, it is also true that Cyrix and AMD were copying x86 designs from Intel early on. I did own an AMD 686 with 3DNow! later on, too. It was okay, but AMD sucked at FP until the Athlon, and then they decided to suck again with Bulldozer.
they literally invented modern x86.
This is probably taking it too far. Yes, decoding x86 into micro-ops to enable instruction-level parallelism and out-of-order execution may have predated the Pentium Pro, but it was also really the only way forward. 'RISC' just means reducing the instructions and thus the input complexity of the CPU ISA; everything is RISC these days.
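The micro-op cracking being argued about can be illustrated with a toy decoder. The instruction tuples and micro-op names here are hypothetical, not real x86 encodings or semantics: the point is just that one read-modify-write CISC instruction becomes several simple load/store-style operations a RISC-ish core can schedule independently.

```python
# Toy CISC-to-micro-op cracking: "add [mem], reg" (a read-modify-write
# instruction) is split into load / ALU-op / store micro-ops, while a
# register-to-register instruction passes through as a single micro-op.

def crack(instr):
    op, dst, src = instr
    uops = []
    if dst.startswith("["):                    # memory destination -> RMW
        addr = dst.strip("[]")
        uops.append(("load",  "tmp", addr))    # tmp <- mem[addr]
        uops.append((op,      "tmp", src))     # tmp <- tmp OP src
        uops.append(("store", addr,  "tmp"))   # mem[addr] <- tmp
    else:                                      # register destination
        uops.append((op, dst, src))
    return uops

# "add [0x1000], eax" cracks into three micro-ops:
# [('load', 'tmp', '0x1000'), ('add', 'tmp', 'eax'), ('store', '0x1000', 'tmp')]
```

Once instructions are broken down like this, the independent micro-ops (the load from one instruction, the ALU op from another) are what get reordered and renamed, which is the out-of-order machinery the Pentium Pro quote describes.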
I agree, but the design decision has implications that give it the lead in some cases and equivalency in the others. I work with plenty of MCUs that do 1 instruction per cycle (OK, some complex instructions can take more), but those tend not to be anywhere near fast enough for today's desktops. Overlapping instructions and deep pipelines are necessary to keep busy and to work on multiple opcodes simultaneously, but sometimes you have stalls due to waiting on results (aka some things can't run in parallel). These are the instances where SMT makes sense; or, if you're using one specific unit but not another, it allows multiple threads to run alongside each other instead of just stalling and leaving the pipeline underutilized. So it does depend on the CPU, decoder, memory system, cache, and probably a ton of other things, but these things will happen in most workloads (just to a differing degree). As mentioned though, AMD has a more flexible arrangement and tends to lead in SMT gains (in most workloads that can actually use that many threads; games aren't taxing 12 cores, forget 24 threads).

Yes - I think we're agreeing. An increase in procunit efficiency only matters if it is consistently saturated. SMT is a tool to assist an otherwise unbalanced design.
To clarify - I do not say "unbalanced" as a pejorative. It's a totally valid approach to have bulkier cores and then adjust the thread ratio to optimize efficiency. It's also valid to have a design which is more optimal closer to 1:1 T:C if you can more closely match the throughput of fetch+decode with compute/writeback.
As we've said in many such threads - isn't it nice to see competing effective designs again? Who wins? WE DO!
Intel never built a RISC processor prior to the Pentium Pro.
They were planning to jump from CISC x86 chips to EPIC (Explicitly Parallel Instruction Computing). Itanium began development in 1989.
AMD threw a wrench in the works by purchasing NexGen and then quickly getting K7 (Athlon) out the door.
I don't think I'm overstating things... Cyrix and NexGen made the first "modern" CPUs.
If Intel had their way, though, we would have basically been using souped-up 486s till the 2010s. lol
Zen+ had decent IPC -- Zen 2 has a little more. Neither eclipses Intel's aging Skylake arch.
, and they're essentially *two nodes behind, and about four years behind where they themselves planned to be.
[*marketing nodes]
-> brought the old Keller guy from where?
To be fair, Jim Keller does whatever the eff he wants on his terms, he bounces back and forth from a lot of high profile tech companies and has worked for some more than once...
But this confuses me, AMD had a licence for CISC silicon, Cyrix didn't, but didn't need it... Then why do people keep saying AMD's x86 license is not transferrable or some such? If AMD is really RISC silicon with translation, then anyone can do that? Anyone can go to AMD, licence their shit, and have an AMD junior CPU that is x86, just not CISC... I mean, what is an x86 license anyway?

AMD had an actual licence. As I say, ya, those were basically clones... not only that, they were built under contract from Intel to serve low-cost markets. Their 386 was 100% a copy; they were contracted by Intel to make them. Their 486 was their own design, but again leaning heavily on their Intel licence. They paid royalties. (Back then there was no guarantee that x86 was going to be the big boy forever... IBM POWER, MIPS, etc. could easily have replaced x86 if they got the right push, so Intel was happy to have low-cost options around to head off MIPS especially.) The last pure AMD cloner design was the K5... and although it had some "enhancements" and design choices, some of which were of course to reduce production costs, they were still paying royalties.
Cyrix never had a licence, and never paid Intel one red cent. Their CPUs were NOT standard x86 chips. They were RISC chips that broke the x86 instruction calls down so they could be computed by their simpler RISC core. Turns out that was the future of x86: today neither Intel nor AMD executes pure x86 instructions in silicon as the 8086 etc. did. They have a speculation engine that takes those calls and breaks them down into easier-to-compute chunks, which are crunched by the RISC-style cores at the heart of modern Intel and AMD chips.
AMD got a boost FPU- and IPC-wise with K6... which is when they bought out NexGen. NexGen, like Cyrix, was using a speculation engine to run x86 code on a RISC core. K6 was mostly a relabeled NexGen design... later K6s got some AMD stuff included (3DNow!). The K7 (Athlon) was the first chip AMD designed with both their own engineers and the NexGen folks they got in that acquisition (1.4 billion in 2019 dollars).
So ya, back then Cyrix never ever copied an Intel design; they built a RISC chip that could execute x86 commands. It's not just a legal difference... they literally invented modern x86. Around the same time NexGen was also working on much the same tech, and between Cyrix and NexGen they basically had patented the two most practical ways to create a speculation engine. (There is a reason why L1/L2/L3 cache all sounds so much the same even if you're talking about POWER, ARM or RISC-V.) Intel got hold of the patents they needed to build the Pentium Pro by first just straight-up stealing... and their lawyers made it OK by agreeing to basically leave Cyrix alone forever. (Which was dumb on Cyrix's part in hindsight... they should have pushed that suit to conclusion and they could have lived on forever with a nice juicy cut of every Intel processor sold for 20 years.) AMD got into the same type of tech with a purchase. As I was saying, we should all be thankful Intel wasn't smart enough to outbid AMD for NexGen... had they done that, Intel would have a monopoly on x86.
But this confuses me, AMD had a licence for CISC silicon, Cytrix didn't, but didn't need it... Then why do people keep saying AMDs x86 license is not transferrable or some such? If AMD is really RISC silicon with translation, then anyone can do that? Anyone can go to AMD, licence their shit, and have an AMD junior CPU that is x86, just not CISC... I mean what is an x86 license anyway?
That has me a little confused.
Hopefully soon; looking at 2020 though, hopefully first quarter, but that's just my optimism speaking.

I don't recall if the Pentium was RISC, but it did decode and do out-of-order execution. The Pentium Pro improved that a bit and brought it to a full 32-bit pipeline.
We'll probably go back to EPIC / VLIW. The idea pushes parallelism onto the compiler, which is why it was so difficult to get optimized code for the Itanium. Code that was optimized though, flew. And compiler design has come a long way; for AMD to be considering SMT4 for a consumer processor, they need to be pretty confident in software optimizations to make a difference, as otherwise their cores will stall on cache misses and context switches. Compiler optimization is the future of computing performance increases. Perhaps we should look for Linux kernel commits to see where AMD is going with SMT4?
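The "parallelism in the compiler" idea behind EPIC/VLIW can be sketched as greedy list scheduling into fixed-width issue bundles. The op names, dependence graph, and three-slot bundle width below are made up for illustration; real compilers use far more sophisticated scheduling, but the division of labor is the same: the compiler, not the hardware, finds the independent operations.

```python
# VLIW/EPIC sketch: the compiler packs independent operations into
# fixed-width bundles so the hardware needs no out-of-order logic.
# Greedy list scheduling over a tiny (made-up) dependence graph.

BUNDLE_WIDTH = 3

def schedule(ops, deps):
    """ops: operation names in program order; deps: {op: set of ops
    it depends on}. Returns a list of bundles, each issued in one cycle."""
    done, bundles = set(), []
    remaining = list(ops)
    while remaining:
        # An op is ready once everything it depends on has completed.
        ready = [o for o in remaining if deps.get(o, set()) <= done]
        bundle = ready[:BUNDLE_WIDTH]
        bundles.append(bundle)
        done.update(bundle)
        remaining = [o for o in remaining if o not in done]
    return bundles

ops = ["load_a", "load_b", "load_c", "add_ab", "mul_c", "store"]
deps = {"add_ab": {"load_a", "load_b"},
        "mul_c": {"load_c"},
        "store": {"add_ab", "mul_c"}}
```

Here the three loads share one bundle, the dependent add and multiply share the next, and the store issues last: three "cycles" with zero dynamic scheduling hardware. The catch the post mentions is exactly this: if the compiler can't prove independence, the bundles go out half-empty.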
K7 was equivalent to P6 with a tad more raw FPU, which while nice, wasn't a big leap. The leap was that AMD was competitive at all. Of course, they had supply problems, and getting a Slot A board for the K7 working was a shitshow. I don't know many that bothered. But Socket A, everyone jumped on that one. That's when Intel took a left turn and instead of giving us Tualatin CPUs (updated Pentium IIIs), they went with the RAMBUS-equipped Willamette. You were still lucky to have a stable AMD system, but you could afford it, and it was competitive.
Notably, K7 and its successors didn't improve in terms of IPC nearly as much as Intel improved the P6 once they decided to shift back in earnest. Literally the first new P6 had people clamoring for more, and once Core 2 hit, AMD was done until Zen+. They still haven't pulled out of second place in IPC.
If Cyrix's CPUs had been that good, they'd have taken the market. They weren't. I ran them. They worked, but they were weak. Same for AMD up until K7 -- I used its predecessors. Also weak. Ran plenty of K7s too though.
This I don't get. Intel has been innovating like crazy, and hasn't stopped. AMD bought the team that brought them the K7; they refined it a bit, brought the memory controller onboard, expanded the registers to 64bit at a strategic point in time, shrunk it and pumped up the clocks a little, but ultimately abandoned it for something... worse. They had their own Netburst moment, right after they kicked its ass. Excuse me if I don't put much faith in them. I want them to succeed, but I also spend so much time just getting their damn boards to function too -- and Ryzen has been no different. Ran tons of ATi GPUs, cried a little inside when AMD bought them, ran a few -- and spent plenty of time with several AMD GPU fuckups. And I still buy them and recommend them.
But I'm under no illusion just how AMDs brilliance and release shitshows go hand in hand. I don't expect SMT4 to be any different, even as I understand that there's plenty of room there for them to carve out more of a performance niche.
And while they do that, remember that Intel has been putting out their top-end desktop CPUs in the US$300 - US$350 range for over a decade now. They've increased clockspeeds, increased IPC, and increased the performance and stability of their GPUs while expanding their capabilities beyond what even AMD has done -- and they've led in Linux kernel development. Intel has worked both to increase performance and to improve price / performance even as AMD took a left turn into irrelevance for a decade. Literally competing with themselves, and yet still serving customers.
AMD still doesn't have an answer for laptops -- they're two or three generations behind Intel in mobile -- and yet that's where Intel has put their newest technology, not just new CPUs, but also new graphics, chipset, and wireless capabilities. And I'm over here wishing for a full-blooded AMD APU and accompanying featureset for my next laptop upgrade. Unfortunately, I've watched AMD long enough to know not to be too optimistic.