Number 3 on this "top 5 worst processors of all time" list is technically somewhat a lie

The article in the link below is technically a lie about the Intel Itanium being one of the 5 worst processors of all time, because Itanium was never meant to be a desktop processor anyway. It didn't get support because programmers were somewhat too lazy to reprogram some, if not all, of their programs for the new EPIC (Explicitly Parallel Instruction Computing) architecture, although I can understand why to some degree, considering the amount of time needed to do so. Every program has a fundamental flaw somewhere anyway. As for Windows 10, that may be why Microsoft skipped the name Windows 9: third-party software, if not Microsoft's own, had problems with version checking in its code.

Itanium was, and still is, mostly used in high-end servers and the remaining Intel-based mainframe-class machines, as far as I know. AMD simply beat Intel to the table with x86_64 (aka AMD64), keeping it backwards compatible just as Intel had done with its past processors. Besides, my former Linux instructor, who has a PhD in Electrical Engineering, said Intel's way of keeping x86 backwards compatible with past x86 architectures all these years was flawed anyway. As for the other processors this article mentions, people may have had no other choice based on budget. I know Itanium is at least still used in high-end servers, because I spoke with HP about HP-UX and they sent me links about HP-UX as well as their high-end servers featuring Itanium 9500 and 9700 processors, which weren't that bad in terms of specs.

However, I am choosing not to buy these or HP-UX, because HP-UX is closed source and will only run on HP hardware, meaning that if the machine breaks down, moving the operating system, with or without the original hard drives, is going to be difficult. The same goes for IBM's AIX, which must run on IBM POWER systems, and it is also a possible problem with Solaris, which now pretty much requires Sun's (now Oracle's) SPARC processors, and possibly even AT&T's UNIX, which they said they still use but couldn't or wouldn't say what hardware it runs on. Software portability is the problem with so many other processors and processor-based systems too, but that is not exactly what this thread is about.

The only problem I see with the Itanium now is that it lacks as many cores as the current Xeon Scalable parts, if not some previous Xeons. Also, the available motherboards may lack enough expansion slots for significant graphics processing to take that load off the CPU.

https://m.hardocp.com/article/2017/08/17/top_5_worst_cpus_all_time
 
The arch wasn't well suited to what we need from processors though, which was a big part of the reason it didn't get traction. It has nothing to do with developers being lazy, and man, that's becoming a real tired excuse for poor designs.

The VLIW design is something a lot of (mostly academic) people fell in love with, mostly for theoretical reasons that wouldn't, and many of us argued couldn't, work well in general-purpose computing. The peak performance was always predicated on having a Magic Compiler which solved all your problems. It would magically extract parallelism which wasn't there from your source code, the chip would be fully utilized all the time, and we'd sing songs. But of course that didn't, and can't, work.
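To make that concrete, here is a small C sketch of my own (the function names are made up) contrasting code a compiler really can schedule in parallel with the dependency-chained code that makes up much of general-purpose software:

[CODE]
#include <stddef.h>

/* Independent iterations: a compiler (or an EPIC/VLIW scheduler) can overlap
 * these freely because no iteration depends on another. */
void scale(float *dst, const float *src, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] * k;
}

/* A linked-list walk: every load depends on the previous one, so there is no
 * parallelism for the compiler to "extract", no matter how wide the machine
 * is. A lot of general-purpose code looks more like this. */
struct node { struct node *next; int value; };

int sum_list(const struct node *p) {
    int total = 0;
    while (p) {              /* serial chain: p -> p->next -> ... */
        total += p->value;
        p = p->next;
    }
    return total;
}
[/CODE]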

And yes, the architecture was absolutely expected to be in desktops. Intel wanted to get the world away from x86 to something they'd own entirely with no sharesies, and desktops were a part of that plan. To be fair, at the time it did look like the x86 was a dead end.

Since then, x86 (including the x64 variants) has evolved in a better way, learning from some of the missteps. The addition of SIMD instructions means x86 can perform very well in tasks which ARE explicitly parallel (basically, VLIW when you need it), while retaining a traditional instruction set for things that are not (most of your code). To help ensure execution units are not idle, SMT is deployed, allowing a core to reach high utilization.
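As a rough illustration of "VLIW when you need it", here is a hedged C sketch using SSE intrinsics: four floats per instruction where the data really is parallel, ordinary scalar code everywhere else (the function and variable names are mine, not from any particular codebase):

[CODE]
#include <immintrin.h>
#include <stddef.h>

/* Hypothetical example: scale an array by a constant using SSE, four floats
 * per instruction. The vector loop is the "explicitly parallel" part; the
 * scalar tail is ordinary x86 code. */
void scale4(float *dst, const float *src, float k, size_t n) {
    __m128 vk = _mm_set1_ps(k);
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 v = _mm_loadu_ps(src + i);          /* load 4 floats     */
        _mm_storeu_ps(dst + i, _mm_mul_ps(v, vk)); /* multiply, store 4 */
    }
    for (; i < n; i++)                             /* leftover elements */
        dst[i] = src[i] * k;
}
[/CODE]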
 
What about the Pentium Overdrive processor? I would certainly nominate that guy for the list.
 
Too bad there is not a thread about this article already.
 
The article in the link below is technically a lie about the Intel Itanium being one of the 5 worst processors of all time, because Itanium was never meant to be a desktop processor anyway.
Technically, where exactly did we say that it must be a desktop processor? I am not sure you know what "technically" means.
 

Itanium never was a poor design in my opinion, except that I could never get my hands on one to prove it, because every attempt I made failed, maybe for good reason. The only flaw I saw was in the motherboard designs, which lacked enough expansion for graphics processing, so the machine couldn't lend extra power to things like Bitcoin mining or protein folding from home.
 
My Linux instructor at the community college I attended, before the university I'm at now, said Intel's x86 architecture was flawed and ran hotter than RISC. I countered by asking: if Intel's and AMD's CISC (Complex Instruction Set Computer) microcode became corrupt, how would the machine continue to function properly, compared to RISC (Reduced Instruction Set Computer) designs that only knew part of what they needed to do in hardware and had the rest programmed in software, at least as I understand it? That got me looking at Sun's (now Oracle's) SPARC processors, with 8 threads per core, but they carry an extremely high price of around $30,000, or you pay for proprietary computers whose configurations aren't always what the customer wants, however well the processors themselves may be designed. The same goes for any other computer or processor meant to do a job faster and more efficiently than a person. That always made the Intel, AMD, and even Cyrix options more desirable. My real argument here is that Itanium and Itanium 2 got passed over because they required full, or almost full, reprogramming of the old software that people and businesses cling to because it still does the job, at least for now. But for us to make progress, and for computers to count time beyond their current limits, hardware needs to be upgraded and software needs to be migrated to newer machines that can keep counting beyond the older machines' capabilities.

Itanium was the first 64-bit PC-class processor I knew of, before AMD officially offered the Athlon 64 to replace 32-bit x86 processors with x86_64, or AMD64 as AMD coined it (as opposed to Intel's EM64T, or IA-64). Itanium was supposed to replace all of that with the EPIC (Explicitly Parallel Instruction Computing) instruction set, but AMD more or less stole that momentum from Intel with the Athlon 64, even though many Athlon 64 motherboards only supported up to 2 GB of RAM rather than going beyond 4 GB, which was supposed to be the main benefit of 64-bit computing. AMD always offered better floating point than Intel back in the Athlon XP days, but I became frustrated with integrated CPU graphics because the performance was light-years away from what graphics cards offered. I guess the ultimate goal is to get one processor that does it all and fits into a small enough portable device for us to communicate and live our lives, though high-end computing is still necessary, except maybe mainframes, which were too bulky, too difficult to move, maintain, and work with, and too restrictive. Interfacing with small computers such as a smartwatch can be restrictive too, and it's a hassle to charge one every night and keep it secure, but the trade-off in capabilities, freedom, and global time synchronization might be worth it, except that I could hardly believe what time of day it already was and I still hadn't gotten certain things done before it was dark out and time for bed.

My instructor said we measure time as though it doesn't exist, but time is considered another dimension, the fourth dimension, in math and science, especially physics. We measure it to know how long things take, or how many hours of daylight and darkness we have. Whatever the reason, we need ways to measure and keep track of it, and computers fulfill that need, along with every other form of timekeeping, including mechanical watches. People can make fun of daylight saving time all they want; we know we're not actually saving time, we're shifting working hours, and kids' school hours, into daylight so they don't have to walk home in the dark. It isn't just for people who still use candles either; making the most of a limited number of daylight hours is a nearly universal problem.

It seems, though, that my real concern is that Itanium support is now officially ending, and most of us end users will never get to see its real benefit. Also, AMD's Epyc and Threadripper will now compete with Intel's Core i9 and Xeon W as well as Xeon Scalable, if not the lower-end Celeron, Pentium, Core i3, Core i5, Core i7, and Xeon E3. Xeon W seems to be just a replacement for the socket 1356 Xeons, while Xeon Scalable seems to be its own thing, but it's difficult to find boards and other components for, and it really pushes the limits of a typical home office electrical outlet: roughly 1,500 watts at most on a 15-amp circuit, or about 2,400 watts at most on a 20-amp circuit, if the maximum number of internal devices is to be achieved for whatever configuration is desired. That should be at least close to correct; I looked it up, and jonnyGURU has stated in the past that the practical maximum for a 15-amp circuit is about 1,500 watts, which you don't want to exceed, to avoid fires or tripping a breaker. And if you do trip a breaker, you should find out why, because those limits are hard to get around.
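As a sanity check on those outlet figures, here is the usual rule-of-thumb arithmetic, assuming 120 V circuits and the common 80% continuous-load guideline; the 15-amp result lands near the ~1,500 W quoted above and the 20-amp peak at 2,400 W:

[CODE]
#include <stdio.h>

/* Rule-of-thumb only: assumes 120 V circuits and the common 80% guideline
 * for continuous loads. Check local electrical code for the real limits. */
int main(void) {
    const double volts = 120.0;
    const int breakers[] = { 15, 20 };
    for (int i = 0; i < 2; i++) {
        double peak = breakers[i] * volts;   /* P = I * V                 */
        double continuous = 0.80 * peak;     /* 80% continuous-load limit */
        printf("%2d A circuit: %.0f W peak, ~%.0f W continuous\n",
               breakers[i], peak, continuous);
    }
    return 0;
}
[/CODE]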

Intel Xeon Scalable will use socket 3647, Xeon W and Core i9 will use socket 2066, and the lower-end Celeron, Pentium, Core i-series, and Xeon E3 will use socket 1151 for now, except in laptops and other mobile devices. So I guess Xeon W on socket 2066 is the equivalent replacement for the socket 2011 Xeon E5, but finding somebody who actually sells one seems to be extremely difficult. Unless I find a seller and decide I like the processor, I'm going to pass and either move to a different platform or stay where I am until I find, or can afford, a product I like, or can hopefully help make one that I do. I discovered that the kernel I made with borrowed code already has a shell, but it doesn't do anything except bring up a cursor without a prompt; it's my first kernel with keyboard support, built on borrowed code that took whoever wrote it years to get right. Complaining that it doesn't do something is easy, but the real difficulty is figuring out how to make it do what we want ourselves, which is also why Stallman made a very good argument against proprietary software, security aside, since people didn't respect each other's files before the year 2000 and still might not.

As I said, though, Itanium never seemed to be meant as a desktop processor; it just got studied and used for high-end servers, and as far as I know the technology never got implemented into lower-end solutions in any way. To me, it wasn't one of the worst processors ever manufactured. It did suffer from the same problem as the first Pentium 4s, though: the oversized package and circuit board used to deliver the processor to the CPU socket, like Willamette had and like Northwood and Prescott fixed for the Pentium 4. I'm not sure whether they ever corrected this with the Itanium.

The Nostalgia Nerd, I believe, said the Pentium M and Celeron M had PAE (Physical Address Extension), which lets a 32-bit processor address more than 4 GB of physical memory, and the NX bit, which marks memory pages as non-executable. But processors earlier than the Core 2 Duo had difficulty reporting these features to benchmark or CPU-identification programs such as CPU-Z, PC Wizard, or SiSoft Sandra, even the Pentium 4-based Xeon MPs and DPs. So Intel did at least try to implement these features in earlier products, but failed to do so completely for some reason, just as Intel failed to deliver Itanium to everyone's preference compared to AMD's Athlon 64 and had to introduce the Pentium 4 with EM64T and the Pentium D.
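For what it's worth, here is a minimal sketch of how an identification utility reads those bits, assuming the GCC/Clang <cpuid.h> helper on x86; the bit positions are the documented CPUID ones (PAE at leaf 1, EDX bit 6; NX at leaf 0x80000001, EDX bit 20):

[CODE]
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper, x86 only */

/* Reads the documented CPUID feature bits: PAE is leaf 1, EDX bit 6;
 * NX (execute disable) is leaf 0x80000001, EDX bit 20. */
int main(void) {
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("PAE: %s\n", (edx & (1u << 6))  ? "yes" : "no");

    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        printf("NX:  %s\n", (edx & (1u << 20)) ? "yes" : "no");

    return 0;
}
[/CODE]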

Finally, that's why Itanium never caught on in low-end computing: most programmers didn't want to reprogram for its completely new instruction set, because it lacked backwards compatibility with x86. Similarly, Microsoft didn't want to force programmers to fix old programs whose Windows version checking, according to CNET and others, would have needed a fix (such as a tighter string or regular-expression match), so they skipped Windows 9 and called it 10, whether or not that was the real reason, and whether or not people can technically tinker with Windows 8.1 or 8 and turn it into a customized, unofficial Windows 9. I worked with my C++ instructor at this university to think of a solution to this problem, and hopefully my solution was correct. I still have it documented in a file on my storage devices, but I don't have a full program that implements it together with the rest of the code needed to make the version checking work.
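For illustration only, here is a hypothetical sketch of the kind of version check that gets blamed in that story; it is not code from any real product, just the reported pattern of a too-broad prefix match:

[CODE]
#include <stdio.h>
#include <string.h>

/* Hypothetical legacy check of the kind CNET and others described: a prefix
 * match on the product name would lump a "Windows 9" in with Windows 95/98.
 * Illustrative only, not taken from any real product. */
static int looks_like_win9x(const char *product_name) {
    return strncmp(product_name, "Windows 9", 9) == 0;   /* too broad */
}

int main(void) {
    const char *names[] = { "Windows 95", "Windows 98", "Windows 9", "Windows 10" };
    for (int i = 0; i < 4; i++)
        printf("%-10s -> %s\n", names[i],
               looks_like_win9x(names[i]) ? "treated as 9x" : "ok");
    return 0;
}
[/CODE]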
 
Itanium is not a failed processor. It does not have hardware support for this fancy 'Spectre' program that is all the rage in 2018 :)
It can also easily be configured to run in lockstep.
The issue with it getting any desktop market share is the lack of any desktop computer, motherboard or whatever. It was only released in servers and some workstations (from SGI, I believe), but you could not go to a computer store and get one to tinker with. How was it supposed to get popular? For all intents and purposes it was like the other RISC processors on the market (except maybe PowerPC), except it was from Intel, and that made people speculate a lot about it... and see where speculation can take us ;)

Itanium performance was just weak. Definitely not what Intel had predicted... though at the time everyone seemed to predict that RISC designs would perform much better than they actually did, so that was rather the norm. Around the same time they also designed NetBurst and predicted it would be a monster... the NetBurst Celerons should be on that list! That was just a plain terrible processor line!
Itanium was not a good performer even with the ridiculous amount of L3 cache they added in a desperate attempt to make it less terrible. Just see how much improvement adding L3 made to NetBurst with the Gallatin core and you will see how important it is.

I would leave it on this list. It was after all a product that failed to deliver.

I would not add the AMD K5 or Cyrix. The K5 was late, but it was cheap and handled the everyday stuff of the time well. Quake ran well, even better per clock than on the K6. It was the Pentium that performed exceptionally well in FPU, not the other way around. Same with Cyrix, which performed ALU/INT operations extremely well. Cyrix was killed by factors other than the processor being bad. More precisely, the Cyrix MediaGX was the reason Cyrix went to where eternal calculations are performed, and not even because it was bad, but because it was an exceptionally good product for the time whose potential was never utilized... what a shame :( :( :(

Obviously the NetBurst Celerons should be on the list, if not the whole NetBurst line. Intel had the Pentium M, which, if they had put it into desktops as the Pentium 4 instead of NetBurst, would have crushed everything... which they eventually did, and it did crush everything. You could mount a Pentium M on some desktop boards and it showed its performance superiority over NetBurst.

Bulldozer was the only pick I have no objection to. It is/was quite terrible. AMD took a step backward to the NetBurst way of thinking and lost a lot of money and face in the process. Performance was abysmal and power consumption even worse.

Pentium 60/66... really?
Intel offered replacements and 'big business' used them. Intel lost a lot of money. Most people who owned these processors were not affected in the slightest and could play Quake just fine. And that is pretty much all there is to it. Still a great CPU.

With three processors out (K5, Cyrix, Pentium) and one in (NetBurst), I would add:
- Intel 8088 - it was garbage. There were a number of much better-performing processors available at the time (it was not that hard XD), including the great Motorola 68000, and who knows, if that had been chosen instead of the trashy 8088, the world might be a much better place right now...
- VIA C3 - its performance compared to anything else at the time was abysmal. It was based on butchered WinChip technology, not on Cyrix, even though VIA bought Cyrix. The WinChip was the worst Socket 7 CPU, but it did not itself deserve to be called the worst CPU ever: it was a low-priced product at a time when people could still get away with a 486DX4, and while its performance was maybe the worst you could get for the platform, it was at least adequate. The VIA C3, on the other hand, was just a nonsensical product with completely butchered performance, designed purely to hit the highest clock possible for marketing reasons. It was supposedly low power, but they cut so many performance-enhancing features that it was like a highly clocked 486 with a maybe even worse FPU (it ran at a fraction of the core clock), and it wasted power through sheer clock speed. They could have made a better-performing product with similar power consumption by being smarter about the design. After all, they owned Cyrix, and with it all the tech Cyrix used to make the MediaGX, so it was not as if Cyrix tech could not be scaled down for low power consumption. They chose the wrong base design and optimized it for clock speed, and like all the other processors that did the same, theirs deserves a spot on this list.
 

That was the nightmare of the Socket 7 and earlier socket world: all those competing chips and processors you had to change jumpers on the board for, among other problems. Intel and AMD broke away, making their own chipsets and removing jumpers, which made their solutions exclusive to their own products, but that actually made reliability better, or at least seem better, and probably made programming for their products easier. However, out of all the things you mention, gaming is your number one concern compared to other important computing uses, which I can understand, because this is an enthusiast gaming forum and not a corporate or business discussion forum for enterprise or workstation use. Also, I was stating that it didn't make it to the desktop, which does fall within the market of enthusiasts and gamers.

You might think, or be right, that the Motorola 68000 was a better processor, but it was not an option for IBM PC users and was not an Intel solution. Therefore, the Motorola 68000 was not chosen by IBM PC users despite possibly being better, though it did get used in a number of other products, just not the IBM PC or Windows machines.

Finally, the AMD K5 was the only option my cousin could find for the Compaq Presario he got from his uncle at the time, because we couldn't find the Pentium 63 MHz or 83 MHz OverDrive that I now have for myself but can't get an operating system installed on, due to a bad floppy drive and IDE controller, even with an IDE-to-CompactFlash adapter, and now the computer won't even POST regardless of whether I use my trick of physically resetting it with the reset switch.

When I tried to get an Itanium I ended up not being able to pay for it, and ran into more problems, as with all these other processors and systems, so I had to wait for the right opportunity, and I still don't have an Itanium because now it's too expensive, and I never gave it a second thought to look on eBay for a reliable used modern Itanium either. However, I have my doubts about whether I want an Itanium system now, because it might still be too expensive for a single low-budget user to obtain, let alone supply power for, but if I could prove that it can still be a good solution, I would. One more thing: I don't have a book on assembly for the Itanium either, which is what I'd need most in order to support it, and I don't know where to find one, even though I could probably find it on the internet.
 
That was the nightmare of the Socket 7 and earlier socket world: all those competing chips and processors you had to change jumpers on the board for, among other problems. Intel and AMD broke away, making their own chipsets and removing jumpers, which made their solutions exclusive to their own products, but that actually made reliability better, or at least seem better, and probably made programming for their products easier. However, out of all the things you mention, gaming is your number one concern compared to other important computing uses, which I can understand, because this is an enthusiast gaming forum and not a corporate or business discussion forum for enterprise or workstation use. Also, I was stating that it didn't make it to the desktop, which does fall within the market of enthusiasts and gamers.
At that time not many users bought separate parts. Most PCs were set up by PC manufacturers and computer stores.
If jumpers scare someone, that is a good indication the task is beyond that person's technical abilities.

As for Itanium, if that is a list of PC gaming processors, then it should not be on the list at all!

You might think, or be right, that the Motorola 68000 was a better processor, but it was not an option for IBM PC users and was not an Intel solution. Therefore, the Motorola 68000 was not chosen by IBM PC users despite possibly being better, though it did get used in a number of other products, just not the IBM PC or Windows machines.
How could users choose anything?
It was IBM that chose x86 instead of the 68K when they made the first IBM PC.

Finally, the AMD K5 was the only option my cousin could find for the Compaq Presario he got from his uncle at the time, because we couldn't find the Pentium 63 MHz or 83 MHz OverDrive that I now have for myself but can't get an operating system installed on, due to a bad floppy drive and IDE controller, even with an IDE-to-CompactFlash adapter, and now the computer won't even POST regardless of whether I use my trick of physically resetting it with the reset switch.
What is the point of this, then?
How does your story translate to the K5 being a bad CPU?

When I tried to get an Itanium I ended up not being able to pay for it, and ran into more problems, as with all these other processors and systems, so I had to wait for the right opportunity, and I still don't have an Itanium because now it's too expensive, and I never gave it a second thought to look on eBay for a reliable used modern Itanium either. However, I have my doubts about whether I want an Itanium system now, because it might still be too expensive for a single low-budget user to obtain, let alone supply power for, but if I could prove that it can still be a good solution, I would.

It only supported the x86 ISA via some kind of emulation in microcode, which was so slow that Intel intended to release (or did release, I'm not sure) a software JIT-based x86 emulator... which should give you an idea how bad its 'native' x86 support was :ROFLMAO:
I can buy an HP rx4640 with two dual-core 1.6 GHz Itanium 2s with 6 MB L3 (A9732A), 8 GB of RAM and a 146 GB SAS drive for merely $300, which I consider relatively cheap. Used rack-mounted Xeon servers with similar specs from that era would cost at least as much.
It doesn't support IA-32 though, so no, it would make zero sense to own one just to run benchmarks and laugh at how bad that poor microcode emulation thingy is.

These used servers almost always come with a PSU, so no issues with supplying power to it...


It goes way beyond my budget for rare, old, obsolete computers, especially since it is a rack mount, which would look strange as a standalone computer. Besides, that Itanium 2 does not even have hardware IA-32 support, so I could not even run any normal benchmark on it for fun.
These servers usually come with a PSU, and the one I mentioned comes with two, so power is not an issue.

One more thing: I don't have a book on assembly for the Itanium either, which is what I'd need most in order to support it, and I don't know where to find one, even though I could probably find it on the internet.
Why would you ever want to program a VLIW processor in assembly? :confused:
For fun?
And what is this bit about supporting it? Supporting whom?
There are companies which use Itanium-based servers; ask them whether they hire experienced assembly programmers, if you are one. If not, then become an assembly god first :hungover:
 
Itanium was the first 64-bit PC-class processor I knew of, before AMD officially offered the Athlon 64 to replace 32-bit x86 processors with x86_64, or AMD64 as AMD coined it (as opposed to Intel's EM64T, or IA-64).

DEC (Alpha) and MIPS were earlier along with others.


Also, you might get more traction if everything wasn't "My instructor said".
 
It feels semi-religious, which it actually was at the time too. Now that that design has more or less been proven wrong, I don't understand the love for it.

The reality is the Itanium had some nice design features, but just wasn't that great a CPU overall - largely because a VLIW based design is almost always a bad idea for a general purpose processor.

OP: To the idea of programming an Itanium processor in assembly... if you think that's a path to success, you're either missing the point the designers had (that magic compilers would handle the absurdly complex problem of parallelism extraction), or you have no idea how much complexity you are about to hit. I assume the latter. Or you're just writing SIMD programs for Itanium, which isn't the point either.

Best of luck, but I can save you heartache unless you're just in it for fun - it doesn't end well.
 

Thanks, this explains more than I knew about the Itanium, but why do some manufacturers still use it for high-end servers, and how does AMD Epyc differ?
 

I didn't know that you don't generally program or optimize for the Itanium in assembly because it's a VLIW processor, so thanks for letting me know. I'm still optimistic about Itanium, though, but I can clearly see it only has a place in high-end servers or possibly Intel-based mainframes.
 
Thanks, this explains more than I knew about the Itanium, but why do some manufacturers still use it for high-end servers, and how does AMD Epyc differ?

Manufacturers don't really still use it for anything per se... it is EOL. Kittson is the last release. As for comparing it to EPYC: they are completely different architectures and instruction sets. They aren't similar at all.
 
Itanium - Revolutionary, Flat Memory Model

X86-64 - Evolutionary. Extension. EM64T in Intelese.
 
It is not that you do not use assembler on Itanium; you can certainly do that on any architecture, especially if you want to squeeze the last ounce of computational power out of it.
Intel did exactly that to optimize benchmarks showing off the power of the EPIC architecture.

The whole idea of RISC was to sacrifice the niceness of the assembler that CISC designs provide in exchange for a simpler, more 'symmetric' hardware design that is easier to apply advanced optimization techniques to in hardware and easier to write compilers for. Evaluating the performance of code on a true RISC processor is easier than on a CISC processor because RISC is more predictable: instructions have the same length and take the same time to execute, which simplifies things greatly; all you need to track is how many instructions you use, which you can see directly in your code. On CISC you also have to account for instruction length and for how many clock cycles each instruction actually takes, which makes the evaluation trickier.

EPIC is an example of a 'RISC' that shouldn't even be called RISC. It is VLIW, not RISC. Intel called it RISC for marketing reasons, to make investors more comfortable spending big fat bucks on it. RISC was proven technology with great success, performance-wise, while VLIW was, to put it simply, an Intel experiment.
What you gained with RISC you lost with VLIW, and you got an even more terrible assembler that requires you to really understand program flow and how the processor works in order to optimize properly. Making compilers for it was, by design, much harder. The idea was that over time compilers would appear that could do this job very efficiently. That proved to be a much harder task than they first predicted.

At the same time, other processors moved to post-RISC designs. x86 packed in tons of hardware-level optimization techniques. Code could be written in a lazy way, directly reading and writing memory without caring about registers, and the processor would do its internal magic and optimize it for you at run time, with each new processor doing a better job (except NetBurst and Bulldozer, that is :dead: ). All of this let them win in many real-world scenarios. RISC designs were forced down this route too, and had to add so many instructions that the name 'reduced instruction set' became kind of nonsensical, and the design gains from the symmetric approach offered less of an advantage, if any.

And why do you care about Itanium?
It makes no sense whatsoever.
More than anything, this Itanium assembler business sounds like a strange form of trolling, if not like you've lost your mind :ROFLMAO:
 
Pentium 60/66... really?
Intel offered replacements and 'big business' used them. Intel lost a lot of money. Most people who owned these processors were not affected in the slightest and could play Quake just fine. And that is pretty much all there is to it. Still a great CPU.

I agree. The (original) Pentium doesn't deserve to be on this list. If anything it deserves to be on the top 5 list of all-time greats. It paved the way for Intel to market entirely new generations of future processor lines. The Pentium branding was a marketing juggernaut for Intel. That branding is still relevant today, almost 25 years later.

The Pentium was also a huge milestone for Intel in terms of technological achievement. Granted, it wasn't the first superscalar CPU around at the time of its launch, but it was the first superscalar design that could also run x86-based software. Back in those days, this was a huge boon. Existing software not only ran, it simply ran better. Nobody back in the early 90s was complaining about how their Pentium-based PC ran things. It just wasn't a thing.

Every x86-based cpu that followed it owes some degree of thanks (even the almighty PPro).

The FDIV bug was (and, per the OP's article, apparently still is) so blown out of proportion. It's one of those historical anecdotes where, if you make a product that is so successful, any mark upon it, no matter how trivial, will get critics and naysayers laser-focused on it with the goal of trying to bring said product down off its pedestal. Nobody outside the scientific community, which was extremely niche back then, would have been affected by it. The errata would only ever spit out an inaccurate calculation once in several billion operations. In early-90s terms, that was statistically incomprehensible. The behavior was also predictable: not every floating-point division would lead to an inaccurate result; you had to have the right inputs under the right conditions. Even then, you could trust the floating-point value you got back to 4-5 significant digits. So your banking software would only be losing a thousandth of a penny *gasp* once in a blue moon. TechRadar did an article in 2014, celebrating the 20th anniversary of the FDIV bug's discovery, that gives good historical perspective on it.
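For anyone curious, here is the widely circulated FDIV self-test as a small C sketch; the constants are the ones that made the rounds in 1994, and on an affected chip the residual reportedly comes out around 256 instead of (essentially) zero:

[CODE]
#include <stdio.h>

/* The often-quoted FDIV check: 4195835/3145727 is one of the divisions the
 * flawed lookup table gets wrong past the 4th-5th significant digit, so the
 * residual below is roughly 256 on an affected original Pentium and
 * essentially 0 (only tiny rounding noise) on a correct FPU. */
int main(void) {
    double x = 4195835.0;
    double y = 3145727.0;
    double residual = x - (x / y) * y;
    printf("residual = %g (about 256 on a flawed Pentium, ~0 otherwise)\n",
           residual);
    return 0;
}
[/CODE]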
 
You might think, or be right, that the Motorola 68000 was a better processor, but it was not an option for IBM PC users and was not an Intel solution. Therefore, the Motorola 68000 was not chosen by IBM PC users despite possibly being better, though it did get used in a number of other products, just not the IBM PC or Windows machines.

The 68k was provably a much better architecture, and the 68000 was far superior to the 8086. And the ONLY reason IBM didn't build the PC around the 68000 was that they had a policy at the time of not using parts that weren't already being sold in bulk (the idea being that bulk sales would prove a part's reliability). The 68000 had only *just* been released at the time, so it was disqualified on that point alone.

x86 is a horrible stopgap CPU architecture that we just can't be rid of. Itanium was sadly too early; developers weren't thinking of going parallel yet.
 
We (I) learned on the Motorola 68K. Easy as pie, multitasking programs - a joy to work with. Swapping memory blocks in and out at the beginning and end of your time slice was pretty cool. I did some x86 __asm blocks in C for some joystick coding later, but never had to bother with full assembler routines ever again. I didn't have to deal with the x86 memory model since the C compiler abstracted it.

68k Motorola
 
The 68k was provably a much better architecture, and the 68000 was far superior to the 8086. And the ONLY reason IBM didn't build the PC around the 68000 was that they had a policy at the time of not using parts that weren't already being sold in bulk (the idea being that bulk sales would prove a part's reliability). The 68000 had only *just* been released at the time, so it was disqualified on that point alone.
Price was definitely another reason, if not the major deciding factor.
The 8086 was much cheaper than the 68000, and IBM didn't even use the 8086 but the 8-bit-bus variant, the 8088, which allowed all the system buses to be 8-bit - a significant cost reduction.

I bet that if not for the IBM PC, Intel wouldn't even have developed the 80286 and, like everyone else, would have moved to some RISC architecture. They even had those; the 80960 and 80860 were the part numbers.
 
Poor initial 68K availability killed its technological edge; that's why the x86 was picked. Everyone says the best technology wins, yet this is another example that proves otherwise. Sometimes being in the right place at the right time wins.
 
That was a good read on the wiki. Once I saw Digital...
Yup, especially since MS-DOS is basically a shameless CP/M rip-off.
Windows is also 'inspired' by Digital Research's GEM, which was way ahead of anyone else in GUI technology.
I am not knowledgeable about the GEM API, but I bet I could just pick it up and program GUI applications for it exactly the same way I would program WinAPI applications. I would not even be surprised if the function names were the same...
 
I can see how the person who made this list is right about number 3, but consider that 32-bit computers were supposed to become effectively obsolete after 2038, according to Numberphile, because a signed 32-bit counter of seconds can't count the date past that point. They are also kind of right because Intel expected programmers to reprogram all their software. To be fair, all programs need backwards compatibility and so do processors, but if the software is backwards compatible, the processor should already be backwards compatible by reducing the number of bits used, or switches turned on, in the processor.
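A minimal sketch of that 2038 limit, assuming the classic signed 32-bit seconds-since-1970 counter:

[CODE]
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* The 2038 limit in a nutshell: a signed 32-bit count of seconds since
 * 1970-01-01 runs out at 2^31 - 1, which lands on 19 January 2038. */
int main(void) {
    int32_t last = INT32_MAX;              /* 2147483647 seconds          */
    time_t t = (time_t)last;               /* fits in any time_t          */
    printf("last representable second: %s", asctime(gmtime(&t)));

    uint32_t wrapped = (uint32_t)last + 1u;            /* defined wrap    */
    printf("one second later, a 32-bit counter reads: %d\n",
           (int32_t)wrapped);              /* typically -2147483648, i.e.
                                              back in December 1901       */
    return 0;
}
[/CODE]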

In the early days 8 bits was a lot; now processors are 64-bit, multi-core chips that can process a great deal at once and automate tasks faster too. Modern computers are so fast, but back in the day some of those older computers seemed fast as well. The first Intel Itaniums were revolutionary in that they were Intel's first truly modern 64-bit processors; the Pentium Pro had already pushed physical addressing beyond 32 bits (to 36 bits with PAE), but it failed to catch on with a lot of users, even though it had a lot of on-package cache, up to 1 MB.

The original Socket 7 Pentium computers, meanwhile, had to use external L2 cache on the motherboard because the CPU itself only had a small L1 (16-32 KB). The Pentium name still exists in modern socket form, socket 1151 being the most recent, but now the Core i-series and Xeon reign supreme. Anything from the original x86 days belongs in a museum, not in the hands of kids; if you're going to give a kid a computer, it should be a new one that is safer to use, since modern processors have on-die security features that older processors either lacked or didn't document. The Pentium with MMX had pretty much the first named processor feature set that I know of (MMX stood for multimedia extensions), and newer processors add many more features and capabilities.
 