Worst CPU's of all time?

Well they lack hyperthreading, too.
If you use Hyper-Threading on P-cores, each thread isn't much faster than an E-core, so in that sense, with HT enabled, you get similar performance out of each thread if you use all of them.
Personally, I do not like HT/SMT because of the performance issues this tech can cause.

The existence of E-cores does not cause any performance issues.
E-cores, unlike Hyper-Threading or AMD's SMT, do not require anything sophisticated to have threads scheduled efficiently.
There is an issue with E-cores on Windows 10 when an application doesn't use all hardware threads, but only because, with power-saving features throttling cores to 800 MHz, you need to specifically tell Windows which cores should be scheduled with higher priority; the task scheduler will treat them all the same and will schedule threads on both E-cores and P-cores. Windows (at least with the High Performance power plan) will, however, always schedule the foreground application onto the fastest cores, so if you make P-cores always clock higher than E-cores, you guarantee the P-cores get used first. So even with Windows 10 it is possible to have Raptor Lake and eat it too.

Windows 11 knows what E-cores are, so there is no need to tweak anything.
 
An E-core is a normal core, just slower than a P-core. It's still not terribly slow, and it's actually comparable in performance to Skylake CPUs.
On my 13600KF, with the E-cores overclocked from 3.9 GHz to 4.3 GHz, I got performance between a Core i7-9700 and a 9700K in Cinebench on the E-cores alone. Games also run fine on them.

Applications do not need special support for E-cores.
Not to some operating systems 😇. And not to some workloads. Depends on your scheduler. Mine doesn't acknowledge that they exist and can't tell the difference, which results in odd performance.
 
For the most part. Some people are finding it's better to disable them in a handful of tasks. Luckily there's a one-button toggle for them.
Not to some operating systems 😇. And not to some workloads. Depends on your scheduler. Mine doesn't acknowledge that they exist and can't tell the difference, which results in odd performance.
Not sure how all OSes and/or programs behave, but there might well be some unexpected behavior if the OS can't figure out that it should schedule threads to P-cores first. That would depend, I guess, on the kernel/system version. It is definitely something people experienced on Alder Lake in the past.
For that matter, the Win11 implementation of "Thread Director" also causes issues by default, and something similar might happen on Linux until it's configured differently.

The safest option software-wise is one core type, without HT/SMT enabled, and this is an unchanging truth.
 
Inspired by a thread of the opposite! I'll go first

Willamette Pentium 4 1.3 GHz - the slowest version of the worst version of the Pentium 4!

I personally am really fond of the old Pentium 4s, but the only ones I ever touched were between Northwood and Cedar Mill. I thankfully skipped over Willamette because I stretched my old 440BX with a Tualatin Celeron [email protected] for way too long, and then upgraded to an i875 with a Pentium 4C [email protected].

The 7 worst CPUs of all time​


https://www.digitaltrends.com/computing/worst-cpus-all-time/

 
The same could be said for most Intel/AMD CPU releases.
Most of them didn't even begin to approach the performance jump that happened from 486 to 586 - maybe only Pentium D to Core 2.


Not sure about actual availability, but back then distribution wasn't as quick as it is today, for sure.
By the time you got the computer press with Pentium benchmarks, you could probably plan a trip to a bigger city with a better-equipped shop, or find phone numbers and order a Pentium system and have it delivered in a few weeks.
The last CPU I got, a Core i5-13600K, I ordered on Thursday, its release day, and on Saturday before noon I was putting it together.

If only it always worked like that... then perhaps I could have had a PS5 much, much sooner 🙃


Yeah sure, you certainly had a Cyrix P166+, a 133 MHz CPU for Pentium boards. It being released on February 5th, 1996 doesn't mean its availability was as bad as Intel's, and if you got it three years earlier then maybe back then things ran backward 🙃

Even the Cyrix 5x86 was released much later than the Pentium - in 1995. Cyrix was very slow to join the 586 party in any shape or form.


Was VLB even endorsed by Intel to begin with?
VLB looks like a hacky "let's just connect cards to the CPU bus using this cheap connector from MCA and make the cards and CPU worry about 'stuff'" solution to mitigate a performance bottleneck, not a proper bus standard.
The same manufacturers which used it on 486 boards could, through glue logic chips, add it to Pentium systems, and some actually did. It quickly became obsolete when the industry moved to the superior PCI.
PCI was superior because it was a proper bus standard, not because of its bandwidth - cards could be as fast on VLB as they were on PCI.


Intel could not produce a much faster 486, as the process node at the time was limited to ~66 MHz
Intel could make a faster CPU for the 486 bus, like the original Pentium, with perhaps larger caches to mitigate the bus bottleneck. Such a thing would, however, be even more expensive and still not as fast as what we got, and there would be even fewer incentives to buy it. Especially later, when the process node allowed faster 486 variants, no one would have bought such a 'Pentium', just like no one bought the Pentium OverDrive.


So if I get the source code of any program or OS and find even one instance of an 8-bit variable in there, does that mean it is 8-bit software/an 8-bit OS? 🤯
Bitness is defined by the width of pointers...


...and Win9x used 32-bit pointers and a flat memory space, not 16-bit segmented memory.
Win32s was a workaround to let 16-bit Windows run 32-bit Windows software, and as far as the program was concerned, it ran on 32-bit Windows - because Win32s configured the CPU, and more specifically its MMU, for 32-bit operation.
You could literally run 32-bit software on 16-bit Windows and get the same performance as if you ran it on Windows NT. Say you had to add a million 32-bit numbers together. You could do that on a 386DX with Win32s at least twice as fast as on the same Windows using 16-bit code... unless perhaps you wrote your application as 16-bit but used assembler to force 32-bit calculations... but then again, would that truly be a 16-bit program?

Anyway, Windows 9x had lots of legacy 16-bit code and used BIOS interfaces, but that doesn't mean it was a 16-bit OS.

Doom - and I already wrote this - was 32-bit, as it used 32-bit pointers. The only 16-bit part of it was jumping to 16-bit real mode to issue BIOS/DOS calls.


Intel has no interest in making already-sold CPUs faster - also because it's ridiculously expensive to validate that changes didn't introduce new bugs, and easy to introduce them.
Security patches tend to make things slower, and the same is usually true for other bugfixes.

The true beauty of the Pentium Pro microcode update system, which is still used to this day, is that the end user doesn't really know whether they are running updated microcode or not. If not a BIOS update, then an OS update will update the microcode.


I call BS on this.
Windows 9x and DOS benchmarks indicate the Pentium Pro is faster than the Pentium MMX, which itself is universally faster than the Pentium without MMX - due to larger caches and a better instruction decoder, not so much because of MMX itself, which marketing mostly credited for the performance increase.

The whole "Pentium Pro is a Windows NT beast" reputation came from some specialized software running much faster on it - and with a 1 MB L2 cache you would expect some programs to show significant performance improvements.

Whenever something interesting is found, people - especially those who consider themselves experts - will put out explanations. Often these speculations, even if not valid, transform over time into facts, and here it seems they transformed into the "my PPro 180 was slower than a P133" false memory. It might have been slower than the Cyrix P166+ though, because that particular processor was apparently faster than light itself 🤪

And BTW, Cyrix had a more performant INT/ALU post-RISC implementation and per clock should have been faster than the Pentium Pro in some programs. But Cyrix didn't make their CPU with an on-die 1 MB L2, had lower clock speeds and a weak FPU, so you didn't hear about how great Cyrix CPUs were for Windows NT...


Ruined what exactly? 😪

The IBM PC was made to tap into the home computer market. It became a hit in organizations (IBM, but... cheap... relatively speaking - what's not to love so much that you'd computerize your whole office with it?), which in turn made it a hit for home users.
You thus cannot say home users ruined anything - home users were the intended target. Only price and availability made the PC dominate other markets.



Code is N-bit when its pointers are N-bit.
It's a little more complicated for various edge cases, like using 32-bit opcodes in DOS programs, but in this case it's an academic discussion - you would still need at the very least a 386SX to run such a program.

For your run-of-the-mill Windows 11 applications, you have those with 32-bit pointers, the so-called 32-bit applications, and those which use 64-bit pointers, also called 64-bit applications.
I could write and compile an x86_64 application that only uses 8-bit variables and pointers to 8-bit data, but it would still be a 64-bit application.
I disagree. The move from LGA1156 to LGA1155 was more significant than from 486 to 586. No one I knew in professional circles besides the CAD people had Pentiums at launch, because no one could really justify the ROI of such an investment.

I was there back in the day building systems so I had my finger on the pulse. And that's my experience.

Well, I know we had it because it was before I got my 1993 Altima SE.

I'm not talking about the bus - you can do whatever bus you want under the cover - and that's the point: why change the physical connector? It would be like making the x1, x4, x8, and x16 PCIe slots completely different designs - for nothing. But that's exactly what was done. A deliberate cut of being able to reuse things. I think that was the start of planned obsolescence in the PC industry, now that I think about it.

Intel did have the ability to make a DX2-100, since they did make a 486DX-50 that ran natively at 50 MHz. They just never did. The Pentium OverDrive never sold because of price. It was an excellent product; I used one to upgrade my cousin's Packard Bell.

It wasn't defined that way by those of us who lived through that era. Something was xx-bit when ALL the code was xx-bit.

You can write it a hundred times - Doom was definitely not 32-bit. It was only 32-bit after it was launched from DOS and the overlay was loaded. It didn't just jump from the prompt straight into 32-bit mode, as that didn't even exist.

Call whatever BS you want. I had all these systems in the same room and remember exactly what I was doing and how. Benchmarks can indicate whatever they want, too. Not all Pentium Pros had 1 MB of L2 - in fact, most still had 256 KB or at best 512 KB. Even today, finding PPros with 1 MB L2 is very hard. The PPro 180/200s were definitely slower than the 200 MMX in real-world 98SE usage, no doubt about it. And these were all IBM systems of nearly the same generation too, so there weren't many variables besides the CPU back then. I used them all with a 16-port KVM and bounced between all the systems all day long. And I did this for years. This wouldn't have stuck with me all these years if it wasn't a significant disappointment, which it was.

Ruined the whole industry!! The IBM PC was a machine that the home market couldn't afford, and it didn't do what the C64 and Atari and other machines did far better - games. So the IBM PC was for work, and work is serious, so it needs serious hardware and software. The second the computer became some sort of 'home electronics', it turned to absolute garbage - hard drive failures, terrible proprietary designs, marketing over substance - just ruined. Consumers ruin anything that's solid and good. It's why I have this stupid touchscreen on my phone instead of a nice and fast keyboard.

I'm not a programmer, so I don't care about the pointers. I remember what the industry said at the time, and my views reflect that view from that time. NT was the first 32-bit OS; everything else was not. The PPro was optimized for 32-bit; the other Pentiums were not.
 
It wasn't defined that way by those of us who lived through that era. Something was xx-bit when ALL the code was xx-bit.
I'm not a programmer, so I don't care about the pointers.
Whether a program is 16-, 32-, or 64-bit tends to be classed by the size of its addressable memory (pointer size) and the width of the instruction set. By both of those measures, Doom was running in 32 bits (and could address 4 GB of RAM). Intel CPUs have been 32-bit since the 386 in the mid-80s, and it was possible to go into protected mode over the OS's 16-bitness. The code itself is not necessarily xx-bit throughout (sometimes it assumes it is, which can be an issue or an optimization tool) and can still have 8-bit char/int operations even if it is a 32-bit program.

On some OSes Doom did launch a 16-bit launcher that initialized the DOS extender which made launching the actual 32-bit Doom possible, but when running on MS-DOS the video game Doom was running in 32 bits and was a 32-bit application; using DOS/4GW made it possible, in protected mode, to use the 32-bit CPU and more than 640 KB of RAM despite the OS limitation:

https://en.wikipedia.org/wiki/DOS/4G
DOS/4G is a 32-bit DOS extender developed by Rational Systems (later Tenberry Software).[2] It allows DOS programs to eliminate the 640 KB conventional memory limit by addressing up to 64[3] MB of extended memory on Intel 80386 and above machines.
 
The UNIVAC 1103. No Hyper-Threading. No GHz. Supported only 4.5 KB of memory and a 72 KB hard drive. Filled a room.

ENIAC was worse... Its power consumption was on the order of 150 kW, and the vacuum tubes used in it had to be replaced very frequently, with the hapless tech stuck replacing dozens of them every day.


Anyway, the CPU I considered the worst on its own was the Cx486SLC. Its sole saving grace was that it offered an upgrade path for unfortunate folks who were stuck with CPUs crippled by 16-bit external buses.

I lost track of the number of unfortunate folks who had these CPUs and were wondering why their systems flat-out stank when it came to trying to run games such as X-Wing, TIE Fighter, Doom, Doom II, etc.

The 32-bit-bus equivalent of that CPU, the Cx486DLC, was at least more functional, but even then their best DLC CPU, the 33 MHz part, performed worse than an Intel 486 SX-25.

To Cyrix's credit, though, the release of those CPUs did help drive down the prices of the true Intel and AMD 486 CPUs, but once I upgraded from a Cyrix 486DLC-33 to an AMD 486DX-40, it was as if someone had doubled the performance.
 
To Cyrix's credit, though, the release of those CPUs did help drive down the prices of the true Intel and AMD 486 CPUs, but once I upgraded from a Cyrix 486DLC-33 to an AMD 486DX-40, it was as if someone had doubled the performance.
My roommate at the time and I had several SLC and DLC systems. The best thing I can say about them is that they were dirt cheap. But performance was not good, especially in games, because the FPU was a disaster. X-Wing / TIE Fighter were BARELY playable and often would chug down to single-digit framerates (or it felt like it; back in the early 90s we didn't know exactly what framerates were). I finally found a shady deal on a real Intel 486DX2-66 and it was like 10x faster in TIE Fighter, even with the same pathetic Trident ISA VGA card I was using.
 
Pentium 60/66. All of them got recalled since they literally couldn't do some math calculations right in certain situations.
 
The UNIVAC 1103.
Not to be confused with the UNIVAC 1107/1108/1110, which were very cool machines: 36-bit word, ones'-complement arithmetic, very fast for their day. I had a lot of fun with the 1108 at CMU, and it was tons faster than the 360/67 that was the main comp center machine.

I'm still voting for the WE 32000 as the worst CPU of all time.
 
Literally every iteration of the Intel Atom processor is complete balls.

Yeah, I'm not seeing it. The Atom was the first processor that could legit get you 8 hours of use in a light and portable design. The later Atoms were dual-core and were paired with graphics accelerators that could even run emulated games really well.
 
Yeah, I'm not seeing it. The Atom was the first processor that could legit get you 8 hours of use in a light and portable design. The later Atoms were dual-core and were paired with graphics accelerators that could even run emulated games really well.
There was also a set that has been solid for low-power server nodes. Full VT support and all.
 
But this one was slow as blue balls… whatever that means. Lol

No, it was optimized for single-instruction compute, which Intel, of all companies, left branch prediction un-optimized for, and they told Microsoft how to un-optimize for it. It was a good architecture if you weren't running Windows.
 
Yeah, I'm not seeing it. The Atom was the first processor that could legit get you 8 hours of use in a light and portable design. The later Atoms were dual-core and were paired with graphics accelerators that could even run emulated games really well.
Not seeing what? They've always been so astronomically slow that battery life was irrelevant.
 
Not seeing what? They've always been so astronomically slow that battery life was irrelevant.

I do not see any reason to believe they were bad CPUs.

They were extremely capable CPUs, especially paired with coprocessors once they went dual-core.

Your memory of them is fabricated or lacking.
 
I do not see any reason to believe they were bad CPUs.

They were extremely capable CPUs, especially paired with coprocessors once they went dual-core.

Your memory of them is fabricated or lacking.
The 2 GB memory limit was kind of annoying, but those super-slow hard drives that every brand used made it impossible to have decent performance lol.
 
I do not see any reason to believe they were bad CPUs.

They were extremely capable CPUs, especially paired with coprocessors once they went dual-core.

Your memory of them is fabricated or lacking.
Or you're having a severe case of rosy retrospection.
 
I've been an IBM Power / AIX engineer since the mid-2000s. I'll take AIX over Linux any day of the week for enterprise servers. I've used every generation of IBM POWER since POWER4, and from AIX 4.3 to 7.3.

Nice to see some non-x86 input here!
That's mostly due to the OS and not the hardware. But the big servers had impressive hot-swap capabilities.
 
No it was optimized for single-instruction compute, which Intel, of all companies, made branch prediction un-optimized for, and they told Microsoft how to un-optimize for. It was a good architecture, if you weren't running Windows.
Oh, I know. So the only thing I'm really ripping my particular Atom for was the shit 3D graphics support it had. It was the worst of the worst. The GMA500 had a PowerVR chip on it, and none of its drivers could get the thing to have any passable performance in anything. Quake 3 was the lone exception. It actually hit very high framerates in that game, if I recall correctly.
 
Oh, I know. So the only thing I'm really ripping my particular Atom for was the shit 3D graphics support it had. It was the worst of the worst. The GMA500 had a PowerVR chip on it, and none of its drivers could get the thing to have any passable performance in anything. Quake 3 was the lone exception. It actually hit very high framerates in that game, if I recall correctly.

I had one of the later Asus Eee PCs with an SSD, 2 GB of RAM, and the Broadcom graphics accelerator, and that thing was faster and more responsive running JoliOS than my high-end Thinkbook. But it was only for productivity and lasting for hours when I was at events like CES or SHOT Show. And it only weighed like 2 lbs. The most gaming it was capable of was SNES and the like.

It came with Windows 8 or 8.1, and holy dogshit, what a difference. Even rolling it back to 7, it was so fucking sloooow. But if you used a netbook distro, holy shit, that thing just ran and ran all day long. I didn't even bring my charger with me; I'd just leave it at the hotel room and run even lighter.

I just checked: it only used 6.5 watts at max. That's impressive power consumption to this day, over a decade later. I've always been impressed by small devices that punch above their weight class. They're also cheap, so you can play around with a bunch of 'em.

I'm about ready to pass my Mac Mini on to my wife and replace it with one of these all kitted out, 16gb, slick NVME drive, the works: https://www.friendlyelec.com/index.php?route=product/product&path=69&product_id=292

I have an older model that I bought to tinker with, and it's very good so long as you don't multitask; even browsing can peg 4 GB of RAM these days. It's supposed to be a router and light server, and my buddy and I were so impressed with it that we spent a weekend getting drunk and watching FailArmy videos while he was visiting.
 
I dunno, I had an Atom-based Windows tablet. I think it was Win 8. While Windows indeed sucked - so slow - I assume the slow-ass eMMC drive wasn't helping. But it did emulators pretty well. The constant Windows updates were what drove me away from it.
 
I dunno, I had an Atom-based Windows tablet. I think it was Win 8. While Windows indeed sucked - so slow - I assume the slow-ass eMMC drive wasn't helping. But it did emulators pretty well. The constant Windows updates were what drove me away from it.
The one I'm specifically talking about is the GMA500-paired chip. In theory it was better than the GMA950. It could have been used to accelerate different applications and make the Atom a decent thing. But alas, Intel completely punted on driver support, and all we got was some experimental driver in a zip file that only worked for random things like Quake 3. The GMA950 was shit and we all knew it, but it at least worked with applications, even if slowly. With the GMA500 you never knew if an application would even run.
 
The one I'm specifically talking about is the GMA500-paired chip. In theory it was better than the GMA950. It could have been used to accelerate different applications and make the Atom a decent thing. But alas, Intel completely punted on driver support, and all we got was some experimental driver in a zip file that only worked for random things like Quake 3. The GMA950 was shit and we all knew it, but it at least worked with applications, even if slowly. With the GMA500 you never knew if an application would even run.
The damn thing was such a debacle that Microsoft had to make a long-term support version of Windows 10 specifically for it, because newer versions broke compatibility with the graphics driver and Intel was not gonna fix it... and there was a relative boatload of those systems out there in the wild.
 
The damn thing was such a debacle that Microsoft had to make a long-term support version of Windows 10 specifically for it, because newer versions broke compatibility with the graphics driver and Intel was not gonna fix it... and there was a relative boatload of those systems out there in the wild.
Oh shit. Shame on Intel, then. You would think they could have hired someone from PowerVR to write a damn driver for the thing.
 