Pushing the boundaries of performance and security to unleash the power of 64-bit computing

juanrga

Arm highlights its next two generations of CPUs, codenamed Matterhorn and Makalu, with up to a 30% performance uplift.

Matterhorn (Cortex-A79?) will be the last to support the 32-bit ARM ISA; the next big cores, like Makalu, will support only the 64-bit ISA.


[Image: performance projection graph from the Arm blog post]


https://www.arm.com/company/news/20...rity-to-unleash-the-power-of-64-bit-computing
 
It means even Android will have to follow iOS in its strict-as-fuck native app support... ten years after Apple did it.

But I don't foresee Intel/AMD processors doing this to IA-32 mode anytime soon.
 
It means even Android will have to follow iOS in its strict-as-fuck native app support... ten years after Apple did it.

But I don't foresee Intel/AMD processors doing this to IA-32 mode anytime soon.

Intel and AMD cannot. x86-64 is an extension to x86.
 
Intel and AMD cannot. x86-64 is an extension to x86.

They may not, and most likely won't. But "cannot" is not the right term. x64 defines the operating environment: default operand size, stack size, pointer size, predication.
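To make that concrete, here's a quick sketch (just my own illustration, not anything from a spec): the same C source compiled for 32-bit vs. 64-bit x86 gets different default pointer and integer widths, which is most of what the mode "defines" as far as software is concerned.

```c
/* Quick illustration: the operating mode sets the default widths software
 * sees. Compile with "gcc -m32" vs "gcc -m64" and compare the output. */
#include <stdio.h>

int main(void)
{
    printf("sizeof(void *) = %zu\n", sizeof(void *)); /* 4 under -m32, 8 under -m64 */
    printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 under -m32, 8 under -m64 (LP64) */
    printf("sizeof(size_t) = %zu\n", sizeof(size_t)); /* 4 under -m32, 8 under -m64 */
    return 0;
}
```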

The ecosystem keeps legacy around. People want to have the option to boot archaic old stuff.

How much penalty do you really think x86 pays, if any, for these extra mappings? I don't know, but the logic in me says it's negligible.

Well we'll know in 2022.
 
Intel and AMD cannot. x86-64 is an extension to x86.

No, they could get rid of the older instructions and whatever additional hacks they have in microcode.

The backwards-compatible overlapping register files don't mean that you have to use them.

The reason I don't see that happening is because there is way too much legacy 32-bit Windows code out there... That, and the IA-32 memory model is a lot less of a hack than segments were.
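For anyone who never had to deal with that, a rough sketch of the old real-mode segment:offset math the flat IA-32 model replaced (my own toy example, illustration only):

```c
/* Real-mode "segment:offset" addressing: physical = segment * 16 + offset.
 * Twenty address bits and lots of aliasing -- the hack IA-32's flat,
 * paged model got rid of. Illustration only. */
#include <stdio.h>

static unsigned long real_mode_phys(unsigned short seg, unsigned short off)
{
    return ((unsigned long)seg << 4) + off;
}

int main(void)
{
    /* Classic example: the color text-mode video buffer at B800:0000. */
    printf("B800:0000 -> 0x%05lX\n", real_mode_phys(0xB800, 0x0000)); /* 0xB8000 */
    printf("B000:8000 -> 0x%05lX\n", real_mode_phys(0xB000, 0x8000)); /* same byte: 0xB8000 */
    return 0;
}
```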

It's still going to happen, but not for at least a decade. Microsoft will coordinate with hardware manufacturers when they are ready (much like they forced USB 3 for Windows 10, and the latest version of Windows now requires AHCI). They also officially cut 32-bit versions of Windows, leaving them one step closer to pure AMD64.
 
No, they could get rid of the older instructions and whatever additional hacks they have in microcode.

The backwards-compatible overlapping register files don't mean that you have to use them.

The reason I don't see that happening is because there is way too much legacy 32-bit Windows code out there... That, and the IA-32 memory model is a lot less of a hack than segments were.

It's still going to happen, but not for at least a decade. Microsoft will coordinate with hardware manufacturers when they are ready (much like they forced USB 3 for Windows 10, and the latest version of Windows now requires AHCI).
Well, that and the amount of die space the "extra" 32-bit decoding takes has become rather negligible. I mean, they could just force 64-bit UEFI support and drop 32-bit app support, but so much legacy makes it difficult, and as mentioned, it has minimal impact on their CPU/die space anyway, so why? Intel tried to make a clean break once; that turned out well. I honestly don't think it'd be as bad now, since most apps are able to compile to 64-bit. If companies need legacy support, you'd just have legacy CPUs that cost more, and those that don't need legacy would get a slight price break, or something to that effect.
 
The way things are implemented, these 'older instructions' really add very little additional complexity and overhead. You have to pay the cost for supporting this complex CISC instruction set up front, and that's the limiter as far as an ISA goes.

As juanrga said, this is because x86-64 is really a pretty straightforward extension of the x86 ISA. You're already paying the cost of an ISA which was designed for a very different target (instruction density, primarily). It really is still very much the x86 ISA, but with wider registers and more of them (and yes, assurances of some baseline SIMD support, etc). Still a bit of a goofy mess, viewed through the lens of today.

It's actually pretty dang amazing how well the current teams have done in making "x86" CPUs the dominant platform.
 
The best analogy I can give is this:

The penalty you pay for holding on to 32-bit land in 64-bit land is similar to the penalty you pay for SSE land in AVX land, or for that matter NEON land in SVE land.
 
I honestly don't think it'd be as bad now, since most apps are able to compile to 64-bit. If companies need legacy support, you'd just have legacy CPUs that cost more, and those that don't need legacy would get a slight price break, or something to that effect.

It's not about the die savings for Intel, it's about saving Microsoft billions of dollars in support costs.

Every other successful OS bit transition in history has eventually cut off the lower bits. Even Linux is looking to move to pure 64-bit (but as you might expect, there are some holdouts)...

https://betanews.com/2019/06/24/can.../2019/06/24/canonical-backpedal-ubuntu-linux/

Windows has already officially ditched Windows 16-bit support, and it's only a matter of another decade before they ditch 32-bit.

And really, current AMD64 CPUs could use an update anyway, as the initial implementation is capped at 48 bits. No time like the "embrace pure 64-bit Windows" transition for the processor to get more address bits and default-boot into AMD64 mode. It's not like servers will ever miss the 32 bits!!

Today we are not ready, but within a decade the remaining holdouts should be satisfied with 3rd-party emulation. That is what MS did with DOS support after XP (and you don't see anyone bitching about moving to DOSBox, or XP under a VM).


Every time they add a new feature to Windows 10, they have to go through a mountain of old code tests (and they have to ensure that they add new test cases for any old code the new code could impact), and it's no surprise that they are starting to make more critical errors than in previous updates. Making old 32-bit programs someone else's problem would cut their verification test complexity, and give them a shorter turnaround time on new features.
 
Making old 32-bit programs someone else's problem would cut their verification test complexity, and give them a shorter turnaround time on new features.
Can we force MS to add 16 bit support back in then? I think it might be a good thing to massively slow down Windows "feature" updates.
 
Can we force MS to add 16 bit support back in then? I think it might be a good thing to massively slow down Windows "feature" updates.

MS does have a currently supported way to run legacy 16-bit (Win 3.1) programs: install a 32-bit version of Windows (preferably in a VM).

MS has never supported Win16 in 64-bit versions of Windows. It's not something new; it was never there, going all the way back to the initial versions of Server 2003 x64 and XP x64 (which had a lot more in common with Server 2003 than with 32-bit XP).
 
It's not about the die savings for Intel, it's about saving Microsoft billions of dollars in support costs.

Every other successful OS bit transition in history has eventually cut off the lower bits. Even Linux is looking to move to pure 64-bit (but as you might expect, there are some holdouts)...

https://betanews.com/2019/06/24/can.../2019/06/24/canonical-backpedal-ubuntu-linux/

Windows has already officially ditched Windows 16-bit support, and it's only a matter of another decade before they ditch 32-bit.

And really, current AMD64 CPUs could use an update anyway, as the initial implementation is capped at 48 bits. No time like the "embrace pure 64-bit Windows" transition for the processor to get more address bits and default-boot into AMD64 mode. It's not like servers will ever miss the 32 bits!!

Today we are not ready, but within a decade the remaining holdouts should be satisfied with 3rd-party emulation. That is what MS did with DOS support after XP (and you don't see anyone bitching about moving to DOSBox, or XP under a VM).

Every time they add a new feature to Windows 10, they have to go through a mountain of old code tests (and they have to ensure that they add new test cases for any old code the new code could impact), and it's no surprise that they are starting to make more critical errors than in previous updates. Making old 32-bit programs someone else's problem would cut their verification test complexity, and give them a shorter turnaround time on new features.

The only thing updating current CPUs to support more addressing bits would do is increase the cost of the sockets even more by adding more I/O pins. 48-bit addressing is enough for 256 TB of RAM, and the biggest servers still have a ways to go before getting there. An increase in addressing on server parts will probably happen sometime this decade, but probably only to 52 or 56 bits, not the full 64.
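Putting quick numbers on that (my own back-of-the-envelope arithmetic, nothing more):

```c
/* Back-of-the-envelope check of the address widths mentioned above.
 * 48 bits -> 256 TiB, matching the figure quoted; 52 and 56 bits leave
 * plenty of headroom. Which of those bits actually get wired out to
 * pins is a separate, physical question. */
#include <stdio.h>

int main(void)
{
    const int widths[] = {48, 52, 56};
    for (int i = 0; i < 3; i++) {
        unsigned long long bytes = 1ULL << widths[i];
        printf("%d-bit addressing: %llu TiB\n", widths[i], bytes >> 40);
    }
    return 0;
}
```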
 
How much penalty do you really think x86 pays, if any, for these extra mappings? I don't know, but the logic in me says it's negligible.

Well we'll know in 2022.

An article presented at WIVOSCA 2013 mentions that microcode alone can take up to 20% of the area in a low-power x86 core.

About the overall tax of legacy x86 on a server-class core, Marvell's Gopal Hegde says:

"Intel designed its cores for use in [systems] from laptops and desktops all the way to servers. It’s not optimized for servers. We have no x86 legacy, like 32-bit support and things like that,” said Hegde. “We are able to optimize our code, and our core area is significantly smaller [as a result]. Just to give you an idea, in the previous generation, if you look at ThunderX2, compared to AMD or Skylake, for the same process node technology [we get] roughly 20% to 25% smaller die area. That translates into lower power. When we move to 7nm with ThunderX3, our core compared to AMD Rome’s 7nm is roughly 30% smaller."

https://www.hpcwire.com/2020/03/17/marvell-talks-up-thunderx3-and-arm-server-roadmap/
 
No, they could get rid of the older instructions and whatever additional hacks they have in microcode.

The backwards-compatible overlapping register files don't mean that you have to use them.

The reason I don't see that happening is because there is way too much legacy 32-bit Windows code out there... That, and the IA-32 memory model is a lot less of a hack than segments were.

It's still going to happen, but not for at least a decade. Microsoft will coordinate with hardware manufacturers when they are ready (much like they forced USB 3 for Windows 10, and the latest version of Windows now requires AHCI). They also officially cut 32-bit versions of Windows, leaving them one step closer to pure AMD64.
x87, x64, MMX, SSE, PAE... all of them are extensions to x86. You cannot implement the extensions without implementing the base.

You cannot implement x64 without implementing all the legacy stuff back to 8086. That is the reason why modern x86 cores still support legacy x86 instructions such as the 8088 opcodes that no one uses anymore.
 
Well, that and the amount of die space the "extra" 32-bit decoding takes has become rather negligible. I mean, they could just force 64-bit UEFI support and drop 32-bit app support, but so much legacy makes it difficult, and as mentioned, it has minimal impact on their CPU/die space anyway, so why? Intel tried to make a clean break once; that turned out well. I honestly don't think it'd be as bad now, since most apps are able to compile to 64-bit. If companies need legacy support, you'd just have legacy CPUs that cost more, and those that don't need legacy would get a slight price break, or something to that effect.

The tax for supporting all the legacy x86 stuff isn't only in the decode stage; renaming and the register files are affected as well.
 
It's not about the die savings for Intel, it's about saving Microsoft billions of dollars in support costs.

Every other successful OS bit transition in history has eventually cut off the lower bits. Even Linux is looking to move to pure 64-bit (but as you might expect, there are some holdouts)...

https://betanews.com/2019/06/24/can.../2019/06/24/canonical-backpedal-ubuntu-linux/

Windows has already officially ditched Windows 16-bit support, and it's only a matter of another decade before they ditch 32-bit.

And really, current AMD64 CPUs could use an update anyway, as the initial implementation is capped at 48 bits. No time like the "embrace pure 64-bit Windows" transition for the processor to get more address bits and default-boot into AMD64 mode. It's not like servers will ever miss the 32 bits!!

Today we are not ready, but within a decade the remaining holdouts should be satisfied with 3rd-party emulation. That is what MS did with DOS support after XP (and you don't see anyone bitching about moving to DOSBox, or XP under a VM).

Every time they add a new feature to Windows 10, they have to go through a mountain of old code tests (and they have to ensure that they add new test cases for any old code the new code could impact), and it's no surprise that they are starting to make more critical errors than in previous updates. Making old 32-bit programs someone else's problem would cut their verification test complexity, and give them a shorter turnaround time on new features.
It's not "capped" at 48 bits; that's what they chose to implement at the time. At any time they can implement more physical memory bits without breaking compatibility. I've written OS software and a virtual memory manager, so I'm intimately familiar with how their memory and virtual memory (and page tables, LDTs and GDTs) work, and the differences between the paging schemes for 32-bit and 64-bit. Heck, I'm still writing 16-bit code with segment:offset pairs for funsies.
So like I said, it's not a limitation of the design, just an implementation choice. The memory controller on your board does not support >1 TB of RAM, so why run all the circuitry for it? Would it be worth wiring up the rest of the 16 bits to add support for something that's physically impossible? As I said, it was a decision that won't affect the current instruction set or require any (or very minimal) changes to software, so what would the point of implementing it be?
I do agree though, the transition to 64-bit only would save Microsoft in the long run, and to a slight extent AMD/Intel as well. It will be a very slow process though, lol. Look how long you had to boot your PC in 16-bit (real mode)... You can still boot a PC in legacy mode with a setting in your BIOS. Then the dance of real mode to pmode to long mode begins :).
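A rough sketch of why 48 bits was the natural line to draw for 4-level paging (my own illustration, not anything from the posts above): four 9-bit table indices plus a 12-bit page offset is exactly 48 bits, and the newer 5-level scheme just adds one more 9-bit level to reach 57.

```c
/* Sketch: how a 48-bit canonical virtual address splits into x86-64
 * 4-level page-table indices. 4 levels x 9 bits + 12-bit offset = 48 bits;
 * a 5th level (on newer CPUs) gets you to 57. Illustration only. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t vaddr = 0x00007f1234567890ULL;   /* arbitrary user-space address */

    unsigned pml4   = (vaddr >> 39) & 0x1ff;  /* bits 47..39 */
    unsigned pdpt   = (vaddr >> 30) & 0x1ff;  /* bits 38..30 */
    unsigned pd     = (vaddr >> 21) & 0x1ff;  /* bits 29..21 */
    unsigned pt     = (vaddr >> 12) & 0x1ff;  /* bits 20..12 */
    unsigned offset =  vaddr        & 0xfff;  /* bits 11..0  */

    printf("PML4=%u PDPT=%u PD=%u PT=%u offset=0x%x\n",
           pml4, pdpt, pd, pt, offset);
    return 0;
}
```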

The tax for supporting all the legacy x86 stuff isn't only in the decode stage; renaming and the register files are affected as well.
Yes, I was being overly broad saying it's just the decoder, but my point was it's pretty minimal (and it is still mainly just mapping 16-bit memory addresses to 64-bit and then doing the 16-bit operation on the appropriate registers). Heck, the fact that they don't rename or do anything different/special in 16-bit mode is how unreal mode became a "bug" that was used by almost every OS in existence (it basically gives you access to 32-bit RAM from 16-bit real mode, as long as you don't reload the 16-bit segment registers after jumping from 32-bit pmode back to 16-bit real mode).

Anyways, this is way off topic; if you want to discuss further, feel free to message me or start a new thread.
 
They may not, and most likely won't. But "cannot" is not the right term.

No, they could get rid of the older instructions and whatever additional hacks they have in microcode.
I suggest that you take a look at the PS4 Linux talk from fail0verflow at 33C3 (slides available here). PS4 cut out a tiny little piece of PC (not even x86) legacy, and it caused no end of problems getting something as robust and malleable as Linux to run on it. If you actually start removing legacy instructions, that will have a profound impact on the entire ecosystem. When you do that, you may as well start on a fresh ISA and binary-translate your x86 code to your new ISA. Apple is doing it, and reportedly in November we will see the first results of that in their new own-silicon MacBook. Microsoft is doing it with Windows on ARM. Intel tried it with Itanium too, but it didn't go well...
 