Has AMD busted itself trying to beat Intel?

But I don't see that happening now, because x86 code runs too well on current CPUs. Perhaps Intel realized that x86 NEEDS to suck in order to get people to give it up.

All of this stuff about AMD sabotaging Intel's beautiful vision for the future is absolutely laughable. We're talking about a free marketplace and making money here! Intel couldn't care less what architecture people use, as long as they're buying it from them. And they don't. Itanium was just a horrible mistake, simple as that. AMD's A64 architecture was just giving the market what it wanted, simple as that.

This semi-conspiracy you're peddling makes me laugh.
 
1) The market doesn't know what it wants. The market is about short-term strategy; most are too obtuse to think about the long term. They just want fast x86 now; they don't care or even realize that this means they won't be getting a big boost in price/performance in 3 to 5 years.

2) Intel does care. They know their x86 can't compete that well against POWER and similar high-end architectures, so they need a high-end architecture alongside x86. Now if they could replace x86 with Itanium, they could attack all markets with a single architecture, which would give them the upper hand in the competition.

And why use the word 'conspiracy'? It's the truth, and yes, it's a free market, but that doesn't mean this isn't what happened. It's no secret that the Pentium 4 was supposed to be the last x86 ever, and that Itanium was coming to the desktop. Heck, there even was a Windows XP for IA64 years before the x64 version.

Go ahead and laugh, I'm right, you're just not getting it. See point 1).
 
I never heard of that..... proof?

Roadmaps of Intel at that time.
How can you not have heard of that? You weren't into hardware back then?
Doesn't everyone know that e.g. the P4 didn't get 64-bit extensions until the last moment (under pressure from Microsoft)? That was the first x86 move that was never planned... If Intel had wanted to continue with the x86 line, then obviously they would have planned a 64-bit model. They didn't, because Itanium *was* their 64-bit model. Everything that has been released since was never on the roadmaps either (Core 2?).

On the other hand, there were various Itaniums on the roadmap with lower cache sizes, smaller dies and higher clock speeds. In other words, high-volume, low-cost processors... for workstations and desktops.
HP actually had some Itanium workstations with Windows XP in their product line for a while. They were barely more expensive than x86 workstations... I was considering buying one at the time.
After Intel made the decision to continue down the 64-bit road with x86, these were all cancelled. Currently Itanium is only in the server roadmaps.
 
Thirdly, people NEED 64-bit addressing, especially for servers and such, because their datasets get larger and larger.

hahaha what's this? are you sure about 64-bit addressing? 2 to the 64 worth of addressable space is tooooo much. :p

when you say 32- or 64-bit architecture, it's all about the size of the registers. FYI, the current x86 processors only go up to 36-bit addressing.

you crack me up :D
 
Roadmaps of Intel at that time.
How can you not have heard of that? You weren't into hardware back then?
Doesn't everyone know that e.g. the P4 didn't get 64-bit extensions until the last moment (under pressure from Microsoft)?

Just an FYI, 64-bit is already built into the P4 architecture (already implemented at the die level). It wasn't turned on because of marketing strategy; Intel was pushing 64-bit for servers.

Intel was right, because all these years those Athlon 64s and P4s were still running a 32-bit OS. It's only now that we have a 64-bit OS, which many of us are adamant about using.
 
hahaha what's this? are you sure about 64-bit addressing? 2 to the 64 worth of addressable space is tooooo much. :p

No, I mean more than 32-bit, and 64-bit is the next logical step, at least in register size.
I know the physical addressing space is not that large yet, and even OSes don't support it yet, but that's not the point, really.
The point is that Intel wasn't going to increase the addressing space on x86, so people would have to move to other architectures, since the 4 GB barrier was getting close.

when you say 32- or 64-bit architecture, it's all about the size of the registers.

No it's not. It can also be about data bus, address space or even other features... Just depends on the context.
Other than that, 64-bit registers and ALUs are really not very interesting for general purpose computing. In 99% of all software, 32-bit is plenty for most arithmetic, and 64-bit doesn't add any advantage.
It's mainly the addressing of more than 4 GB in an efficient way that makes 64-bit x86 interesting.
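A tiny C sketch to illustrate (assuming a typical 64-bit compiler using the usual LP64/LLP64 data models): the integer types that ordinary arithmetic uses stay 32-bit, only the pointers grow, and the wider pointers are what break through the 4 GB barrier.

#include <stdio.h>

int main(void)
{
    /* On common 64-bit data models (LLP64 on Windows, LP64 on Linux)
       'int' stays 32 bits, so ordinary arithmetic is unchanged... */
    printf("sizeof(int)   = %lu\n", (unsigned long)sizeof(int));    /* typically 4 */

    /* ...while pointers widen to 64 bits, which is what lifts the
       4 GB addressing limit. */
    printf("sizeof(void*) = %lu\n", (unsigned long)sizeof(void *)); /* 8 on a 64-bit build */
    return 0;
}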
 
hahaha what's this? are you sure about 64-bit addressing? 2 to the 64 worth of addressable space is tooooo much. :p

when you say 32- or 64-bit architecture, it's all about the size of the registers. FYI, the current x86 processors only go up to 36-bit addressing.

you crack me up :D
You are incorrect, sir. Intel only supports 36 bits physical, 36 bits virtual. AMD currently supports 40 bits physical and 48 bits virtual; K10 supports 48/48.

Too much now, possibly. I could get by with 36 bits virtual for now, but that is where the issue is, not with physical. A couple of years from now, 36 bits will not be enough virtual address space for me. We like to mmap() some incredibly large files here, and that simply isn't possible with a small virtual address space. The faster the machines get, the more data our sims will produce, and the more address space we'll need for mmap()-ing. Not that you were claiming nobody needs lots of bits, but the tone of your post was leaning in that direction. Just don't be one of the "this is all we will ever need" crowd... Large address spaces make many things far easier to code.
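To make that concrete, here's a minimal POSIX sketch of the kind of thing I mean (the file name and the 16 GB size are just made up for illustration): mapping a file that large needs that many bytes of contiguous virtual address space, no matter how much physical RAM you have.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical 16 GB simulation output file. */
    int fd = open("sim_output.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file at once: this needs st.st_size bytes of
       contiguous virtual address space, regardless of physical RAM.
       With a 16 GB file this simply cannot work in a 32-bit process. */
    void *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... walk the data through the mapping ... */

    munmap(base, (size_t)st.st_size);
    close(fd);
    return 0;
}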
 
Just an FYI, 64-bit is already built into the P4 architecture (already implemented at the die level). It wasn't turned on because of marketing strategy; Intel was pushing 64-bit for servers.

No, it's not. It wasn't until the Prescott core was introduced; Northwood and Willamette are both 32-bit only.
There are some websites around that analyze the layout of the CPUs and the transistor count.
Read this for example: http://chip-architect.com/news/2003_04_20_Looking_at_Intels_Prescott_part2.html

Intel was right, because all these years those Athlon 64s and P4s were still running a 32-bit OS. It's only now that we have a 64-bit OS, which many of us are adamant about using.

That's a chicken-and-egg question.
Microsoft already had a 64-bit version of Windows XP ready for Itanium in 2001 (because they believed this would be the new standard; they abandoned it when x64 support from Intel came).
They never released their x64 variant until Intel finally got enough CPUs into the channel. AMD alone was not interesting enough. That's why AMD had been selling 64-bit CPUs for years with no Windows to run on them.
If Intel had put in 64-bit extensions at the time when Itanium was released, then there might have been an XP x64 right away. We'll never know.
 
According to Hester the first Fusion parts will not even do double-precision floating point, making the usefulness of the GPU part of the CPU for anything other than graphics suspect.

http://www.hpcwire.com/hpc/1282253.html

If you look at current GPGPUs you quickly find their limitations when it comes to GP computing. http://en.wikipedia.org/wiki/GPGPU They are best suited to stream computations, like the Cell is, only more so. (This is why an R300 Folding@home client is faster than a PS3, and WAY faster than a CPU.)

I don't see Fusion offering anything outside of the low power market that a separate card doesn't do better, even GPGPU stuff. But I could very well be wrong.

The Cell processors must be doing something right, because IBM plans to put them in their new mainframes.
 
I don't know, Scali, correct me if I'm reading this wrong, but it sounds to me like you were hoping Intel would intentionally cripple their products to force IA64 on the market? They shouldn't have anything faster than a 1.5 GHz Xeon? They shouldn't have released an AMD64 version of the Xeon?

I'm sorry, but I simply don't agree. Even VLIW can't run native VLIW code at full speed, due to the nature of compiler dependencies, let alone x86 code, which it ran horribly. This was not the future... The future is supposed to outperform the past, and Itanium didn't.

I just don't agree with it; however, I can see why you have your opinion, and I respect it.
 
I don't know, Scali, correct me if I'm reading this wrong, but it sounds to me like you were hoping Intel would intentionally cripple their products to force IA64 on the market? They shouldn't have anything faster than a 1.5 GHz Xeon? They shouldn't have released an AMD64 version of the Xeon?

That's right, x86 should have ended there and then.
'Crippling' is not the right word though, just End-Of-Life. No more new developments.

I'm sorry, but I simply don't agree. Even VLIW can't run native VLIW code at full speed, due to the nature of compiler dependencies, let alone x86 code, which it ran horribly. This was not the future... The future is supposed to outperform the past, and Itanium didn't.

Itanium did. And VLIW works fine, benchmarks prove that. Just look around on the net.
Especially in things like Povbench, Itanium was positively devastating at its introduction. The performance was way, WAY out of reach of any x86.
Itanium also holds some database benchmark records: http://www.microsoft.com/sql/prodinfo/compare/tpcc.mspx
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp
And there are still quite a lot of Itaniums in the top500 supercomputer list: http://www.top500.org/list/2006/11/100

So don't give me that nonsense that Itanium doesn't outperform the past.
 
The Cell processors must be doing something right, because IBM plans to put them in their new mainframes.

Sure, it is great at some things, not so good at others. It depends on what you are going to use it for; it is a compromise between a full stream processor and a general-purpose CPU.
 
That's right, x86 should have ended there and then.
'Crippling' is not the right word though, just End-Of-Life. No more new developments.



Itanium did. And VLIW works fine, benchmarks prove that. Just look around on the net.
Especially in things like Povbench, Itanium was positively devastating at its introduction. The performance was way, WAY out of reach of any x86.
Itanium also holds some database benchmark records: http://www.microsoft.com/sql/prodinfo/compare/tpcc.mspx
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp
And there are still quite a lot of Itaniums in the top500 supercomputer list: http://www.top500.org/list/2006/11/100

So don't give me that nonsense that Itanium doesn't outperform the past.

It runs well despite not being able to achieve anywhere near max IPC. That says a lot. But it still doesn't change the fact that it can't run x86 code worth a damn. That says a lot too.

I just don't think it is the way of the future. It's just not compelling enough.

I understand where you're coming from though, and respect it.
 
The Desktop/Workstation version was called Deerfield.

Deerfield, now known as the Low Voltage (LV) Itanium 2, will be introduced in 2003. The chip will cut the Itanium's power down to 62 watts, less than half of the 130 watts of maximum thermal power that the Itanium 2 and Madison have been specified to. Deerfield will initially ship at 1.0 GHz, using 1.5 Mbytes of level 3 cache and a 0.13-micron design process.

http://www.theregister.com/2003/09/06/intels_deerfield_chip_goes/

They weren't cancelled; they went on sale.

http://h18000.www1.hp.com/products/quickspecs/11665_na/11665_na.HTML

Intel® Itanium® 2 processor,
1GHz or 1.4GHz,
1.5MB L3 cache
HP zx1 chipset with 400 MHz system bus

Awesome motherboard, BTW! I think there was at least one follow-up model with a 1.67 GHz processor on the zx2000 platform. It sucked at emulation. But please remember, the first 32-bit processors didn't like 16- or 8-bit stuff either. How many of you guys remember the "Processor runs too fast for the OS" bugs?

But, I see the Anti-Itanium PR machine is well oiled and working nicely.
 
It runs well despite not being able to achieve anywhere near max IPC. That says a lot.

Oh my god.
You are actually putting the Itanium down because it can't reach near max IPC?
Do you have ANY idea how far a modern x86 is from its max IPC?
You'll be lucky to get an average of 1 instruction per cycle, which is VERY poor, considering the number of execution units and all.
Itanium obviously has higher IPC, even if only because it only runs at about 1.5-1.6 GHz.
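A rough back-of-the-envelope sketch, with made-up round numbers rather than measured results: if a 3 GHz x86 and a 1.5 GHz Itanium retire roughly the same number of instructions per second, the Itanium must be averaging about twice the IPC.

#include <stdio.h>

int main(void)
{
    /* Hypothetical example: both chips sustain ~3 billion instructions/s. */
    double throughput = 3.0e9;   /* retired instructions per second */
    double x86_clock  = 3.0e9;   /* 3.0 GHz x86                     */
    double ipf_clock  = 1.5e9;   /* 1.5 GHz Itanium                 */

    printf("x86 average IPC:     %.1f\n", throughput / x86_clock); /* ~1.0 */
    printf("Itanium average IPC: %.1f\n", throughput / ipf_clock); /* ~2.0 */
    return 0;
}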

But it still doesn't change the fact that it can't run x86 code worth a damn. That says a lot too.

It does?
It's mainly aimed at the server market. In most cases these people just recompile their application for the new architecture, and be done with it.
In a lot of cases they don't even need to run x86 code at all, because their stuff is just shell scripts or SQL queries.
So in the server market people don't care about x86... In fact, most of them never used x86 in the first place, they used some IBM, DEC, HP, Sun or whatever other non-x86 stuff.

Don't overrate the importance of x86-code. In a lot of markets it's not an issue at all...
And on the desktop market, if only there was a good reason (like there are no faster x86 CPUs, so for more performance you MUST go Itanium), they'd drop their legacy code very quickly.
But as long as you keep making x86 perform well, you'll never get rid of it.
Just look at what happened with the 386. It was introduced in 1985, and it took until 1995 (Windows 95) until the desktop market started running a 32-bit OS.
There simply wasn't a good enough reason. 16-bit was working fine for them, wasn't it?
Heck, even today people get in trouble with XP or Vista x64, because there is no longer 16-bit support, and there are still 16-bit installers and other crap in common use.

Apple has already made the move to a different architecture twice (first from 68000 to PPC, and now from PPC to x86). And that went pretty smoothly. They just didn't offer people a choice. That's how you get things done, that's how you move forward.
 
It runs well despite not being able to achieve anywhere near max IPC. That says a lot. But it still doesn't change the fact that it can't run x86 code worth a damn. That says a lot too.

I just don't think it is the way of the future. It's just not compelling enough.

I understand where you're coming from though, and respect it.

It was never meant to run x86 code. There were also folks saying we weren't ready for 32-bit either. All you have to do is look at those who were bashing dual core in 2005. Some of those same folks haven't changed their views either.
 
Yes, but Itanium needs an x86 emulator to run all the x86 code out there. Which it has, but it runs very poorly. It does fine in the server market, but on the desktop it just isn't very compelling.

As a desktop solution, VLIW has nothing to offer.
 
Because it would mean dropping the huge volume of x86 code we have today. It would mean billions of dollars of engineering down the drain.

What do you mean by 'x86 code'?
Most code is just written in a high level language like VB, C/C++, Delphi etc.
You can just recompile it with an IA64 compiler; there is little or no code in such languages that needs to be rewritten from one architecture to the next (and there are no endian issues with IA64, it's little-endian like x86).
As for assembly code, you need to throw that away and rewrite it every time a new x86 generation/architecture surfaces, because what may be optimal code on one x86 implementation can be terribly slow on another.

Other than that, what is the problem with dropping old legacy code?
It's an investment in the future, a future with more efficient processors because we no longer require all that legacy overhead.
Itanium would be excellent for multicore processing: despite its extremely parallel nature with lots of execution units, the cores are very compact, because there is no useless complexity like x86 and its tons of legacy.
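A small illustrative sketch (nothing more than that): the kind of low-level trick that normally breaks when byte order changes, dumping an integer into a raw byte buffer, behaves identically on x86 and IA64 because both are little-endian, so even this kind of code recompiles unchanged.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Write a 32-bit value into a raw byte buffer, the way a simple
       binary file format might. */
    uint32_t value = 0x11223344;
    unsigned char buf[4];
    memcpy(buf, &value, sizeof(value));

    /* On any little-endian CPU (x86 and IA64 alike) this prints
       44 33 22 11, so data files written by x86 builds read back
       correctly after a plain recompile for IA64. */
    printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
    return 0;
}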
 
What do you mean by 'x86 code'?
Most code is just written in a high level language like VB, C/C++, Delphi etc.
You can just recompile it with an IA64 compiler; there is little or no code in such languages that needs to be rewritten from one architecture to the next (and there are no endian issues with IA64, it's little-endian like x86).
As for assembly code, you need to throw that away and rewrite it every time a new x86 generation/architecture surfaces, because what may be optimal code on one x86 implementation can be terribly slow on another.

Other than that, what is the problem with dropping old legacy code?
It's an investment in the future, a future with more efficient processors because we no longer require all that legacy overhead.
Itanium would be excellent for multicore processing: despite its extremely parallel nature with lots of execution units, the cores are very compact, because there is no useless complexity like x86 and its tons of legacy.

Not really. The vast majority of code is written in C, with a bit in C++ and a very small minority in other languages. It is not a simple matter of a recompile; you need to have code paths for every architecture you want it to compile on.

Look at how the Linux kernel does it, and you'll get an idea of what I'm talking about. That wouldn't be possible for every program. It would need an effective x86 emulator, which it doesn't have.
 
What do you mean by 'x86 code'?
Most code is just written in a high level language like VB, C/C++, Delphi etc.
You can just recompile it with an IA64 compiler; there is little or no code in such languages that needs to be rewritten from one architecture to the next (and there are no endian issues with IA64, it's little-endian like x86).
As for assembly code, you need to throw that away and rewrite it every time a new x86 generation/architecture surfaces, because what may be optimal code on one x86 implementation can be terribly slow on another.

Other than that, what is the problem with dropping old legacy code?
It's an investment in the future, a future with more efficient processors because we no longer require all that legacy overhead.
Itanium would be excellent for multicore processing: despite its extremely parallel nature with lots of execution units, the cores are very compact, because there is no useless complexity like x86 and its tons of legacy.

How much of a performance increase or decrease would we see going from code designed (and optimized) for an x86 chip and porting it over to IA-64, which is an even bigger difference than RISC vs. CISC? EPIC would only shine when code is written for it, IMO.

Wiki said:
* Sixteen times the amount of general purpose registers (now 128)
* Sixteen times the amount of floating point registers (now 128)
* Register rotation mechanism to keep values in registers over function calls
 
How much of a performance increase or decrease would we see going from code designed (and optimized) for an x86 chip and porting it over to IA-64, which is an even bigger difference than RISC vs. CISC? EPIC would only shine when code is written for it, IMO.

The points you mention are below the level of a language such as the ones I mentioned.
These optimizations will be carried out by the compiler. The programmer doesn't have to pay attention to such details.
So the point is not relevant to any code, except for assembly code... at which point it is also relevant between different implementations of x86, as I mentioned. It's not a matter of instruction set, but a matter of execution back-end.
 
Not really. The vast majority of code is written in C, with a bit in C++ and a very small minority in other languages. It is not a simple matter of a recompile; you need to have code paths for every architecture you want it to compile on.

Look at how the Linux kernel does it, and you'll get an idea of what I'm talking about. That wouldn't be possible for every program. It would need an effective x86 emulator, which it doesn't have.

Obviously the Linux kernel is not comparable to a regular application.
This is the kernel of an OS, which has to work with a huge number of hardware devices at a low level.
This part is already done. There are Linux, Windows and various other OSes available for Itanium.
We're talking application level here. Most applications don't interface directly with hardware, but go through API calls, which are hardware-independent, and sometimes even platform-independent.
Especially with Windows stuff in Visual Studio, once you've fixed up your C/C++ code to work on XP x64, it will automatically work on IA64 as well.

Other than that, it already DOES have an effective x86 emulator.
Any performance-critical applications would be recent applications, which can easily be recompiled for IA64 (just as they should be for x64, if you want to get the best possible performance from your hardware, as you're advocating). And the old stuff probably doesn't need to run as fast as possible. For most stuff, a 1.5 GHz Xeon would be perfectly acceptable (mail, www, Office, etc.).
If Apple can do it, twice, then so can the x86 world.

I have quite a complex project in Visual Studio .NET, written in C++, with tons of classes, many lines of code, real-time audio and video, multithreaded optimizations and everything... And I can compile it for x86, x64 or IA64 by nothing more than selecting the architecture in a dropdown box and hitting 'Build'. All the code compiles perfectly for all these Windows variants without any alterations. No specific code paths or anything.
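To give an idea of the kind of thing the 64-bit warnings catch (a minimal made-up sketch, not code from my actual project): the classic mistake is stuffing a pointer into a 32-bit int, and the fix is the same whether you're targeting x64 or IA64.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int data = 42;
    void *p = &data;

    /* int truncated = (int)p;   -- the classic 32-bit assumption;
       a 64-bit-aware compiler flags this, because a pointer no
       longer fits in a 32-bit int. */

    /* The portable fix: an integer type guaranteed to hold a pointer.
       This compiles cleanly for x86, x64 and IA64 alike. */
    intptr_t as_int = (intptr_t)p;
    void *back = (void *)as_int;   /* round-trips safely at any width */

    printf("round-trip ok: %s\n", back == p ? "yes" : "no");
    return 0;
}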
 
But that is exactly my point.
It's not a new product.
It's one product; it just compiles for three different architectures.
Completely portable.

Of course there are always exceptions, but most code will just compile as-is on IA64 if it can also compile on x64. And VS.NET pretty much enforces that you write code that is compatible with x64. By default it warns about any 64-bit issues and things like that.
So there's no extra maintenance required or anything. Microsoft sees both the IA64 version and the x64 version of Windows as one 64-bit platform, for the most part, because to developers that's what it is.

Other than that, x86 would simply be *replaced* by IA64, so there won't be a new product line or anything. x86 will just be transformed into IA64, and x86 development stops.
 
But that is exactly my point.
It's not a new product.
It's one product; it just compiles for three different architectures.
Completely portable.

Of course there are always exceptions, but most code will just compile as-is on IA64 if it can also compile on x64. And VS.NET pretty much enforces that you write code that is compatible with x64. By default it warns about any 64-bit issues and things like that.
So there's no extra maintenance required or anything. Microsoft sees both the IA64 version and the x64 version of Windows as one 64-bit platform, for the most part, because to developers that's what it is.

Other than that, x86 would simply be *replaced* by IA64, so there won't be a new product line or anything. x86 will just be transformed into IA64, and x86 development stops.

So let me ask you a quick question. Would it be possible to run a 32-bit app in compatibility mode in Vista 64, like the 16-bit compatibility mode in XP? When I was beta testing Vista (64 & 32) I only ran into one program that refused to run in Vista 64. It's a rather expensive R/C flight simulator, so I just keep using it in XP instead. But with an HD failure I went back to strictly XP and never got a chance to test it out.

And if it is fairly easy to recompile code to 64-bit, why haven't a lot more companies done so already? Or are they playing on the naivety of the majority of consumers and waiting to charge people a 2x premium to upgrade?
 
So let me ask you a quick question. Would it be possible to run a 32-bit app in compatibility mode in Vista 64, like the 16-bit compatibility mode in XP?

What do you mean?
Of course 32-bit applications run in Vista x64 (or XP x64 and IA64 for that matter); they have a 32-bit subsystem, just like XP and other 32-bit Windows versions before them had a 16-bit subsystem.
It's called Windows-on-Windows, or WOW (so really MS was wrong, the WOW isn't now, it's been there since 64-bit Windows XP).

I run XP x64 as my primary OS, and most of my applications are 32-bit, no problem, no performance loss either.
Of course there are some exceptions, but that doesn't really have to do with the fact that it's 64-bit, but rather that it's not the same Windows.
We had the same thing going from Windows NT 4 to 2000, and from 2000 to XP. They were all 32-bit Windows versions, but there were always some applications that didn't work for some reason.
Usually these are bugs in the application; they 'accidentally' worked in a specific Windows version, even though the code wasn't following the specs 100%.
Especially when going from Windows 9x to NT/2000 this was a problem. The 9x implementation of the Windows API was rather limited and sloppy, so usually you wouldn't notice if you passed the wrong arguments to a function or such. In NT/2000 the same code would fail though.
Every programmer should know this adage: "Working code is not bug-free code".
Just because it worked in XP and doesn't work in Vista x64 doesn't mean the problem is with Vista; it could just as well be a bug in the application itself.

And if it is fairly easy to recompile code to 64-bit, why haven't a lot more companies done so already? Or are they playing on the naivety of the majority of consumers and waiting to charge people a 2x premium to upgrade?

Well, there's no point in recompiling to 64-bit now, because x64 runs 32-bit code with virtually no issues or performance loss... In fact, you often get a small performance boost when running a 32-bit application on an x64 Windows.
So while it's generally easy to recompile to 64-bit, it's even easier to do nothing.
Other than that, not all code gets faster if you recompile it to x64. It may get slower, in which case you have to rewrite the code to make it x64-friendly (it's rather hard to market a 64-bit version that runs slower than its 32-bit companion). So then it will cost more...
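One hedged illustration of why a straight x64 recompile can lose performance (a made-up data structure, not a benchmark): every pointer doubles in size, so pointer-heavy structures eat roughly twice the cache and memory bandwidth for the same data.

#include <stdio.h>

/* A hypothetical linked-tree node, typical of pointer-chasing code. */
struct node {
    struct node *left;
    struct node *right;
    struct node *parent;
    int          key;
};

int main(void)
{
    /* 32-bit build: three 4-byte pointers + int -> about 16 bytes per node.
       64-bit build: three 8-byte pointers + int -> about 32 bytes per node
       (with padding). Twice the cache footprint for the same data, which
       can easily outweigh the benefit of the extra registers. */
    printf("sizeof(struct node) = %lu bytes\n",
           (unsigned long)sizeof(struct node));
    return 0;
}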

With IA64 it would be different. You'd always get a considerable performance-gain from recompiling from x86 to IA64, so the move to 64-bit would be more compelling than it is now.
On x64, 32-bit simply runs too well, and 64-bit doesn't run well enough. With IA64 it would be the exact opposite.

The trend I see is that companies that *have to* move to x64, do so with little trouble. This mostly concerns applications with low-level OS access, such as antivirus/firewall/disk defragmentation/management software. Most of that sort of software has moved to x64 already. They didn't have a choice because the 32-bit versions didn't work. Always a good reason to get something done.
The gaming industry also likes the extra performance, so Far Cry and Half-Life 2 received x64 engines a while ago as well, and I suppose newly released games will also get x64 binaries more and more.
But for most other software there's just no compelling reason. They don't need the extra performance, they don't need the larger addressing space... their current versions run fine, so why bother?
I wouldn't be surprised if a lot of software just *never* moved to x64 at all. Even Microsoft doesn't seem to have bothered to release an x64 version of Office 2007 (only Sharepoint Server and Forms Server have an x64 binary... no coincidence that these are server applications). So apparently they also think 32-bit stuff works well enough in Vista x64.
 
What I was getting at was whether there is a compatibility mode in Vista 64 like there is in XP. My understanding of this compat mode was that it was for older 16-bit apps that didn't play nice under XP, so they were run with NTVDM and/or WOWEXEC (which simulates a Win 3.1 environment).

Not a huge issue for me, as I only have the one app that refused to work in Vista 64. I just never got around to seeing if I could get it to work in the 64-bit version, and I never installed it in 32-bit Vista. When the day comes that I have to upgrade my OS, I may have to keep one XP PC available to run this software, or dual-boot.
 
What I was getting at was whether there is a compatibility mode in Vista 64 like there is in XP. My understanding of this compat mode was that it was for older 16-bit apps that didn't play nice under XP, so they were run with NTVDM and/or WOWEXEC (which simulates a Win 3.1 environment).

What do you mean by "didn't play nice"?
You realize that 16-bit applications are a completely different binary executable format, and the code, although x86, is incompatible with any code in 32-bit mode?
In other words, ALL 16-bit applications ran on the 16-bit subsystem. This is not "compatibility mode". Also, it's not exclusive to XP; the 16-bit subsystem has been available in every version of NT and 9x to run DOS and Win16 applications. There is no other way to run 16-bit applications than to switch the CPU back to a virtual 16-bit mode inside your 32-bit environment (apart from emulating an entire 16-bit CPU, of course).
With "Compatibility mode" you probably mean Appcompat, which was new to Windows 2000 if I'm not mistaken, but is used for simulating other versions of (32-bit) Windows, trying to fix bugs in certain applications (because of changes in certain libraries/API functions etc).

The new WOW in x64 versions of Windows is pretty much the same as the old DOS/Win16 subsystem. You get a virtual 32-bit mode in which you can run 32-bit tasks inside your 64-bit environment.
The 16-bit environment for DOS/Windows was abandoned (I believe this is because of hardware limitations on x64... you can't go into a virtual 16-bit mode from 64-bit mode, only 32-bit is supported. So 16-bit must be run from a native 16-bit OS or a 32-bit OS with virtual 16-bit support, or again, by emulating the entire CPU, like DOSBox does for example).

Not a huge issue for me, as I only have the one app that refused to work in Vista 64. I just never got around to seeing if I could get it to work in the 64-bit version, and I never installed it in 32-bit Vista. When the day comes that I have to upgrade my OS, I may have to keep one XP PC available to run this software, or dual-boot.

Well, the above means that any 32-bit application would normally work.
There are always exceptions, so it's hard to tell whether this particular application would work in 32-bit Vista... It might work in XP x64, but then again it might not.
However, Vista x64 has a lot more issues with compatibility than just 32-bit mode.
Even in 32-bit Vista a lot of applications don't work properly, because of the new security issues and all sorts of other changes to the environment.
This is why I'm still using XP x64 as my primary OS. Its 32-bit mode is a 'normal' environment, nearly identical to the 32-bit version of XP, and therefore is very compatible.
The 32-bit mode in Vista x64 is probably very similar to its 32-bit version, but that still means there are tons of problems with compatibility. So my main problem at this point is Vista in general, not specifically the 64-bit version.
 
I gotta tell you, Scali, that is way off....

WOW in Vista x64 is more or less a 32-bit emulator... It translates 32-bit system calls into native 64-bit system calls and then executes the translation. It runs 32-bit applications in 64-bit long mode... --NOT-- compatibility mode.

Personally I think this is a major design flaw, but that is just my opinion.
 
I gotta tell you, Scali, that is way off....

WOW in Vista x64 is more or less a 32-bit emulator... It translates 32-bit system calls into native 64-bit system calls and then executes the translation. It runs 32-bit applications in 64-bit long mode... --NOT-- compatibility mode.

Personally I think this is a major design flaw, but that is just my opinion.

1) How is that different from the 16-bit subsystem, as I explained? So where exactly is it 'way off'?
You realize, of course, that it runs the *applications* in 32-bit compatibility mode, because there's no other way...
It just thunks to 64-bit mode for certain calls (mostly kernel stuff which most applications don't use; most of the API is included in 32-bit DLLs).
So these calls run in 64-bit long mode.
Any 64-bit OS has to run in long mode anyway; compatibility mode is just part of it. So WOW64 runs in long mode *and* compatibility mode (there's a small sketch at the bottom of this post showing how a process can check for itself whether it's running under WOW64).

2) If you think this is a design flaw, then I assume you're talking about the way AMD implemented their 64-bit extensions?
Because the OS doesn't really have much of a choice.
Other than that, in what way do you consider this a design flaw?
I think my case is explained pretty clearly. I think the whole 64-bit extension thing on x86 means we get suboptimal 64-bit processors and instruction sets. I prefer going forward in terms of technology, and leaving the x86 era behind.
But I don't understand where you're coming from. After all, this is the most efficient way to run 32-bit x86 code. For 32-bit code, you basically get a full-fledged x86 in hardware, running exactly as its 32-bit predecessors did.
Something that IA64 doesn't offer.

In other words, more substance please.
You might want to read this before answering: http://arstechnica.com/cpu/03q1/x86-64/x86-64-4.html
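And here's the promised sketch, if you want to poke at this from the inside (a minimal Win32 example, to be compiled as a 32-bit program; error handling kept to a minimum): a 32-bit process can simply ask the OS whether it's running under WOW64.

#include <windows.h>
#include <stdio.h>

/* IsWow64Process() is not present on every 32-bit Windows version,
   so look it up dynamically instead of linking to it directly. */
typedef BOOL (WINAPI *IsWow64Process_t)(HANDLE, PBOOL);

int main(void)
{
    BOOL under_wow64 = FALSE;
    IsWow64Process_t pIsWow64Process = (IsWow64Process_t)
        GetProcAddress(GetModuleHandleA("kernel32"), "IsWow64Process");

    if (pIsWow64Process != NULL &&
        pIsWow64Process(GetCurrentProcess(), &under_wow64) &&
        under_wow64)
    {
        /* 32-bit code, but the kernel underneath is 64-bit. */
        printf("Running as a 32-bit process under WOW64.\n");
    }
    else
    {
        printf("Running natively (or WOW64 detection unavailable).\n");
    }
    return 0;
}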
 
No, I think AMD did it right. They included compatibility mode for a reason. MS simply chose not to use it, and instead to rely on an emulator to make it work.

The way MS did things in the past is a bit different than the way things are done today. In the past they relied on context switching from protected mode to real mode... This is an ideal situation, with the exception of the added latency from the context switch.

Now, instead of context switching from long mode to protected mode, they chose to emulate protected mode. I think this is where the difference is, and the design flaw.

The x64 kernel does not support compatibility mode... In the end the kernel executes everything... Every single last line of code in the system..... And in order for the kernel to execute it, it must be 64-bit. You can certainly have a 32-bit library, but the system calls it makes will be translated to 64-bit. In the end, once the abstraction is complete and the kernel is executing code, that code will be 64-bit, and that is why WOW64 exists.....
 