Vista SUCKS at multicore...why do I want vista again? :(

I tried the Vista beta; really nothing impressive on the outside besides the 3D desktop, and that sucks compared to Xgl/Compiz.

So after Vista I installed Kubuntu, and I'm loving it.

Oh, and it has kernels optimised for different CPUs.
 
Mikeblas, nessus, CEpeep, and others who have said similar stuff:

I now see how I was wasting my time. I've had some time off the OS forum, and didn't realize the forum has become populated with so many utterly pointless Vista threads.

bobrownik said:
So after Vista I installed Kubuntu, and I'm loving it.

Oh, and it has kernels optimised for different CPUs.
Unless you compiled it from source, it hasn't been optimized for your CPU. And no matter how much you optimize your kubuntu kernel, it still won't be able to natively run MS Office, Macromedia Studio MX, or a whole list of other programs where alternatives are not going to fit my needs as a professional.

Stop trying to start a *nix / Win fight where one is completely unnecessary.
 
GreNME said:
Stop trying to start a *nix / Win fight where one is completely unnecessary.
He wasn't trying to start a *nix fight; he was adding to the point that there are readily available operating systems (free ones, at that) that are optimised for multiple CPUs, and there was absolutely no reason for M$ not to add this feature to Vista.
 
InorganicMatter said:
He wasn't trying to start a *nix fight; he was adding to the point that there are readily available operating systems (free ones, at that) that are optimised for multiple CPUs, and there was absolutely no reason for M$ not to add this feature to Vista.


Nice of you to speak for someone else's intentions, but either way the post was a) almost completely incorrect (*nix is still not dual-core optimized) and b) ignoring the fact that whether the kernel is optimized or not means jack squat when there are no programs that are optimized to run on dual cores.

Stop complaining about Windows being optimized for dual cores, and start complaining to Blizzard, ID, Macromedia (Adobe), and all the other software vendors who make programs that we run on top of the OS. You guys are putting the cart before the horse and don't even seem to understand the things you are complaining about.
 
Would someone enlighten me as to what a "dual-core optimization" in an OS is? From what I understand, WinXP is already able to schedule processes on two or more cores. It even appears to be "smart" about scheduling two processes that demand CPU resources at the same time on different cores: My two instances of F@H appear to be running on different cores, at the same time.

My OS understanding is not the greatest, but I have a hard time understanding how one could optimize an OS for dual-/multi-core systems. Also, even with a good amount of today's PCs being multi-core, would a MC optimized system be slower on a SC computer than the equivalent non-optimized system?

Most people here know that I love bashing Vista as much as the next person, but I really do not see the "sucking at DC" part if Vista's proficiency at dealing with dual-core processors is at least at the level that WinXP is exhibiting.
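
For what it's worth, you can even force that split by hand. Here's a rough Win32 sketch (from memory, so treat it as a sketch rather than gospel; Task Manager's "Set Affinity" dialog does the same thing with a mouse) that reads and then restricts a process's CPU affinity:

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR processMask = 0, systemMask = 0;

    /* Which CPUs may this process run on, and which CPUs does the system have? */
    if (GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask))
        printf("process mask: %#lx, system mask: %#lx\n",
               (unsigned long)processMask, (unsigned long)systemMask);

    /* Pin this instance to CPU 0; a second instance pinned to 0x2 would then
       be guaranteed to run on the other core. */
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    return 0;
}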
 
Hi, drizzt81! I, too, wonder what StalkerZER0 thinks it is that he can't live without if Vista isn't fully optimized for multicore machines. Hell, Ford cars (or those from any other manufacturer, for that matter!) aren't fully optimized for gasoline!

GreNME said:
Mikeblas, nessus, CEpeep, and others who have said similar stuff:

I now see how I was wasting my time. I've had some time off the OS forum, and didn't realize the forum has become populated with so many utterly pointless Vista threads.
Hi, GreNME. You've addressed your comments specifically to me, but I don't understand why you think you're wasting your time. I'm missing some context that you apparently think I must know. Would you mind filling me in?
 
GreNME said:
Mikeblas, nessus, CEpeep, and others who have said similar stuff:

I now see how I was wasting my time. I've had some time off the OS forum, and didn't realize the forum has become populated with so many utterly pointless Vista threads.


Unless you compiled it from source, it hasn't been optimized for your CPU. And no matter how much you optimize your kubuntu kernel, it still won't be able to natively run MS Office, Macromedia Studio MX, or a whole list of other programs where alternatives are not going to fit my needs as a professional.

Stop trying to start a *nix / Win fight where one is completely unnecessary.
On the contrary, I find that Windows is not able to satisfy my needs as a professional (Sr. Linux Administrator/Network Engineer), so his comments were a benefit to me, as I have been granted the luxury of choice...

Anyway, Linux has for years supported SMP/dual-core systems and, in the Linux world, there are thousands of applications that take advantage of SMP/dual-core setups.

I too use Kubuntu, but have used Gentoo in the past, and I can compile my system using gcc options to take advantage of SMP systems; I generally see an improvement in performance when benchmarking single-core against dual-core running optimized applications. Kubuntu also has SMP-optimized kernels right in its repositories for me to select from, for a variety of CPU architectures as well.

Using SMP on Linux, I generally see quite a bit of a performance improvement with dual-core AMD chips, more so than when I run Windows XP. It appears that the Linux kernel is far more mature in this area (seeing that Linux has supported dual-core AMD chips for a good 2 to 3 years, or longer, before Microsoft did).

Also, id Software has released an SMP version of Quake 4 for Linux, and I believe there are a couple of others as well, although their names escape me at the moment...

Hope this helps...

Joe
 
EmbraceThePenguin said:
Using SMP on Linux, I generally see quite a bit of a performance improvement with dual-core AMD chips, more so than when I run Windows XP. It appears that the Linux kernel is far more mature in this area (seeing that Linux has supported dual-core AMD chips for a good 2 to 3 years, or longer, before Microsoft did).
I was under the strong impression that AMD released its dual-core processors last summer, i.e. about a year ago. You probably meant dual-socket systems? I distinctly remember that I used to own an Abit BP6 motherboard with a pair of Celerons. I think Windows 2000 was out at that point. It seemed to work rather nicely with the OS. Around the time that this was the case, my college roommate Steve was running the same board. He was running BeOS, which was supposed to be multi-CPU optimized and supposedly made all its applications multi-processed on the fly. Is that what Linux does? Consume a single-process/thread application and parallelize it on the fly?
If that is not the case, then please tell me what the Linux kernel does to take advantage of multi-core processors that Windows does not do. I think we should keep applications out of the picture, since they are not really Microsoft's responsibility. Maybe there are better tools in Linux to help develop multi-threaded (or multi-process) applications, and I guess we could fault MS for that. However, such a thing is not an inherent Vista problem, since I would consider the tools to be portable between WinXP and Vista.

Also, id Software has released an SMP version of Quake 4 for Linux, and I believe there are a couple of others as well, although their names escape me at the moment...
According to this article on FiringSquad, Quake 4 runs in "SMP" mode on a Windows system as well, showing "large" improvements in low-resolution environments. Is this different from Linux?
 
mikeblas said:
Hi, GreNME. You've addressed your comments specifically to me, but I don't understand why you think you're wasting your time. I'm missing some context that you apparently think I must know. Would you mind filling me in?
Actually, I addressed a number of individuals at once, of which you were included. You and the others already tried reasoning with the unreasonable comments of others, and have covered many things I would have already. That's the only context I was addressing.

EmbraceThePenguin said:
On the contrary, I find that Windows is not able to satisfy my needs as a professional (Sr. Linux Administrator/Network Engineer), so his comments were a benefit to me, as I have been granted the luxury of choice...
Oh, so we're dropping titles now? Please.

You are a Linux administrator, so of course Linux is going to be the right tool for you. However, when you are the IT manager of a company that has numerous AutoCAD, Photoshop, MS Office, Great Plains (and other Dynamics software), and other licenses, trying to argue switching to Linux as if it were a valid solution is preposterous. Each of those software suites is used to maintain industry-standard formats, because the IT infrastructure is simply a tool for the rest of the company to use for operations. Being a professional isn't a personal contest to see who can be the most l33t; it is about getting the job done, and using the right tool for the right outcome.

EmbraceThePenguin said:
Anyway, Linux has for years supported SMP/dual-core systems and, in the Linux world, there are thousands of applications that take advantage of SMP/dual-core setups.
And not a one of them is AutoCAD or Photoshop. You seem to be constantly missing the point here. GIMP is nowhere near being a match for what Photoshop can do, and there is no AutoCAD equivalent for Linux. Both of these programs are used at my company on a daily basis. You could have millions of programs, but if Photoshop and AutoCAD are not among them, then your thousands of apps are useless to me.

You are seriously misleading yourself if you think SMP support is the same as dual core support. Windows is just as capable as Linux in SMP, with the exception of (as I pointed out) actually compiling from source. Hell, Photoshop has minimal SMP support, as does AutoCAD. As long as I'm using an SMP HAL for Windows (yes, surprise, Windows has SMP support) then my major programs get to take advantage of multi-threading.

EmbraceThePenguin said:
Using SMP on Linux, I generally see quite a bit of a performance improvement with dual-core AMD chips, more so than when I run Windows XP. It appears that the Linux kernel is far more mature in this area (seeing that Linux has supported dual-core AMD chips for a good 2 to 3 years, or longer, before Microsoft did).
Amazing, since dual-core chips have not existed on the market for 2 years, and Windows has been SMP capable since Windows 2000 (longer if you want to follow the NT line). Perhaps you should check yourself before making ridiculous claims based on nonsense and biased conjecture. As for benchmarks: anyone who lives their lives based on benchmarks needs to go get themselves another life. Benchmarks are so often not indicative of real-world performance that they have mainly been relegated to marketing tools. Microsoft, Apple, Sun, and others love to use benchmarks in their marketing propaganda for just the reason you felt the need to mention it: spewing numbers easily confuses and astounds the gullible.

Now, what any of this has to do with the existence or lack of dual-core support in Vista, and why anyone should be gnashing teeth over the mostly-speculation claims at this point, is completely unknown to me. Instead, it seems you are arguing that since a gossip mag like the Inquirer said something, it is somehow proof that everyone should simply be using another platform (in your claim, Linux) anyway. Or, are you trying to say something else? Because if you are trying to say something else, you and the others are doing a very horrible job of it, and simply coming across as the type of guy who walks into a Chevy shop and preaches about the wonders of Ford.

This is not about Chevy/Ford, and this is not about Win/*nix. This is about what level of optimization is necessary for the OS to take advantage of dual-core chips, and what that may mean to the applications that are run on the OS. The core of the OS could be tweaked all the hell out for dual core, with loads of special flags in the source and compiled directly on the machine for which it will run, and all of that will mean jack shit when the operator is running a program that has no SMP capabilities at all. Whoop-di-freaking-do, you can run a text editor or regedit at blazingly fast speeds, but if you think that means anything to your non-optimized Battlefield 1942 install, then you have a whole lot to learn about how operating systems and applications work on different hardware platforms.
 
drizzt81 said:
Would someone enlighten me as to what a "dual-core optimization" in an OS is? From what I understand, WinXP is already able to schedule processes on two or more cores. It even appears to be "smart" about scheduling two processes that demand CPU resources at the same time on different cores: My two instances of F@H appear to be running on different cores, at the same time.
Basically, it would have to do with the OS being aware of not only two separate processors, but also being aware that they are on the same physical bus and thus not scheduling concurrent processes that would ultimately conflict or even cause delay. In fact, it would (theoretically) be able to schedule processes in such a manner so as to make full use of the CPU.
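
If you're curious what the OS actually knows about that layout, newer Windows versions expose it through GetLogicalProcessorInformation (I believe that call only exists on Server 2003 and the newer flavors of XP so far, so take this as a rough, untested sketch):

Code:
/* Needs a newer Platform SDK; the call is not available on plain Win2K. */
#define _WIN32_WINNT 0x0501
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_LOGICAL_PROCESSOR_INFORMATION info[64];
    DWORD len = sizeof(info), i;

    if (!GetLogicalProcessorInformation(info, &len))
    {
        printf("GetLogicalProcessorInformation failed: %lu\n", GetLastError());
        return 1;
    }

    for (i = 0; i < len / sizeof(info[0]); i++)
    {
        if (info[i].Relationship == RelationProcessorCore)
            printf("physical core: logical CPU mask %#lx\n",
                   (unsigned long)info[i].ProcessorMask);
        else if (info[i].Relationship == RelationCache)
            printf("L%u cache (%lu KB) shared by logical CPU mask %#lx\n",
                   (unsigned)info[i].Cache.Level,
                   (unsigned long)(info[i].Cache.Size / 1024),
                   (unsigned long)info[i].ProcessorMask);
    }
    return 0;
}

The masks tell you which logical CPUs share a core or a cache, which is exactly the information a core-aware scheduler has to work with.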

drizzt81 said:
My OS understanding is not the greatest, but I have a hard time understanding how one could optimize an OS for dual-/multi-core systems.
Essentially, the main binaries and libraries of the OS (exe, dll, and so on) would be compiled on a multi-core system, so the software is compiled to specifically work best on that specific hardware.

drizzt81 said:
Also, even with a good amount of today's PCs being multi-core, would a MC optimized system be slower on a SC computer than the equivalent non-optimized system?
Most likely not, because sans the extra CPU the two systems would be practically identical.

drizzt81 said:
Most people here know that I love bashing Vista as much as the next person, but I really do not see the "sucking at DC" part if Vista's proficiency at dealing with dual-core processors is at least at the level that WinXP is exhibiting.
You would be seeing things correctly, which is what numerous people have been trying to point out from the start.
 
GreNME said:
Basically, it would have to do with the OS being aware of not only two separate processors, but also being aware that they are on the same physical bus and thus not scheduling concurrent processes that would ultimately conflict or even cause delay. In fact, it would (theoretically) be able to schedule processes in such a manner so as to make full use of the CPU.
Multi-socket machines don't have their processors on the same bus? My understanding is that they do -- if they're not NUMA.
 
mikeblas said:
Multi-socket machines don't have their processors on the same bus? My understanding is that they do -- if they're not NUMA.
No, you are correct: multi-sockets (usually) have their own bus. However, multi-cores do not.
 
BeOS is the only operating system fully optimized for SMP in the manner you describe. It was originally written assuming a multiprocessing architecture. They mostly used it for interactive speed, though. The world would be a very different place if Apple had gone with BeOS instead of NeXT. It was an indisputably technically superior next-generation operating system, and included features that have been taken off the roadmap for even Vista.
 
nessus said:
Neither Intel nor AMD has released the necessary reference compilers for producing truly optimized dual-core code. I would have thought you'd be flaming away at them, or perhaps at the genius computer-science doctoral candidates who are working on such things for their theses.

Software has always trailed hardware by 4-5 years because of the time it usually takes to get really good tools, and then learn to use them.

1. Two years to create tools barely capable of realizing the capability of the CPU once a working model is actually available
2. Two more years to really optimize it (during which time the programmers are learning how to change their own thinking)
3. Another year or so for the programmers to work around the quirks of the optimized compilers to create truly optimized code

Current compilers can't even really deal with hyperthreading, much less the multi-core hyperthreading that is on the way shortly.

Oh wait, figuring out all the subtleties of detecting thread-optimizable code in the standard linear code that the programmer is currently turning out (because he hasn't had the required time to completely restructure his entire design and programming methodology) is really easy, isn't it. Oh yeah, multiple branch-predictor interaction on separate logical processors is really easy to understand as well.

You'll write the compiler that's needed tonight in your spare time, right?

QFT, and a pwnt for good measure ;)
 
schizo said:
BeOS is the only operating system fully optimized for SMP in the manner you describe. It was originally written assuming a multiprocessing architecture. They mostly used it for interactive speed, though. The world would be a very different place if Apple had gone with BeOS instead of NeXT. It was an indisputably technically superior next-generation operating system, and included features that have been taken off the roadmap for even Vista.
What the heck is this crap? We go from a Chevy / Ford argument directly into a VHS / Betamax argument?

You are incorrect, by the way. BeOS was the only one fully optimized for pre-emptive multitasking, not for SMP. Unix, Linux, and NT could all do SMP in varying degrees of adequacy.

The original post was pretty much based on an article that was just shy of flat-out lying, and the bulk of the arguments so far have been either the "M$ sucks" variety or the "this is why my personal favorite OS is better than Windows" variety. Both of those arguments, as well as almost every other argument that assumes the original post is true, are ignoring two major factors:
  1. Arguing about SMP without arguing about application capability -- meaning your web browsers, games, office suites, etc. -- is an empty argument. The OS facilitates the running of apps, and...
  2. Windows has had SMP capabilities for years, and Windows XP already does a pretty good job with it (even with HT processors). The article in the original post is based on false assumptions and jargon-based... well, let's just say it (and the original poster) assumes it knows more than the actual content of the text seems to show. In fact, based solely on the text I have read, it seems the OP and the writer of the article know very little about SMP and dual cores outside of possibly knowing what the acronym 'SMP' stands for.
 
GreNME said:
In fact, based solely on the text I have read, it seems the OP and the writer of the article know very little about SMP and dual cores outside of possibly knowing what the acronym 'SMP' stands for.

Another QFT. Not trying to minimod, but it would save a lot of needless arguments if any thread quoting or linking to an Inquirer article in an attempt to promote discussion was immediately locked for being flamebait. Bonus points if the OP has their status knocked back to n00bie.

Ok, on a serious note. Anyone who's been following tech news sites for any length of time knows that the Inq just prints whatever sounds like it will get them hits, and hopes that they get it right once in a great while. The amount of misinformation coming from that site about, well, everything tech related is nothing short of astounding. It amazes me that they even manage to get people to link them anymore. They should be as taboo and predictable as a certain individual who had an affinity for stretching his anus to astronomical circumferences (sorry, the short version got censored) at this point.

Can we stop arguing about it now?
 
nigerian_businessman said:
They should be as taboo and predictable as a certain individual who had an affinity for stretching his anus to astronomical circumferences (sorry, the short version got censored) at this point.

Can we stop arguing about it now?

That made my day...
 
GreNME said:
Basically, it would have to do with the OS being aware of not only two separate processors, but also being aware that they are on the same physical bus and thus not scheduling concurrent processes that would ultimately conflict or even cause delay. In fact, it would (theoretically) be able to schedule processes in such a manner so as to make full use of the CPU.
Would such a scheduler not require that either the OS learn which processes require a lot of bus interaction, or that the process control block (or task control block, or whatever it is called nowadays) include information about the likelihood of a bus access?
In the end, doesn't every process require bus interaction just for the sake of execution (getting data and instructions from memory)? Or would the OS be "cache aware" and consider how likely it is that the process's next N instructions are already cached in L2?
I think this is getting a bit too complicated for me at the moment. Sounds like an interesting research topic nonetheless.
 
drizzt81 said:
Would such a scheduler not require that either the OS learn which processes require a lot of bus interaction, or that the process control block (or task control block, or whatever it is called nowadays) include information about the likelihood of a bus access?
Considering I don't yet know for certain regarding dual cores, I would say probably. It works that way (or similarly) with single-core SMP, so I doubt they are going to change much in that regard (though instruction sets may differ slightly).

In the end, doesn't every process require bus interaction just for the sake of execution (getting data and instructions from memory)? Or would the OS be "cache aware" and consider how likely it is that the process's next N instructions are already cached in L2?
Every process requires bus interaction, but the OS never directly accesses the hardware (in Windows). The "cache awareness" you refer to is more just scheduling the process's CPU time in a fashion that is optimal for the CPU configuration (this is where the HAL comes into play).

I think this is getting a bit too complicated for me at the moment. Sounds like an interesting research topic nonetheless.
Indeed it is, and I know I'm not fully explaining it clearly. It's a difficult thing to cover on a thread in a forum on the web. There are loads of diagrams out there that can better illustrate what I'm trying to say.
 
Wow, amazing how far some people will run with a comment they don't understand.

Back in 1988, NT design requirements included SMP support, reentrancy, and preemptive multitasking. So the releases in the NT line (NT 3.1, 3.51, 4.0, Win2K, XP, 2K3 Server) have all been "optimized" for multiple CPUs.

Now, "fully optimized" for any given form that multiple CPUs can take is a different story. You can't very well design in support for forms that are unknown at design time.

Let's consider some variations, within the limits of my understanding. For example, we have the following possibilities:

Multiple CPUs, each in their own socket and presumably with their own bus to the memory/memory controller.
Multiple CPUs, same as above, but organized in a NUMA system.
Single or multiple physical CPUs with multiple logical CPUs, as with Hyperthreading - shared cache.
Single or multiple physical CPUs with multiple cores, as with Pentium D, AMD X2 - separate cache.
Single or multiple physical CPUs with multiple cores, as with Conroe, Merom, Woodcrest and Yonah - shared cache.

Now, as an example of "fully optimizing", consider Hyperthreading when two physical CPUs are present. Win2K would schedule threads on any available logical CPU, without considering if a physical CPU was already busy or idle. In other words, it would use Hyperthreading, but not in the best way. XP and Server 2K3 were optimized for hyperthreading, such that they would schedule threads to logical CPUs on idle physical CPUs before scheduling to a logical CPU on a busy physical CPU.

Let's consider one aspect of the latest multi-core CPUs. Until recently, each core in a multi-core CPU had its own L2 cache. Conroe (and I guess Yonah, which is already out) will have the L2 cache shared between cores. This difference would suggest that the thread-scheduling algorithm could be tuned to take advantage of the shared cache. For example, it may be acceptable to schedule a thread on one core of a particular CPU and then schedule it on the other core, and have it benefit because some of its data is still in the L2 cache. Previously, the scheduler would have to favor the particular core the thread ran on. In this sense, Conroe might be treated similarly to a Hyperthreaded CPU with its shared cache. But not having this particular optimization would not make an OS suck at multi-core, as some folks would like to make out.
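
None of that scheduler tuning is under an application's control; about all a program can do today is hint. A rough, untested sketch of the sort of thing I mean (the worker thread is made up):

Code:
#include <windows.h>

static DWORD WINAPI Worker(LPVOID arg)
{
    (void)arg;
    /* ... some cache-hungry work ... */
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, Worker, NULL, CREATE_SUSPENDED, NULL);

    /* Ask the scheduler to prefer CPU 1 for this thread, so its data tends
       to stay warm in that core's cache... */
    SetThreadIdealProcessor(h, 1);

    /* ...or pin it outright, which is the heavy-handed version. */
    SetThreadAffinityMask(h, 1 << 1);

    ResumeThread(h);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}

With a shared L2 cache that kind of babysitting should matter less, which is the point: the OS can afford to be more relaxed about where it puts the thread.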

Of course, there may be other things that the MS guy was referring to. It would be nice for single-threaded apps to get a boost. Perhaps other aspects of the various multi-core CPUs can be exploited. But every version of Windows in the NT line has improved scalability, and there is no reason to believe that Vista won't be at least as good as Server 2K3, if not better.

Edit: Fixed some things where I'd confused Merom and Yonah CPUs.
 
ZXTT said:
Of course, there may be other things that the MS guy was referring to. It would be nice for single-threaded apps to get a boost.

"Nice" for single-threaded apps to get a boost? Do you think it's possible to do this effectively in the general case?
 
mikeblas said:
"Nice" for single-threaded apps to get a boost? Do you think it's possible to do this effectively in the general case?

I don't know. There've been these rumors of "reverse hyperthreading". I don't know if the OS needs to support that or not. And as others have pointed out, there are probably compilers coming that can parallelize code.

The thing I talked about in my previous post, the shared L2 cache, is the most obvious new feature to me that multi-core CPUs are sporting now. Otherwise, they look suspiciously like multiple CPUs with a shared bus, and so multi-core issues should be similar to multi-CPU issues. However, I wonder if the fact that desktops and notebooks are getting multiple cores is an issue in and of itself, in that perhaps the multi-CPU/multi-core work put into Windows has been favoring servers.
 
ZXTT said:
I don't know.
Here's a hint: You can't.

"Reverse Hyperthreading" isn't much more than a rumour at the moment. Try searching for the term at amd.com, or even developer.amd.com -- you get zero hits.

There are compilers with features that allow the programmer to express the idea that certain things might happen in parallel at runtime if the processor resources are available. OpenMP is probably the most pertinent example of this.

But that happens at compile time, with extra information from the programmer. It doesn't happen arbitrarily for any code in the program. It still requires the programmer to know what can happen in parallel, what can't, and when to synchronize between the two.

Only some problems are limited by processing power, by the way. The issue usually is memory bandwidth. The canonical examples that OpenMP gives, for example, usually involve painting a large array. Instead of having one processor store 10,000 integers to the array, let's have two processors store 5,000 integers each.
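
In OpenMP terms that example is nothing more than an annotated loop, roughly like this (array size made up, and treat it as a sketch rather than tuned code):

Code:
#include <omp.h>

#define N 10000

int main(void)
{
    static int a[N];
    int i;

    /* The pragma is the programmer telling the compiler that the iterations
       are independent; the runtime then splits them across available cores. */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        a[i] = i * 2;

    return 0;
}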

Sounds great -- until you realize that storing the integers is what's taking the time. Memory bandwidth is the problem, not CPU power. Throwing another CPU at the problem might make it worse. (To demonstrate that for yourself, try playing around with MEMSPD.EXE on your dual-core or dual-proc machine.)

For the OS to be involved in fluffing code out to multiple processors, the decision of what to parallelize wouldn't happen until runtime. If you can get it to happen at runtime, after the code is compiled, you should make sure your doctoral advisor nominates you for the Turing Award.

There are environments where compilation can happen at runtime, like Java or the Microsoft .NET platforms. There, maybe some combination of source annotation and runtime modifications can work. But I'm not holding my breath for any substantial win.

Implementing something automatic in compilers requires exhaustive and complete dependency analysis. This takes time, and also takes a lot of infrastructure. You might be able to tell two loops aren't interdependent, for instance, if they touch only local variables and distinct sets of global variables.

But what if they call OS-provided functions, like WriteFile()?
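
To make that concrete, here's a contrived sketch (made-up arrays and a made-up logging loop, nothing from any real program):

Code:
#include <windows.h>

#define N 100

int main(void)
{
    static int a[N], b[N], sums[N], diffs[N];
    char line[] = "record\r\n";
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD written;
    int i;

    /* A compiler can prove these two loops independent: they read the same
       arrays but write to disjoint ones, so in principle they could run on
       separate cores. */
    for (i = 0; i < N; i++)
        sums[i] = a[i] + b[i];
    for (i = 0; i < N; i++)
        diffs[i] = a[i] - b[i];

    /* This loop looks just as simple, but WriteFile() is opaque: nothing in
       the source says whether the writes must stay in order or what the call
       touches inside the kernel, so it can't be farmed out automatically. */
    for (i = 0; i < N; i++)
        WriteFile(hOut, line, sizeof(line) - 1, &written, NULL);

    return 0;
}

The first two loops are the easy case; the third is where the exhaustive dependency analysis falls over.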
 