bobrownik said:
so after Vista I installed Kubuntu, and I'm lovin' it. Oh, and it has kernels optimised for different CPUs.

Unless you compiled it from source, it hasn't been optimized for your CPU. And no matter how much you optimize your Kubuntu kernel, it still won't be able to natively run MS Office, Macromedia Studio MX, or a whole list of other programs whose alternatives are not going to fit my needs as a professional.
GreNME said:
Stop trying to start a *nix / Win fight where one is completely unnecessary.

He wasn't trying to start a *nix fight; he was adding to the point that there are readily available operating systems (free ones, at that) that are optimised for multiple CPUs, and there was absolutely no reason for M$ not to add this feature to Vista.
GreNME said:
Mikeblas, nessus, CEpeep, and others who have said similar stuff: I now see how I was wasting my time. I've had some time off the OS forum, and didn't realize the forum has become populated with so many utterly pointless Vista threads.

Hi, GreNME. You've addressed your comments specifically to me, but I don't understand why you think you're wasting your time. I'm missing some context that you apparently think I must know. Would you mind filling me in?
Chris_Morley said:
What a silly thread...

MrGuvernment said:
quote for the F*'n truth.

Well, with input like yours, how will it get any better?
GreNME said:
Mikeblas, nessus, CEpeep, and others who have said similar stuff: I now see how I was wasting my time. I've had some time off the OS forum, and didn't realize the forum has become populated with so many utterly pointless Vista threads.

On the contrary, I find that Windows is not able to satisfy my needs as a professional (Sr. Linux Administrator / Network Engineer), so his comments were a benefit to me, as I have been granted the luxury of choice...
EmbraceThePenguin said:
Using SMP on Linux, I generally see quite a bit of a performance improvement with dual-core AMD chips -- more so than when I run Windows XP. It appears that the Linux kernel is far more mature in this area, seeing that Linux supported dual-core AMD chips a good two to three years (or longer) before Microsoft did. Also, id Software has released an SMP version of Quake 4 for Linux, and I believe there are a couple of others as well, although their names escape me at the moment...

I was under the strong impression that AMD released its dual-core processors last summer, i.e. about a year ago. You probably meant dual-socket systems? I distinctly remember owning an Abit BP6 motherboard with a pair of Celerons; I think Windows 2000 was out at that point, and the board seemed to work rather nicely with the OS. At the time, my college roommate Steve was running the same board with BeOS, which was supposed to be multi-CPU optimized and supposedly made all its applications multi-processed on the fly. Is that what Linux does? Consume a single-process / single-thread application and parallelize it on the fly?

Also, according to this article on FiringSquad, Quake 4 runs in "SMP" mode on a Windows system as well, showing "large" improvement in low-resolution environments. Is this different from Linux?
mikeblas said:
Hi, GreNME. You've addressed your comments specifically to me, but I don't understand why you think you're wasting your time. I'm missing some context that you apparently think I must know. Would you mind filling me in?

Actually, I addressed a number of individuals at once, of which you were one. You and the others already tried reasoning with the unreasonable comments of others, and have covered many things I would have. That's the only context I was addressing.
EmbraceThePenguin said:
On the contrary, I find that Windows is not able to satisfy my needs as a professional (Sr. Linux Administrator / Network Engineer), so his comments were a benefit to me, as I have been granted the luxury of choice...

Oh, so we're dropping titles now? Please.
EmbraceThePenguin said:
Anyway, Linux has for years supported SMP / dual-core systems, and in the Linux world there are thousands of applications that take advantage of SMP / dual-core setups.

And not one of them is AutoCAD or Photoshop. You seem to be constantly missing the point here. GIMP is nowhere near a match for what Photoshop can do, and there is no AutoCAD equivalent for Linux. Both of these programs are used at my company on a daily basis. You could have millions of programs, but if Photoshop and AutoCAD are not among them, then they are useless to me.
EmbraceThePenguin said:
Using SMP on Linux, I generally see quite a bit of a performance improvement with dual-core AMD chips -- more so than when I run Windows XP. It appears that the Linux kernel is far more mature in this area, seeing that Linux supported dual-core AMD chips a good two to three years (or longer) before Microsoft did.

Amazing, since dual-core chips have not existed on the market for two years, and Windows has been SMP-capable since Windows 2000 (longer if you want to follow the NT line). Perhaps you should check yourself before making ridiculous claims based on nonsense and biased conjecture. As for benchmarks: anyone who lives their life by benchmarks needs to get another life. Benchmarks are so often not indicative of real-world performance that they have mainly been relegated to marketing tools. Microsoft, Apple, Sun, and others love to use benchmarks in their marketing propaganda for just the reason you felt the need to mention them: spewing numbers easily confuses and astounds the gullible.
drizzt81 said:
Would someone enlighten me as to what a "dual-core optimization" in an OS is? From what I understand, WinXP is already able to schedule processes on two or more cores. It even appears to be "smart" about scheduling two processes that demand CPU resources at the same time on different cores: my two instances of F@H appear to be running on different cores at the same time.

Basically, it would have to do with the OS being aware not only of two separate processors, but also that they are on the same physical bus, and thus not scheduling concurrent processes that would conflict or cause delay. In fact, it would (theoretically) be able to schedule processes in such a manner as to make full use of the CPU.
drizzt81 said:
My OS understanding is not the greatest, but I have a hard time understanding how one could optimize an OS for dual / multi-core systems.

Essentially, the main binaries and libraries of the OS (exe, dll, and so on) would be compiled on a multi-core system, so the software is built to work best on that specific hardware.
drizzt81 said:
Also, even with a good amount of today's PCs being multi-core, would an MC-optimized system be slower on an SC computer than the equivalent non-optimized system?

Most likely not, because sans the extra CPU the two systems would be practically identical.
drizzt81 said:
Most people here know that I love bashing Vista as much as the next person, but I really do not see the "sucking at DC" part if Vista's proficiency at dealing with dual-core processors is at least at the level that WinXP is exhibiting.

You would be seeing things correctly, and that is what numerous people have been trying to point out from the start.
GreNME said:
Basically, it would have to do with the OS being aware not only of two separate processors, but also that they are on the same physical bus, and thus not scheduling concurrent processes that would conflict or cause delay. In fact, it would (theoretically) be able to schedule processes in such a manner as to make full use of the CPU.

Multi-socket machines don't have their processors on the same bus? My understanding is that they do -- if they're not NUMA.
mikeblas said:
Multi-socket machines don't have their processors on the same bus? My understanding is that they do -- if they're not NUMA.

No, you are correct: multi-socket systems (usually) give each processor its own bus. Multi-core processors, however, share one.
nessus said:
Neither Intel nor AMD has released the necessary reference compilers for producing truly optimized dual-core code. I would have thought you'd be flaming away at them, or perhaps at the genius doctoral computer-science candidates who are working on such things for their theses.

Software has always trailed hardware by four to five years, because of the time it usually takes to get really good tools and then learn to use them:

1. Two years to create tools barely capable of realizing the capability of the CPU once a working model is actually available.
2. Two more years to really optimize them (during which time the programmers are learning to change their own thinking).
3. Another year or so for the programmers to work around the quirks of the optimized compilers to create truly optimized code.

Current compilers can't even really deal with hyperthreading, much less the multi-core hyperthreading that is on the way shortly.

Oh wait, figuring out all the subtleties of detecting thread-optimizable code in the linear code programmers currently turn out -- because they haven't had the time to completely restructure their entire design and programming methodology -- is really easy, isn't it? Oh yeah, multiple branch-predictor interaction on separate logical processors is really easy to understand as well.

You'll write the compiler that's needed tonight in your spare time, right?
schizo said:
BeOS is the only operating system fully optimized for SMP in the manner you describe. It was originally written assuming a multiprocessing architecture, though they mostly used it for interactive speed. The world would be a very different place if Apple had gone with BeOS instead of NeXT. It was an indisputably technically superior next-generation operating system, and included features that have since been taken off the roadmap even for Vista.

What the heck is this crap? We go from a Chevy / Ford argument directly into a VHS / Betamax argument?
GreNME said:
In fact, based solely on the text I have read, it seems the OP and the writer of the article know very little about SMP and dual cores, outside of possibly knowing what the acronym 'SMP' stands for.

nigerian_businessman said:
They should be as taboo and predictable as a certain individual who had an affinity for stretching his anus to astronomical circumferences (sorry, the short version got censored) at this point.
Can we stop arguing about it now?
GreNME said:
Basically, it would have to do with the OS being aware not only of two separate processors, but also that they are on the same physical bus, and thus not scheduling concurrent processes that would conflict or cause delay. In fact, it would (theoretically) be able to schedule processes in such a manner as to make full use of the CPU.

Would such a scheduler not require either that the OS learn which processes require a lot of bus interaction, or that the process control block (or task control block, or whatever it is called nowadays) include information about the likelihood of a bus access?
drizzt81 said:
Would such a scheduler not require either that the OS learn which processes require a lot of bus interaction, or that the process control block (or task control block, or whatever it is called nowadays) include information about the likelihood of a bus access?

Considering I don't yet know the details regarding dual cores, I would say probably. It works that way (or similarly) with single-core SMP, so I doubt they are going to change much in that regard (though instruction sets may differ slightly).
drizzt81 said:
In the end, doesn't every process require bus interaction just for the sake of execution (getting data and instructions from memory)? Or would the OS be "cache aware" and consider how likely it is that the process's next N instructions are already cached in L2?

Every process requires interaction, but the OS never directly accesses the hardware (in Windows). The "cache aware" behavior you refer to is more a matter of scheduling each process's CPU time in a fashion that is optimal for the CPU configuration (this is where the HAL comes into play).
drizzt81 said:
I think this is getting a bit too complicated for me at the moment. Sounds like an interesting research topic nonetheless.

Indeed it is, and I know I'm not explaining it fully clearly. It's a difficult thing to cover in a thread on a web forum. There are loads of diagrams out there that can better illustrate what I'm trying to say.
ZXTT said:
Of course, there may be other things that the MS guy was referring to. It would be nice for single-threaded apps to get a boost.
mikeblas said:
"Nice" for single-threaded apps to get a boost? Do you think it's possible to do this effectively in the general case?
ZXTT said:
I don't know.

Here's a hint: you can't.