what are the most important innovations in operating systems?

this thread is kind of just for fun, especially with how tense things have been around here recently.

speaking strictly from a systems standpoint, what has really pushed your buttons over the past 50 years? Granted, there are several things that MUST exist in any general-purpose operating system, but what really grabs your interest? To keep this thread at a little more of a scientific level, i would like to refrain from debating the merits of certain graphical interfaces and things of that level, but it's your call.

i personally am amazed at our ability these days to insert a new module into a running kernel and have reasonable expectations of proper functionality. it's really fascinating. try to imagine yourself implementing that for the first time. where do you put the code? how do you know how much space to allocate? should the new module be given full kernel-level privileges, or direct access to the hardware? granted, many of these problems are decided beforehand by the model you have chosen, but what if lack of foresight forces you to change your model?

as i said, i find it fascinating.
 

SuperFetch is my favorite OS improvement in the last several years.

http://www.microsoft.com/windows/products/windowsvista/features/details/performance.mspx

Windows SuperFetch

A new memory management technology in Windows Vista, Windows SuperFetch, helps keep the computer consistently responsive to your programs by making better use of the computer's RAM. Windows SuperFetch prioritizes the programs you're currently using over background tasks and adapts to the way you work by tracking the programs you use most often and preloading these into memory. With SuperFetch, background tasks still run when the computer is idle. However, when the background task is finished, SuperFetch repopulates system memory with the data you were working with before the background task ran. Now, when you return to your desk, your programs will continue to run as efficiently as they did before you left.


Also, perhaps this should be moved to the OS forum?

And, from a kernel level, these are always good reads:
Microkernel
Monolithic kernel
 
Memory protection, virtual memory and rings were essential improvements. Before that, any program could stomp freely all over memory.

Toss in multitasking, and that's basically it. Everything else is window dressing.
 
I would say advances in file systems like ZFS; hell, even NTFS is a huge improvement over FAT. SMP and multiprocessing are getting bigger every day. I agree with the advances in memory management and improved caching, but the biggest and most important for all of us will be improvements in security, not just in the operating system but in browsers and firewalls. And not just "improvements" in security, but usable, intelligent ones: you can claim UAC in Vista is a security improvement, but everyone I know turns it off because it's so damn annoying. Improvements that stay functional are the kind that will matter in the future.
 
it seems that the biggest advances we've had in the world of operating systems came a long time ago, and we simply keep finding better ways to do the same thing.

as developers, what has made your life easier and what has made it more difficult while programming on any particular operating system?

i know my dad cries for the glory days when DOS would let you write information directly to the hardware, because he has no desire to learn an API for his hobby-style programming. i personally appreciate the rapid development, portability, and security that come from using system calls to get the same work done.
 
Personally, I'd like to see a way of aggregating multiple cores into a single core; I realise that there would have to be some major tradeoffs, but even if it happened in a virtual environment...it could be beneficial. Honestly, though, I'm not even sure it's possible (although I seem to remember some company or other announcing they'd figured it out...possibly AMD? Or VMWare?).
 
Personally, I'd like to see a way of aggregating multiple cores into a single core; I realise that there would have to be some major tradeoffs, but even if it happened in a virtual environment...it could be beneficial. Honestly, though, I'm not even sure it's possible (although I seem to remember some company or other announcing they'd figured it out...possibly AMD? Or VMWare?).
It'd have to be done at the hardware level, not in software. It's not as simple as dividing CPU usage among multiple cores; memory access and register access have to be considered as well.
 
as developers, what has made your life easier and what has made it more difficult while programming on any particular operating system?
Debugging, management, and monitoring support are the obvious answers to this. Most developers take for granted the advanced features operating systems like Windows have for monitoring system activity and reporting on events; older OSes had many of the same services, but absolutely no way to interrogate which user or which process was doing what work with what resources. Same for debugging support: in DOS, debuggers needed lots of smarts for their rudimentary functionality -- but now, much of that is built into the OS.
 
Personally, I'd like to see a way of aggregating multiple cores into a single core; I realise that there would have to be some major tradeoffs, but even if it happened in a virtual environment...it could be beneficial. Honestly, though, I'm not even sure it's possible (although I seem to remember some company or other announcing they'd figured it out...possibly AMD? Or VMWare?).

What, you mean like Instruction-Level Parallelism (parallelizing single-threaded code)? They've already tackled that, with mixed results. Designs leveraging pipelining, branch prediction, out-of-order execution and register renaming in the 90s (P6, Nx686/K6, K7) showed huge improvements over previous designs, and allowed programmers (and compiler makers) to ignore the kludgy superscalar designs of previous processors like the P5.

Unfortunately, the overhead of every additional pipeline in an ILP core increases chip complexity exponentially, and the benefits diminish with every additional dispatch pipe. Tweaking things like branch prediction and adding speculative memory-fetch units is simpler, but yields even less improvement. Essentially, ILP took a few giant leaps forward in the 90s, and has never quite matched those impressive steps since.

The solution is to throw the ball back into the programmer's court, much like early superscalar processors that lacked instruction schedulers. Yes, it sucks, and no, programming will never be the same, but you'll just have to suck it up and learn proper multithreaded design and debugging methods.
 
NO_HZ kernel option

gotta be a winner

and when 2.6.24 comes out it should finally be available for amd64
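for anyone chasing it down, NO_HZ is the dynticks ("tickless") option: instead of a fixed periodic timer interrupt, the kernel reprograms the timer only when something is actually scheduled, so idle CPUs can sleep longer. in a 2.6-era .config it shows up as (exact names can vary by version and architecture):

```
CONFIG_NO_HZ=y            # tickless idle: stop the periodic tick when idle
CONFIG_HIGH_RES_TIMERS=y  # high-resolution timers, usually enabled alongside
```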
 