Performance variation

Ree

During my time working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained, old and modern. Some examples (many of which you may have noticed yourself) of tasks that sometimes complete in different time frames:
  • POST
  • OS installation
  • Hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats
  • Software installation or other tasks (such as benchmarks) within a general purpose OS (Windows, Linux, etc)
I can imagine this is caused by the fact that a machine is built from many components that have to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS's complexity and the multiple concurrently running processes have some additional effect as well. However, I'm wondering whether this hardware imperfection and overhead is really large enough to be humanly noticeable. Maybe there are other factors that are just as influential, or even more so? So, in short: why?
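To make this a bit more concrete, here is a rough C sketch (my own toy example, nothing scientific - the loop sizes are arbitrary) that just times the same CPU-bound loop a few times in a row; on a typical desktop the printed numbers usually won't come out identical from run to run:

[CODE]
/* Rough sketch (toy example, not a real benchmark): time the same CPU-bound
 * loop several times in a row and print each run's duration.
 * Uses POSIX clock_gettime(); build with e.g. gcc -O2 runs.c */
#include <stdio.h>
#include <time.h>

static volatile unsigned long sink;   /* volatile so the loop isn't optimized away */

static void busy_work(void)
{
    unsigned long acc = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        acc += i ^ (acc >> 3);
    sink = acc;
}

int main(void)
{
    for (int run = 1; run <= 5; run++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        busy_work();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("run %d: %.2f ms\n", run, ms);
    }
    return 0;
}
[/CODE]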
 
Where are you trying to go with this thread?

I ask because you give a lot of technical "jargon" (for lack of a better term), and end everything with a simple question -- as if the answer would even be half as simple.

Looking at the examples that you've given, you missed an important detail: the individual user and the role of a particular computer.
 
Where are you trying to go with this thread?
I'm just looking for an answer or comments on something I have observed: that performance sometimes differs noticeably, even if the difference is small.
Looking at the examples that you've given, you missed an important detail: the individual user and the role of a particular computer.
No, the user has no effect here. What effect could they possibly have on the completion time of some long-running HDD diagnostic procedure executed in a DOS environment? Or on an OS installation after they click "Install"? I'm not sure what you mean by role. Server/desktop? I don't think roles make any difference here - whatever roles you have in mind.
 
The smaller the hardware (and the greater the amount of it) you're working with, the greater the difference in performance you're going to see from the same machine. Nearly all similar machines show this to some degree, but it's probably most visible in computers because of the scale (size and number) of what you're working with.
 
WARNING: lots of tech jargon follows.

Computers are non-deterministic machines. In the early days, a microprocessor executed instructions in order and retired them in order. The processor handled memory accesses directly, and basically did nothing while waiting for data. Performance was predictable, especially since the OS ran on bare metal (you can still get a similar experience today with embedded computing).

But today your PC is not just a microprocessor.

Your PC is made up of thousands of integrated processing units, most of them specialized and relatively simple, and most of them running independently of each other. So when your processor wants to read something from memory, it first checks multiple levels of cache via the cache controller. If that turns up empty, the read is scheduled by the memory controller and may be re-ordered to optimize bandwidth (by doing a burst read). When the memory access happens, it falls to the DMA controller to send the data to the processor (and this could also be re-ordered for maximum bandwidth). This is just one of the many thousands of interactions your computer deals with every second - add in the likelihood of communication errors and re-send overhead, and you can get very different performance from run to run.
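If you're curious how much the memory hierarchy alone can move the numbers, here's a rough C sketch (my own illustration, not a rigorous benchmark): it touches the same number of array elements twice, once sequentially and once with a large stride that defeats the caches, and the two walks usually take very different amounts of time:

[CODE]
/* Sketch of the memory-hierarchy effect: walk the same number of array
 * elements sequentially, then with a large stride that defeats the caches.
 * Same amount of "work", noticeably different time. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64UL * 1024 * 1024)   /* 64M ints, far larger than any CPU cache */

static double walk_ms(const int *a, size_t step)
{
    struct timespec t0, t1;
    volatile long sum = 0;       /* volatile keeps the reads from being optimized out */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t start = 0; start < step; start++)
        for (size_t i = start; i < N; i += step)
            sum += a[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sum;
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

int main(void)
{
    int *a = calloc(N, sizeof *a);   /* about 256 MB */
    if (!a) return 1;
    printf("sequential walk:  %.1f ms\n", walk_ms(a, 1));
    printf("stride-4096 walk: %.1f ms\n", walk_ms(a, 4096));
    free(a);
    return 0;
}
[/CODE]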

Above all that complex hardware sits an equally complex operating system layer. Today's processors have a TLB cache that tracks which virtual memory addresses map to which physical addresses, and if you hit a page fault, it takes a good bit of time to allocate a new page or load the existing page from the disk cache. On top of that, multi-threading introduces its own overhead: the time it takes for a non-HT processor to perform a context switch is very high, and can affect the results of many benchmarks. And in a modern operating system it's hard to disable all other tasks, because the system usually runs stock with dozens of processes.
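Here's another small sketch along the same lines (again my own illustration, not from any particular benchmark suite): it times a short fixed loop thousands of times and keeps the fastest and slowest samples - the slowest ones are usually dominated by exactly this kind of OS noise:

[CODE]
/* Sketch of OS-level noise: time a short fixed loop many times and keep the
 * fastest and slowest samples. The occasional big outlier is the OS getting
 * in the way - a context switch, page fault, or interrupt landing in the
 * middle of the measurement. */
#include <stdio.h>
#include <time.h>

static double elapsed_us(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
}

int main(void)
{
    double min = 1e30, max = 0.0;
    for (int sample = 0; sample < 10000; sample++) {
        struct timespec t0, t1;
        volatile unsigned long acc = 0;   /* volatile so the loop really runs */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (unsigned long i = 0; i < 10000; i++)
            acc += i;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double us = elapsed_us(t0, t1);
        if (us < min) min = us;
        if (us > max) max = us;
    }
    printf("fastest sample: %.1f us, slowest sample: %.1f us\n", min, max);
    return 0;
}
[/CODE]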

Short and sweet: this is why good testing includes multiple runs of the same benchmark, averaged.
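Something like this, roughly (bench_ms() here is just a placeholder workload - swap in whatever you're actually measuring):

[CODE]
/* Sketch of "multiple runs, averaged": run the benchmark several times and
 * report mean and standard deviation instead of a single number. bench_ms()
 * is a hypothetical stand-in for the real workload. Link with -lm for sqrt(). */
#include <math.h>
#include <stdio.h>
#include <time.h>

static double bench_ms(void)              /* placeholder workload, returns milliseconds */
{
    struct timespec t0, t1;
    volatile unsigned long acc = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < 50000000UL; i++)
        acc += i;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

int main(void)
{
    enum { RUNS = 10 };
    double r[RUNS], mean = 0.0, var = 0.0;
    for (int i = 0; i < RUNS; i++) { r[i] = bench_ms(); mean += r[i]; }
    mean /= RUNS;
    for (int i = 0; i < RUNS; i++) var += (r[i] - mean) * (r[i] - mean);
    printf("mean %.2f ms, stddev %.2f ms over %d runs\n",
           mean, sqrt(var / RUNS), RUNS);
    return 0;
}
[/CODE]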
 