HDD or SSD for swap-heavy system

omniscence

I have a 16 GB UP Xeon workstation running Windows Server 2008 R2 that constantly uses something like 20-25 GB of overall memory and as a result is doing some heavy swapping. A RAM upgrade is out of the question here. Currently the pagefile is on the same disk as the OS, and the software running can also generate multiple GB of data per minute to that single 7200 rpm drive.

The system is overall unresponsive, and especially during the data writeout phases it gets very slow. I plan to put the pagefile on another disk to reduce the IO to the system disk. The question I have is: what type of disk would be best here? An SLC-based SSD like the Intel X25-E, a 10k or 15k SAS disk, or even a VelociRaptor?

Is there a way in Windows to determine the amount of data written to and read from the pagefile over a longer period of time, like 12 hours? Resource Monitor can only show near-term access.
 
First suggestion was going to be: get more RAM, but apparently you're saying that's a no-go.

Second suggestion: use Performance Monitor, which is still in there. Click Start, type performance monitor, press Enter, and voila: start monitoring and creating whatever reports you deem necessary.
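If you'd rather not babysit the GUI for 12 hours, the same counters can be logged from the command line with typeperf, which ships with 2008 R2. A sketch (counter names assume an English-language install; the output filename is just a placeholder):

```shell
:: Sample pagefile usage and paging I/O every 60 seconds for 12 hours
:: (720 samples) and write the results to a CSV you can graph later.
typeperf "\Paging File(_Total)\% Usage" ^
         "\Memory\Pages Input/sec" ^
         "\Memory\Pages Output/sec" ^
         -si 60 -sc 720 -o pagefile-12h.csv
```

Pages Input/sec is the one that hurts: those are hard faults where something actually had to wait on the disk.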

If you have multiple physical hard drives in that workstation, put a multi-gigabyte page file on every single physical drive, say 4GB on each, maybe 8GB if you're willing - that allows Windows to "hit" all of them as required. Having a single page file only on the system drive is just going to make it churn and burn constantly; one drive isn't enough. Multiple page files spread across physical drives that are (a) static in size and (b) placed towards the beginning of each drive will ensure the absolute best performance you're going to get. If the main system/OS drive is occupied in a read or write operation, Windows will "hit" one of the page files on one of the other drives.
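If you go that route, you can script the setup from an elevated command prompt with WMIC instead of clicking through System Properties. A rough sketch - the drive letters D: and E: are placeholders for your actual data drives, and a reboot is needed for the changes to take effect:

```shell
:: Take manual control of page file placement (run elevated).
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

:: Static (InitialSize = MaximumSize) 4 GB page file on each
:: additional physical drive. D: and E: are placeholders.
wmic pagefileset create name="D:\pagefile.sys"
wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096
wmic pagefileset create name="E:\pagefile.sys"
wmic pagefileset where name="E:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096
```

Setting InitialSize equal to MaximumSize is what makes the files static, so they get allocated once and never grow or fragment later.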

Honestly, if that machine is under that much load, and you're generating that level of data flow consistently, a hardware upgrade of some kind - most notably RAM - is the only thing that's seriously going to help.

While SSDs do last longer than many people seem to believe, that much data flow and generation could - and I'm saying could - wreak havoc on even the best SSD modules, because that's going to be a huge amount of data created, erased, and replaced, ad nauseam.

An SSD will improve things from the swap perspective because of the random access time (primarily) and the actual transfer speeds (secondary) but, in the long run, real RAM simply can't be beat in such a situation.

Is there any potential of perhaps buying something like one of those Gigabyte i-RAM cards or something similar and loading it up with RAM so you can use it as an additional drive? That could be a solution, or at least something to assist in moving all that data around. The downside of such devices is that they're still limited by the drive interface (which is pretty fast but nowhere near true RAM speeds).

RAM is still going to be the #1 way to "fix" these problems...
 
Thank you for your response. As the board is maxed out, a RAM upgrade would require a new board and most likely new processor(s). That would cost in excess of €2000 and is not justified for this system. It works as it is now, but usability suffers a bit due to the unresponsiveness.

I'm curious how Windows does the load balancing for the pagefiles. You say that it will use another disk if the system disk is doing IO at the same time, but if a page miss occurs on a page saved to the system disk's pagefile, it obviously has to wait for that disk to complete its IO. Or is the same data mirrored to all disks?
 
No, the data isn't mirrored; the OS will "see" the combined page files as one pool in the long run. It just reads/writes from whichever one is accessible (since a drive can only do one thing - a read or a write - at any given time). If drive A is busy in some operation, the OS can work with one of the other drives; if two drives are busy, then a third can be accessed, and so on. It's pretty efficient in how it manages things, actually. You're right that a fault on a page that already lives in the busy system disk's pagefile still has to wait for that disk, but new page-outs can go elsewhere.

If you want the actual nitty gritty details of how it does what it does, you're gonna have to ask Microsoft. ;)

But it's a simple thing:

Give the OS more page files, spread across multiple physical drives, and that will enhance the system's ability to multitask dramatically, especially in light of a situation where massive paging is being "forced" because of hardware limitations as you just described.

It's a stop-gap measure, a Band-Aid if you will, but it'll help out pretty well, and certainly much better than a single-page-file equipped system ever could hope to do.
 