Setting page file size to 0 MB to disable virtual memory in Win XP Pro

beowulf7

If you have "a lot" of physical memory, say 2 GB of DDR RAM, and are running Windows XP Pro, is there any need for a page file that can add up to an additional few GB of "wasted" space on one's hard drive? I remember reading, in either this sub-forum or the memory one, how some people set the initial and maximum page file size to 0 MB so that only the RAM would be used for memory and disk swapping would never occur.

My questions are:
1.) Is there any danger/risk in setting the page file size to be 0 MB?
2.) Would Windows inefficiently use the paging file instead of the actual RAM if some RAM is available?
3.) For question #2, if Windows handles memory efficiently, then in theory, if your OS and apps never use more than 2 GB of RAM (given my example), then the page file would never be utilized, so it doesn't make any difference if it is set to 0 MB or 4096 MB, right?
4.) Is setting the initial and maximum size to 0 MB the same as selecting the radio button that says "No paging file"?

Your input is appreciated. Thanks in advance.
 
beowulf7 said:
1.) Is there any danger/risk in setting the page file size to be 0 MB?
Yes, but minimal. Your PC isn't going to shoot flames out the back and spawn Satan himself; in fact, it'll run just fine without a page file. However, I don't advise it.
2.) Would Windows inefficiently use the paging file instead of the actual RAM if some RAM is available?
No, which is why you shouldn't disable the PF. When you have "enough" RAM, Windows will not use the page file much at all.
3.) For question #2, if Windows handles memory efficiently, then in theory, if your OS and apps never use more than 2 GB of RAM (given my example), then the page file would never be utilized, so it doesn't make any difference if it is set to 0 MB or 4096 MB, right?
The page file is used for a bit more than paging to disk, which is why it's still useful. See the sticky linked above for more info.
4.) Is setting the initial and maximum size to 0 MB the same as selecting the radio button that says "No paging file"?
Pretty much.

Your input is appreciated. Thanks in advance.
You're welcome.
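On question #4: on XP, whatever you pick in the Virtual Memory dialog ends up in the PagingFiles value under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management, so dumping that value is a quick way to see what's really configured. Here's a minimal, read-only sketch of that (my own illustration, not something supplied by any poster in the thread):

Code:
#include <stdio.h>
#include <string.h>
#include <windows.h>

// Read-only: dumps the pagefile entries the Virtual Memory dialog has configured.
int main(void)
{
    HKEY hKey;
    char buffer[1024];
    DWORD type = 0, size = sizeof(buffer);

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
            0, KEY_READ, &hKey) != ERROR_SUCCESS)
    {
        printf("Couldn't open the Memory Management key.\n");
        return 1;
    }

    if (RegQueryValueExA(hKey, "PagingFiles", NULL, &type,
            (LPBYTE)buffer, &size) == ERROR_SUCCESS && type == REG_MULTI_SZ)
    {
        // REG_MULTI_SZ: a run of NUL-terminated strings ending with an empty one,
        // e.g. "C:\pagefile.sys 2048 2048" (path, initial MB, maximum MB).
        const char *entry = buffer;
        if (*entry == '\0')
            printf("No pagefile is configured.\n");
        for (; *entry != '\0'; entry += strlen(entry) + 1)
            printf("Pagefile entry: %s\n", entry);
    }

    RegCloseKey(hKey);
    return 0;
}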
 
I tend to set my page file to 128 MB just so Photoshop and its ilk shut the hell up.
 
Phoenix86 said:
Yes, but minimal. Your PC isn't going to shoot flames out the back and spawn Satan himself; in fact, it'll run just fine without a page file. However, I don't advise it.

No, which is why you shouldn't disable the PF. When you have "enough" RAM, Windows will not use the page file much at all.

The page file is used for a bit more than paging to disk, which is why it's still useful. See the sticky linked above for more info.

Pretty much.


You're welcome.
Thanks for the detailed reply. I have 2 GB of RAM and right now am using 2048 MB as the initial and maximum swap file size (i.e., a 1:1 ratio between physical and virtual memory). In the past, I had used a 1:3 ratio between physical and virtual memory (e.g., 512 MB of RAM and a 1536 MB page file). But that would require 6 GB of disk space for my current system, which would've been a waste. So I will keep the 1:1 ratio, or maybe even drop down to a 2:1 ratio, so that my 2 GB of RAM has a 1024 MB page file.

I know some folks here have turned the page file down to 0 MB and claim this works fine for them. But with my luck, my computer will lock up/crash as I'm in the middle of something important.
 
beowulf7 said:
Thanks for the detailed reply. I have 2 GB of RAM and right now am using 2048 MB as the initial and maximum swap file size (i.e., a 1:1 ratio between physical and virtual memory). In the past, I had used a 1:3 ratio between physical and virtual memory (e.g., 512 MB of RAM and a 1536 MB page file). But that would require 6 GB of disk space for my current system, which would've been a waste. So I will keep the 1:1 ratio, or maybe even drop down to a 2:1 ratio, so that my 2 GB of RAM has a 1024 MB page file.

I know some folks here have turned the page file down to 0 MB and claim this works fine for them. But with my luck, my computer will lock up/crash as I'm in the middle of something important.
The whole ratio of RAM to PF is BS. The minimum PF size is determined by memory USAGE. If you have 4 GB in a PC that's only used for web browsing, you don't need 8 GB of PF giving 12 GB of virtual memory; heck, you wouldn't need the 4 GB of RAM either. Anyway, I have posted a sticky about sizing your PF. I probably need to update it with some better information, but I find any rule for PF size that only looks at total RAM lacking.

In short, check out your peak commit charge in Task Manager/Performance. This value tells you how much memory your programs require. Test after giving the rig a workout, i.e., run your most intensive apps; the peak value is reset on each boot.

Once you know how much memory you're using, subtract your RAM total from that. This tells you the *minimum* PF size needed. Obviously you want to set it higher than the minimum. With 2 GB you are going to find this value is negative, indicating you don't need a PF; however, as stated before, don't disable it.

Think about it this way: if Windows is not paging to disk because you have enough RAM, why bother tweaking the PF?
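If you'd rather not eyeball Task Manager, the same commit-charge numbers are available programmatically. Here's a rough sketch (my own illustration, using the Win32 GetPerformanceInfo call from psapi; link with psapi.lib) that prints them and applies the peak-commit-minus-RAM rule described above:

Code:
#include <stdio.h>
#include <windows.h>
#include <psapi.h>   // link with psapi.lib

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    pi.cb = sizeof(pi);

    if (!GetPerformanceInfo(&pi, sizeof(pi)))
    {
        printf("GetPerformanceInfo failed: %lu\n", GetLastError());
        return 1;
    }

    // All counts come back in pages; convert to the K units Task Manager shows.
    SIZE_T pageKB = pi.PageSize / 1024;
    printf("Commit Charge Total: %lu K\n", (unsigned long)(pi.CommitTotal   * pageKB));
    printf("Commit Charge Limit: %lu K\n", (unsigned long)(pi.CommitLimit   * pageKB));
    printf("Commit Charge Peak:  %lu K\n", (unsigned long)(pi.CommitPeak    * pageKB));
    printf("Physical RAM:        %lu K\n", (unsigned long)(pi.PhysicalTotal * pageKB));

    // The rule above: peak commit minus RAM is the least pagefile you'd need.
    if (pi.CommitPeak <= pi.PhysicalTotal)
        printf("Peak commit fits in RAM; by this rule no pagefile is strictly required.\n");
    else
        printf("Minimum pagefile by this rule: %lu K\n",
               (unsigned long)((pi.CommitPeak - pi.PhysicalTotal) * pageKB));

    return 0;
}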
 
I think he is saying why waste the hard drive space? I personally have been using the 1:1.5 ratio forever, but just got 2 GB of RAM myself. I'm also a CS undergraduate, though, and just took Computer Architecture and realized exactly what the PF does. I think it's important to have it on there, but the best thing to possibly do is to set it to a static and small size if you have 2 GB and test from there. There won't be any improvements in performance, but stability is another question altogether. And since we all use our systems so differently, it's probably best to try and see on your own system, with your own use, on your own time.

GL
 
Phoenix86 said:
The whole ratio of RAM to PF is BS. The minimum PF size is determined by memory USAGE. If you have 4 GB in a PC that's only used for web browsing, you don't need 8 GB of PF giving 12 GB of virtual memory; heck, you wouldn't need the 4 GB of RAM either. Anyway, I have posted a sticky about sizing your PF. I probably need to update it with some better information, but I find any rule for PF size that only looks at total RAM lacking.

In short, check out your peak commit charge in Task Manager/Performance. This value tells you how much memory your programs require. Test after giving the rig a workout, i.e., run your most intensive apps; the peak value is reset on each boot.

Once you know how much memory you're using, subtract your RAM total from that. This tells you the *minimum* PF size needed. Obviously you want to set it higher than the minimum. With 2 GB you are going to find this value is negative, indicating you don't need a PF; however, as stated before, don't disable it.

Think about it this way: if Windows is not paging to disk because you have enough RAM, why bother tweaking the PF?
Excellent! Thanks again. I recently rebooted and only brought up Mozilla FF and TB and Winamp, so I'm not doing much. The Commit Charge (K) value for Peak is 428744. The Limit is 4037080, so my limit is well over the peak.

I will open up a bunch of apps and see how high I can make the peak. I doubt I'll even come close to 2,000,000 K, so I might do what Malogato said, that is, make the page file size a small value, like 128 MB.

MS, not surprisingly, recommends I use 1.5 times my memory, which is 3070 MB (according to the Virtual Memory window you reach by right-clicking on My Computer and eventually getting to that screen).
 
MeTaSpARKs said:
I think he is saying why waste the hard drive space? I personally have been using the 1:1.5 ratio forever, but just got 2 GB of RAM myself. I'm also a CS undergraduate, though, and just took Computer Architecture and realized exactly what the PF does. I think it's important to have it on there, but the best thing to possibly do is to set it to a static and small size if you have 2 GB and test from there. There won't be any improvements in performance, but stability is another question altogether. And since we all use our systems so differently, it's probably best to try and see on your own system, with your own use, on your own time.

GL
Yes, I was thinking why waste the HDD space. But since I have a 300 GB HDD (two of them, actually), an extra GB or two allocated for the paging file won't tremendously affect my free HDD space.
 
OK, I got an IM window going and played a high-def WMV file and got my Commit Charge (K) Peak value to 670104. That's still quite a bit below 1 GB.
 
MeTaSpARKs said:
I think he is saying why waste the hard drive space? I personally have been using the 1:1.5 ratio forever, but just got 2 GB of RAM myself. I'm also a CS undergraduate, though, and just took Computer Architecture and realized exactly what the PF does. I think it's important to have it on there, but the best thing to possibly do is to set it to a static and small size if you have 2 GB and test from there. There won't be any improvements in performance, but stability is another question altogether. And since we all use our systems so differently, it's probably best to try and see on your own system, with your own use, on your own time.

GL
Waste drive space? Not these days. ;)

Yeah, my basic recommendation for people with "enough" RAM is to set the PF to a smaller static size, say 512MB (arbitrary) to 1GB. This will allow the OS to page anything it really wants to, and still give room for system cache and whatnot.

Your last statement rings so true, test, test, test; because what works in my environment may not do shit for you. :)
 
Just to keep it a whole number of GB, before I turned off my PC last night, I changed the page file size from 2048 MB to 1024 MB. I'm sure I won't see a difference in performance, but at the least, it freed up an extra GB. :D Thanks for all your contributions to this thread.
 
Setting page file size to 0 MB to disable virtual memory in Win XP Pro

You can't disable Virtual Memory. You are referring to the pagefile. They are two completely different things.

Also, while Phoenix86's recommendation is not bad, it is best to use perfmon to actually determine real PF usage. Then set the initial size to 4x that. The max should be 2x the number you just calculated.

If you really insist on using Task Manager's "PF Usage" graph, set the initial size to 2x that and the max to 2x that number.
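For what it's worth, the counter perfmon shows can also be sampled with a few lines of code against the PDH API. A rough sketch (my own illustration; the counter path below is the English-locale name, so treat it as an assumption about your system) that reports the percentage of the current pagefile in use:

Code:
#include <stdio.h>
#include <windows.h>
#include <pdh.h>   // link with pdh.lib

int main(void)
{
    PDH_HQUERY hQuery = NULL;
    PDH_HCOUNTER hCounter = NULL;
    PDH_FMT_COUNTERVALUE value;

    // English-locale counter path; on localized systems the name differs.
    if (PdhOpenQuery(NULL, 0, &hQuery) != ERROR_SUCCESS ||
        PdhAddCounter(hQuery, TEXT("\\Paging File(_Total)\\% Usage"), 0, &hCounter) != ERROR_SUCCESS)
    {
        printf("Failed to set up the PDH query.\n");
        return 1;
    }

    // One sample is enough for this counter; it reports the percent of the current
    // pagefile size that is in use (multiply by your pagefile size to get MB).
    if (PdhCollectQueryData(hQuery) == ERROR_SUCCESS &&
        PdhGetFormattedCounterValue(hCounter, PDH_FMT_DOUBLE, NULL, &value) == ERROR_SUCCESS)
    {
        printf("Paging File %% Usage: %.2f%%\n", value.doubleValue);
    }

    PdhCloseQuery(hQuery);
    return 0;
}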
 
KoolDrew said:
You can't disable Virtual Memory. You are referring to the pagefile. They are two completely different things.

Also, while Phoenix86's recommendation is not bad, it is best to use perfmon to actually determine real PF usage. Then set the initial size to 4x that. The max should be 2x the number you just calculated.

If you really insist on using Task Manager's "PF Usage" graph, set the initial size to 2x that and the max to 2x that number.
You are right. But by setting the page file to be 0 MB, isn't that in effect disabling virtual memory? How could one use virtual memory if there is no disk swap space to be used as memory? :confused:

I just ran "perfmon" from the command prompt and am looking at that application now. I will play around with it and find out if it can retain the peak amount of memory used since the last boot-up, much like Phoenix86's method shows.

Thanks.
 
Again, you are confused about what the term "Virtual Memory" really is.

Windows implements a virtual memory system based on a flat (linear) address space that provides each process with the illusion of having its own large, private address space. Virtual memory provides a logical view of memory that might not correspond to its physical layout. At run time, the memory manager, with assistance from hardware, translates, or maps, the virtual addresses into physical addresses, where the data is actually stored. By controlling the protection and mapping, the operating system can ensure that individual processes don’t bump into one another or overwrite operating system data. Figure 1-3 illustrates three virtually contiguous pages mapped to three discontiguous pages in physical memory.

Because most systems have much less physical memory than the total virtual memory in use by the running processes, the memory manager transfers, or pages, some of the memory contents to disk. Paging data to disk frees physical memory so that it can be used for other processes or for the operating system itself. When a thread accesses a virtual address that has been paged to disk, the virtual memory manager loads the information back into memory from disk.

The pagefile is just a portion of Virtual Memory, which is just a backing store for some data. NT requires everything in memory to have a backing store on disk so it can free up that memory if it ever needs to. Most things can be paged back to their original respective files (i.e., executables, shared libraries, unchanged files, etc.), but any data in memory that's been altered needs a place to go on disk, and that place is the pagefile.

So, if you do disable the pagefile, you are not disabling paging in any way. If you were really to disable paging you would lose both per-process protected address spaces and protection of kernel mode pages from user mode access. So basically, turning off paging just can't be done.

When you do disable the pagefile, you are just forcing the system to keep all "private" virtual memory in RAM and only allowing code and mapped files to be paged. Even if the "private" stuff has not been touched for hours and will never be touched again, it will have to stay in RAM. This means that there will be more paging of code, for a given workload and RAM size. Paging can also not be correctly balanced between code, mapped files, the file cache, and private data. That's going to be a bad thing in the long run.

Also, like Phoenix86 has already said, if you have enough RAM, the pagefile will not be used much anyway. There is always a practical need for a pagefile, because you can always use the extra RAM for other things. It's not as simple as "If I have enough RAM, I don't need a pagefile." It is a case of how best to use a finite resource for many things.
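To make the backing-store idea more concrete, here's a small sketch (my own illustration) showing that merely reserving address space costs no commit charge, while committing it does; the commit charge is what RAM plus the pagefile together have to be able to cover:

Code:
#include <stdio.h>
#include <windows.h>

// Reports available commit charge (RAM + pagefile not yet committed), in KB.
static DWORDLONG AvailCommitKB(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    GlobalMemoryStatusEx(&ms);
    return ms.ullAvailPageFile / 1024;   // despite the name, this is available commit
}

int main(void)
{
    const SIZE_T size = 256 * 1024 * 1024;   // 256 MB

    printf("Available commit at start:  %I64u K\n", AvailCommitKB());

    // Reserving address space claims no RAM and no pagefile; it's just bookkeeping.
    LPVOID p = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    printf("After reserving 256 MB:     %I64u K\n", AvailCommitKB());

    // Committing it is what charges against RAM + pagefile (the commit limit).
    if (p != NULL && VirtualAlloc(p, size, MEM_COMMIT, PAGE_READWRITE) != NULL)
        printf("After committing 256 MB:    %I64u K\n", AvailCommitKB());

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}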
 
KoolDrew said:
Again, you are confused about what the term "Virtual Memory" really is.



The pagefile is just a portion of Virtual Memory, which is just a backing store for some data. NT requires everything in memory to have a backing store on disk so it can free up that memory if it ever needs to. Most things can be paged back to their original respective files (i.e., executables, shared libraries, unchanged files, etc.), but any data in memory that's been altered needs a place to go on disk, and that place is the pagefile.

So, if you do disable the pagefile, you are not disabling paging in any way. If you were really to disable paging you would lose both per-process protected address spaces and protection of kernel mode pages from user mode access. So basically, turning off paging just can't be done.

When you do disable the pagefile, you are just forcing the system to keep all "private" virtual memory in RAM and only allowing code and mapped files to be paged. Even if the "private" stuff has not been touched for hours and will never be touched again, it will have to stay in RAM. This means that there will be more paging of code, for a given workload and RAM size. Paging can also not be correctly balanced between code, mapped files, the file cache, and private data. That's going to be a bad thing in the long run.

Also, like Phoenix86 has already said, if you have enough RAM, the pagefile will not be used much anyway. There is always a practical need for a pagefile, because you can always use the extra RAM for other things. It's not as simple as "If I have enough RAM, I don't need a pagefile." It is a case of how best to use a finite resource for many things.

I see, that sounds right. It's been a while since I had OS and Comp. Arch. classes in school, but it's starting to come back to me. You're right that virtual memory is never literally turned off. Thanks for the refresher. :)

I checked my Windows Task Manager and noticed my peak Commit Charge is 701384 K (up from my previous high of 670104 K). I did play a couple games, which probably did the trick. So far so good with my 1024 MB page file.
 
Hi
I've got 1.5 GB of RAM. Currently running FF 1.0.7 and iTunes and only have 300 MB of memory used. I play HL2, rip DVDs, etc., all fine. Worth noting that I don't do any Photoshop work, though.

L
 
If the RAM to PF ratio is BS, why does MS make the default page file size 1.5 times the amount of physical RAM? I mean, with 1GB of RAM, the PF is defaulted to an initial size of 1536MB with a maximum size of 3072MB.

This seems like such a confusing topic. I have heard that the PF and total physical RAM should both add up to 2048MB. So if you have 1GB of physical RAM, you should set a static page file size of 1024MB. If you have 512MB of physical RAM, you should set the PF to 1536MB. If you have 2GB, you should set it to something like 200MB, just so you still have a page file, because it is not good to disable it altogether. What is the real truth regarding this? If you have 2GB of system RAM, would it be totally safe to set the PF to 1MB, since a PF is still there and it is not disabled? Or does the PF have to be at least a decent size (like 100MB or larger) no matter how much RAM you have?
 
If the RAM to PF ratio is BS, why does MS make the default page file size 1.5 times the amount of physical RAM? I mean, with 1GB of RAM, the PF is defaulted to an initial size of 1536MB with a maximum size of 3072MB.

Because 1.5x is a "safe" setting. However, if you monitor your PF usage, you can set the optimal pagefile setting based on this.

I have heard that the PF and total physical RAM should both add up to 2048MB. So if you have 1GB of physical RAM, you should set a static page file size of 1024MB.

Optimal pagefile size should be determined by how much of it you use. Also, the only time you should set the pagefile to a fixed size is if it is in a partition of its own (which is a bad idea anyway). You should set the initial size high enough so it doesn't fragment, but so it still has that "safety net."

If you have 2GB of system RAM, would it be totally safe to set the PF to 1MB, since a PF is still there and it is not disabled? Or does the PF have to be at least a decent size (like 100MB or larger) no matter how much RAM you have?

The optimal pagefile size should be determined by actual usage, not just looking at how much RAM you have.
 
KoolDrew said:
Because 1.5x is a "safe" setting. However, if you monitor your PF usage, you can set the optimal pagefile setting based on this.

Indeed. If you actually look back at the MS line on this, you will find they have been stating 1.5x since the early '90s (the pagefile article gets updated with a few words every few years). That was when it was rare to have even 64 MB of RAM, let alone 2 GB, so pagefiles were hit hard in those days. However, now they are pretty much a legacy item.

An optimised pagefile is a bit of a misnomer, really, as any use of the pagefile is a performance hit in most cases.
 
Phoenix86 said:
Yes, but minimal. Your PC isn't going to shoot flames out the back and spawn Satan himself; in fact, it'll run just fine without a page file.

This isn't correct. Disabling the page file affects manageability and stability.

Phoenix86 said:
The page file is used for a bit more than paging to disk, which is why it's still useful. See the sticky linked above for more info.

Really? What else? I couldn't see any additional uses besides paging for the page file described in that sticky.
 
mikeblas said:
This isn't correct. Disabling the page file affects manageability and stability.
Manageability? Wha???
Stability? Definitely doesn't make the system unstable. If you're going to refute another user's post, at least explain your viewpoint.
 
djnes said:
Manageability? Wha???
Stability? Definitely doesn't make the system unstable. If you're going to refute another user's post, at least explain your viewpoint.

Sure. I've explained this elsewhere (like in the sticky, and in its related thread), but I'll be happy to summarize it here.

Manageability: if you've got no page file, the system may not be able to write a dump file in the event of a crash. Without a dump file, debugging system crashes can be more difficult.

Stability: if you've got no page file, you've limited virtual memory. Say an application, either by design or as a mode of failure, allocates a great deal of memory. The system is under stress, since memory allocation requests are more likely to fail. If an error occurs, there might not be enough discardable pages to allow the code to handle the error condition properly -- maybe the code in the image which handles the error can't be faulted in (because there's no page file to get rid of a writeable page), or there's not enough memory to bring up UI to work with the user. The app may terminate without reporting the error, or it might be difficult for the user to regain control of the system.
 
I appreciate the dialog here. Learning good stuff. :)

So it's decided - I obviously can't "disable" virtual memory and I will not set the page file to 0 MB. But I will not use MS' "1.5-3.0 times the RAM" algorithm either, so, unless I see a compelling reason to change otherwise, I will have my 2.0 GB of RAM use a 1.0 GB (1024 MB) page file. If I max out my memory to 4.0 GB of RAM (i.e. 4 x 1 GB of RAM), I will still keep my page file size as is (assuming I'm still running Windows XP Pro 32-bit).
 
I always disable dump files etc., after all, what the hell am I (or for that matter 99% of users) going to do with it??
 
mikeblas said:
Sure. I've explained this elsewhere (like in the sticky, and in its related thread), but I'll be happy to summarize it here.

Manageability: if you've got no page file, the system may not be able to write a dump file in the event of a crash. Without a dump file, debugging system crashes can be more difficult.

Stability: if you've got no page file, you've limited virtual memory. Say an application, either by design or as a mode of failure, allocates a great deal of memory. The system is under stress, since memory allocation requests are more likely to fail. If an error occurs, there might not be enough discardable pages to allow the code to handle the error condition properly -- maybe the code in the image which handles the error can't be faulted in (because there's no page file to get rid of a writeable page), or there's not enough memory to bring up UI to work with the user. The app may terminate without reporting the error, or it might be difficult for the user to regain control of the system.
These are opinions at best. If you have enough physical memory to handle your tasks, stability is not an issue. If you know how to use the Event Viewer, and disable automatic reboots, you can still easily figure out what's wrong without the dump file. What you said makes sense, but it is far from a proven fact. Much of this topic is theory and opinions, and should be treated that way.
 
djnes said:
These are opinions at best. If you have enough physical memory to handle your tasks, stability is not an issue.

It's only not an issue if you can completely predict your workload; in particular, the memory load pattern of all your processes over time. Most people aren't interested in doing these kinds of measurements, and updating them to make sure they're still correct and current. Is it worth the time to study memory load of all your processes to see when they spike? Compared to what benefit?

djnes said:
If you know how to use the Event Viewer, and disable automatic reboots, you can still easily figure out what's wrong without the dump file. What you said makes sense, but it is far from a proven fact. Much of this topic is theory and opinions, and should be treated that way.

It's a fact that you don't get a dump file if you don't have a page file.

The system event log ends up containing a record that shows the bugcheck code and the address where the check happened. Does the system log contain the module name causing the fault?

It's a fact that you can't do any debugging if you don't have the dump file. Without the dump file, what will you load in the debugger?

I'm not sure how you would expect someone to prove these statements. Manageability and stability are affected for the reasons I set forth; you acknowledge that they are effects of setting no page file, so why do you think something still needs to be proven? These responses also answer the original poster's question #1:

beowulf7 said:
1.) Is there any danger/risk in setting the page file size to be 0 MB?

Yes, there is danger and risk; if something sucks all the physical memory in your machine, you'll find it harder to regain control of your box. If you can't create dumps after a crash, you risk having a harder time diagnosing the problem.

It's odd that the poster didn't ask about the benefit of disabling the page file. Is there one? Why change the setting if it wins you nothing useful, especially considering the downside? Even if the negatives are low or unknown, they're greater than zero.
 
I disabled my paging file and deleted the file off my HDD, and I haven't had any problem running BF2 at 1024x768 max settings, Fraps, ATITool, TeamSpeak, and Winamp all at the same time. I've been running like that for 2 months and have never had a problem.

I don't think my system is any faster with it or without it, at least not that I can tell.

My 2 cents: disable it, delete it, and use your system. If you get weird crashes or errors, put it back and see if you still have the problems. If you still have problems, it's not the paging file.

 
Rustedimpala said:
My 2 cents: disable it, delete it, and use your system. If you get weird crashes or errors, put it back and see if you still have the problems. If you still have problems, it's not the paging file.
That's pretty much the sentiment. There's no hard and fast rule about who should and who shouldn't do it. Each system and its uses are unique. What may be enough RAM for you isn't for me, and vice versa. There's no harm in trying it out, as long as you are ready to revert back in case of a problem. I ran one for a long, long time without issue, but that doesn't mean it's okay for everyone.
 
Here's some proof. Below is a program that allocates all the memory it can get.

It asks the system what the memory page size is; it's 4096 bytes on all Win32 implementations that I know about. (Er, except maybe Win64. Hmm. Is it 8192 bytes there? What about Win NT 3.5 on the DEC Alpha? It might've been larger back then, too.)

It starts allocating 1024 times that size in each call; 4 megs of memory in each shot. If it successfully allocates that memory, it modifies all the pages in the allocation. So in each iteration at four megs, 1024 pages are marked as dirty and writable.

If the allocation fails, it halves the number of pages it asks for. If the size of the allocation drops below the page size, the program gives up because it knows it has allocated all the memory on the system.

My main system at home has 2 gigs of physical memory, and a 2.5 gig page file. (I don't know why; I can't remember how I came up with these numbers, or if they're system defaults.)

If I run this program with a page file, it runs until it exhausts the virtual address space for the process; it ends up allocating 2090 megs.

When the program stops, I still have control of my system. There's lots of paging, and things are getting swapped out like crazy. Some background processes have even stopped, since they've been swapped out of memory. I can fire up perfmon and task manager to figure out what's going on and get control of my system.

If I run this program without a page file, I don't have such great luck. It allocates 1,591,904 kilobytes. If I try to bring up task manager, sometimes I get this error message:

---------------------------
taskmgr.exe - Application Error
---------------------------
The application failed to initialize properly (0xc000012d). Click on OK to terminate the application.
---------------------------
OK
---------------------------

other times, I don't get Task Manager at all. The Start menu doesn't paint correctly. If I try to use the "Run" dialog to start notepad, I get only a message beep.

Fortunately, my test app below lets me exit it by entering a line. It doesn't even free all the memory it allocated; it lets the system clean up. An application that was using lots of memory (either by design or in a failure mode) might not give me a chance to kill it. And even if I kill it, I might lose whatever work I had done so far in that application.

After I regained control of my system, I noticed that a few other things were affected. Judging by a bunch of application event log entries I found, WMI wasn't looking too good:

Event provider attempted to register query "select * from __InstanceOperationEvent" whose target class "__InstanceOperationEvent" does not exist. The query will be ignored.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

Obviously, my application is written to suck as much memory as it can, as fast as it can. Don't be tempted to dismiss it as an unrealistic example, however. Many real-world applications are written to use as much memory as possible in the interest of improving performance; stateful server apps almost always fit into this category, for example. IIS and SQL Server both try to cache as much as they can in memory. Many desktop applications are similar; Adobe applications do a great job of aggressively managing memory and trying to provide a great experience while editing even the largest files.

Sure, you might trundle along with no page file and never have a problem. But when using such applications, especially in concert (that is, using more than one of the apps in Creative Suite instead of just Photoshop) without a page file, you run the risk of reproducing these results.

When you finally do have a problem, you're on thin ice. Just because it has worked for a couple of months without a problem doesn't mean there's not a risk that it will fail. When it does, you'll find you've painted yourself into a corner. And for what benefit?

Code:
#include <stdio.h>
#include <stdlib.h>   // for rand()
#include <string.h>   // for memcpy()
#include <windows.h>

int main(int argc, char* argv[])
{
	// find the system page size
	SYSTEM_INFO si;
	GetSystemInfo(&si);
	printf("System Page size is %u\n", si.dwPageSize);

	// get a single block of memory that large
	LPBYTE lpReferencePage = (LPBYTE) VirtualAlloc(NULL, si.dwPageSize, MEM_COMMIT, PAGE_READWRITE);
	if (lpReferencePage == NULL)
	{
		printf("Couldn't even allocate the reference page!\n");
		return 1;
	}

	// paint the page with random bytes
	for (DWORD dwIndex = 0; dwIndex < si.dwPageSize; dwIndex++)
	{
		lpReferencePage[dwIndex] = rand() & 0xFF;
	}

	// start at a thousand times the page size
	DWORD dwAllocSize = si.dwPageSize * 1024;
	DWORD dwTotalAllocated = 0;
	while (dwAllocSize > si.dwPageSize)
	{
		// allocate memory and paint it
		LPBYTE lpbCurrentPage = (LPBYTE) VirtualAlloc(NULL, dwAllocSize, MEM_COMMIT, PAGE_READWRITE);
		if (lpbCurrentPage == NULL)
		{
			// allocation failed. ask for half as much
			dwAllocSize /= 2;
		}
		else
		{
			dwTotalAllocated += dwAllocSize;
			DWORD dwCopies = dwAllocSize / si.dwPageSize;
			LPBYTE lpbTarget = lpbCurrentPage;
			for (DWORD dwCopy = 0; dwCopy < dwCopies; dwCopy++)
			{
				memcpy(lpbTarget, lpReferencePage, si.dwPageSize);
				lpbTarget += si.dwPageSize;
			}
			printf("Allocated %u bytes\n", dwTotalAllocated);
		}
	}

	printf("Allocation failed for single page!\n");
	getchar();

	VirtualFree(lpReferencePage, 0, MEM_RELEASE);
	return 0;
}
 
mikeblas said:
My main system at home has 2 gigs of physical memory, and a 2.5 gig page file. (I don't know why; I can't remember how I came up with these numbers, or if they're system defaults.)
But you're still going to tell everyone else how they should configure theirs? I'm not trying to start a flame war here, but there is no correct answer on this for every person. Furthermore, if you're going to tell everyone else how they should do it on their system, shouldn't you, at the very least, know why you configured your system a certain way?
 
djnes said:
But you're still going to tell everyone else how they should configure theirs? I'm not trying to start a flame war here, but there is no correct answer on this for every person. Furthermore, if you're going to tell everyone else how they should do it on their system, shouldn't you, at the very least, know why you configured your system a certain way?

It depends on what question you're trying to answer. The question I'm answering is: "Is there any danger/risk in setting the page file size to be 0 MB?".

The answer I'm providing is "yes".

If you reread my posts in this thread (carefully, this time) you'll find that I've offered no quantitative advice about pagefile size. I've only said that I think running without a pagefile is a bad idea.

Either way, it isn't relevant to my answer to Beowulf's first question, or to the proof you've asked for about stability being negatively affected by running without a page file.
 
And this brings us back to a common issue on here: people have trouble separating opinion from fact when answering questions. Your opinion is that yes, disabling the page file is a bad idea.

I'm trying to explain that this answer is only an opinion, and isn't correct for every single person. The correct answer is maybe...depending on the situation. There's no harm in trying it out, if you know the risks. Neither you nor I can tell someone it's a good idea or a bad idea. I can say I have had it disabled on systems in the past, with no negative effects....but that certainly doesn't mean it will work for everyone. I have reread your posts carefully, and that's why I'm posting these comments, so the OP and others realize there can be risks, but that it's safe to try.
 
djnes said:
And this brings us back to a common issue on here: people have trouble separating opinion from fact when answering questions. Your opinion is that yes, disabling the page file is a bad idea.

I'm trying to explain that this answer is only an opinion, and isn't correct for every single person. The correct answer is maybe...depending on the situation. There's no harm in trying it out, if you know the risks. Neither you nor I can tell someone it's a good idea or a bad idea. I can say I have had it disabled on systems in the past, with no negative effects....but that certainly doesn't mean it will work for everyone. I have reread your posts carefully, and that's why I'm posting these comments, so the OP and others realize there can be risks, but that it's safe to try.

Then I guess you haven't read carefully enough, or maybe you didn't think through the ramifications of what I've demonstrated: It's not safe to try.

You can lose control of your machine -- requiring a reboot to regain control, and losing work or data. The system is cornered; it won't be able to let you run any tools that would let you recover from running out of physical memory.

These are facts, they're not opinion. With no page file and no free physical memory, it's a fact that you can end up not able to regain control of your system. Run the program I provided and demonstrate it to yourself.

Meanwhile, most people will take "try" to mean that they change the setting and play around for a little bit to see what happens. (There's evidence of this all over the board, including a couple of posts in this thread.) They won't measure anything, will assume they got some mythical performance win, and then tell all their friends about it since they think the change they made is safe.

Weeks or months later, they might run some application that isn't working right (or that is working correctly, and is simply asked to use more memory than previously tested), and hang their machines.

Since they can't (or, at least, probably won't) exhaustively test all combinations of all the workloads they'll face, they can't predict what will happen before it's too late.

Sure, people are free to set their systems up any way they want. If they want to ignore my advice, it's fine by me.

So let's try it the other way: in what situation would someone want to disable the paging file? What benefit would they get from it that would outweigh the risks I've explained here?
 
It is safe to try. Plain and simple. You wouldn't lose any work or data, because you'd just be trying it out; "testing," as it's called. No one is arguing that it's risk-free.

My point was that you need to state your opinions, and that's it. Give advice, but don't expect everyone to agree. It's perfectly safe to try out.

I'm not going to waste my time anymore with this. You're very closed-minded, and won't bother to accept that some people may disagree with you, and that each person is allowed to do whatever they want to their system. You've given advice, which no one is arguing with, so let them make their own decision.

As you've said, they are free to try on their own systems. They were warned of the risks, so anything they do is on them. Leave it at that. You've given advice. Let them make their own minds up.
 
What I've posted is fact: someone who's running without a page file is compromising the stability of their system. To substantiate my claim, I've provided a way to demonstrate the problem and noted my observations.

If you want to question the method or the results, that would be worthwhile. Instead, rather than trying to investigate or reproduce it yourself and risk learning something, you went off on a tangent about the setting I happen to have.

I'm certainly open-minded: I asked if you knew about any scenarios where someone would want to jeopardize their stability in the interest of not having a page file -- and what benefit you'd expect to override that risk.

In response, I got a silly ad hominem rant.
 
mikeblas said:
I'm certainly open-minded: I asked if you knew about any scenarios where someone would want to jeopardize their stability in the interest of not having a page file -- and what benefit you'd expect to override that risk.
No, and in fact, I'll be the first to say there's no real benefit, so this is a moot point. But there's nothing wrong with trying, if a person feels like it, and it's certainly not worth getting your panties all in a bunch because someone defies your advice and wants to try it out.
 