# Windows Server 2008 – Any benefit or detriment from removing a pagefile on an 8 GB RAM machine?

## The Question

214 people think this question is useful

I’m running Windows 7 on a dual core, x64 AMD with 8 GB RAM.

Do I even need a page file?

Will removing it help or hurt performance?

Would it make a difference if this is a server or a desktop?

Does Windows 7 vs. Windows 2008 make a difference with a page file?

304 people think this answer is useful

TL;DR version: Let Windows handle your memory/pagefile settings. The people at MS have spent a lot more hours thinking about these issues than most of us sysadmins.

Many people seem to assume that Windows pushes data into the pagefile on demand: something wants a lot of memory, there is not enough free RAM to fill the need, so Windows begins madly writing data from RAM to disk at the last minute in order to free up RAM for the new demand.

This is incorrect. There’s more going on under the hood. Generally speaking, Windows maintains a backing store, meaning that it wants to see everything that’s in memory also on the disk somewhere. Now, when something comes along and demands a lot of memory, Windows can clear RAM very quickly, because that data is already on disk, ready to be paged back into RAM if it is called for. So it can be said that much of what’s in pagefile is also in RAM; the data was preemptively placed in pagefile to speed up new memory allocation demands.

Describing the specific mechanisms involved would take many pages (see chapter 7 of Windows Internals, and note that a new edition will soon be available), but there are a few nice things to note. First, much of what’s in RAM is intrinsically already on the disk – program code fetched from an executable file or a DLL for example. So this doesn’t need to be written to the pagefile; Windows can simply keep track of where the bits were originally fetched from. Second, Windows keeps track of which data in RAM is most frequently used, and so clears from RAM that data which has gone longest without being accessed.
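The two mechanisms above – file-backed code pages that need no pagefile write, and pre-emptive mirroring of other data to the pagefile so RAM can be reclaimed instantly – can be sketched in a toy model. Everything here (the `Page`/`MemoryManager` classes, the page names, the single-level LRU via `OrderedDict`) is invented for illustration; the real memory manager uses working sets and standby/modified lists, not this simple structure.

```python
from collections import OrderedDict

class Page:
    def __init__(self, name, file_backed):
        self.name = name
        self.file_backed = file_backed    # code/DLL pages: backing store is the original file
        self.in_pagefile = False          # anonymous pages: backing store is the pagefile

class MemoryManager:
    def __init__(self):
        self.ram = OrderedDict()          # ordering approximates least-recently-used
        self.pagefile_writes = 0

    def touch(self, page):
        """Access a page: make it resident and most-recently-used."""
        self.ram[page.name] = page
        self.ram.move_to_end(page.name)
        self._write_behind()

    def _write_behind(self):
        # Pre-emptively mirror anonymous pages to the pagefile, so every
        # resident page has a backing copy on disk *before* it is needed.
        for page in self.ram.values():
            if not page.file_backed and not page.in_pagefile:
                page.in_pagefile = True
                self.pagefile_writes += 1

    def reclaim(self, n):
        """Free n pages instantly: no writes needed, backing copies already exist."""
        freed = []
        for _ in range(min(n, len(self.ram))):
            name, _page = self.ram.popitem(last=False)  # drop least recently used
            freed.append(name)
        return freed

mm = MemoryManager()
mm.touch(Page("kernel32.dll:code", file_backed=True))   # fetched from the DLL itself
mm.touch(Page("app-heap-block", file_backed=False))     # must be mirrored to the pagefile
mm.touch(Page("app-working-data", file_backed=False))
print(mm.pagefile_writes)  # one write per anonymous page; code pages cost nothing
print(mm.reclaim(2))       # frees the two coldest pages with zero disk writes
```

The point of the sketch is the last line: because the backing copies were written ahead of time, reclaiming RAM is a bookkeeping operation, not an I/O operation.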

Removing the pagefile entirely can cause more disk thrashing. Imagine a simple scenario where some app launches and demands 80% of existing RAM. This would force current executable code out of RAM – possibly even OS code. Now every time those other apps – or the OS itself (!!) – need access to that code, the OS must page it in from its backing store on disk, leading to much thrashing: without a pagefile to serve as backing store for transient data, the only things that can be paged out are executables and DLLs, which had inherent backing stores to start with.
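A toy simulation of that scenario (all numbers invented): without a pagefile, anonymous heap pages have no backing store and can never be evicted, so eviction pressure falls entirely on the code pages that are still in active use, and the hard-fault count explodes.

```python
# "Hard faults" counts every page-in from disk. ram_pages, the page names
# and the access pattern are all made up for illustration.

def simulate(ram_pages, pagefile_enabled):
    resident = []        # pages currently in RAM, least recently used first
    hard_faults = 0

    def access(page, evictable):
        nonlocal hard_faults
        if page in resident:
            resident.remove(page)          # refresh LRU position
            resident.append(page)
            return
        hard_faults += 1                   # page-in from backing store
        if len(resident) >= ram_pages:
            for victim in resident:        # evict the coldest evictable page
                if victim in evictable:
                    resident.remove(victim)
                    break
        resident.append(page)

    code = [f"code{i}" for i in range(4)]              # OS + app executable pages
    heap = [f"heap{i}" for i in range(ram_pages - 2)]  # one big allocation

    # Code pages are file-backed, so always evictable; heap pages are
    # evictable only if a pagefile exists to receive them.
    evictable = set(code) | (set(heap) if pagefile_enabled else set())

    for page in code + heap:       # start the apps, then the big allocation
        access(page, evictable)
    for _ in range(50):            # steady state: code keeps executing
        for page in code:
            access(page, evictable)
        access(heap[0], evictable) # the big app touches a little of its data

    return hard_faults

print("with pagefile:   ", simulate(ram_pages=16, pagefile_enabled=True))
print("without pagefile:", simulate(ram_pages=16, pagefile_enabled=False))
```

With a pagefile, the cold heap pages get paged out once and the hot code stays resident; without one, the same few code pages are evicted and re-faulted on every pass.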

There are of course many resource/utilization scenarios. It is not impossible that you have one of the scenarios under which there would be no adverse effects from removing pagefile, but these are the minority. In most cases, removing or reducing pagefile will lead to reduced performance under peak-resource-utilization scenarios.

Some references:

dmo noted a recent Eric Lippert post which helps in the understanding of virtual memory (though is less related to the question). I’m putting it here because I suspect some people won’t scroll down to other answers – but if you find it valuable, you owe dmo a vote, so use the link to get there!

81 people think this answer is useful

Eric Lippert recently wrote a blog entry describing how Windows manages memory. In short, the Windows memory model can be thought of as a disk store where RAM acts as a performance-enhancing cache.

49 people think this answer is useful

As I see from the other answers, I am the only one who disabled the page file and never regretted it. Great 🙂

Both at home and at work I have Vista 64-bit with 8 GB of RAM, and both machines have the page file disabled. At work it’s nothing unusual for me to have a few instances of Visual Studio 2008, a Virtual PC with Windows XP, two instances of SQL Server and Internet Explorer 8 with a lot of tabs all running together. I rarely reach 80% of memory.

I’m also using hybrid sleep every day (hibernation with sleep) without any problems.

I started experimenting with this when I had Windows XP with 2 GB of RAM, and I really saw the difference. The classic example was that icons in Control Panel stopped appearing one after another and instead all appeared at once. Firefox/Thunderbird startup times also improved dramatically. Everything started to work immediately after I clicked on something. Unfortunately, 2 GB was too small for my application usage (Visual Studio 2008, Virtual PC and SQL Server), so I enabled the page file again.

But right now with 8 GB I never want to go back and enable page file.

For those talking about extreme cases, take this one from my Windows XP days.
When you try to load a large pivot table in Excel from an SQL query, Excel 2000 increases its memory usage very quickly.
With the page file disabled, you wait a little, then Excel blows up and the system reclaims all its memory afterwards.
With the page file enabled, you wait some time, and by the time you notice something is wrong you can do almost nothing with your system. Your HDD is working like hell, and even if you somehow manage to start Task Manager (after a few minutes of waiting) and kill excel.exe, you must wait a minute or so while the system loads everything back from the page file.
As I saw later, Excel 2003 handled the same pivot table without any problems with the page file disabled – so it was not a “too large dataset” problem.

So in my opinion, a disabled page file sometimes even protects you from poorly written applications.

In short: if you are aware of your memory usage, you can safely disable it.

Edit: I just want to add that I installed Windows Vista SP2 without any problems.

35 people think this answer is useful

You may want to do some measurement to understand how your own system is using memory before making pagefile adjustments – or, if you still want to make adjustments, both before and after them.

Perfmon is the tool for this; not Task Manager. A key counter is Memory – Pages Input/sec. This will specifically graph hard page faults, the ones where a read from disk is needed before a process can continue. Soft page faults (which are the majority of items graphed in the default Page Faults/sec counter; I recommend ignoring that counter!) aren’t really an issue; they simply show items being read from RAM normally.

![Perfmon graph of Memory – Pages Input/sec on a healthy system](http://g.imagehost.org/0383/perfmon-paging.png)

Above is an example of a system with no worries, memory-wise. Very occasionally there is a spike of hard faults – these cannot be avoided, since hard disks are always larger than RAM. But the graph is largely flat at zero. So the OS is paging-in from backing store very rarely.

If you are seeing a Memory – Pages Input/sec graph which is much spikier than this one, the right response is to either lower memory utilization (run fewer programs) or add RAM. Changing your pagefile settings would not change the fact that more memory is being demanded from the system than it actually has.

A handy additional counter to monitor is PhysicalDisk – Avg. Queue Length (all instances). This will show how much your changes impact disk usage itself. A well-behaved system will show this counter averaging at 4 or less per spindle.
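The interpretation rules above can be written as simple checks. The sampled values and the 50 pages/sec spike threshold below are invented; the only numbers taken from the answer are “mostly flat at zero” for Pages Input/sec and an average queue length of 4 or less per spindle.

```python
def pages_input_ok(samples, spike_threshold=50, max_spike_fraction=0.05):
    """Occasional hard-fault spikes are unavoidable; sustained faulting is not OK."""
    spikes = sum(1 for s in samples if s > spike_threshold)
    return spikes / len(samples) <= max_spike_fraction

def disk_queue_ok(avg_queue_length, spindles):
    """Rule of thumb: average disk queue length of 4 or less per spindle."""
    return avg_queue_length / spindles <= 4

healthy = [0] * 95 + [120, 85] + [0] * 3       # flat at zero, two rare spikes
thrashing = [80, 200, 150, 90, 300, 250, 0, 170, 60, 110] * 10

print(pages_input_ok(healthy))         # rare spikes: no worries
print(pages_input_ok(thrashing))       # sustained hard faulting: add RAM or run fewer programs
print(disk_queue_ok(6.0, spindles=2))  # 3 per spindle: well-behaved
```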

34 people think this answer is useful

I’ve run my 8 GB Vista x64 box without a page file for years, without any problems.

Problems did arise when I really used my memory!

Three weeks ago, I began editing really large image files (~2 GB) in Photoshop. One editing session ate up all my memory. Problem: I was not able to save my work since Photoshop needs more memory to save the file!

And since it was Photoshop itself, which was eating up all the memory, I could not even free memory by closing programs (well, I did, but it was too little to be of help).

All I could do was scrap my work, enable my page file and redo all the work – I lost a lot of it due to this, and cannot recommend disabling your page file.

Yes, it will work great most of the time. But the moment it breaks it might be painful.

20 people think this answer is useful

While the answers here covered the topic quite well, I will still recommend this read:

http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx

He talks about pagefile size near the end:

Some feel having no paging file results in better performance, but in general, having a paging file means Windows can write pages on the modified list (which represent pages that aren’t being accessed actively but have not been saved to disk) out to the paging file, thus making that memory available for more useful purposes (processes or file cache). So while there may be some workloads that perform better with no paging file, in general having one will mean more usable memory being available to the system (never mind that Windows won’t be able to write kernel crash dumps without a paging file sized large enough to hold them).

I really like Mark’s articles.
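Mark’s point about the modified list reduces to a small accounting sketch (all numbers invented): standby pages already have a backing copy on disk and can be reused at any time, but modified pages hold data saved nowhere else, so they become reusable only if there is a pagefile to write them out to.

```python
def reclaimable_pages(standby_pages, modified_pages, pagefile_enabled):
    """Pages the system could repurpose for processes or file cache."""
    reclaimable = standby_pages         # already backed on disk: free to reuse
    if pagefile_enabled:
        reclaimable += modified_pages   # flush the modified list, then reuse it
    return reclaimable

print(reclaimable_pages(standby_pages=1000, modified_pages=400, pagefile_enabled=True))
print(reclaimable_pages(standby_pages=1000, modified_pages=400, pagefile_enabled=False))
```

Same physical RAM in both cases, but with a pagefile more of it is usable for “more useful purposes”, which is exactly the quoted argument.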

13 people think this answer is useful

The best answer I can think of is that under a normal load you may not use up the 8 GB, but it is the unexpected loads where you will run into trouble.

With a page file, the system will at least run slowly once it starts hitting the pagefile. But if you remove the page file, it will just die (as far as I know).

Also, 8 GB seems like a lot now, but a few years down the line it might be considered the minimum amount of memory for a lot of software.

Either way – I would recommend keeping at least a small page file; but others please correct me if I am off-base.

6 people think this answer is useful

You didn’t mention whether it’s a 64-bit edition of Windows, but I assume it is.

The pagefile serves several purposes, including receiving the memory dump written in case of a BSoD (Blue Screen of Death).

If you don’t have a pagefile, Windows won’t be able to page out to disk if there isn’t enough memory. You may think that with 8 GB you won’t reach that limit. But you may have bad programs leaking memory over time.

I think it won’t let you hibernate/standby without a pagefile (though hibernation actually writes to a separate file, hiberfil.sys, so it may still work – I haven’t tried it yet).

Windows 7 / 2008 / Vista doesn’t change the use of page file.

I saw one explanation from Mark Russinovich (a Microsoft Fellow) that Windows can be slower without a page file than with one (even with plenty of RAM), but I can’t find the root cause again.

Are you out of disk space? I would keep a minimum of 1 GB to be able to have a kernel dump in case of a BSoD.

6 people think this answer is useful

The only person that can tell you if your servers or workstations “need” a page file is you, with careful use of performance monitor or whatever it’s called these days. What applications are you running, what use are they seeing, and what’s the highest possible memory use you could potentially see?

Is stability worth possibly compromising for the sake of saving a minute amount of money on smaller hard disks?

What happens when you download a very large patch, say a service pack? If the installer service decides it needs more memory than you figured in order to unpack the patch, what then? If your virus scanner (rightly) decides to scan this very large package, what sort of memory will it need while it unpacks and scans the patch file? I hope the patch archive doesn’t contain any archives itself, because that would absolutely murder your memory use figures.

What I can tell you is that removing your page file has a far higher probability of hurting than helping. I can’t see a reason why you wouldn’t have one – I’m sure there might be a few specialist cases where I’m wrong on that, but that’s a whole other area.

5 people think this answer is useful

I disabled my page file (8 GB of RAM on an x86 laptop) and had two problems even with 2500 MB free:

1. An ASP.NET error when trying to activate a WCF service: “Memory gates checking failed because the free memory (399,556,608 bytes) is less than 5% of total memory. As a result, the service will not be available for incoming requests. To resolve this, either reduce the load on the machine or adjust the value of minFreeMemoryPercentageToActivateService on the serviceHostingEnvironment configuration element.”

Quite how the gate measured only 399,556,608 bytes (about 380 MB, which is indeed under the ~430 MB that is 5% of 8 GB) as free, when I had 2500 MB free, I will never know!!

2. The “Close programs to prevent information loss” dialog: when 75% of my RAM is used, I get a dialog box telling me to close programs. You can disable this with a registry modification (or possibly by disabling the ‘Diagnostics Policy Service’).

In the end I decided to just turn it back on. Windows, plain and simple, was never designed to be used without a page file: it’s optimized to run with paging, not without. If you plan on using more than 75% of your memory and you don’t want to mess with your registry, then running without a page file may not be for you.

3 people think this answer is useful

The key question is whether your anticipated total memory usage, for all apps plus the operating system, approaches 8 GB. If your average memory usage is 2 GB and your maximum is only 4 GB, then having a page file is pointless. If your maximum memory usage is closer to 6–7 GB or greater, then it’s a good idea to have a page file.

PS: Don’t forget to allow for growth in the future!
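As a sketch only, the rule of thumb above could be written as a tiny helper. The 75% cutoff and the growth factor are invented knobs standing in for “approaches 8 GB” and the postscript about growth; they are not Microsoft guidance.

```python
def pagefile_recommended(max_usage_gb, ram_gb, growth_factor=1.25):
    """Recommend keeping a pagefile when projected peak usage nears physical RAM."""
    projected_peak = max_usage_gb * growth_factor  # allow for growth in the future
    return projected_peak >= ram_gb * 0.75         # "closer to 6-7 GB" on an 8 GB box

print(pagefile_recommended(max_usage_gb=4, ram_gb=8))    # peak well below RAM
print(pagefile_recommended(max_usage_gb=6.5, ram_gb=8))  # peak approaches RAM
```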

3 people think this answer is useful

It seems a lot of people have strong opinions on this subject but have never actually tried running their computer without a page file.

Few, if any, have tried. Even fewer seem to know how Windows treats the pagefile: it doesn’t “just” fill up when you run out of physical RAM. I bet most of you didn’t even know that your “free” RAM is used as a file cache!

You CAN get massive performance improvements by disabling your page file. Your system WILL be more susceptible to out-of-memory errors (and do you know how your applications respond in that scenario? For the most part, the OS just terminates the application). Start-up after standby or long idle periods will be far snappier.

If Microsoft actually let you set an option whereby the pagefile ONLY gets used when you are out of physical RAM (and all the file buffers have been discarded), then I would think there would be little to gain from disabling the pagefile.