Best paging file configuration for 32gb of ram?
Should I have none at all to boost performance or is there a specific good setting? I'm getting mixed opinions on the internet and can't figure out what to do.
Showing 1-15 of 29 comments
For the average user, you shouldn't need to. Been putting together my own gaming PCs since 2003 (and I'm no expert) and I've never once changed anything to do with those settings, not even when my primary storage was a 'spinning rust' hard drive back in the day.

With 32GB of RAM, and an OS on a fast SSD, you can leave the page file alone for the OS to manage.
It's best left to its own devices unless you understand how Windows manages memory and the effects changing this could have. If you don't have a good reason to manually set it, there's little reason to change it. Most of the claimed benefits are misstated and skip over the risks due to a lack of understanding of memory management, or worse, they stem from "I did it for years and never had a problem". There are a lot of things you can do and get away with, but that's never a good reason to do them. I was one of those people who did it for years and never had a problem... until I did.

For example, if you entirely disable the page file, you are capping your commit limit to be your physical RAM amount, which can have some pretty amusing results[imgur.com] in some situations if you ever need anywhere near your RAM amount (or sometimes nowhere near it, as that example shows).

Setting a small page file is perhaps no less silly than outright disabling it (maybe it's more silly), because if you're going to have it enabled anyway, then why limit the potential amount? The supposed benefits of disabling it are no longer present, but you're still taking the risks.

Then there's the silly ritual you'll see stated that you have to not only change it, but you have to do it a certain way. You have to disable it, and then be sure to restart before finally setting it again, because you know, we wouldn't want our page file fragmented on what is most likely an SSD these days, where fragmentation literally doesn't matter, would we? There's so much misunderstanding surrounding this that I don't blame you for asking.

If your uses typically don't need near 32 GB RAM (and you avoid situations like the above example where you might have issues well below your RAM amount), then in practice you can probably get away with disabling it and not suffering any ill results. But it still at least begs the question of why you'd disable it and give yourself a commit limit that restricts you to less RAM than you have.
nullable 6 May 2022 at 11:59
Originally posted by Desolate Eden:
Should I have none at all to boost performance or is there a specific good setting? I'm getting mixed opinions on the internet and can't figure out what to do.

System managed. The most I ever did to fiddle with it was moved the paging file from my C: drive SSD to a D: drive HDD back in 2013 (yeah I ran 32GB in 2013) but having up to 48GB reserved for the page file on a 250GB SSD was a big ask, so I compromised.

Sure I could have turned the page file down (or maybe off) and left it on the C: drive. But I went the other way. That worked fine for me for five years until I built a new system. Now I run the page file off D: drive SATA SSD because up to 48GB reserved on a 500GB NVMe SSD is a bit of an ask, so I compromised.

But I am thinking about building a new i7 12700k system with a 2TB NVMe as the primary, so I probably won't bother moving the page file then.
Last edited by nullable; 6 May 2022 at 12:00
I have 16 GB RAM and just have mine set to 8192 MB min/max.
That should be enough for you too. You could always take it to 16 GB if you want, but again, 8 GB would probably be enough.
Overseer 6 May 2022 at 13:28
The mixed opinions come from people with different use cases. That's why there is no answer to your question and why Microsoft itself - the people who made this - also keep it open.
Only you yourself know your specific scenario.
"It depends."

Originally posted by Overseer:
The mixed opinions come from people with different use cases. That's why there is no answer to your question and why Microsoft itself - the people who made this - also keep it open.
Only you yourself know your specific scenario.
Most realistic answer here.

For example, I have a high-RAM system, so I completely disable it. I also don't use anything prone to leaks or unusual resource use; one app I use can easily take 11-17 GB of RAM, but I have enough to still keep it disabled.

It truly is a case-by-case call: use system managed, set it manually, or, if you have enough RAM, disable it rather than using an SSD as overflow.

Carlsberg 6 May 2022 at 15:56
I always recommend leaving it to Windows to manage. The reason is that Windows is intended to run with a page file regardless of the amount of RAM you have, because it allows the OS to manage RAM efficiently depending on system usage and running apps.

That said, if you disable it, get no out-of-memory errors, and the system runs well, then I guess it's fine. I think maybe it's just carried over from the 32-bit days when 4 GB was the limit; Microsoft has kept it because they can't foresee how much RAM a user may opt to install, and the swap file prevents possible out-of-memory errors from popping up.
Last edited by Carlsberg; 6 May 2022 at 15:56
leave it on system managed.
changing it is pretty pointless.
windows dynamically changes it as needed.
there's no reason to change or disable it.
disabling or adjusting it doesn't "boost" performance.
it can cause issues with certain programs
if you set it off or too low.
even though you have well more than enough RAM,
you gain nothing by disabling or adjusting it.
Last edited by Bing Chilling; 6 May 2022 at 18:18
Daggoth 6 May 2022 at 23:50
Originally posted by Carlsberg:
I think maybe it's just carried over from the 32-bit days when 4 GB was the limit; Microsoft has kept it because they can't foresee how much RAM a user may opt to install, and the swap file prevents possible out-of-memory errors from popping up.

Oh much older than that. Page file voodoo has been around since page files existed. Win95/98 was prime voodoo grounds.
Back then it was a real issue. You'd be playing a game, the RAM would hit capacity, Windows would panic and try to unload stuff, then realise it didn't have enough page file, so it would grind the HDD to create more before juggling everything to page stuff out. Then it would do it again 20 seconds later because it didn't create enough, and the entire time your game is running at 1 frame every 3 seconds. Then it would leave behind a fragmented mess which slowed down other file operations until you fixed it.
A fixed-size file meant no mid-game shenanigans and no fragmentation. You had space or you didn't. The various formulae for sizing the file were basically balancing disk space availability against the need for a sizable file.

These days with big ram, big drives, big transfer speeds, multiple cores, and a much more memory-aware OS, it's a complete non-issue.
It's rare for Windows to run out of page file because the defaults are set much better and the page file is used better, paging stuff out well before you're grinding at 100% physical memory usage. Even if you do hit the limits, you've got more I/O threads and faster drives to create the space without completely destroying your experience.
Overseer 7 May 2022 at 0:21
Originally posted by Daggoth:
These days with big ram, big drives, big transfer speeds, multiple cores, and a much more memory-aware OS, it's a complete non-issue.
It's rare for windows to run out of pagefile because the defaults are set much better, pagefile is used better; paging stuff out well before you're grinding 100% physical memory usage, and even if you do hit the limits, you've got more I/O threads and faster drives to create the space without completely destroying your experience.
It's not even just that. The page file is by now part of a complex memory management system that evolved over time. That fact is often met with outdated concerns from the late 90s and random "solutions" whose origins are unknown. It's much better to just look at what the specific system needs, be it RAM, page file size, or even just physical storage.
From my personal experience, the only thing that hits performance hard is memory compression.
Heretic 7 May 2022 at 0:36
The only complex thing about Windows is working around the bugs and issues.
Modern games may not require a page file, but some older games designed around one MIGHT, so results may vary.

This has been talked about a number of times in the past, which those results can be searched for on Steam.

I've found 8192 MB min/max to be a good middle ground that doesn't use a large amount of SSD/HDD space.
plat 8 May 2022 at 9:59
Well, in reality, no one can tell you what's best for your machine while going by their own. I have 32 GB with a highly stable system and I modified my pagefile down to 2000 MB because I have a smaller Windows drive. I pretty much already know what would trigger a blue screen on here anyway but don't want to go completely without.

It's what feels comfortable to you while still having a bit of safety measure in place, even if you may never need it.

If your system is currently unstable or has a history of being that way, I would leave the pagefile up to Windows.
Here's an article I found that I'll reference, as it basically represents my stance to the letter.

https://www.howtogeek.com/126430/htg-explains-what-is-the-windows-page-file-and-should-you-disable-it/

I've tried running it all three ways throughout the years; with it left to system managed, with it entirely off, and with a set size.

I've never noticed any performance differences. You won't lose performance in actively running applications. The one minor case where you might (the fringe example of something left open but minimized forever, getting paged out and then being slow to access again) is really a non-issue, especially if your page file is on the fastest drive in your system (or at least an SSD), as it should be. It's certainly not worth the risks incurred by capping your commit limit at your physical RAM amount, which can effectively cost you access to some of your RAM in certain situations.

I have, however, run into issues with it disabled or set to a static size. Were they common? No. They were VERY uncommon. But it's simply not worth capping my commit limit to a point it might restrict me having access to RAM I paid for, even if it might be a rare edge case, just to cover for the other edge cases of "program left open but inactive for a long time is slightly faster to switch back to", which I never notice a slowdown from anyway.
Originally posted by UberFiend:
Officially it's 1/8th physical ram to a max of 4GB. Thus 32GB ram -> 4GB pf.

https://docs.microsoft.com/en-us/windows/client-management/determine-appropriate-page-file-size
Two things:

1. That's referencing how much space the page file will initially be set to, at minimum, when it's left system managed. That's not a declaration that those values are the official recommendation, so I'm not sure why you're proclaiming them as such. The article is quite clear from the outset that the official answer is more akin to "it varies from system to system and use to use".

2. You're interpreting the values wrong anyway. It's not stating that 4 GB is the maximum. It states that the minimum initial size will be 1/8 of the installed RAM, up to the point where the initial size is 32 GB (i.e., with 256 GB of physical RAM installed; beyond 256 GB the initial page file size won't continue to grow), and that the maximum size will be allowed to grow to either three times the installed physical RAM or 4 GB, whichever is larger. You're missing the "whichever is larger" portion, which is honestly the far more relevant one, unless a given Windows 10 or 11 system has less than 1.33 GB of physical RAM installed (the only condition where 4 GB is the larger and thus relevant value).

The behavior I'm seeing on a system with 64 GB reflects this, as initial size typically appears to be a bit under 10 GB. It's not capping at a size of 4 GB.
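The sizing rules from the linked Microsoft article boil down to simple arithmetic. A quick sketch in Python (the function names are mine; the figures are the ones quoted above):

```python
def system_managed_initial_gb(ram_gb: float) -> float:
    """Minimum initial page file size when system managed:
    1/8 of installed RAM, capped at 32 GB (the cap is hit at 256 GB RAM)."""
    return min(ram_gb / 8, 32)

def system_managed_max_gb(ram_gb: float) -> float:
    """Maximum size the page file is allowed to grow to:
    3x installed RAM or 4 GB, whichever is larger."""
    return max(3 * ram_gb, 4)

print(system_managed_initial_gb(32))   # 4.0  -- where the "32GB -> 4GB" reading comes from
print(system_managed_max_gb(32))       # 96.0 -- but the actual allowed maximum is 3x RAM
```

Note that 4 GB only "wins" the max() with under 1.33 GB of RAM, which is the point being made above.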
Last edited by Illusion of Progress; 8 May 2022 at 10:16
Originally posted by UberFiend:
Originally posted by Delusion of Progress:

The behavior I'm seeing on a system with 64 GB reflects this, as initial size typically appears to be a bit under 10 GB. It's not capping at a size of 4 GB.

64/8=8
Exactly (setting aside the part you're ignoring, that mine is using an initial value of almost 10 GB instead of 8 GB): my point was that there is no "to a maximum of 4 GB" as you stated.

There is a "maximum" (in quotes because this refers to the initial minimum size and not the actual maximum it can grow to) but it is a value of 32 GB when 256 GB system RAM is present, so above 256 GB RAM, the initial page file size should be around 32 GB still.

Remember, this is the "initial size that will be used when system managed" and not a "recommended page file size".
Originally posted by UberFiend:
When you boil it all down, pf in a nutshell, end of story -

"The purpose of a page file is to back (support) infrequently accessed modified pages so that they can be removed from physical memory."

"If you want a crash dump file to be created during a system crash, a page file or a dedicated dump file must exist and be large enough to back up the system crash dump setting. Otherwise, a system memory dump file isn't created."
Sure, and that's correct, but that's not the "end of the story". That is a part of the whole, not the whole itself. You're intentionally focusing just on the parts you want to see, rather than the whole.

"Page file sizing depends on the system crash dump setting requirements and the peak usage or expected peak usage of the system commit charge. Both considerations are unique to each system, even for systems that are identical. This uniqueness means that page file sizing is also unique to each system and can't be generalized."

"The system commit charge can't exceed the system commit limit. This limit is the sum of physical memory (RAM) and all page files combined. If no page files exist, the system commit limit is slightly less than the physical memory that is installed. Peak system-committed memory usage can vary greatly between systems. Therefore, physical memory and page file sizing also vary."
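The commit limit definition quoted above is likewise just arithmetic, which makes the consequence of disabling the page file easy to see. A sketch (the function name is mine):

```python
def commit_limit_gb(ram_gb: float, pagefile_gbs: list[float]) -> float:
    """System commit limit: physical RAM plus all page files combined.
    (With no page file, the real limit is slightly less than installed RAM.)"""
    return ram_gb + sum(pagefile_gbs)

# With the page file disabled, committed memory is capped at roughly the RAM
# amount, so allocations can fail even while plenty of physical RAM sits free.
print(commit_limit_gb(32, []))     # 32 -- page file disabled
print(commit_limit_gb(32, [8]))    # 40 -- 8 GB page file raises the headroom
```

This is why the earlier posts describe disabling the page file as "capping your commit limit to your physical RAM amount".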

Keep in mind I'm not advocating for "disabling it is objectively wrong and you will see issues or worse results". I'm merely advocating for "understand what it is, what the risks are, and take care in adjusting it". I've stated here and in other discussions about this subject that one may skirt issues with enough RAM relative to their topmost demands (as in, has more RAM than they need). But, as someone who did blindly adjust this while following the misleading advice, I didn't have problems... until I did. And then when I did have problems, I didn't even understand that this was why... until I learned.
Last edited by Illusion of Progress; 9 May 2022 at 10:47

Date Posted: 6 May 2022 at 10:59
Posts: 29