This way of thinking is extremely old-fashioned.
Somehow people believe that they're fine-tuning memory management in Windows (for consumers). In reality, Microsoft stopped tweaking memory management after Windows 7, maybe even Vista.
The (global) memory commit charge is 1.7x what is actually used. This means that if you use 10 GB, which is the working set if all of it is in your physical RAM, Windows will be holding 17 GB prepped for use, and somewhere in there will be the 'page pool'.
If you have 32 GB and you're using 20 GB, part of the allocated memory pool will definitely be on disk, at least 4 GB of it. That is completely separate from the paging pool, which is space in pagefile.sys that was already used. So the pagefile will be a lot larger.
Typically it tries to create a pagefile equal to 1 physical DIMM + 256 MB, so if you have DIMMs of 8 GB (x4), your pagefile.sys is likely going to be slightly over 8 GB by default, although it can grow beyond that.
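For concreteness, that sizing rule can be written as a tiny calculation. Note that "one physical DIMM + 256 MB" is this post's claim, not a documented Windows rule (it is disputed later in the thread); the function name is mine and purely illustrative.

```python
# Arithmetic sketch of the claimed default pagefile sizing rule above.
# NOTE: this formula is the post's claim, not documented Windows behavior.
def claimed_default_pagefile_mb(dimm_size_gb: int) -> int:
    """Claimed initial pagefile size in MB for DIMMs of the given size."""
    return dimm_size_gb * 1024 + 256

# With 8 GB DIMMs the claim predicts a pagefile slightly over 8 GB:
print(claimed_default_pagefile_mb(8))  # 8448 MB, i.e. about 8.25 GB
```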
pagefile.sys is where it usually stores a copy of the page pool in your DIMMs (in case of loss). It also holds data that the system knows it only needs to read every now and then instead of accessing it a lot (reading such data back triggers a page fault). So pagefile.sys is a place where a lot of writes happen.
Every program gets a page pool, usually very small. Windows commits a lot more to the page pool, but doesn't assign this commit to the program, much like how it commits memory without assigning that commit to a program. (It just keeps spare commit room available for no reason.)
That said, it works a bit differently with VRAM. In VRAM it will assign all the commit to a single program, and there the commit is 2x what the program actually used at any point in time, so it is based on the peak.
So what happens here is that you can't really use all of your RAM anyway, due to the default overcommitting Microsoft applies to it. Yes, enabling the pagefile lets you use all your RAM, but then that 'overcommit' happens on your disk instead. It will use part of your disk at all times.
And if it is an SSD, that will ruin its lifespan due to the constant writes happening there, so you need to run TRIM a lot to save it.
Enabling the pagefile is basically detrimental to a modern system. No, disabling it doesn't improve performance, but people now use SSDs instead of HDDs, so I would highly recommend that casual PC users and gamers disable it.
At most, the pagefile is a cache that doesn't actually work very well in most cases. It is also a backup of your RAM in case you get BSODs (so enable it then if needed). Intel Optane is a good alternative for caching if you need that.
You won't notice any speed drop due to how ridiculously fast things are now, so yeah--
It's like supporting SysMain, basically, or ReadyBoost. Practically useless in most cases.
So, when should you enable the pagefile to use up all your RAM? (Keep in mind that with 32 GB, the pagefile will be at least 22 GB if this happens.) Well, temporarily, until you upgrade your RAM to 64 GB instead.
I mean, unless you want constant 22 GB data writes to happen as you work on your video, deteriorating your SSD in the process.
So that is my opinion, but it is also incomplete.
You see, under normal circumstances, pagefile.sys is a space that is used only very little. It might write 16 MB to it while the commit is 20 GB, for example. (Yes, it may have made the file big, but it doesn't actually use it that much, to be honest.)
So actually it's okay to keep it enabled. But if you want your SSD to remain alive for 8+ years, I basically recommend disabling it, because over time even small amounts of writes become detrimental.
And Windows and browsers already write like that, so keep it minimized, I would say. If you get near RAM fullness, upgrade to a higher amount of RAM; don't rely on the pagefile.
Yes, even if it is an SSD and you won't notice speed drops, which you won't.
Edit: What I am mostly trying to say is that Windows memory management needs an update. We don't use a few MBs of files anymore, and preparing huge amounts of GBs in commit is ridiculous. It doesn't do this per program, just globally.
Except in VRAM, where it gives huge extra GBs to a program despite that program never using them. It doesn't drop down either; it will always be 2x what the program actually uses, and yes, it adds the GBs of VRAM it cannot commit to the physical memory chips to pagefile.sys as well, which is F-
It's basically crap that needs a rework. Memory commit can rise by GBs in seconds, so it's dumb to expect a program to run faster thanks to 2 GB of commit preparation. IMO it should not be more than 1 GB, or 2 at most; especially in VRAM it should be a lot lower. e.e;
Edit 2 (24 minutes in):
What you should be given the option to control is how much max commit you'll allow Windows to give to itself and to a program, not the size of pagefile.sys. That's not that important, IMO. You want to use all your RAM, like people here said, but Windows practically prevents this in VRAM no matter what, and in RAM via commits of space that isn't used.
This is what forces you to enable it if you need more, yes. e.e; Bad design, with GBs of RAM not actually used, indeed.
(Commit is like "You're not allowed to use this space.") xd (It's still empty space.)
Windows memory management is the oppressor here. xd
How was what I said "old-fashioned thinking"? This same reasoning, that the page file is a relic of an implementation from the days when RAM was scarce, is always focused on while everything else about the page file is ignored. Regardless of whether that is the case or not, there is simply more to it than that.
Keep in mind, while I would suggest leaving it system managed, I'm not in firm opposition to adjusting it if the user understands what they are adjusting, which is my point. I'm merely in firm opposition to the idea that disabling it is objectively the correct setting. The fact is, there's a lot of misinformation just saying "it's not needed if you're not short on RAM, it's old-fashioned, you gain performance without it" as the default recommendation for everyone, and that is careless. I know this literally from experience.
Also, what are some sources showing that memory management hasn't changed in the last decade to decade and a half? And even if it hasn't, in what way is that relevant? How long it's been since a change isn't what matters.
I'm not sure where you're getting that the commit charge will always be 1.7 times your actual needs, but this is just incorrect. It varies! You can literally prove this for yourself by disabling the page file; if you are ever able to use over 58.9% of your physical RAM, then you know this "commit charge is 1.7 times what you're using" isn't always the case. Or, you don't even need to go that far; just take a look at Resource Monitor and compare the physical RAM use to the commit charge. You'll be able to find plenty of examples where the latter isn't 1.7 times the former.
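That 58.9% figure follows from simple arithmetic. A quick sketch (the 1.7x ratio is the claim under dispute in this thread, not an actual Windows constant):

```python
# If the commit charge were always 1.7x the memory actually in use,
# then with the page file disabled (commit limit == physical RAM) the
# usable fraction of RAM would cap out at 1/1.7.
usable_fraction = 1 / 1.7
print(f"{usable_fraction:.1%}")  # roughly 58.8%

# With 32 GB of physical RAM, that claimed ceiling would be about:
print(f"{32 * usable_fraction:.1f} GB")  # about 18.8 GB
```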
Nor am I sure where you're getting that it will equal one module + 256 MB. I can't find any sources for either of these, nor does behavior I've seen consistently indicate either of those. I can find a source stating what it actually is though, which is given here...
https://docs.microsoft.com/en-us/windows/client-management/determine-appropriate-page-file-size
It states that "minimum page file size varies based on page file usage history, amount of RAM (RAM ÷ 8, max 32 GB) and crash dump settings" and that the "maximum page file size is 3 × RAM or 4 GB, whichever is larger. This size is then limited to the volume size ÷ 8. However, it can grow to within 1 GB of free space on the volume if necessary for crash dump settings."
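Those documented rules are simple enough to sketch in code. The function names are mine; page file usage history and crash dump adjustments are not modeled.

```python
GB = 1024 ** 3

def min_pagefile_bytes(ram_bytes: int) -> int:
    # Per the quoted docs: "RAM / 8, max 32 GB" (usage history and
    # crash dump settings can raise this; not modeled here).
    return min(ram_bytes // 8, 32 * GB)

def max_pagefile_bytes(ram_bytes: int, volume_bytes: int) -> int:
    # Per the quoted docs: "3 x RAM or 4 GB, whichever is larger",
    # then limited to the volume size / 8.
    return min(max(3 * ram_bytes, 4 * GB), volume_bytes // 8)

# Example: 16 GB of RAM and a 1 TB volume.
print(min_pagefile_bytes(16 * GB) // GB)             # 2 (GB)
print(max_pagefile_bytes(16 * GB, 1024 * GB) // GB)  # 48 (GB)
```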
The pagefile.sys file isn't some copy of the page pool contents duplicated from your RAM (or am I misunderstanding you here?); it IS the page file itself.
Do you have sources for the concerns about the page file killing SSDs? This was a common debate back when SSDs first came about around 2009 to 2011 or so, and I thought it was pretty settled back then that SSDs won't die in short order from having a page file on them. None of mine ever did, and I also use mine for scratch space for Photoshop (which will use it aggressively), but that's just anecdotal evidence of course. But I don't see large swaths of reports of SSDs dying because of the page file since they became a thing.

Arguing that "some use is more wear and tear than none" is, uh... obvious? Your PC takes wear and tear when it's used. A page file is wear and tear on HDDs too; this isn't exclusive to SSDs. Unless you have some sources showing the page file is exhausting the write limits of SSDs in short order, I think time has proven this concern to be one you can pretty much forget.

It was also a fairly popular study back then that Intel recommended an SSD as the best place for the page file (no surprise, as they are faster), and that a read-to-write ratio of something like 40 to 1 was common for the page file. But again, that will of course vary, and likewise, Intel might be considered a biased source given they are in the SSD selling market.

I mean, I guess if you're trying to do something that needs many, many times the physical RAM you have, to where your entire PC experience is always coming from the page file, maybe? But that's not a normal case, and that's definitely a case for needing more RAM.
It is indeed a lower number if you have less RAM installed. And it will get lower once your pagefile exceeds 96 GB and you somehow approach 88 GB of memory usage.
That said, if you wait a while, it will deallocate some of the assigned commit, that is, if processes are labeled as idle. If you run any process that is 'high power consuming' or not detected as idle, it goes back up to the 1.7x scheme. (And that is just system memory.)
You don't read these things, you measure them with Resource Monitor, Process Explorer and RAMMap.
It's because it needs that much for a complete dump, which is also what it accounts for with default settings. It seems I was off by 1 MB, though. Again, you measure this: click the file, examine its properties, and check the file size (if you don't want to download RAMMap and look).
https://docs.microsoft.com/en-us/windows/win32/memory/memory-pools
https://docs.microsoft.com/nl-nl/archive/blogs/markrussinovich/pushing-the-limits-of-windows-paged-and-nonpaged-pool
A page is basically a small piece of the memory block inside the virtual memory space Windows created, or rather the address space, although that term is also used for the individual section assigned to a program.
Windows combines everything, all of your DIMMs + pagefile + whatever else it can use as memory, into one big stretch of 'virtual memory'. It pretends this is basically one big DIMM, but it's not.
It knows which section is on the disk, and which is on a DIMM, even if it is sorted weirdly.
https://docs.microsoft.com/en-us/windows/win32/memory/page-state
https://en.wikipedia.org/wiki/Memory_paging
https://en.wikipedia.org/wiki/Page_fault
The pagefile doesn't permanently keep a copy of the paged pool; rather, it keeps some of it (depending on how often each page is needed). That means the pagefile holds unique pages as well, and once such a page is referenced, you get a page fault (making Windows put that page back into memory). It acts like a cache, basically.
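That "acts like a cache" behavior can be sketched as a toy model: pages live either in "RAM" or in the "page file", and touching a page that is only in the page file triggers a page fault that brings it back into RAM. This is purely illustrative; the real Windows eviction policy is far more sophisticated than this oldest-first scheme.

```python
# Toy model of demand paging: not how Windows actually implements it.
class ToyVirtualMemory:
    def __init__(self, ram_capacity: int):
        self.ram_capacity = ram_capacity
        self.ram = {}        # page number -> data (resident pages)
        self.pagefile = {}   # pages evicted to "disk"
        self.page_faults = 0

    def touch(self, page: int):
        if page in self.ram:
            return self.ram[page]      # fast path: page is resident
        self.page_faults += 1          # page fault: bring it into RAM
        data = self.pagefile.pop(page, f"page-{page}")
        if len(self.ram) >= self.ram_capacity:
            # Evict the oldest resident page to the page file.
            victim = next(iter(self.ram))
            self.pagefile[victim] = self.ram.pop(victim)
        self.ram[page] = data
        return data

vm = ToyVirtualMemory(ram_capacity=2)
for p in [1, 2, 3, 1]:  # page 1 is evicted when 3 arrives, then faults again
    vm.touch(p)
print(vm.page_faults)   # 4
```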
Like I said, despite Windows allocating a lot to the pagefile (and preventing that SSD space from being used), what it actually writes to the pagefile is extremely small. I gave an example where you are using 10 GB (working set) and it writes 16 MB to the on-disk pagefile.sys. It is very small.
That said, it's still uncontrolled, undesired writes. You're just trying to game, you have enough RAM available to keep all of the paging happening within RAM, and it is placing pages on the disk just because they're used less often. It's a sigh.
It's similar with the hibernation file, which I also recommend disabling (although hibernation is far worse in the daily damage it does to someone's SSD if they often power off).
Over the long term, like years, even pagefile.sys will noticeably affect the SSD's lifespan. Disabling it may buy the drive another year, maybe more.
In the case of cheaper SSDs it will likely not matter, since they're designed to break within 4 years or so. But you expect 10 years out of an SSD, and 30 out of an HDD. If they break earlier, it's due to usage or due to design. (Manufacturers want to sell for money, after all; making lasting products is bad practice for their pockets.)
A regular consumer wants to keep their SSD around with the idea of never having to replace it, so it's a good idea to optimize toward that as much as possible. It doesn't affect their performance anyway.
Yes, it may not be the best way to use 'memory' (RAM), but I mean, we have 32 GB of RAM here. And I don't know how much SSD room they have, but if it's 250 GB or something, I'd disable it.
It stops mattering as much if it is 1 TB, since SSDs don't last indefinitely.
And yes there are other solutions, like Full Disk copies and such.
And old-fashioned thinking is basically 'buying Microsoft's words'. You're assuming they have the best intentions for their users even though they have been trying to give you less access, fewer capabilities, etc. They turned part of your computer into a service they control.
You need to test things instead of reading old paid blog posts, basically.
That said, it doesn't mean everything they say in those posts is a lie. A lot of it is not, but usually the point they try to make is.
Also, did you ever see how much memory Windows is willing to commit to its own programs, like svchost? It's 2 TB of RAM per instance (even though Windows limits the amount of RAM you can install on your motherboard).
So, if you were to break the limits on the pagefile's max size, you know how big the pagefile could get just for that one program. Not that it will ever happen (hopefully).
If you want to know how you can tell if things have changed in memory management, you need to compare old articles with newer ones or better, test it yourself. Here's one.
http://brandonlive.com/2010/02/21/measuring-memory-usage-in-windows-7/
Windows 7 got archived by people and can be freely downloaded.
The last major change to the system was with Vista, I think.
https://imgur.com/a/W5MVB8i
It's telling that you're already partially walking this claim back by stating the ratio shrinks once you approach physical RAM capacity.
Do we therefore agree there is no fixed ratio and that it varies, yes?
I didn't ask about memory dumps though. I asked for support of your claim that the initial page file size will typically equal one DIMM size + 256 MB (memory dump settings themselves are not a constant but a variable anyway).
The reason I asked for support of your claim is because Microsoft instead states different criteria, which you ignored.
The observable behavior on my end does not match what you claimed either (see the earlier link; it shows my system with a page file of ~9.4 GB, which is what it typically sets initially, and that is not one DIMM's worth of RAM in my case). If what you said were true, my system should never have a page file lower than a bit over 16 GB, but it does.
Yes, I do disable the hibernation feature as I don't use the feature itself (that, and disabling hibernation has the benefit of making shut down a full shut down instead of hibernating parts of the kernel and drivers and whatnot). The page file though, like it or not, is a rather core part of Windows memory management and brings some potentially (key word) fundamental changes when adjusted.
I was asking for some support to the claim, not just for you to restate it.
Again, while we're "only" a decade to a decade and a half into the era of SSDs, I think time has at least somewhat proven this is one that can be put to rest. Page files definitely don't seem to be causing SSDs to fail in much shorter order than otherwise. If you have sound reasoning to show they do, I'd be open to it, though.
I mean, same thing was being said about 16 GB a few years ago, 8 GB before that, 4 GB before that, etc.
At some point you might realize you're essentially advocating for having more RAM than you need to cover for edge cases that might occur when disabling the page file, for what is arguably no performance difference (none anyone will notice in most use cases). RAM capacities typically come in factors of two, so you're basically doubling your RAM cost to do this (unless you intentionally buy one pair of one capacity and a second pair of half the capacity, which is still typically roughly a 50% cost increase). Sounds like you'd be better off instead getting a faster GPU, faster CPU, faster storage, faster RAM, or really anything but "more RAM so you can disable the page file" if the pursuit was more performance, but maybe that's just me.
And, maybe it's just me, but I'd presume if someone was buying more than 16 GB of RAM now (32 GB is the next common step), it's because they feel 16 GB is either close to inadequate and will be soon, or it is inadequate. Or, maybe it's "just because it's cheap" after all (I've done that). But if one does need it, that means that 32 GB isn't entirely fluff, and thus running without the page file could be risky, if not now but later on.
I feel the best way to use your system is not to impose risks on it in the chase of performance that you probably won't even notice. But, if you have more RAM than you need, sure, you'll probably skirt issues with it disabled. I've acknowledged that many times. I've simply become accustomed to "boring stability" and get disappointed when I don't get it, so I'm not willing to trade performance differences I've never noticed to retain that. I've run it all three ways. I never noticed greater performance with it off, but I did notice the issues (albeit very rare).
No, you're projecting here. I didn't make any of those assumptions. You're projecting those assumptions onto me.
The minimum page file should be at least 1 GB; this is so it can support older games and some apps that use the page file regardless of RAM size. Never put it below 1 GB.
If you have the page file on an SSD (solid state drive), it's best to fix the min and max sizes to be the same. This prevents Windows from constantly resizing the file, prolonging the life span of that SSD.
For 32 GB of RAM, the ideal page file size would be between 4 and 5 GB; the sweet spot tends to be 4955 MB. That gives you enough head room to remain stable unless there's a major memory leak in an app or game, in which case you would know there's a problem with what you are running when it crashes out with a "Paging File" error or "Out of Memory" message.
Control Panel > System > Advanced System Settings > Advanced (tab) > Performance Settings > Advanced (tab) again > Virtual Memory change button.
Set custom size
Initial Size (MB) = 4955 MB
Maximum Size (MB) = 4955 MB
Set and reboot.
Windows is designed to run with a pagefile. The guys at Microsoft are not as daft as we think they are; they wrote the thing, and they say it needs a pagefile to run efficiently. I do not completely understand why, but they say it does and I believe them.
I found your post to be full of high-tech drivel that belongs way out there in far left field. Take your tin foil hat off; it's doing you no favors.
I live in the real world. One of my SSDs is in a system that's been running almost constantly for close to ten years, with a pagefile, and without issues.
This is an advice forum, helping those that have issues to resolve them when we can, and far-out left field opinions do not help.
I think Illusion of Progress' first post said it best
so ehhhhh
However, it really is not a good idea to have a wide range between the min and max. Doing so causes a ton of system lag when the size changes in real time, along with actively affecting fragmentation on said disk drive.
I've been doing the "pick the same size for min and max" thing ever since the early WinXP days. Never had a problem because of it. The only "problem" is the disk space you've forced to be set aside at all times.
"Also as Carlsberg noted, if you statically set the page file size (or disable it) you will impact the ability for the system to memory dump in the event of a crash unless you've also manually configured specific dedicated dump files."
He did, as I was replying to him.
The second is a consideration for HDDs, but doesn't matter on SSDs (if you have an SSD in a given system at all, it is the ideal place for the page file).
Well, and potentially the lower cap on your commit limit that you've set for yourself, should you ever need more.