My recommendation: if you are only gaming and have 16GB of RAM, then a 4GB swap file will be plenty to offload unused crap from memory to the disk. Otherwise just leave it dynamic.
What people recommend for swap depends on personal preference and the use case of your machine. The advice you often see is based on the 20-year-old rule of thumb "twice your memory size", which isn't what you should be doing on modern hardware.
You want the OS to use swap as little as possible; the more it swaps, the slower the machine becomes. But it's still best to have swap around as cheap virtual memory in case system memory alone isn't enough.
If you need an 8-16GB swap file to keep your system from crashing, you are doing something wrong. Upgrade your RAM; your system isn't equipped for whatever you are trying to do. If the system uses that much swap it will become unusable, or whatever workload you are running will take 1000x longer than it would if you just had another 8GB of memory.
You want your swap file on the fastest disk, so that whenever the OS starts swapping the performance hit is minimized. The downside is that the fastest disk will likely be an SSD; if the OS swaps aggressively, as Windows does, it will put a lot of extra writes on that disk and shorten its lifespan (not that it matters much with modern drives).
And yeah, I already set the pagefile to a static size. I'll play around and see if anything changes or if the crashes are gone.
That would give you 2^(5+30) + 2^(5+30) bytes of virtual memory space, i.e. 32GB of RAM plus an equally sized pagefile = 64GB. Or, looking at the RAM alone:
34,359,738,368 bytes / 2^(2+10) bytes per page = 8,388,608 entries, numbered 0 to the limit - 1, since page table entries usually cover 4KB blocks. That gives you a 1-to-1 mapping from page number to page frame number.
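A quick way to sanity-check that arithmetic (a minimal sketch in Python, assuming the 4KB pages and 32GB of RAM from the post):

```python
# Sketch of the page-table arithmetic above (assumes 4 KB pages).
PAGE_SIZE = 1 << 12          # 4 KB = 2^12 bytes
PHYS = 1 << 35               # 32 GB = 2^(5+30) bytes
VIRT = PHYS + PHYS           # pagefile sized equal to RAM -> 64 GB of backing store

num_frames = PHYS // PAGE_SIZE
print(num_frames)            # 8,388,608 frames, numbered 0 .. num_frames - 1

def split(vaddr):
    """Split a virtual address into (page number, offset within the page)."""
    return vaddr >> 12, vaddr & (PAGE_SIZE - 1)

print(split(0x1234_5678))    # -> (0x12345, 0x678)
```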
Pagefiles are often better the bigger they are. However, your TLB comes into play here as well. The TLB (Translation Lookaside Buffer) is a small cache of recent virtual-to-physical address translations; it keeps the most recently used mappings around in case they're needed again, which works because of temporal locality.
So your "swap location isn't" the only defining factor. The TLB often has a pretty high associativity for each block of memory contained so if it's a 64 entries of 4-way for each entry and you can store a lot of entries in your TLB before it gets filled. Which contains the page-offeset and the virtual address to physical address translation. The TLB is intelligent and often uses a number of replacement data policies to manage the TLB.
People here clearly have no idea how virtual memory actually functions. Paging mostly protects you from memory leaks. If an application can suddenly fill your RAM, say 32GB, that's a pretty fat memory leak, and yes, they do happen. Kernel-level leaks are especially bad: no application appears to be consuming the memory; instead a single driver or core system component is never deallocating (freeing) its non-contiguous allocations.
As a result you hit the 32GB physical memory limit and the system starts paging, evicting pages from RAM out to the pagefile. When this happens it results in a massive system slowdown, and once the TLB fills up, it too starts evicting cached translations.
I have 32 GB of RAM; I wouldn't want a single MB of pagefile to be used.
Sure, I could run out of RAM if I ran a never-ending recursive function, but in that case no amount of pagefile would help - I'd rather crash than page.
Nope. The pagefile isn't static; the OS evicts and re-allocates chunks (4KB pages, or variable-length segments), so it won't simply crash. Once all 32GB worth of entries, i.e. 2^(5+30) / 2^(2+10) = 8,388,608 pages, are filled, the operating system will begin evicting blocks to the pagefile.
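As a toy illustration of that eviction behaviour (not how Windows actually implements it, just the general idea: a fixed pool of physical frames, with the least recently used page pushed out to the pagefile when the pool is full):

```python
from collections import OrderedDict

# Toy model of demand paging: a fixed pool of physical frames, LRU eviction
# to a "pagefile" when the pool is full. Purely illustrative.
NUM_FRAMES = 4                       # tiny on purpose; a real 32 GB box has 8,388,608

resident = OrderedDict()             # page number -> frame number, kept in LRU order
pagefile = set()                     # pages that have been evicted to disk

def touch(page):
    if page in resident:
        resident.move_to_end(page)   # already in RAM: just refresh LRU order
        return
    if len(resident) >= NUM_FRAMES:  # RAM full: evict the least recently used page
        victim, frame = resident.popitem(last=False)
        pagefile.add(victim)
    else:
        frame = len(resident)
    pagefile.discard(page)           # page fault: bring the page (back) in
    resident[page] = frame

for p in [0, 1, 2, 3, 0, 4]:         # accessing page 4 evicts page 1 (the LRU one)
    touch(p)
print(list(resident), pagefile)      # [2, 3, 0, 4] {1}
```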
The entire virtual memory space = pagefile + physical memory space. So in the case of a 32GB machine that gives you 64GB of virtual memory. In Windows terminology, though, the actual "real" virtual address space on a 64-bit machine is 2^48 bytes:
281,474,976,710,656 bytes / 2^40 = 256TB in theory. (No system as of today can fit that much memory, Linux and Unix being the exception in what they can address.) Windows is capped at 2TB for its highest-end editions. See the address-split sketch below for where the 2^48 comes from.
Non-paged pool - kernel allocations which cannot be paged out to disk.
Paged pool - kernel allocations which can be paged out to disk.
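For the 2^48 figure: on x86-64 with the usual 4-level paging, a 48-bit virtual address splits into four 9-bit page-table indices plus a 12-bit offset into the 4KB page. A minimal sketch (the field names are just labels I picked):

```python
# Split a 48-bit x86-64 virtual address (4-level paging) into its page-table
# indices: 9 + 9 + 9 + 9 index bits + 12 offset bits = 48 bits -> 2^48 = 256 TB.
def split_x86_64(vaddr):
    return {
        "pml4_index": (vaddr >> 39) & 0x1FF,   # top-level table, 512 entries
        "pdpt_index": (vaddr >> 30) & 0x1FF,
        "pd_index":   (vaddr >> 21) & 0x1FF,
        "pt_index":   (vaddr >> 12) & 0x1FF,
        "offset":      vaddr & 0xFFF,          # byte offset within the 4 KB page
    }

print(2 ** 48)                                 # 281,474,976,710,656 bytes = 256 TB
print(split_x86_64(0x0000_7FFF_FFFF_F000))
```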
There is a reason I'm in computer science.
Back to the books I go....
Not everyone shares your values or opinions, that's why.
Sure, with sufficient RAM you might not need a pagefile. I've run 32GB systems, and a 48GB pagefile on a 3TB HDD is a trivial use of space. I wouldn't really have gained anything by turning it off or micromanaging the pagefile to be smaller. The default configuration was perfectly fine.
Then set it to be on the C: drive and set the Min and Max both to 8192 MB. Click Set after both entry boxes are filled, click OK, and reboot.
The reason to always use this method - set it to None, reboot, then set the min and max to the same value - is firstly to rid the drive of the old file entirely. Then, with the same value applied for min and max, the file can never grow or shrink, which prevents fragmentation.
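If you want to double-check what's currently configured without opening the dialog again, the setting lives in the registry. A rough read-only sketch, assuming the usual PagingFiles value under Memory Management (Windows-only, using Python's built-in winreg):

```python
# Windows-only sketch: read the configured pagefile(s) from the registry.
# Assumes the usual "PagingFiles" multi-string value; entries normally look like
# "C:\pagefile.sys 8192 8192" (path, initial size in MB, maximum size in MB),
# with the sizes omitted or "?:\pagefile.sys" when Windows manages it automatically.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    paging_files, _type = winreg.QueryValueEx(key, "PagingFiles")

for entry in paging_files:          # REG_MULTI_SZ -> list of strings
    print(entry)
```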
It's best to have 16GB of RAM minimum these days.
It's not entirely about how much RAM you have; the pagefile is part and parcel of the OS and should be available even if it's not used. In short, set it and forget it.
I have 32GB also and 3 drives. The pagefile is set to my main SSD only, at 8GB, and it's good - never crashes, never any issues.