If it's for large files, a larger cluster size wouldn't hurt, but with small files each file still occupies at least one full cluster, so it consumes more space than required.
Each cluster can only hold parts of a single file.
A file containing 1 byte will always consume one full allocation unit. The default unit size depends on the size of the drive, since there is a limited number of units that can be allocated.
This is why a file's size is not always what it actually takes up on disk.
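The "size on disk" rule described above is just rounding up to whole clusters. A minimal sketch (function name and defaults are mine, not from the tool discussed below; note that real NTFS can also store very small files resident in the MFT, which this ignores):

```python
import math

def size_on_disk(file_size: int, cluster_size: int = 4096) -> int:
    """Space a file actually consumes: it always occupies whole clusters."""
    if file_size == 0:
        return 0
    return math.ceil(file_size / cluster_size) * cluster_size

# A 1-byte file still consumes one full cluster:
print(size_on_disk(1))          # 4096
print(size_on_disk(1, 65536))   # 65536
print(size_on_disk(10_000))     # 12288 (three 4K clusters)
```

The gap between `file_size` and `size_on_disk` is the per-file waste, and it grows with the cluster size.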
But again, larger clusters can help with IOPS on large files: when viewing, moving, or copying data, a larger cluster size lets the disk do less work and deliver speeds much closer to the maker's stated specs. Overall, though, I think most users will be fine with the default 4K clusters, and that's what should be used when comparing disk benchmarks.
Another consideration is space waste of a different kind: the smaller the cluster/block size, the more space is lost to metadata. Metadata is often the performance bottleneck, and it's also what causes Windows to defrag SSDs when System Restore is enabled.
Finally, on the subject of tiny files: nowadays, even on an OS drive, the average file size is much bigger than it used to be. The only use case I can think of off the top of my head that would favor smaller cluster sizes is Maildir, but most people manage their email online these days, and Outlook stores mail in its own PST format anyway.
No question that for games and media files a larger cluster size is better.
The only practical reason to stick with 4K on NTFS is if you want to use NTFS compression, which currently isn't supported above 4K clusters.
It is space-inefficient for some games.
They're on a NAS though, and compression helps a lot there.
https://github.com/andyg2/clustersize/blob/main/assets/20230701_135629_solution.png
You can check the repo for more info if you like, but the basics are: it scans all files recursively and figures out how many clusters would be used at each of the cluster sizes. The purple is the number of clusters (the more clusters, the slower files load); the blue is the total size on disk. Since I want to prioritize speed, I leaned toward the right (faster), but not so far that I waste too much space. Going from 264 GB to 270 GB, an extra 6 GB, is fine.
https://github.com/andyg2/clustersize
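The scan described above is straightforward to sketch. Here's a rough Python re-implementation of the idea (the repo's actual tool is PHP; function name, cluster-size list, and output format here are my own):

```python
import math
import os

def scan(root: str, cluster_sizes=(4096, 16384, 65536, 262144)):
    """Walk `root` recursively and total, for each candidate cluster size,
    how many clusters would be used and the resulting size on disk."""
    totals = {cs: {"clusters": 0, "bytes": 0} for cs in cluster_sizes}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # skip unreadable/vanished files
            for cs in cluster_sizes:
                # Even an empty or 1-byte file occupies at least one cluster.
                clusters = max(1, math.ceil(size / cs))
                totals[cs]["clusters"] += clusters
                totals[cs]["bytes"] += clusters * cs
    return totals

if __name__ == "__main__":
    for cs, t in scan(".").items():
        print(f"{cs:>7} B clusters: {t['clusters']:>10} clusters, "
              f"{t['bytes'] / 2**30:.2f} GiB on disk")
```

Running it over a game library shows the same trade-off as the chart: cluster count drops sharply as the cluster size grows (fewer, larger I/Os), while size on disk creeps up from small-file rounding.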
Thanks! I had to figure out how to install and run a PHP file, but the concept and example are great. I'm tired of Steam updates adding 5K+ fragments per large game file on drives that have plenty of free space, and larger cluster sizes are a way to reduce that.