Teber (Banned) Sep 4, 2021 @ 5:22pm
allocation unit size for ssd with games, movies and ♥♥♥♥?
say something

Showing 1-8 of 8 comments
Autumn_ Sep 4, 2021 @ 5:35pm 
What do you mean?
Teber (Banned) Sep 4, 2021 @ 5:42pm 
Originally posted by Autumn_:
What do you mean?
yes
_I_ Sep 4, 2021 @ 5:55pm 
when formatting the drive, just use default

if it's for large files, a larger size wouldn't hurt, but for small files each file will fill the larger unit
and consume more space than required
each unit can only hold data from one file

a file containing 1 byte will always consume 1 allocation unit. the default size depends on the size of the drive, as there's a limited number of units that can be allocated
this is why file size is not always what's taken on disk
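in python terms, the rounding works like this (a rough sketch; sizes are just illustrative):

import math

def size_on_disk(file_size, cluster_size):
    # every file occupies a whole number of clusters, so the size on disk
    # is the file size rounded up to the next cluster boundary
    # (NTFS can keep very small files resident in the MFT, taking no cluster at all)
    return math.ceil(file_size / cluster_size) * cluster_size

for cluster in (4096, 8192, 16384):
    # a 1-byte file still occupies one full cluster
    print(cluster, size_on_disk(1, cluster))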
Last edited by _I_; Sep 4, 2021 @ 6:00pm
The author of this thread has indicated that this post answers the original topic.
Bad 💀 Motha Sep 5, 2021 @ 12:01am 
The default is 4K clusters, way too small to make good on the speed, and especially the IOPS, that many of the faster SSDs can dish out. I would recommend using diskpart and setting either 8K or 16K clusters. But please understand that you will lose space per file if you have a lot of tiny files, as a file must be allocated whole clusters on disk even if the file is small. Those cluster sizes are in bytes: 4K is 4096, 8K is 8192, 16K is 16384, and so on. If a file is below that size, its actual size on disk is still rounded up to the cluster size the disk was formatted with. So with 16K clusters, a file that is only 1KB in size still takes up 16K on disk, and the other 15K is wasted as a result.

But again, larger clusters can help IOPS with larger files: when viewing, moving or copying data, a larger cluster size lets the disk do less work and output speeds much closer to the maker's stated specs. Overall though, most users will be fine with the default 4K clusters, and that is what should be used when comparing disk benchmarks.
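For reference, the diskpart route looks roughly like this (disk 1 is only an example, pick your actual disk, and note that clean erases everything on it):

diskpart
list disk
select disk 1
clean
create partition primary
format fs=ntfs label="Games" unit=16K quick
assign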
chrcoluk May 28, 2023 @ 4:37am 
I think within 5 years we will start to see defaults move away from 4K. NAND drives are advancing rapidly and already have page sizes larger than 4K, and spinning disks already benefit significantly from larger cluster sizes.

Another angle on space waste: the smaller the cluster/block size, the more space is lost to metadata. Metadata is often the performance bottleneck, and it is why Windows will defrag an SSD when System Restore is enabled.

Finally, on tiny files: nowadays, even on an OS drive, the average file size is much bigger than it used to be. The only use case off the top of my head that favours smaller clusters is Maildir, but most people manage their email online these days, and Outlook keeps its mail in one big PST/OST file anyway.

No question, for games and media files a higher cluster size is better.

The only practical reason to stay at 4K on NTFS is if you want to use compression, as that currently isn't supported above 4K clusters.
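If you want to check what a volume currently uses before reformatting, here is a small Windows-only Python sketch using the Win32 GetDiskFreeSpaceW call:

import ctypes

def cluster_size(root):
    # GetDiskFreeSpaceW reports sectors-per-cluster and bytes-per-sector;
    # their product is the volume's allocation unit size
    spc, bps = ctypes.c_ulong(), ctypes.c_ulong()
    free, total = ctypes.c_ulong(), ctypes.c_ulong()
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root),
        ctypes.byref(spc), ctypes.byref(bps),
        ctypes.byref(free), ctypes.byref(total))
    if not ok:
        raise ctypes.WinError()
    return spc.value * bps.value

print(cluster_size("C:\\"))  # e.g. 4096 on a default-formatted NTFS volume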
Last edited by chrcoluk; May 28, 2023 @ 4:39am
Lord Flashheart May 28, 2023 @ 11:46am 
I have 1MB block sizes for the drives containing games.
It is space-inefficient with some games.
They are on a NAS though, and compression helps a lot with that.
Phoct Jul 1, 2023 @ 5:52pm 
I wrote a quick script to figure this out after reading this thread. I found that a 256KB cluster size works best for my Steam folder; larger than that, the excess space starts to ramp up quite quickly.

https://github.com/andyg2/clustersize/blob/main/assets/20230701_135629_solution.png

You can check the repo for more info if you like, but the basics are: it scans all files recursively and figures out how many clusters would be used at each cluster size. The purple line is the number of clusters (more clusters means slower file loading), and the blue is the total size on disk. Since I want to prioritize speed, I leaned to the right (faster), but not so far that I waste too much space. Going from 264GB to 270GB, an extra 6GB, is fine.

https://github.com/andyg2/clustersize
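The script in the repo is PHP, but the idea boils down to something like this Python sketch (the Steam library path is just an example):

import math, os

CANDIDATES = [4096 * 2**i for i in range(8)]  # 4K .. 512K

def scan(root):
    # collect every file size under root
    sizes = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            try:
                sizes.append(os.path.getsize(os.path.join(dirpath, name)))
            except OSError:
                pass  # skip unreadable entries
    # for each candidate cluster size, tally cluster count and size on disk
    for cluster in CANDIDATES:
        clusters = sum(math.ceil(s / cluster) for s in sizes)
        on_disk = clusters * cluster
        print(f"{cluster // 1024:>4}K clusters: {clusters:>12,} clusters, "
              f"{on_disk / 2**30:8.1f} GiB on disk")

scan(r"C:\Program Files (x86)\Steam\steamapps")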
Last edited by Phoct; Jul 1, 2023 @ 5:57pm
Mussels Nov 4, 2023 @ 2:44am 
Originally posted by Phoct:
I wrote a quick script to figure this out after reading this thread. I found that a 256KB cluster size works best for my Steam folder; larger than that, the excess space starts to ramp up quite quickly.

https://github.com/andyg2/clustersize/blob/main/assets/20230701_135629_solution.png

You can check the repo for more info if you like, but the basics are: it scans all files recursively and figures out how many clusters would be used at each cluster size. The purple line is the number of clusters (more clusters means slower file loading), and the blue is the total size on disk. Since I want to prioritize speed, I leaned to the right (faster), but not so far that I waste too much space. Going from 264GB to 270GB, an extra 6GB, is fine.

https://github.com/andyg2/clustersize

Thanks. I had to figure out how to install and run a PHP file, but the concept and example are great. I'm tired of Steam updates adding 5K+ fragments per large game file on drives that have plenty of free space, and larger cluster sizes are a way to reduce that.
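Rough arithmetic on why that works: an extent can never be smaller than one cluster, so the worst-case fragment count for a file falls in direct proportion to cluster size (the file size here is made up):

import math

file_size = 40 * 2**30  # a 40 GiB game file, purely illustrative
for cluster in (4096, 65536, 262144):
    worst_case = math.ceil(file_size / cluster)  # at most one fragment per cluster
    print(f"{cluster // 1024}K clusters: up to {worst_case:,} fragments")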

Date Posted: Sep 4, 2021 @ 5:22pm
Posts: 8