If it's an SSD, defragging will make no difference; it only wears the drive down slightly.
On an HDD/SSHD it will improve read speed for files that are no longer fragmented.
A sluggish drive can be a sign it's failing, or of other problems.
Check with HD Tune or another drive-test tool to measure read/write speed and look at the SMART table data.
Anything above 0 for pending or bad sectors means the drive is going bad and needs to be replaced soon.
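As a sketch of what to look for in that SMART table, here is a minimal parser for smartctl-style attribute output that flags the two failure indicators mentioned above. The attribute names follow smartmontools conventions; the sample text is made up for illustration, not from a real drive.

```python
# Minimal sketch: scan smartctl-style SMART attribute rows for the two
# attributes that signal a dying drive. The sample below is invented;
# on a real system you would feed this the output of `smartctl -A`.

CRITICAL = {"Current_Pending_Sector", "Reallocated_Sector_Ct"}

def failing_attributes(smart_output: str) -> dict:
    """Return {attribute_name: raw_value} for critical attributes above 0."""
    bad = {}
    for line in smart_output.splitlines():
        fields = line.split()
        # smartctl -A rows look like: ID# ATTRIBUTE_NAME FLAG ... RAW_VALUE
        if len(fields) >= 10 and fields[1] in CRITICAL:
            raw = int(fields[-1])
            if raw > 0:
                bad[fields[1]] = raw
    return bad

sample = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
"""
print(failing_attributes(sample))  # {'Current_Pending_Sector': 8}
```

A non-empty result here is the "above 0" condition from the post: time to back up and replace the drive.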
Windows won't defrag SSD drives.
It will run the TRIM command instead.
Long story short - sure, it's worth defragging.
Now, I take defrag seriously, as I often deal with used parts, including HDDs. 4% fragmentation might not be a big deal, but there's a catch: even if a particular program's files are individually unfragmented, they can still be scattered all across the drive, so the drive has to get here, and there, and oooover there again and again whenever it needs those files.

I deal with this using two different options with the same goal. The first is sorting files by name and path, so that each program's files always sit close to each other; when there are a lot of them, this can save a little time. The second is keeping a particular program's files together by moving them to the outer ring of the HDD. In Disktrix UltimateDefrag this option is called "Performance", and that's what I do with my most-used games.

From what I understand, the outer track of a platter is longer than the inner one (it's a disk shape), so at the same rotational speed the read head passes over more sequential data per revolution than it would on an inner track. That definitely helps with big files like those found in modern games. Random reads of small files, on the other hand, are slightly better on inner tracks, because the tracks there are packed into less physical space and the head can travel the distance to the next file faster.
I've done a little read benchmark[imgur.com] just now to check that I'm not completely insane, and it shows exactly the results I just described. The C partition is on the outer ring of my drive and E is on the inner one, both ~200GB, separated by a big D partition in between; it's a 2TB drive. Also pay attention to the enormous speed difference between Q32T1 sequential and Q32T1 random: that's largely why SSDs are so much faster, since they have no read head that needs time to move somewhere for random reads, and it's also why I try to keep each program's files close together.
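The outer-vs-inner track claim can be sanity-checked with back-of-the-envelope math: at a fixed RPM, surface speed under the head scales with radius. The radii and RPM below are assumed typical values for a 3.5" drive, not measurements.

```python
import math

# Assumed geometry for a 3.5" platter; real drives vary.
RPM = 7200
outer_radius_mm = 46.0   # near the rim
inner_radius_mm = 20.0   # near the spindle

def linear_speed_mm_s(radius_mm: float) -> float:
    """Surface speed under the head at a given radius (same RPM everywhere)."""
    return 2 * math.pi * radius_mm * (RPM / 60)

# Bits pass under the head in proportion to surface speed, so at similar
# bit density the outer track sustains roughly this much more sequential
# throughput than the inner track:
ratio = linear_speed_mm_s(outer_radius_mm) / linear_speed_mm_s(inner_radius_mm)
print(f"outer/inner sequential throughput ratio: {ratio:.2f}")  # 2.30
```

A ratio above 2x is consistent with the C-vs-E partition benchmark described above, where the outer partition reads sequential data much faster.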
An SSD can access all sectors with the same latency, so file order makes no difference.
On an HDD, small files will be faster if they're located closer to the read head's park position,
and large files will be faster if they're near the outer rim of a platter.
But HDDs often fill up from the outer edge inward: the outer edge has the higher surface speed, while the head parks near the spindle (lower surface speed, but less heat generated).
If you can get a list of the files that are fragmented, you can see whether it's worth doing or not,
or just run it and let it go overnight.
Windows had a bug where it would detect some HDDs as SSDs and not schedule defrag for them.
Of course, this is a very rough analogy, but the principle is the same - to read an arbitrary cell, you first have to read a few more to find out exactly where it is located.
The situation with random write operations is much worse.
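That indirection can be sketched as a lookup table: to reach any logical block, the firmware first has to consult a mapping before it can fetch the data itself. All names and sizes below are illustrative, not real firmware structures.

```python
# Toy model of an SSD's logical-to-physical (L2P) mapping. Reading a
# logical block is really two steps: look up where the data actually
# lives, then fetch it. Everything here is a simplified illustration.

l2p_table = {0: 7, 1: 3, 2: 9}             # logical block -> physical block
flash = {3: b"BBB", 7: b"AAA", 9: b"CCC"}  # physical block -> stored data

def read_logical(lba: int) -> bytes:
    physical = l2p_table[lba]  # the extra read: find where the data is
    return flash[physical]     # then fetch the data itself

print(read_logical(1))  # b'BBB'
```

This is the "read a few more to find out where it is" step from the analogy; on real drives the mapping table is large and partly cached, which is why random access still has a cost even without a moving head.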
There might be some minuscule amount of performance to be won, but it's not worth it. I'm not even sure whether periodically running a full TRIM on an SSD is worth it, unless of course you are doing a ton of writing to the disk with applications that really benefit from the faster write times.
That includes fragmentation within the physical block, which leads to the so-called write amplification.
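Write amplification is easy to illustrate with arithmetic: NAND is erased in large blocks, so updating one small page can force the drive to rewrite its still-valid neighbors too. The page and block sizes below are typical values assumed for illustration, not figures from any datasheet.

```python
# Toy write-amplification calculation. NAND can only be erased a whole
# block at a time, so rewriting one 4 KiB page in an otherwise-full
# 256 KiB block means relocating every other valid page in that block.
page_kib = 4
block_kib = 256
pages_per_block = block_kib // page_kib          # 64

host_write_kib = page_kib                        # what the OS asked to write
# Worst case: the other 63 valid pages must be copied elsewhere first.
nand_write_kib = page_kib + (pages_per_block - 1) * page_kib

waf = nand_write_kib / host_write_kib
print(f"write amplification factor: {waf:.0f}x")  # 64x in this worst case
```

Real drives mitigate this with over-provisioning and garbage collection, but the worst case shows why fragmented random writes are so much more expensive than sequential ones.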
Let's look at Intel site: https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/consumer-ssds/545s-series/545s-256gb-2-5inch-6gbps-3d2.html
Search for "Performance" and click the "info" link.
"Sequential Read - Speed with which the device is able to retrieve data that forms one contiguous, ordered block of data".
If the access time was not dependent on the page address, there would be no need to separately measure the sequential time, right?
Let's look further:
"Random Read (8GB Span) - Speed with which the SSD is able to retrieve data from arbitrary locations in the memory, within 8GB of LBA (Logical Block Address) range on the drive."
Have you noticed that to achieve maximum read speed, the logical block addresses must also fall within the specified range?
Why? For a simple reason: solid-state drives are not direct-mapped devices but associative ones. And on this SSD, one physical block of the L2P table (the logical-to-physical table that is part of the FTL layer of the SSD firmware) covers exactly 8GB of LBAs.
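The 8GB figure can be put in concrete terms. Assuming 4 KiB mapping granularity and 4-byte physical addresses, which are common choices but not stated on the Intel page, one such L2P chunk works out as follows:

```python
# How much mapping data covers an 8 GiB LBA span? Assuming 4 KiB mapping
# granularity and 4-byte entries (typical values, assumed for illustration).
span_bytes = 8 * 1024**3    # 8 GiB of logical address space
page_bytes = 4 * 1024       # mapping granularity: one entry per 4 KiB
entry_bytes = 4             # one 32-bit physical address per entry

entries = span_bytes // page_bytes       # 2,097,152 map entries
table_bytes = entries * entry_bytes      # 8 MiB of mapping data
print(entries, table_bytes // 1024**2)   # 2097152 8
```

So staying within one 8GB span means the drive can keep that whole mapping chunk at hand, which is consistent with the random-read spec being defined over that range.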