
Never Defragment an SSD?


f0dder:
So opposed to what is normally (IDE/SCSI/SATA) perceived as a defrag - accounting for the hardware lying - wouldn't that really just be somewhere between damage control and a placebo effect?-Stoic Joker (February 17, 2011, 11:25 AM)
--- End quote ---
A little knowledge is a dangerous thing.

Yup, all the decent SSDs do remapping in order to improve lifetime. But that's not the only thing they do - there's a lot going on in SSD firmware (I believe some of them use fully fledged ARM7 cores). OCZ, for instance, does compression of blocks to achieve some speedup. Writes are cached and combined so blocks don't have to be needlessly erased multiple times, and there might be predictive read-ahead going on...
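To make the remapping point concrete, here's a toy sketch of it in Python - not any vendor's actual algorithm, just an invented least-worn-block policy with made-up block counts:

--- Code: ---
# Toy flash translation layer (FTL): illustrative only, not real SSD
# firmware. Logical blocks the OS writes to are remapped to whichever
# physical block has seen the least wear.

class ToyFTL:
    def __init__(self, num_blocks=8):
        self.mapping = {}                      # logical -> physical
        self.erase_counts = [0] * num_blocks   # wear per physical block
        self.free = set(range(num_blocks))     # currently unmapped blocks

    def write(self, logical_block):
        # Pick the least-worn free physical block (naive wear leveling).
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        if logical_block in self.mapping:
            # The old physical block must be erased before reuse.
            old = self.mapping[logical_block]
            self.erase_counts[old] += 1
            self.free.add(old)
        self.mapping[logical_block] = target

ftl = ToyFTL()
for _ in range(4):
    ftl.write(0)   # rewriting the "same" logical block...
print(ftl.mapping) # ...lands on a different physical block each time
--- End code ---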

So, we can all sit down and make nice half-assed guesses about what the firmware does for performance, but benchmarks show that there's quite a difference in speed between sequential reads/writes and scattered smaller I/O.

This will translate to a performance difference for fragmented filesystems. How much it matters is debatable, and I haven't seen anybody benchmark it (it's a pretty darn hard thing to set up for a realistic real-world scenario), but my personal guess is that the problem is small enough that I'm definitely not defragmenting my SSDs... and if I notice performance dropping, I'll be doing the above-mentioned disk-imaging-based "defragment".
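If someone does want to poke at the raw access-pattern difference, a crude starting point might look like the sketch below - the file path and sizes are placeholders, this isn't a fragmented-filesystem benchmark, and the OS cache will happily lie to you unless the test file is cold:

--- Code: ---
# Crude sequential-vs-scattered read comparison. Results vary wildly
# by drive, OS caching, and queue depth; treat the numbers as a hint,
# not a verdict. PATH, CHUNK, and COUNT are placeholders.
import os, random, time

PATH = "testfile.bin"          # placeholder: put this on the drive under test
CHUNK = 4096                   # read size, roughly one NTFS cluster
COUNT = 4096                   # number of reads (16 MB read in total)

if not os.path.exists(PATH):   # create a test file larger than the read span
    with open(PATH, "wb") as f:
        f.write(os.urandom(CHUNK * COUNT * 4))   # 64 MB of random data

def timed_reads(offsets):
    start = time.time()
    # buffering=0 avoids Python's own buffering; the OS page cache
    # still interferes, so run against a cold file for honest numbers.
    with open(PATH, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(CHUNK)
    return time.time() - start

size = os.path.getsize(PATH)
sequential = [i * CHUNK for i in range(COUNT)]
scattered = [random.randrange(0, size - CHUNK) for _ in range(COUNT)]

print("sequential: %.3fs" % timed_reads(sequential))
print("scattered:  %.3fs" % timed_reads(scattered))
--- End code ---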

f0dder:
That's true. But I think that the scales are different. Disc blocks are relatively big. We're not talking about defragging at the byte scale.-CWuestefeld (February 17, 2011, 11:58 AM)
--- End quote ---
Defragging is never done at the byte scale, and not even at the sector scale - so yes, relatively large chunks (for my NTFS filesystems, generally 4 KB clusters).

Even if a file has hundreds of fragments, it's not going to make any noticeable (or probably even measurable) difference in the overall access times.-CWuestefeld (February 17, 2011, 11:58 AM)
--- End quote ---
Do you know this to be true? Have you measured it? For every possible combination of SSD firmware, file fragmentation level, and application + access pattern? :)

Not all applications are good at doing proper I/O. On my system partition, excluding .log files (which are truly hopeless), I see things like a ~14 MB file in 739 fragments, the 11.2 MB installer for paint.net in 316 fragments, the 1.2 MB Internet Explorer cache index in 71 fragments, et cetera.

And keep in mind that those would be cluster-size fragments, which might be located very differently on the SSD, because it deals in erase-block-size chunks :)
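Back-of-the-envelope, assuming 4 KB clusters and a hypothetical 512 KB erase block (real drives vary, and the FTL remaps things anyway), those fragment counts translate roughly like this:

--- Code: ---
# How many erase blocks can a fragmented file touch? Worst case, each
# filesystem fragment lands in its own erase block. The erase-block
# size is an assumption; real drives differ.
CLUSTER = 4 * 1024           # NTFS cluster size from the post
ERASE_BLOCK = 512 * 1024     # hypothetical erase-block size

def erase_block_span(file_bytes, fragments):
    # A contiguous file spans about file_bytes / ERASE_BLOCK blocks;
    # a fragmented one can touch up to one block per fragment.
    contiguous = -(-file_bytes // ERASE_BLOCK)   # ceiling division
    scattered = min(fragments, file_bytes // CLUSTER)
    return contiguous, scattered

for size_mb, frags in [(14, 739), (11.2, 316), (1.2, 71)]:  # f0dder's examples
    contig, scattered = erase_block_span(int(size_mb * 1024 * 1024), frags)
    print(f"{size_mb:>5} MB, {frags:>3} fragments: "
          f"~{contig} erase blocks if contiguous, up to {scattered} if scattered")
--- End code ---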

Armando:
Thanks for the contributions. So... 1) defragmenting an SSD in the "traditional way" would not be a particularly brilliant idea ("limited amount of erase-cycles of the flash cells"; the Windows 7 defragmenter skips SSDs for that very reason), 2) the fragmentation that'll inevitably occur will somewhat affect performance, and 3) according to f0dder, the only "defragging" remedy would be to
create a disk image of it to a mechanical drive, defrag that image, do a single-pass wipe of the SSD (to let the drive know all blocks are blank, useful for the reallocation algorithms), then transfer the defragged image back.

--- End quote ---

f0dder:
#2: it will somewhat impact performance, yes, but whether it's going to be measurable will depend heavily on your disk as well as usage patterns - my guess is that most people won't feel much of a difference. Some people have reported that some SSDs get noticeably slower after a lot of use, but that's more likely because of the block remapping done by the wear-leveling algorithms.

#3: IMHO, yes. The whole image back-and-forth jig is because you don't want to use up your erase cycles... defragging is very disk-intensive, and defraggers tend to move stuff back and forth more than "necessary" (to reduce the risk of data loss if power cuts out, and because computing the optimal way to shuffle stuff around is not an easy problem).

The "wipe disk" (using vendor-supplied tool) step before re-applying the defragged image is to help the drive's wear-leveling algorithms, and as I understand things shouldn't stress the drive (much) more than simply re-applying the image: it's the erase-block cycles that are limited, not the writes.

worstje:
I don't know much about SSDs at all, but how is writing different from erasing (or erasing different from writing)? Writing is the act of setting a bit to either a 1 or a 0. Erasing is the act of writing 0s in the most conventional case, although more secure versions tend to randomize what they write. Either way, erasing would be implemented as an act of writing, so I still believe it is write cycles you are worried about. The only 'trick' about writing on SSDs is that entire blocks of data need to be rewritten when you change a single bit, and that is what wears SSDs down so much, if I understand it all properly.
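In toy numbers (page and erase-block sizes guessed, and real firmware caches and combines writes to avoid exactly this, as f0dder mentioned), the rewrite-the-whole-block effect would look like:

--- Code: ---
# Toy write-amplification model for the "rewrite the whole block to
# change one bit" case described above. Sizes are assumptions.
PAGE = 4 * 1024              # smallest programmable unit (assumed)
ERASE_BLOCK = 512 * 1024     # smallest erasable unit (assumed)

def naive_write_amplification(bytes_changed):
    # Worst case: the whole erase block is read, erased, and
    # reprogrammed to change even a single page.
    return ERASE_BLOCK / max(bytes_changed, 1)

for change in (1, PAGE, ERASE_BLOCK):
    print(f"change {change:>6} bytes -> rewrite {ERASE_BLOCK} bytes "
          f"(amplification x{naive_write_amplification(change):g})")
--- End code ---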
