1) If you really believe it's a bug that the Shell doesn't temporarily remove protection of critical system files, you should file a bug report on MS Connect instead of making spurious claims in your marketing material - I'm pretty sure this is a by-design decision from Microsoft. I do agree it's probably harmless to compress those files, but calling a security feature a bug is misleading marketing, IMHO. And you deliberately keep your wording vague enough (combined with your "three times smaller" claim, which is obviously only valid when there's little on the disk besides Windows) to give the impression that the "bug" is somewhere else (like the core NTFS compression routines).
THIS is why I'm pursuing this aggressively - you're using snake-oil salesman tactics. Which is a shame, since you obviously do get a better compression ratio (and you really ought to warn users that you achieve it by messing with critical OS files).
2 & 3) There's nothing wrong with what I've stated here. I do acknowledge the SSD speedup in #4, but for obvious reasons there's no way in hell I'll be NTFS-compressing any of my SSDs. The HDD backing my VM disk image is a 10k rpm VelociRaptor. I plan on running a single-threaded DrivePress later today to compare with the 2-thread version.
4) First, again, my problem with compression on an SSD isn't the speed hit caused by fragmentation (it's far smaller than the speed hit on an HDD, but it's still real) - it's (to some degree) the reduced speed and hindered wear-leveling on drives with SandForce controllers (at least, but probably not limited to, those), and (to a fairly large degree) the heavily increased number of block erases caused by how NTFS compression is implemented. Having NTFS compression on often-modified files approaches suicidal tendencies for an SSD.
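(For background: NTFS compresses data in 16-cluster compression units, so even a small modification to a compressed file gets rewritten as a whole unit - that's where the extra erases come from. Turning compression on for a single file is just the documented FSCTL_SET_COMPRESSION control code; a minimal sketch, with a placeholder path and only the most basic error handling:)

/* Minimal sketch: mark one file as NTFS-compressed via the documented
 * FSCTL_SET_COMPRESSION control code. The path is just a placeholder;
 * this is the stock Win32 route, not anything Drive Press specific. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("C:\\example\\somefile.dat",     /* placeholder path */
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("open failed: %lu\n", GetLastError());
        return 1;
    }
    USHORT fmt = COMPRESSION_FORMAT_DEFAULT;   /* the filesystem default (LZNT1 on NTFS) */
    DWORD ret = 0;
    if (!DeviceIoControl(h, FSCTL_SET_COMPRESSION, &fmt, sizeof(fmt),
                         NULL, 0, &ret, NULL))
        printf("FSCTL_SET_COMPRESSION failed: %lu\n", GetLastError());
    CloseHandle(h);
    return 0;
}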
Because an SSD can read from/write to all parts of the drive at the same time (think of a hard disk platter rotating at infinite speed), that is why fragmentation is of absolutely no consequence for SSDs - be it NTFS compression induced, or the "normal" fragmentation that happens on NTFS inevitably. There is no delay, because all areas of disk are equally accessible at all times.-simonking
This is patently wrong - take a look at some benchmarks. For instance, the 120GB Intel 510 drive does ~50MB/s for 4k random reads, whereas it does ~380MB/s for 128kb sequential reads (4k sequential would be slower, but should still be quite a lot faster than the random reads). You'll notice that it does 4k random *writes* faster, which is obviously because the drive has internal cache and can do (sequence-optimized) writes at its leisure - and some of the other drives handle this even better.
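(If you'd rather see the effect yourself than trust published numbers, here's a throwaway microbenchmark sketch of my own - point it at a large existing test file of your choosing; FILE_FLAG_NO_BUFFERING keeps the OS cache from hiding the difference:)

/* Throwaway sketch: time 4 KB sequential reads vs 4 KB random reads from an
 * existing large test file (a few hundred MB or more). FILE_FLAG_NO_BUFFERING
 * keeps the OS cache out of the picture; it requires a sector-aligned buffer,
 * which VirtualAlloc's page alignment takes care of. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 4096
#define READS 4096            /* 16 MB worth of 4 KB reads per pass */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "C:\\testfile.bin";   /* placeholder */
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("open failed: %lu\n", GetLastError()); return 1; }

    LARGE_INTEGER size, freq, t0, t1, off;
    GetFileSizeEx(h, &size);
    QueryPerformanceFrequency(&freq);
    void *buf = VirtualAlloc(NULL, CHUNK, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    DWORD got;

    /* sequential pass */
    off.QuadPart = 0;
    SetFilePointerEx(h, off, NULL, FILE_BEGIN);
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < READS; i++) ReadFile(h, buf, CHUNK, &got, NULL);
    QueryPerformanceCounter(&t1);
    printf("sequential: %.0f ms\n", (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart);

    /* random pass: 4 KB-aligned offsets spread over the file */
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < READS; i++) {
        LONGLONG r = ((LONGLONG)rand() << 16) ^ rand();
        off.QuadPart = (r * CHUNK) % (size.QuadPart - CHUNK);
        off.QuadPart -= off.QuadPart % CHUNK;          /* keep sector alignment */
        SetFilePointerEx(h, off, NULL, FILE_BEGIN);
        ReadFile(h, buf, CHUNK, &got, NULL);
    }
    QueryPerformanceCounter(&t1);
    printf("random:     %.0f ms\n", (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart);

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}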
a. Do NOT attempt to manually acquire file permissions just to be able to compress them. Doing this will create a huge security hole on your system (one that MagicRAR Drive Press does not create, because it restores all permissions as has been confirmed in this third party report).-simonking
This is very good indeed - the way you handle the permissions is something to give you credit for. I didn't look too closely at the code, but it seemed like you even throw in exception handling around the permission restore? You do leave files potentially vulnerable during the compression process... not much of a real-world problem, but it could be reason enough for Microsoft consciously choosing not to do it. I still feel it's wrong to classify the behaviour as a bug.
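(To illustrate what I mean by the vulnerable window: the generic pattern, as I'd guess at it, is "save the DACL, grant yourself access, compress, put the DACL back". A rough sketch with the stock ACL APIs follows - this is not your actual code, and it skips the SeTakeOwnership/SeRestore privilege juggling a real tool would need:)

/* Rough sketch of a "grab access, then put the ACL back" pattern using the
 * stock Win32 ACL APIs. Only my guess at the general shape - not MagicRAR's
 * actual implementation - and the privilege adjustment is omitted. */
#include <windows.h>
#include <aclapi.h>
#include <stdio.h>

int main(void)
{
    char path[] = "C:\\Windows\\example.dll";   /* placeholder */
    PACL oldDacl = NULL;
    PSECURITY_DESCRIPTOR sd = NULL;

    /* 1. Save the original DACL so it can be restored afterwards. */
    DWORD rc = GetNamedSecurityInfoA(path, SE_FILE_OBJECT,
                                     DACL_SECURITY_INFORMATION,
                                     NULL, NULL, &oldDacl, NULL, &sd);
    if (rc != ERROR_SUCCESS) { printf("get failed: %lu\n", rc); return 1; }

    /* 2. ... temporarily grant yourself access and compress the file ...
     *    (this is the window where the file sits with loosened permissions) */

    /* 3. Restore the saved DACL - ideally in a __finally/exception path so a
     *    crash mid-compress doesn't leave the file wide open. */
    rc = SetNamedSecurityInfoA(path, SE_FILE_OBJECT,
                               DACL_SECURITY_INFORMATION,
                               NULL, NULL, oldDacl, NULL);
    if (rc != ERROR_SUCCESS) printf("restore failed: %lu\n", rc);

    LocalFree(sd);   /* frees the descriptor that owns oldDacl */
    return 0;
}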
b. There will always be some files/folders that would be locked by the system/applications, and as such incompressible. If there is demand for it, we could also automate the conversion of those parts by building a boot time version of MagicRAR Drive Press - however, in my research, the additional space savings would be negligible.-simonking
Agreed.
So while the MagicRAR Drive Press Challenge technically remains unmet-simonking
Ho, humm, you still chose not to properly address any of the points of the original thread, most of which I showed to be clearly true. As I see it, only the points regarding interactions with SSD speed/lifetime are up for debate... and on those points I do believe I'm correct; what can be debated is the degree to which lifetime and performance will be affected. For the current crop of SSDs, I definitely wouldn't do gung-ho NTFS compression, and I'd advise people against it.
Selective compression of static files would be fine, though. I wonder if it would make sense to apply compression to the files on another (preferably HDD) partition, then move them to the SSD target? I haven't tested it, but it might result in less fragmentation of the target files.
Oh, and one last thing: your progress bars are severely bugged - they reached 100% several minutes before the actual operation was done (in both the analyze and the compress phase). It looks like you use Delphi, and I haven't touched that since Delphi 2, so I don't know whether its progress control limits the current/max values... but IIRC the classic Win32 progress control clamps its range to 16-bit values, which means you definitely shouldn't be feeding it currentBytes/maxBytes - or even currentNumFiles/maxNumFiles on modern filesystems.
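(The usual fix is to keep the control's range small and fixed and map the 64-bit byte counter onto it, rather than passing byte counts to the control directly. Something along these lines - written against the raw comctl32 messages since I don't remember the Delphi wrapper, and the names are mine:)

/* Sketch: keep the progress control's range small and fixed, and map the
 * 64-bit byte counter onto it, instead of passing byte counts straight to
 * PBM_SETRANGE/PBM_SETPOS. Variable and function names are made up. */
#include <windows.h>
#include <commctrl.h>

#define PROGRESS_SPAN 10000   /* 0.01% resolution is plenty for a UI */

static void UpdateProgress(HWND hBar, unsigned long long doneBytes,
                           unsigned long long totalBytes)
{
    int pos = 0;
    if (totalBytes != 0)
        pos = (int)((doneBytes * (unsigned long long)PROGRESS_SPAN) / totalBytes);
    SendMessage(hBar, PBM_SETPOS, (WPARAM)pos, 0);
}

/* once, right after creating the control:
 *   SendMessage(hBar, PBM_SETRANGE32, 0, PROGRESS_SPAN);            */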