
Paragon Total Defrag 2009 For Free - Powerful but controversial


Shades:
I see your point, but there is a caveat...

It appears to me that single-platter drives are more common nowadays. Manufacturers can fit considerable data density on a single platter, and reasonably recent SATA drives with a capacity below 500GByte seem to come in single-platter variants. At least the last Maxtors I received here had very thin cases.

What is more, isn't 500GB also about the maximum single-platter capacity with current (financially viable) technology? I believe that 1TByte disks and bigger again use several platters.

Carol Haynes:
The point is that the defragger doesn't even know where the partition sits on the disc surface. If you have split your disc into four partitions, it will still put the fastest-access files at the 'outer rim' of the partition and claim a 400% increase; but if the partition is only 10GB on a 500GB disc and was the fourth one created, it occupies the last 2% of the disc surface, and there won't be any smart-placement benefit at all.

Also, unless the technology they use actually accesses the disc surface directly with low-level code (in which case they would pretty much have to produce their own file system and break all the Windows design criteria, not to mention get below even kernel mode and ship different code for every drive manufacturer and model), there is no way for an application to know where the Windows API is physically putting the files.

I agree it is probably true that the Windows API does move the files to a faster part of the disc, but the abstraction layer deliberately built into Windows exists precisely so that vendors don't work at the hardware level, and therefore they can have no idea what the API is actually doing in reality.
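The geometry behind Carol's point can be sketched roughly. With constant areal density and LBA 0 mapped to the outer edge (the usual convention on hard drives), capacity falls off with the square of the radius, while sequential throughput at constant RPM scales linearly with the radius of the track. The radii below are assumed illustrative values, not figures from any real drive:

```python
import math

# Hypothetical single-platter geometry (assumed values, normalised):
R_OUTER = 1.0    # outer track radius
R_INNER = 0.45   # inner track radius as a fraction of the outer radius

def radius_at_fraction(f):
    """Radius of the track holding LBA fraction f (0.0 = start of disk).

    Assumes constant areal density, so the capacity between radius r and
    the outer edge is proportional to R_OUTER**2 - r**2.
    """
    span = R_OUTER**2 - R_INNER**2
    return math.sqrt(R_OUTER**2 - f * span)

def relative_throughput(f):
    """Sequential throughput relative to the outermost track.

    At constant RPM, bytes per revolution scale with track circumference,
    i.e. linearly with radius.
    """
    return radius_at_fraction(f) / R_OUTER

print(relative_throughput(0.0))   # outermost tracks: 1.0 by definition
print(relative_throughput(0.98))  # start of a partition in the last 2%
```

Under these assumptions the last 2% of the disc runs at well under half the throughput of the outer rim, so a defragger "optimising" placement inside that partition has no fast zone to place anything in.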

srdiamond:
First of all, I'm sorry for venting, but...

[rant]
I used to pay for Diskeeper (until version 8 or so); my main frustration with it was that I had to have 15% of hard disk space free before defragmentation could take place.

Given the hard disk sizes of today, that is a sizable chunk of space (for example: 30GByte on a 200GByte disk/partition!). The stupidity of that rule baffles me to no end. I have been around PCs long enough to know why it was put in place, but today that rule is insane.

With today's 1TByte drives I am not allowed to use 150GByte, because my defragger won't allow it?!?
Denying me a 'snappy' system just because I use the total capacity of my hard disk?!?
Does anyone have a contiguous file of 150GByte on their disk? The biggest single file I have seen was 35GByte (an Oracle database file).

The size of the biggest fragmented file on a hard disk, compared to the free space available, should be the only reason defrag software is unable to start. And even that should not stop it from defragmenting the files that do fit in the available free space.

In that sense most defragmenting software has a lot of growing up to do.

Not the software from DiskTrix, though. The hard disk in my system is an IDE WD Caviar with a capacity of 160GByte (unformatted; 149GByte formatted). Directory Opus reports that this disk has 2.3GByte of free space, or 1.5% free.
DiskTrix starts without any problem.

If PerfectDisk were to defragment my disk, all the power to it, but my guess is that it either would not start or would be painfully slow because of all the (literally) grinding work. DiskTrix starts without problems; it takes quite some time because of the number of files, but it goes on without complaining.

Furthermore, how often is defragmenting required? Diskeeper was set up to run every night while I was asleep, and still the results were not that great. Defragging every night puts quite some wear and tear on the disk. Nowadays I have scheduled the defrag software to run once a month.[/rant]

How often to defrag? It seems to me one reasonable standard would be to reduce overall disk usage. At what point does defragging stop decreasing overall disk usage and start increasing it? That obviously depends on individual variables, but some salient estimates would help, and maybe some feature-laden defragger should calculate this for the user in its stealth or set-and-forget modes. Or would this information prove embarrassing to the developer? What if defragging hurts your hard drive more than it helps whenever you defrag more than annually?

It is just the stupid archaic rules required by this kind of software, and the lack of results, that drive me insane  >:( ...and that make me a (very) happy DiskTrix user.  :)
-Shades (April 05, 2009, 06:04 PM)
--- End quote ---
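The rule Shades proposes in the quote above, that a defragger should simply skip files larger than the available free space rather than refuse to run, amounts to a one-line filter. A minimal sketch (the file names and sizes are hypothetical, loosely based on the figures in the post):

```python
def defraggable(files, free_space):
    """Return the fragmented files that could be rebuilt contiguously.

    files: dict of name -> size, in the same units as free_space.
    Simplifying assumption: total free space is usable as one contiguous
    run; a real defragger would have to find an actual contiguous gap.
    """
    return {name: size for name, size in files.items() if size <= free_space}

# Roughly the disk described above: ~2.3 GB free, one huge database file
files = {"oracle.dbf": 35.0, "movie.mkv": 1.4, "photos.zip": 0.8}
print(defraggable(files, free_space=2.3))
# only oracle.dbf is skipped; the smaller files can still be defragmented
```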

moor:
Diskeeper 2009 defragments drives with as little as 1% of free space. The 15% free-space requirement in the past was a Microsoft Windows recommendation, because 12.5% is reserved for the MFT and you cannot move files into it. Each system fragments differently, so for an IT person in charge of a sizable number of systems it would be a major headache to guess the best defrag schedule for all of them. It is highly recommended to have a fully automatic solution that works in the background without taking resources from critical processes. In addition, it is fragmentation that causes wear and tear on the disk more than defragmentation does, for the simple reason that when a single file is fragmented into 1000 fragments, reading it requires 1000 I/O requests to the disk instead of 1.
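The 1000-fragments point is easy to put numbers on. A rough model with assumed figures (about 10 ms combined seek and rotational latency per fragment, about 80 MB/s sequential throughput, plausible for a 2009-era desktop drive but not measured from anything in this thread):

```python
def read_time_ms(file_mb, fragments, seek_ms=10.0, throughput_mb_s=80.0):
    """Rough read time for a file split into `fragments` pieces.

    Transfer time is the same either way; each fragment just adds one
    head movement (seek + rotational latency).
    """
    transfer = file_mb / throughput_mb_s * 1000.0   # ms spent streaming data
    seeks = fragments * seek_ms                     # one head movement per fragment
    return transfer + seeks

print(read_time_ms(100, fragments=1))     # contiguous: 1250 + 10 = 1260 ms
print(read_time_ms(100, fragments=1000))  # 1250 + 10000 = 11250 ms
```

Under these assumptions, a 100 MB file in 1000 fragments takes roughly nine times longer to read than the same file stored contiguously, with almost all of the extra time (and head movement) going to seeks.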

f0dder:
moor: defragmentation also induces wear and tear - but one could argue that while defragmenting you take that hit once, whereas you take it "all the time" when accessing fragmented files.

As for free-space requirements, all the defrag apps I've seen take longer (and shuffle data around more) when there isn't "enough" free space. "Enough" is a bit hard to define, since it depends on a combination of the defrag method, file sizes, amount of fragmentation, etc.
