If there aren't many duplicates on the hard drive, manually selecting the ones to remove isn't a big deal. But removing hundreds or even thousands of duplicates by hand can take many hours of work. One way people end up with that many duplicates is hard drive backup programs: the program keeps copying the same files over and over into different folders, mixing them up, until the drive is full and the user doesn't know what to do.
-ednja
yeah, in my experience, duplicate file finders still leave you with a lot of work after the duplicates have been found. But maybe there are really good duplicate file finders out there (haven't tried any in a few years).
[Could they/you just dump everything on an external drive and start from scratch?]
Relatives of mine want me to sort out their photos - they used big SD cards and didn't delete photos from their cameras, so they kept uploading the same photos again and again - often using different methods (so: different names as well as timestamps). That is a nightmare, gigabytes of duplicated material - and I have to admit, a nightmare I've been avoiding... so, if a solution is found here, I could give it a go as well ;-)
If you had a programme that would locate moved files - and do a bit comparison of files with the same size (but possibly different date and/or name) -
if you had that - you could just keep merging folders (or copying and deleting copied folders). At some stage you'd still have to run a duplicate file finder on the remaining files. Syncovery could do the merging bit intelligently (finding moved files) - but not directly - you might have to sync both ways and then delete one 'side'.
I might try that at the weekend with those photos I was talking about.
Syncovery uses MD5 checksums - but the option appears to do this on all the files - which would take a wet week to run.
The ideal, IMO, would be to compare file sizes first, and only do bit comparisons of files that are the same size (regardless of the filename/date).
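The size-first idea above can be sketched in a few lines. This is a minimal illustration, not a tested tool: it groups files by size, and only files sharing a size get read and hashed (MD5 here, matching the Syncovery mention; a direct byte-by-byte comparison would work too). The function name and chunk size are my own choices, not anything from a real product.

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Return groups of files with identical content under `root`.

    Files are grouped by size first, so content is only read for
    files that share a size with at least one other file.
    """
    by_size = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                by_size[os.path.getsize(path)].append(path)
            except OSError:
                continue  # skip unreadable files

    duplicates = []
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # a unique size can't have a duplicate
        by_hash = defaultdict(list)
        for path in paths:
            h = hashlib.md5()
            with open(path, "rb") as f:
                # read in 1 MiB chunks so huge files don't fill memory
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(path)
        duplicates.extend(g for g in by_hash.values() if len(g) > 1)
    return duplicates
```

On a photo collection like the one described above, most files differ in size, so hardly anything gets hashed - which is exactly why this beats checksumming everything.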