Have a little faith, will you? -apankrat
*Grin*
Hope you don't take my posts as grumpy-old-man. I'm just interested in these things, and some of what you're saying sounds weird compared to my own experiences. But I can handle being proved wrong, and I always like learning new stuff.
(Also, I've been spending quite some time looking at backup software lately - pretty much everything sucks in one way or another. Closest I've come yet are Genie Timeline, which was kinda nice but had bugs and shortcomings, and Crashplan, which does some of the stuff GTL sucked at better, but has its own problems - *sigh*.)
Hm, you might have a point wrt. warm-cache querying - but have you tested the code across several OSes, especially pre-Vista? Vista is when Microsoft started doing a lot of work on lock-free data structures and algorithms in the kernel. Have you tested on XP and below?
This is with warmed up cache. C:\ was scanned in full immediately before this test. Interestingly enough, playing with the order in which sub-directories are queued for scanning can speed things up by an additional 5-10%. -apankrat
Hrm, the last time I played with different scanning techniques was back on XP - some years ago, which also means much slower hardware. I tested NTFS, FAT32 and even ISO9660 (on a physical CD, since that's the slowest seek speed I had available). I tried depth- vs. breadth-first, tried eliminating SetCurrentDirectory calls since that'd mean fewer user<>kernel transitions (I had hoped the CWD wouldn't change, but it did - FindFirstFile probably changes directory internally), spent some effort on making the traversal non-recursive and eliminating as many memory allocations as possible... and nothing really made much of a difference. It was hellish doing cold boots between each and every benchmark.
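For reference, the non-recursive variant was shaped roughly like this - reconstructed from memory, so treat it as a sketch rather than the code I actually benchmarked. (The FindExInfoBasic / FIND_FIRST_EX_LARGE_FETCH bits are Win7+ additions; on XP it was FindExInfoStandard and no flags.)

```cpp
// Sketch of a non-recursive, breadth-first traversal -- reconstructed from
// memory, not the exact code I benchmarked back then.
#include <windows.h>
#include <deque>
#include <string>

void Scan(const std::wstring& root)
{
    std::deque<std::wstring> pending;   // breadth-first; use a stack for depth-first
    pending.push_back(root);

    while (!pending.empty())
    {
        std::wstring dir = pending.front();
        pending.pop_front();

        WIN32_FIND_DATAW fd;
        // Absolute pattern, so no SetCurrentDirectory() round-trips needed.
        HANDLE h = FindFirstFileExW((dir + L"\\*").c_str(),
                                    FindExInfoBasic,   // skip short names (Win7+)
                                    &fd, FindExSearchNameMatch,
                                    nullptr, FIND_FIRST_EX_LARGE_FETCH);
        if (h == INVALID_HANDLE_VALUE)
            continue;   // access denied etc. -- just skip in this sketch

        do
        {
            if (fd.cFileName[0] == L'.' &&
                (!fd.cFileName[1] || (fd.cFileName[1] == L'.' && !fd.cFileName[2])))
                continue;   // "." and ".."

            if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                pending.push_back(dir + L"\\" + fd.cFileName);
            // else: record the file entry here
        }
        while (FindNextFileW(h, &fd));

        FindClose(h);
    }
}
```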
Can't remember if that was before or after I got a Raptor disk - so it might have been on hardware before NCQ got commonplace, and it was definitely on XP. Still, even with NCQ, it's my experience that you don't need a lot of active streams before performance dies with mechanical disks. For SSDs the story is entirely different, though - there, on some models, a moderate queue depth can be necessary to reach full performance. So a cold scan on an SSD might benefit from multiple threads - I'd be surprised if a mechanical disk did, though!
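If I get around to retesting, I'd do it with N workers pulling directories off a shared queue, so the thread count is the only variable - something like this untested sketch (all the names here are mine):

```cpp
// Untested sketch: N workers pulling directories off a shared queue, to test
// whether 2+ threads actually help on a given disk.
#include <windows.h>
#include <stdio.h>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

std::mutex               g_lock;
std::condition_variable  g_cv;
std::deque<std::wstring> g_queue;
int                      g_busy = 0;  // workers currently scanning a directory

void Worker()
{
    std::unique_lock<std::mutex> lk(g_lock);
    for (;;)
    {
        // Wake when there's work, or when everyone is idle (= traversal done).
        g_cv.wait(lk, []{ return !g_queue.empty() || g_busy == 0; });
        if (g_queue.empty())
        {
            g_cv.notify_all();   // let the other workers exit too
            return;
        }

        std::wstring dir = std::move(g_queue.front());
        g_queue.pop_front();
        ++g_busy;
        lk.unlock();

        std::vector<std::wstring> subdirs;
        WIN32_FIND_DATAW fd;
        HANDLE h = FindFirstFileW((dir + L"\\*").c_str(), &fd);
        if (h != INVALID_HANDLE_VALUE)
        {
            do
            {
                if (fd.cFileName[0] == L'.' &&
                    (!fd.cFileName[1] || (fd.cFileName[1] == L'.' && !fd.cFileName[2])))
                    continue;   // "." and ".."
                if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                    subdirs.push_back(dir + L"\\" + fd.cFileName);
            } while (FindNextFileW(h, &fd));
            FindClose(h);
        }

        lk.lock();
        for (auto& d : subdirs)
            g_queue.push_back(std::move(d));
        --g_busy;               // decremented only after pushing, so busy==0
        g_cv.notify_all();      // really does mean "no work left anywhere"
    }
}

int wmain()
{
    g_queue.push_back(L"C:\\");
    DWORD t0 = GetTickCount();
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)          // vary the thread count and compare
        pool.emplace_back(Worker);
    for (auto& t : pool)
        t.join();
    printf("%lu ms\n", GetTickCount() - t0);
    return 0;
}
```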
Got any benchmark code you're willing to share? I'd be interested in trying it out on my own system - I'm afraid I didn't keep the stuff I wrote back then (and there were no threaded versions anyway).
there will always be time when the OS is not doing anything for us, because our app is busy copying what it got from the OS into its own data structures. So if we have 2+ threads pulling at the API, it eliminates these idle OS times. -apankrat
It's my understanding that what you're generally waiting for when traversing the filesystem is disk I/O - the CPU overhead of data-structure copying and user<>kernel switches should be entirely dwarfed by the I/O. Which is why I'm surprised you say multiple threads help when there's a mechanical disk involved. I'd like to verify that myself - and I'd like it even more if somebody can find a good explanation.
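One quick sanity check, by the way: compare process CPU time against wall time over a full cold scan. If copying into data structures were a real bottleneck, user+kernel time would approach the wall time; if it's I/O-bound as I suspect, it should be a small fraction. Rough sketch, with Scan() standing in for whatever traversal is under test:

```cpp
// Quick check of the "CPU overhead is dwarfed by I/O" theory: compare CPU
// time to wall time over a full scan.
#include <windows.h>
#include <stdio.h>

void Scan(const wchar_t* root);   // whatever traversal is under test

static ULONGLONG ToMs(const FILETIME& ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart / 10000;    // 100ns units -> milliseconds
}

int wmain()
{
    DWORD t0 = GetTickCount();
    Scan(L"C:\\");
    DWORD wall = GetTickCount() - t0;

    FILETIME created, exited, kernel, user;
    GetProcessTimes(GetCurrentProcess(), &created, &exited, &kernel, &user);
    wprintf(L"wall %lu ms, kernel %llu ms, user %llu ms\n",
            wall, ToMs(kernel), ToMs(user));
    return 0;
}
```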
Thirdly, the problem of the OS sitting idle becomes even more pronounced when you do an over-the-network scan. -apankrat
That's a part where I'm fully convinced you're right, even without seeing benchmarks - there's indeed quite some latency even on a LAN, and the SMB/CIFS protocol sucks.
I got the point re: marketing speak though. I will try and back it up with the graphs -apankrat
Please also change the wording, though - even with graphs, the sentence is still suspicious. I'm too tired at the moment to come up with something better that isn't going to confuse normal people.
With regards to the MFT/USN - I really don't want to descend to that level. I considered using USN, for example, for move detection, and it is - basically - support hell. As much as I love troubleshooting NTFS nuances, this is just not my cup of tea. -apankrat
It's a nastily low level to be operating at - and it definitely shouldn't be the only scanning method available, since it might break anytime in the future. I'm also not sure MFT scanning is the best fit for a backup program - it's my understanding you pretty much have to read it in its entirety (possibly lots of memory use, constructing a larger in-memory graph than necessary, or spending CPU on pruning items you're not interested in?) - but g'darnit it's fast.
WizTree can scan my entire source partition in a fraction of the time it takes to traverse just part of it via API calls...
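(I've never written the parsing side myself, but my understanding is the entry point is raw volume access - admin rights required - plus FSCTL_GET_NTFS_VOLUME_DATA to learn how big the table is, which already shows the all-or-nothing nature. Sketch of just that part; I believe FSCTL_ENUM_USN_DATA can also enumerate MFT entries without parsing raw records, which may be what WizTree-style tools actually use.)

```cpp
// Sketch of the MFT-scanning entry point, as far as I understand it: open
// the raw volume (admin rights required) and ask how big the MFT is. The
// actual record parsing is the part I've never been brave enough to write.
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int wmain()
{
    HANDLE vol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                             FILE_SHARE_READ | FILE_SHARE_WRITE,
                             nullptr, OPEN_EXISTING, 0, nullptr);
    if (vol == INVALID_HANDLE_VALUE)
        return 1;   // most likely: not elevated

    NTFS_VOLUME_DATA_BUFFER nvd;
    DWORD got = 0;
    if (DeviceIoControl(vol, FSCTL_GET_NTFS_VOLUME_DATA, nullptr, 0,
                        &nvd, sizeof(nvd), &got, nullptr))
    {
        // MftValidDataLength is what you'd end up reading and parsing in
        // full -- there's no "just this subtree" mode at this level.
        wprintf(L"MFT size: %lld MB, record size: %lu bytes\n",
                nvd.MftValidDataLength.QuadPart >> 20,
                nvd.BytesPerFileRecordSegment);
    }

    CloseHandle(vol);
    return 0;
}
```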
USN is tricky to get right, and I haven't had time to play enough with it myself. But IMHO the speed benefits should make it worth it: without USN parsing, after (re)starting the backup program, you have to do a complete traversal of all paths in the backup set. It's quite a lot faster to simply scan the USN journal and pick up the changes - but yes, complex.
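The read loop itself doesn't look that bad on paper - roughly this, going by my reading of the FSCTL_QUERY_USN_JOURNAL / FSCTL_READ_USN_JOURNAL docs rather than battle-tested code. The support hell you mean is presumably in everything around it: journal wrap, journal deleted/recreated, mapping records back to paths...

```cpp
// Rough shape of the "pick up changes since last run" loop -- a sketch from
// the docs, not battle-tested code. 'lastUsn' would be persisted by the
// backup program between runs.
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

void ReadChangesSince(HANDLE vol, USN lastUsn)
{
    USN_JOURNAL_DATA_V0 jd;
    DWORD got = 0;
    if (!DeviceIoControl(vol, FSCTL_QUERY_USN_JOURNAL, nullptr, 0,
                         &jd, sizeof(jd), &got, nullptr))
        return;   // journal disabled or unavailable -> fall back to full rescan

    READ_USN_JOURNAL_DATA_V0 rd = {};
    rd.StartUsn     = lastUsn;
    rd.ReasonMask   = 0xFFFFFFFF;      // everything; narrow this in practice
    rd.UsnJournalID = jd.UsnJournalID; // ID mismatch vs saved -> full rescan

    BYTE buf[64 * 1024];
    while (DeviceIoControl(vol, FSCTL_READ_USN_JOURNAL, &rd, sizeof(rd),
                           buf, sizeof(buf), &got, nullptr) && got > sizeof(USN))
    {
        BYTE* p = buf + sizeof(USN);   // buffer starts with the next USN
        while (p < buf + got)
        {
            USN_RECORD* rec = (USN_RECORD*)p;
            wprintf(L"%.*s (reason %08lx)\n",
                    (int)(rec->FileNameLength / sizeof(WCHAR)),
                    (WCHAR*)((BYTE*)rec + rec->FileNameOffset),
                    rec->Reason);
            p += rec->RecordLength;
        }
        rd.StartUsn = *(USN*)buf;      // continue from where we left off
    }
}

int wmain()
{
    HANDLE vol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                             FILE_SHARE_READ | FILE_SHARE_WRITE,
                             nullptr, OPEN_EXISTING, 0, nullptr);
    if (vol == INVALID_HANDLE_VALUE)
        return 1;                  // needs admin rights
    ReadChangesSince(vol, 0);      // 0 = everything the journal still holds
    CloseHandle(vol);
    return 0;
}
```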
---
What about symlinks, hardlinks and junctions? Do you handle those correctly, and have they given you much headache?
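For what it's worth, detection itself shouldn't be the hard part - symlinks and junctions are reparse points visible straight from the find data, while hardlinks need an extra query on an opened handle. The policy (follow? skip? deduplicate?) is where I'd expect the headaches. Sketch:

```cpp
// Sketch of how I'd expect the detection side to look: symlinks/junctions
// are reparse points in the find data, hardlinks need a handle query.
#include <windows.h>
#include <stdio.h>

void Classify(const wchar_t* path, const WIN32_FIND_DATAW& fd)
{
    if (fd.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT)
    {
        // dwReserved0 holds the reparse tag for reparse points.
        if (fd.dwReserved0 == IO_REPARSE_TAG_SYMLINK)
            wprintf(L"%s: symlink (don't recurse, or you may loop)\n", path);
        else if (fd.dwReserved0 == IO_REPARSE_TAG_MOUNT_POINT)
            wprintf(L"%s: junction / mount point\n", path);
        return;
    }

    if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY))
    {
        HANDLE h = CreateFileW(path, 0,   // 0 = metadata-only access
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               nullptr, OPEN_EXISTING, 0, nullptr);
        if (h != INVALID_HANDLE_VALUE)
        {
            BY_HANDLE_FILE_INFORMATION info;
            if (GetFileInformationByHandle(h, &info) && info.nNumberOfLinks > 1)
                // Same nFileIndexHigh/Low on the same volume = same file:
                // back it up once, not once per link.
                wprintf(L"%s: hardlinked (%lu links)\n", path, info.nNumberOfLinks);
            CloseHandle(h);
        }
    }
}
```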