
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - xviruz

Skwire Empire / SFV Ninja FR: checkpoint results
« on: July 02, 2018, 03:44 AM »
Feature request for SFV Ninja: add an option to persist checksum results to a specified output file at fixed intervals (e.g., XX mins), so that not all progress is lost upon a system crash.

In my use case, I'm running checksums against large amounts of data that take multiple hours to complete, so a system crash part way through is quite annoying. For me personally, persisting only the completed checksums is perfectly fine: i.e., have the user be responsible for reconciliation/adding back incomplete files.
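The checkpointing idea above can be sketched as a periodic flush of completed results to the output file, so a crash loses at most one interval's worth of work. This is a minimal illustration, not SFV Ninja's actual implementation; the function names and the `flush_interval` parameter are assumptions for the sketch:

```python
import binascii
import time

def crc32_file(path, chunk_size=1 << 20):
    """Compute a file's CRC32 by streaming it in chunks."""
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            crc = binascii.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def write_sfv(out_path, entries):
    """Write entries in simple SFV format: '<name> <CRC32 in hex>'."""
    with open(out_path, "w") as f:
        for name, crc in entries.items():
            f.write(f"{name} {crc:08X}\n")

def checksum_with_checkpoints(paths, results_path, flush_interval=600):
    """Checksum files, flushing only the *completed* results every
    flush_interval seconds, per the feature request."""
    completed = {}
    last_flush = time.monotonic()
    for path in paths:
        completed[path] = crc32_file(path)
        if time.monotonic() - last_flush >= flush_interval:
            write_sfv(results_path, completed)  # partial-but-valid SFV on disk
            last_flush = time.monotonic()
    write_sfv(results_path, completed)  # final flush
```

Since only finished files are written, the user reconciles any in-progress file after a crash, as suggested above.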

1.3.5 works great, thanks a lot!  :D

I'm liking the pause button in 1.3.3 but miss the ability to scroll while performing the checksum (the middle portion is grayed out/non-interactive). Any chance you could add that back?

I just noticed that the "Total" bar is broken. It either doesn't update at all or, when verifying newly added files, gets stuck in a partial state (bar never goes to the end). It was working fine in 1.2.6.

Both these items should be addressed in the latest version I just uploaded.  Please test and let me know.  Links at bottom.

Thanks, they both work great. It's much faster now!

The timestamp data is not currently pulled for each file so I'm not certain how much cumulative time this would add.  So, how important is this feature to you?

It's nice-to-have but not critical: the drives I'm CRC'ing are largely append/read-only, so it's not too hard to sift through the bad CRCs and figure out if it was because the file changed. I guess if most of your files are always changing, stale CRCs will be pretty useless.

I haven't ever developed in Windows, so my guess was that if it were anything like Unix, where stat-ing a file will give you a struct with both file size and timestamps, it wouldn't add any disk overheads (since you're already scanning for file sizes). If that's not the case, it'd probably take twice as long in the worst case (an additional scan, no caching).
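For what it's worth, the Unix behavior described above looks like this in Python: a single `os.stat()` call populates size and timestamps together, so collecting modification times alongside sizes adds no extra disk pass. (Windows is analogous in that its directory-scan APIs also return size and timestamps together, though SFV Ninja's internals may differ.)

```python
import os

def scan(paths):
    """One os.stat() per file yields both size and modification time,
    so timestamps cost nothing extra beyond the size scan."""
    info = {}
    for path in paths:
        st = os.stat(path)  # a single call fills the whole struct
        info[path] = (st.st_size, st.st_mtime)
    return info
```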

I have a few feature requests... Apologies if they've already been asked for.

1. Do not remember "full screen" as the last used resolution. It's a bit annoying when the program starts up un-maximized but still takes up the entire screen.

2. Faster skips for files with saved checksums. It takes over 10 mins to go through 10k or so files that already have saved checksums when using "verify new files only" (Win 7 x64, Core i5, 8GB RAM, 7200RPM Seagate drive). I'm not sure why that is; if this is all in memory, it should be very fast. If the GUI is the bottleneck, could it be updated less frequently?

If this is not possible, then it'd be nice to enable saving an SFV without reverifying or skipping files that already have saved checksums. That is, the SFV is generated using "checksum" where possible and "saved checksum" otherwise, throwing a warning only when neither "checksum" nor "saved checksum" exist. So for example, if I load an SFV, I can immediately save it with no warnings. (Right now, it warns when "checksum" fields are empty, even when "saved checksum" is present.)
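The fallback logic proposed above could be sketched like this; the dict-based record layout and field names (`checksum`, `saved_checksum`) are hypothetical stand-ins for the columns in the UI:

```python
def build_sfv_lines(files):
    """Prefer a freshly computed checksum, fall back to the saved checksum,
    and warn only when neither exists."""
    lines, warnings = [], []
    for f in files:
        crc = f.get("checksum") or f.get("saved_checksum")
        if crc is None:
            warnings.append(f["name"])  # neither field present: warn
        else:
            lines.append(f"{f['name']} {crc}")
    return lines, warnings
```

Under this rule, loading an SFV (which populates saved checksums) and immediately saving it would produce no warnings.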

My use case is CRC'ing an entire drive and occasionally appending CRCs of new files by first loading an existing SFV and drag-and-dropping the drive (or its folders). Ideally, files that already have checksums can be skipped very quickly. If not, I'd want to sort by saved checksum, calculate checksums for the new files, sort by filepath, and save a new SFV without reverifying or skipping the existing ones.

3. Allow the comment to display "This file has been modified" when a file's modified timestamp is later than the SFV's creation timestamp. If the timestamp metadata is pulled together with file size when scanning files, there should be no performance penalty. Though, I'd understand if you think this bloats the program in a negative way.
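The comparison behind request 3 is trivial once the modification time is in hand; a sketch, where `sfv_created` would be the SFV file's own creation timestamp:

```python
import os

def modified_comment(path, sfv_created):
    """Return the comment text when a file's mtime postdates the SFV."""
    if os.stat(path).st_mtime > sfv_created:
        return "This file has been modified"
    return ""
```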

Thanks for all your hard work.
