

How do incremental backup programs work?


mouser:
Block-level incremental is not something I look for.

It's not hard to think up scenarios where block-level incremental could save you backup space -- for example if you have a lot of LARGE files for which you are only adding stuff at the end (giant log files).  But that's not that common, and I don't have any such documents of my own that I would mark for incremental backup.
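(Just to make that concrete -- a rough Python sketch, with made-up helper names, fixed-size blocks, and a per-file manifest of block hashes; real programs use fancier schemes. Only blocks whose hash changed since the last run get copied, so appending to a giant log file only backs up the new tail blocks.)

--- Code: (Python) ---
import hashlib, json, os

BLOCK_SIZE = 1 << 20  # 1 MiB blocks

def block_hashes(path):
    """Hash each fixed-size block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def incremental_backup(path, manifest_path, store_dir):
    """Copy only the blocks that changed since the previous run."""
    os.makedirs(store_dir, exist_ok=True)
    old = []
    if os.path.exists(manifest_path):
        with open(manifest_path) as f:
            old = json.load(f)
    new = block_hashes(path)
    with open(path, "rb") as f:
        for i, digest in enumerate(new):
            if i < len(old) and old[i] == digest:
                continue                        # unchanged block: nothing to store
            f.seek(i * BLOCK_SIZE)
            with open(os.path.join(store_dir, digest), "wb") as out:
                out.write(f.read(BLOCK_SIZE))   # store the new/changed block under its hash
    with open(manifest_path, "w") as f:
        json.dump(new, f)                       # the manifest is this version's metadata
--- End code ---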

For me, block-level incremental backup seems too prone to trouble, and I feel better having full copies of each version of each backed-up file.  I don't want all the backups to be dependent on one another the way they would be with a block-level scheme.  That's just asking for trouble, in my view.

As for the "best" programs supporting virtual mounting of backups -- I wouldn't say that.  However, coding such a feature is pretty advanced stuff, so when you see it, that's probably a pretty good indication you are dealing with an advanced program.  It can be quite a useful feature sometimes if you have your own file search/explore tools that you would like to use on your backups.


P.S. It's hard to post to this thread, because each time you try to post you get: "Warning - while you were typing a new reply has been posted. You may wish to review your post."

jgpaiva:
jgpaiva drops an anvil on justice! Bah, that's bad news for me :)

mouser: the big advantage of using a block-level incremental backup system is that, to store multiple versions of a file, you only have to store the diffs plus the metadata needed to rebuild each version.
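(To make that concrete: each version of a file is then just a small manifest -- the metadata -- whose entries point into a pool of stored blocks, and rebuilding any version means concatenating the blocks its manifest lists. A rough Python sketch with made-up names, assuming a block store like the one sketched earlier in the thread:)

--- Code: (Python) ---
import json, os

def restore_version(manifest_path, store_dir, out_path):
    """Rebuild one file version from its manifest of block hashes."""
    with open(manifest_path) as f:
        hashes = json.load(f)                      # the per-version metadata
    with open(out_path, "wb") as out:
        for digest in hashes:
            with open(os.path.join(store_dir, digest), "rb") as block:
                out.write(block.read())            # blocks can be shared by many versions
--- End code ---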

jgpaiva hopes JD doesn't get well known :P

[edit] oh, JD needs the S3 subscription.. I feel better now [/edit]

jgpaiva:
@mouser: yeah, block-level might not seem useful for the regular user, but there's one more advantage: similarities between files are also recognized, so the compression works across the whole backup set and not only per backup. I probably didn't express myself correctly.
What I mean is that when you do a backup, it is compressed, but only using information from the current backup, so it takes more space than it would if it could also use information from previous backups (which is possible with block-level backup).
I'm not absolutely sure about this, but I think that, conceptually, block-level backup makes it possible to take as little space as possible.
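(Roughly how a content-addressed chunk store gets that effect -- hypothetical names, fixed-size chunks: a chunk whose hash is already in the store is never written again, so identical data in different files, or in different backup runs, costs almost nothing extra.)

--- Code: (Python) ---
import hashlib, os

def store_chunk(data, store_dir):
    """Write a chunk only if the store hasn't seen this exact content before."""
    digest = hashlib.sha256(data).hexdigest()
    path = os.path.join(store_dir, digest)
    if not os.path.exists(path):       # skip the write if this chunk is already stored
        with open(path, "wb") as f:
            f.write(data)
    return digest

def backup_file(path, store_dir, chunk_size=1 << 20):
    """Return the list of chunk ids that represents this file version."""
    os.makedirs(store_dir, exist_ok=True)
    ids = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            ids.append(store_chunk(chunk, store_dir))
    return ids
--- End code ---

Backing up the same files again the next day (or a second, mostly identical set of files) then only adds the chunks the store hasn't seen yet.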

The point you made about the backups all depending on each other is interesting, though... I'll have to think about a solution for that :)

justice:
Maybe you could look at how PAR files work; that way you could use small helper files to reconstruct the backup even when the whole backup isn't available. I always thought they'd be very useful even in a download manager.

Parchive parity files (or PAR files for short) create redundant data that can be used later in case parts of the original data are lost or corrupted. PAR files allow file-level recovery of data: out of a group of many files, if a limited number are lost or corrupted, they can be recovered. These files have the extensions .par, .p01, .p02, etc. http://en.wikipedia.org/wiki/Parchive
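(A toy illustration of the parity idea in Python -- real PAR2 uses Reed-Solomon codes and can survive several missing pieces; plain XOR, as below, can rebuild exactly one missing block of equal size:)

--- Code: (Python) ---
def xor_blocks(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(blocks):
    """XOR all equal-size data blocks into one parity block."""
    parity = blocks[0]
    for blk in blocks[1:]:
        parity = xor_blocks(parity, blk)
    return parity

def recover_missing(blocks, parity):
    """Rebuild the single block passed as None from the others plus the parity block."""
    return make_parity([b for b in blocks if b is not None] + [parity])

# Example: lose the middle block, rebuild it from the rest and the parity block.
data = [b"aaaa", b"bbbb", b"cccc"]
par = make_parity(data)
assert recover_missing([data[0], None, data[2]], par) == data[1]
--- End code ---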

jgpaiva:
Maybe you could look at how PAR files work; that way you could use small helper files to reconstruct the backup even when the whole backup isn't available.
-justice (June 17, 2008, 06:50 AM)
--- End quote ---
Yep, that was my first idea too ;) It'll take some more investigating to find out whether it actually makes up for the space it takes, though ;)
