I read many posts talking about the importance of having multiple copies. But the problem is, even if you have multiple copies, how do you make sure that EVERY FILE in each copy is good? For instance, imagine you want to view a photo taken a few years ago: when you check copy 1 of your backup, you find it is already corrupted. You turn to copies 2/3 and find the photo is good there, so you happily discard copy 1 of the backup and keep 2/3. The next day you want to view another photo, photo 2, and find that photo 2 is dead in backup copy 2 but good in copy 3, so you keep copy 3 and discard copy 2. Then some day you find something is wrong in copy 3, and you no longer have any copy with everything intact.

Someone may say: when we find that some files in copy 1 are dead, we make a new copy 4 from copy 2 (or 3). But the problem is that copy 2 may already contain dead files of its own, so the new copy would not solve the issue above.

Just wondering how you guys deal with this issue? Any ideas would be appreciated.

  • GNUr000t@alien.top · 1 year ago

    Good backup software is going to have methods to verify that backed-up data is intact. When backups are stored in (potentially fixed-size) blobs, you have the option of verifying a single blob in one action instead of potentially thousands of individual files.
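
    For example, here is a minimal sketch of the underlying idea (in Python, not any particular backup tool's actual mechanism): hash every file once when the copy is made, store the hashes in a manifest, and re-hash later to see exactly which files have rotted. The paths and manifest filename are made up for illustration.

    ```python
    import hashlib
    import json
    from pathlib import Path

    def sha256(path: Path) -> str:
        """Hash a file in chunks so large files don't have to fit in memory."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(copy_root: Path) -> dict:
        """Record a hash for every file in a backup copy."""
        return {
            str(p.relative_to(copy_root)): sha256(p)
            for p in sorted(copy_root.rglob("*"))
            if p.is_file()
        }

    def verify(copy_root: Path, manifest: dict) -> list:
        """Return the files whose current hash no longer matches the manifest."""
        bad = []
        for rel, expected in manifest.items():
            p = copy_root / rel
            if not p.is_file() or sha256(p) != expected:
                bad.append(rel)
        return bad

    if __name__ == "__main__":
        root = Path("/mnt/backup1")                          # hypothetical backup copy
        manifest_file = Path("/mnt/backup1.manifest.json")   # stored outside the copy
        if not manifest_file.exists():
            manifest_file.write_text(json.dumps(build_manifest(root), indent=2))
        else:
            manifest = json.loads(manifest_file.read_text())
            print("corrupted files:", verify(root, manifest))
    ```

    Real backup tools do the equivalent against their own storage formats (this is roughly what `restic check`, `borg check`, or a ZFS scrub are for); the point is that "is this copy good?" becomes a question you can answer mechanically instead of by opening photos one by one.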

    By “dead” I’m assuming you also mean bit rot. While that’s a real problem, it’s not something that happens day after day at any scale an individual would be using. If the source is getting corrupted somehow and that corrupted file is then backed up, that is what version history is for.
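
    And for the specific scenario in the question (a photo dead in one copy but fine in another), once each copy has a manifest like the sketch above, finding an intact source and repairing the damaged copies is mechanical rather than guesswork. A hedged continuation of the same sketch, with made-up mount points:

    ```python
    import hashlib
    import shutil
    from pathlib import Path

    # Hypothetical mount points for three independent backup copies.
    COPIES = [Path("/mnt/backup1"), Path("/mnt/backup2"), Path("/mnt/backup3")]

    def sha256(path: Path) -> str:
        """Same chunked hashing helper as in the sketch above."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def repair(rel: str, expected_hash: str) -> bool:
        """Overwrite damaged instances of `rel` from any copy that still matches the manifest hash."""
        good = [c / rel for c in COPIES
                if (c / rel).is_file() and sha256(c / rel) == expected_hash]
        if not good:
            return False  # every copy is damaged; nothing left to restore from
        for c in COPIES:
            target = c / rel
            if not target.is_file() or sha256(target) != expected_hash:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(good[0], target)
        return True
    ```

    The important part is that this never discards a copy wholesale: a copy with one bad photo is still a perfectly good source for every other file in it.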