restic must be able to refresh lock files in time. However, large
uploads over slow connections can block the lock refresh behind them
long enough for the refresh to time out.
Since 0.15 (#4020), inodes are generated as hashes of names, xor'd with
the parent inode. That means that the inode of a/b/b is
h(a/b/b) = h(a) ^ h(b) ^ h(b) = h(a).
I.e., the grandchild has the same inode as the grandparent. GNU find
trips over this because it thinks it has encountered a loop in the
filesystem, and fails to search a/b/b. This happens more generally when
the same name occurs an even number of times.
Fix this by multiplying the parent by a large prime, so the combining
operation is no longer symmetric in its arguments. This is what the FNV
hash, which we used prior to 0.15, does. The hash is now
h(a/b/b) = h(b) ^ p*(h(b) ^ p*h(a))
Note that we already ensure that h(x) is never zero.
Collisions can still occur, but they should now be much less likely
within a single path.
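A minimal sketch of the combining step in Go, assuming an FNV-style
64-bit prime; the constant and helper names below are illustrative, not
restic's actual identifiers:

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // FNV-1 64-bit prime; the exact constant used is an assumption here.
    const prime = 1099511628211

    // nameHash hashes a single path component. The real code additionally
    // ensures the result is never zero.
    func nameHash(name string) uint64 {
        h := fnv.New64()
        _, _ = h.Write([]byte(name))
        return h.Sum64()
    }

    // inode combines the parent inode with the child's name hash.
    // Multiplying the parent first makes the operation asymmetric, so
    // repeated names no longer cancel out. uint64 arithmetic wraps on
    // overflow, which is intended.
    func inode(parent uint64, name string) uint64 {
        return prime*parent ^ nameHash(name)
    }

    func main() {
        a := inode(0, "a")
        ab := inode(a, "b")
        abb := inode(ab, "b")
        fmt.Println(a, ab, abb) // abb != a, unlike with the plain xor scheme
    }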
Fixes #4253.
This turns snapshotFilterOptions from cmd into a restic.SnapshotFilter
type and makes restic.FindFilteredSnapshot and FindFilteredSnapshots
methods on that type. This fixes #4211 by ensuring that hosts and paths
are named struct fields instead of unnamed function arguments in long
argument lists.
Timestamp limits are also included in the new type. To avoid too much
pointer handling, the convention is that time zero means no limit.
That's January 1st, year 1, 00:00 UTC, which is so unlikely a date that
we can sacrifice it for simpler code.
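A minimal sketch of such a type, assuming the zero-time convention
described above; the field set is inferred from the description rather
than copied from the final restic API:

    package main

    import (
        "fmt"
        "time"
    )

    // SnapshotFilter names the filter criteria as struct fields instead
    // of passing them as positional arguments.
    type SnapshotFilter struct {
        Hosts []string
        Paths []string

        // A TimestampLimit of time.Time{} (January 1st, year 1,
        // 00:00 UTC) means no limit, avoiding a *time.Time pointer.
        TimestampLimit time.Time
    }

    func (f *SnapshotFilter) hasTimestampLimit() bool {
        return !f.TimestampLimit.IsZero()
    }

    func main() {
        f := SnapshotFilter{Hosts: []string{"web1"}, Paths: []string{"/srv"}}
        fmt.Println(f.hasTimestampLimit()) // false: zero time, no limit
    }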
Fixes restic#719
If the option is passed, restic will wait for the specified duration
and retry locking the repo every 10 seconds (or more often if the total
timeout is relatively small).
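A minimal sketch of such a retry loop, with a hypothetical tryLock
stand-in for the real lock acquisition; the exponential backoff,
one-minute sleep cap, and minDuration helper mirror the changes listed
below:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    const retrySleepMax = time.Minute // cap on the backoff sleep

    func minDuration(a, b time.Duration) time.Duration {
        if a < b {
            return a
        }
        return b
    }

    // retryLock retries tryLock with exponential backoff until it
    // succeeds, the timeout elapses, or ctx is canceled.
    func retryLock(ctx context.Context, timeout time.Duration, tryLock func() error) error {
        deadline := time.Now().Add(timeout)
        sleep := time.Second
        for {
            err := tryLock()
            if err == nil {
                return nil
            }
            remaining := time.Until(deadline)
            if remaining <= 0 {
                return err // the last attempt failed and the window is closed
            }
            // Sleep no longer than the cap or the remaining time, and
            // stay responsive to cancellation.
            select {
            case <-time.After(minDuration(minDuration(sleep, retrySleepMax), remaining)):
            case <-ctx.Done():
                return ctx.Err()
            }
            sleep *= 2
        }
    }

    func main() {
        attempts := 0
        err := retryLock(context.Background(), 10*time.Second, func() error {
            attempts++
            if attempts < 3 {
                return errors.New("repository is already locked")
            }
            return nil
        })
        fmt.Println(attempts, err) // 3 <nil>
    }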
- Play nice with JSON output
- Reduce wait time in lock tests
- Rework timeout last attempt
- Reduce test wait time to 0.1s
- Use exponential backoff for the retry lock
- Don't pass gopts to lockRepo functions
- Use global variable for retry sleep setup
- Exit retry lock on cancel
- Better wording for flag help
- Reorder debug statement
- Refactor tests
- Lower max sleep time to 1m
- Test that we cancel/timeout in time
- Use non-blocking sleep function
- Refactor into minDuration func
Co-authored-by: Julian Brost <julian@0x4a42.net>
The scanner process has a purely cosmetic effect, feeding the progress
printer, and can be disabled without impacting functionality when the
user does not need an estimate of completion.
In many cases the scanner process provides beneficial priming of the
file system cache, so as general advice it should not be disabled.
However, tests have shown that backups of NFS and FUSE-based
filesystems, where stat(2) is relatively expensive, can be
significantly faster without the scanner.
The Original field is meant to remember the original snapshot ID when,
for example, its tags are changed. The `rewrite` command only set it if
it was not set previously. However, a rewritten snapshot is potentially
rather different from the original snapshot, so just always set the
Original field. This also makes it easier to later detect and
potentially remove the original snapshots.
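A minimal sketch of the change in behavior; the type and field names
are illustrative, not restic's exact definitions:

    package main

    import "fmt"

    // ID stands in for restic's snapshot ID type.
    type ID string

    type Snapshot struct {
        id       ID
        Original *ID
    }

    // rewrite returns a new snapshot. Previously, Original was only set
    // if it was nil; now it always points at the snapshot that was
    // rewritten, even if that one was itself a rewrite.
    func rewrite(sn *Snapshot, newID ID) *Snapshot {
        orig := sn.id
        return &Snapshot{id: newID, Original: &orig}
    }

    func main() {
        first := &Snapshot{id: "aaaa"}
        second := rewrite(first, "bbbb")
        third := rewrite(second, "cccc")
        fmt.Println(*second.Original, *third.Original) // aaaa bbbb
    }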
Automatically fall back to hiding files if not authorized to
permanently delete them. This allows using restic with an append-only
B2 application key, so an attacker cannot directly delete backups with
the API key used by restic.
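A minimal sketch of the fallback logic; the bucket interface and error
check are assumptions standing in for the real B2 backend code:

    package main

    import (
        "errors"
        "fmt"
    )

    var errUnauthorized = errors.New("unauthorized: missing deleteFiles capability")

    // bucket abstracts the two B2 operations involved; method names are
    // illustrative.
    type bucket interface {
        Delete(name string) error
        Hide(name string) error
    }

    // remove permanently deletes a file, falling back to hiding it when
    // the application key is not allowed to delete. A hidden file
    // disappears from listings but remains recoverable in the bucket.
    func remove(b bucket, name string) error {
        err := b.Delete(name)
        if errors.Is(err, errUnauthorized) {
            return b.Hide(name)
        }
        return err
    }

    // appendOnly simulates a bucket accessed with an append-only key.
    type appendOnly struct{}

    func (appendOnly) Delete(string) error { return errUnauthorized }
    func (appendOnly) Hide(string) error   { return nil }

    func main() {
        fmt.Println(remove(appendOnly{}, "locks/abcd1234")) // <nil>: fell back to hide
    }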
To use this feature, create an application key without the deleteFiles
capability. It is recommended to restrict the key to just one bucket.
For example, using the b2 command-line tool:
b2 create-key --bucket <bucketName> <keyName> listBuckets,readFiles,writeFiles,listFiles
Suggested-by: Daniel Gröber <dxld@darkboxed.org>
Counting the first occurrence of a duplicate blob as used and all
other occurrences as duplicates, independent of which instance of the
blob is kept, is only accurate if all copies of the blob have the same
size. This is no longer the case for a repository containing both
compressed and uncompressed blobs.
Thus, for duplicated blobs, first count all instances as duplicates
and then subtract the actually used instance later on.
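A small worked sketch of the corrected accounting, using made-up sizes
for two instances of the same blob:

    package main

    import "fmt"

    func main() {
        // Two copies of one blob: one uncompressed, one compressed.
        instances := []uint64{100, 60}
        keptSize := instances[1] // suppose the compressed copy is kept

        var duplicateSize uint64
        for _, s := range instances {
            duplicateSize += s // first, count every instance as a duplicate
        }
        duplicateSize -= keptSize // then subtract the instance actually used

        // Prints 100: the wasted space is the uncompressed copy's size,
        // regardless of which instance happened to be listed first.
        fmt.Println(duplicateSize)
    }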
The backup command failed if a directory contained duplicate entries.
Downgrade the severity of this problem from a fatal error to a warning,
allowing users to still create a backup.
While searching for lock files from concurrently running restic
instances, restic ignored unreadable lock files. These can either be in
fact invalid or just temporarily unreadable. As it is not really
possible to differentiate between the two cases, err on the side of
caution and consider the repository as already locked.
The code retries searching for other locks up to three times to smooth
out temporarily unreadable lock files.
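A minimal sketch of the cautious retry, with a hypothetical readLocks
stand-in for listing and parsing the lock files:

    package main

    import (
        "errors"
        "fmt"
    )

    var errRepoLocked = errors.New("repository is already locked")

    // checkLocks errs on the side of caution: an unreadable lock file is
    // treated like a valid lock. readLocks returns the number of other
    // locks found, or an error if a lock file could not be read.
    func checkLocks(readLocks func() (int, error)) error {
        const retries = 3
        var err error
        for i := 0; i < retries; i++ {
            var n int
            n, err = readLocks()
            if err == nil {
                if n > 0 {
                    return errRepoLocked
                }
                return nil
            }
            // An unreadable lock file may be a transient condition; retry.
        }
        // Still unreadable after retrying: assume the repository is locked.
        return fmt.Errorf("%w: %v", errRepoLocked, err)
    }

    func main() {
        calls := 0
        err := checkLocks(func() (int, error) {
            calls++
            if calls < 2 {
                return 0, errors.New("lock file unreadable")
            }
            return 0, nil
        })
        fmt.Println(calls, err) // 2 <nil>: the transient failure was smoothed out
    }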