To prevent accidentally wiping all snapshots from a repository, the
`--unsafe-allow-remove-all` option can only be used if either a snapshot
filter or a keep policy is specified.
Essentially, the option allows `forget --tag something
--unsafe-allow-remove-all` calls to remove all snapshots with a specific
tag.
`--keep-tag invalid-tag` was previously able to wipe all snapshots in a
repository. As the user specified a `--keep-*` option, this is most
likely unintentional. Deleting all snapshots is therefore now forbidden
if a `--keep-*` option was specified, to prevent data loss. (Not
specifying such an option currently also causes the command to abort.)
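A minimal sketch of how such a guard could look; the function, parameters,
and error messages below are hypothetical, not restic's actual
implementation:

```go
package main

import "errors"

// checkRemoveAll is a sketch of a guard that runs when a `forget` invocation
// would remove every snapshot in the repository. All names are illustrative,
// not restic's actual code.
func checkRemoveAll(hasSnapshotFilter, hasKeepPolicy, unsafeAllowRemoveAll bool) error {
	if !unsafeAllowRemoveAll {
		// Removing all snapshots is refused by default, even if a --keep-*
		// policy simply matched nothing (e.g. --keep-tag invalid-tag).
		return errors.New("refusing to delete all snapshots, use --unsafe-allow-remove-all to override")
	}
	// Even with the override, require a snapshot filter or a keep policy so
	// that e.g. `forget --tag something --unsafe-allow-remove-all` only
	// removes the snapshots selected by the filter.
	if !hasSnapshotFilter && !hasKeepPolicy {
		return errors.New("--unsafe-allow-remove-all requires a snapshot filter or a keep policy")
	}
	return nil
}
```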
The toplevel context in restic is only canceled if the user interrupts a
restic operation. If the network connection has failed, this can require
waiting the full retry duration of 15 minutes, which is a bad user
experience for interactive usage. Thus, limit the delay to one minute in
this case.
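As a rough sketch of the idea, assuming restic can classify an error as a
failed network connection; the function name, parameter, and durations are
illustrative assumptions, not restic's actual code:

```go
package main

import "time"

// retryBudget is a sketch of how the retry duration could be capped.
// Whether an error indicates a failed network connection, and the exact
// durations, are assumptions for illustration.
func retryBudget(networkConnectionFailed bool) time.Duration {
	if networkConnectionFailed {
		// The toplevel context is only canceled when the user interrupts
		// restic, so without this cap an interactive user would sit through
		// the full 15-minute retry duration before seeing the error.
		return time.Minute
	}
	return 15 * time.Minute
}
```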
Retries in restic try to solve two main problems:
- retry a temporarily failed operation
- tolerate temporary network interruptions
The first problem only requires a few retries, whereas the second benefits
primarily from spreading the requests over a longer duration.
Increasing the default multiplier and the initial interval works for
both cases. The first few retries only take a few seconds, while later
retries quickly reach the maximum interval of one minute. This ensures
that the total number of retries issued by restic remains at around 21
for a 15-minute period. As the concurrency in restic is bounded, retries
drastically reduce the number of requests sent to a backend, which helps
prevent overloading it.
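restic's retry backend builds on the github.com/cenkalti/backoff/v4 library;
the sketch below shows how such a backoff can be configured, with
illustrative values chosen to match the behaviour described above rather
than restic's exact settings:

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"github.com/cenkalti/backoff/v4"
)

func main() {
	// Illustrative values, not necessarily restic's: a larger initial
	// interval and multiplier make the first retries take only a few
	// seconds, while later retries soon hit the one-minute maximum interval.
	bo := backoff.NewExponentialBackOff()
	bo.InitialInterval = 1 * time.Second
	bo.Multiplier = 2
	bo.MaxInterval = time.Minute
	bo.MaxElapsedTime = 15 * time.Minute

	attempts := 0
	op := func() error {
		attempts++
		if attempts < 4 {
			return errors.New("simulated temporary failure")
		}
		return nil
	}

	// backoff.Retry calls op until it succeeds or MaxElapsedTime is
	// exceeded. With the values above, a permanently failing operation
	// would be retried roughly 20 times over the 15-minute window.
	err := backoff.Retry(op, bo)
	fmt.Println("attempts:", attempts, "err:", err)
}
```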
Previously, if an operation failed after 15 minutes, it would never be
retried. This made large backend requests less reliable than smaller
ones.
Depending on how long an operation takes to fail, the total retry
duration currently varies between 1.5 and 15 minutes. In particular for
temporarily interrupted network connections, the lower bound of 1.5
minutes is too short. Thus, always use a limit of 15 minutes.
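A sketch of the idea, assuming the retry window is tracked with an explicit
deadline; this is not restic's actual implementation, which uses exponential
backoff between attempts rather than a fixed delay:

```go
package main

import "time"

// retryFor is a sketch, not restic's implementation: it keeps retrying op
// until it succeeds or a fixed budget has passed. The budget starts counting
// when the first attempt fails, so an operation that itself takes a long
// time to fail still gets the full 15-minute retry window afterwards.
func retryFor(budget, delay time.Duration, op func() error) error {
	err := op()
	if err == nil {
		return nil
	}
	// Start the clock at the first failure, not at the start of the operation.
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		time.Sleep(delay)
		if err = op(); err == nil {
			return nil
		}
	}
	return err
}
```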
RemoveUnpacked will eventually block removal of all filetypes other than
snapshots. However, getting there requires a major refactor to provide
some components with privileged access.
This ensures that the pack header is actually read completely.
Previously, for a truncated file it was possible to read only a part of
the header, as backend.Load(...) is not guaranteed to return as many
bytes as requested by the length parameter.
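In Go, the standard way to guarantee a complete read is io.ReadFull, which
turns a short read into an error; a minimal sketch (the readExactly helper
and its parameters are made up for illustration):

```go
package main

import (
	"fmt"
	"io"
)

// readExactly is an illustrative helper: unlike a plain Read (or a backend
// Load that may return fewer bytes than requested), io.ReadFull only
// succeeds if exactly len(buf) bytes were read and returns io.ErrUnexpectedEOF
// (or io.EOF) for a truncated source.
func readExactly(rd io.Reader, size int) ([]byte, error) {
	buf := make([]byte, size)
	if _, err := io.ReadFull(rd, buf); err != nil {
		return nil, fmt.Errorf("pack header truncated: %w", err)
	}
	return buf, nil
}
```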
This is inspired by the circuit breaker pattern used in distributed
systems: if too many requests fail, it is better to immediately fail new
requests for a limited time to give the backend time to recover.
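A minimal sketch of the pattern, not restic's actual implementation: after a
threshold of consecutive failures, new requests fail immediately until a
cool-down period has passed. The type and field names are made up.

```go
package main

import (
	"errors"
	"sync"
	"time"
)

// breaker is a simplified circuit breaker: after `threshold` consecutive
// failures it rejects new requests immediately until `cooldown` has passed,
// giving the backend time to recover.
type breaker struct {
	mu        sync.Mutex
	failures  int
	threshold int
	cooldown  time.Duration
	openUntil time.Time
}

var errBreakerOpen = errors.New("too many failures, failing fast")

func (b *breaker) Do(op func() error) error {
	b.mu.Lock()
	if time.Now().Before(b.openUntil) {
		b.mu.Unlock()
		return errBreakerOpen // fail fast while the breaker is open
	}
	b.mu.Unlock()

	err := op()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err == nil {
		b.failures = 0 // a success closes the breaker again
		return nil
	}
	b.failures++
	if b.failures >= b.threshold {
		b.openUntil = time.Now().Add(b.cooldown)
		b.failures = 0
	}
	return err
}
```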
By forgetting a file in the cache at most once, we can ensure that a
broken file is retrieved from the backend only one more time. Previously,
if the file stored in the backend was broken, it would be cached and
deleted over and over again. Now it is retrieved only once more; all later
requests just use the cached copy and either succeed or fail immediately.
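A sketch of the idea with made-up names (not restic's cache API): remember
which files have already been dropped from the cache so that a broken file
is re-downloaded at most once.

```go
package main

import "sync"

// forgetTracker records which files were already evicted from the cache, so
// that a file whose backend copy is broken is re-downloaded at most once
// instead of being cached and deleted over and over again. Illustrative only.
type forgetTracker struct {
	mu        sync.Mutex
	forgotten map[string]struct{}
}

func newForgetTracker() *forgetTracker {
	return &forgetTracker{forgotten: make(map[string]struct{})}
}

// ShouldForget reports whether the cached copy of name may be dropped. It
// returns true only the first time it is asked about a given file; afterwards
// the (possibly broken) cached copy is kept, so later requests either succeed
// or fail immediately without hitting the backend again.
func (t *forgetTracker) ShouldForget(name string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.forgotten[name]; ok {
		return false
	}
	t.forgotten[name] = struct{}{}
	return true
}
```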