A GCS service account may only have object permissions on an existing
bucket but no bucket create/get permissions.
Such service accounts are currently blocked from initializing a
restic repository because restic cannot determine whether the bucket exists.
This PR updates the logic to assume the bucket exists when the bucket
attribute request results in a permission denied error.
This way, restic can still initialize a repository if the service
account only has object permissions.
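A minimal sketch of that fallback, assuming the `cloud.google.com/go/storage` client; the `bucketExists` helper name is illustrative, not restic's actual code:

```go
// Rough sketch of the permission-denied fallback described above.
package gcscheck

import (
	"context"
	"errors"
	"net/http"

	"cloud.google.com/go/storage"
	"google.golang.org/api/googleapi"
)

func bucketExists(ctx context.Context, bucket *storage.BucketHandle) (bool, error) {
	_, err := bucket.Attrs(ctx)
	if err == nil {
		return true, nil
	}
	if errors.Is(err, storage.ErrBucketNotExist) {
		return false, nil
	}
	var gerr *googleapi.Error
	if errors.As(err, &gerr) && gerr.Code == http.StatusForbidden {
		// the service account lacks storage.buckets.get: assume the bucket
		// exists and let later object operations fail if it does not
		return true, nil
	}
	return false, err
}
```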
fixes: https://github.com/restic/restic/issues/3100
The VSS support works for 32- and 64-bit Windows; this includes a check that
the restic binary's architecture matches the OS architecture, as required by VSS.
The backup operation will fail if the user does not have sufficient permissions
to use VSS. Snapshotting volumes also covers mountpoints but skips UNC paths.
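As a rough illustration of the architecture check (a hypothetical helper, not restic's implementation), a 32-bit binary running on 64-bit Windows can be detected as a WOW64 process via `golang.org/x/sys/windows`:

```go
//go:build windows

// Hypothetical sketch: a 32-bit restic on 64-bit Windows is a WOW64 process
// and cannot use VSS, so the architectures must match.
package vsscheck

import (
	"runtime"

	"golang.org/x/sys/windows"
)

func architectureMatchesOS() (bool, error) {
	if runtime.GOARCH != "386" {
		// a 64-bit binary only runs on 64-bit Windows, nothing to check
		return true, nil
	}
	var isWow64 bool
	if err := windows.IsWow64Process(windows.CurrentProcess(), &isWow64); err != nil {
		return false, err
	}
	// isWow64 means this 32-bit binary runs on a 64-bit OS: mismatch
	return !isWow64, nil
}
```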
This removes the requirement that `restic self-update --output` point
to the path of an existing file to overwrite. If the specified path
does exist, we still verify that it is a regular file, rather than a
directory or a device, before it gets overwritten.
We also verify that the path to a new file lies within an existing
directory; the alternative would be running into that problem only
after the actual download etc. has completed.
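A sketch of the intended checks; `checkOutputPath` is an illustrative name and the real code differs:

```go
// Sketch of validating the --output path before downloading.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func checkOutputPath(output string) error {
	fi, err := os.Stat(output)
	switch {
	case err == nil:
		// path exists: only ever overwrite regular files, not directories or devices
		if !fi.Mode().IsRegular() {
			return fmt.Errorf("output path %q exists but is not a regular file", output)
		}
	case os.IsNotExist(err):
		// path does not exist yet: its parent directory must already exist, so
		// the problem is caught before the download instead of after it
		dir := filepath.Dir(output)
		di, statErr := os.Stat(dir)
		if statErr != nil {
			return fmt.Errorf("output directory %q: %v", dir, statErr)
		}
		if !di.IsDir() {
			return fmt.Errorf("output parent %q is not a directory", dir)
		}
	default:
		return err
	}
	return nil
}

func main() {
	fmt.Println(checkOutputPath("restic-new"))
}
```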
While at it, I also replace `errors.Errorf` with the more appropriately
verbose `errors.Fatalf`.
Resolves #2491
As an alternative to -r, this allows reading the repository URL
from a file in order to prevent certain types of information leaks,
especially for URLs containing credentials.
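A minimal sketch of reading the URL from such a file, assuming the URL is on the first line; the helper name and file name are illustrative:

```go
// Sketch: read the repository URL from a file instead of passing it on the
// command line.
package main

import (
	"fmt"
	"os"
	"strings"
)

func repositoryFromFile(path string) (string, error) {
	buf, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	// drop the trailing newline that most editors add
	return strings.TrimSpace(string(buf)), nil
}

func main() {
	repo, err := repositoryFromFile("repofile.txt")
	fmt.Println(repo, err)
}
```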
Fixes #1458, fixes #2900.
When the backup is interrupted for some reason while the scanner is
still active, this could lead to a deadlock. Interruptions are triggered
by canceling the context object used by both the backup progress UI and
the scanner. It is possible that the context is canceled between the
corresponding check in the scanner and its call to the `ReportTotal`
method of the UI. The latter method sends a message to the UI goroutine.
However, a canceled context will also stop that goroutine, which can
cause the channel send operation to block indefinitely.
This is resolved by adding a `closed` channel which is closed once the
UI goroutine is stopped and serves as an escape hatch for reported UI
updates.
This change covers not just the ReportTotal method but all potentially
affected methods of the progress UI implementation.
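A simplified sketch of the escape-hatch pattern (the real progress UI is more involved):

```go
// Sketch: a closed channel lets reporting methods bail out once the UI
// goroutine has stopped, instead of blocking on a channel send forever.
package main

import (
	"context"
	"fmt"
	"time"
)

type UI struct {
	ch     chan string
	closed chan struct{} // closed when the UI goroutine has stopped
}

func NewUI(ctx context.Context) *UI {
	ui := &UI{ch: make(chan string), closed: make(chan struct{})}
	go func() {
		defer close(ui.closed)
		for {
			select {
			case msg := <-ui.ch:
				fmt.Println(msg)
			case <-ctx.Done():
				return
			}
		}
	}()
	return ui
}

// ReportTotal no longer blocks if the UI goroutine has already exited.
func (ui *UI) ReportTotal(msg string) {
	select {
	case ui.ch <- msg:
	case <-ui.closed:
		// UI already stopped, drop the update instead of deadlocking
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	ui := NewUI(ctx)
	ui.ReportTotal("scanning...")
	cancel()
	time.Sleep(10 * time.Millisecond)
	ui.ReportTotal("total: 42 files") // would deadlock without the closed channel
}
```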
This allows creating multiple repositories with identical chunker
parameters, which is required for deduplication to work when copying
snapshots between different repositories.
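As a small illustration of why this matters (not restic code), two chunkers built with the same polynomial from `github.com/restic/chunker` cut identical data at identical offsets, so the resulting blobs deduplicate:

```go
// Sketch: identical chunker polynomials produce identical chunk boundaries.
package main

import (
	"bytes"
	"crypto/rand"
	"fmt"
	"io"

	"github.com/restic/chunker"
)

func cutPoints(data []byte, pol chunker.Pol) []uint {
	c := chunker.New(bytes.NewReader(data), pol)
	buf := make([]byte, chunker.MaxSize)
	var cuts []uint
	for {
		chunk, err := c.Next(buf)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		cuts = append(cuts, chunk.Start+chunk.Length)
	}
	return cuts
}

func main() {
	data := make([]byte, 8*1024*1024)
	rand.Read(data)

	pol, err := chunker.RandomPolynomial()
	if err != nil {
		panic(err)
	}

	// same data, same polynomial -> same chunk boundaries in both repositories
	fmt.Println(cutPoints(data, pol))
	fmt.Println(cutPoints(data, pol))
}
```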
Previously the directory stats were reported immediately after calling
`SaveDir`. However, as the latter method saves the tree asynchronously,
the stats were still set to their zero value. The stats are now
reported via a callback similar to the one used for the fileSaver.
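A schematic sketch of the callback approach with placeholder types; the actual tree saver differs:

```go
// Sketch: report stats through a completion callback once the asynchronous
// save has finished, instead of reading them while they are still zero.
package main

import (
	"fmt"
	"sync"
)

type ItemStats struct {
	TreeBlobs int
	TreeSize  uint64
}

type treeSaver struct {
	wg sync.WaitGroup
}

// Save stores the tree asynchronously and invokes complete with the final
// stats when done.
func (s *treeSaver) Save(name string, complete func(stats ItemStats)) {
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		stats := ItemStats{TreeBlobs: 1, TreeSize: 123} // placeholder result
		complete(stats)
	}()
}

func main() {
	var s treeSaver
	s.Save("dir", func(stats ItemStats) {
		fmt.Printf("dir saved: %+v\n", stats)
	})
	s.wg.Wait()
}
```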
Add a copy command to copy snapshots between repositories. It allows the user
to specify a destination repository, password, password-file, password-command,
or key-hint to supply the details necessary to open the destination repository.
You need to supply a list of snapshots to copy; snapshots which already exist
in the destination repository will be skipped.
Note that over the network this becomes rather slow, as it needs to read the
blobs, decrypt them using the source key, then encrypt them again using the
destination key before finally writing them out to the destination repository.
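A very rough sketch of that data flow; the types and function names below are illustrative, not restic's API:

```go
// Sketch: copy blobs between repositories, re-encrypting along the way and
// skipping anything the destination already has.
package main

import "context"

type ID [32]byte

type Repo interface {
	LoadBlob(ctx context.Context, id ID) ([]byte, error)  // read and decrypt with this repo's key
	SaveBlob(ctx context.Context, data []byte) (ID, error) // encrypt with this repo's key and write
	HasBlob(id ID) bool
}

func copyBlobs(ctx context.Context, src, dst Repo, ids []ID) error {
	for _, id := range ids {
		if dst.HasBlob(id) {
			continue // already present, deduplicated
		}
		data, err := src.LoadBlob(ctx, id) // decrypt with the source key
		if err != nil {
			return err
		}
		if _, err := dst.SaveBlob(ctx, data); err != nil { // re-encrypt with the destination key
			return err
		}
	}
	return nil
}

func main() {}
```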
Do not lock the repository if the --no-lock global flag is set. This allows
mounting repositories that are archived on a read-only system.
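A minimal sketch of the conditional locking, with an illustrative `lockRepo` stand-in for restic's locking helper:

```go
// Sketch: skip acquiring the repository lock when --no-lock is given.
package main

import (
	"context"
	"fmt"
)

type repoLock struct{}

func lockRepo(ctx context.Context) (*repoLock, error) {
	// creating a lock writes a lock file to the repository, which fails on
	// read-only storage
	return &repoLock{}, nil
}

func openForMount(ctx context.Context, noLock bool) error {
	if !noLock {
		if _, err := lockRepo(ctx); err != nil {
			return err
		}
	}
	// with --no-lock, the repository is opened without writing a lock file
	fmt.Println("mounting, lock skipped:", noLock)
	return nil
}

func main() {
	_ = openForMount(context.Background(), true)
}
```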
Signed-off-by: Sébastien Gross <seb•ɑƬ•chezwam•ɖɵʈ•org>
- Add Open() functionality to dir
- Only access the index for a file's blobs when the file is read
- Implement NodeOpener and put the one-time per-file setup there (see the sketch below)
- Add a comment about locking, as suggested by bazil.org/fuse
=> Thanks to Michael Eischer for suggesting the last two improvements
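A hedged sketch of the `NodeOpener` idea using `bazil.org/fuse`; the types are placeholders, not restic's fuse code:

```go
// Sketch: defer the one-time per-file work (index lookups for the file's
// blobs) to Open, and carry that state in the returned handle.
package main

import (
	"context"

	"bazil.org/fuse"
	"bazil.org/fuse/fs"
)

type file struct {
	// metadata only; blob information is resolved lazily in Open
}

type openFile struct {
	// per-open state: cumulative blob sizes, cached blobs, ...
}

var (
	_ fs.Node       = (*file)(nil)
	_ fs.NodeOpener = (*file)(nil)
)

func (f *file) Attr(ctx context.Context, a *fuse.Attr) error {
	a.Mode = 0o444
	return nil
}

// Open performs the one-time setup and returns a handle carrying that state.
// Any locking assumptions should be documented here, per bazil.org/fuse.
func (f *file) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fs.Handle, error) {
	return &openFile{}, nil
}

func main() {}
```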
In the Google Cloud Storage backend, support specifying access tokens
directly, as an alternative to a credentials file. This is useful when
restic is used non-interactively by some other program that is already
authenticated, and it eliminates the need to store long-lived credentials.
The access token is specified in the GOOGLE_ACCESS_TOKEN environment
variable and takes precedence over GOOGLE_APPLICATION_CREDENTIALS.
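A minimal sketch of that mechanism with the GCS client, using `oauth2.StaticTokenSource`; the helper name is illustrative:

```go
// Sketch: prefer a pre-authenticated access token from the environment over
// the default credentials flow.
package main

import (
	"context"
	"log"
	"os"

	"cloud.google.com/go/storage"
	"golang.org/x/oauth2"
	"google.golang.org/api/option"
)

func newStorageClient(ctx context.Context) (*storage.Client, error) {
	if token := os.Getenv("GOOGLE_ACCESS_TOKEN"); token != "" {
		// use the token directly, no credentials file needed
		ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
		return storage.NewClient(ctx, option.WithTokenSource(ts))
	}
	// fall back to the default flow (GOOGLE_APPLICATION_CREDENTIALS etc.)
	return storage.NewClient(ctx)
}

func main() {
	client, err := newStorageClient(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```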