The SemaphoreBackend now uniformly enforces the limit on concurrent
backend operations. In addition, it unifies the parameter validation.
The List() method no longer uses a semaphore: restic never runs multiple
List operations in parallel anyway.
By managing the semaphore in a wrapper backend, the sections that hold a
semaphore token grow slightly. However, the main bottleneck is IO, so
this shouldn't make much of a difference.
The key insight that enables the SemaphoreBackend is that all of the
complex semaphore handling in `openReader()` still happens within the
original call to `Load()`. Thus, getting and releasing the semaphore
tokens can be refactored to happen directly in `Load()`. This eliminates
the need for wrapping the reader in `openReader()` to release the token.
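The shape of the wrapper is roughly the following (a minimal sketch with
illustrative names, not restic's actual Backend interface): the token is
acquired and released directly in Load(), so nothing handed out to the
caller needs to carry a token with it.

    // Sketch only: the interface is reduced to the single method that
    // matters here.
    package sema

    import (
        "context"
        "io"
    )

    type Backend interface {
        Load(ctx context.Context, name string, fn func(io.Reader) error) error
    }

    type SemaphoreBackend struct {
        Backend
        tokens chan struct{}
    }

    func New(be Backend, limit int) *SemaphoreBackend {
        return &SemaphoreBackend{Backend: be, tokens: make(chan struct{}, limit)}
    }

    // Load holds a token for the whole duration of the wrapped call,
    // including the time spent inside fn.
    func (b *SemaphoreBackend) Load(ctx context.Context, name string, fn func(io.Reader) error) error {
        select {
        case b.tokens <- struct{}{}:
        case <-ctx.Done():
            return ctx.Err()
        }
        defer func() { <-b.tokens }()

        return b.Backend.Load(ctx, name, fn)
    }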
We now check for space that is not reserved for the root user on the
remote host. The check is no longer in a defer block, because there it
would not fire: some change in the surrounding code may have led the
deferred function to capture the wrong err variable.
Fixes #3336.
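For illustration only (this is not the actual restic code), the class of
bug described above looks like this: a deferred check closes over an
outer err that a later shadowing declaration leaves untouched, so the
check can never fire.

    package main

    import (
        "errors"
        "fmt"
    )

    var errNoSpace = errors.New("no space left on device")

    func upload() error {
        var err error
        defer func() {
            // Captures the outer err, which is never assigned below,
            // so this check is silently skipped.
            if errors.Is(err, errNoSpace) {
                fmt.Println("running free-space check")
            }
        }()

        if err := write(); err != nil { // := shadows the outer err
            return err
        }
        return nil
    }

    func write() error { return errNoSpace }

    func main() {
        // Prints the error from write(), but the deferred check stays
        // silent because its err is still nil.
        fmt.Println(upload())
    }

Moving the check out of the defer block sidesteps this class of mistake
entirely.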
The Test method was used in exactly one place: when trying to create a
new repository, it was used to check whether a config file already
exists.
Use a combination of Stat() and IsNotExist() instead.
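The replacement check has roughly the following shape (a sketch; the
interface stands in for the real backend type and its Stat signature is
simplified):

    package sketch

    import "context"

    type Backend interface {
        Stat(ctx context.Context, name string) (int64, error)
        IsNotExist(err error) bool
    }

    // configExists reports whether a config file is already present,
    // i.e. whether the location already contains a repository.
    func configExists(ctx context.Context, be Backend) (bool, error) {
        _, err := be.Stat(ctx, "config")
        if err != nil {
            if be.IsNotExist(err) {
                return false, nil
            }
            return false, err
        }
        return true, nil
    }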
The ioutil functions have been deprecated since Go 1.16 and only wrap
another library function. Thus, call the underlying functions directly.
This commit only mechanically replaces the function calls.
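For reference, the typical one-to-one replacements look like this (a
small, non-exhaustive sketch):

    //   ioutil.ReadAll(r)          ->  io.ReadAll(r)
    //   ioutil.ReadFile(name)      ->  os.ReadFile(name)
    //   ioutil.WriteFile(n, b, m)  ->  os.WriteFile(n, b, m)
    //   ioutil.TempFile(dir, pat)  ->  os.CreateTemp(dir, pat)
    //   ioutil.NopCloser(r)        ->  io.NopCloser(r)
    package sketch

    import (
        "io"
        "os"
    )

    func readConfig(name string) ([]byte, error) {
        // Previously: ioutil.ReadFile(name)
        return os.ReadFile(name)
    }

    func drain(r io.Reader) ([]byte, error) {
        // Previously: ioutil.ReadAll(r)
        return io.ReadAll(r)
    }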
This is especially useful if ssh asks for a password, or if closing the
initial connection returns an error due to a problematic server
implementation.
pkg/sftp.Client.MkdirAll(d) does a Stat to determine whether d exists
and is a directory, then makes a recursive call to create the parent, so
the calls for data/?? each take three round trips. Doing a Mkdir first
should
eliminate two round trips for 255/256 data directories as well as all
but one of the top-level directories.
Also, we can do all of the calls concurrently. This may reintroduce some
of the Stat calls when multiple goroutines try to create the same
parent, but at the default number of connections, that should not be
much of a problem.
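A sketch of the approach, with a small interface standing in for the
*sftp.Client methods that are used and errgroup providing the
concurrency (the directory list is abbreviated and illustrative):

    package sketch

    import (
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    type dirMaker interface {
        Mkdir(path string) error
        MkdirAll(path string) error
    }

    func createDirs(c dirMaker, base string) error {
        dirs := []string{base, base + "/data", base + "/snapshots"}
        for i := 0; i < 256; i++ {
            dirs = append(dirs, fmt.Sprintf("%s/data/%02x", base, i))
        }

        var g errgroup.Group
        for _, d := range dirs {
            d := d
            g.Go(func() error {
                // Mkdir is a single round trip; only fall back to the
                // more expensive MkdirAll (Stat + recursive parent
                // creation) if it fails, e.g. because the parent does
                // not exist yet or the directory is already there.
                if err := c.Mkdir(d); err != nil {
                    return c.MkdirAll(d)
                }
                return nil
            })
        }
        return g.Wait()
    }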
... called backend/sema. I resisted the temptation to call the main
type sema.Phore. Also, semaphores are now passed by value to skip a
level of indirection when using them.
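A value-typed semaphore in such a package could look roughly like this
(a sketch, not the exact restic code); the struct only wraps a channel,
so copies share the same underlying channel and passing it by value is
cheap and safe:

    package sema

    import "context"

    type Semaphore struct {
        ch chan struct{}
    }

    func New(capacity uint) Semaphore {
        return Semaphore{ch: make(chan struct{}, capacity)}
    }

    // GetToken blocks until a token is available or ctx is canceled.
    func (s Semaphore) GetToken(ctx context.Context) error {
        select {
        case s.ch <- struct{}{}:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    // ReleaseToken returns a previously acquired token.
    func (s Semaphore) ReleaseToken() {
        <-s.ch
    }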
github.com/pkg/errors is no longer getting updates, because Go 1.13
went with the more flexible errors.{As,Is} functions. Use those instead:
errors from pkg/errors already support the Unwrap interface used by 1.13
error handling. Also (a short sketch of the resulting patterns follows
the list):
* Check for io.EOF with a straight ==. That value should not be wrapped,
and the chunker (whose error is checked in the cases changed) does not
wrap it.
* Give custom Error methods pointer receivers, so there's no ambiguity
when type-switching since the value type will no longer implement error.
* Make restic.ErrAlreadyLocked private, and rename it to
alreadyLockedError to match the stdlib convention that error type
names end in Error.
* Same with rest.ErrIsNotExist => rest.notExistError.
* Make s3.Backend.IsAccessDenied a private function.
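A sketch of the patterns mentioned above, with made-up type and
function names:

    package sketch

    import (
        "errors"
        "fmt"
        "io"
    )

    // alreadyLockedError follows the stdlib convention that error type
    // names end in "Error"; the pointer receiver on Error means only
    // *alreadyLockedError implements error, so type switches stay
    // unambiguous.
    type alreadyLockedError struct {
        otherLock string
    }

    func (e *alreadyLockedError) Error() string {
        return fmt.Sprintf("repository is already locked by %v", e.otherLock)
    }

    func classify(err error) string {
        // io.EOF is a sentinel that is not wrapped by the producer
        // checked here, so a straight comparison is sufficient.
        if err == io.EOF {
            return "end of stream"
        }

        var lockErr *alreadyLockedError
        if errors.As(err, &lockErr) {
            return "locked by " + lockErr.otherLock
        }
        return "other error"
    }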
Create a temporary file with a sufficiently random name to essentially
avoid any chance of conflicts. Once the upload has finished, remove the
temporary suffix. Interrupted uploads will thus be ignored by restic.
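A sketch of the scheme, using the local filesystem for simplicity (the
suffix format and helper names are made up):

    package sketch

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    func saveFile(dir, name string, rd io.Reader) error {
        buf := make([]byte, 16)
        if _, err := rand.Read(buf); err != nil {
            return err
        }
        tmp := filepath.Join(dir, fmt.Sprintf("%s-tmp-%s", name, hex.EncodeToString(buf)))

        f, err := os.Create(tmp)
        if err != nil {
            return err
        }
        if _, err := io.Copy(f, rd); err != nil {
            f.Close()
            return err
        }
        if err := f.Close(); err != nil {
            return err
        }

        // Only a completed upload is moved to its final name; leftovers
        // from interrupted uploads keep the temporary suffix.
        return os.Rename(tmp, filepath.Join(dir, name))
    }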
This enables the backends to request the calculation of a
backend-specific hash. For the currently supported backends this will
always be MD5. The hash calculation happens as early as possible; for
pack files this is during assembly of the pack file. That way the hash
even captures corruption of the temporary pack file on disk.
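The idea, sketched with illustrative names: route everything written to
the temporary pack file through an MD5 hash at write time, so the final
hash covers exactly the bytes that ended up on disk.

    package sketch

    import (
        "crypto/md5"
        "hash"
        "io"
    )

    type hashingWriter struct {
        w io.Writer
        h hash.Hash
    }

    func newHashingWriter(w io.Writer) *hashingWriter {
        h := md5.New()
        // Every Write goes to both the underlying writer and the hash.
        return &hashingWriter{w: io.MultiWriter(w, h), h: h}
    }

    func (hw *hashingWriter) Write(p []byte) (int, error) {
        return hw.w.Write(p)
    }

    // Sum returns the hash of everything written so far.
    func (hw *hashingWriter) Sum() []byte {
        return hw.h.Sum(nil)
    }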
Bugs in the error handling while uploading a file to the backend could
cause incomplete files, e.g. https://github.com/golang/go/issues/42400,
which could affect the local backend.
Proactively add sanity checks which will treat an upload as failed if
the reported upload size does not match the actual file size.
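Such a check could look roughly like this for a file-based backend (a
sketch; names and the exact comparison points are illustrative):

    package sketch

    import (
        "fmt"
        "io"
        "os"
    )

    func writeChecked(f *os.File, rd io.Reader, expected int64) error {
        n, err := io.Copy(f, rd)
        if err != nil {
            return err
        }
        if n != expected {
            return fmt.Errorf("wrote %d bytes instead of the expected %d bytes", n, expected)
        }

        // Also cross-check the size the filesystem reports for the file.
        fi, err := f.Stat()
        if err != nil {
            return err
        }
        if fi.Size() != expected {
            return fmt.Errorf("file size is %d, expected %d", fi.Size(), expected)
        }
        return nil
    }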
If our ssh process has died, not only the next, but all subsequent
calls to clientError() should indicate the error.
restic output with this change applied, when the ssh process is killed
with "kill -9":
Save(<data/afb68adbf9>) returned error, retrying after 253.661803ms: Write: failed to send packet header: write |1: file already closed
Save(<data/afb68adbf9>) returned error, retrying after 580.752212ms: ssh command exited: signal: killed
Save(<data/afb68adbf9>) returned error, retrying after 790.150468ms: ssh command exited: signal: killed
Save(<data/afb68adbf9>) returned error, retrying after 1.769595051s: ssh command exited: signal: killed
[...]
error in cleanup handler: ssh command exited: signal: killed
Before this patch:
Save(<data/de698d934f>) returned error, retrying after 252.84163ms: Write: failed to send packet header: write |1: file already closed
Save(<data/de698d934f>) returned error, retrying after 660.236963ms: OpenFile: failed to send packet header: write |1: file already closed
Save(<data/de698d934f>) returned error, retrying after 568.049909ms: OpenFile: failed to send packet header: write |1: file already closed
Save(<data/de698d934f>) returned error, retrying after 2.428813824s: OpenFile: failed to send packet header: write |1: file already closed
[...]
error in cleanup handler: failed to send packet header: write |1: file already closed
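A sketch of the sticky-error behaviour described above, simplified
relative to the real sftp backend code: the exit status of the ssh
process is stored once and returned on every later call.

    package sketch

    import (
        "os/exec"
        "sync"
    )

    type sshConn struct {
        cmd *exec.Cmd

        mu        sync.Mutex
        waitDone  bool
        waitError error
    }

    // wait records the exit status of the ssh process exactly once; it
    // is expected to run in its own goroutine.
    func (c *sshConn) wait() {
        err := c.cmd.Wait()
        c.mu.Lock()
        defer c.mu.Unlock()
        c.waitDone = true
        c.waitError = err
    }

    // clientError returns the stored exit error on every call after the
    // ssh process has died, not just on the first one.
    func (c *sshConn) clientError() error {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.waitDone {
            return c.waitError
        }
        return nil
    }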