This commit changes the internal file system implementation for reading
data from stdin: it now returns an error when no bytes could be read. I
think it's worth failing in this case: the user instructed restic to
read some data from stdin, and no data was read at all. Maybe it was in
a pipe and some earlier stage failed.
See #2135 for a short discussion.
When restic reads the backup from stdin, the number of bytes processed
was always displayed as zero. The reason is that the UI for the archiver
uses the total bytes as returned by the scanner, which is zero for
stdin. So instead we keep track of the real number of bytes processed
and print that at the end.
Closes #2136
Make `restic forget --keep-within` accept time ranges measured in hours, and
choose accordingly which snapshots to keep and which to forget. Add relevant tests.
Don't create fileInfo structs for empty files; this saves memory. It
also avoids an extra serial scan of all fileInfo structs, which should
make restore faster and more consistent.
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
Reworked the restore error callback to use the file location
path instead of the much heavier Node. This reduced restore
memory usage by as much as 50% in some of my tests.
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
* uses less memory as the common prefix is only stored once
* stepping stone for a simpler error callback API, which
will allow further memory footprint reduction
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
When the scanner is slower than the actual backup, the tomb cancels the
context passed to Scan(), which then returns ctx.Err(). In the end, the
main function prints an error message that is not helpful ("Context
cancelled") and exits with an error code although no error occurred.
The code now ignores the error in the context and just uses it for
cancellation. The scanner is not supposed to return an error anyway.
Closes #1978
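
A minimal sketch of the idea, with illustrative names rather than restic's actual API:

```go
// Run the scanner purely for progress estimation; the context is only
// used for cancellation, so ctx.Err() after the archiver finishes first
// is expected and deliberately not reported as a failure.
func runScanner(ctx context.Context, scan func(context.Context) error) {
	if err := scan(ctx); err != nil {
		// Ignore: the scanner is not supposed to return real errors,
		// and cancellation here is harmless.
		_ = err
	}
}
```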
Sometimes, users run restic without retaining the local cache
directories. This was reported several times in the past.
Restic will now print a message whenever a new cache directory is
created from scratch (i.e. it did not exist before), so users have a
chance to recognize when the cache is not kept between different runs of
restic.
This commit adds a command called `self-update` which downloads the
latest released version of restic from GitHub and replaces the current
binary with it. It does not rely on any external program (so it'll work
everywhere), but still verifies the GPG signature using the embedded GPG
public key.
By default, the `self-update` command is hidden behind the `selfupdate`
build tag, which is only set when restic is built using `build.go`. The
reason for this is that downstream distributions will then not include
the command by default, so users are encouraged to use the
platform-specific distribution mechanism.
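
For illustration, the mechanism looks roughly like this (the file contents and the registration call are assumptions, not restic's exact code):

```go
// +build selfupdate

package main

// This file is only compiled when the "selfupdate" build tag is set,
// e.g. with: go build -tags selfupdate
// build.go sets the tag automatically; plain builds omit the command.
func init() {
	cmdRoot.AddCommand(cmdSelfUpdate) // hypothetical cobra registration
}
```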
Adds a SelectByName method to the archiver and scanner which only requires
the filename as input, and can thus be run before calling lstat on the
file. This can speed up scanning significantly if a lot of filename excludes
are used.
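
A rough sketch of the shape of this (type and function names are illustrative):

```go
// SelectByNameFunc decides from the path alone whether an item should be
// considered at all, so it can run before the comparatively expensive lstat.
type SelectByNameFunc func(item string) bool

// SelectFunc still receives the lstat result for filters that need metadata.
type SelectFunc func(item string, fi os.FileInfo) bool

func shouldSave(item string, byName SelectByNameFunc, sel SelectFunc) (bool, error) {
	if !byName(item) {
		return false, nil // excluded by name, lstat skipped entirely
	}
	fi, err := os.Lstat(item)
	if err != nil {
		return false, err
	}
	return sel(item, fi), nil
}
```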
Accessing beyond the end of the file now removes the file from the cache
because it is assumed to be truncated. Usually, this means that the data
is fetched directly from the backend instead.
traverseTree() was meant to call enterDir() whenever a directory is
selected for restore, either explicitly or implicitly (=contains a file
which is to be restored). After restoring a file, leaveDir() is called
in reverse order for all intermediate directories so that the metadata
can be restored.
When a directory is selected implicitly, the metadata for it is
restored. This is different from the previous restorer behavior, which
created implicitly selected intermediate directories with permissions
0700 (only user can read/write it).
This commit changes the behavior back to the old one: only when a directory
is explicitly selected for restore are enterDir()/leaveDir() called for it.
Otherwise, only visitNode() is called, so visitNode() needs to make
sure the parent directory exists. If the directory is explicitly
included, leaveDir() will then restore the metadata correctly.
When we decide to change the behavior (restore metadata for all
intermediate directories, even if selected implicitly), we should do
that in the selection functions, not here.
This finally resolves #1870
This fixes #1833, which consists of two different bugs:
* The `defer` in `cacheFile()` may remove a channel from the
`inProgress` map although it is not responsible for downloading the
file
* If the download fails, goroutines waiting for the file to be cached
assumed that the file was there; there was no way to signal the
error.
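
One way to address both problems, sketched with simplified types (the actual cache code differs in detail): only the goroutine that registered the channel removes it, and waiters re-check that the file really made it into the cache:

```go
func (c *Cache) cacheFile(ctx context.Context, h Handle) error {
	c.mu.Lock()
	if ch, ok := c.inProgress[h]; ok {
		c.mu.Unlock()
		<-ch // another goroutine is downloading; wait for it
		// Do not assume success: the download may have failed, so check
		// whether the file is actually present before using it.
		if !c.has(h) {
			return errors.New("file could not be cached")
		}
		return nil
	}
	ch := make(chan struct{})
	c.inProgress[h] = ch
	c.mu.Unlock()

	err := c.download(ctx, h)

	// Only the goroutine that registered the channel removes it, so a
	// waiter can never delete an entry it is not responsible for.
	c.mu.Lock()
	delete(c.inProgress, h)
	c.mu.Unlock()
	close(ch)

	return err
}
```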
When the archiver is faster than the scanner, restic deadlocks. This
commit adds a `finished` channel to the struct in `ui/backup.go` so that
scanner results are ignored when the archiver is already finished.
Closes #1834
If our ssh process has died, not only the next, but all subsequent
calls to clientError() should indicate the error.
Restic output with this patch, when the ssh process is killed with "kill -9":
Save(<data/afb68adbf9>) returned error, retrying after 253.661803ms: Write: failed to send packet header: write |1: file already closed
Save(<data/afb68adbf9>) returned error, retrying after 580.752212ms: ssh command exited: signal: killed
Save(<data/afb68adbf9>) returned error, retrying after 790.150468ms: ssh command exited: signal: killed
Save(<data/afb68adbf9>) returned error, retrying after 1.769595051s: ssh command exited: signal: killed
[...]
error in cleanup handler: ssh command exited: signal: killed
Before this patch:
Save(<data/de698d934f>) returned error, retrying after 252.84163ms: Write: failed to send packet header: write |1: file already closed
Save(<data/de698d934f>) returned error, retrying after 660.236963ms: OpenFile: failed to send packet header: write |1: file already closed
Save(<data/de698d934f>) returned error, retrying after 568.049909ms: OpenFile: failed to send packet header: write |1: file already closed
Save(<data/de698d934f>) returned error, retrying after 2.428813824s: OpenFile: failed to send packet header: write |1: file already closed
[...]
error in cleanup handler: failed to send packet header: write |1: file already closed
This commit changes how the worker goroutines for saving e.g. blobs
interact. Before, it was possible to get stuck sending an instruction to
archive a file or dir when no worker goroutines were available any more.
This commit introduces a `done` channel for each of the worker pools,
which is set to the channel returned by `tomb.Dying()`, so it is closed
when the first worker returns an error.
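
The pattern in outline (simplified; the job type and function name are illustrative):

```go
// Sending a work item must not block forever once the pool is dying.
// done is the channel returned by tomb.Dying(), which is closed as soon
// as the first worker returns an error.
func enqueue(jobs chan<- job, j job, done <-chan struct{}) error {
	select {
	case jobs <- j:
		return nil
	case <-done:
		return errors.New("worker pool is shutting down")
	}
}
```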
This commit changes the archiver so that low-level errors saving data to
the repo are returned to the caller (instead of being handled by the
error callback function). This correctly bubbles up errors like a full
temp file system, makes restic abort early, and makes all other worker
goroutines exit.
This now keeps the cursor at the first column of the first status line
so that messages printed to stdout or stderr by some other part of the
program will still be visible. The message will overwrite the status
lines, but those are easily reprinted on the next status update.
The previous code tried to be as efficient as possible and only do a
single open() on an item to save, and then fstat() on the fd to find out
what the item is (file, dir, other). For normal files, it would then
start reading the data without opening the file again, so it could not
be exchanged for e.g. a symlink.
This behavior starts the watchdog on my machine when /dev is saved
with restic, and after a few seconds, the machine reboots.
This commit reverts the behavior to the strategy the old archiver code
used: run lstat(), then decide what to do. For normal files, open the
file and then run fstat() on the fd to verify it's still a normal file,
then start reading the data.
The downside is that for normal files we now do two stat() calls
(lstat+fstat) instead of only one. On the upside, this does not start
the watchdog. :)
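
In outline, the strategy looks like this (a simplified sketch, not the archiver's exact code):

```go
// Decide what to do based on lstat first; only regular files are opened,
// so device nodes in /dev are never touched.
func openRegularFile(name string) (*os.File, error) {
	fi, err := os.Lstat(name)
	if err != nil {
		return nil, err
	}
	if !fi.Mode().IsRegular() {
		return nil, fmt.Errorf("%v is not a regular file", name)
	}
	f, err := os.Open(name)
	if err != nil {
		return nil, err
	}
	// The item may have been exchanged between lstat and open, so verify
	// the open fd again with fstat before reading any data.
	fi2, err := f.Stat()
	if err != nil || !fi2.Mode().IsRegular() {
		_ = f.Close()
		return nil, fmt.Errorf("%v changed type while opening", name)
	}
	return f, nil
}
```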
Previously, the function read from ARGV[1] (hardcoded) rather than the
value passed to it (the command-line argument as it exists in globalOptions).
Resolves #1745
This adds two implementations of the new `FS` interface: One for the local
file system (`Local`) and one for a single file read from an
`io.Reader` (`Reader`).
This change removes the hardcoded Google auth mechanism for the GCS
backend, instead using Google's provided client library to discover and
generate credential material.
Google recommend that client libraries use their common auth mechanism
in order to authorise requests against Google services. Doing so means
you automatically support various types of authentication, from the
standard GOOGLE_APPLICATION_CREDENTIALS environment variable to making
use of Google's metadata API if running within Google Container Engine.
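
Roughly, credential discovery via the client library looks like this (a sketch; restic additionally wires the token source into its own HTTP transport):

```go
import (
	"context"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
)

// tokenSource discovers credentials the standard way: the
// GOOGLE_APPLICATION_CREDENTIALS file, gcloud's well-known file, or the
// metadata service when running inside GCE/GKE.
func tokenSource(ctx context.Context) (oauth2.TokenSource, error) {
	return google.DefaultTokenSource(ctx,
		"https://www.googleapis.com/auth/devstorage.read_write")
}
```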
Before this change, restic would attempt to JSON-decode the error
message, resulting in confusing `Decode: invalid character 'B' looking
for beginning of value` messages. Afterwards, it will return `List
failed, server response: 400 Bad Request (400)`.
This commit fixes a bug introduced in
e9ea268847: When an invalid lock is
encountered (e.g. if the file is empty), the code used to ignore that,
but now returns the error.
Now, invalid files are ignored for the normal lock check, and removed
when `restic unlock --remove-all` is run.
Closes #1652
As mentioned in issue [#1560](https://github.com/restic/restic/pull/1560#issuecomment-364689346)
this changes the signature for `backend.Save()`. It now takes a
parameter of interface type `RewindReader`, so that the backend
implementations or our `RetryBackend` middleware can reset the reader to
the beginning and then retry an upload operation.
The `RewindReader` interface also provides a `Length()` method, which is
used in the backend to get the size of the data to be saved. This
removes several ugly hacks we had to do to pull the size back out of the
`io.Reader` passed to `Save()` before. In the `s3` and `rest` backends
this is actively used.
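
From the description above, the interface is roughly the following (the real definition may carry additional methods):

```go
// RewindReader wraps an io.Reader whose beginning can be revisited, so a
// failed upload can simply be retried from the start.
type RewindReader interface {
	io.Reader

	// Rewind resets the reader so the next Read starts at the beginning
	// of the data again.
	Rewind() error

	// Length returns the total number of bytes that will be read, which
	// backends can use to set the upload size up front.
	Length() int64
}
```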
This is a bug fix: before, when the worker function fn in List() of the
RetryBackend returned an error, the operation was retried with the next
file. This is not consistent with the documentation; the intention was
that when fn returns an error, this error is passed on to the caller and
the List() operation is aborted. Only errors happening on the underlying
backend are retried.
The error leads to restic ignoring exclusive locks that are present in
the repo, so it may happen that a new backup is written which references
data that is going to be removed by a concurrently running `prune`
operation.
The bug was reported by a user here:
https://forum.restic.net/t/restic-backup-returns-0-exit-code-when-already-locked/484
This pulls the header reads into a function that works in terms of the
number of records requested. This preserves the existing logic of
initially reading 15 records and then falling back if that fails.
In the event of a header with more than 15 records, it will read all
records, including the already-seen final 15 records.
Before, all backend implementations were required to return an error if
the file that is to be written already exists in the backend. For most
backends, that means making a request (e.g. via HTTP) and returning an
error when the file already exists.
This is not accurate: the file could have been created between the HTTP
request testing for it and the start of the write. In addition, apart from
the `config` file in the repo, all other file names have pseudo-random
names with a very very low probability of a collision. And even if a
file name is written again, the way the restic repo is structured this
just means that the same content is placed there again, which is not a
problem, just not very efficient.
So, this commit relaxes the requirement to return an error when the file
in the backend already exists, which allows reducing the number of API
requests and thereby the latency for remote backends.
During the development of #1524 I discovered that the Google Cloud
Storage backend did not yet use the HTTP transport, so things such as
bandwidth limiting did not work. This commit does the necessary magic to
make the GS library use our HTTP transport.
A user discovered[1] that when the backup finishes during the upload of
an intermediate index, the upload is cancelled and the index never fully
saved, but the snapshot is saved and the backup finalizes without an
error. This led to a situation where a snapshot references data that is
contained in the repo, but not referenced in any index, leading to
strange error messages.
This commit uses a dedicated context to signal the intermediate index
uploading routine to terminate after the last index has been uploaded.
This way, an upload running when the backup finishes is completed before
the routine terminates and the snapshot is saved.
[1] https://forum.restic.net/t/error-loading-tree-check-prune-and-forget-gives-error-b2-backend/406
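
Schematically (names and the interval are illustrative):

```go
// The uploader runs on its own context, which the caller cancels only
// after the final index has been saved, so an upload that is in flight
// when the backup finishes is allowed to complete first.
func runIndexUploader(ctx context.Context, interval time.Duration, save func() error) error {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			if err := save(); err != nil {
				return err
			}
		case <-ctx.Done():
			return nil
		}
	}
}
```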
The logging in these functions doubles the time they take to execute.
However, it is only really useful on failures, which are better
reported by the calling functions.
benchmark                                            old ns/op   new ns/op   delta
BenchmarkMasterIndexLookupSingleIndex-6                    897         395   -55.96%
BenchmarkMasterIndexLookupMultipleIndex-6                 2001        1090   -45.53%
BenchmarkMasterIndexLookupSingleIndexUnknown-6             492         215   -56.30%
BenchmarkMasterIndexLookupMultipleIndexUnknown-6          1649         912   -44.69%

benchmark                                            old allocs  new allocs  delta
BenchmarkMasterIndexLookupSingleIndex-6                      9           1   -88.89%
BenchmarkMasterIndexLookupMultipleIndex-6                   19           1   -94.74%
BenchmarkMasterIndexLookupSingleIndexUnknown-6               6           0   -100.00%
BenchmarkMasterIndexLookupMultipleIndexUnknown-6            16           0   -100.00%

benchmark                                            old bytes   new bytes   delta
BenchmarkMasterIndexLookupSingleIndex-6                    160          96   -40.00%
BenchmarkMasterIndexLookupMultipleIndex-6                  240          96   -60.00%
BenchmarkMasterIndexLookupSingleIndexUnknown-6              48           0   -100.00%
BenchmarkMasterIndexLookupMultipleIndexUnknown-6           128           0   -100.00%
Index.Has() is faster than Index.Lookup() for checking whether a blob exists
in the index. As the returned data is never used, this avoids a ton
of allocations.
When looking up a blob in the master index, with several
indexes present in the master index, a significant amount of time
is spent generating errors for each failed lookup. However, these
errors are often only used to check if a blob is present, and their contents
are not inspected, making the overhead of creating the error useless.
Instead, change Index.Lookup (and Index.LookupSize) to return a boolean
denoting whether the blob was found instead of an error. Also change
all the calls to these functions to handle the new function signature.
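
A toy illustration of the new shape of the API (not restic's real types):

```go
// index maps blob ids to their sizes.
type index struct {
	sizes map[string]uint
}

// LookupSize reports the size and whether the blob is known; a miss no
// longer allocates and formats an error.
func (idx *index) LookupSize(id string) (size uint, found bool) {
	size, found = idx.sizes[id]
	return size, found
}

// Callers switch from checking an error to checking the boolean:
//
//	if size, found := idx.LookupSize(id); found { ... }
```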
benchmark                                            old ns/op   new ns/op   delta
BenchmarkMasterIndexLookupSingleIndex-6                    820         897   +9.39%
BenchmarkMasterIndexLookupMultipleIndex-6                12821        2001   -84.39%
BenchmarkMasterIndexLookupSingleIndexUnknown-6            5378         492   -90.85%
BenchmarkMasterIndexLookupMultipleIndexUnknown-6         17026        1649   -90.31%

benchmark                                            old allocs  new allocs  delta
BenchmarkMasterIndexLookupSingleIndex-6                      9           9   +0.00%
BenchmarkMasterIndexLookupMultipleIndex-6                   59          19   -67.80%
BenchmarkMasterIndexLookupSingleIndexUnknown-6              22           6   -72.73%
BenchmarkMasterIndexLookupMultipleIndexUnknown-6            72          16   -77.78%

benchmark                                            old bytes   new bytes   delta
BenchmarkMasterIndexLookupSingleIndex-6                    160         160   +0.00%
BenchmarkMasterIndexLookupMultipleIndex-6                 3200         240   -92.50%
BenchmarkMasterIndexLookupSingleIndexUnknown-6            1232          48   -96.10%
BenchmarkMasterIndexLookupMultipleIndexUnknown-6          4272         128   -97.00%
When setting up the index used for benchmarking, use math/rand instead of
crypto/rand, since the generated ids neither need to be evenly distributed
nor secure against guessing. As such, use a different random id
function (only available during tests) that uses math/rand instead.
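
For illustration (a hypothetical helper, not restic's actual test code):

```go
// newRandomTestID fills an ID from math/rand; good enough for benchmark
// setup, where neither uniformity nor unpredictability matters.
func newRandomTestID(rng *rand.Rand) [32]byte {
	var id [32]byte
	rng.Read(id[:]) // math/rand's Read never returns an error
	return id
}
```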
Load pack header length and 15 header entries with single backend
request. This eliminates a separate header Load() request for most pack
files and significantly improves index.New() performance.
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
This reduces the chance of duplicate blobs, otherwise the tests fail
(the contents of a blob now depend on a pseudo-random number instead of
the size, since sizes may be duplicated).
Use result of single repository.List() to find both missing and
orphaned data packs. For 500GB repository this eliminates ~100K
repository.Test() calls and improves check time by more than 30 minutes in my
environment (~45min before this change and ~7min after).
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
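
The idea in outline (a sketch with simplified types): one List() pass yields the set of packs actually stored, the index yields the set of referenced packs, and comparing the two sets in both directions finds everything:

```go
// comparePacks finds missing packs (referenced by the index but not
// stored) and orphaned packs (stored but not referenced) from a single
// listing, without issuing a Test() call per pack file.
func comparePacks(stored, referenced map[string]struct{}) (missing, orphaned []string) {
	for id := range referenced {
		if _, ok := stored[id]; !ok {
			missing = append(missing, id)
		}
	}
	for id := range stored {
		if _, ok := referenced[id]; !ok {
			orphaned = append(orphaned, id)
		}
	}
	return missing, orphaned
}
```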
This is a follow-up on fb9729fdb9, which
runs the `ssh` in its own process group and selects that process group
as the foreground group. After the sftp connection is established,
restic switches back to the previous foreground process group.
This allows `ssh` to prompt for the password, but it won't receive
the interrupt signal (SIGINT, ^C) later on, because it is not in the
foreground process group any more, allowing a clean tear down.
When backing up several million files (>14M tested here) with few changes,
a large amount of time is spent failing to find an id in an index and creating
an error to signify this. Since this is checked using the Has method,
which doesn't use this error, the time spent creating the error is wasted.
Instead, directly check if the given id and type are present in the index.
This also avoids reporting all the packs containing this blob, further
reducing cpu usage.
We previously added code to fix the issue of chaining
credentials; this is no longer needed since
upstream minio-go already includes the relevant change.