The method only ever receives *hashing.Writers, which don't implement
io.Closer. These come from packerManager.findPacker and have their
actual writers closed in Repository.savePacker. Moving the closing logic
to hashing.Writer results in "file already closed" errors.
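A minimal sketch of the ownership issue, using simplified stand-ins rather than restic's actual hashing and pack types:

    package main

    import (
        "crypto/sha256"
        "hash"
        "io"
        "os"
    )

    // hashingWriter is a simplified stand-in for hashing.Writer: it wraps an
    // io.Writer and deliberately has no Close method of its own.
    type hashingWriter struct {
        w io.Writer
        h hash.Hash
    }

    func (hw *hashingWriter) Write(p []byte) (int, error) {
        hw.h.Write(p) // hash.Hash.Write never returns an error
        return hw.w.Write(p)
    }

    func main() {
        f, err := os.CreateTemp("", "pack-")
        if err != nil {
            panic(err)
        }
        hw := &hashingWriter{w: f, h: sha256.New()}
        hw.Write([]byte("blob data"))

        // The owner of the underlying file (savePacker in restic) closes it.
        f.Close()
        // If hashingWriter gained a Close method that also closed f, that
        // second Close would fail with "file already closed".
    }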
When loading a blob, restic first looks up pack files containing the
blob. To avoid unnecessary work, an already cached pack file is preferred.
However, if there is only a single pack file to choose from (which is
the normal case) sorting the one-element list won't change anything.
Therefore avoid the unnecessary cache check in that case.
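A hypothetical sketch of the short-circuit (function and parameter names are illustrative, not restic's exact API):

    // sortCachedPacksFirst moves cached packs to the front of the candidate
    // list, but skips the cache lookup entirely for a single candidate.
    func sortCachedPacksFirst(cacheHas func(id string) bool, packs []string) []string {
        if len(packs) < 2 {
            // The common case: one pack file, nothing to reorder.
            return packs
        }

        cached := make([]string, 0, len(packs))
        remote := make([]string, 0, len(packs))
        for _, id := range packs {
            if cacheHas(id) {
                cached = append(cached, id)
            } else {
                remote = append(remote, id)
            }
        }
        return append(cached, remote...)
    }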
The previous benchmark spent much of its time allocating RNGs and
generating too many random numbers. It now spends 90% of its time
hashing and half of the rest writing to files.
name             old time/op    new time/op    delta
PackerManager-8     319ms ± 1%     247ms ± 1%  -22.48%  (p=0.000 n=20+18)
name             old speed      new speed      delta
PackerManager-8   143MB/s ± 1%   213MB/s ± 1%  +48.63%  (p=0.000 n=10+18)
name             old alloc/op   new alloc/op   delta
PackerManager-8     635kB ± 0%      92kB ± 0%  -85.48%  (p=0.000 n=10+19)
name             old allocs/op  new allocs/op  delta
PackerManager-8     1.64k ± 0%     1.43k ± 0%  -12.76%  (p=0.000 n=10+20)
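A hedged sketch of the resulting benchmark shape; saveBlob is a placeholder stand-in for the code under test, whose cost is dominated by hashing:

    package packer

    import (
        "crypto/sha256"
        "math/rand"
        "testing"
    )

    // saveBlob stands in for the real work performed per iteration.
    func saveBlob(p []byte) {
        _ = sha256.Sum256(p)
    }

    func BenchmarkPackerManager(b *testing.B) {
        // One RNG and one buffer, allocated once instead of per iteration.
        rnd := rand.New(rand.NewSource(23))
        buf := make([]byte, 1<<20)
        rnd.Read(buf)

        b.SetBytes(int64(len(buf)))
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            saveBlob(buf)
        }
    }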
The archiver's TestMetadataChanged incorrectly clears the extended attributes
from the expected metadata of the temporary file. This is wrong on an
SELinux-enabled filesystem, as the kernel will automatically add an SELinux
label. However, since ExtendedAttributes{} != ExtendedAttributes{nil}, we
still need to clear them if no attributes are found.
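The nil-versus-empty distinction in question, illustrated with go-cmp and a simplified attribute type (not restic's actual one):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    // ExtendedAttribute is a simplified stand-in for restic's attribute type.
    type ExtendedAttribute struct {
        Name  string
        Value []byte
    }

    func main() {
        var none []ExtendedAttribute   // nil slice
        empty := []ExtendedAttribute{} // non-nil, zero-length slice

        // go-cmp treats a nil slice and an empty slice as different, so the
        // test still has to clear the expected attributes when none are found.
        fmt.Println(cmp.Equal(none, empty)) // false
    }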
The pool was used improperly, causing more allocations to be
performed than without it.
name              old time/op    new time/op    delta
SaveAndEncrypt-8     36.8ms ± 2%    36.9ms ± 2%     ~     (p=0.218 n=10+10)
name              old speed      new speed      delta
SaveAndEncrypt-8    114MB/s ± 2%   114MB/s ± 2%     ~     (p=0.218 n=10+10)
name              old alloc/op   new alloc/op   delta
SaveAndEncrypt-8     21.1MB ± 0%    21.0MB ± 0%  -0.44%  (p=0.000 n=10+10)
name              old allocs/op  new allocs/op  delta
SaveAndEncrypt-8       79.0 ± 0%      77.0 ± 0%  -2.53%  (p=0.000 n=10+10)
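For illustration only (this may not be the exact misuse fixed here): one common way a sync.Pool backfires is storing slice values directly, which allocates on every Put because the slice header escapes into an interface value. Storing a pointer to the slice avoids that extra allocation:

    package pool

    import "sync"

    var bufPool = sync.Pool{
        New: func() interface{} {
            buf := make([]byte, 0, 1<<16)
            return &buf // store *[]byte, not []byte, to avoid an allocation per Put
        },
    }

    // withBuffer hands a reusable buffer to f and returns it to the pool.
    func withBuffer(f func([]byte)) {
        bp := bufPool.Get().(*[]byte)
        f((*bp)[:0])
        bufPool.Put(bp)
    }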
The `dump`, `find`, `forget`, `ls`, `mount`, `restore`, `snapshots`,
`stats` and `tag` commands will now take into account multiple
`--host` and `-H` flags.
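For example (the hostnames are placeholders), snapshots from two hosts can now be listed with:

    restic snapshots --host alice --host bob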
A much simpler implementation that guarantees each required pack
is downloaded only once (and hence does not need to manage a
pack cache). It also improves large-file restore performance.
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
internal/archiver.readdir and internal/fs.ReadDir were unused.
internal/fs.ReadDirNames and internal/archiver.readdirnames were doing
nearly the same thing, except one sorted its output and opened with
fs.O_NOFOLLOW. Both were only used in internal/archiver.
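A hypothetical sketch of a single merged helper (not the exact restic code): read the entry names of a directory, with the caller deciding whether to pass a no-follow flag, and sort the result for the archiver.

    package archiver

    import (
        "os"
        "sort"
    )

    // readdirnames returns the sorted entry names of dir. Extra open flags
    // (such as a no-follow flag) are supplied by the caller.
    func readdirnames(dir string, flags int) ([]string, error) {
        f, err := os.OpenFile(dir, os.O_RDONLY|flags, 0)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        entries, err := f.Readdirnames(-1)
        if err != nil {
            return nil, err
        }
        sort.Strings(entries)
        return entries, nil
    }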
Each of the random test files was split into the same five blobs. The
test fails once the fifth blob is passed on to `SaveBlob`. That is, for
certain interleavings of goroutine execution, it would be possible for
the test to trigger the testErr just after storing the first file.
The fixed test uses a different file content for each of the nine files
and fails after writing the fourth blob. The file content is also small
enough to ensure that for each file only a single blob is saved. This
guarantees that the test cannot fail before reading the first four
files. FileReadConcurrency = 2 allows up to two files to be queued for
processing. Therefore the test can at most open the sixth file before it
has to save the fourth file/blob, which triggers the testErr.
internal/ui/jsonstatus and termstatus sound similar but are not related
in any way. Instead `internal/ui/backup` and `internal/ui/jsonstatus/status`
are the counterparts. Rename the latter to `internal/ui/json/backup` to
make this clear.
jsonstatus wrote the JSON output to the stdio_wrapper without
synchronization, which caused mangling between different status lines.
Use the Print and Error methods of termstatus instead, which use a
central goroutine to synchronize output.
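A generic sketch of the pattern (termstatus's real implementation differs in detail): every message is funnelled through one goroutine, which is the only writer to the terminal, so concurrent status updates cannot interleave.

    package termstatus

    import (
        "fmt"
        "os"
    )

    type Terminal struct {
        msg chan string
    }

    func NewTerminal() *Terminal {
        t := &Terminal{msg: make(chan string)}
        go t.run()
        return t
    }

    // run is the single goroutine allowed to write to stdout.
    func (t *Terminal) run() {
        for m := range t.msg {
            fmt.Fprintln(os.Stdout, m)
        }
    }

    // Print queues a line for output; callers never write to stdout directly.
    func (t *Terminal) Print(line string) {
        t.msg <- line
    }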
I was running "golangci-lint" and found these two warnings:
internal/checker/checker.go:135:18: (*Checker).LoadIndex$3 - result 0 (error) is always nil (unparam)
final := func() error {
^
internal/repository/repository.go:457:18: (*Repository).LoadIndex$3 - result 0 (error) is always nil (unparam)
final := func() error {
^
It turns out that these functions are only used in "RunWorkers(...)",
which is used only twice in the whole project, right after these "final"
functions.
And because these "final" functions always return "nil", I decided it
would be better to drop the requirement for the "final" func to return an
error, to avoid a magic "return nil" at their end.
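A rough sketch of the resulting signature change; the body is illustrative, not the actual restic implementation:

    package repository

    import "golang.org/x/sync/errgroup"

    // RunWorkers runs workerFunc in count goroutines, waits for all of them
    // to finish and then calls final. Since final no longer returns an error,
    // callers don't need a trailing "return nil".
    func RunWorkers(count int, workerFunc func() error, final func()) error {
        var wg errgroup.Group
        for i := 0; i < count; i++ {
            wg.Go(workerFunc)
        }
        err := wg.Wait()
        final()
        return err
    }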
In some (rare) cases the "fake" UID 51234 may already exist on the system
running the test. When this is the case, `cmp.Equal(want, node3)` will fail
because of the difference between an empty string and the actual username
present on the system.
Fixes GitHub issue #2372
Restic used to quit if the repository password was typed incorrectly once.
Restic will now ask again for the repository password when it is typed
incorrectly, giving the user three tries to enter the correct password
before restic quits.
Return valid directory info from Lstat() for parent directories of the
specified filename. Previously only "/" and "." were treated as valid
directories. Also set the directory mode, as this is checked by the archiver.
Closes #2063
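A hypothetical sketch of the idea for a filesystem that exposes only a single file (types and names are made up): every ancestor of that file's path is reported as a directory with os.ModeDir set, so the archiver's checks pass.

    package fsreader

    import (
        "os"
        "path"
        "strings"
        "time"
    )

    type singleFileFS struct {
        filename string      // absolute, cleaned path of the one file
        fi       os.FileInfo // info for that file
    }

    // fakeDirInfo is a synthesized os.FileInfo for parent directories.
    type fakeDirInfo struct{ name string }

    func (d fakeDirInfo) Name() string       { return d.name }
    func (d fakeDirInfo) Size() int64        { return 0 }
    func (d fakeDirInfo) Mode() os.FileMode  { return os.ModeDir | 0755 }
    func (d fakeDirInfo) ModTime() time.Time { return time.Now() }
    func (d fakeDirInfo) IsDir() bool        { return true }
    func (d fakeDirInfo) Sys() interface{}   { return nil }

    func (fs singleFileFS) Lstat(name string) (os.FileInfo, error) {
        clean := path.Clean(name)
        switch {
        case clean == fs.filename:
            return fs.fi, nil
        case clean == "/" || clean == ".",
            strings.HasPrefix(fs.filename, clean+"/"):
            // Any ancestor of the file is reported as a directory.
            return fakeDirInfo{name: path.Base(clean)}, nil
        default:
            return nil, os.ErrNotExist
        }
    }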
Windows does not have a concept of a `change time` in the sense that Unix
has it: the field `CreationTime` of the `Win32FileAttributeData` struct
is not updated when attributes or content change. So from now on
we use the `LastWriteTime` as the `change time` on Windows.
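A minimal Windows-only sketch of that mapping (the function name is made up):

    //go:build windows

    package fswin

    import (
        "syscall"
        "time"
    )

    // changeTime approximates the Unix ctime on Windows. CreationTime is not
    // updated when attributes or content change, so LastWriteTime is used.
    func changeTime(attr *syscall.Win32FileAttributeData) time.Time {
        return time.Unix(0, attr.LastWriteTime.Nanoseconds())
    }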
Since I could not remember what the value for `Check` means, this commit
renames it to `SameFile`: when set to true, the test should make sure
that `FileChanged` returns false (i.e. the file is unmodified).
Sometimes restic gets bogus timestamps which cannot be converted to
JSON, because the stdlib JSON encoder returns an error if the year is
not within [0, 9999]. We now make sure that we at least record _some_
timestamp and cap the year either to 0000 or 9999. Before, restic would
refuse to save the file at all, so this improves the status quo.
This fixes #2174 and #1173
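A minimal sketch of the capping, assuming a helper like the following (the name fixTime is illustrative):

    package archiver

    import "time"

    // fixTime clamps timestamps to the range accepted by the encoding/json
    // marshaller for time.Time, which rejects years outside [0, 9999].
    func fixTime(t time.Time) time.Time {
        switch {
        case t.Year() < 0:
            return time.Date(0, time.January, 1, 0, 0, 0, 0, time.UTC)
        case t.Year() > 9999:
            return time.Date(9999, time.December, 31, 23, 59, 59, 0, time.UTC)
        default:
            return t
        }
    }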
This commit is a follow-up to the addition of the --group-by flag for the
snapshots command. Adding the grouping code there introduced duplicated
code (the forget command also does grouping). This commit refactors
both sides to use only shared code.