Commit Graph

278 Commits

Author SHA1 Message Date
Michael Eischer
68370feeee backends: Remove TestSaveFilenames test
Filenames are expected to match the sha256 sum of the file content. This
rule is now enforced by the rest server, making this test redundant.
2021-08-15 18:24:16 +02:00
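As background, the naming rule this relies on can be sketched in a few lines of Go (the helper name is hypothetical, not restic's code): a file's name must equal the hex-encoded sha256 of its content.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // contentAddressedName returns the name a blob of data is stored under.
    func contentAddressedName(data []byte) string {
        sum := sha256.Sum256(data)
        return hex.EncodeToString(sum[:])
    }

    func main() {
        fmt.Println(contentAddressedName([]byte("example pack file content")))
    }
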
Michael Eischer
574c83e47f rest: Fix test to use paths which are the sha256 sum of the data 2021-08-15 18:19:43 +02:00
Michael Eischer
e6a5801155 rest: Fix test backend url
The rest config normally uses prepareURL to sanitize URLs and ensures
that the URL ends with a slash. However, the test used a URL without a
trailing slash, which caused test failures after the rest server
changes.
2021-08-15 18:16:17 +02:00
Michael Eischer
f4c5dec05d backend: test that a wrong hash fails an upload 2021-08-04 22:17:46 +02:00
Michael Eischer
7c1903e1ee panic if hash returns an error
Add a sanity check that the interface contract is honoured.
2021-08-04 22:17:46 +02:00
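A minimal illustration of such a sanity check, not restic's actual code: hash.Hash.Write is documented to never return an error, so a non-nil error can only mean a broken implementation.

    package main

    import (
        "crypto/md5"
        "fmt"
    )

    func main() {
        h := md5.New()
        // hash.Hash documents that Write never returns an error; a non-nil
        // error therefore indicates a broken implementation, not bad input.
        if _, err := h.Write([]byte("encrypted pack data")); err != nil {
            panic("hash.Hash.Write returned an error: " + err.Error())
        }
        fmt.Printf("%x\n", h.Sum(nil))
    }
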
Michael Eischer
51b7e3119b mem: calculate md5 content hash for uploads
The mem backend is primarily used for testing. This ensures that the
upload hash calculation gets appropriate test coverage.
2021-08-04 22:17:46 +02:00
Michael Eischer
a009b39e4c gs/swift: calculate md5 content hash for upload 2021-08-04 22:17:46 +02:00
Michael Eischer
1d3e99f475 azure: check upload using md5 content hash
For files below 256MB this uses the md5 hash calculated while assembling
the pack file. For larger files the hash of each 100MB part is
calculated on the fly. That hash is also reused as the temporary filename.
As restic only uploads encrypted data, which includes among other things
a random initialization vector, the file hash shouldn't be susceptible to
md5 collision attacks (even though the algorithm is broken).
2021-08-04 22:17:46 +02:00
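A hedged sketch of the per-part hashing described above; the part size matches the commit message, everything else (function name, reader handling) is an illustrative assumption rather than restic's Azure code.

    package main

    import (
        "bytes"
        "crypto/md5"
        "fmt"
        "io"
    )

    const partSize = 100 * 1024 * 1024 // 100 MB parts, per the commit message

    // partHashes streams the data once and returns one MD5 per part.
    func partHashes(r io.Reader) ([][]byte, error) {
        var hashes [][]byte
        for {
            h := md5.New()
            n, err := io.CopyN(h, r, partSize)
            if n > 0 {
                hashes = append(hashes, h.Sum(nil))
            }
            if err == io.EOF {
                return hashes, nil
            }
            if err != nil {
                return nil, err
            }
        }
    }

    func main() {
        hashes, _ := partHashes(bytes.NewReader(make([]byte, 5<<20)))
        fmt.Printf("%d part(s), first hash %x\n", len(hashes), hashes[0])
    }
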
Michael Eischer
9aa2eff384 Add plumbing to calculate backend specific file hash for upload
This enables the backends to request the calculation of a
backend-specific hash. For the currently supported backends this will
always be MD5. The hash calculation happens as early as possible; for
pack files this is during assembly of the pack file. That way the hash
also captures corruption of the temporary pack file on disk.
2021-08-04 22:17:46 +02:00
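The plumbing idea could look roughly like the following sketch; the HashingBackend interface and all names are assumptions, not restic's API. The point is that the hash is fed while the pack file is assembled.

    package main

    import (
        "crypto/md5"
        "fmt"
        "hash"
        "io"
        "os"
    )

    // HashingBackend is an assumed interface: a backend returns the hash it
    // wants for upload verification, or nil if it needs none.
    type HashingBackend interface {
        Hasher() hash.Hash
    }

    type md5Backend struct{}

    func (md5Backend) Hasher() hash.Hash { return md5.New() }

    func main() {
        var be HashingBackend = md5Backend{}
        h := be.Hasher()

        tmp, err := os.CreateTemp("", "pack-*")
        if err != nil {
            panic(err)
        }
        defer os.Remove(tmp.Name())
        defer tmp.Close()

        // Assemble the pack: every byte goes through the temporary file and
        // the backend-specific hash at the same time.
        w := io.MultiWriter(tmp, h)
        if _, err := w.Write([]byte("encrypted pack contents")); err != nil {
            panic(err)
        }
        fmt.Printf("hash to send along with the upload: %x\n", h.Sum(nil))
    }
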
Michael Eischer
ee2f14eaf0 s3: enable content hash calculation for uploads 2021-08-04 22:12:12 +02:00
greatroar
81e2499d19 Sync directory to get durable writes in local backend 2021-08-04 21:51:53 +02:00
greatroar
195a5cf996 Save files under temporary name in local backend
Fixes #3435.
2021-08-04 21:51:53 +02:00
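The two local-backend commits above amount to the classic durable-write pattern; a minimal sketch (with shortened error handling, and assuming a POSIX-style filesystem where directories can be fsynced):

    package main

    import (
        "os"
        "path/filepath"
    )

    // saveDurably writes to a temporary name, syncs, renames to the final
    // name, then syncs the directory so the rename itself survives a crash.
    func saveDurably(dir, name string, data []byte) error {
        tmp, err := os.CreateTemp(dir, "tmp-*")
        if err != nil {
            return err
        }
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Sync(); err != nil { // flush file content
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        if err := os.Rename(tmp.Name(), filepath.Join(dir, name)); err != nil {
            return err
        }
        d, err := os.Open(dir) // make the new directory entry durable too
        if err != nil {
            return err
        }
        defer d.Close()
        return d.Sync()
    }

    func main() {
        _ = saveDurably(os.TempDir(), "example-blob", []byte("data"))
    }
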
Alexander Weiss
38a8a48a25 Simplify dry run backend 2021-08-04 21:19:29 +02:00
Ryan Hitchman
77bf148460 backup: add --dry-run/-n flag to show what would happen.
This can be used to check how large a backup is or validate exclusions.
It does not actually write any data to the underlying backend. This is
implemented as a simple overlay backend that accepts writes without
forwarding them, passes through reads, and generally does the minimum
necessary to pretend that progress is actually happening.

Fixes #1542

Example usage:

$ restic backup -vv --dry-run . | grep add
new       /changelog/unreleased/issue-1542, saved in 0.000s (350 B added)
modified  /cmd/restic/cmd_backup.go, saved in 0.000s (16.543 KiB added)
modified  /cmd/restic/global.go, saved in 0.000s (0 B added)
new       /internal/backend/dry/dry_backend_test.go, saved in 0.000s (3.866 KiB added)
new       /internal/backend/dry/dry_backend.go, saved in 0.000s (3.744 KiB added)
modified  /internal/backend/test/tests.go, saved in 0.000s (0 B added)
modified  /internal/repository/repository.go, saved in 0.000s (20.707 KiB added)
modified  /internal/ui/backup.go, saved in 0.000s (9.110 KiB added)
modified  /internal/ui/jsonstatus/status.go, saved in 0.001s (11.055 KiB added)
modified  /restic, saved in 0.131s (25.542 MiB added)
Would add to the repo: 25.892 MiB
2021-08-04 21:19:29 +02:00
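A rough sketch of the overlay idea: a made-up two-method backend interface (restic's real interface is larger) where writes are counted and discarded while reads fall through to the wrapped backend.

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    type Backend interface {
        Save(name string, rd io.Reader) error
        Load(name string) (io.ReadCloser, error)
    }

    // dryRunBackend discards writes; Load is promoted from the wrapped backend.
    type dryRunBackend struct {
        Backend
    }

    func (dryRunBackend) Save(name string, rd io.Reader) error {
        // Count the data so progress reporting still works, but write nothing.
        n, err := io.Copy(io.Discard, rd)
        fmt.Printf("would add %s (%d bytes)\n", name, n)
        return err
    }

    type memBackend map[string]string

    func (m memBackend) Save(name string, rd io.Reader) error {
        b, err := io.ReadAll(rd)
        m[name] = string(b)
        return err
    }

    func (m memBackend) Load(name string) (io.ReadCloser, error) {
        return io.NopCloser(strings.NewReader(m[name])), nil
    }

    func main() {
        real := memBackend{"existing": "hello"}
        dry := dryRunBackend{Backend: real}
        _ = dry.Save("new-file", strings.NewReader("some data"))
        rc, _ := dry.Load("existing") // read passes through
        b, _ := io.ReadAll(rc)
        fmt.Println(string(b), "| stored keys:", len(real))
    }
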
greatroar
fa3eed1998 Use rclone.wrappedConn by pointer
This shaves a kilobyte off the Linux binary by not generating a
non-pointer interface implementation.
2021-08-01 09:11:50 +02:00
Alexander Neumann
d8ea10db8c rest: Rework handling HTTP2 zero-length replies bug
Add comment that the check is based on the stdlib HTTP2 client. Refactor
the checks into a function. Return an error if the value in the
Content-Length header cannot be parsed.
2021-07-31 17:12:24 +02:00
Michael Eischer
097ed659b2 rest: test that zero-length replies over HTTP2 work correctly
The first test function ensures that the workaround works as expected.
The second test function is intended to fail as soon as the issue has
been fixed in Go, allowing us to eventually remove the workaround.
2021-07-10 17:22:42 +02:00
Michael Eischer
185a55026b rest: workaround for HTTP2 zero-length replies bug
The Go HTTP client does not return an error when an HTTP2 reply
includes a non-zero Content-Length but does not return any data at all.
This can occur, for example, when using rclone and a file stored in a
backend seems to be accessible but then fails to download.
2021-07-10 16:59:01 +02:00
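A hedged sketch of the kind of check such a workaround needs (names and placement are assumptions): treat a reply whose Content-Length header promises data but whose body is empty as an error instead of silently returning zero bytes. This also covers the Content-Length parsing mentioned in the rework commit further up.

    package main

    import (
        "bytes"
        "errors"
        "fmt"
        "io"
        "net/http"
        "strconv"
    )

    var errIncompleteReply = errors.New("incomplete reply: empty body despite non-zero Content-Length")

    func readBodyChecked(resp *http.Response) ([]byte, error) {
        data, err := io.ReadAll(resp.Body)
        if err != nil {
            return nil, err
        }
        cl := resp.Header.Get("Content-Length")
        if cl != "" && len(data) == 0 {
            want, err := strconv.ParseInt(cl, 10, 64)
            if err != nil {
                return nil, fmt.Errorf("invalid Content-Length %q: %w", cl, err)
            }
            if want > 0 {
                return nil, errIncompleteReply
            }
        }
        return data, nil
    }

    func main() {
        // Hand-built response standing in for the buggy HTTP2 reply.
        resp := &http.Response{
            Header: http.Header{"Content-Length": []string{"42"}},
            Body:   io.NopCloser(bytes.NewReader(nil)),
        }
        _, err := readBodyChecked(resp)
        fmt.Println(err)
    }
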
MichaelEischer
c1eb7ac1a1
Merge pull request #3420 from greatroar/local-errs
Modernize error handling in local backend
2021-06-20 14:20:40 +02:00
greatroar
e5f0f67ba0 Modernize error handling in local backend
* Stop prepending the operation name: it's already part of os.PathError,
  leading to repetitive errors like "Chmod: chmod /foo/bar: operation not
  permitted".

* Use errors.Is to check for specific errors.
2021-06-18 11:13:27 +02:00
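Both points can be illustrated with a small, generic sketch (not the local backend's actual code): rely on os.PathError for the operation name and use errors.Is for specific error checks.

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func remove(path string) error {
        if err := os.Remove(path); err != nil {
            if errors.Is(err, fs.ErrNotExist) {
                return nil // already gone is fine
            }
            // Return the error as-is: it already reads like
            // "remove /foo/bar: permission denied".
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(remove("/definitely/not/there"))
    }
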
MichaelEischer
a476752962
Merge pull request #3421 from greatroar/s3-fileinfo
Return s3.fileInfos by pointer
2021-06-12 18:55:18 +02:00
greatroar
0d4f16b6ba Return s3.fileInfos by pointer
Since the fileInfos are returned in a []interface{}, they're already
allocated on the heap. Making them pointers explicitly means the
compiler doesn't need to generate both fileInfo and *fileInfo versions
of the methods on this type. The binary becomes about 7 KiB smaller on
Linux/amd64.
2021-06-07 19:48:43 +02:00
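A toy illustration of the pattern; the fileInfo type here is a stand-in, not restic's s3.fileInfo. Because the results are boxed into a []interface{} anyway, appending pointers avoids an extra copy and only requires the pointer method set.

    package main

    import "fmt"

    type fileInfo struct {
        name string
        size int64
    }

    func (fi *fileInfo) Name() string { return fi.name }
    func (fi *fileInfo) Size() int64  { return fi.size }

    func list() []interface{} {
        var out []interface{}
        for _, fi := range []fileInfo{{"a", 1}, {"b", 2}} {
            fi := fi // copy, so each pointer is distinct
            out = append(out, &fi)
        }
        return out
    }

    func main() {
        for _, v := range list() {
            fi := v.(*fileInfo)
            fmt.Println(fi.Name(), fi.Size())
        }
    }
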
Michael Eischer
75c990504d azure/gs: Fix default value in connections help text 2021-05-17 20:56:51 +02:00
MichaelEischer
64b00d28b1
Merge pull request #3345 from greatroar/sftp-enospc
Check for ENOSPC and remove broken files in SFTP
2021-05-13 20:09:38 +02:00
greatroar
dc88ca79b6 Handle lack of space and remove broken files in SFTP backend 2021-03-27 15:02:14 +01:00
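Sketch of the idea using a local file as a stand-in for the SFTP path: if a write fails with ENOSPC, remove the partial file so no truncated object is left behind, and report the original error. Names and wiring are assumptions.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "syscall"
    )

    func saveFile(path string, data []byte) error {
        err := os.WriteFile(path, data, 0o600)
        if err != nil && errors.Is(err, syscall.ENOSPC) {
            _ = os.Remove(path) // best-effort cleanup of the broken file
        }
        return err
    }

    func main() {
        fmt.Println(saveFile(os.DevNull, []byte("data")))
    }
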
Alexander Neumann
88a23521dd
Merge pull request #3327 from MichaelEischer/fix-s3-sanity-check
s3: Fix sanity check
2021-03-11 13:13:46 +01:00
greatroar
187a77fb27 Upgrade pkg/sftp to 1.13 and simplify SFTP.IsNotExist 2021-03-10 21:05:24 +01:00
Michael Eischer
2a9f0f19b6 s3: Fix sanity check
The sanity check shouldn't replace the error message if there is already
one.
2021-03-08 20:23:57 +01:00
Michael Eischer
a293fd9aef local: Ignore files in intermediate folders
For example, the data folder of a repository might contain hidden files,
which previously caused list operations to fail.
2021-02-27 13:47:55 +01:00
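A simplified sketch of the tolerant listing described above (not restic's actual code): skip hidden or otherwise unexpected entries in the intermediate folders instead of failing the whole listing.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func listPackFiles(dataDir string) ([]string, error) {
        subdirs, err := os.ReadDir(dataDir)
        if err != nil {
            return nil, err
        }
        var names []string
        for _, sub := range subdirs {
            if !sub.IsDir() || strings.HasPrefix(sub.Name(), ".") {
                continue // ignore stray entries, e.g. hidden files like .DS_Store
            }
            files, err := os.ReadDir(filepath.Join(dataDir, sub.Name()))
            if err != nil {
                return nil, err
            }
            for _, f := range files {
                if f.Type().IsRegular() {
                    names = append(names, f.Name())
                }
            }
        }
        return names, nil
    }

    func main() {
        names, err := listPackFiles(os.TempDir())
        fmt.Println(len(names), err)
    }
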
Lorenz Brun
5427119205 Allow HTTP/2 2021-01-31 02:44:30 +01:00
Michael Eischer
e0867c9682 backend: try to cleanup test leftovers 2021-01-30 21:23:20 +01:00
Michael Eischer
f740b2fb23 mem: check upload length before storing upload 2021-01-30 21:23:20 +01:00
Alexander Neumann
f867e65bcd Fix issues reported by staticcheck 2021-01-30 20:43:53 +01:00
Alexander Neumann
0858fbf6aa Add more error handling 2021-01-30 20:19:47 +01:00
Alexander Neumann
aef3658a5f Address review comments 2021-01-30 20:02:37 +01:00
Alexander Neumann
3c753c071c errcheck: More error handling 2021-01-30 20:02:37 +01:00
Alexander Neumann
75f53955ee errcheck: Add error checks
Most added checks are straight forward.
2021-01-30 20:02:37 +01:00
Michael Eischer
a53778cd83 rest: handle dropped error in save operation 2021-01-30 19:25:04 +01:00
Michael Eischer
8a486eafed gs: Don't drop error when finishing upload
The error returned when finishing the upload of an object was dropped.
This could cause silent upload failures and thus data loss in certain
cases. When an MD5 hash for the uploaded blob is specified, a wrong
hash or a damaged upload only reports its error via Close(), whose
error was dropped.
2021-01-30 13:31:32 +01:00
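A generic sketch of the pitfall, with a placeholder uploadWriter type: for streaming uploads the decisive error often only surfaces from Close(), so it must not be discarded.

    package main

    import (
        "errors"
        "fmt"
        "io"
    )

    func finishUpload(w io.WriteCloser, data []byte) error {
        if _, err := w.Write(data); err != nil {
            _ = w.Close()
            return err
        }
        // A server-side check (e.g. against a supplied MD5) is only reported
        // here, so this error must not be dropped.
        return w.Close()
    }

    type uploadWriter struct{}

    func (uploadWriter) Write(p []byte) (int, error) { return len(p), nil }
    func (uploadWriter) Close() error                { return errors.New("upload rejected: hash mismatch") }

    func main() {
        fmt.Println(finishUpload(uploadWriter{}, []byte("blob")))
    }
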
Alexander Neumann
cdd704920d azure: Pass data length to Azure library
The azureAdapter was used directly without a pointer, but the Len()
method was only defined with a pointer receiver (which means Len() is
not present on an azureAdapter{}, only on a pointer to it).
2021-01-29 21:08:41 +01:00
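A toy example of the Go rule behind the fix; the adapter type is a stand-in for the azureAdapter mentioned above. A method with a pointer receiver is in *T's method set but not T's, so an interface check for Len() fails for a plain value.

    package main

    import "fmt"

    type lener interface{ Len() int }

    type adapter struct{ data []byte }

    func (a *adapter) Len() int { return len(a.data) }

    func main() {
        var byValue interface{} = adapter{data: []byte("abc")}
        var byPointer interface{} = &adapter{data: []byte("abc")}

        _, ok := byValue.(lener)
        fmt.Println("value satisfies Len():", ok) // false: a library would fall back to buffering
        l, ok := byPointer.(lener)
        fmt.Println("pointer satisfies Len():", ok, l.Len()) // true 3
    }
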
Michael Eischer
1f583b3d8e backend: test that incomplete uploads fail 2021-01-29 13:51:53 +01:00
Michael Eischer
c73316a111 backends: add sanity check for the uploaded file size
Bugs in the error handling while uploading a file to the backend could
cause incomplete files, e.g. https://github.com/golang/go/issues/42400
which could affect the local backend.

Proactively add sanity checks which will treat an upload as failed if
the reported upload size does not match the actual file size.
2021-01-29 13:51:51 +01:00
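A minimal sketch of such a sanity check (names are assumptions): compare the number of bytes actually written with the length the caller intended to upload and fail the save on mismatch.

    package main

    import (
        "bytes"
        "fmt"
        "io"
    )

    func saveChecked(dst io.Writer, rd io.Reader, expected int64) error {
        n, err := io.Copy(dst, rd)
        if err != nil {
            return err
        }
        if n != expected {
            return fmt.Errorf("wrote %d bytes instead of the expected %d bytes", n, expected)
        }
        return nil
    }

    func main() {
        data := []byte("pack file data")
        var dst bytes.Buffer
        // Pretend the caller announced one byte too many.
        fmt.Println(saveChecked(&dst, bytes.NewReader(data), int64(len(data))+1))
    }
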
Michael Eischer
4526d5d197 swift: explicitly pass upload size to library
This allows properly setting the Content-Length header, which could help
the server side detect incomplete uploads.
2021-01-29 13:50:46 +01:00
Michael Eischer
dca9b6f5db azure: explicitly pass upload size
Previously, the Azure library's fallback was to read the whole blob into
memory and use that to determine the upload size.
2021-01-29 13:50:46 +01:00
Michael Eischer
678e75e1c2 sftp: enforce use of optimized upload method
ReadFrom was already used by Save; this just ensures that this won't
accidentally change in the future.
2021-01-03 22:23:53 +01:00
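A hedged sketch of pinning down the optimized path: assert that the writer implements io.ReaderFrom (which io.Copy would otherwise pick up transparently), so losing the optimization becomes an explicit failure.

    package main

    import (
        "bytes"
        "fmt"
        "io"
    )

    func upload(w io.Writer, rd io.Reader) (int64, error) {
        rf, ok := w.(io.ReaderFrom)
        if !ok {
            return 0, fmt.Errorf("writer %T does not support ReadFrom", w)
        }
        return rf.ReadFrom(rd)
    }

    func main() {
        var buf bytes.Buffer // bytes.Buffer implements io.ReaderFrom
        n, err := upload(&buf, bytes.NewReader([]byte("chunk")))
        fmt.Println(n, err)
    }
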
Alexander Neumann
b2efa0af39
Merge pull request #3164 from MichaelEischer/improve-context-cancel
Improve context cancel handling in archiver and backends
2020-12-29 17:03:42 +01:00
greatroar
3b09ae9074 AIX port 2020-12-29 01:35:01 +01:00
Michael Eischer
d0ca8fb0b8 backend: test that a canceled context prevents RetryBackend operations 2020-12-28 21:06:47 +01:00
Michael Eischer
e483b63c40 retrybackend: Fail operations when context is already canceled
Depending on the backend used, operations started with a canceled
context may or may not fail. For example, the local backend still works
in large parts when called with a canceled context, while backends
transferring data via HTTP do not. It is also not possible to retry
failed operations in that state, as the RetryBackend will abort with a
'context canceled' error.

Ensure uniform behavior by checking for a canceled context as a first
step in the RetryBackend. As backends are always wrapped in a
RetryBackend, this guarantees the same behavior across all backends.
2020-12-28 21:06:47 +01:00
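Sketch of the uniform check (simplified, names assumed): every retried operation first consults ctx.Err(), so an already-canceled context fails fast regardless of how the underlying backend would behave.

    package main

    import (
        "context"
        "fmt"
    )

    func retry(ctx context.Context, op func() error) error {
        if err := ctx.Err(); err != nil {
            return err // context already canceled: don't even start
        }
        // Real code would loop with backoff and re-check ctx on each attempt.
        return op()
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        cancel()
        err := retry(ctx, func() error { fmt.Println("should not run"); return nil })
        fmt.Println(err) // context canceled
    }
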
tWido
7dab113035 Don't retry when "Permission denied" occurs in local backend 2020-12-22 23:41:12 +01:00