mirror of https://github.com/octoleo/restic.git synced 2024-12-23 11:28:54 +00:00
Commit Graph

116 Commits

Author SHA1 Message Date
Lorenz Bausch
d6e3c7f28e
Wording: change repo to repository 2022-07-08 20:05:35 +02:00
Michael Eischer
6f53ecc1ae adapt workers based on whether an operation is CPU or IO-bound
Use runtime.GOMAXPROCS(0) as the worker count for CPU-bound tasks,
repo.Connections() for IO-bound tasks, and a combination if a task can be
both. Streaming packs is treated as IO-bound, as adding more workers
cannot provide a speedup.

Typical IO-bound tasks are downloading, uploading, and deleting files.
Decoding, encoding, and verifying are usually CPU-bound. Several tasks
are a combination of both, e.g. combined download-and-decode functions.
In the latter case both limits are added together. As the backends have
their own concurrency limits, restic still won't download more than
repo.Connections() files in parallel, but the additional workers can
decode already downloaded data in parallel.
2022-07-03 12:19:26 +02:00
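
A minimal Go sketch of this heuristic (repo.Connections() and the CPU/IO
classification come from the commit message; the taskKind type and the
workerCount helper are hypothetical):

package main

import (
	"fmt"
	"runtime"
)

type taskKind int

const (
	cpuBound taskKind = iota
	ioBound
	cpuAndIOBound
)

// workerCount picks a worker count following the heuristic above:
// GOMAXPROCS for CPU-bound work, the repository connection limit for
// IO-bound work, and the sum when a task is both.
func workerCount(kind taskKind, repoConnections int) int {
	switch kind {
	case cpuBound:
		return runtime.GOMAXPROCS(0)
	case ioBound:
		return repoConnections
	default: // cpuAndIOBound
		return runtime.GOMAXPROCS(0) + repoConnections
	}
}

func main() {
	// e.g. a combined download-and-decode task with 5 backend connections
	fmt.Println(workerCount(cpuAndIOBound, 5))
}
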
Michael Eischer
753e56ee29 repository: Limit to a single pending pack file
Use only a single incomplete pack file to keep the number of open and
active pack files low. The main change here is to defer hashing the pack
file to the upload step. This prevents the pack assembly step from
becoming a bottleneck, as its only task is now to write data to the
temporary pack file.

The tests are cleaned up to no longer reimplement packer manager
functions.
2022-07-02 22:42:34 +02:00
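
A rough sketch of deferring the hash to the upload step, assuming the pack
is assembled into a temporary file first (the uploadPack helper and the
plain SHA-256 hash are illustrative, not restic's actual code):

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// uploadPack hashes the completed temporary pack file while it is being
// read for upload, so pack assembly only has to write bytes to disk.
func uploadPack(tmpfile *os.File, upload func(io.Reader) error) ([32]byte, error) {
	var id [32]byte
	if _, err := tmpfile.Seek(0, io.SeekStart); err != nil {
		return id, err
	}
	h := sha256.New()
	// TeeReader feeds every byte read by the upload into the hash.
	if err := upload(io.TeeReader(tmpfile, h)); err != nil {
		return id, err
	}
	copy(id[:], h.Sum(nil))
	return id, nil
}

func main() {
	f, _ := os.CreateTemp("", "pack-*")
	defer os.Remove(f.Name())
	f.WriteString("example pack contents")

	id, err := uploadPack(f, func(r io.Reader) error {
		_, err := io.Copy(io.Discard, r) // stand-in for the backend upload
		return err
	})
	fmt.Printf("%x %v\n", id, err)
}
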
Michael Eischer
120ccc8754 repository: Rework blob saving to use an async pack uploader
Previously, SaveAndEncrypt would assemble blobs into packs and either
return immediately if the pack was not yet full or upload the pack file
otherwise. The upload would block the current goroutine until it
finished.

Now, the upload is done using separate goroutines. This requires changes
to the error handling. As uploads are no longer tied to a SaveAndEncrypt
call, failed uploads are signaled using an errgroup.

To count the amount of uploaded data, the pack header overhead is no
longer returned by `packer.Finalize` but rather by
`packer.HeaderOverhead`. This helper method is necessary to keep
returning the pack header overhead directly to the responsible call to
`repository.SaveBlob`. Without the method this would not be possible,
as packs are finalized asynchronously.
2022-07-02 22:42:34 +02:00
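
A stripped-down sketch of that pattern using golang.org/x/sync/errgroup;
the pack type, the startUploader helper, and the worker count are
illustrative stand-ins, not restic's actual types:

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// pack is a stand-in for a finalized pack file waiting for upload.
type pack struct{ data []byte }

// startUploader launches worker goroutines that upload finished packs.
// Callers enqueue packs on the returned channel; failed uploads are
// reported through the errgroup instead of the SaveAndEncrypt caller.
func startUploader(ctx context.Context, wg *errgroup.Group, workers int,
	upload func(context.Context, pack) error) chan<- pack {

	ch := make(chan pack)
	for i := 0; i < workers; i++ {
		wg.Go(func() error {
			for p := range ch {
				if err := upload(ctx, p); err != nil {
					return err
				}
			}
			return nil
		})
	}
	return ch
}

func main() {
	wg, ctx := errgroup.WithContext(context.Background())
	ch := startUploader(ctx, wg, 2, func(_ context.Context, p pack) error {
		fmt.Println("uploaded", len(p.data), "bytes")
		return nil
	})
	ch <- pack{data: []byte("finished pack file")}
	close(ch)
	fmt.Println("wait:", wg.Wait())
}
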
Michael Eischer
a6e9e08034 Account for pack header overhead at each entry
This will miss the pack header crypto overhead and the length field,
which only amount to a few bytes per pack file.
2022-07-02 18:55:58 +02:00
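
For illustration, a hedged sketch of per-entry header accounting; the
entry sizes follow the published pack format (1-byte type, 4-byte length,
32-byte blob ID, plus a 4-byte uncompressed length for compressed blobs),
while the helper itself is hypothetical and, like the commit above,
ignores the crypto overhead and the header length field:

package main

import "fmt"

// Approximate per-entry pack header sizes, used here only for illustration.
const (
	plainEntrySize      = 1 + 4 + 32     // type + length + blob ID
	compressedEntrySize = 1 + 4 + 4 + 32 // + uncompressed length
)

// packedSizeWithHeader accounts for the header overhead of each stored
// blob when estimating how full a pack file is.
func packedSizeWithHeader(blobSizes []int, compressed bool) int {
	entry := plainEntrySize
	if compressed {
		entry = compressedEntrySize
	}
	total := 0
	for _, s := range blobSizes {
		total += s + entry
	}
	return total
}

func main() {
	fmt.Println(packedSizeWithHeader([]int{4096, 1024}, true))
}
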
Alexander Neumann
99634c0936 Return real size from SaveBlob 2022-07-02 18:55:12 +02:00
Michael Eischer
ec7c9ce88b drop unused repository.Loader interface 2022-07-02 18:39:59 +02:00
Michael Eischer
e68c3a4e62 repository: simplify CreateIndexFromPacks 2022-07-02 18:39:59 +02:00
Michael Eischer
bf81bf0795 repository: Properly set id for finalized index
As MergeFinalIndex and index uploads can occur concurrently, it is
necessary for MergeFinalIndex to check whether the IDs for an index were
already set before merging it. Otherwise, we'd lose the ID of an index
which is set _after_ uploading it.
2022-07-02 18:39:59 +02:00
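
A simplified sketch of that invariant with a stand-in index type; only
the idea of preserving IDs that were assigned concurrently is meant to
match the commit, not the actual data structures:

package main

import (
	"fmt"
	"sync"
)

// index is a stand-in for a finalized in-memory index; ids holds the IDs
// of the index files it has been stored in.
type index struct {
	mu  sync.Mutex
	ids []string
}

// mergeFinalIndexes merges finalized indexes into mi. Because an index
// may be uploaded concurrently, IDs that were already assigned must be
// preserved instead of being lost by the merge.
func mergeFinalIndexes(mi *index, finalized []*index) {
	for _, idx := range finalized {
		idx.mu.Lock()
		if len(idx.ids) > 0 {
			mi.ids = append(mi.ids, idx.ids...)
		}
		// ...the blob entries of idx would be merged into mi here...
		idx.mu.Unlock()
	}
}

func main() {
	mi := &index{}
	uploaded := &index{ids: []string{"id-set-by-concurrent-upload"}}
	mergeFinalIndexes(mi, []*index{uploaded, {}})
	fmt.Println(mi.ids)
}
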
Michael Eischer
628ae799ca repository: make flushPacks private 2022-07-02 18:39:12 +02:00
Michael Eischer
a77d5c4d11 repository: index saving belongs into the MasterIndex 2022-07-02 18:38:56 +02:00
greatroar
c9557b2822 internal/repository: Fix LoadBlob + fuzz test
When given a buf that is big enough for a compressed blob but not its
decompressed contents, the copy at the end of LoadBlob would skip the
last part of the contents.

Fixes #3783.
2022-06-06 17:02:28 +02:00
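
The essence of the fix can be sketched as follows; decodeInto is a
hypothetical helper, and the real LoadBlob additionally handles
decryption and decompression:

package main

import "fmt"

// decodeInto copies plaintext into buf, growing buf when the decompressed
// contents are larger than the caller-supplied buffer. The fixed bug was
// an unconditional copy into a too-small buf, which silently truncated
// the blob.
func decodeInto(buf, plaintext []byte) []byte {
	if cap(buf) < len(plaintext) {
		buf = make([]byte, len(plaintext))
	}
	buf = buf[:len(plaintext)]
	copy(buf, plaintext)
	return buf
}

func main() {
	small := make([]byte, 4) // big enough for the compressed blob only
	out := decodeInto(small, []byte("decompressed contents"))
	fmt.Println(string(out))
}
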
greatroar
2e0f1f5113 repository: Remove RunWorkers, report ctx.Err()
This removes RunWorkers, which successive refactors had reduced to mere
overhead. It also ensures that each former user of that function
returns any context error that occurs, so failure to complete an
operation is always reported as an error.
2022-05-10 22:26:00 +02:00
Michael Eischer
cf5cb673fb repository: Use existing method to collect pack ids 2022-04-30 19:14:21 +02:00
Michael Eischer
b335cb6285 repository: Refactor index IDs collection 2022-04-30 19:14:21 +02:00
Michael Eischer
abe5935693 repository: unify repository version-specific initialization
Also mark the master index as compressed when initializing a new
repository. This is only relevant for testing.
2022-04-30 11:34:10 +02:00
Alexander Neumann
8776031f96 Leave allocating slices to the decompress code 2022-04-30 11:34:10 +02:00
Alexander Neumann
5eb05a0afe Configure zstd encoder/decoder 2022-04-30 11:34:10 +02:00
Michael Eischer
2f36e044db Cleanup pack header check 2022-04-30 11:34:10 +02:00
Alexander Neumann
8b11b86383 Add option global --compression 2022-04-30 11:34:10 +02:00
Michael Eischer
7132df529e repository: Increase index size for repo version 2
A compressed index is only about one third the size of an uncompressed
one. Thus increase the number of entries in an index to avoid cluttering
the repository with small indexes.
2022-04-30 11:34:10 +02:00
Michael Eischer
66f9048bce repository: Alloc zstd encoder/decoder on demand 2022-04-30 11:34:10 +02:00
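
A minimal sketch of the on-demand pattern using
github.com/klauspost/compress/zstd (the library restic's compression
support is based on); the getEncoder helper is illustrative:

package main

import (
	"fmt"
	"sync"

	"github.com/klauspost/compress/zstd"
)

var (
	encOnce sync.Once
	enc     *zstd.Encoder
)

// getEncoder allocates the zstd encoder only when compression is actually
// used, instead of paying the allocation cost up front.
func getEncoder() *zstd.Encoder {
	encOnce.Do(func() {
		var err error
		enc, err = zstd.NewWriter(nil)
		if err != nil {
			panic(err) // the default options do not fail
		}
	})
	return enc
}

func main() {
	compressed := getEncoder().EncodeAll([]byte("some index data"), nil)
	fmt.Println(len(compressed), "bytes compressed")
}

The decoder side can follow the same pattern; passing nil as the
destination of DecodeAll lets the decoder allocate the output slice
itself, which lines up with the "Leave allocating slices to the
decompress code" commit above.
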
Michael Eischer
6fb408d90e repository: implement pack compression 2022-04-30 11:34:10 +02:00
Michael Eischer
362ab06023 init: Add flag to specify created repository version 2022-04-30 10:07:42 +02:00
Michael Eischer
4b957e7373 repository: Implement index/snapshot/lock compression
The config file is not compressed, as it should remain readable by older
restic versions so that they can return a proper error.

As the old format for unpacked data does not include a version header,
make use of a trick: the old data is always encoded as JSON and can
therefore only start with '{' or '['. Any other first byte indicates a
versioned format, where the version is set to 2 for now and the zstd
compressed data follows.
2022-04-30 10:07:42 +02:00
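
A hedged sketch of this format detection; the function name and error
handling are illustrative and restic's actual implementation differs in
its details:

package main

import (
	"fmt"

	"github.com/klauspost/compress/zstd"
)

// decodeUnpacked distinguishes the two formats described above: legacy
// unpacked files are plain JSON and therefore start with '{' or '[';
// anything else starts with a one-byte version, where version 2 means
// the rest of the buffer is zstd-compressed.
func decodeUnpacked(buf []byte) ([]byte, error) {
	if len(buf) == 0 || buf[0] == '{' || buf[0] == '[' {
		return buf, nil // old format: uncompressed JSON
	}
	if buf[0] != 2 {
		return nil, fmt.Errorf("unsupported format version %d", buf[0])
	}
	dec, err := zstd.NewReader(nil)
	if err != nil {
		return nil, err
	}
	defer dec.Close()
	return dec.DecodeAll(buf[1:], nil)
}

func main() {
	out, err := decodeUnpacked([]byte(`{"version":2}`))
	fmt.Println(string(out), err)
}
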
Michael Eischer
c2aabb2686 Print used key name if config fails to load 2022-04-09 22:38:18 +02:00
Michael Eischer
dc3d77dacc repository: make saveAndEncrypt private 2022-03-28 22:09:49 +02:00
Michael Eischer
6877e7edbb repository: Rename LoadAndDecrypt to LoadUnpacked
The method is the complement of SaveUnpacked, not of SaveAndEncrypt;
the latter assembles blobs into pack files.
2022-03-28 22:09:49 +02:00
Michael Eischer
bba8ba7a5b repository: cancel streampack context after error 2022-02-12 20:18:25 +01:00
Michael Eischer
47554a3428 repository: Fix error handling in repack
When storing a blob fails, this is a fatal error which must not be
retried.
2022-02-12 20:18:25 +01:00
Michael Eischer
930a00ad54 checker: reuse bufio reader 2022-02-12 20:18:25 +01:00
Michael Eischer
34ebafb8b6 repository: don't crash if blob size is too short 2022-02-12 20:18:25 +01:00
Michael Eischer
becebf5d88 repository: remove unused DownloadAndHash 2022-02-12 20:18:25 +01:00
Michael Eischer
f1e58e7c7f checker: rewrite ReadData to stream packs 2022-02-12 20:18:25 +01:00
Michael Eischer
f40abd92fa restorer: convert to use StreamPack 2022-02-12 20:18:25 +01:00
Michael Eischer
c4a2bfcb39 repository: Add StreamPacks function
The function supports efficiently loading a specified list of blobs from
a single pack in a streaming fashion. That is, no temporary files are
needed, regardless of the pack size.
2022-02-12 20:18:25 +01:00
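
The streaming idea can be sketched as follows, with a simplified blobRef
type and a load callback standing in for the backend's ranged download;
decryption and verification are omitted:

package main

import (
	"bytes"
	"fmt"
	"io"
	"sort"
)

// blobRef is a simplified reference to a blob inside a pack file.
type blobRef struct {
	Offset, Length uint
}

// streamBlobs sorts the wanted blobs by offset, requests one contiguous
// range covering them, and slices each blob out of the stream as it
// arrives, so no temporary file is needed regardless of the pack size.
func streamBlobs(load func(offset, length uint) (io.Reader, error),
	blobs []blobRef, handle func(blobRef, []byte) error) error {

	if len(blobs) == 0 {
		return nil
	}
	sort.Slice(blobs, func(i, j int) bool { return blobs[i].Offset < blobs[j].Offset })
	start := blobs[0].Offset
	last := blobs[len(blobs)-1]
	rd, err := load(start, last.Offset+last.Length-start)
	if err != nil {
		return err
	}
	pos := start
	for _, b := range blobs {
		// skip any gap between consecutive wanted blobs
		if _, err := io.CopyN(io.Discard, rd, int64(b.Offset-pos)); err != nil {
			return err
		}
		buf := make([]byte, b.Length)
		if _, err := io.ReadFull(rd, buf); err != nil {
			return err
		}
		pos = b.Offset + b.Length
		if err := handle(b, buf); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	pack := []byte("aaaaabbbbbccccc")
	load := func(off, length uint) (io.Reader, error) {
		return bytes.NewReader(pack[off : off+length]), nil
	}
	err := streamBlobs(load,
		[]blobRef{{Offset: 0, Length: 5}, {Offset: 10, Length: 5}},
		func(b blobRef, data []byte) error {
			fmt.Printf("blob at %d: %q\n", b.Offset, data)
			return nil
		})
	fmt.Println("err:", err)
}
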
Alexander Weiss
81876d5c1b Simplify cache logic 2021-09-03 21:01:00 +02:00
Michael Eischer
9aa2eff384 Add plumbing to calculate backend specific file hash for upload
This enables the backends to request the calculation of a
backend-specific hash. For the currently supported backends this will
always be MD5. The hash calculation happens as early as possible; for
pack files this is during assembly of the pack file. That way the hash
even captures corruption of the temporary pack file on disk.
2021-08-04 22:17:46 +02:00
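
A tiny sketch of computing the backend hash during pack assembly via
io.MultiWriter; the MD5 choice comes from the commit message, everything
else is illustrative:

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"os"
)

func main() {
	tmp, err := os.CreateTemp("", "pack-*")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	defer tmp.Close()

	// Write pack data to the temporary file and into the backend-requested
	// hash at the same time, so later corruption of the temporary file on
	// disk is caught when the hash is verified at upload time.
	hasher := md5.New()
	w := io.MultiWriter(tmp, hasher)

	if _, err := w.Write([]byte("encrypted blob data")); err != nil {
		panic(err)
	}
	fmt.Printf("backend hash: %x\n", hasher.Sum(nil))
}
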
Ryan Hitchman
77bf148460 backup: add --dry-run/-n flag to show what would happen.
This can be used to check how large a backup is or validate exclusions.
It does not actually write any data to the underlying backend. This is
implemented as a simple overlay backend that accepts writes without
forwarding them, passes reads through, and generally does the minimum
necessary to pretend that progress is actually happening.

Fixes #1542

Example usage:

$ restic -vv --dry-run . | grep add
new       /changelog/unreleased/issue-1542, saved in 0.000s (350 B added)
modified  /cmd/restic/cmd_backup.go, saved in 0.000s (16.543 KiB added)
modified  /cmd/restic/global.go, saved in 0.000s (0 B added)
new       /internal/backend/dry/dry_backend_test.go, saved in 0.000s (3.866 KiB added)
new       /internal/backend/dry/dry_backend.go, saved in 0.000s (3.744 KiB added)
modified  /internal/backend/test/tests.go, saved in 0.000s (0 B added)
modified  /internal/repository/repository.go, saved in 0.000s (20.707 KiB added)
modified  /internal/ui/backup.go, saved in 0.000s (9.110 KiB added)
modified  /internal/ui/jsonstatus/status.go, saved in 0.001s (11.055 KiB added)
modified  /restic, saved in 0.131s (25.542 MiB added)
Would add to the repo: 25.892 MiB
2021-08-04 21:19:29 +02:00
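
A much simplified sketch of such an overlay backend; the two-method
backend interface and the dryBackend/memBackend types are illustrative
stand-ins for restic's real backend interface and its dry-run wrapper:

package main

import (
	"context"
	"fmt"
	"io"
	"strings"
)

// backend is a two-method stand-in for restic's much larger backend
// interface.
type backend interface {
	Save(ctx context.Context, name string, rd io.Reader) error
	Load(ctx context.Context, name string) (io.ReadCloser, error)
}

// dryBackend accepts writes without forwarding them and passes reads
// through to the wrapped backend.
type dryBackend struct{ b backend }

func (d dryBackend) Save(ctx context.Context, name string, rd io.Reader) error {
	// pretend the upload happened: consume the data, store nothing
	_, err := io.Copy(io.Discard, rd)
	return err
}

func (d dryBackend) Load(ctx context.Context, name string) (io.ReadCloser, error) {
	return d.b.Load(ctx, name) // reads still hit the real backend
}

// memBackend is a trivial in-memory "real" backend for the demo.
type memBackend map[string]string

func (m memBackend) Save(ctx context.Context, name string, rd io.Reader) error {
	data, err := io.ReadAll(rd)
	m[name] = string(data)
	return err
}

func (m memBackend) Load(ctx context.Context, name string) (io.ReadCloser, error) {
	return io.NopCloser(strings.NewReader(m[name])), nil
}

func main() {
	var be backend = dryBackend{b: memBackend{"config": "existing data"}}
	_ = be.Save(context.Background(), "data/abc", strings.NewReader("new pack"))

	rc, _ := be.Load(context.Background(), "config")
	data, _ := io.ReadAll(rc)
	fmt.Println(string(data)) // reads keep working, nothing was written
}
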
Alexander Neumann
aef3658a5f Address review comments 2021-01-30 20:02:37 +01:00
Alexander Neumann
16313bfcc9 errcheck: Add error check for MergeFinalIndexes() 2021-01-30 20:02:37 +01:00
Alexander Neumann
75f53955ee errcheck: Add error checks
Most added checks are straightforward.
2021-01-30 20:02:37 +01:00
Michael Eischer
a12c5f1d37 repository: move otherwise unused LoadIndex to tests 2020-12-22 22:36:18 +01:00
Michael Eischer
24474a36f4 repository: deduplicate index loading implementation 2020-12-22 22:36:18 +01:00
Alexander Neumann
36c5d39c2c Fix issues reported by semgrep 2020-12-11 09:41:59 +01:00
Alexander Neumann
7facc8ccc1
Merge pull request #2505 from aawsome/fix-repo-configfile
Fix repo configfile
2020-12-07 07:52:37 +01:00
Alexander Weiss
aa7a5f19c2 Use BlobHandle in index methods 2020-11-22 20:41:12 +01:00
Alexander Weiss
c3ddde9e7d Return hdrSize in ListPack 2020-11-21 22:13:54 +01:00
Alexander Weiss
43732bb885 Add CreateIndexFromPacks() 2020-11-15 07:04:51 +01:00
Alexander Weiss
fd33030556 Use in-memory index to rebuild index in prune 2020-11-06 20:23:30 +01:00