Commit Graph

20 Commits

Author SHA1 Message Date
Alexander Neumann
99f7fd74e3 backend: Improve Save()
As mentioned in issue [#1560](https://github.com/restic/restic/pull/1560#issuecomment-364689346)
this changes the signature for `backend.Save()`. It now takes a
parameter of interface type `RewindReader`, so that the backend
implementations or our `RetryBackend` middleware can reset the reader to
the beginning and then retry an upload operation.

The `RewindReader` interface also provides a `Length()` method, which is
used in the backend to get the size of the data to be saved. This
removes several ugly hacks we had to do to pull the size back out of the
`io.Reader` passed to `Save()` before. The `s3` and `rest` backends
actively use this.
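
A minimal sketch of what such an interface and a rewindable in-memory implementation could look like (names assumed from the description above, not necessarily the exact restic definitions):

```go
// Sketch only: an interface in the spirit of the RewindReader described
// above, plus an in-memory implementation that a retrying middleware
// could rewind before re-uploading.
package backend

import (
	"bytes"
	"io"
)

type RewindReader interface {
	io.Reader

	// Rewind resets the reader so the data can be read again from the start.
	Rewind() error

	// Length returns the total number of bytes the reader delivers.
	Length() int64
}

type byteReader struct {
	*bytes.Reader
	data []byte
}

// NewByteReader wraps a byte slice in a RewindReader.
func NewByteReader(data []byte) RewindReader {
	return &byteReader{Reader: bytes.NewReader(data), data: data}
}

func (r *byteReader) Rewind() error {
	_, err := r.Reader.Seek(0, io.SeekStart)
	return err
}

func (r *byteReader) Length() int64 {
	return int64(len(r.data))
}
```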
2018-03-03 15:49:44 +01:00
Alexander Neumann
29da86b473 Merge pull request #1623 from restic/backend-relax-restrictions
backend: Relax requirement for new files
2018-02-18 12:56:52 +01:00
Alexander Neumann
b5062959c8 backend: Relax requirement for new files
Before, all backend implementations were required to return an error if
the file that is to be written already exists in the backend. For most
backends, that meant making a request (e.g. via HTTP) and returning an
error when the file already exists.

This is not accurate: the file could have been created between the HTTP
request that tests for it and the start of the write. In addition, apart
from the `config` file in the repo, all other files have pseudo-random
names with a very low probability of a collision. And even if a file
name is written again, the way the restic repo is structured means that
the same content is placed there again, which is not a problem, just not
very efficient.

So, this commit relaxes the requirement to return an error when the file
in the backend already exists, which allows reducing the number of API
requests and thereby the latency for remote backends.
2018-02-17 22:39:18 +01:00
Igor Fedorenko
d58ae43317 Reworked Backend.Load API to retry errors during ongoing download
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
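
A rough sketch of the idea, with hypothetical names: when Load takes the consumer as a callback, a retrying wrapper can re-run the whole download, including the consumer, if an error occurs partway through the stream.

```go
// Sketch only, with hypothetical names (not restic's actual API).
package backend

import (
	"context"
	"io"
)

// LoadFn is the shape of a callback-style Load: open the named blob and hand
// an io.Reader to fn.
type LoadFn func(ctx context.Context, name string, fn func(rd io.Reader) error) error

// loadWithRetry retries the complete download up to tries times.
func loadWithRetry(ctx context.Context, load LoadFn, name string, tries int,
	fn func(rd io.Reader) error) error {

	var err error
	for i := 0; i < tries; i++ {
		if err = load(ctx, name, fn); err == nil {
			return nil
		}
	}
	return err
}
```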
2018-02-16 21:12:14 -05:00
Alexander Neumann
5dc8d3588d GS: Use generic http transport
During the development of #1524 I discovered that the Google Cloud
Storage backend did not yet use the HTTP transport, so things such as
bandwidth limiting did not work. This commit does the necessary magic to
make the GS library use our HTTP transport.
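
One way such wiring can look with the google.golang.org/api client, as a hedged sketch (restic's actual plumbing may differ, and the transport still needs to carry OAuth2 credentials):

```go
// Sketch only: making the GCS client use a caller-supplied http.RoundTripper,
// e.g. one that applies bandwidth limits.
package gs

import (
	"context"
	"net/http"

	"google.golang.org/api/option"
	storage "google.golang.org/api/storage/v1"
)

func newService(ctx context.Context, rt http.RoundTripper) (*storage.Service, error) {
	client := &http.Client{Transport: rt}
	return storage.NewService(ctx, option.WithHTTPClient(client))
}
```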
2018-01-27 20:12:34 +01:00
Alexander Neumann
e9ea268847 Change List() implementation for all backends 2018-01-21 21:15:09 +01:00
Alexander Neumann
7d8765a937 backend: Only return top-level files for most dirs
Fixes #1478
2017-12-14 19:14:16 +01:00
George Armhold
d069ee31b2 GS backend: limit http concurrency in Save(), Stat(), Test(), Remove(), List()
as per discussion in PR #1399
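
A minimal sketch of one common way to cap concurrency, using a buffered channel as a counting semaphore (illustrative names, not restic's exact code):

```go
// Sketch only: each request acquires a slot before talking to the backend,
// so at most `connections` requests run at the same time.
package gs

import "io"

type Backend struct {
	sem chan struct{} // capacity = maximum number of concurrent requests
}

func NewBackend(connections int) *Backend {
	return &Backend{sem: make(chan struct{}, connections)}
}

func (be *Backend) Save(name string, rd io.Reader) error {
	be.sem <- struct{}{}        // acquire a slot
	defer func() { <-be.sem }() // release it when the request finishes
	// ... perform the actual upload of rd here ...
	_ = rd // placeholder so the sketch compiles
	return nil
}
```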
2017-10-31 08:01:43 -04:00
Michael Pratt
9fa4f5eb6b gs: disable resumable uploads
By default, the GCS Go packages have an internal "chunk size" of 8MB,
used for blob uploads.

Media().Do() will buffer a full 8MB from the io.Reader (or less if EOF
is reached) then write that full 8MB to the network all at once.

This behavior does not play nicely with --limit-upload, which only
limits the Reader passed to Media. While the long-term average upload
rate will be correctly limited, the actual network bandwidth will be
very spiky.

e.g., if an 8MB/s connection is limited to 1MB/s, Media().Do() will
spend 8s reading from the rate-limited reader (performing no network
requests), then 1s writing to the network at 8MB/s.

This is bad for network connections hurt by full-speed uploads,
particularly when writing 8MB will take several seconds.

Disable resumable uploads entirely by setting the chunk size to zero.
This causes the io.Reader to be passed further down the request stack,
where there is less (but still some) buffering.
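
Roughly, the change amounts to something like the following (illustrative names, not the exact restic code):

```go
// Sketch only: a chunk size of zero disables the resumable upload path, so
// the reader is streamed instead of being buffered in 8MB chunks.
package gs

import (
	"io"

	"google.golang.org/api/googleapi"
	storage "google.golang.org/api/storage/v1"
)

func save(service *storage.Service, bucket, name string, rd io.Reader) error {
	obj := &storage.Object{Name: name}
	_, err := service.Objects.Insert(bucket, obj).
		Media(rd, googleapi.ChunkSize(0)).
		Do()
	return err
}
```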

My connection is around 1.5MB/s up, with nominal ~15ms ping times to
8.8.8.8.

Without this change, --limit-upload 1024 results in several seconds of
~200ms ping times (uploading), followed by several seconds of ~15ms ping
times (reading from rate-limited reader). A bandwidth monitor reports
this as several seconds of ~1.5MB/s followed by several seconds of
0.0MB/s.

With this change, --limit-upload 1024 results in ~20ms ping times and
the bandwidth monitor reports a constant ~1MB/s.

I've elected to make this change unconditionally rather than only when
--limit-upload is set, because resumable uploads shouldn't provide much
benefit anyway: restic already uploads mostly small blobs and already
has a retry mechanism.

--limit-download is not affected by this problem, as Get().Download()
returns the real http.Response.Body without any internal buffering.

Updates #1216
2017-10-17 21:12:04 -07:00
Alexander Neumann
e56370eb5b Remove Deleter interface 2017-10-14 16:04:29 +02:00
Herbert
3473c3f7b6 Remove all dot-imports 2017-10-02 15:06:39 +02:00
Michael Pratt
fa0be82da8 gs: allow backend creation without storage.buckets.get
If the service account used with restic does not have the
storage.buckets.get permission (in the "Storage Admin" role), Create
cannot use Get to determine if the bucket is accessible.

Rather than always trying to create the bucket on Get error, gracefully
fall back to assuming the bucket is accessible. If it is, restic init
will complete successfully. If it is not, it will fail on a later call.
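
A sketch of the shape of this fallback, with illustrative names (the real code may differ in detail):

```go
// Sketch only: if Get fails with a permission error, assume the bucket is
// accessible instead of unconditionally creating it; a real access problem
// will surface on a later call.
package gs

import (
	"net/http"

	"google.golang.org/api/googleapi"
	storage "google.golang.org/api/storage/v1"
)

func ensureBucket(service *storage.Service, projectID, bucket string) error {
	if _, err := service.Buckets.Get(bucket).Do(); err != nil {
		if e, ok := err.(*googleapi.Error); ok && e.Code == http.StatusForbidden {
			// Missing storage.buckets.get: assume the bucket exists and is
			// accessible.
			return nil
		}
		// The bucket really seems to be missing: try to create it.
		_, err = service.Buckets.Insert(projectID, &storage.Bucket{Name: bucket}).Do()
		return err
	}
	return nil
}
```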

Here is what init looks like now in different cases.

Service account without "Storage Admin":

Bucket exists and is accessible (this is the case that didn't work
before):

$ ./restic init -r gs:this-bucket-does-exist:/
enter password for new backend:
enter password again:
created restic backend c02e2edb67 at gs:this-bucket-does-exist:/

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

Bucket exists but is not accessible:

$ ./restic init -r gs:this-bucket-does-exist:/
enter password for new backend:
enter password again:
create key in backend at gs:this-bucket-does-exist:/ failed:
service.Objects.Insert: googleapi: Error 403:
my-service-account@myproject.iam.gserviceaccount.com does not have
storage.objects.create access to object this-bucket-exists/keys/0fa714e695c8ecd58cb467cdeb04d36f3b710f883496a90f23cae0315daf0b93., forbidden

Bucket does not exist:

$ ./restic init -r gs:this-bucket-does-not-exist:/
create backend at gs:this-bucket-does-not-exist:/ failed:
service.Buckets.Insert: googleapi: Error 403:
my-service-account@myproject.iam.gserviceaccount.com does not have storage.buckets.create access to bucket this-bucket-does-not-exist., forbidden

Service account with "Storage Admin":

Bucket exists and is accessible: Same

Bucket exists but is not accessible: Same. Previously this would fail
when Create tried to create the bucket. Now it fails when trying to
create the keys.

Bucket does not exist:

$ ./restic init -r gs:this-bucket-does-not-exist:/
enter password for new backend:
enter password again:
created restic backend c3c48b481d at gs:this-bucket-does-not-exist:/

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
2017-09-25 22:25:51 -07:00
Michael Pratt
3b2106ed30 gs: document required permissions
In the manual, state which standard roles the service account must have
to work correctly, as well as the specific permissions required, for
those who want to create even more restrictive custom roles.
2017-09-24 11:25:57 -07:00
Michael Pratt
5f4f997126 gs: minor comment cleanups
* Remove a reference to S3.
* Config can only be used for GCS, not other "gcs compatible servers".
* Make comments complete sentences.
2017-09-24 10:10:56 -07:00
Alexander Neumann
3b6a580b32 backend: Make pagination for List configurable 2017-09-18 12:01:54 +02:00
Alexander Neumann
40edf00182 gs: implement pagination 2017-09-17 11:08:51 +02:00
Alexander Neumann
c35518a865 Azure/GS: Remove ReadDir() 2017-09-17 11:05:30 +02:00
Michael Pratt
9537bc561d gs: fix nil dereference
info can be nil if err != nil, resulting in a nil dereference while
logging:

$ # GCS config
$ ./restic init
debug enabled
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x935947]

goroutine 1 [running]:
github.com/restic/restic/internal/backend/gs.(*Backend).Save(0xc420012690, 0xe84e80, 0xc420010448, 0xb57149, 0x3, 0xc4203fc140, 0x40, 0xe7be40, 0xc4201d8f90, 0xa0, ...)
	src/github.com/restic/restic/internal/backend/gs/gs.go:226 +0x6d7
github.com/restic/restic/internal/repository.AddKey(0xe84e80, 0xc420010448, 0xc4202f0360, 0xc42000a1b0, 0x4, 0x0, 0xa55b60, 0xc4203043e0, 0xa55420)
	src/github.com/restic/restic/internal/repository/key.go:235 +0x4a1
github.com/restic/restic/internal/repository.createMasterKey(0xc4202f0360, 0xc42000a1b0, 0x4, 0xa55420, 0xc420304370, 0x6a6070)
	src/github.com/restic/restic/internal/repository/key.go:62 +0x60
github.com/restic/restic/internal/repository.(*Repository).init(0xc4202f0360, 0xe84e80, 0xc420010448, 0xc42000a1b0, 0x4, 0x1, 0xc42030a440, 0x40, 0x32a4573d3d9eb5, 0x0, ...)
	src/github.com/restic/restic/internal/repository/repository.go:403 +0x5d
github.com/restic/restic/internal/repository.(*Repository).Init(0xc4202f0360, 0xe84e80, 0xc420010448, 0xc42000a1b0, 0x4, 0xe84e40, 0xc42004ad80)
	src/github.com/restic/restic/internal/repository/repository.go:397 +0x12c
main.runInit(0xc420018072, 0x16, 0x0, 0x0, 0x0, 0xe84e40, 0xc42004ad80, 0xc42000a1b0, 0x4, 0xe7dac0, ...)
	src/github.com/restic/restic/cmd/restic/cmd_init.go:47 +0x2a4
main.glob..func9(0xeb5000, 0xedad70, 0x0, 0x0, 0x0, 0x0)
	src/github.com/restic/restic/cmd/restic/cmd_init.go:20 +0x8e
github.com/restic/restic/vendor/github.com/spf13/cobra.(*Command).execute(0xeb5000, 0xedad70, 0x0, 0x0, 0xeb5000, 0xedad70)
	src/github.com/restic/restic/vendor/github.com/spf13/cobra/command.go:649 +0x457
github.com/restic/restic/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xeb3e00, 0xc420011650, 0xa55b60, 0xc420011660)
	src/github.com/restic/restic/vendor/github.com/spf13/cobra/command.go:728 +0x339
github.com/restic/restic/vendor/github.com/spf13/cobra.(*Command).Execute(0xeb3e00, 0x25, 0xc4201a7eb8)
	src/github.com/restic/restic/vendor/github.com/spf13/cobra/command.go:687 +0x2b
main.main()
	src/github.com/restic/restic/cmd/restic/main.go:72 +0x268

(The error was likely because I had just enabled the GCS API. Subsequent
runs were fine.)
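
The fix follows the usual Go pattern of checking the error before touching the returned value; a minimal sketch with illustrative names:

```go
// Sketch only: check the error before using the returned object, since info
// can be nil when err is non-nil.
package gs

import (
	"fmt"
	"log"

	storage "google.golang.org/api/storage/v1"
)

func insert(call *storage.ObjectsInsertCall, name string) error {
	info, err := call.Do()
	if err != nil {
		return fmt.Errorf("service.Objects.Insert: %w", err)
	}
	log.Printf("%s stored, %d bytes", name, info.Size) // safe: err == nil here
	return nil
}
```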
2017-08-27 21:36:04 -07:00
Alexander Neumann
d9a5b9178e gs: Rework path initialization 2017-08-06 21:47:56 +02:00
Dipta Das
ba75a3884c Add Google Cloud Storage as backend
Environment variables:
GOOGLE_PROJECT_ID=gcp-project-id
GOOGLE_APPLICATION_CREDENTIALS=path-to-json-file

Environment variables for test:
RESTIC_TEST_GS_PROJECT_ID=gcp-project-id
RESTIC_TEST_GS_APPLICATION_CREDENTIALS=path-to-json-file
RESTIC_TEST_GS_REPOSITORY=gs:us-central1/test-bucket

Init repository:
$ restic -r gs:bucket:/[prefix] init
2017-08-06 21:47:55 +02:00