We previously added code to fix the issue of chaining credentials; we
no longer need this, since upstream minio-go already includes the
relevant change.
Before, creating a new repo via REST would use the default HTTP client,
which is not a problem unless the server uses HTTPS and a TLS
certificate which isn't signed by a CA in the system's CA store. In this
case, all commands work except the 'init' command, which fails with a
message like "invalid certificate".
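For illustration, here is a minimal sketch (not restic's actual code;
the helper name and CA-file handling are hypothetical) of wiring an
extra root CA into an http.Client, the kind of client the REST backend
needs instead of the default one:

    package rest

    import (
        "crypto/tls"
        "crypto/x509"
        "errors"
        "io/ioutil"
        "net/http"
    )

    // newClientWithRootCA returns an http.Client that trusts the CA
    // certificate in caFile, so servers with privately signed TLS
    // certificates can still be verified.
    func newClientWithRootCA(caFile string) (*http.Client, error) {
        pem, err := ioutil.ReadFile(caFile)
        if err != nil {
            return nil, err
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pem) {
            return nil, errors.New("no certificates found in " + caFile)
        }
        return &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: pool},
            },
        }, nil
    }

Note that setting RootCAs replaces the system pool for this client; a
real implementation might append the custom CA to a copy of the system
pool instead.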
Chaining failed because the chaining provider only looked for a
subsequent credentials provider after an error. Write a new chaining
provider which also proceeds to fetch new credentials in situations
where a provider does not return an error, but instead returns no keys
at all.
Fixes https://github.com/restic/restic/issues/1422
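For illustration, a minimal sketch of the fixed behavior, written
against minio-go's credentials.Provider interface; this Chain type and
its fields are assumptions for the sketch, not the exact upstream
implementation:

    package s3

    import "github.com/minio/minio-go/pkg/credentials"

    // Chain moves on to the next provider not only on error, but also
    // when a provider succeeds yet yields empty keys.
    type Chain struct {
        Providers []credentials.Provider
        curr      credentials.Provider
    }

    // Retrieve returns the credentials of the first provider that
    // yields non-empty keys, falling back to anonymous access.
    func (c *Chain) Retrieve() (credentials.Value, error) {
        for _, p := range c.Providers {
            creds, _ := p.Retrieve()
            // Skip providers that return no keys, even without error.
            if creds.AccessKeyID == "" && creds.SecretAccessKey == "" {
                continue
            }
            c.curr = p
            return creds, nil
        }
        // No provider had keys: fall back to anonymous credentials.
        return credentials.Value{SignerType: credentials.SignatureAnonymous}, nil
    }

    // IsExpired reports whether the current provider's keys expired.
    func (c *Chain) IsExpired() bool {
        return c.curr == nil || c.curr.IsExpired()
    }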
Move the comment regarding the problematic List() backend API: it's
s3's ListObjects that has a problem, NOT swift's ObjectsWalk.
As per discussion in PR #1399.
This is a fix for the following situation (gh-1188):
List() grabs a semaphore token upon entry, starts a goroutine, and
does not release the token until the routine exits (via a defer).
The goroutine iterates over the results from ListCurrentObjects(),
sending them one at a time to a channel, where they are ultimately
processed by be.Load().
Since be.Load() also needs a token, this will result in deadlock if
b2.connections=1.
This fix changes List() so that the token is only held during the call
to ListCurrentObjects().
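Here is a self-contained sketch of that pattern, with a toy semaphore
standing in for restic's internal one and listObjects standing in for
ListCurrentObjects:

    package main

    import (
        "context"
        "fmt"
    )

    // semaphore is a trivial counting semaphore, an illustrative
    // stand-in for restic's internal semaphore type.
    type semaphore chan struct{}

    func (s semaphore) GetToken()     { s <- struct{}{} }
    func (s semaphore) ReleaseToken() { <-s }

    // list holds the token only for the blocking listing call itself,
    // then releases it before results are streamed to the channel, so
    // a consumer that also needs a token (like be.Load) cannot
    // deadlock when only one connection is allowed.
    func list(ctx context.Context, sem semaphore, listObjects func() []string) <-chan string {
        ch := make(chan string)
        go func() {
            defer close(ch)

            sem.GetToken()
            objs := listObjects() // e.g. B2's ListCurrentObjects
            sem.ReleaseToken()    // release before sending, not via defer on exit

            for _, o := range objs {
                select {
                case ch <- o:
                case <-ctx.Done():
                    return
                }
            }
        }()
        return ch
    }

    func main() {
        sem := make(semaphore, 1) // the b2.connections=1 case
        for name := range list(context.Background(), sem, func() []string {
            return []string{"keys/0", "data/1"}
        }) {
            // The consumer takes the token itself, as be.Load does.
            sem.GetToken()
            fmt.Println("load", name)
            sem.ReleaseToken()
        }
    }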
- be explicit when discarding returned errors from .Close(), etc.
- remove named return values from funcs when naked return not used
- fix some "err" shadowing when redeclaration not needed
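For illustration, a tiny hypothetical example of the first two
cleanups:

    package main

    import "os"

    // open no longer uses named return values, since no naked return
    // appears in the function body.
    func open(name string) (*os.File, error) {
        return os.Open(name)
    }

    func main() {
        f, err := open("/etc/hostname")
        if err != nil {
            return
        }
        // Be explicit that the error from Close is being discarded:
        _ = f.Close()
    }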
Sometimes an S3 ListObjects call for a directory includes an entry for
that directory itself. The restic S3 backend doesn't expect that and
returns an error.
Symptom is:
ReadDir: invalid key name restic/key/, removing prefix
restic/key/ yielded empty string
I'm not sure when s3 does that; I'm unable to reproduce it myself.
But in any case, it seems correct to ignore that when it happens.
Fixes #1068
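A minimal sketch of the workaround (names and listing data are
illustrative): when stripping the prefix from a returned key yields an
empty name, skip the entry instead of returning an error:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        prefix := "restic/key/"
        // Hypothetical listing result that includes the directory.
        keys := []string{"restic/key/", "restic/key/0fa714e6"}

        var entries []string
        for _, key := range keys {
            name := strings.TrimPrefix(key, prefix)
            if name == "" {
                // S3 sometimes lists the directory itself; ignore it
                // instead of failing with "invalid key name".
                continue
            }
            entries = append(entries, name)
        }
        fmt.Println(entries) // [0fa714e6]
    }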
By default, the GCS Go packages have an internal "chunk size" of 8MB,
used for blob uploads.
Media().Do() will buffer a full 8MB from the io.Reader (or less if EOF
is reached), then write that full 8MB to the network all at once.
This behavior does not play nicely with --limit-upload, which only
limits the Reader passed to Media. While the long-term average upload
rate will be correctly limited, the actual network bandwidth will be
very spiky.
e.g., if an 8MB/s connection is limited to 1MB/s, Media().Do() will
spend 8s reading from the rate-limited reader (performing no network
requests), then 1s writing to the network at 8MB/s.
This is bad for network connections hurt by full-speed uploads,
particularly when writing 8MB will take several seconds.
Disable resumable uploads entirely by setting the chunk size to zero.
This causes the io.Reader to be passed further down the request stack,
where there is less (but still some) buffering.
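A minimal sketch of what this looks like with the
google.golang.org/api storage package (the function and parameter
names are assumptions, not restic's exact code):

    package gs

    import (
        "io"

        "google.golang.org/api/googleapi"
        storage "google.golang.org/api/storage/v1"
    )

    // upload passes googleapi.ChunkSize(0), which disables the
    // resumable, 8MB-buffered upload path, so rd is handed further
    // down the request stack instead of being drained into a buffer
    // first.
    func upload(service *storage.Service, bucket, name string, rd io.Reader) error {
        call := service.Objects.Insert(bucket, &storage.Object{Name: name})
        _, err := call.Media(rd, googleapi.ChunkSize(0)).Do()
        return err
    }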
My connection is around 1.5MB/s up, with nominal ~15ms ping times to
8.8.8.8.
Without this change, --limit-upload 1024 results in several seconds of
~200ms ping times (uploading), followed by several seconds of ~15ms ping
times (reading from rate-limited reader). A bandwidth monitor reports
this as several seconds of ~1.5MB/s followed by several seconds of
0.0MB/s.
With this change, --limit-upload 1024 results in ~20ms ping times and
the bandwidth monitor reports a constant ~1MB/s.
I've elected to make this change unconditional, regardless of
--limit-upload, because resumable uploads shouldn't provide much
benefit anyway: restic already uploads mostly small blobs and already
has a retry mechanism.
--limit-download is not affected by this problem, as Get().Download()
returns the real http.Response.Body without any internal buffering.
Updates #1216
This PR adds the ability to chain credentials providers, such that
restic attempts to honor credentials from multiple different sources.
Currently supported mechanisms are:
- static (user-provided)
- IAM profile (only valid inside configured EC2 instances)
- Standard AWS envs (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
- Standard Minio envs (MINIO_ACCESS_KEY, MINIO_SECRET_KEY)
Refer to https://github.com/restic/restic/issues/1341
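A minimal sketch of such a chain using minio-go's credentials package;
the ordering shown is illustrative, and each provider is tried in turn
until one yields keys:

    package s3

    import (
        "net/http"

        "github.com/minio/minio-go/pkg/credentials"
    )

    // newChain builds a chained credentials source from the supported
    // mechanisms listed above.
    func newChain(key, secret string) *credentials.Credentials {
        return credentials.NewChainCredentials([]credentials.Provider{
            &credentials.EnvAWS{}, // AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
            &credentials.Static{ // user-provided keys
                Value: credentials.Value{
                    AccessKeyID:     key,
                    SecretAccessKey: secret,
                },
            },
            &credentials.EnvMinio{}, // MINIO_ACCESS_KEY / MINIO_SECRET_KEY
            &credentials.IAM{Client: &http.Client{}}, // EC2 instance profile
        })
    }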
If the service account used with restic does not have the
storage.buckets.get permission (in the "Storage Admin" role), Create
cannot use Get to determine if the bucket is accessible.
Rather than always trying to create the bucket on Get error, gracefully
fall back to assuming the bucket is accessible. If it is, restic init
will complete successfully. If it is not, it will fail on a later call.
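As a sketch, one way the described behavior can be implemented
(illustrative names; it assumes GCS answers 403 when
storage.buckets.get is missing, and is not restic's exact code):

    package gs

    import (
        "net/http"

        "google.golang.org/api/googleapi"
        storage "google.golang.org/api/storage/v1"
    )

    // ensureBucket treats a 403 from Buckets.Get as "we may lack
    // storage.buckets.get": assume the bucket is accessible and let a
    // later call fail if it is not. Only on other errors (e.g. 404)
    // does it try to create the bucket.
    func ensureBucket(service *storage.Service, projectID, bucketName string) error {
        _, err := service.Buckets.Get(bucketName).Do()
        if err == nil {
            return nil // bucket exists and is visible to this account
        }
        if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == http.StatusForbidden {
            // Probably missing storage.buckets.get: assume accessible.
            return nil
        }
        // Bucket likely does not exist: try to create it.
        _, err = service.Buckets.Insert(projectID, &storage.Bucket{Name: bucketName}).Do()
        return err
    }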
Here is what init looks like now in different cases.

Service account without "Storage Admin":

Bucket exists and is accessible (this is the case that didn't work
before):

    $ ./restic init -r gs:this-bucket-does-exist:/
    enter password for new backend:
    enter password again:
    created restic backend c02e2edb67 at gs:this-bucket-does-exist:/
    Please note that knowledge of your password is required to access
    the repository. Losing your password means that your data is
    irrecoverably lost.

Bucket exists but is not accessible:

    $ ./restic init -r gs:this-bucket-does-exist:/
    enter password for new backend:
    enter password again:
    create key in backend at gs:this-bucket-does-exist:/ failed:
    service.Objects.Insert: googleapi: Error 403:
    my-service-account@myproject.iam.gserviceaccount.com does not have
    storage.objects.create access to object this-bucket-exists/keys/0fa714e695c8ecd58cb467cdeb04d36f3b710f883496a90f23cae0315daf0b93., forbidden

Bucket does not exist:

    $ ./restic init -r gs:this-bucket-does-not-exist:/
    create backend at gs:this-bucket-does-not-exist:/ failed:
    service.Buckets.Insert: googleapi: Error 403:
    my-service-account@myproject.iam.gserviceaccount.com does not have storage.buckets.create access to bucket this-bucket-does-not-exist., forbidden

Service account with "Storage Admin":

Bucket exists and is accessible: Same.

Bucket exists but is not accessible: Same. Previously this would fail
when Create tried to create the bucket. Now it fails when trying to
create the keys.

Bucket does not exist:

    $ ./restic init -r gs:this-bucket-does-not-exist:/
    enter password for new backend:
    enter password again:
    created restic backend c3c48b481d at gs:this-bucket-does-not-exist:/
    Please note that knowledge of your password is required to access
    the repository. Losing your password means that your data is
    irrecoverably lost.
In the manual, state which standard roles the service account must
have to work correctly, as well as the specific permissions required,
for users who want to create even narrower custom roles.