Previously such files (typically log files) wouldn't be backed up at
all!
The proper behaviour is to back up what we can, and warn the operator
that the file is possibly incomplete. But it is a warning, not an error.
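A minimal sketch of this behaviour, assuming a hypothetical helper
(readForBackup is not restic's actual code): read as much of the file
as possible and turn a short read into a warning instead of an error.

    package sketch

    import (
        "fmt"
        "io/ioutil"
        "os"
    )

    // readForBackup reads as much of the file as it can. A read error
    // (e.g. a log file truncated while being read) only produces a
    // warning, and the partial content is still returned for backup.
    func readForBackup(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err // nothing to back up if the file can't be opened
        }
        defer f.Close()

        buf, err := ioutil.ReadAll(f)
        if err != nil {
            // warning, not an error: back up what we have
            fmt.Fprintf(os.Stderr,
                "warning: %s: file may be incomplete in the backup: %v\n",
                path, err)
        }
        return buf, nil
    }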
Closes #689
Since client.BucketExists was changed to return a separate 'found'
value instead of reporting an error when the bucket doesn't exist, the
error code path no longer implies a call to client.MakeBucket. So the
second part of the debug message, "...trying to create the bucket", no
longer applies.
Also, the return value was renamed from 'ok' to 'found', matching the
API documentation at
https://docs.minio.io/docs/golang-client-api-reference#BucketExists.
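For reference, a rough sketch of the new call pattern based on the
linked API documentation; the surrounding function and variable names
(ensureBucket, bucketname, location) are assumptions, not restic's
actual backend code:

    package sketch

    import "github.com/minio/minio-go"

    // ensureBucket creates the bucket only when BucketExists reports
    // it as missing; an error from BucketExists now always means a
    // real failure, not a missing bucket.
    func ensureBucket(client *minio.Client, bucketname, location string) error {
        found, err := client.BucketExists(bucketname)
        if err != nil {
            return err // e.g. connection or authentication problem
        }
        if !found {
            // the bucket simply doesn't exist yet, so create it
            return client.MakeBucket(bucketname, location)
        }
        return nil
    }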
* When a directory already exists, CreateDirAt returns an error stating so
* This means that the restoreMetadata step is skipped, so for directories which already exist, no file permissions, owners, groups, etc. will be restored on them
* Not returning the error if it's a "directory exists" error means the metadata will get restored (see the sketch after this list)
* It also removes the superfluous "error for ...: mkdir ...: file exists" messages
* This makes the behaviour of directories consistent with that of files (which always have their content & metadata restored, regardless of whether they existed or not)
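A short sketch of the resulting behaviour; the helper name and the
restoreMetadata call site are stand-ins, not restic's actual restorer
code:

    package sketch

    import "os"

    // createDirAt creates the directory but treats "already exists" as
    // success, so the caller continues to restore permissions, owner,
    // group and timestamps on it just as for newly created directories.
    func createDirAt(path string, mode os.FileMode) error {
        err := os.Mkdir(path, mode)
        if err != nil && !os.IsExist(err) {
            return err // a real error, e.g. missing parent or permission denied
        }
        return nil
    }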
This is subtle. A combination of a fast client disk (read: SSD) with lots
of files and a fast network connection to restic-server would suddenly
start getting lots of "dial tcp: connect: cannot assign requested
address" errors during the backup stage. Further inspection revealed that
the client machine was plagued with TCP sockets in TIME_WAIT state. When
the ephemeral port range was finally exhausted, no more sockets could be
opened, so restic would freak out.
To understand the magnitude of this problem: with ~18k ports and a default
timeout of 60 seconds, it means more than 300 HTTP connections per
second were created and torn down. Yeah, restic-server is that
fast. :)
As it turns out, this behavior was the product of two subtle issues:
1) The body of the HTTP response wasn't read completely with io.ReadFull()
at the end of the Load() function. This deactivated HTTP keepalive,
so already open connections were not reused, but closed instead, and
new ones opened for every new request. io.Copy(ioutil.Discard,
resp.Body) before resp.Body.Close() remedies this.
2) Even with the above fix, somehow having MaxIdleConnsPerHost at its
default value of 2 wasn't enough to stop reconnecting. It is hard to
understand why this would be so detrimental; it could even be some
subtle Go runtime bug. Anyhow, setting this value to match the
connection limit, as set by the connLimit global variable, finally
nails this ugly bug (both fixes are sketched below).
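A combined sketch of both fixes; connLimit and Load stand in for the
actual rest backend code and are assumptions, not the real
implementation:

    package sketch

    import (
        "io"
        "io/ioutil"
        "net/http"
    )

    // connLimit stands in for the global connection limit mentioned above.
    const connLimit = 10

    // client raises MaxIdleConnsPerHost from its default of 2 so that
    // all concurrent connections can be kept alive and reused.
    var client = &http.Client{
        Transport: &http.Transport{
            MaxIdleConnsPerHost: connLimit,
        },
    }

    // Load reads length bytes from the response and then drains the
    // rest of the body before closing it; without the drain the
    // connection is not returned to the idle pool, and a new one is
    // dialed for the next request.
    func Load(url string, length int) ([]byte, error) {
        resp, err := client.Get(url)
        if err != nil {
            return nil, err
        }
        defer func() {
            io.Copy(ioutil.Discard, resp.Body)
            resp.Body.Close()
        }()

        buf := make([]byte, length)
        _, err = io.ReadFull(resp.Body, buf)
        return buf, err
    }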
I fixed several other places where the response body wasn't read in
full (or at all). For example, json.NewDecoder() is also known not to
read the whole body of the response.
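The same draining pattern applied to JSON responses, again only as a
hedged sketch (getJSON is an assumed name): the decoder stops at the
end of the first JSON value, so the remainder of the body still has to
be discarded before closing.

    package sketch

    import (
        "encoding/json"
        "io"
        "io/ioutil"
        "net/http"
    )

    // getJSON decodes a JSON response and drains whatever the decoder
    // left unread, so the underlying connection can be reused.
    func getJSON(c *http.Client, url string, v interface{}) error {
        resp, err := c.Get(url)
        if err != nil {
            return err
        }
        defer func() {
            io.Copy(ioutil.Discard, resp.Body)
            resp.Body.Close()
        }()
        return json.NewDecoder(resp.Body).Decode(v)
    }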
Unfortunately, this is not over yet. :( The check command is firing up
to 40 simultaneous connections to the restic-server. Then, once again,
MaxIdleConnsPerHost is too low to support keepalive, and sockets in the
TIME_WAIT state pile up. But, as this kind of concurrency absolutely
kills the poor disk on the server side, this is a completely different
bug, then.
$ bin/restic forget /d 7 /w 4 /m 12
argument "/d" is not a snapshot ID, ignoring
argument "7" is not a snapshot ID, ignoring
argument "/w" is not a snapshot ID, ignoring
argument "4" is not a snapshot ID, ignoring
argument "/m" is not a snapshot ID, ignoring
could not find a snapshot for ID "12", ignoring
Equivalent to rsync's "-x" option.
Notes on the naming:
"--exclude-other-filesystems"
is used by Duplicity,
"--one-file-system"
is used by rsync and tar.
The latter should be more familiar to the user.
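A rough sketch of how such a check can be implemented on Unix-like
systems (not necessarily how restic does it): compare the device ID of
each item with the device ID of the directory the backup started from.

    package sketch

    import (
        "os"
        "syscall"
    )

    // deviceID returns the filesystem device ID of path (Unix only).
    func deviceID(path string) (uint64, error) {
        fi, err := os.Lstat(path)
        if err != nil {
            return 0, err
        }
        st, ok := fi.Sys().(*syscall.Stat_t)
        if !ok {
            return 0, os.ErrInvalid
        }
        return uint64(st.Dev), nil
    }

    // sameFilesystem reports whether path is on the filesystem
    // identified by rootDev; with --one-file-system, items for which
    // this returns false would be skipped.
    func sameFilesystem(rootDev uint64, path string) (bool, error) {
        dev, err := deviceID(path)
        if err != nil {
            return false, err
        }
        return dev == rootDev, nil
    }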
The fuse code kept adding snapshots to the top-level dir "snapshots". In
addition, snapshots with the same timestamp (same second) were not added
correctly; they will now be suffixed by an incrementing counter, e.g.:
dr-xr-xr-x 1 fd0 users 0 Sep 18 15:01 2016-09-18T15:01:44+02:00
dr-xr-xr-x 1 fd0 users 0 Sep 18 15:01 2016-09-18T15:01:48+02:00
dr-xr-xr-x 1 fd0 users 0 Sep 18 15:01 2016-09-18T15:01:48+02:00-1
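A small sketch of the suffixing scheme (uniqueName is an assumed helper
name, not the actual fuse code):

    package sketch

    import "fmt"

    // uniqueName returns base if it is still unused, otherwise base-1,
    // base-2, ... until a free name is found, and records the chosen
    // name as taken.
    func uniqueName(taken map[string]struct{}, base string) string {
        name := base
        for i := 1; ; i++ {
            if _, ok := taken[name]; !ok {
                break
            }
            name = fmt.Sprintf("%s-%d", base, i)
        }
        taken[name] = struct{}{}
        return name
    }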
Closes #624