This keeps the data we need about sequence numbers and object counts
persistently in the database. The sizeTracker is expanded into a
metadataTracker that handles multiple folders, and the Counts struct is
made protobuf serializable. It gains a Sequence field to track the
folder sequence number as well, and a collection of Counts becomes a
CountsSet (for serialization purposes).
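For orientation, a rough sketch of the shapes described; the real
structs are generated from proto definitions, so the fields and types
here are illustrative:

    package db

    // Counts and CountsSet as sketched from the description above;
    // not the actual generated code.
    type Counts struct {
        Files       int
        Directories int
        Symlinks    int
        Deleted     int
        Bytes       int64
        Sequence    int64  // folder sequence number, tracked per device
        DeviceID    []byte
    }

    type CountsSet struct {
        Counts  []Counts // one entry per device
        Created int64    // when this metadata was (re)built
    }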
The initial database scan is also a consistency check of the global
entries. This shouldn't strictly be necessary. Nonetheless I added a
created timestamp to the metadata and an interval to compare against
it. When the metadata is older than that interval, we drop it and
rebuild from scratch like we used to, performing the consistency check
in the process.
A new environment variable STCHECKDBEVERY can override this interval,
and can, for example, be set to zero to force the check immediately.
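A minimal sketch of reading such an override, assuming standard Go
duration syntax; the helper and the default value are hypothetical:

    package db

    import (
        "os"
        "time"
    )

    // defaultCheckDBEvery is a made-up default, for illustration only.
    const defaultCheckDBEvery = 30 * 24 * time.Hour

    // checkDBEvery returns the consistency check interval, honoring
    // the STCHECKDBEVERY override.
    func checkDBEvery() time.Duration {
        if v := os.Getenv("STCHECKDBEVERY"); v != "" {
            if d, err := time.ParseDuration(v); err == nil {
                return d // e.g. "0s" forces the check immediately
            }
        }
        return defaultCheckDBEvery
    }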
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4547
LGTM: imsodin
This removes a significant, complex chunk of database code. The
"replace" operation walked both the old and new in lockstep and made the
relevant changes to make the new situation correct. But since delta
indexes we pretty much never need this - we just used replace to drop
the existing data and start over.
This makes that explicit and removes the complexity.
(This is one of those things that would be annoying to make case
insensitive, while the actual "drop and then insert" that we do is
easier.)
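A sketch of the drop-then-insert shape, against a hypothetical
key/value store interface rather than the real database layer:

    package db

    // store is a stand-in for the real database layer.
    type store interface {
        PrefixKeys(prefix []byte) [][]byte
        Delete(key []byte)
        Put(key, val []byte)
    }

    // replace drops everything under prefix and inserts the new set.
    // Unlike the old lockstep walk, identical entries get re-inserted
    // and therefore bump the sequence number.
    func replace(db store, prefix []byte, files map[string][]byte) {
        for _, k := range db.PrefixKeys(prefix) {
            db.Delete(k)
        }
        for name, val := range files {
            key := append(append([]byte(nil), prefix...), name...)
            db.Put(key, val)
        }
    }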
This is fairly well unit tested...
The one change to the tests is to cover the fact that previously a
replace with something identical didn't bump the sequence number, while
removing everything and re-inserting obviously does. This is not
behavior we depend on anywhere.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4500
LGTM: imsodin, AudriusButkevicius
So, when first implementing the database layer I added panics on every
unexpected error condition mostly to be sure to flush out bugs and
inconsistencies. Then it became sort of standard, and we don't seem to
have many bugs here any more, so the panics are usually caused by things
like checksum errors on read. But it's not an optimal user experience to
crash all the time.
Here I've weeded out most of the panics, while retaining a few "can't
happen" ones like errors on marshalling and write that we really can't
recover from.
For the rest, I'm mostly treating any read error as "entry didn't
exist". This should mean we'll rescan the file and correct the info (if
scanning) or treat it as a new file and do conflict handling (when
pulling). In some cases things like our global stats may be slightly
incorrect until a restart, if a database entry suddenly goes missing
at runtime.
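As a sketch of that policy (the names and the interface are
hypothetical; the real code talks to the database directly):

    package db

    // reader is a stand-in for the database read path.
    type reader interface {
        Get(key []byte) ([]byte, error)
    }

    // getBytes treats any read error, including checksum errors, as
    // "entry didn't exist", letting scanning or pulling correct things.
    func getBytes(db reader, key []byte) ([]byte, bool) {
        bs, err := db.Get(key)
        if err != nil {
            return nil, false
        }
        return bs, true
    }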
All in all, I think this makes us a bit more robust and friendly without
introducing too many risks for the user. If the database is truly toast,
probably many other things on the system will be toast as well...
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4118
Harmonize how we use batches in the model, using ProtoSize() to judge
the actual weight of the entire batch instead of estimating. Use smaller
batches in the block map - I think we might have thought that
batch.Len() in leveldb was the batch size in bytes, but it's actually
the number of operations.
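A sketch of size-weighted batching, assuming items expose the gogo
protobuf generated ProtoSize() method; the threshold and function
names are illustrative:

    package model

    // protoSizer matches the method generated by gogo protobuf.
    type protoSizer interface {
        ProtoSize() int
    }

    const maxBatchSizeBytes = 1 << 20 // illustrative flush threshold

    // sendInBatches flushes whenever the accumulated encoded size,
    // not the operation count, crosses the threshold.
    func sendInBatches[T protoSizer](items []T, commit func([]T) error) error {
        var batch []T
        size := 0
        for _, it := range items {
            batch = append(batch, it)
            size += it.ProtoSize()
            if size >= maxBatchSizeBytes {
                if err := commit(batch); err != nil {
                    return err
                }
                batch, size = nil, 0
            }
        }
        if len(batch) > 0 {
            return commit(batch)
        }
        return nil
    }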
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4114
One more step on the path of the great refactoring. Touches rwfolder a
little bit since it uses the Lstat from fs as well, but mostly this is
just on the scanner as rwfolder is scheduled for a later refactor.
There are a couple of usages of fs.DefaultFilesystem that will in the
end become a filesystem injected from the top, but that comes later.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4070
LGTM: AudriusButkevicius, imsodin
After this change,
- Symlinks on Windows are always unsupported. Sorry.
- Symlinks are always enabled on other platforms. They are just a small
file like anything else. There is no need to special case them. If you
don't want to sync some symlinks, ignore them.
- The protocol doesn't differentiate between different "types" of
symlinks. If that distinction ever does become relevant the individual
devices can figure it out by looking at the destination when they
create the link.
It's backwards compatible in that all the old symlink types are still
understood to be symlinks, and the new SYMLINK type is equivalent to the
old SYMLINK_UNKNOWN, which was always a valid way to do it.
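Sketched in Go rather than proto, the compatibility rule looks roughly
like this; the type, names and numeric values are illustrative, not
the generated protocol code:

    package protocol

    type FileInfoType int32

    const (
        TypeFile             FileInfoType = 0
        TypeDirectory        FileInfoType = 1
        TypeSymlinkFile      FileInfoType = 2 // legacy
        TypeSymlinkDirectory FileInfoType = 3 // legacy
        TypeSymlink          FileInfoType = 4 // reuses the old SYMLINK_UNKNOWN value
    )

    // IsSymlink keeps backwards compatibility: all the old symlink
    // types are still understood to be symlinks.
    func (t FileInfoType) IsSymlink() bool {
        switch t {
        case TypeSymlinkFile, TypeSymlinkDirectory, TypeSymlink:
            return true
        }
        return false
    }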
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3962
LGTM: AudriusButkevicius
Also tweaks the proto definitions:
- [packed=false] on the block_indexes field to retain compatibility
with v0.14.16 and earlier.
- Uses the vendored protobuf package in include paths.
And, "build.go setup" will install the vendored protoc-gen-gogofast.
This should ensure that a proto rebuild isn't so dependent on whatever
version of the compiler and package the developer has installed...
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3864
We used to consider deleted files & directories 128 bytes large. After
the delta indexes change a bug slipped in where deleted files would be
weighted according to their old non-deleted size. Both ways are
incorrect (but the latter made it worse): if more data is deleted than
remains in the repo, the needSize can be greater than the globalSize,
resulting in a negative completion percentage.
This change makes it so that deleted items are zero bytes large, which
makes more sense. Instead we expose the number of files that we need to
delete as a separate field in the Completion() result, and hack the
percentage down to 95% complete if it was 100% complete but we need to
delete files. This latter part is sort of ugly, but necessary to give
the user some sort of feedback.
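In sketch form, with illustrative names, the calculation described
above:

    package model

    // completionPct: deleted items weigh zero bytes; a nominal 100%
    // is held back to 95% while deletes remain, to give feedback.
    func completionPct(needBytes, globalBytes int64, needDeletes int) float64 {
        pct := 100.0
        if globalBytes > 0 {
            pct = 100 * (1 - float64(needBytes)/float64(globalBytes))
        }
        if pct == 100 && needDeletes > 0 {
            pct = 95
        }
        return pct
    }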
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3556
Furthermore:
1. Cleans received configs, migrating them as they arrive.
2. Clears indexes of devices we no longer share the folder with.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3478
This adds a new nanoseconds field to the FileInfo, populates it during
scans and sets the non-truncated time in Chtimes calls.
The actual file modification time is defined as modified_s seconds +
modified_ns nanoseconds. It's expected that the modified_ns field is <
1e9 (that is, all whole seconds should go in the modified_s field) but
not really enforced. Given that it's an int32, the timestamp can be
adjusted by roughly ±2.1 seconds via the modified_ns field...
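A sketch of composing and splitting the modification time under that
definition; the helpers are illustrative, not the actual API:

    package protocol

    import "time"

    // modTime composes the definition above: modified_s whole seconds
    // plus modified_ns nanoseconds.
    func modTime(modifiedS int64, modifiedNs int32) time.Time {
        return time.Unix(modifiedS, int64(modifiedNs))
    }

    // splitModTime is the scan-side inverse, keeping modified_ns < 1e9.
    func splitModTime(t time.Time) (int64, int32) {
        return t.Unix(), int32(t.Nanosecond())
    }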
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3431
The previous commit loosened the locking around database updates.
Apparently that was not fine - what happens is that parallel updates
to the same file for different devices stomp on each other's updates to
the global index, leaving it missing one of the two devices.
This changes the BEP protocol to use protocol buffer serialization
instead of XDR, and therefore also the database format. The local
discovery protocol is also updated to be protocol buffer format.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3276
LGTM: AudriusButkevicius
This adds a metric for "committed items" to the database instance that I
use in the test code, and a couple of tests that ensure that scans that
don't change anything also don't commit anything.
There was a case in the scanner where we set the invalid bit on files
that are ignored, even though they were already ignored and had the
invalid bit set. I had assumed this would result in an extra database
commit, but it was in fact filtered out by the Set. Anyway, I think we
can save some work by not pushing that change to the Set at all.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3298
When doing prefix scans in the database, "foo" should not be considered
a prefix of "foo2". Instead, it should match "foo" exactly and also
strings with the prefix "foo/". This is more restrictive than what the
standard leveldb prefix scan does, so we add some code to enforce it.
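A sketch of the stricter match rule (the helper name is illustrative):

    package db

    import "strings"

    // matchesNamePrefix: "foo" matches "foo" exactly and anything
    // under "foo/", but not "foo2".
    func matchesNamePrefix(name, prefix string) bool {
        if prefix == "" {
            return true
        }
        return name == prefix || strings.HasPrefix(name, prefix+"/")
    }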
Also exposes the initialScanCompleted on the rwfolder for testing, and
changes it to be a channel (so we can wait for it from another
goroutine). Otherwise we can't be sure when the initial scan has
completed, and we need to wait for that or it might pick up changes
we're doing at an unexpected time.
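The channel idiom in sketch form, using the names from the text:

    package model

    type rwFolder struct {
        initialScanCompleted chan struct{}
    }

    func (f *rwFolder) scan() {
        // ... perform the initial scan ...
        close(f.initialScanCompleted) // wakes every waiter, exactly once
    }

    // A test then simply blocks until the scan is done:
    //
    //     <-f.initialScanCompleted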
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3067
This happens automatically in the background anyway, and it can take a
long time on low-powered devices, at an inconvenient moment. We just want
to get up and running as quickly as possible.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3000