At a high level, this is what I've done and why:
- I'm moving the protobuf generation for the `protocol`, `discovery` and
`db` packages to the modern alternatives, and using `buf` to generate
because it's nice and simple.
- After trying various approaches on how to integrate the new types with
the existing code, I opted for splitting off our own data model types
from the on-the-wire generated types. This means we can have a
`FileInfo` type with nicer ergonomics and lots of methods, while the
protobuf generated type stays clean and close to the wire protocol. It
does mean copying between the two when required, which certainly adds a
small amount of inefficiency. If we want to walk this back in the future
and use the raw generated type throughout, that's possible; this
approach, however, keeps the refactor smaller (!) as it doesn't change
everything about the type for everyone at the same time. (A rough
sketch of the split follows after this list.)
- I have simply removed in cold blood a significant number of old
database migrations. These depended on previous generations of generated
messages of various kinds and were annoying to support in the new
fashion. The oldest supported database version now is the one from
Syncthing 1.9.0 from Sep 7, 2020.
- I changed config structs to be regular manually defined structs.
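
As a rough illustration of the data model / wire split mentioned above,
here is a minimal, self-contained sketch. The type and function names
(`wireFileInfo`, `toWire`, `fileInfoFromWire`) are stand-ins for
illustration, not the actual identifiers in the code; the point is only
the explicit copying between the two representations:

```
package protocol

// wireFileInfo stands in for the protobuf-generated wire type
// (generated.FileInfo) for the purposes of this sketch.
type wireFileInfo struct {
	Name     string
	Size     int64
	Sequence int64
}

// FileInfo is the data model type: nicer ergonomics, lots of methods.
type FileInfo struct {
	Name     string
	Size     int64
	Sequence int64
}

// toWire copies the data model type into the wire representation.
func (f FileInfo) toWire() wireFileInfo {
	return wireFileInfo{Name: f.Name, Size: f.Size, Sequence: f.Sequence}
}

// fileInfoFromWire copies a received wire type into the data model type.
func fileInfoFromWire(w wireFileInfo) FileInfo {
	return FileInfo{Name: w.Name, Size: w.Size, Sequence: w.Sequence}
}

// IsEmpty is an example of a method that lives only on the data model
// type, keeping the generated type clean and close to the wire protocol.
func (f FileInfo) IsEmpty() bool { return f.Name == "" }
```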
For the sake of discussion, some things I tried that turned out not to
work...
### Embedding / wrapping
Embedding the protobuf generated structs in our existing types as a data
container and keeping our methods and stuff:
```
package protocol

type FileInfo struct {
	*generated.FileInfo
}
```
This generates a lot of problems because the internal shape of the
generated struct is quite different (different names, different types,
more pointers), because initializing it doesn't work like you'd expect
(i.e., you end up with an embedded nil pointer and a panic), and because
child types don't get wrapped. That is, even if we also have a similar
wrapper around a `Vector`, that's not the type you get when accessing
`someFileInfo.Version`; you get the `*generated.Vector`, which doesn't
have methods, etc.
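
To make the zero-value problem concrete, here is a tiny stand-alone
illustration (with a stand-in for the generated type); the promoted
field access goes through a nil embedded pointer and panics:

```
package main

import "fmt"

// stand-in for the protobuf-generated type
type generatedFileInfo struct{ Name string }

type FileInfo struct {
	*generatedFileInfo
}

func main() {
	var f FileInfo      // zero value: the embedded pointer is nil
	fmt.Println(f.Name) // panics with a nil pointer dereference
}
```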
### Aliasing
```
package protocol
type FileInfo = generated.FileInfo
```
Doesn't help because you can't attach methods to it, plus all the above.
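
For completeness, this is the restriction the alias runs into: Go
doesn't allow defining methods on a type from another package, and an
alias is just another name for that foreign type. The snippet below
(with an illustrative import path) does not compile for exactly that
reason:

```
package protocol

import "github.com/example/generated" // illustrative import path

type FileInfo = generated.FileInfo

// Does not compile: "cannot define new methods on non-local type".
func (f FileInfo) IsEmpty() bool { return f.Name == "" }
```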
### Generating the types into the target package like we do now and attaching methods
This fails because of the different shape of the generated type (as in
the embedding case above) plus the generated struct already has a bunch
of methods that we can't necessarily override properly (like `String()`
and a bunch of getters).
### Methods to functions
I considered just moving all the methods we attach to functions in a
specific package, so that for example
```
package protocol
func (f FileInfo) Equal(other FileInfo) bool
```
would become
```
package fileinfos
func Equal(a, b *generated.FileInfo) bool
```
and this would mostly work, but becomes quite verbose and cumbersome,
and somewhat limits discoverability (you can't see what methods are
available on the type in auto completions, etc). In the end I did this
in some cases, like in the database layer, where a lot of things like
`func (fv *FileVersion) IsEmpty() bool` become `func fvIsEmpty(fv
*generated.FileVersion) bool` because they were just internal methods
anyway.
Fixes #8247
Skipping these makes the sequence numbering inconsistent; we've received
a file and supposedly added it to the database, but if you check the
sequence number afterwards it didn't increase, i.e., we trigger [this
failure
condition](47f48faed7/lib/model/indexhandler.go (L447-L459))
and, similarly, a future update will look like there was a hole in the
numbering.
I propose to at least temporarily remove this optimisation in order for
things to make more sense. Is there a reason to keep this beyond saving
some database operations?
### Purpose
Resend our indexes since we fixed that index-sending issue.
I made a new thing to only drop the non-local-device index IDs, i.e.,
those for other devices. This means we will see a mismatch and resend
all indexes, but they will not. This is somewhat cleaner as it avoids
resending everything twice when two devices are upgraded, and in any
case, we have no reason to force a resend of incoming indexes here.
### Testing
It happens on my computer...
This makes a couple of small improvements to the folder summary
mechanism:
- The folder summary includes the local and remote sequence numbers in
clear text, rather than some odd sum whose intended meaning I'm not
sure of.
- The folder summary event is generated when appropriate, regardless of
whether there is an event listener. We did this before because
generating it was expensive, and we wanted to avoid doing it
unnecessarily. Nowadays, however, it's mostly just reading out
pre-calculated metadata, and anyway, it's nice if it shows up reliably
when running with -verbose.
The point of all this is to make it easier to use these events to judge
when devices are, in fact, in sync. As-is, if I'm looking at two
devices, it's very difficult to reliably determine if they are in sync
or not. The reason is that while we can ask device A if it thinks it's
in sync, we can't see if the answer is "yes" because it has processed
all changes from B, or if it just doesn't know about the changes from B
yet. With proper sequence numbers in the event we can compare the two
and determine the truth. This makes testing a lot easier.
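
As a sketch of how this could be used in testing (field names here are
illustrative, not the exact event schema): with both sequence numbers
exposed, "in sync" becomes a simple comparison of what each side has
seen from the other.

```
package main

import "fmt"

// folderSummary is an illustrative stand-in for the relevant parts of
// the FolderSummary event: our own highest sequence number and the
// highest sequence number we have processed from the other device.
type folderSummary struct {
	LocalSequence  int64
	RemoteSequence int64
}

// inSync is true when each device has processed everything the other
// has announced: A's view of B matches B's own sequence, and vice versa.
func inSync(a, b folderSummary) bool {
	return a.RemoteSequence == b.LocalSequence &&
		b.RemoteSequence == a.LocalSequence
}

func main() {
	a := folderSummary{LocalSequence: 42, RemoteSequence: 40}
	b := folderSummary{LocalSequence: 40, RemoteSequence: 42}
	fmt.Println(inSync(a, b)) // true: both sides have caught up
}
```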
This adds the BlocksHash field from the FileInfo to our API output. It
can be useful for debugging, or for external tools. I'm intentionally
leaving it as an opaque base64 string because no meaning should be
derived from it: it's just a string.
This adds support for syncing extended attributes on supported
filesystem on Linux, macOS, FreeBSD and NetBSD. Windows is currently
excluded because the APIs seem onerous and annoying and frankly the use
cases seem few and far between. On Unixes this also covers ACLs as those
are stored as extended attributes.
Similar to ownership syncing, this will be optional & opt-in, with two
settings controlling the main behavior: one to "sync" xattrs (read &
write) and another to "scan" xattrs (only read them so other devices
can "sync" them, but not apply any locally).
Co-authored-by: Tomasz Wilczyński <twilczynski@naver.com>
Also adds idle time and keepalive parameters because how this is
configured has changed in the new package version. The values are those
that seem like they might already be the defaults, if keep-alives were
enabled, which is not obvious from the doc comments.
Also, Go 1.19 gofmt reformatting of comments.
This adds support for syncing ownership on Unixes and on Windows. The
scanner always picks up ownership information, but it is not applied
unless the new folder option "Sync Ownership" is set.
Ownership data is stored in a new FileInfo field called "platform data". This
is intended to hold further platform-specific data in the future
(specifically, extended attributes), which is why the whole design is a
bit overkill for just ownership.
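
Roughly, the shape of that field could be pictured like this; the exact
names and contents are simplified here:

```
package protocol

// UnixOwnershipData and WindowsOwnershipData are simplified stand-ins
// for the per-OS ownership information picked up by the scanner.
type UnixOwnershipData struct {
	UID       int
	GID       int
	OwnerName string
	GroupName string
}

type WindowsOwnershipData struct {
	OwnerName string
}

// PlatformData hangs off the FileInfo and leaves room for more
// platform-specific data (such as extended attributes) later.
type PlatformData struct {
	Unix    *UnixOwnershipData    // nil when not applicable or not scanned
	Windows *WindowsOwnershipData // nil when not applicable or not scanned
}
```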
This changes the GC mechanism so that the first pass (which reads all
FileInfos to populate bloom filters with block & version hashes) can
happen concurrently with normal database operations.
The big gcMut still exists, and we grab it temporarily to block all
other modifications while we set up the bloom filters. We then release
the lock and let other things happen, with those other things also
updating the bloom filters as required. Once the first phase is done we
again grab the gcMut, knowing that we are the sole modifier of the
database, and do the cleanup.
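
In pseudocode-ish Go, the locking pattern looks roughly like this (names
and structure simplified; the real filters would also need to be safe
for concurrent use):

```
package db

import "sync"

type bloomFilters struct{} // stand-in for the block & version hash filters

func (f *bloomFilters) add(hash []byte) {}

type database struct {
	gcMut   sync.RWMutex
	filters *bloomFilters // non-nil only while a GC first phase is running
}

// update is the path taken by normal database modifications; while a GC
// first phase is running it also feeds the bloom filters.
func (db *database) update(hash []byte) {
	db.gcMut.RLock()
	defer db.gcMut.RUnlock()
	if db.filters != nil {
		db.filters.add(hash)
	}
	// ... perform the actual write ...
}

func (db *database) gc() {
	// Briefly block all other modifications while setting up the filters.
	db.gcMut.Lock()
	db.filters = &bloomFilters{}
	db.gcMut.Unlock()

	// First phase: read all FileInfos and populate the filters, concurrently
	// with normal operations (which keep the filters up to date themselves).
	// ... iterate the database, calling db.filters.add(...) ...

	// Second phase: sole modifier of the database, do the actual cleanup.
	db.gcMut.Lock()
	defer db.gcMut.Unlock()
	// ... delete entries whose hashes are not in the filters ...
	db.filters = nil
}
```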
I also removed the final compaction step.
This makes us use TLS 1.3+ on sync connections by default. A new option
`insecureAllowOldTLSVersions` exists to allow communication with TLS
1.2-only clients (roughly Syncthing 1.2.2 and older). Even with that
option set you get a slightly simplified setup, with the cipher suite
order fixed instead of auto detected.
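
A minimal sketch of the version selection, assuming the rest of the TLS
setup is unchanged (not the actual implementation):

```
package connections

import "crypto/tls"

// minTLSVersion returns the minimum TLS version for sync connections:
// 1.3 by default, 1.2 only when insecureAllowOldTLSVersions is set. The
// real setup also fixes the cipher suite order in the latter case.
func minTLSVersion(insecureAllowOldTLSVersions bool) uint16 {
	if insecureAllowOldTLSVersions {
		return tls.VersionTLS12 // roughly Syncthing 1.2.2 and older peers
	}
	return tls.VersionTLS13
}
```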