At a high level, this is what I've done and why:
- I'm moving the protobuf generation for the `protocol`, `discovery` and
`db` packages to the modern alternatives, and using `buf` to generate
because it's nice and simple.
- After trying various approaches to integrating the new types with
the existing code, I opted to split our own data model types off from
the on-the-wire generated types (see the sketch after this list). This
means we can have a `FileInfo` type with nicer ergonomics and lots of
methods, while the protobuf-generated type stays clean and close to
the wire protocol. It does mean copying between the two when required,
which adds a small amount of inefficiency. If we want to walk this
back in the future and use the raw generated type throughout, that's
possible; this approach, however, keeps the refactor smaller (!) as it
doesn't change everything about the type for everyone at the same
time.
- I have simply removed in cold blood a significant number of old
database migrations. These depended on previous generations of generated
messages of various kinds and were annoying to support in the new
fashion. The oldest supported database version now is the one from
Syncthing 1.9.0 from Sep 7, 2020.
- I changed config structs to be regular manually defined structs.
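To illustrate the split, here's a minimal sketch of the two-type
arrangement, using a stand-in struct for the generated wire type and
invented conversion helpers (the real names and fields differ):
```
package protocol

// wireFileInfo stands in for the protobuf-generated type
// (generated.FileInfo in the real code); it stays close to the wire.
type wireFileInfo struct {
	Name        string
	Size        int64
	Permissions uint32
}

// FileInfo is our own data model type, free to have nicer ergonomics
// and as many methods as we like.
type FileInfo struct {
	Name        string
	Size        int64
	Permissions uint32
}

// toWire copies the data model type into the wire type when sending.
func (f FileInfo) toWire() *wireFileInfo {
	return &wireFileInfo{Name: f.Name, Size: f.Size, Permissions: f.Permissions}
}

// fileInfoFromWire copies a received wire message into the data model type.
func fileInfoFromWire(w *wireFileInfo) FileInfo {
	return FileInfo{Name: w.Name, Size: w.Size, Permissions: w.Permissions}
}
```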
For the sake of discussion, some things I tried that turned out not to
work...
### Embedding / wrapping
Embedding the protobuf generated structs in our existing types as a data
container and keeping our methods and stuff:
```
package protocol
type FileInfo struct {
	*generated.FileInfo
}
```
This generates a lot of problems because the internal shape of the
generated struct is quite different (different names, different types,
more pointers), because initializing it doesn't work like you'd expect
(i.e., you end up with an embedded nil pointer and a panic), and because
child types don't get wrapped. That is, even if we also have a
similar wrapper around `Vector`, that's not the type you get when
accessing `someFileInfo.Version`; you get the `*generated.Vector`,
which doesn't have methods, etc.
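Concretely, continuing the fragment above (imports and initialization
omitted), the footguns look like this:
```
var f FileInfo // zero value: the embedded *generated.FileInfo is nil
_ = f.Name     // promoted field access panics with a nil pointer dereference

// And even when properly initialized, the child types stay unwrapped:
v := f.Version // v is a *generated.Vector, not a wrapper with our methods
_ = v
```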
### Aliasing
```
package protocol
type FileInfo = generated.FileInfo
```
Doesn't help because you can't attach methods to it, plus all the above.
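And attaching a method to the alias is rejected by the compiler (the
method here is an arbitrary example):
```
// Does not compile:
// "cannot define new methods on non-local type generated.FileInfo"
func (f FileInfo) IsEmpty() bool { return f.Name == "" }
```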
### Generating the types into the target package like we do now and attaching methods
This fails because of the different shape of the generated type (as in
the embedding case above) plus the generated struct already has a bunch
of methods that we can't necessarily override properly (like `String()`
and a bunch of getters).
### Methods to functions
I considered just moving all the methods we attach to functions in a
specific package, so that for example
```
package protocol
func (f FileInfo) Equal(other FileInfo) bool
```
would become
```
package fileinfos
func Equal(a, b *generated.FileInfo) bool
```
and this would mostly work, but becomes quite verbose and cumbersome,
and somewhat limits discoverability (you can't see what methods are
available on the type in autocompletions, etc). In the end I did this
in some cases, like in the database layer, where a lot of things like
`func (fv *FileVersion) IsEmpty() bool` become `func fvIsEmpty(fv
*generated.FileVersion) bool` because they were anyway just internal
methods.
Fixes #8247
Encrypted-to-encrypted connections (i.e., ones where both sides set a
password) used to work but were broken in the 1.28.0 release. The
culprit is the 5342bec1b refactor which slightly changed how the request
was constructed, resulting in a bad block hash field.
Co-authored-by: Simon Frei <freisim93@gmail.com>
The read/write loops may keep going for a while on a closing connection
with lots of read/write activity, as it's random which select case is
chosen. And if the connection is slow or even broken, a single read or
write can take a long time, or until the timeout fires. Add initial
non-blocking selects with only the cases relevant to closing, to
prioritize those.
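A minimal sketch of the pattern, with illustrative type and channel
names rather than the actual ones from the code:
```
package protocol

type conn struct {
	closed chan struct{}
	outbox chan []byte
}

func (c *conn) writerLoop() {
	for {
		// Non-blocking select first: if we are closing, exit right away
		// instead of racing with the regular cases below.
		select {
		case <-c.closed:
			return
		default:
		}
		// Regular blocking select; when several cases are ready, the
		// winner is random, so without the check above a busy or broken
		// connection could keep writing long after close.
		select {
		case msg := <-c.outbox:
			c.write(msg) // may block for a long time on a slow connection
		case <-c.closed:
			return
		}
	}
}

func (c *conn) write(msg []byte) { /* ... */ }
```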
### Purpose
Adds a new metric `syncthing_connections_active`, which equals the
number of active connections per device.
Fixes #9527
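A sketch of how such a per-device gauge is typically declared with the
Prometheus client library; the exact subsystem split and label name are
assumptions, not necessarily what this change uses:
```
package connections

import "github.com/prometheus/client_golang/prometheus"

var metricDeviceActiveConnections = prometheus.NewGaugeVec(prometheus.GaugeOpts{
	Namespace: "syncthing",
	Subsystem: "connections",
	Name:      "active",
	Help:      "Number of currently active connections, per device.",
}, []string{"device"})

func init() {
	prometheus.MustRegister(metricDeviceActiveConnections)
}

// Incremented on connect, decremented on disconnect:
//   metricDeviceActiveConnections.WithLabelValues(deviceID.String()).Inc()
//   metricDeviceActiveConnections.WithLabelValues(deviceID.String()).Dec()
```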
### Testing
I've manually tested it by running syncthing with these changes locally
and examining the returned metrics from `/metrics`.
I've done the following things:
- Connect & disconnect a device
- Increase & decrease the number of connections and verify that the
value of the metric matches the number displayed in the GUI.
### Documentation
https://github.com/syncthing/docs/blob/main/includes/metrics-list.rst
needs to be regenerated with
[find-metrics.go](https://github.com/syncthing/docs/blob/main/_script/find-metrics/find-metrics.go)
---------
Co-authored-by: Jakob Borg <jakob@kastelo.net>
This is a refactor of the protocol/model interface to take the actual
message as the parameter, instead of the broken-out fields:
```diff
type Model interface {
// An index was received from the peer device
- Index(conn Connection, folder string, files []FileInfo) error
+ Index(conn Connection, idx *Index) error
// An index update was received from the peer device
- IndexUpdate(conn Connection, folder string, files []FileInfo) error
+ IndexUpdate(conn Connection, idxUp *IndexUpdate) error
// A request was made by the peer device
- Request(conn Connection, folder, name string, blockNo, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (RequestResponse, error)
+ Request(conn Connection, req *Request) (RequestResponse, error)
// A cluster configuration message was received
- ClusterConfig(conn Connection, config ClusterConfig) error
+ ClusterConfig(conn Connection, config *ClusterConfig) error
// The peer device closed the connection or an error occurred
Closed(conn Connection, err error)
// The peer device sent progress updates for the files it is currently downloading
- DownloadProgress(conn Connection, folder string, updates []FileDownloadProgressUpdate) error
+ DownloadProgress(conn Connection, p *DownloadProgress) error
}
```
(and changing the `ClusterConfig` to `*ClusterConfig` for symmetry;
we'll be forced to use all pointers everywhere at some point anyway...)
The reason for this is that I have another thing cooking which is a
small troubleshooting change to check index consistency during transfer.
This required adding a field or two to the index/indexupdate messages,
and plumbing the extra parameters in umpteen changes is almost as big a
diff as this is. I figured let's do it once and avoid having to do that
in the future again...
The rest of the diff falls out of the change above, much of it being in
test code where we run these methods manually...
If syncOwnership is enabled and the remote uses, for example, a
dockerized Syncthing, it can't resolve the owner name and group name of
the local instance. Without this patch, that led to an endless cycle of
detected changes on the remote and failing re-sync attempts.
This patch skips comparing the owner name and group name if they are
empty on one side.
See https://github.com/syncthing/syncthing/issues/9039 for details.
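A rough sketch of the comparison rule; the type and field names are
made up for illustration and are not the literal code from the patch:
```
package fs

type unixOwnership struct {
	UID, GID             int
	OwnerName, GroupName string
}

// equal compares ownership, but only compares the textual owner/group
// names when both sides actually have them; a side that can't resolve
// names (e.g. a minimal Docker image without the users configured)
// leaves them empty, and that must not count as a difference.
func (a unixOwnership) equal(b unixOwnership) bool {
	if a.UID != b.UID || a.GID != b.GID {
		return false
	}
	if a.OwnerName != "" && b.OwnerName != "" && a.OwnerName != b.OwnerName {
		return false
	}
	if a.GroupName != "" && b.GroupName != "" && a.GroupName != b.GroupName {
		return false
	}
	return true
}
```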
### Testing
Proposed by @calmh in
https://github.com/syncthing/syncthing/issues/9039#issuecomment-1870584783
and tested locally in my setup:
Setup PC 1:
- Syncthing is run in Docker as user `root` and has none of the users
configured that synchronize their files
Setup PC 2:
- this PC has all users locally setup
- Syncthing runs as `systemd` service as user `syncthing` and has
multiple capabilities set to set the correct owner and permissions
Setup PC 3:
- same as PC 2
Handling:
- `PC 1` is send & receive and uses just the `UID` and `GID` identifiers
to store the files
- `PC 2` and `PC 3` synchronize their files over `PC 1` but not directly
to each other
Outcome:
- `PC 2` and `PC 3` should send and receive their files with the correct
ownership and groups from `PC 1`
This adds our short device ID to the basic auth realm. This has at least
two consequences:
- It is different from what's presented by another device on the same
address (e.g., if I use SSH forwards to different devices on the same
local address), preventing credentials for one from being sent to
another.
- It is different from what we did previously, meaning we avoid cached
credentials from old versions interfering with the new login flow.
I don't *think* there should be things that depend on our precise realm
string, so this shouldn't break any existing setups...
Sneakily this also changes the session cookie and CSRF name, because I
think `id.Short().String()` is nicer than `id.String()[:5]` and the
short ID is two characters longer. That's also not a problem...
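Roughly what the realm change amounts to; the handler shape and realm
wording here are illustrative, only the short-device-ID part is the
actual point:
```
package gui

import (
	"fmt"
	"net/http"
)

func unauthorized(w http.ResponseWriter, shortID string) {
	// Including the short device ID makes the realm differ both from other
	// devices reachable on the same address and from what older versions
	// presented, so stale cached credentials aren't silently reused.
	w.Header().Set("WWW-Authenticate",
		fmt.Sprintf(`Basic realm="Authorization Required (%s)"`, shortID))
	http.Error(w, "Not Authorized", http.StatusUnauthorized)
}
```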
In principle a connection can close while it's still in the process of
starting, and then it's undefined whether we wait for goroutines to exit
etc.
With this change, we will wait for start to complete before starting to
stop everything.
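One way to express that guarantee, as a sketch with invented names:
```
package connections

import "sync"

type connection struct {
	started chan struct{} // closed once Start has fully completed
	closed  chan struct{}
	loops   sync.WaitGroup
}

func newConnection() *connection {
	return &connection{
		started: make(chan struct{}),
		closed:  make(chan struct{}),
	}
}

func (c *connection) Start() {
	c.loops.Add(2)
	go c.readerLoop()
	go c.writerLoop()
	close(c.started) // only after all goroutines are accounted for
}

func (c *connection) Close() {
	<-c.started // wait for Start to complete before tearing things down
	close(c.closed)
	c.loops.Wait() // now it's well defined that we wait for the loops
}

func (c *connection) readerLoop() { defer c.loops.Done(); <-c.closed }
func (c *connection) writerLoop() { defer c.loops.Done(); <-c.closed }
```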
This adds the ability to have multiple concurrent connections to a single device. This is primarily useful when the network has multiple physical links for aggregated bandwidth. A single connection will never see a higher rate than a single link can give, but multiple connections are load-balanced over multiple links.
It is also incidentally useful for older multi-core CPUs, where bandwidth could be limited by the TLS performance of a single CPU core -- using multiple connections achieves concurrency in the required crypto calculations...
Co-authored-by: Simon Frei <freisim93@gmail.com>
Co-authored-by: tomasz1986 <twilczynski@naver.com>
Co-authored-by: bt90 <btom1990@googlemail.com>
This fixes various test issues with Go 1.20.
- Most tests rewritten to use fakefs where possible
- Some tests that were already skipped, or dubious (invasive,
unmaintainable, unclear what they even tested) have been removed
- Some actual code rewritten to better support testing in fakefs
Co-authored-by: Eric P <eric@kastelo.net>
The layout of the request differs based on whether it comes from an
untrusted device or a trusted device with encryption enabled. Handle
both.
Closes #8819.
This adds a cache to the expensive key generation operations. It's
fixed-size LRU/MRU stuff to keep memory usage bounded under absurd
conditions.
Also closes #8600.
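A sketch of the idea using the well-known hashicorp/golang-lru package;
the actual change may use a different cache implementation and
different key/value types:
```
package protocol

import lru "github.com/hashicorp/golang-lru/v2"

type keyCache struct {
	cache *lru.Cache[string, [32]byte]
}

func newKeyCache(size int) *keyCache {
	c, _ := lru.New[string, [32]byte](size) // fixed size keeps memory bounded
	return &keyCache{cache: c}
}

// folderKey returns the cached key for this password/folder pair, or
// runs the expensive KDF and remembers the result.
func (k *keyCache) folderKey(password, folderID string, derive func() [32]byte) [32]byte {
	id := folderID + "\x00" + password
	if key, ok := k.cache.Get(id); ok {
		return key
	}
	key := derive()
	k.cache.Add(id, key)
	return key
}
```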
This adds the BlocksHash field from the FileInfo to our API output. It
can be useful for debugging, or for external tools. I'm intentionally
leaving it as an opaque base64 string because no meaning should be
derived from it: it's just a string.
* lib/connections: Cache isLAN decision for later external access.
The check whether a remote device's address is on a local network
currently happens when handling the Hello message, to configure the
limiters. Save the result to the ConnectionInfo and pass it out as
part of the model's ConnectionInfo struct in ConnectionStats().
* gui: Use provided connection attribute to distinguish LAN / WAN.
Replace the dumb IP address check which didn't catch common cases and
actually could contradict what the backend decided. That could have
been confusing if the GUI says WAN, but the limiter is not actually
applied because the backend thinks it's a LAN.
Add strings for QUIC and relay connections to also differentiate
between LAN and WAN.
* gui: Redefine reception level icons for all connection types.
Move the mapping to the JS code, as it is much easier to handle
multiple switch cases by fall-through there.
QUIC is no longer regarded as inferior to TCP. LAN vs WAN makes the
difference between levels 4 / 3 and 2 / 1:
{TCP,QUIC} LAN --> {TCP,QUIC} WAN --> Relay LAN --> Relay WAN -->
Disconnected.
Previous debug output didn't really give enough info to show what was
happening, while it also printed full block lists, which are enormously
verbose. Now it consistently prints 1. what it sees on disk, 2. what it
got from CurrentFile (without blocks), 3. the action taken on that file.
This adds support for syncing extended attributes on supported
filesystems on Linux, macOS, FreeBSD and NetBSD. Windows is currently
excluded because the APIs seem onerous and annoying and frankly the use
cases seem few and far between. On Unixes this also covers ACLs as those
are stored as extended attributes.
Similar to ownership syncing this will be optional & opt-in, with two
settings controlling the main behavior: one to "sync" xattrs (read &
write) and another one to "scan" xattrs (only read them so other devices
can "sync" them, but not apply any locally).
Co-authored-by: Tomasz Wilczyński <twilczynski@naver.com>
This replaces old style errors.Wrap with modern fmt.Errorf and removes
the (direct) dependency on github.com/pkg/errors. A couple of cases are
adjusted by hand as previously errors.Wrap(nil, ...) would return nil,
which is not what fmt.Errorf does.
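The behavioral difference that required those hand adjustments, in
short:
```
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	// errors.Wrap returns nil when the wrapped error is nil...
	fmt.Println(errors.Wrap(nil, "doing the thing") == nil) // true

	// ...but fmt.Errorf always returns a non-nil error, so a purely
	// mechanical replacement would turn "no error" into an error.
	fmt.Println(fmt.Errorf("doing the thing: %w", error(nil)) == nil) // false
}
```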
all: Add package runtimeos for runtime.GOOS comparisons
I grew tired of hand-written string comparisons. This adds generated
constants for the GOOS values, and predefined Is$OS constants that can
be iffed on. In a couple of places I rewrote trivial switches to ifs,
and added Illumos where we checked for Solaris (because they are
effectively the same, and if we're going to target one of them that
would be Illumos...).
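A sketch of the shape of such a package; the actual generated constant
and variable names may differ:
```
package runtimeos

import "runtime"

// Generated constants for the possible runtime.GOOS values.
const (
	Windows = "windows"
	Linux   = "linux"
	Darwin  = "darwin"
	Solaris = "solaris"
	Illumos = "illumos"
	// ... one per GOOS value ...
)

// Predefined booleans so call sites can simply "if" on them.
var (
	IsWindows = runtime.GOOS == Windows
	IsLinux   = runtime.GOOS == Linux
	IsDarwin  = runtime.GOOS == Darwin
	IsSolaris = runtime.GOOS == Solaris
	IsIllumos = runtime.GOOS == Illumos
)
```
Call sites that used to check only for Solaris would then read
`if runtimeos.IsSolaris || runtimeos.IsIllumos { ... }`.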
This adds support for syncing ownership on Unixes and on Windows. The
scanner always picks up ownership information, but it is not applied
unless the new folder option "Sync Ownership" is set.
Ownership data is stored in a new FileInfo field called "platform data". This
is intended to hold further platform-specific data in the future
(specifically, extended attributes), which is why the whole design is a
bit overkill for just ownership.
Having a separate mutex for the three or four instructions needed to
fetch and increment nextID means the overhead exceeds the cost of this
operation. nextID is now handled inside the critical section for
awaiting instead, while the more expensive channel creation has been
moved outside it.
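Roughly what the new arrangement looks like; the struct and field names
are illustrative:
```
package protocol

import "sync"

type asyncResult struct{}

type rawConnection struct {
	awaitingMut sync.Mutex
	awaiting    map[int]chan asyncResult
	nextID      int
}

func newRawConnection() *rawConnection {
	return &rawConnection{awaiting: make(map[int]chan asyncResult)}
}

func (c *rawConnection) registerRequest() (int, chan asyncResult) {
	// The channel allocation is comparatively expensive, so do it
	// outside the critical section.
	rc := make(chan asyncResult, 1)

	c.awaitingMut.Lock()
	// Fetching and incrementing nextID is only a few instructions; a
	// dedicated mutex for it costs more than the work it protects.
	id := c.nextID
	c.nextID++
	c.awaiting[id] = rc
	c.awaitingMut.Unlock()

	return id, rc
}
```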
This is mostly a simplification, though it may have minor performance
benefits in some situations. The single-threaded sender benchmark shows
no significant difference:
```
name                old speed      new speed      delta
RequestsRawTCP-8    55.3MB/s ± 7%  56.6MB/s ± 6%  ~     (p=0.190 n=10+10)
RequestsTLSoTCP-8   20.5MB/s ±20%  20.8MB/s ± 8%  ~     (p=0.604 n=10+9)
```
* lib/protocol: Require at least 3.125% savings from compression
The new lz4 library doesn't need its output buffer to be the maximum
size, unlike the old one (which would allocate if it weren't). It can
take a smaller buffer and will report whether the compressed data fits
inside it (with a small chance of a false negative). Use that property
to our advantage by requiring compressed
data to be at most n-n/32 = .96875*n bytes long for n input bytes.
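In code terms the check is roughly the following, sketched against the
pierrec/lz4/v4 block API; the framing around the compressed block is
omitted and the exact call details may differ from the real code:
```
package protocol

import (
	"errors"

	lz4 "github.com/pierrec/lz4/v4"
)

var errNotCompressible = errors.New("not compressible")

func lz4Compress(src []byte) ([]byte, error) {
	// Demand at least 3.125% savings: the output must fit in
	// n - n/32 = 0.96875*n bytes, otherwise we send uncompressed.
	buf := make([]byte, len(src)-len(src)/32)

	var comp lz4.Compressor
	n, err := comp.CompressBlock(src, buf)
	if err != nil || n == 0 {
		// A zero length (or an error) means the block didn't fit in
		// buf, i.e. it didn't compress well enough to be worth sending.
		return nil, errNotCompressible
	}
	return buf[:n], nil
}
```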
* lib/protocol: Remove unused receivers
To make DeepSource happy.
* lib/protocol: Micro-optimize lz4Compress
Only write the length if compression was successful. This is a memory
write, so the compiler can't reorder it.
Only check the return value of lz4.CompressBlock. Length-zero inputs
are always expanded by LZ4 compression (the library documents this),
so the check on len(src) isn't needed.
* lib/model: Remove bogus fields from connections API endpoint.
Switch the returned data type for the /rest/system/connections element
"total" to use only the Statistics struct. The other fields of the
ConnectionInfo struct are not populated and misleading.
* Lowercase JSON field names.
* lib/model: Get rid of ConnectionInfo.MarshalJSON().
It was missing the StartedAt field from the embedded Statistics
struct. Just lowercasing the JSON attribute names can be done just as
easily with annotations.
* lib/model: Remove bogus startedAt field from totals.
Instead of using the Statistics type with one field empty, just switch
to a free-form map with the three needed fields.