At a high level, this is what I've done and why:
- I'm moving the protobuf generation for the `protocol`, `discovery` and
`db` packages to the modern alternatives, and using `buf` to generate
because it's nice and simple.
- After trying various approaches to integrating the new types with
the existing code, I opted for splitting off our own data model types
from the on-the-wire generated types (see the sketch after this list).
This means we can have a `FileInfo` type with nicer ergonomics and lots
of methods, while the protobuf generated type stays clean and close to
the wire protocol. It does mean copying between the two when required,
which certainly adds a small amount of inefficiency. If we want to walk
this back in the future and use the raw generated type throughout,
that's possible; this approach, however, makes the refactor smaller (!)
as it doesn't change everything about the type for everyone at the same
time.
- I have simply removed in cold blood a significant number of old
database migrations. These depended on previous generations of generated
messages of various kinds and were annoying to support in the new
fashion. The oldest supported database version now is the one from
Syncthing 1.9.0 from Sep 7, 2020.
- I changed config structs to be regular manually defined structs.
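To make the copying concrete, here is a minimal sketch of what the split could look like; the field names and the `generated` import path are assumptions for illustration, not the actual schema:
```
package protocol

import "example.com/syncthing/internal/gen/generated" // hypothetical import path

// FileInfo is our ergonomic data model type; methods stay attached here.
type FileInfo struct {
	Name    string
	Size    int64
	Deleted bool
	// ... more fields, plus methods like Equal, IsDirectory, etc.
}

// ToWire copies the data model type into the wire-format struct.
func (f *FileInfo) ToWire() *generated.FileInfo {
	return &generated.FileInfo{
		Name:    f.Name,
		Size:    f.Size,
		Deleted: f.Deleted,
	}
}

// FileInfoFromWire copies a received wire-format struct into the data model type.
func FileInfoFromWire(w *generated.FileInfo) FileInfo {
	return FileInfo{
		Name:    w.Name,
		Size:    w.Size,
		Deleted: w.Deleted,
	}
}
```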
For the sake of discussion, some things I tried that turned out not to
work...
### Embedding / wrapping
Embedding the protobuf generated structs in our existing types as a data
container and keeping our methods and stuff:
```
package protocol

type FileInfo struct {
	*generated.FileInfo
}
```
This creates a lot of problems: the internal shape of the generated
struct is quite different (different names, different types, more
pointers), initializing it doesn't work like you'd expect (i.e., you end
up with an embedded nil pointer and a panic), and child types don't get
wrapped. That is, even if we also have a similar wrapper around a
`Vector`, that's not the type you get when accessing
`someFileInfo.Version`; you get the `*generated.Vector`, which doesn't
have methods, etc.
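As a contrived illustration of the first two issues (assuming the wrapper above and `Name`/`Version` fields on the generated type):
```
var f FileInfo // zero value: the embedded *generated.FileInfo is nil
_ = f.Name     // panics at runtime with a nil pointer dereference

f = FileInfo{FileInfo: &generated.FileInfo{}}
_ = f.Version  // typed as *generated.Vector, without any of our methods
```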
### Aliasing
```
package protocol
type FileInfo = generated.FileInfo
```
This doesn't help because you can't attach methods to an alias of a type
from another package, plus all the problems above still apply.
### Generating the types into the target package like we do now and attaching methods
This fails because of the different shape of the generated type (as in
the embedding case above), and because the generated struct already has
a bunch of methods that we can't necessarily override properly (like
`String()` and a bunch of getters).
### Methods to functions
I considered just moving all the methods we attach to functions in a
specific package, so that for example
```
package protocol
func (f FileInfo) Equal(other FileInfo) bool
```
would become
```
package fileinfos
func Equal(a, b *generated.FileInfo) bool
```
and this would mostly work, but becomes quite verbose and cumbersome,
and somewhat limits discoverability (you can't see what methods are
available on the type in autocompletion, etc). In the end I did this
in some cases, like in the database layer, where a lot of things like
`func (fv *FileVersion) IsEmpty() bool` become
`func fvIsEmpty(fv *generated.FileVersion) bool` because they were just
internal methods anyway.
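As a sketch of that pattern (the field names on `generated.FileVersion` here are assumptions, not the actual schema):
```
package db

// fvIsEmpty replaces the old method form `func (fv *FileVersion) IsEmpty() bool`.
func fvIsEmpty(fv *generated.FileVersion) bool {
	return fv == nil || (len(fv.Devices) == 0 && len(fv.InvalidDevices) == 0)
}
```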
Fixes #8247
Move infrastructure-related commands under `cmd/infra` and development
stuff under `cmd/dev`. The default build command builds the regular
user-facing binaries: syncthing, stdiscosrv, and strelaysrv.
This adds a small package `geoip` which knows how to download and manage
the Maxmind GeoLite2 database we use. This removes the need for various
scripts to download and manage the geoip database, something that today
happens on Docker startup for the relay pool server and via various
hand-written hacks for the usage reporting server.
The database is downloaded when needed and then refreshed on a
best-effort basis weekly.
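Roughly, the refresh logic amounts to something like this hypothetical sketch (not the actual API of the new `geoip` package):
```
package geoip

import (
	"os"
	"time"
)

// refreshIfStale downloads the database if it is missing, and refreshes it
// once it is older than a week; refresh failures are tolerated as long as a
// usable copy already exists (best effort).
func refreshIfStale(path string, download func(dst string) error) error {
	info, statErr := os.Stat(path)
	if statErr == nil && time.Since(info.ModTime()) < 7*24*time.Hour {
		return nil // fresh enough
	}
	if err := download(path); err != nil {
		if statErr == nil {
			return nil // keep serving the existing copy
		}
		return err // no database at all, so surface the error
	}
	return nil
}
```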
Go is not cgroup-aware and by default will set GOMAXPROCS to the number
of available threads, regardless of whether that exceeds the allocated
CPU quota. This behaviour causes a high amount of CPU throttling and
degraded application performance.
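One common way to address this (an assumption about the approach, not necessarily what this change does) is to cap GOMAXPROCS to the cgroup CPU quota at startup, for example via `go.uber.org/automaxprocs`:
```
package main

import (
	"fmt"
	"runtime"

	_ "go.uber.org/automaxprocs" // adjusts GOMAXPROCS to the cgroup CPU quota on import
)

func main() {
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```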
### Purpose
This implements CLI completion using the Kongplete module. As a side
effect, a CLI structure for syncthing/cli was created so that kongplete
can parse it and provide completion.
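A minimal sketch of the kongplete wiring, assuming the standard `kongplete.Complete` / `InstallCompletions` pattern rather than the exact code in this PR:
```
package main

import (
	"os"

	"github.com/alecthomas/kong"
	"github.com/posener/complete"
	"github.com/willabides/kongplete"
)

var cli struct {
	InstallCompletions kongplete.InstallCompletions `cmd:"" help:"Install shell completions"`
	// ... the rest of the syncthing cli command tree ...
}

func main() {
	parser := kong.Must(&cli, kong.Name("syncthing"))

	// Handle shell completion requests before normal parsing.
	kongplete.Complete(parser,
		kongplete.WithPredictor("file", complete.PredictFiles("*")),
	)

	ctx, err := parser.Parse(os.Args[1:])
	parser.FatalIfErrorf(err)
	parser.FatalIfErrorf(ctx.Run())
}
```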
### Testing
I've tested the autocompletion manually and it worked, but I haven't
added any tests to verify it automatically. Additionally, I ran `go run
build.go test` with all tests passing.
This adds a "token manager" which handles storing and checking expired
tokens, used for both sessions and CSRF tokens. It removes the old,
corresponding functionality for CSRFs which saved things in a file. The
result is less crap in the state directory, and active login sessions
now survive a Syncthing restart (this really annoyed me).
It also adds a boolean on login to create a longer-lived session cookie,
which is now possible and useful. Thus we can remain logged in over
browser restarts, which was also annoying... :)
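The core of the idea, as a hypothetical sketch rather than the actual implementation, is a small map of tokens to expiry times shared by sessions and CSRF (the real one also persists tokens so sessions survive restarts):
```
package api

import (
	"crypto/rand"
	"encoding/hex"
	"sync"
	"time"
)

type tokenManager struct {
	mut      sync.Mutex
	lifetime time.Duration
	tokens   map[string]time.Time // token -> expiry
}

// New creates, stores and returns a fresh token.
func (m *tokenManager) New() string {
	buf := make([]byte, 16)
	_, _ = rand.Read(buf)
	tok := hex.EncodeToString(buf)
	m.mut.Lock()
	m.tokens[tok] = time.Now().Add(m.lifetime)
	m.mut.Unlock()
	return tok
}

// Check reports whether the token exists and has not expired, dropping it if it has.
func (m *tokenManager) Check(tok string) bool {
	m.mut.Lock()
	defer m.mut.Unlock()
	exp, ok := m.tokens[tok]
	if !ok || time.Now().After(exp) {
		delete(m.tokens, tok)
		return false
	}
	return true
}
```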
<img width="1001" alt="Screenshot 2023-12-12 at 09 56 34"
src="https://github.com/syncthing/syncthing/assets/125426/55cb20c8-78fc-453e-825d-655b94c8623b">
Best viewed with a whitespace-insensitive diff, as a bunch of the auth
functions became methods instead of closures, which changed indentation.
Cleanup after #9275.
This renames `fmut` -> `mut`, removes the deadlock detector and
associated plumbing, renames some things from `...PRLocked` to
`...RLocked` and similar, and updates comments.
Apart from the removal of the deadlock detection machinery, there are
no functional code changes... i.e., almost 100% diff noise. Have fun
reviewing.
The `syncthing cli` subcommand was using urfave/cli as the command
parser. This PR replaces it with kong, which the main command uses.
Some help texts and error message formats have changed. Other than
that, all command usage and logic remains unchanged.
There's only one place which still uses urfave/cli, which is `syncthing
cli config`, because it uses some magic to dynamically build commands
via struct reflection. I used kong's `passthrough:""` tag to pass any
arguments following `syncthing cli config` to the urfave/cli parser.
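Roughly, the hand-off could look like this sketch (struct layout and the `legacyConfigApp` name are assumptions, not the PR's actual code):
```
package cli

import ucli "github.com/urfave/cli"

// legacyConfigApp stands in for the existing urfave/cli based `config` command.
var legacyConfigApp *ucli.App

type configCommand struct {
	// passthrough stops kong's flag parsing here, so everything after
	// `syncthing cli config` is collected verbatim.
	Args []string `arg:"" optional:"" passthrough:""`
}

func (c *configCommand) Run() error {
	// urfave/cli expects the program name as the first argument.
	return legacyConfigApp.Run(append([]string{"config"}, c.Args...))
}
```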
This PR also fixes #9041
---------
Co-authored-by: Jakob Borg <jakob@kastelo.net>
This pull request allows syncthing to request an IPv6
[pinhole](https://en.wikipedia.org/wiki/Firewall_pinhole), addressing
issue #7406. This helps users who prefer to use IPv6 for hosting their
services or are forced to do so because of
[CGNAT](https://en.wikipedia.org/wiki/Carrier-grade_NAT). Otherwise,
such users would have to configure their firewall manually to allow
syncthing traffic to pass through, while IPv4 users can already use
UPnP to take care of network configuration.
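For reference, the IGDv2 action involved is `AddPinhole` on the `WANIPv6FirewallControl:1` service; a rough sketch of building its SOAP body (not necessarily how this PR constructs the request):
```
package upnp

import "fmt"

// addPinholeBody builds the SOAP body for the IGDv2 AddPinhole action.
// Argument names follow the WANIPv6FirewallControl:1 spec; protocol is the
// IANA protocol number (6 = TCP) and leaseTime is in seconds.
func addPinholeBody(internalClient string, internalPort, protocol, leaseTime int) string {
	return fmt.Sprintf(`<u:AddPinhole xmlns:u="urn:schemas-upnp-org:service:WANIPv6FirewallControl:1">
<RemoteHost></RemoteHost>
<RemotePort>0</RemotePort>
<InternalClient>%s</InternalClient>
<InternalPort>%d</InternalPort>
<Protocol>%d</Protocol>
<LeaseTime>%d</LeaseTime>
</u:AddPinhole>`, internalClient, internalPort, protocol, leaseTime)
}
```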
### Testing
I have tested this in a virtual machine setup with miniupnpd running on
the virtualized router. It successfully added an IPv6 pinhole when used
with IPv6 only, an IPv4 port mapping when used with IPv4 only, and both
when dual-stack (IPv4 and IPv6) was used.
Automated tests could be added for SOAP responses from the router but
automatically testing this with a real network is likely infeasible.
### Documentation
https://docs.syncthing.net/users/firewall.html could be updated to
mention the fact that UPnP now works with IPv6, although this change is
more "behind the scenes".
---------
Co-authored-by: Simon Frei <freisim93@gmail.com>
Co-authored-by: bt90 <btom1990@googlemail.com>
Co-authored-by: André Colomb <github.com@andre.colomb.de>