This changes the build script to build all the things in a single `go`
invocation, instead of one invocation per cmd. This is a lot faster
because more things get compiled concurrently. It's especially
noticeable when things *don't* need to be rebuilt, presumably because
the dependency map and such only needs to be built once instead of
once per binary.
In order for this to work we need to be able to pass the same ldflags to
all the binaries. This means we can't set the program name with an
ldflag.
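A minimal sketch of the single-invocation idea, with illustrative
package paths and flags (not the actual build.go code); here the
binaries are built with one `go install` into a local bin directory,
with the same ldflags for every cmd:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// The same ldflags go to every binary, which is why we can no longer
	// inject a per-binary program name this way.
	ldflags := "-w"

	bin, err := filepath.Abs("bin")
	if err != nil {
		log.Fatal(err)
	}

	// One `go` invocation for all the cmds instead of one per cmd.
	cmd := exec.Command("go", "install", "-v", "-ldflags", ldflags,
		"./cmd/syncthing", "./cmd/stdiscosrv", "./cmd/strelaysrv")
	cmd.Env = append(os.Environ(), "GOBIN="+bin) // GOBIN must be absolute
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```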
When it needs to rebuild everything (go clean -cache):
( ./old-build -gocmd go1.14.2 build all 2> /dev/null; ) 65.82s user 11.28s system 574% cpu 13.409 total
( ./new-build -gocmd go1.14.2 build all 2> /dev/null; ) 63.26s user 7.12s system 1220% cpu 5.766 total
On a subsequent run (nothing to build, just link the binaries):
( ./old-build -gocmd go1.14.2 build all 2> /dev/null; ) 26.58s user 7.53s system 582% cpu 5.853 total
( ./new-build -gocmd go1.14.2 build all 2> /dev/null; ) 18.66s user 2.45s system 1090% cpu 1.935 total
This makes sure addresses are sorted when they come in from the API.
The database merge operation still checks for correct ordering (which
is quick) and, if the record isn't correctly ordered (legacy database
record or replication peer), makes a copy first and then sorts it.
Tested with -race in production...
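A minimal sketch of the check-then-copy-then-sort idea; the type and
function names are illustrative, not the actual database code:

```go
package main

import (
	"fmt"
	"sort"
)

// DatabaseAddress is an illustrative stand-in for the real record type.
type DatabaseAddress struct {
	Address string
}

// sortedAddresses returns addrs in sorted order. When the input is out
// of order (legacy record, replication peer) it copies before sorting,
// so the caller's slice is never mutated in place.
func sortedAddresses(addrs []DatabaseAddress) []DatabaseAddress {
	less := func(i, j int) bool { return addrs[i].Address < addrs[j].Address }
	if sort.SliceIsSorted(addrs, less) {
		return addrs // the common case once the API sorts on the way in
	}
	cp := make([]DatabaseAddress, len(addrs))
	copy(cp, addrs)
	sort.Slice(cp, func(i, j int) bool { return cp[i].Address < cp[j].Address })
	return cp
}

func main() {
	fmt.Println(sortedAddresses([]DatabaseAddress{{"tcp://b"}, {"tcp://a"}}))
}
```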
This adds a certificate lifetime parameter to our certificate
generation and hardcodes it to twenty years in some uninteresting
places. In the main binary a couple of constants result in twenty
years for the device certificate and 820 days for the HTTPS one. 820
is less than the 825-day maximum that Apple allows nowadays.
This also means we must be prepared for certificates to expire, so I
add some handling for that and generate a new certificate when needed.
For self-signed certificates we regenerate a month ahead of time. For
other certificates we leave well enough alone.
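A minimal sketch of the expiry handling for self-signed certificates,
with illustrative names and constants matching the description above
(not the actual certificate code):

```go
// Package certutil is illustrative, not the real certificate package.
package certutil

import (
	"crypto/tls"
	"crypto/x509"
	"time"
)

// Illustrative lifetimes matching the description above.
const (
	deviceCertLifetime = 20 * 365 * 24 * time.Hour // roughly twenty years
	httpsCertLifetime  = 820 * 24 * time.Hour      // under Apple's 825-day limit
	renewalWindow      = 30 * 24 * time.Hour       // regenerate a month ahead
)

// shouldRegenerate reports whether a self-signed certificate has expired
// or will expire within the renewal window.
func shouldRegenerate(cert tls.Certificate, now time.Time) (bool, error) {
	leaf, err := x509.ParseCertificate(cert.Certificate[0])
	if err != nil {
		return false, err
	}
	return now.After(leaf.NotAfter.Add(-renewalWindow)), nil
}
```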
The relay and discosrv didn't use the new lib/build package; now they
do. Conversely, the lib/build package wasn't aware there might be
other users and hardcoded the program name - now it's set by the build
script.
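A minimal sketch of the mechanism, with a hypothetical import path: the
program name is a plain variable in the build package, and the build
script overrides it per binary via -ldflags -X:

```go
// Package build holds program/version metadata. Illustrative, not the
// real lib/build.
package build

// Program is overridden by the build script, e.g.
//   go build -ldflags "-X example.com/proj/lib/build.Program=stdiscosrv" ./cmd/stdiscosrv
var Program = "syncthing"
```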
This is the result of:
- Changing build.go to take the protobuf version from the modules
instead of hardcoding it (see the sketch after this list)
- `go get github.com/gogo/protobuf@v1.3.0` to upgrade
- `go run build.go proto` to regenerate our code
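A minimal sketch of taking the version from the modules instead of
hardcoding it, by shelling out to `go list -m`; build.go may well do it
differently:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// protobufVersion asks the module system which gogo/protobuf version
// this build uses, rather than keeping a hardcoded copy of it.
func protobufVersion() (string, error) {
	out, err := exec.Command("go", "list", "-m", "-f", "{{.Version}}",
		"github.com/gogo/protobuf").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	v, err := protobufVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v) // e.g. v1.3.0 after the upgrade
}
```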
* go mod init; rm -rf vendor
* tweak proto files and generation
* go mod vendor
* clean up build.go
* protobuf literals in tests
* downgrade gogo/protobuf
This changes the TLS and certificate handling in a few ways:
- We always use TLS 1.2, both for sync connections (as previously) and
the GUI/REST/discovery stuff. This is a tightening of the requirements
on the GUI. As far as I can tell from caniusethis.com, every browser from
2013 and forward supports TLS 1.2, so I think we should be fine.
- We always create ECDSA certificates. Previously we'd create
ECDSA-with-RSA certificates for sync connections and pure RSA
certificates for the web stuff. The new default is more modern and the
same everywhere. These certificates are OK in TLS 1.2.
- We use the Go CPU detection stuff to choose the cipher suites to use,
indirectly. The TLS package uses CPU capability probing to select
either AES-GCM (fast if we have AES-NI) or ChaCha20 (faster if we
don't). These CPU detection things aren't exported though, so the
tlsutil package now does a quick TLS handshake with itself as part of
init(); a sketch of the idea follows after this list. If the chosen
cipher suite was AES-GCM we prioritize that, otherwise we prefer
ChaCha20. Some might call this ugly. I think it's awesome.
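A minimal sketch of the probing idea (illustrative code, not the actual
tlsutil implementation): do a TLS 1.2 handshake with ourselves over an
in-memory pipe using a throwaway ECDSA certificate, and see whether the
negotiated suite is AES-GCM, which with a default config tends to track
whether crypto/tls detected AES hardware:

```go
package tlsutil

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// aesGCMPreferred reports whether a loopback TLS 1.2 handshake lands on
// an AES-GCM suite.
func aesGCMPreferred() bool {
	cert, err := selfSignedECDSA()
	if err != nil {
		return false
	}
	srvConf := &tls.Config{
		Certificates: []tls.Certificate{cert},
		MinVersion:   tls.VersionTLS12,
		MaxVersion:   tls.VersionTLS12,
	}
	cliConf := &tls.Config{
		InsecureSkipVerify: true, // we're talking to ourselves
		MinVersion:         tls.VersionTLS12,
		MaxVersion:         tls.VersionTLS12,
	}

	c0, c1 := net.Pipe()
	defer c0.Close()
	defer c1.Close()
	server := tls.Server(c0, srvConf)
	client := tls.Client(c1, cliConf)

	done := make(chan error, 1)
	go func() { done <- server.Handshake() }()
	if err := client.Handshake(); err != nil {
		return false
	}
	if err := <-done; err != nil {
		return false
	}

	switch client.ConnectionState().CipherSuite {
	case tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
		tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384:
		return true
	}
	return false
}

// selfSignedECDSA creates a throwaway ECDSA certificate for the probe.
func selfSignedECDSA() (tls.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return tls.Certificate{}, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "probe"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return tls.Certificate{}, err
	}
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}, nil
}
```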
This is a new revision of the discovery server. Relevant changes and
non-changes:
- Protocol towards clients is unchanged.
- Recommended large scale design is still to be deployed behind nginx (I
tested, and it's still a lot faster at terminating TLS).
- Database backend is leveldb again, and leveldb only. It scales well
enough, is easy to set up, and means there's no separate database
server for us to take care of.
- Server supports replication. This is a simple TCP channel - protect it
with a firewall when deploying over the internet. (We deploy this within
the same datacenter, and with a firewall.) Any incoming client announces
are sent over the replication channel(s) to other peer discosrvs.
Incoming replication changes are applied to the database as if they came
from clients, but without the TLS/certificate overhead.
- Metrics are exposed using the prometheus library, when enabled.
- The database values and the replication protocol are protobuf, because
JSON turned out to be quite CPU intensive when I benchmarked it.
- The "Retry-After" value for failed lookups gets slowly increased from
a default of 120 seconds, by 5 seconds for each failed lookup,
independently by each discosrv. This lowers the query load over time for
clients that are never seen. The Retry-After maxes out at 3600 after a
couple of weeks of this increase. The number of failed lookups is
stored in the database, now and then (avoiding making each lookup a
database put).
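A minimal sketch of the backoff, with illustrative names; the real
constants and storage details live in the discosrv code:

```go
package main

import (
	"fmt"
	"time"
)

const (
	defaultRetryAfter = 120 * time.Second
	retryAfterStep    = 5 * time.Second
	maxRetryAfter     = 3600 * time.Second
)

// retryAfter grows linearly with the number of failed lookups for a
// device, capped at an hour, so never-seen devices get queried less
// and less often.
func retryAfter(misses int) time.Duration {
	d := defaultRetryAfter + time.Duration(misses)*retryAfterStep
	if d > maxRetryAfter {
		d = maxRetryAfter
	}
	return d
}

func main() {
	fmt.Println(retryAfter(0), retryAfter(100), retryAfter(10000)) // 2m0s 10m20s 1h0m0s
}
```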
All in all this means clients can be pointed towards a cluster using
just multiple A / AAAA records to gain both load sharing and redundancy
(if one is down, clients will talk to the remaining ones).
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4648
This adds autodetection of the fastest hashing library on startup,
thus handling the performance regression. It also adds an environment
variable to control the selection: STHASHING=standard (the Go standard
library version, which avoids SIGILL crashes when the minio library
has bugs on odd CPUs), STHASHING=minio (to force the minio version),
or unset for the default autodetection.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3617
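A minimal sketch of the selection logic, with illustrative names; the
commit's benchmark-based autodetection for the unset case is not shown:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"hash"
	"os"

	minio "github.com/minio/sha256-simd"
)

// newHash picks the SHA-256 implementation based on STHASHING.
func newHash() hash.Hash {
	switch os.Getenv("STHASHING") {
	case "standard":
		return sha256.New() // Go standard library, safest on odd CPUs
	case "minio":
		return minio.New() // force the assembly-optimized version
	default:
		return sha256.New() // placeholder for the autodetected choice
	}
}

func main() {
	h := newHash()
	h.Write([]byte("hello"))
	fmt.Printf("%x\n", h.Sum(nil))
}
```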