Currently a random dev version has a version string like this:
v1.7.0-rc.1+23-gef3441bd6
That is, the tag name followed by a plus sign and the git describe
metadata (number of commits and hash) plus -dirty or -branchname in some
cases.
We introduced the plus sign in #473, where a dev version would
previously be called v0.9.0-42-gwhatever, which is considered older
than v0.9.0 and hence caused a downgrade. The problem with the plus
sign is that per semver everything after it is ignored as build
metadata, which means we won't upgrade from v1.7.0-rc.1 to
v1.7.0-rc.1+22-g946170f3f.
With this change we instead either just add a dev suffix (if we're
already on a prerelease version) or bump the patch version and add a
dev suffix:
v1.7.0-rc.1+23-gef3441bd6 => v1.7.0-rc.1.dev.23.gef3441bd6
v1.6.1+80-gef3441bd6 => v1.6.2-dev.80.gef3441bd6
This should preserve the ordering and keep versions semver-ish.
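For illustration, a minimal sketch of the rewrite rule (the regex and
function names are made up for this example, and the -dirty /
-branchname cases are ignored):

    package main

    import (
        "fmt"
        "regexp"
        "strconv"
        "strings"
    )

    // describeRe matches "tag+N-ghash" style git describe metadata.
    var describeRe = regexp.MustCompile(`^(v\d+\.\d+\.(\d+))(-[^+]+)?\+(\d+)-g([0-9a-f]+)$`)

    // devVersion rewrites git describe metadata into a semver-ish dev
    // version: prereleases get a ".dev.N.ghash" suffix appended, while
    // releases get the patch bumped plus a "-dev.N.ghash" prerelease.
    func devVersion(v string) string {
        m := describeRe.FindStringSubmatch(v)
        if m == nil {
            return v // no build metadata, leave as is
        }
        base, patch, pre, commits, hash := m[1], m[2], m[3], m[4], m[5]
        if pre != "" {
            // v1.7.0-rc.1+23-gef3441bd6 => v1.7.0-rc.1.dev.23.gef3441bd6
            return fmt.Sprintf("%s%s.dev.%s.g%s", base, pre, commits, hash)
        }
        // v1.6.1+80-gef3441bd6 => v1.6.2-dev.80.gef3441bd6
        p, _ := strconv.Atoi(patch)
        bumped := strings.TrimSuffix(base, "."+patch) + fmt.Sprintf(".%d", p+1)
        return fmt.Sprintf("%s-dev.%s.g%s", bumped, commits, hash)
    }

    func main() {
        fmt.Println(devVersion("v1.7.0-rc.1+23-gef3441bd6"))
        fmt.Println(devVersion("v1.6.1+80-gef3441bd6"))
    }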
This changes the build script to build all the things in one go
invocation, instead of one invocation per cmd. This is a lot faster
because more things get compiled concurrently. It's especially faster
when things *don't* need to be rebuilt, presumably because the
dependency graph and such only needs to be computed once instead of
once per binary.
In order for this to work we need to pass the same ldflags to all the
binaries. This means we can't set the program name with an ldflag.
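As a sketch of the difference (not the actual build.go code; using `go
install` with GOBIN as one way to produce all binaries from a single
invocation):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        // The same ldflags are shared by every binary, which is what
        // makes a single invocation possible; anything binary-specific,
        // such as the program name, can no longer be injected here.
        ldflags := "-w -X github.com/syncthing/syncthing/lib/build.Version=v1.7.0-rc.1.dev.23.gef3441bd6"

        bin, err := filepath.Abs("bin")
        if err != nil {
            log.Fatal(err)
        }

        // Before: one `go build` per command, recomputing the build
        // graph each time:
        //   for _, c := range cmds { run("go", "build", "-ldflags", ldflags, "./cmd/"+c) }
        // After: one invocation for all commands; the dependency graph
        // is computed once and shared packages compile concurrently.
        cmd := exec.Command("go", "install", "-ldflags", ldflags, "./cmd/...")
        cmd.Env = append(os.Environ(), "GOBIN="+bin)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }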
When it needs to rebuild everything (go clean -cache):
( ./old-build -gocmd go1.14.2 build all 2> /dev/null; ) 65.82s user 11.28s system 574% cpu 13.409 total
( ./new-build -gocmd go1.14.2 build all 2> /dev/null; ) 63.26s user 7.12s system 1220% cpu 5.766 total
On a subsequent run (nothing to build, just link the binaries):
( ./old-build -gocmd go1.14.2 build all 2> /dev/null; ) 26.58s user 7.53s system 582% cpu 5.853 total
( ./new-build -gocmd go1.14.2 build all 2> /dev/null; ) 18.66s user 2.45s system 1090% cpu 1.935 total
The relay and discosrv didn't use the new lib/build package; now they
do. Conversely, the lib/build package wasn't aware there might be other
users and hardcoded the program name; now it's set by the build
script.
Allow extending LDFLAGS by setting EXTRA_LDFLAGS, to be able to pass
-extldflags=-zrelro and -extldflags=-znow for Arch Linux packaging to
get full RELRO.
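A minimal sketch of the hook, assuming EXTRA_LDFLAGS is a
whitespace-separated string that is appended verbatim to the computed
ldflags (names simplified, not the exact build.go code):

    package main

    import (
        "fmt"
        "os"
    )

    // ldflags computes the linker flags and lets packagers append their
    // own via the EXTRA_LDFLAGS environment variable, e.g.:
    //   EXTRA_LDFLAGS="-extldflags=-zrelro -extldflags=-znow" go run build.go
    func ldflags(version string) string {
        flags := "-w -X github.com/syncthing/syncthing/lib/build.Version=" + version
        if extra := os.Getenv("EXTRA_LDFLAGS"); extra != "" {
            flags += " " + extra
        }
        return flags
    }

    func main() {
        fmt.Println(ldflags("v1.6.2-dev.80.gef3441bd6"))
    }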
This is the result of:
- Changing build.go to take the protobuf version from the go modules
instead of hardcoding it (see the sketch after this list)
- `go get github.com/gogo/protobuf@v1.3.0` to upgrade
- `go run build.go proto` to regenerate our code
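A sketch of how the version can come from the module graph instead of
a constant, using `go list -m` (the helper is hypothetical; the real
build.go may do this differently):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // protobufVersion asks the go tool for the gogo/protobuf version
    // pinned in go.mod, so build.go no longer needs to hardcode it.
    func protobufVersion() string {
        out, err := exec.Command("go", "list", "-m", "-f", "{{.Version}}",
            "github.com/gogo/protobuf").Output()
        if err != nil {
            log.Fatal(err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        fmt.Println(protobufVersion()) // e.g. v1.3.0
    }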
* go mod init; rm -rf vendor
* tweak proto files and generation
* go mod vendor
* clean up build.go
* protobuf literals in tests
* downgrade gogo/protobuf
Fixes error "code in directory GOPATH/src/github.com/golang/lint/golint expects import golang.org/x/lint/golint" during the
go run build.go setup command.
See https://github.com/golang/lint/issues/415
This makes the environment variables easier to read and change. It
also expands the list so it's not inline; that's more readable.
The new architecture syntax for snapcraft allows specifying both the
architecture to build on and the architecture the snap will run on. If
we specify the build-on architecture as the host architecture and the
run-on architecture as the target architecture, snapcraft shouldn't
need to install any cross-compilers and the like.
This is a new revision of the discovery server. Relevant changes and
non-changes:
- Protocol towards clients is unchanged.
- Recommended large scale design is still to be deployed behind nginx
(I tested, and it's still a lot faster at terminating TLS).
- Database backend is LevelDB again, and LevelDB only. It scales well
enough, is easy to set up, and means there's no separate database
service to take care of.
- Server supports replication. This is a simple TCP channel - protect
it with a firewall when deploying over the internet. (We deploy this
within the same datacenter, and with a firewall.) Any incoming client
announces are sent over the replication channel(s) to other peer
discosrvs. Incoming replication changes are applied to the database as
if they came from clients, but without the TLS/certificate overhead. A
hypothetical framing sketch follows this list.
- Metrics are exposed using the prometheus library, when enabled.
- The database values and the replication protocol are protobuf,
because JSON was quite CPU intensive when I tried and benchmarked it.
- The "Retry-After" value for failed lookups gets slowly increased from
a default of 120 seconds, by 5 seconds for each failed lookup,
independently by each discosrv. This lowers the query load over time for
clients that are never seen. The Retry-After maxes out at 3600 after a
couple of weeks of this increase. The number of failed lookups is
stored in the database, now and then (avoiding making each lookup a
database put).
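The replication framing sketched below is hypothetical - the text
above only says it's protobuf over a plain TCP channel - but it shows
the general shape of such a channel:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    // sendAnnounce forwards one protobuf-encoded announce record to a
    // replication peer as a 32-bit big-endian length prefix followed by
    // the body. The framing is made up here, for illustration only.
    func sendAnnounce(w io.Writer, msg []byte) error {
        var hdr [4]byte
        binary.BigEndian.PutUint32(hdr[:], uint32(len(msg)))
        if _, err := w.Write(hdr[:]); err != nil {
            return err
        }
        _, err := w.Write(msg)
        return err
    }

    func main() {
        var buf bytes.Buffer // stands in for the TCP connection
        _ = sendAnnounce(&buf, []byte("announce bytes"))
        fmt.Printf("% x\n", buf.Bytes())
    }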
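And the Retry-After arithmetic, which follows directly from the
numbers above (names illustrative, not the actual discosrv code):

    package main

    import (
        "fmt"
        "time"
    )

    // retryAfter computes the Retry-After value for a device that has
    // never been seen: 120 s by default, plus 5 s per failed lookup,
    // capped at 3600 s. Reaching the cap takes (3600-120)/5 = 696
    // misses which, with clients waiting roughly these intervals
    // between retries, adds up to about two weeks.
    func retryAfter(misses int) time.Duration {
        const (
            base  = 120 * time.Second
            step  = 5 * time.Second
            limit = 3600 * time.Second
        )
        d := base + time.Duration(misses)*step
        if d > limit {
            return limit
        }
        return d
    }

    func main() {
        fmt.Println(retryAfter(0), retryAfter(500), retryAfter(700))
    }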
All in all this means clients can be pointed towards a cluster using
just multiple A / AAAA records to gain both load sharing and redundancy
(if one is down, clients will talk to the remaining ones).
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4648