This changes the build script to build all the things in one `go`
invocation, instead of one invocation per cmd. This is a lot faster
because more things get compiled concurrently. It's especially faster
when things *don't* need to be rebuilt, possibly because the dependency
map and such only needs to be built once instead of once per binary.
In order for this to work we need to be able to pass the same ldflags to
all the binaries. This means we can't set the program name with an
ldflag.
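For illustration, a minimal sketch of what the combined invocation
might look like in build.go; the package pattern, output directory and
flag plumbing are assumptions, not the script's actual code:

    package main

    import (
        "os"
        "os/exec"
    )

    // buildAll runs a single "go build" covering every main package
    // under cmd/, passing the same ldflags to all of them. With a Go
    // version where -o accepts a directory (1.15+), all resulting
    // binaries land in that directory.
    func buildAll(ldflags string) error {
        cmd := exec.Command("go", "build",
            "-ldflags", ldflags, "-o", "bin", "./cmd/...")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }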
When it needs to rebuild everything (go clean -cache):
( ./old-build -gocmd go1.14.2 build all 2> /dev/null; ) 65.82s user 11.28s system 574% cpu 13.409 total
( ./new-build -gocmd go1.14.2 build all 2> /dev/null; ) 63.26s user 7.12s system 1220% cpu 5.766 total
On a subsequent run (nothing to build, just link the binaries):
( ./old-build -gocmd go1.14.2 build all 2> /dev/null; ) 26.58s user 7.53s system 582% cpu 5.853 total
( ./new-build -gocmd go1.14.2 build all 2> /dev/null; ) 18.66s user 2.45s system 1090% cpu 1.935 total
The relay and discosrv didn't use the new lib/build package; now they
do. Conversely, the lib/build package wasn't aware there might be other
users and hard coded the program name - now it's set by the build
script.
Allow extending LDFLAGS by setting EXTRA_LDFLAGS, so that for example
Arch Linux packaging can pass -extldflags=-zrelro -extldflags=-znow to
get full RELRO.
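A minimal sketch of how such a hook might look in build.go; the
function name and base flags are assumptions:

    package main

    import "os"

    // extendedLdflags returns the computed link flags, extended with
    // whatever the caller put in EXTRA_LDFLAGS, for example
    // "-extldflags=-zrelro -extldflags=-znow".
    func extendedLdflags(base string) string {
        if extra := os.Getenv("EXTRA_LDFLAGS"); extra != "" {
            return base + " " + extra
        }
        return base
    }

Example usage:

    EXTRA_LDFLAGS="-extldflags=-zrelro -extldflags=-znow" go run build.go build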
This is the result of:
- Changing build.go to take the protobuf version from the modules
instead of hardcoding it
- `go get github.com/gogo/protobuf@v1.3.0` to upgrade
- `go run build.go proto` to regenerate our code
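A minimal sketch of how build.go might ask the module system for that
version rather than hardcoding it; the function name is an assumption:

    package main

    import (
        "os/exec"
        "strings"
    )

    // protobufVersion returns the gogo/protobuf version currently
    // pinned by go.mod, so code generation follows the modules.
    func protobufVersion() (string, error) {
        out, err := exec.Command("go", "list", "-m", "-f",
            "{{.Version}}", "github.com/gogo/protobuf").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }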
* go mod init; rm -rf vendor
* tweak proto files and generation
* go mod vendor
* clean up build.go
* protobuf literals in tests
* downgrade gogo/protobuf
Fixes error "code in directory GOPATH/src/github.com/golang/lint/golint expects import golang.org/x/lint/golint" during the
go run build.go setup command.
See https://github.com/golang/lint/issues/415
This makes the environment variables easier to read and change. It also
expands the list so it's not inline; it's more readable that way.
The new architecture syntax for snapcraft allows specifying both the
architecture to build on and the architecture the snap will run on. If
we specify the build-on architecture as the host architecture and the
run-on architecture as the target architecture, snapcraft shouldn't
need to install any cross-compilers, etc.
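For illustration, such a stanza in snapcraft.yaml might look like this
(the concrete architectures are placeholders):

    architectures:
      - build-on: amd64
        run-on: arm64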
This is a new revision of the discovery server. Relevant changes and
non-changes:
- Protocol towards clients is unchanged.
- Recommended large scale design is still to be deployed behind nginx
(I tested, and it's still a lot faster at terminating TLS).
- Database backend is LevelDB again, and only LevelDB. It scales well
enough, is easy to set up, and means there is no separate database
server to take care of.
- Server supports replication. This is a simple TCP channel - protect it
with a firewall when deploying over the internet. (We deploy this within
the same datacenter, and with a firewall.) Any incoming client announces
are sent over the replication channel(s) to other peer discosrvs.
Incoming replication changes are applied to the database as if they came
from clients, but without the TLS/certificate overhead.
- Metrics are exposed using the prometheus library, when enabled.
- The database values and replication protocol are protobuf, because
JSON was quite CPU intensive when I tried and benchmarked it.
- The "Retry-After" value for failed lookups gets slowly increased from
a default of 120 seconds, by 5 seconds for each failed lookup,
independently by each discosrv. This lowers the query load over time for
clients that are never seen. The Retry-After maxes out at 3600 after a
couple of weeks of this increase. The number of failed lookups is
stored in the database, now and then (avoiding making each lookup a
database put).
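A minimal sketch of that backoff, with illustrative names (the real
discosrv code is of course more involved):

    // retryAfterSeconds returns the Retry-After value for a device
    // that has had "misses" consecutive failed lookups: 120 s to begin
    // with, growing by 5 s per miss and capped at 3600 s.
    func retryAfterSeconds(misses int) int {
        const (
            base    = 120  // default Retry-After, in seconds
            step    = 5    // added per failed lookup
            ceiling = 3600 // reached after a couple of weeks
        )
        if s := base + misses*step; s < ceiling {
            return s
        }
        return ceiling
    }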
All in all this means clients can be pointed towards a cluster using
just multiple A / AAAA records to gain both load sharing and redundancy
(if one is down, clients will talk to the remaining ones).
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4648
As of Go 1.8 it's valid to not have a GOPATH set. We ask the Go tool
what the GOPATH seems to be, and if we find ourselves (or a valid copy
of ourselves...) in that location we do not fiddle with the GOPATH.
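A minimal sketch of that check; the "valid copy of ourselves"
verification is elided, and a single-element GOPATH is assumed:

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // inGopath reports whether the current checkout already lives
    // under the GOPATH the go tool reports (which, as of Go 1.8, is
    // defaulted even when the environment variable is unset).
    func inGopath() bool {
        out, err := exec.Command("go", "env", "GOPATH").Output()
        if err != nil {
            return false
        }
        gopath := strings.TrimSpace(string(out))
        cwd, err := os.Getwd()
        if err != nil {
            return false
        }
        prefix := filepath.Join(gopath, "src") + string(filepath.Separator)
        return strings.HasPrefix(cwd, prefix)
    }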
This adds support for building with the source placed anywhere and no
GOPATH set. The build script handles this by creating a temporary GOPATH
in the system temp dir (or another specified location) and mirroring the
source there before building. The resulting binaries etc still end up in
the same place as usual, meaning at least the "build", "install", "tar",
"zip", "deb", "snap", "test", "vet", "lint", "metalint" and "clean"
commands work without a GOPATH. To this end these commands internally
use fully qualified package paths like
"github.com/syncthing/syncthing/cmd/..." instead of "./cmd/..." like
before.
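A minimal sketch of the mirroring step under the stated assumptions (a
plain file copy into the canonical import path; the real script also
has to care about symlinks and change detection):

    package main

    import (
        "io/ioutil"
        "os"
        "path/filepath"
    )

    // mirrorSource copies the source tree at src into the temporary
    // GOPATH so that fully qualified package paths resolve there.
    func mirrorSource(src, gopath string) error {
        dst := filepath.Join(gopath, "src",
            "github.com", "syncthing", "syncthing")
        return filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            rel, err := filepath.Rel(src, path)
            if err != nil {
                return err
            }
            if info.IsDir() {
                return os.MkdirAll(filepath.Join(dst, rel), 0755)
            }
            data, err := ioutil.ReadFile(path)
            if err != nil {
                return err
            }
            return ioutil.WriteFile(filepath.Join(dst, rel), data, info.Mode())
        })
    }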
There is a new command "gopath" that prepares and echoes the directory
of the temporary GOPATH. This can be used to run other non-build go
commands:
export GOPATH=$(go run build.go gopath)  # GOPATH is now set
go test -v -race github.com/syncthing/syncthing/cmd/...
There is a new option "-no-build-gopath" that prevents the
check-and-copy step, instead assuming the temporary GOPATH is already
created and up to date. This is a performance optimization for build
servers running multiple build commands in sequence:
go run build.go gopath  # creates a temporary GOPATH
go run build.go -no-build-gopath -goos=... tar  # reuses GOPATH
go run build.go -no-build-gopath -goos=... tar  # reuses GOPATH
The temporary GOPATH is placed in the system temporary directory
(os.TempDir()) unless overridden by the STTMPDIR variable. It is named
after the hash of the current directory where build.go is run. The
reason for this is that the name should be unique to a source checkout
without risk of conflict, but still persistent between runs of
build.go.
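A minimal sketch of that naming scheme; the hash choice and directory
prefix are assumptions:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "os"
        "path/filepath"
    )

    // tempGopathDir derives a per-checkout temporary GOPATH: rooted in
    // STTMPDIR or the system temp dir, named after a hash of the
    // directory build.go runs in, so repeated runs reuse it while
    // different checkouts never collide.
    func tempGopathDir() (string, error) {
        base := os.TempDir()
        if d := os.Getenv("STTMPDIR"); d != "" {
            base = d
        }
        cwd, err := os.Getwd()
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256([]byte(cwd))
        return filepath.Join(base, fmt.Sprintf("gopath-%x", sum[:8])), nil
    }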
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4253
LGTM: AudriusButkevicius, imsodin