Commit Graph

18 Commits

Author SHA1 Message Date
Jakob Borg
a80e6be353 cmd/stdiscosrv: Streamline context handling 2023-08-30 09:36:27 +02:00
Jakob Borg
acc532fc60 cmd/stdiscosrv: Explicitly enable HTTP/2
The server supports it, but it's not negotiated unless explicitly
allowed in the TLS config NextProtos.
2023-08-30 09:09:52 +02:00
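For context: Go's net/http server negotiates HTTP/2 over TLS only when "h2" is offered via ALPN, and per the documentation for Server.ListenAndServeTLS, a non-nil custom TLSConfig whose NextProtos lacks "h2" silently falls back to HTTP/1.1. A minimal sketch of the kind of change this commit describes (illustrative, not the actual stdiscosrv code):

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    func main() {
        srv := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                MinVersion: tls.VersionTLS12,
                // With a custom TLSConfig, HTTP/2 is negotiated only
                // if "h2" is explicitly offered in the ALPN list.
                NextProtos: []string{"h2", "http/1.1"},
            },
        }
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }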
Jakob Borg
92a4931850 cmd/stdiscosrv: Modernise TLS settings, remove excessive HTTP logging 2023-08-23 13:39:52 +02:00
Jakob Borg
bdfef9010f cmd/stdiscosrv: Serve compressed responses 2023-08-23 13:39:14 +02:00
deepsource-autofix[bot]
5130c414da
all: Unused parameter should be replaced by underscore (#8464)
Co-authored-by: deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
2022-07-28 17:17:29 +02:00
Kebin Liu
083fa1803a
cmd/stdiscosrv: Support deploying behind Caddyserver (#8092) 2022-01-05 15:17:13 +01:00
greatroar
e7620e951d cmd/stdiscosrv: use strconv.Itoa 2021-11-27 15:35:07 +01:00
Simon Frei
9524b51708
all: Implement suture v4-api (#6947) 2020-11-17 13:19:04 +01:00
Audrius Butkevicius
f310bbaaac
cmd/stdiscosrv: Don't abuse wrong header (#6749) 2020-06-16 10:58:25 +02:00
Audrius Butkevicius
f619a7f4cc
lib/connections: Try TCP punchthrough (fixes #4259) (#5753) 2020-06-16 09:17:07 +02:00
Jakob Borg
9084510e1b
cmd/stdiscosrv: Sort addresses before replication (fixes #6093) (#6094)
This makes sure addresses are sorted as they come in from the API. The
database merge operation still checks for correct ordering (which is
quick) and, if the order is wrong (legacy database record or replication
peer), makes a copy first and then sorts.

Tested with -race in production...
2019-10-18 10:50:19 +02:00
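The check-then-sort pattern described above, as a rough sketch (hypothetical function, not the actual database code): verifying order is cheap in the common case, and only out-of-order records from a legacy database or a replication peer pay for the copy and sort.

    package main

    import (
        "fmt"
        "sort"
    )

    // mergeAddresses leaves the caller's slice untouched: it checks
    // ordering first and only copies and sorts when needed.
    func mergeAddresses(addrs []string) []string {
        if sort.StringsAreSorted(addrs) {
            return addrs // common case: API input arrives pre-sorted
        }
        // Legacy record or replication peer: copy first, then sort.
        sorted := make([]string, len(addrs))
        copy(sorted, addrs)
        sort.Strings(sorted)
        return sorted
    }

    func main() {
        fmt.Println(mergeAddresses([]string{"tcp://b:22000", "tcp://a:22000"}))
    }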
Cyprien Devillez
6408a116f9 cmd/stdiscosrv: Add support for Traefik 2 as a reverse proxy (#6065) 2019-10-07 12:55:27 +01:00
Jakob Borg
e384c822b6 cmd/stdiscosrv: Be more picky about allowed addresses (fixes #5151) (#5153)
Filter out ludicrous stuff both from explicitly announced addresses and
potential automatic replacements.
2018-08-30 18:06:35 +01:00
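A hedged sketch of what such address filtering can look like (the predicate and its rules are assumptions, not the actual stdiscosrv checks): parse the announced address, require a usable port, and reject IPs that are loopback, link-local, multicast or unspecified.

    package main

    import (
        "fmt"
        "net"
        "net/url"
    )

    // isPlausibleAddress is a hypothetical filter for announced
    // addresses; net.IP.IsGlobalUnicast rejects loopback, link-local,
    // multicast and unspecified addresses.
    func isPlausibleAddress(s string) bool {
        u, err := url.Parse(s)
        if err != nil {
            return false
        }
        host, port, err := net.SplitHostPort(u.Host)
        if err != nil || port == "0" {
            return false
        }
        ip := net.ParseIP(host)
        if ip == nil {
            return true // hostname; leave validation to the resolver
        }
        return ip.IsGlobalUnicast()
    }

    func main() {
        fmt.Println(isPlausibleAddress("tcp://192.0.2.1:22000")) // true
        fmt.Println(isPlausibleAddress("tcp://127.0.0.1:22000")) // false
    }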
Jakob Borg
59802c3981
cmd/stdiscosrv, vendor: Remove remnants of golang.org/x/net/context (#4843) 2018-03-27 07:37:34 -04:00
Jakob Borg
71fab4d250 cmd/stdiscosrv: Record time of failed lookup
So that we can eventually garbage collect keys that no one is asking
about any more.
2018-03-06 16:15:29 +01:00
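The idea in miniature, with a hypothetical record type (not the actual schema): stamping the time of failed lookups means a later sweep can drop keys that nobody has asked about within some retention window.

    package main

    import (
        "fmt"
        "time"
    )

    // deviceRecord is a hypothetical database record; LastLookup is
    // updated even when the lookup finds no addresses.
    type deviceRecord struct {
        Addresses  []string
        LastLookup time.Time
    }

    // shouldGC reports whether an empty record has gone unqueried
    // longer than the retention window.
    func shouldGC(rec deviceRecord, retention time.Duration) bool {
        return len(rec.Addresses) == 0 && time.Since(rec.LastLookup) > retention
    }

    func main() {
        rec := deviceRecord{LastLookup: time.Now().Add(-40 * 24 * time.Hour)}
        fmt.Println(shouldGC(rec, 30*24*time.Hour)) // true
    }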
Jakob Borg
b97d5bcca8
Remove KCP (fixes #4737) (#4741) 2018-02-09 11:40:57 +01:00
Jakob Borg
441230ff77 cmd/stdiscosrv: Handle address family indicator on other schemes than tcp 2018-01-28 10:24:48 +01:00
Jakob Borg
916ec63af6 cmd/stdiscosrv: New discovery server (fixes #4618)
This is a new revision of the discovery server. Relevant changes and
non-changes:

- Protocol towards clients is unchanged.

- Recommended large scale design is still to be deployed behind nginx (I
  tested, and it's still a lot faster at terminating TLS).

- Database backend is LevelDB again, and only LevelDB. It scales well
  enough, is easy to set up, and there is no separate backend service to
  take care of.

- Server supports replication. This is a simple TCP channel - protect it
  with a firewall when deploying over the internet. (We deploy this within
  the same datacenter, and with a firewall.) Any incoming client announces
  are sent over the replication channel(s) to other peer discosrvs.
  Incoming replication changes are applied to the database as if they came
  from clients, but without the TLS/certificate overhead. A framing sketch
  follows this commit message.

- Metrics are exposed using the prometheus library, when enabled.

- The database values and replication protocol are protobuf, because JSON
  was quite CPU intensive when I benchmarked it.

- The "Retry-After" value for failed lookups gets slowly increased from
  a default of 120 seconds, by 5 seconds for each failed lookup,
  independently by each discosrv. This lowers the query load over time for
  clients that are never seen. The Retry-After maxes out at 3600 after a
  couple of weeks of this increase. The number of failed lookups is
  stored in the database, now and then (avoiding making each lookup a
  database put).

All in all this means clients can be pointed towards a cluster using
just multiple A / AAAA records to gain both load sharing and redundancy
(if one is down, clients will talk to the remaining ones).

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4648
2018-01-14 08:52:31 +00:00
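On the replication channel: the commit message describes a plain TCP stream carrying protobuf records. A sketch of one plausible framing (the 4-byte length prefix and the names here are assumptions, not the documented wire format):

    package replication

    import (
        "encoding/binary"
        "io"
        "net"
    )

    // sendRecord writes one length-prefixed message to a peer. The
    // payload would be a marshalled protobuf record.
    func sendRecord(conn net.Conn, payload []byte) error {
        var hdr [4]byte
        binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
        if _, err := conn.Write(hdr[:]); err != nil {
            return err
        }
        _, err := conn.Write(payload)
        return err
    }

    // recvRecord reads one length-prefixed message from a peer.
    func recvRecord(conn net.Conn) ([]byte, error) {
        var hdr [4]byte
        if _, err := io.ReadFull(conn, hdr[:]); err != nil {
            return nil, err
        }
        payload := make([]byte, binary.BigEndian.Uint32(hdr[:]))
        _, err := io.ReadFull(conn, payload)
        return payload, err
    }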
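And the Retry-After backoff as a worked example (a sketch of the stated policy, not the actual implementation): starting at 120 seconds and adding 5 seconds per miss, the 3600-second cap is reached after (3600 - 120) / 5 = 696 failed lookups, which at ever-lengthening retry intervals works out to roughly the "couple of weeks" mentioned above.

    package main

    import "fmt"

    // retryAfter implements the stated policy: 120s base, +5s per
    // recorded miss, capped at 3600s.
    func retryAfter(misses int) int {
        const base, step, ceiling = 120, 5, 3600
        if v := base + step*misses; v < ceiling {
            return v
        }
        return ceiling
    }

    func main() {
        fmt.Println(retryAfter(0))   // 120
        fmt.Println(retryAfter(100)) // 620
        fmt.Println(retryAfter(696)) // 3600 (cap reached)
    }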