### Purpose
Since #8757, the Syncthing GUI has an unauthenticated state. One
consequence of this is that `$scope.versionBase()` is not initialized
while unauthenticated, which causes the `docsURL` function to truncate
links to just `https://docs.syncthing.net/`, discarding the section
path. This currently affects at least the "Help > Introduction" link,
which is reachable both while logged in and while logged out. The issue
is exacerbated in https://github.com/syncthing/syncthing/pull/9175,
where we sometimes want to show additional contextual help links from
the login page to particular sections of the docs.
I don't think preserving the section path without an explicit version
tag is any worse than falling back to just the host and losing all the
context the link was attempting to provide.
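The change itself is in the GUI JavaScript; below is a minimal Go
sketch of the intended behaviour, with the function shape and names
being illustrative assumptions rather than the real code:
```go
package main

import "fmt"

// docsURL sketches the fixed behaviour: keep the section path even
// when the version base is unknown (unauthenticated GUI), rather than
// falling back to the bare host.
func docsURL(versionBase, section string) string {
	url := "https://docs.syncthing.net"
	if versionBase != "" {
		url += "/" + versionBase
	}
	if section != "" {
		url += "/" + section
	}
	return url
}

func main() {
	fmt.Println(docsURL("", "intro/gui"))         // https://docs.syncthing.net/intro/gui
	fmt.Println(docsURL("v1.27.10", "intro/gui")) // https://docs.syncthing.net/v1.27.10/intro/gui
}
```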
### Testing
- On commit b1ed2802fb (before):
  - Open the GUI, set a username and log out.
  - Open the "Help" drop-down. The "Introduction" item links to:
    https://docs.syncthing.net/
  - Log in.
  - Open the "Help" drop-down. The "Introduction" item links to:
    https://docs.syncthing.net/v1.27.10/intro/gui
- On commit 44fef317800ce1d0795b4e2ebfbd5e9deda849ef (after):
  - Open the GUI, set a username and log out.
  - Open the "Help" drop-down. The "Introduction" item links to:
    https://docs.syncthing.net/intro/gui
  - Log in.
  - Open the "Help" drop-down. The "Introduction" item links to:
    https://docs.syncthing.net/v1.27.10/intro/gui
### Screenshots
This is a GUI change, but it affects only URLs in the markup; there are
no visual changes.
### Drawbacks
If a `docsURL` call generates a versionless link to a docs page that
doesn't exist on https://docs.syncthing.net (presumably because
Syncthing is not the latest version and links to a page that has since
been removed), this will lead to GitHub's generic 404 page with no link
to the Syncthing docs root. Before, any versionless link was also a
pathless link, leading to the Syncthing docs root instead of a 404 page.
### Purpose
The primary aim of this change is to minimize log clutter in production
environments. Many log lines come from an expected race condition when
two devices connect to each other simultaneously: `already connected to
this device`. These messages do not indicate errors and can overwhelm
the log files with unnecessary noise.
By lowering the logging level, we enhance the usability of the logs,
making it easier for users and developers to identify actual issues
without being distracted by expected noise.
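As a rough sketch of the change, assuming a leveled package logger; all
names here are hypothetical stand-ins, not the actual connections code:
```go
package main

import (
	"errors"
	"log"
)

// Hypothetical stand-ins for the connections package's logger and error.
var errAlreadyConnected = errors.New("already connected to this device")

var debugEnabled = false // toggled via the GUI's debug facilities

func debugln(args ...interface{}) {
	if debugEnabled {
		log.Println(args...)
	}
}

func handleConnection(deviceID string, alreadyConnected bool) {
	if alreadyConnected {
		// Previously logged at the default level on every such race;
		// now only visible when connections debugging is enabled.
		debugln(deviceID, errAlreadyConnected)
		return
	}
	// ... proceed with setting up the new connection ...
}

func main() {
	handleConnection("XXXX", true) // silent unless debugEnabled is set
}
```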
### Testing
1. Build syncthing locally
2. Start two Syncthing instances
```bash
./syncthing -no-browser -home=~/.config/syncthing1
./syncthing -no-browser -home=~/.config/syncthing2
```
3. Enable DEBUG logging for the `connections` package from the UI
4. Connect the Syncthing instances by adding remote devices from the UI
5. Observe the logs for the message `XXXX already connected to this
   device`
### Screenshots
![image](https://github.com/user-attachments/assets/882ccb4c-d39d-463a-8f66-2aad97010700)
Move infrastructure-related commands under `cmd/infra` and development
tooling under `cmd/dev`. The default build command builds the regular
user-facing binaries: syncthing, stdiscosrv, and strelaysrv.
Reasoning in comments. The main motivation is to avoid all the case
checks when walking the filesystem.
"again" as we already tried once, but it caused a major issue ragarding
mtimefs layer. The root of this problem has been fixed in the meantime
in ac8b3342a
* infrastructure:
  - chore(stdiscosrv): ensure incoming addresses are sorted and unique
  - chore(stdiscosrv): use zero-allocation merge in the common case
  - chore(stdiscosrv): properly clean out old addresses from memory
  - chore(stdiscosrv): calculate IPv6 GUA
The read/write loops may keep going for a while on a closing connection
with lots of read/write activity, as it's random which select case is
chosen. And if the connection is slow or even broken, a single read or
write can take a long time, potentially until it times out. Add initial
non-blocking selects with only the cases relevant to closing, to
prioritize those.
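A sketch of the pattern in Go; the channel and method names are
assumptions, not the actual connection code:
```go
package main

import "time"

type conn struct {
	closed chan struct{}
	outbox chan []byte
}

// writerLoop shows the pattern: an initial non-blocking select with
// only the close case runs first, so a closing connection is noticed
// even when the outbox always has data ready and the runtime would
// otherwise pick randomly between ready cases.
func (c *conn) writerLoop() {
	for {
		select {
		case <-c.closed:
			return
		default: // not closing; fall through to the full select
		}

		select {
		case <-c.closed:
			return
		case msg := <-c.outbox:
			c.write(msg) // may block for a long time on a slow or broken conn
		}
	}
}

func (c *conn) write(msg []byte) { time.Sleep(time.Millisecond) } // stand-in

func main() {
	c := &conn{closed: make(chan struct{}), outbox: make(chan []byte, 16)}
	go c.writerLoop()
	close(c.closed)
	time.Sleep(10 * time.Millisecond) // give the loop a moment to exit
}
```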
This would have addressed a recent issue that arose when re-ordering
our "filesystem layers", specifically moving the caseFilesystem to the
outermost layer. The previous cache included the filesystem, and as such
all the layers below it. This isn't desirable (to put it mildly), as you
can create different variants of filesystems with different layers for
the same path and options. Concretely, this did happen with the mtime
layer, which isn't always present. A test for the mtime-related breakage
was added in #9687, and I intend to redo the caseFilesystem reordering
after this.
Ref: #9677
Followup to: #9687
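A rough sketch of the idea, with all types and names hypothetical: key
the case cache by the filesystem's identity instead of storing the
wrapped filesystem (and thereby its whole layer stack) inside the cache:
```go
package main

import (
	"fmt"
	"sync"
)

// caseCacheKey identifies a filesystem by type and root URI, so two
// differently-layered stacks over the same path share one cache.
type caseCacheKey struct {
	fsType string
	uri    string
}

type caseCache struct {
	mut      sync.Mutex
	realCase map[string]string // folded name -> on-disk casing
}

var (
	cachesMut sync.Mutex
	caches    = map[caseCacheKey]*caseCache{}
)

func caseCacheFor(fsType, uri string) *caseCache {
	cachesMut.Lock()
	defer cachesMut.Unlock()
	key := caseCacheKey{fsType, uri}
	c, ok := caches[key]
	if !ok {
		c = &caseCache{realCase: map[string]string{}}
		caches[key] = c
	}
	return c
}

func main() {
	a := caseCacheFor("basic", "/data")
	b := caseCacheFor("basic", "/data") // e.g. a stack that also has mtimefs
	fmt.Println(a == b)                 // true: both stacks share one cache
}
```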
This adds the generation of `compat.json` as a release artifact. It
describes the runtime requirements of the release in question. The next
step is to have the upgrade server use this information to filter the
releases provided to clients. This is per the discussion in #9656.
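For illustration only, `compat.json` might carry something along these
lines; the field names and values here are assumptions, not the actual
schema:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// ReleaseCompat is a hypothetical shape for the artifact.
type ReleaseCompat struct {
	Runtime      string            `json:"runtime"`      // Go version the release was built with
	Requirements map[string]string `json:"requirements"` // GOOS -> minimum OS version
}

func main() {
	c := ReleaseCompat{
		Runtime:      "go1.22",
		Requirements: map[string]string{"windows": "10", "darwin": "12"},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
```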
---------
Co-authored-by: Ross Smith II <ross@smithii.com>
This changes the two remaining instances where we use insecure HTTPS to
use standard HTTPS certificate verification.
When we introduced these things, almost a decade ago, HTTPS certificates
were expensive and annoying to get, much of the web was still HTTP, and
many devices seemed to not have up-to-date CA bundles.
Nowadays _all_ of the web is HTTPS and I'm skeptical that any device
can work well without understanding Let's Encrypt certificates in
particular.
Our current discovery servers use hardcoded certificates, which has
several issues:
- Not great for security if it leaks as there is no way to rotate it
- Not great for infrastructure flexibility as we can't use many load
balancer or TLS termination services
- The certificate is a very oddball ECDSA-SHA384 certificate, which has
a higher CPU cost than a more common certificate type; this has real
effects on our infrastructure
Using normal TLS certificates here improves these things.
I expect there will be a very few devices out there for which this
doesn't work. For the foreseeable future they can simply change the
config to use the old URLs and parameters -- it'll be years before we
can retire those entirely.
For the upgrade client this simply seems like better hygiene. While our
releases are signed anyway, protecting the metadata exchange is _better_
and, again, I doubt many clients will fail this today.
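In Go terms the change boils down to this (a sketch, not the literal
diff):
```go
package main

import (
	"crypto/tls"
	"net/http"
)

func main() {
	// Before (sketch): verification disabled, trusting a hardcoded
	// certificate checked by other means.
	insecure := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// After (sketch): the zero-value tls.Config performs standard
	// verification against the system CA bundle, so ordinary
	// Let's Encrypt certificates just work.
	secure := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{},
	}}

	_, _ = insecure, secure
}
```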
The test is quite odd and specific, but it does reproduce the issue
that caused #9677, so I'd propose adding it as a simple regression test
for the basic scenario. Also, the new fakefs option might come in handy
for other scenarios where you want to quickly test some behaviour on a
filesystem without nanosecond precision, without actually needing access
to one.
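A sketch of the behaviour the option enables; the wiring into the
fakefs is omitted and the naming is assumed:
```go
package main

import (
	"fmt"
	"time"
)

// truncateToSecond emulates a filesystem without nanosecond precision
// by dropping the sub-second part of a timestamp; a fakefs with the
// option enabled would apply this to every stored mtime.
func truncateToSecond(t time.Time) time.Time {
	return t.Truncate(time.Second)
}

func main() {
	now := time.Now()
	fmt.Println(now, "->", truncateToSecond(now))
}
```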
gui: Actually filter scope ID out of IPv6 address when using Remote GUI
(ref #8084)
The current code does not work because it passes a string to the
replace() method instead of a regex. Thus, change it to a proper regex.
Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
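The fix itself is in the GUI JavaScript; the intent, sketched in Go
with an illustrative pattern:
```go
package main

import (
	"fmt"
	"regexp"
)

// scopeID matches a zone (scope) ID suffix on a link-local IPv6
// address, e.g. the "%eth0" in "fe80::1%eth0". The pattern is an
// illustration, not the exact one used in the GUI.
var scopeID = regexp.MustCompile(`%.*$`)

func stripScopeID(host string) string {
	return scopeID.ReplaceAllString(host, "")
}

func main() {
	fmt.Println(stripScopeID("fe80::1%eth0")) // fe80::1
}
```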
The preference for languages in the Accept-Language header field
should not be deduced from the listed order, but from the passed
"quality values", according to the HTTP specification:
https://httpwg.org/specs/rfc9110.html#field.accept-language
This implements the parsing of q-values and the resulting ordering
within the API backend, so as not to complicate things further in the
GUI code. Entries with invalid (unparseable) quality values are
discarded completely.
* gui: Fix API endpoint in comment.
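A self-contained sketch of the parsing rule; the function and its
placement are illustrative, not the actual API code:
```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// parseAcceptLanguage orders languages by their quality value
// (defaulting to 1 when absent, per RFC 9110) and drops entries whose
// q-value does not parse.
func parseAcceptLanguage(header string) []string {
	type langQ struct {
		lang string
		q    float64
	}
	var langs []langQ
	for _, part := range strings.Split(header, ",") {
		lang, params, _ := strings.Cut(strings.TrimSpace(part), ";")
		q := 1.0
		if params != "" {
			qs, ok := strings.CutPrefix(strings.TrimSpace(params), "q=")
			if !ok {
				continue // unknown parameter; discard the entry
			}
			var err error
			if q, err = strconv.ParseFloat(qs, 64); err != nil {
				continue // unparseable quality value; discard the entry
			}
		}
		langs = append(langs, langQ{strings.ToLower(strings.TrimSpace(lang)), q})
	}
	sort.SliceStable(langs, func(i, j int) bool { return langs[i].q > langs[j].q })
	out := make([]string, 0, len(langs))
	for _, l := range langs {
		out = append(out, l.lang)
	}
	return out
}

func main() {
	fmt.Println(parseAcceptLanguage("fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7"))
	// [fr-ch fr en de]
}
```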
gui: Fix incorrect UI language auto detection (fixes #9668)
Currently, the code only checks whether the detected language partially
matches one of the available languages. This means that if the detected
language is "fi" but among the available languages there is only "fil"
and no "fi", then it will match "fi" with "fil", even though the two are
completely different languages.
With this change, the matching is only done when there is a hyphen in
the language code, e.g. "en" will match with "en-US".
Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
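The fix lives in the GUI JavaScript; the corrected matching rule,
sketched in Go:
```go
package main

import (
	"fmt"
	"strings"
)

// matchLanguage sketches the rule: an exact match wins, and a prefix
// match only counts when the available language continues with a
// hyphen. "en" thus matches "en-US", while "fi" no longer matches "fil".
func matchLanguage(detected string, available []string) (string, bool) {
	for _, a := range available {
		if strings.EqualFold(a, detected) {
			return a, true
		}
	}
	prefix := strings.ToLower(detected) + "-"
	for _, a := range available {
		if strings.HasPrefix(strings.ToLower(a), prefix) {
			return a, true
		}
	}
	return "", false
}

func main() {
	avail := []string{"en-US", "fil"}
	fmt.Println(matchLanguage("en", avail)) // en-US true
	fmt.Println(matchLanguage("fi", avail)) // "" false
}
```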
This adds a header with the operating system version, verbatim in
whatever format the operating system reports it, to the upgrade check.
The intention is that the upgrade server can use this information to
filter out (or maybe just mark) potentially unsupported upgrades.
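A sketch of the client side; the header name, URL, and version string
are illustrative assumptions:
```go
package main

import (
	"fmt"
	"net/http"
)

// addOSVersionHeader attaches the operating system version, verbatim
// as reported, to the upgrade check request.
func addOSVersionHeader(req *http.Request, osVersion string) {
	req.Header.Set("Syncthing-Os-Version", osVersion) // header name assumed
}

func main() {
	req, err := http.NewRequest(http.MethodGet, "https://example.com/upgrade/meta.json", nil)
	if err != nil {
		panic(err)
	}
	addOSVersionHeader(req, "Darwin Kernel Version 23.5.0") // example value
	fmt.Println(req.Header.Get("Syncthing-Os-Version"))
}
```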