### Purpose
As discussed in #9686
Syncthing currently does not check the folder state on remote devices before
pulling. If no device has a valid folder state (i.e., all devices have
the folder paused), it will still attempt to pull. On large folders this
causes a hanging "Syncing" status.
This change checks whether at least one connected device has the file
available and has a valid folder state.
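For illustration, a minimal sketch of the availability check in Go; all names here are assumptions, not the actual implementation:
```go
// Hedged sketch: only attempt to pull if at least one connected device
// both has a valid folder state (folder not paused) and has the file.
// All identifiers are illustrative, not Syncthing's real types.
type remoteDevice struct {
	connected    bool
	folderPaused bool
	files        map[string]bool
}

func anyAvailableSource(devices []remoteDevice, file string) bool {
	for _, dev := range devices {
		if !dev.connected || dev.folderPaused {
			continue // no valid folder state on this device
		}
		if dev.files[file] {
			return true // someone can actually serve the file
		}
	}
	return false
}
```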
### Testing
Tested locally on multiple devices.
We're new to Go (all our stuff is Python), so please bear with!
Interested if there may be a better place to slot this in.
Thanks,
Jon
---------
Co-authored-by: Simon Frei <freisim93@gmail.com>
### Purpose
This fixes #9775. I also improved the comments, as they were lacking.
My apologies for introducing this bug. In summary, the bug was
```
mode = mode ^ (ModeSymlink | ModeIrregular)
```
didn't correctly reset those bits. This correctly resets them:
```
mode = mode &^ ModeSymlink &^ ModeIrregular
```
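For context on the operators: in Go, binary `^` is XOR, so it toggles the bits (setting them when they were unset), while `&^` (AND NOT) unconditionally clears them. A standalone demonstration:
```go
package main

import (
	"fmt"
	"io/fs"
)

func main() {
	mode := fs.ModeDir // ModeSymlink and ModeIrregular are unset here

	// XOR toggles: since the bits were unset, this wrongly *sets* them.
	wrong := mode ^ (fs.ModeSymlink | fs.ModeIrregular)

	// AND NOT clears the bits regardless of their previous state.
	right := mode &^ fs.ModeSymlink &^ fs.ModeIrregular

	fmt.Println(wrong&fs.ModeSymlink != 0) // true: bit toggled on
	fmt.Println(right&fs.ModeSymlink != 0) // false: bit cleared
}
```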
Tested and working in Windows 11 version 10.0.22631.4317. I didn't test
in other versions, but I'm sure this is the only issue.
Currently we log on every single one of 10 retries deep in the upnp
stack. However, we also return the failure as an error, which bubbles
up for a while until it's logged at debug level. Switch that around,
such that the repeated logging happens at debug level but the top-level
logging happens at info. There's some chance that this will newly log
errors from nat-pmp that were previously hidden at debug level; I hope
those are useful and not too numerous.
Also, potentially this can even close #9324; my (very limited)
understanding of the reports/discussion there is that there's likely no
problem with Syncthing beyond the excessive logging, just some weird
router behaviour.
Skipping these makes the sequence numbering inconsistent; we've received
a file and supposedly added it to the database, but if you check the
sequence number afterwards it didn't increase, i.e., we trigger [this
failure
condition](47f48faed7/lib/model/indexhandler.go (L447-L459))
and, similarly, a future update will look like there was a hole in the
numbering.
I propose to at least temporarily remove this optimisation in order for
things to make more sense. Is there a reason to keep this beyond saving
some database operations?
This should prevent the panic that occurred in this test run:
https://github.com/syncthing/syncthing/actions/runs/11095876010/job/30825046810
```
2024-09-29T21:01:53.5425372Z === RUN TestIssue4357
2024-09-29T21:01:53.5505943Z panic: runtime error: integer divide by zero [recovered]
2024-09-29T21:01:53.5512200Z panic: runtime error: integer divide by zero
2024-09-29T21:01:53.5516633Z
2024-09-29T21:01:53.5523018Z goroutine 2655 [running]:
2024-09-29T21:01:53.5524157Z github.com/thejerf/suture/v4.(*Supervisor).runService.func2.2()
2024-09-29T21:01:53.5527176Z /home/runner/go/pkg/mod/github.com/thejerf/suture/v4@v4.0.5/supervisor.go:563 +0xd0
2024-09-29T21:01:53.5530556Z panic({0x1080d20?, 0x1851290?})
2024-09-29T21:01:53.5564723Z /home/runner/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.23.1.linux-amd64/src/runtime/panic.go:785 +0x132
2024-09-29T21:01:53.5566616Z github.com/syncthing/syncthing/lib/model.(*model).numHashers(0xc0006f6180, {0x117dc1a, 0x7})
2024-09-29T21:01:53.5568061Z /home/runner/work/syncthing/syncthing/lib/model/model.go:2581 +0x210
2024-09-29T21:01:53.5569912Z github.com/syncthing/syncthing/lib/model.(*folder).scanSubdirsChangedAndNew(0xc00c38c808, {0x0, 0x0, 0x0}, 0xc0003fc060)
2024-09-29T21:01:53.5571612Z /home/runner/work/syncthing/syncthing/lib/model/folder.go:653 +0x250
2024-09-29T21:01:53.5573010Z github.com/syncthing/syncthing/lib/model.(*folder).scanSubdirs(0xc00c38c808, {0x0, 0x0, 0x0})
2024-09-29T21:01:53.5574447Z /home/runner/work/syncthing/syncthing/lib/model/folder.go:512 +0xd0f
2024-09-29T21:01:53.5576011Z github.com/syncthing/syncthing/lib/model.(*folder).scanTimerFired(0xc00c38c808)
2024-09-29T21:01:53.5577367Z /home/runner/work/syncthing/syncthing/lib/model/folder.go:916 +0x46
2024-09-29T21:01:53.5579010Z github.com/syncthing/syncthing/lib/model.(*folder).Serve(0xc00c38c808, {0x1307650, 0xc0006a0910})
2024-09-29T21:01:53.5580428Z /home/runner/work/syncthing/syncthing/lib/model/folder.go:205 +0xd7e
2024-09-29T21:01:53.5581624Z github.com/thejerf/suture/v4.(*Supervisor).runService.func2()
2024-09-29T21:01:53.5582978Z /home/runner/go/pkg/mod/github.com/thejerf/suture/v4@v4.0.5/supervisor.go:567 +0x249
2024-09-29T21:01:53.5584400Z created by github.com/thejerf/suture/v4.(*Supervisor).runService in goroutine 2651
2024-09-29T21:01:53.5585872Z /home/runner/go/pkg/mod/github.com/thejerf/suture/v4@v4.0.5/supervisor.go:541 +0x32a
2024-09-29T21:01:53.5661413Z FAIL github.com/syncthing/syncthing/lib/model 5.510s
```
### Testing
I have not been able to reproduce the panic over a few minutes of
continuously running the test without this fix, but judging by the
traceback it seems to happen only if the test deletes the folder from
the config at the same moment `scanTimerFired` triggers.
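A hedged sketch of the failure mode, inferred from the traceback; the names and arithmetic are illustrative, not the exact patch:
```go
// numHashers-style calculation: dividing CPUs among folders panics if
// the folder was just removed from the config and the count reads zero.
func hashersFor(cpus, numFolders int) int {
	if numFolders == 0 {
		return 1 // folder vanished mid-scan; fall back to one hasher
	}
	if n := cpus / numFolders; n > 0 {
		return n
	}
	return 1
}
```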
In ignores, normalize the input when parsing it.
When scanning, normalize earlier such that the path is already
normalized when checking ignores. This requires splitting normalization
of the string from normalization of the file, as we don't want to
attempt the latter if the file is ignored.
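For a standalone illustration of why the string must be normalized before matching (Unicode normalization, NFC here), using `golang.org/x/text/unicode/norm`:
```go
package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	// "café" in decomposed form (NFD): plain 'e' plus a combining accent,
	// as macOS file systems typically report names.
	decomposed := "cafe\u0301"

	// Without normalization the two spellings don't compare equal, so an
	// ignore pattern written in NFC would never match NFD input.
	fmt.Println(decomposed == "café") // false

	// Normalizing to NFC folds the pair into the precomposed rune.
	fmt.Println(norm.NFC.String(decomposed) == "café") // true
}
```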
Closes #9598
---------
Co-authored-by: Jakob Borg <jakob@kastelo.net>
There was a bug where the unique ID was not set when reporting was
enabled, and thus the reports were rejected by the server. The unique
ID got set only on startup, i.e., the next time Syncthing restarted.
This makes sure to set the unique ID when it is blank.
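A hedged sketch of the fix's shape; the field and helper names mirror Syncthing's config (`URAccepted`, `URUniqueID`, `rand.String`) but should be read as assumptions here:
```go
import (
	"github.com/syncthing/syncthing/lib/config"
	"github.com/syncthing/syncthing/lib/rand"
)

// ensureUniqueID fills in the unique ID as soon as reporting is enabled,
// instead of leaving it blank until the next startup.
func ensureUniqueID(opts *config.OptionsConfiguration) {
	if opts.URAccepted > 0 && opts.URUniqueID == "" {
		opts.URUniqueID = rand.String(8)
	}
}
```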
I can already see in our Sentry data that there is a fair number of
these warnings, and mostly what shape they take. Asking users to report
them will likely cause a lot of reporting effort for fairly little
additional value. We can do that when/if we have something more targeted
to ask for.
### Purpose
The primary aim of this change is to minimize log clutter in production
environments. Many lines in the logs come from an expected race
condition when two devices connect to each other: `already connected to
this device`. These messages do not indicate errors and can overwhelm
the log files with unnecessary noise.
By lowering the logging level, we enhance the usability of the logs,
making it easier for users and developers to identify actual issues
without being distracted by expected noise.
### Testing
1. Build syncthing locally
2. Start two Syncthing instances
```bash
./syncthing -no-browser -home=~/.config/syncthing1
./syncthing -no-browser -home=~/.config/syncthing2
```
3. Enable the DEBUG logs from UI for `connections` package
4. Connect the Syncthing instances by adding remote devices from the UI
5. Observe the logs for the message `XXXX already connected to this
device`
### Screenshots
![image](https://github.com/user-attachments/assets/882ccb4c-d39d-463a-8f66-2aad97010700)
Move infrastructure-related commands under `cmd/infra` and
development stuff under `cmd/dev`. The default build command builds the
regular user-facing binaries: syncthing, stdiscosrv, and strelaysrv.
Reasoning in comments. The main motivation is to avoid all the case
checks when walking the filesystem.
"again" as we already tried once, but it caused a major issue ragarding
mtimefs layer. The root of this problem has been fixed in the meantime
in ac8b3342a
The read/write loops may keep going for a while on a closing connection
with lots of read/write activity, as it's random which select case is
chosen. And if the connection is slow or even broken, a single
read/write can take a long time, or until timeout. Add initial
non-blocking selects with only the cases relevant to closing, to
prioritize those.
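A sketch of the pattern, with illustrative channel names:
```go
// writerLoop drains outbox until the connection closes. The initial
// non-blocking select contains only the close case, so a closing
// connection is noticed before committing to another slow write.
func writerLoop(closed <-chan struct{}, outbox <-chan []byte, send func([]byte) error) error {
	for {
		select {
		case <-closed:
			return nil
		default:
		}
		// In this blocking select the winning ready case is chosen at
		// random, which is why the priority check above is needed.
		select {
		case <-closed:
			return nil
		case msg := <-outbox:
			if err := send(msg); err != nil {
				return err
			}
		}
	}
}
```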
This would have addressed a recent issue that arose when re-ordering our
"filesystem layers". Specifically moving the caseFilesystem to the
outermost layer. The previous cache included the filesystem, and as such
all the layers below. This isn't desirable (to put it mildly), as you
can create different variants of filesystems with different layers for
the same path and options. Concretely this did happen with the mtime
layer, which isn't always present. A test for the mtime related breakage
was added in #9687, and I intend to redo the caseFilesystem reordering
after this.
Ref: #9677
Followup to: #9687
This is to add the generation of `compat.json` as a release artifact. It
describes the runtime requirements of the release in question. The next
step is to have the upgrade server use this information to filter
releases provided to clients. This is per the discussion in #9656
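For illustration only, a hypothetical shape for `compat.json`; the actual field names and values are defined by the release tooling, not shown here:
```
{
  "runtime": "go1.23",
  "requirements": {
    "linux": "kernel 3.2.0",
    "windows": "10.0"
  }
}
```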
---------
Co-authored-by: Ross Smith II <ross@smithii.com>
This changes the two remaining instances where we use insecure HTTPS to
use standard HTTPS certificate verification.
When we introduced these things, almost a decade ago, HTTPS certificates
were expensive and annoying to get, much of the web was still HTTP, and
many devices seemed to not have up-to-date CA bundles.
Nowadays _all_ of the web is HTTPS and I'm skeptical that any device can
work well without understanding Let's Encrypt certificates in particular.
Our current discovery servers use hardcoded certificates, which has
several issues:
- Not great for security if it leaks, as there is no way to rotate it
- Not great for infrastructure flexibility, as we can't use many load
balancer or TLS termination services
- The certificate is a very oddball ECDSA-SHA384 type certificate, which
has a higher CPU cost than a more regular certificate; this has real
effects on our infrastructure
Using normal TLS certificates here improves these things.
I expect there will be some very few devices out there for which this
doesn't work. For the foreseeable future they can simply change the
config to use the old URLs and parameters -- it'll be years before we
can retire those entirely.
For the upgrade client this simply seems like better hygiene. While our
releases are signed anyway, protecting the metadata exchange is _better_
and, again, I doubt many clients will fail this today.
The test is quite odd and specific, but it does reproduce the issue that
caused #9677, so I'd propose to add it to have a simple regression test
for the basic scenario. Also, the option added to the fakefs might come in handy
for other scenarios where you want to quickly test some behaviour on a
filesystem without nanosecond precision, without actually needing access
to one.
The preference for languages in the Accept-Language header field
should not be deduced from the listed order, but from the passed
"quality values", according to the HTTP specification:
https://httpwg.org/specs/rfc9110.html#field.accept-language
This implements the parsing of q=values and ordering within the API
backend, to not complicate things further in the GUI code. Entries
with invalid (unparseable) quality values are discarded completely.
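A minimal standalone sketch of that parsing, not the exact implementation:
```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

type langQ struct {
	lang string
	q    float64
}

// parseAcceptLanguage returns languages ordered by quality value, highest
// first; entries with invalid (unparseable) q-values are discarded.
func parseAcceptLanguage(header string) []string {
	var entries []langQ
	for _, part := range strings.Split(header, ",") {
		lang, param, hasParam := strings.Cut(strings.TrimSpace(part), ";")
		q := 1.0 // per RFC 9110, a missing q-value means q=1
		if hasParam {
			val, ok := strings.CutPrefix(strings.TrimSpace(param), "q=")
			if !ok {
				continue // not a quality value; discard the entry
			}
			f, err := strconv.ParseFloat(val, 64)
			if err != nil {
				continue // invalid quality value; discard the entry
			}
			q = f
		}
		entries = append(entries, langQ{strings.TrimSpace(lang), q})
	}
	// Stable sort preserves the listed order among equal q-values.
	sort.SliceStable(entries, func(i, j int) bool { return entries[i].q > entries[j].q })
	langs := make([]string, len(entries))
	for i, e := range entries {
		langs[i] = e.lang
	}
	return langs
}

func main() {
	fmt.Println(parseAcceptLanguage("en-US,de;q=0.8,fr;q=0.9"))
	// Output: [en-US fr de]
}
```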
* gui: Fix API endpoint in comment.
This adds a header with the operating system version, verbatim in
whatever format the operating system reports it, to the upgrade check.
The intention is that the upgrade server can use this information to
filter out (or maybe just mark) potentially unsupported upgrades.
### Purpose
Wrap access to Model for users that use the syncthing Go package. See
discussion:
https://github.com/syncthing/syncthing/pull/9619#pullrequestreview-2212484910
### Testing
It works with the iOS app. Other than that, there are no current users
of this API (to my knowledge), as Model was only exposed recently for
the iOS app.
### Purpose
This PR contains the set of changes needed to make Syncthing work on iOS
for [my iOS app for
Syncthing](https://github.com/pixelspark/sushitrain).
Most changes originate from [the Mobius Sync
fork](http://github.com/MobiusSync/syncthing/tree/ios). I have removed
the changes from their fork that are not strictly needed for my app
(i.e. their changes to the GUI and command line utilities, for instance)
and squashed it all in a single commit.
In summary, the changes are:
* Resolve non-absolute paths to the 'Documents' folder (basically the
only one an app can/should write user data to by default on iOS)
* Tweaking of build flags/conditions for iOS (i.e. determine which
basicfs_watch, ignoreresult variant to build for iOS)
* Disable upgrade mechanism on iOS
* Make `RequestGlobal` and `PullerProgress` public symbols
* Expose syncthing.app's Model instance (app.M)
* Add no-op stub for SetLowPriority on iOS
I would very much appreciate these changes being (eventually) merged
into mainline Syncthing, as this would allow my iOS app to track the
mainline source code directly and remove the need (for me at least) to
maintain a separate fork. Perhaps the Mobius folks can also benefit
from this (although as noted this branch does not contain their changes
to e.g. the GUI).
### Testing
This branch has been tested with the iOS app and appears to work fine.
The full set of MobiusSync changes has been used before with success.
### Screenshots
n/a
### Documentation
There should be no visible changes for users due to this set of changes.
---------
Co-authored-by: Simon Pickup <simon@pickupinfinity.com>
Tiny cleanup I noticed while trying to fix/test another issue
(https://github.com/syncthing/syncthing/pull/9600). I briefly tried to
figure out what it was used for in the past, but gave up without
results.
Previously we queried the cache with backslashes, and stored entries
with slashes. As in, no cache hits ever for non-toplevel files. I also
eventually remembered that the cache is disabled by default, so this is
a bit pointless, but still right :P
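A tiny sketch of the shape of the fix; the cache itself is illustrative:
```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Normalize to one separator on both store and query. On Windows,
	// filepath.ToSlash turns a backslash path into a slash path; storing
	// with slashes but querying with raw backslashes yields no hits for
	// anything below the top level.
	cache := map[string]bool{}
	cache[filepath.ToSlash(filepath.Join("foo", "bar"))] = true
	fmt.Println(cache[filepath.ToSlash(filepath.Join("foo", "bar"))]) // true
}
```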
### Purpose
Avoid the issue where the folder marker is deleted by overzealous
cleanup tools because it's just a useless, empty directory.
We create a small file containing an admonishment to not delete the
directory, and some metadata that is just for human consumption at the
moment. (But it would parse as a valid yaml file if we wanted to read
it, at some point.)
This will only apply when _creating_ a folder marker, that is, existing
setups will not gain the file automatically. Obviously, when using a
custom folder marker none of this applies.
Also, slightly adjust the permission bits for the folder marker directory and file on Unixes, making sure the group and other write bits are unset.
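A hedged sketch of creating such a marker; the file name suffix and exact content are illustrative (the real name carries a generated suffix, as seen in the screenshot below):
```go
import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// createFolderMarker writes the marker directory and the informational
// file, keeping group and other write bits unset (0755 / 0644).
func createFolderMarker(folderPath, folderID string) error {
	marker := filepath.Join(folderPath, ".stfolder")
	if err := os.Mkdir(marker, 0o755); err != nil {
		return err
	}
	content := fmt.Sprintf(
		"# This directory is a Syncthing folder marker.\n# Do not delete.\n\nfolderID: %s\ncreated: %s\n",
		folderID, time.Now().Format(time.RFC3339))
	name := filepath.Join(marker, "syncthing-folder-aaaaaa.txt") // suffix illustrative
	return os.WriteFile(name, []byte(content), 0o644)
}
```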
### Testing
I've created and deleted a few folders and it appears to behave as I
expect.
### Screenshots
```
jb@ok:~/somefolder % ls -la
total 0
drwxr-xr-x 3 jb staff 96 May 1 08:52 ./
drwx------ 12 jb staff 384 May 1 08:52 ../
drwxr-xr-x 3 jb staff 96 May 1 08:52 .stfolder/
jb@ok:~/somefolder % ls -l .stfolder
total 8
-rw-r--r-- 1 jb staff 122 May 1 08:52 syncthing-folder-39a4b0.txt
jb@ok:~/somefolder % cat .stfolder/syncthing-folder-39a4b0.txt
# This directory is a Syncthing folder marker.
# Do not delete.
folderID: xtdca-cudyf
created: 2024-05-01T08:52:49+02:00
```
Currently the maximum delay is always derived automatically from the
initial delay. This is fine in most cases, but for some use cases (large
files that take a long time to write) we need to be able to set a longer
max delay than the computed value (e.g., 15s delay with 10min timeout).
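A sketch of the idea, treating both the option name and the automatic derivation as assumptions:
```go
// maxDelay returns the user-configured maximum when set, and otherwise
// falls back to a value derived from the initial delay. Both the option
// name (timeoutS) and the derivation here are illustrative.
func maxDelay(initial time.Duration, timeoutS int) time.Duration {
	if timeoutS > 0 {
		return time.Duration(timeoutS) * time.Second // e.g. 10 minutes
	}
	if d := 4 * initial; d > time.Minute {
		return d
	}
	return time.Minute
}
```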
This adds a small package `geoip` which knows how to download and manage
the Maxmind GeoLite2 database we use. This removes the need for various
scripts to download and manage the geoip database, something that today
happens on Docker startup for the relay pool server and via various
hand-written hacks for the usage reporting server.
The database is downloaded when needed and then refreshed on a
best-effort basis weekly.
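A hedged sketch of the weekly best-effort refresh loop; the names are assumptions, not the actual `geoip` package API:
```go
import (
	"context"
	"log"
	"time"
)

// refreshLoop re-downloads the database roughly weekly. Failures are
// logged and the previously downloaded database keeps being served.
func refreshLoop(ctx context.Context, download func() error) {
	t := time.NewTicker(7 * 24 * time.Hour)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			if err := download(); err != nil {
				log.Println("geoip refresh failed:", err)
			}
		}
	}
}
```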