This adds a statistic to track the last connection duration per device.
It isn't used for much in this PR, but it's available for #7223 to use
in deciding how to order device connection attempts (deprioritizing
devices that just dropped our connection the last time).
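To illustrate the idea, here is a minimal sketch in Go; the type and field names (deviceStats, connStarted, lastConnectionDuration) are made up for the example and are not the actual statistics API:

```go
package stats

import (
	"sync"
	"time"
)

// deviceStats is an illustrative stand-in for the device statistics store.
type deviceStats struct {
	mut                    sync.Mutex
	connStarted            map[string]time.Time
	lastConnectionDuration map[string]time.Duration
}

func newDeviceStats() *deviceStats {
	return &deviceStats{
		connStarted:            make(map[string]time.Time),
		lastConnectionDuration: make(map[string]time.Duration),
	}
}

// connected records when a connection to a device was established.
func (s *deviceStats) connected(device string) {
	s.mut.Lock()
	defer s.mut.Unlock()
	s.connStarted[device] = time.Now()
}

// disconnected stores the elapsed time as the last connection duration.
func (s *deviceStats) disconnected(device string) {
	s.mut.Lock()
	defer s.mut.Unlock()
	if started, ok := s.connStarted[device]; ok {
		s.lastConnectionDuration[device] = time.Since(started)
		delete(s.connStarted, device)
	}
}

// LastConnectionDuration can then be consulted when ordering connection
// attempts, deprioritizing devices whose last connection was very short.
func (s *deviceStats) LastConnectionDuration(device string) time.Duration {
	s.mut.Lock()
	defer s.mut.Unlock()
	return s.lastConnectionDuration[device]
}
```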
This adds error returns to model methods called by the protocol layer.
Returning an error will cause the connection to be torn down as the
message couldn't be handled. Using this to signal that a folder isn't
currently available will then cause a reconnection a few moments later,
when it'll hopefully work better.
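Roughly, the shape of the change looks like this; the types and the Index signature are simplified stand-ins for the real model and protocol interfaces:

```go
package model

import (
	"fmt"
	"sync"
)

// Simplified stand-ins for the real types.
type FileInfo struct{ Name string }

type folderRunner interface {
	Update(deviceID string, files []FileInfo) error
}

type model struct {
	fmut          sync.RWMutex
	folderRunners map[string]folderRunner
}

// Index is called by the protocol layer when a peer sends an index update.
// Returning an error signals that the message couldn't be handled; the
// protocol layer then tears down the connection, and the peer reconnects
// a few moments later.
func (m *model) Index(deviceID, folder string, files []FileInfo) error {
	m.fmut.RLock()
	runner, ok := m.folderRunners[folder]
	m.fmut.RUnlock()
	if !ok {
		// Folder is paused, stopped or unknown: report an error instead
		// of panicking.
		return fmt.Errorf("%s: folder not currently available", folder)
	}
	return runner.Update(deviceID, files)
}
```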
Tested manually by running with STRECHECKDBEVERY=0 on a nontrivially
sized setup. This panics reliably before this patch, but just causes a
disconnect/reconnect now.
We incorrectly gave a too-small buffer to lz4.Compress, causing it to allocate a new buffer in some cases (when the data actually becomes larger when compressed). That newly allocated buffer then caused a panic when it was passed to the buffer pool.
This ensures a buffer that is large enough, and adds tripwires closer to
the source in case this ever pops up again. There is a test that
exercises the issue.
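For illustration, a sketch of the buffer sizing with a tripwire; the compress and pool callbacks here are placeholders, and the bound is the LZ4 reference implementation's worst-case formula rather than any particular library call:

```go
package protocol

import "fmt"

// compressBound returns the worst-case size of an LZ4 block for
// incompressible input, per the reference LZ4_COMPRESSBOUND macro.
func compressBound(srcLen int) int {
	return srcLen + srcLen/255 + 16
}

// compressToPool is a sketch: compressFn stands in for the actual lz4 call
// and get for taking a buffer from the buffer pool. The destination buffer
// is sized for the worst case, and a tripwire catches the case where the
// compressor still allocated its own buffer, which must never be returned
// to the pool.
func compressToPool(src []byte, compressFn func(dst, src []byte) ([]byte, error), get func(size int) []byte) ([]byte, error) {
	dst := get(compressBound(len(src)))
	res, err := compressFn(dst, src)
	if err != nil {
		return nil, err
	}
	if cap(res) > cap(dst) {
		return nil, fmt.Errorf("bug: compression allocated a new buffer (%d > %d)", cap(res), cap(dst))
	}
	return res, nil
}
```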
* lib/protocol: Wait for reader/writer loops on close (fixes #4170; see the sketch after this list)
* waitgroup
* lib/model: Don't hold lock while closing connection
* fix comments
* review (lock once, func argument) and naming
* lib/model: Send cluster config before releasing pmut
* reshuffle
* add model.connReady to track cluster-config status
* Corrected comments/strings
* do it in protocol
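To illustrate the first point, a heavily reduced sketch of waiting for the reader/writer loops on close (the real rawConnection has much more going on):

```go
package protocol

import "sync"

// rawConnection here is a stripped-down stand-in: the reader and writer
// loops are tracked by a WaitGroup, and Close does not return until both
// loops have exited.
type rawConnection struct {
	closed    chan struct{}
	closeOnce sync.Once
	loopWG    sync.WaitGroup
}

func newRawConnection() *rawConnection {
	return &rawConnection{closed: make(chan struct{})}
}

func (c *rawConnection) Start() {
	c.loopWG.Add(2)
	go func() {
		defer c.loopWG.Done()
		c.readerLoop()
	}()
	go func() {
		defer c.loopWG.Done()
		c.writerLoop()
	}()
}

func (c *rawConnection) Close() {
	c.closeOnce.Do(func() {
		close(c.closed) // tell the loops to stop
		c.loopWG.Wait() // wait for them to actually exit
	})
}

// Placeholders: the real loops read and write messages until closed.
func (c *rawConnection) readerLoop() { <-c.closed }
func (c *rawConnection) writerLoop() { <-c.closed }
```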
This constructs the map of hashes of zero blocks from constants instead
of calculating it at startup time. A new test verifies that the map is
correct.
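The verification looks roughly like this; sha256OfEmptyBlock stands in for the precomputed map, with the actual constant values elided:

```go
package protocol

import (
	"crypto/sha256"
	"testing"
)

// Stand-in for the precomputed map of zero-block hashes; the constant hash
// values are elided here.
var sha256OfEmptyBlock = map[int][sha256.Size]byte{
	// 128 << 10: {0x...}, and so on for each supported block size.
}

// TestSha256OfEmptyBlock recomputes the hash of a zero block of each size
// and compares it to the constant in the map.
func TestSha256OfEmptyBlock(t *testing.T) {
	for blockSize, expected := range sha256OfEmptyBlock {
		actual := sha256.Sum256(make([]byte, blockSize))
		if actual != expected {
			t.Errorf("incorrect zero block hash for block size %d", blockSize)
		}
	}
}
```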
This avoids waiting for the next ping and its timeout before the connection is actually closed, both by notifying the peer of the disconnect and by immediately closing the local end of the connection after that. As a nice side effect, info level logging about dropped connections now has the actual reason in it, not a generic timeout error which looks like a real problem with the connection.
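Schematically, with an illustrative message type rather than the actual wire format:

```go
package protocol

import "net"

// closeMessage is an illustrative stand-in for the message that carries the
// disconnect reason to the peer.
type closeMessage struct {
	Reason string
}

// closeWithReason tells the peer why we are disconnecting, then closes the
// local end immediately instead of waiting for the next ping to time out.
func closeWithReason(conn net.Conn, send func(msg interface{}) error, reason error) error {
	// Best effort: the remote side can now log the actual reason rather
	// than a generic timeout error.
	_ = send(closeMessage{Reason: reason.Error()})
	// Close our end right away; the connection is gone as far as we are
	// concerned.
	return conn.Close()
}
```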
Adds a receive only folder type that does not send changes, and where the user can optionally revert local changes. Also changes some of the icons to make the three folder types distinguishable.
We have the invalid bit to indicate that a file isn't good. That's enough for remote devices. For ourselves, it would be good to know sometimes why the file isn't good - because it's an unsupported type, because it matches an ignore pattern, or because we detected the data is bad and we need to rescan it.
Or, and this is the main future reason for the PR, because it's a change detected on a receive only device. We will want something like the invalid flag for those changes, but marking them as invalid today means the scanner will rehash them. Hence something more fine grained is required.
This introduces a LocalFlags field on the FileInfo where we can stash things that we care about locally. For example,
FlagLocalUnsupported = 1 << 0 // The kind is unsupported, e.g. symlinks on Windows
FlagLocalIgnored = 1 << 1 // Matches local ignore patterns
FlagLocalMustRescan = 1 << 2 // Doesn't match content on disk, must be rechecked fully
The LocalFlags field isn't sent over the wire; instead, the Invalid attribute is calculated based on the flags at index sending time. It's on the FileInfo anyway because that's what we serialize to the database and so on.
After this, the actual Invalid flag should only be considered when building the global state and figuring out availability for remote devices. It is not used for local file index entries.
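Schematically, using the flag values above but with simplified field and method names (the real FileInfo differs):

```go
package protocol

// Local flags, stored on the FileInfo but never sent over the wire as-is.
const (
	FlagLocalUnsupported = 1 << 0 // The kind is unsupported, e.g. symlinks on Windows
	FlagLocalIgnored     = 1 << 1 // Matches local ignore patterns
	FlagLocalMustRescan  = 1 << 2 // Doesn't match content on disk, must be rechecked fully
)

// fileInfo is a simplified stand-in for the real FileInfo.
type fileInfo struct {
	LocalFlags uint32
	RawInvalid bool // the wire-level Invalid attribute
}

// IsInvalid is true if the file is invalid for any reason, local or not.
func (f fileInfo) IsInvalid() bool {
	return f.RawInvalid || f.LocalFlags&(FlagLocalUnsupported|FlagLocalIgnored|FlagLocalMustRescan) != 0
}

// prepareForWire collapses the local flags into the Invalid bit at index
// sending time; the local flags themselves stay local.
func prepareForWire(f fileInfo) fileInfo {
	f.RawInvalid = f.IsInvalid()
	f.LocalFlags = 0
	return f
}
```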
So there were some issues here. The main problem was that
model.Close(deviceID) was overloaded to mean both "the connection was
closed by the protocol layer" and "I want to close this connection". That
meant it could get called twice - once *to* close the connection and then
once more when the connection *was* closed.
After this refactor there is instead a Closed(conn) method that is the
callback. I didn't need to change the parameter in the end, but I think
it's clearer what it means when it takes the connection that was closed
instead of a device ID. To close a connection, the new close(deviceID)
method is used instead, which only closes the underlying connection and
leaves the cleanup to the Closed() callback.
I also changed how we do connection switching. Instead of the connection
service calling close and then adding the connection, it just adds the
new connection. The model knows that it already has a connection and
makes sure to close and clean out that one before adding the new
connection.
To make sure to sequence this properly I added a new map of channels
that get created on connection add and closed by Closed(), so that
AddConnection() can do the close and wait for the cleanup to happen
before proceeding.
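A reduced sketch of the sequencing; the types and names here are stand-ins for the real model and connection:

```go
package model

import "sync"

// connection is a minimal stand-in for the protocol connection.
type connection interface {
	ID() string
	Close() error
}

type model struct {
	pmut   sync.Mutex
	conn   map[string]connection    // device ID -> active connection
	closed map[string]chan struct{} // device ID -> closed when cleanup is done
}

// AddConnection replaces any existing connection for the device, waiting
// for the old one's cleanup (via Closed) before registering the new one.
func (m *model) AddConnection(c connection) {
	deviceID := c.ID()

	m.pmut.Lock()
	if old, ok := m.conn[deviceID]; ok {
		closed := m.closed[deviceID]
		m.pmut.Unlock()
		old.Close() // leads to the Closed() callback from the protocol layer
		<-closed    // wait for Closed() to finish cleaning up
		m.pmut.Lock()
	}
	m.conn[deviceID] = c
	m.closed[deviceID] = make(chan struct{})
	m.pmut.Unlock()
}

// Closed is the callback from the protocol layer when a connection has gone
// away, for whatever reason. It cleans up and signals anyone waiting.
func (m *model) Closed(c connection) {
	deviceID := c.ID()

	m.pmut.Lock()
	if m.conn[deviceID] == c {
		delete(m.conn, deviceID)
		closed := m.closed[deviceID]
		delete(m.closed, deviceID)
		close(closed)
	}
	m.pmut.Unlock()
}
```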
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3490