This adds the BlocksHash field from the FileInfo to our API output. It
can be useful for debugging or for external tools. I'm intentionally
leaving it as an opaque base64 string because no meaning should be
derived from it: it's just a string.
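As a rough illustration (not the actual code), exposing it could look
something like the sketch below; the struct and field names are made
up, and encoding/json renders a []byte as a base64 string by itself:

    // Illustrative only: an API-facing file info carrying the opaque hash.
    type apiFileInfo struct {
        Name       string `json:"name"`
        Size       int64  `json:"size"`
        BlocksHash []byte `json:"blocksHash"` // marshalled as base64
    }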
* lib/connections: Cache isLAN decision for later external access.
The check of whether a remote device's address is on a local network
currently happens when handling the Hello message, in order to
configure the rate limiters. Save the result on the ConnectionInfo and
pass it out as part of the model's ConnectionInfo struct in
ConnectionStats().
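A minimal sketch of the idea, with assumed rather than real names:
decide once while handling Hello, store the flag, and read it back
later instead of re-deriving it from the address.

    // Sketch only; field and method names are assumptions.
    type connectionInfo struct {
        address string
        isLocal bool // set while handling Hello, read by ConnectionStats()
    }

    func (c connectionInfo) IsLocal() bool { return c.isLocal }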
* gui: Use provided connection attribute to distinguish LAN / WAN.
Replace the dumb IP address check, which didn't catch common cases
and could even contradict what the backend decided. That could be
confusing if the GUI said WAN while the limiter was not actually
applied because the backend considered it a LAN connection.
Add strings for QUIC and relay connections to also differentiate
between LAN and WAN.
* gui: Redefine reception level icons for all connection types.
Move the mapping to the JS code, as it is much easier to handle
multiple switch cases by fall-through there.
QUIC is no longer regarded as inferior to TCP. LAN versus WAN makes
the difference between levels 4 / 3 and 2 / 1:
{TCP,QUIC} LAN --> {TCP,QUIC} WAN --> Relay LAN --> Relay WAN -->
Disconnected.
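For illustration, the mapping boils down to something like the
following; the real implementation is the fall-through switch in the
GUI's JavaScript, and the Go sketch and names here are just for
clarity:

    // Higher level means a better reception icon; 0 is disconnected.
    func receptionLevel(connType string, isLAN bool) int {
        switch connType {
        case "tcp", "quic":
            if isLAN {
                return 4
            }
            return 3
        case "relay":
            if isLAN {
                return 2
            }
            return 1
        default:
            return 0 // disconnected / unknown
        }
    }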
The previous debug output didn't really give enough information to
show what was happening, while it also printed full block lists, which
are enormously verbose. Now it consistently prints 1. what it sees on
disk, 2. what it got from CurrentFile (without blocks), and 3. the
action taken on that file.
This adds support for syncing extended attributes on supported
filesystems on Linux, macOS, FreeBSD and NetBSD. Windows is currently
excluded because the APIs seem onerous and annoying and frankly the
use cases seem few and far between. On Unixes this also covers ACLs,
as those are stored as extended attributes.
Similar to ownership syncing, this is optional and opt-in, with two
settings controlling the main behavior: one to "sync" xattrs (read &
write) and another to "scan" xattrs (only read them so other devices
can "sync" them, but not apply any locally).
Co-authored-by: Tomasz Wilczyński <twilczynski@naver.com>
This replaces old-style errors.Wrap with modern fmt.Errorf and removes
the (direct) dependency on github.com/pkg/errors. A couple of cases are
adjusted by hand as previously errors.Wrap(nil, ...) would return nil,
which is not what fmt.Errorf does.
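The nil case is the one that needed hand adjustment; a tiny
illustration (the helper name is made up):

    import "fmt"

    // errors.Wrap(nil, "context") returned nil; fmt.Errorf never does,
    // so the nil check must be explicit where call sites relied on that.
    func wrapIfErr(err error) error {
        if err == nil {
            return nil
        }
        return fmt.Errorf("context: %w", err)
    }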
all: Add package runtimeos for runtime.GOOS comparisons
I grew tired of hand-written string comparisons. This adds generated
constants for the GOOS values, and predefined Is$OS constants that can
be iffed on. In a couple of places I rewrote trivial switches to ifs,
and added Illumos where we checked for Solaris (because they are
effectively the same, and if we're going to target one of them that
would be Illumos...).
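Roughly what usage looks like, with assumed names along the lines
described rather than the exact generated identifiers:

    import "runtime"

    const Windows = "windows" // one of the generated GOOS constants

    var IsWindows = runtime.GOOS == Windows // one of the Is$OS values

    func defaultShell() string {
        if IsWindows {
            return "cmd.exe"
        }
        return "/bin/sh"
    }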
This adds support for syncing ownership on Unixes and on Windows. The
scanner always picks up ownership information, but it is not applied
unless the new folder option "Sync Ownership" is set.
Ownership data is stored in a new FileInfo field called "platform data". This
is intended to hold further platform-specific data in the future
(specifically, extended attributes), which is why the whole design is a
bit overkill for just ownership.
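The shape is roughly as sketched below; the real FileInfo is protobuf
generated, and these names are assumptions rather than the actual
schema:

    // Per-file platform data; nil members mean nothing was scanned.
    type platformData struct {
        Unix    *unixOwnership    // uid/gid and owner/group names on Unixes
        Windows *windowsOwnership // owner name on Windows
        // future platform-specific data (e.g. xattrs) would go here
    }

    type unixOwnership struct {
        UID, GID             int
        OwnerName, GroupName string
    }

    type windowsOwnership struct {
        OwnerName string
    }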
Having a separate mutex just for the three or four instructions needed
to fetch and increment nextID means the locking overhead exceeds the
cost of the operation itself. nextID is now handled inside the
critical section for
awaiting instead, while the more expensive channel creation has been
moved outside it.
This is mostly a simplification, though it may have minor performance
benefits in some situations. The single-threaded sender benchmark shows
no significant difference:
name                 old speed      new speed      delta
RequestsRawTCP-8     55.3MB/s ± 7%  56.6MB/s ± 6%  ~  (p=0.190 n=10+10)
RequestsTLSoTCP-8    20.5MB/s ±20%  20.8MB/s ± 8%  ~  (p=0.604 n=10+9)
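A minimal sketch of the pattern with assumed names and simplified
types: the counter is bumped under the existing lock, while the
channel is allocated before taking it.

    import "sync"

    type sender struct {
        mut      sync.Mutex
        nextID   int
        awaiting map[int]chan struct{}
    }

    func (s *sender) register() (int, chan struct{}) {
        rc := make(chan struct{}) // comparatively expensive; done unlocked
        s.mut.Lock()
        defer s.mut.Unlock()
        s.nextID++
        id := s.nextID
        s.awaiting[id] = rc
        return id, rc
    }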
* lib/protocol: Require at least 3.125% savings from compression
The new lz4 library doesn't need its output buffer to be the maximum
size, unlike the old one (which would allocate if it weren't). It can
take a smaller buffer and will report whether the compressed data fits
inside it (with a small chance of a false negative). Use that property
to our advantage by requiring the compressed data to be at most
n - n/32 = 0.96875*n bytes long for n input bytes.
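The size requirement itself is just the following (sketch; the buffer
handling around the actual lz4 call is omitted):

    // Output must fit in n - n/32 bytes, i.e. save at least 1/32 = 3.125%.
    func maxCompressedLen(n int) int {
        return n - n/32 // 0.96875 * n, using integer division
    }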
* lib/protocol: Remove unused receivers
To make DeepSource happy.
* lib/protocol: Micro-optimize lz4Compress
Only write the length if compression was successful. This is a memory
write, so the compiler can't reorder it.
Only check the return value of lz4.CompressBlock. Length-zero inputs
are always expanded by LZ4 compression (the library documents this),
so the check on len(src) isn't needed.
* lib/model: Remove bogus fields from connections API endpoint.
Switch the returned data type for the /rest/system/connections element
"total" to use only the Statistics struct. The other fields of the
ConnectionInfo struct are not populated and therefore misleading.
* Lowercase JSON field names.
* lib/model: Get rid of ConnectionInfo.MarshalJSON().
It was missing the StartedAt field from the embedded Statistics
struct. Lowercasing the JSON attribute names can be done just as
easily with annotations.
* lib/model: Remove bogus startedAt field from totals.
Instead of using the Statistics type with one field empty, just switch
to a free-form map with the three needed fields.
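Sketch of what the totals element then amounts to; the field names
are assumptions based on the Statistics struct, not copied from the
diff:

    import "time"

    func connectionTotals(inBytes, outBytes int64) map[string]interface{} {
        return map[string]interface{}{
            "at":            time.Now().Truncate(time.Second),
            "inBytesTotal":  inBytes,
            "outBytesTotal": outBytes,
        }
    }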
This truncates times meant for API consumption to second precision,
where fractions won't typically matter or add any value. Exceptions
are timestamps on logs and events, and of course I'm not touching
things like file metadata.
I'm not 100% certain this is an exhaustive change, but it covers the
things I found by grepping and following the breadcrumbs from
lib/api...
I also considered general-but-ugly solutions, like having the API
serializer itself do reflection magic or even regexps on returned
objects, but decided against it because aurgh...
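Mechanically this is presumably just a time.Truncate (or equivalent)
where values are handed to the API layer; for reference, truncation to
a second looks like this:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.Date(2022, 7, 28, 12, 34, 56, 789000000, time.UTC)
        fmt.Println(t.Truncate(time.Second)) // 2022-07-28 12:34:56 +0000 UTC
    }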
This adds a statistic to track the last connection duration per device.
It isn't used for much in this PR, but it's available for #7223 to use
in deciding how to order device connection attempts (deprioritizing
devices that just dropped our connection the last time).