* lib/protocol: Require at least 3.125% savings from compression
The new lz4 library doesn't need its output buffer to be the maximum
size, unlike the old one (which would allocate if it weren't). It can
take a smaller buffer and will report whether the compressed data fits
inside it (with a small chance of a false negative). Use that property
to our advantage by requiring compressed data to be at most
n - n/32 = 0.96875*n bytes long for n input bytes, i.e. at least
1/32 = 3.125% savings.
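A minimal sketch of that usage, assuming github.com/pierrec/lz4/v4 (the
wrapper function here is illustrative, not Syncthing's actual code):

```go
import "github.com/pierrec/lz4/v4"

// compressIfWorthwhile hands CompressBlock a destination that is n/32
// bytes smaller than the n-byte input, so any output that fits has
// saved at least 1/32 = 3.125%.
func compressIfWorthwhile(src []byte) ([]byte, bool) {
	dst := make([]byte, len(src)-len(src)/32) // n - n/32 = .96875*n bytes
	var c lz4.Compressor
	n, err := c.CompressBlock(src, dst)
	if err != nil || n == 0 {
		// Didn't fit: savings under 3.125%, or one of the rare false
		// negatives; the caller sends the data uncompressed instead.
		return nil, false
	}
	return dst[:n], true
}
```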
* lib/protocol: Remove unused receivers
To make DeepSource happy.
* lib/protocol: Micro-optimize lz4Compress
Only write the length prefix if compression was successful. The length
is a memory write, which the compiler can't reorder past the compression
call or elide, so doing it unconditionally would also cost on the
failure path.
Only check the return value of lz4.CompressBlock. Length-zero inputs
are always expanded by LZ4 compression (the library documents this),
so the check on len(src) isn't needed.
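A hedged sketch of the resulting shape (the identifiers and the 4-byte
length-prefix layout are illustrative, not the verbatim source):

```go
import (
	"encoding/binary"
	"errors"

	"github.com/pierrec/lz4/v4"
)

var errNotCompressible = errors.New("not compressible")

// lz4Compress puts a 4-byte uncompressed-length prefix and the compressed
// block into buf, returning the total number of bytes used.
func lz4Compress(src, buf []byte) (int, error) {
	var c lz4.Compressor
	n, err := c.CompressBlock(src, buf[4:])
	if err != nil || n == 0 {
		// This also catches the empty input, which LZ4 always expands,
		// so there is no separate len(src) check.
		return -1, errNotCompressible
	}
	// Done only on success: the store can't be reordered above the call
	// or elided by the compiler, so doing it eagerly would cost on the
	// failure path as well.
	binary.BigEndian.PutUint32(buf, uint32(len(src)))
	return n + 4, nil
}
```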
* lib/model: Remove bogus fields from connections API endpoint.
Switch the data type returned for the "total" element of
/rest/system/connections to just the Statistics struct. The other fields
of the ConnectionInfo struct are not populated there and are misleading.
* Lowercase JSON field names.
* lib/model: Get rid of ConnectionInfo.MarshalJSON().
It was missing the StartedAt field from the embedded Statistics struct.
Lowercasing the JSON attribute names can be done just as easily with
struct tag annotations.
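A sketch of the tag-based approach, with an assumed field set:

```go
import "time"

// Lowercase JSON names come from struct tags, so encoding/json handles
// the embedded struct itself and a field like StartedAt can no longer
// be silently dropped by a hand-written MarshalJSON.
type Statistics struct {
	At            time.Time `json:"at"`
	InBytesTotal  int64     `json:"inBytesTotal"`
	OutBytesTotal int64     `json:"outBytesTotal"`
	StartedAt     time.Time `json:"startedAt"`
}
```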
* lib/model: Remove bogus startedAt field from totals.
Instead of using the Statistics type with one field empty, just switch
to a free-form map with the three needed fields.
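Roughly the shape of the result (the helper and field names are
assumptions, not the PR's actual code):

```go
import "time"

// connectionTotals returns a free-form map carrying only the three
// fields that are actually populated for the totals entry.
func connectionTotals(in, out int64) map[string]interface{} {
	return map[string]interface{}{
		"at":            time.Now().Truncate(time.Second),
		"inBytesTotal":  in,
		"outBytesTotal": out,
	}
}
```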
This truncates times meant for API consumption to second precision,
where fractions typically neither matter nor add value. The exceptions
are timestamps on logs and events, and of course I'm not touching things
like file metadata.
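The rule itself, as a sketch (apiTime is not a real helper from the
codebase):

```go
import "time"

// apiTime truncates a timestamp to second precision for times that only
// exist for API consumption.
func apiTime(t time.Time) time.Time {
	return t.Truncate(time.Second)
}
```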
I'm not 100% certain this is an exhaustive change, but it's the things I
found by grepping and following the breadcrumbs from lib/api...
I also considered general-but-ugly solutions, like having the API
serializer itself do reflection magic or even regexps on returned
objects, but decided against it because aurgh...
This adds a statistic to track the last connection duration per device.
It isn't used for much in this PR, but it's available for #7223 to use
in deciding how to order device connection attempts (deprioritizing
devices that just dropped our connection the last time).
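A hypothetical sketch of such a statistic (the type and field names are
assumptions, not the PR's code):

```go
import "time"

// deviceStats records, per device, how long the previous connection
// lasted, so a dialer can later deprioritize devices that dropped us
// quickly.
type deviceStats struct {
	lastConnDuration time.Duration
}

// connectionClosed is called when the device disconnects.
func (s *deviceStats) connectionClosed(connectedAt time.Time) {
	s.lastConnDuration = time.Since(connectedAt)
}
```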
This is shorter, skips two allocations, makes the function inlineable,
and is safer, since the compiler now checks that
DeviceIDLength == sha256.Size.
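In minimal form, the pattern looks like this (a sketch, not the verbatim
source):

```go
import "crypto/sha256"

const DeviceIDLength = 32

type DeviceID [DeviceIDLength]byte

// NewDeviceID converts the [sha256.Size]byte digest directly; the
// conversion only compiles if DeviceIDLength == sha256.Size, and there
// are no intermediate slices to allocate.
func NewDeviceID(rawCert []byte) DeviceID {
	return DeviceID(sha256.Sum256(rawCert))
}
```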
crypto/rand output is cryptographically secure, per the Go library
documentation's promise. That, rather than statistical strength
(= passing randomness tests), is the property that Syncthing needs.
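For illustration, the kind of use this guarantees (a sketch; randomBytes
is not an actual function here):

```go
import (
	"crypto/rand"
	"io"
)

// randomBytes fills a buffer from crypto/rand.Reader, which the Go
// standard library documents as a cryptographically secure source; that
// documented guarantee is the property relied on.
func randomBytes(n int) ([]byte, error) {
	buf := make([]byte, n)
	if _, err := io.ReadFull(rand.Reader, buf); err != nil {
		return nil, err
	}
	return buf, nil
}
```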