This adds a new nanoseconds field to FileInfo, populates it during
scans, and sets the non-truncated time in Chtimes calls.
The actual file modification time is defined as modified_s seconds +
modified_ns nanoseconds. It's expected that the modified_ns field is <
1e9 (that is, all whole seconds should go in the modified_s field), but
this is not enforced. Given that it's an int32, the modified_ns field
can adjust the timestamp by roughly ±2.1 seconds.
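As a rough sketch, assuming simplified field names rather than the
actual protobuf-generated struct, the effective modification time is
reconstructed like this:

    package main

    import (
        "fmt"
        "time"
    )

    // fileInfo is a stand-in for the real FileInfo; only the two
    // timestamp fields are shown.
    type fileInfo struct {
        ModifiedS  int64 // whole seconds since the Unix epoch
        ModifiedNs int32 // sub-second remainder, expected to be < 1e9
    }

    // modTime combines the two fields into the actual modification time.
    func (f fileInfo) modTime() time.Time {
        return time.Unix(f.ModifiedS, int64(f.ModifiedNs))
    }

    func main() {
        fi := fileInfo{ModifiedS: 1469000000, ModifiedNs: 123456789}
        fmt.Println(fi.modTime().UTC())
    }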
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3431
Otherwise, if the file grows during scanning, the block list will be out
of sync with the stated size and things get confused. We could fix up
the size afterwards based on the block list, but then we might see other
inconsistencies, as the mtime should have changed to reflect the new
size, etc. Better to stick with the original state and let the next scan
pick up the change.
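One way to sketch that behaviour, with hypothetical helper names and
only the standard library, is to cap the read at the size recorded when
the scan started:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "os"
    )

    // hashUpToStatedSize hashes at most statedSize bytes of the file, so
    // a file that grows during the scan can't make the block list
    // disagree with the size we recorded; the next scan picks up the
    // appended data instead.
    func hashUpToStatedSize(path string, statedSize int64, h io.Writer) error {
        fd, err := os.Open(path)
        if err != nil {
            return err
        }
        defer fd.Close()
        _, err = io.Copy(h, io.LimitReader(fd, statedSize))
        return err
    }

    func main() {
        h := sha256.New()
        if err := hashUpToStatedSize("example.txt", 1024, h); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%x\n", h.Sum(nil))
    }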
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3442
This changes the BEP protocol to use protocol buffer serialization
instead of XDR, and therefore also the database format. The local
discovery protocol is also updated to be protocol buffer format.
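For illustration only, using a well-known wrapper type instead of the
real BEP messages (which are generated from the project's .proto files),
protocol buffer serialization in Go looks roughly like this:

    package main

    import (
        "fmt"

        "google.golang.org/protobuf/proto"
        "google.golang.org/protobuf/types/known/wrapperspb"
    )

    func main() {
        // Marshal a protobuf message to its wire format and back again.
        // A stand-in message type is used here.
        msg := wrapperspb.String("hello")
        data, err := proto.Marshal(msg)
        if err != nil {
            panic(err)
        }
        var out wrapperspb.StringValue
        if err := proto.Unmarshal(data, &out); err != nil {
            panic(err)
        }
        fmt.Println(out.GetValue())
    }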
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3276
LGTM: AudriusButkevicius
The old usage pattern was to create a Walker with a bunch of attributes,
then call Walk() on it and nothing else. This extracts the attributes
into a Config struct and exposes a Walk(cfg Config) method instead, as
there was no reason to expose the state-holding walker type.
This also creates a few no-op implementations of the necessary
interfaces so that we can skip nil checks and simplify things here and
there.
Definitely look at this diff without whitespace.
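A sketch of the shape of the change, with made-up field and type names
rather than the real scanner package:

    package scanner

    // Config collects what used to be attributes on the Walker type;
    // the fields shown here are illustrative only.
    type Config struct {
        Dir       string
        BlockSize int
        Matcher   Matcher // ignore matcher; a no-op value avoids nil checks
    }

    // Matcher decides whether a path should be skipped.
    type Matcher interface {
        Match(path string) bool
    }

    // noopMatcher skips nothing, so callers never have to pass nil and
    // Walk never has to check for it.
    type noopMatcher struct{}

    func (noopMatcher) Match(string) bool { return false }

    // Walk replaces the old "create a Walker, call Walk() on it"
    // pattern; the state-holding type stays unexported.
    func Walk(cfg Config) ([]string, error) {
        if cfg.Matcher == nil {
            cfg.Matcher = noopMatcher{}
        }
        w := walker{cfg: cfg}
        return w.walk()
    }

    type walker struct{ cfg Config }

    func (w walker) walk() ([]string, error) {
        // Actual directory traversal elided.
        return nil, nil
    }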
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3060
By using copyBuffer we avoid a buffer allocation for each block we hash,
and by allocating space for the hashes up front we get one large backing
array instead of a small one for each block. For a 17 MiB file this
makes quite a difference in the amount of memory allocated:
benchmark              old ns/op     new ns/op     delta
BenchmarkHashFile-8    102045110     100459158     -1.55%

benchmark              old allocs    new allocs    delta
BenchmarkHashFile-8    415           144           -65.30%

benchmark              old bytes     new bytes     delta
BenchmarkHashFile-8    4504296       48104         -98.93%
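A sketch of the two ideas, assuming the standard library's io.CopyBuffer
and SHA-256 rather than the exact hashing code:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "strings"
    )

    // hashBlocks hashes r in blockSize chunks, reusing one copy buffer
    // for every block and one backing array for all the resulting
    // hashes.
    func hashBlocks(r io.Reader, size int64, blockSize int) ([][]byte, error) {
        hf := sha256.New()
        buf := make([]byte, blockSize) // reused by io.CopyBuffer for every block

        // One large backing array up front instead of a small
        // allocation per block hash.
        numBlocks := int(size/int64(blockSize)) + 1
        hashes := make([]byte, 0, numBlocks*sha256.Size)

        var out [][]byte
        for {
            lr := io.LimitReader(r, int64(blockSize))
            n, err := io.CopyBuffer(hf, lr, buf)
            if err != nil {
                return nil, err
            }
            if n == 0 {
                break
            }
            hashes = hf.Sum(hashes)
            out = append(out, hashes[len(hashes)-sha256.Size:])
            hf.Reset()
        }
        return out, nil
    }

    func main() {
        data := strings.NewReader(strings.Repeat("x", 300000))
        hs, err := hashBlocks(data, 300000, 128<<10)
        if err != nil {
            panic(err)
        }
        fmt.Println(len(hs), "blocks hashed")
    }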
This implements a new debug/trace infrastructure based on a slightly
hacked up logger. Instead of the traditional "if debug { ... }" I've
rewritten the logger to have no-op Debugln and Debugf, unless debugging
has been enabled for a given "facility". The "facility" is just a
string, typically a package name.
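A minimal sketch of such a facility-gated logger; names are simplified
compared to the real logger package, where each package gets its own
facility-bound logger instead of passing the facility on every call:

    package logger

    import (
        "log"
        "sync"
    )

    // Logger gates Debugln/Debugf per named facility; when a facility
    // is disabled the calls are cheap no-ops.
    type Logger struct {
        mut     sync.Mutex
        enabled map[string]bool
    }

    func New() *Logger {
        return &Logger{enabled: make(map[string]bool)}
    }

    // SetDebug toggles debugging for a facility at runtime.
    func (l *Logger) SetDebug(facility string, enable bool) {
        l.mut.Lock()
        l.enabled[facility] = enable
        l.mut.Unlock()
    }

    // ShouldDebug reports whether a facility currently has debugging on.
    func (l *Logger) ShouldDebug(facility string) bool {
        l.mut.Lock()
        defer l.mut.Unlock()
        return l.enabled[facility]
    }

    func (l *Logger) Debugln(facility string, vals ...interface{}) {
        if !l.ShouldDebug(facility) {
            return // no-op unless the facility is enabled
        }
        log.Println(append([]interface{}{facility + ":"}, vals...)...)
    }

    func (l *Logger) Debugf(facility, format string, vals ...interface{}) {
        if !l.ShouldDebug(facility) {
            return
        }
        log.Printf(facility+": "+format, vals...)
    }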
This will be slightly slower than before, but not by much, as it's
mostly a function call that returns immediately. For the cases where it
matters (for example, when the Debugln takes a hex.Dump() of something
and isn't in a rarely hit "if err != nil" branch) there is an
l.ShouldDebug(facility) that is fast enough to be used like the old "if
debug".
The point of all this is that we can now toggle debugging for the
various packages on and off at runtime. There's a new endpoint,
/rest/system/debug, that can be POSTed a set of facilities to enable or
disable debugging for, or GET from to get a list of facilities with
descriptions and their current debug status.
Similarly, /rest/system/log?since=... grabs the latest log entries, up
to 250 of them (a hardcoded constant in main.go), plus the initial few.
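A sketch of exercising the two endpoints with the standard library HTTP
client; the query parameter names and the API key header are assumptions
to be checked against the actual REST handlers:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        const base = "http://localhost:8384"
        const apiKey = "abc123" // the GUI/REST API key

        // Enable debugging for two facilities, disable it for one.
        // The enable/disable parameter names are illustrative.
        req, _ := http.NewRequest("POST",
            base+"/rest/system/debug?enable=model,scanner&disable=upnp", nil)
        req.Header.Set("X-API-Key", apiKey)
        if _, err := http.DefaultClient.Do(req); err != nil {
            fmt.Println(err)
            return
        }

        // Fetch the most recent log entries.
        req, _ = http.NewRequest("GET", base+"/rest/system/log", nil)
        req.Header.Set("X-API-Key", apiKey)
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }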
Not implemented in this commit (but planned) is a simple debug GUI
available on /debug that shows the current log in an easily pasteable
format and has checkboxes to enable the various debug facilities.
The debug instructions to a user then become "visit this URL, check
these boxes, reproduce your problem, copy and paste the log". The actual
log viewer on the hypothetical /debug URL can poll regularly for new log
entries and thus bypass the 250-line limit.
The existing STTRACE=foo environment variable is still obeyed and just
sets the initial debug state of the system.