This changes the build script to build all the things in one `go build`
invocation, instead of one invocation per cmd. This is a lot faster
because more things get compiled concurrently. It's especially
a lot faster when things *don't* need to be rebuilt, presumably because
the dependency map and such only needs to be built once instead of once
per binary.
In order for this to work we need to be able to pass the same ldflags to
all the binaries. This means we can't set the program name with an
ldflag.
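A hypothetical sketch of what the change amounts to in build.go terms (function
and parameter names are assumptions, not the literal code): one `go build` run
over all target packages, with ldflags that are necessarily identical for every
binary.

```go
package build

import (
	"os"
	"os/exec"
)

// buildAll invokes "go build" once with every target package, instead of
// once per cmd. The ldflags are shared by all binaries, so nothing
// binary-specific (like the program name) can be baked in here.
func buildAll(goCmd string, pkgs []string, ldflags string) error {
	args := append([]string{"build", "-ldflags", ldflags}, pkgs...)
	cmd := exec.Command(goCmd, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}
```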
When it needs to rebuild everything (go clean -cache):

```
( ./old-build -gocmd go1.14.2 build all 2> /dev/null; ) 65.82s user 11.28s system 574% cpu 13.409 total
( ./new-build -gocmd go1.14.2 build all 2> /dev/null; ) 63.26s user 7.12s system 1220% cpu 5.766 total
```

On a subsequent run (nothing to build, just link the binaries):

```
( ./old-build -gocmd go1.14.2 build all 2> /dev/null; ) 26.58s user 7.53s system 582% cpu 5.853 total
( ./new-build -gocmd go1.14.2 build all 2> /dev/null; ) 18.66s user 2.45s system 1090% cpu 1.935 total
```
Successful LRU cache lookups modify the cache's recency list, so
RWMutex.RLock isn't enough protection.
Secondarily, multiple concurrent lookups with the same key should not
create separate rate limiters, so release the lock only once the
presence of the key in the cache has been ascertained.
Co-authored-by: greatroar <@>
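A minimal sketch of both points (type and method names are assumptions, not the
actual implementation): Get reorders the recency list, so even lookups write
shared state and need a full Mutex, and holding that Mutex across
check-and-insert means concurrent lookups of one key share a single limiter.

```go
package cache

import "sync"

type limiter struct{} // stand-in for a per-key rate limiter

type lruCache struct {
	mut    sync.Mutex // not RWMutex: lookups mutate the recency list
	recent []string   // recency list, updated on every successful lookup
	items  map[string]*limiter
}

func newLRUCache() *lruCache {
	return &lruCache{items: make(map[string]*limiter)}
}

func (c *lruCache) limiter(key string) *limiter {
	c.mut.Lock()
	defer c.mut.Unlock() // held until the key's presence is ascertained
	if lim, ok := c.items[key]; ok {
		c.touch(key) // successful lookup: update recency
		return lim
	}
	lim := &limiter{}
	c.items[key] = lim
	c.recent = append(c.recent, key)
	return lim
}

// touch moves key to the most recently used position.
func (c *lruCache) touch(key string) {
	for i, k := range c.recent {
		if k == key {
			c.recent = append(c.recent[:i], c.recent[i+1:]...)
			c.recent = append(c.recent, key)
			return
		}
	}
}
```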
We set the STRESTART environment variable when starting the inner
process after the first time, but this didn't persist when restarting
the monitor process. Now it does.
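A sketch of the fix (the structure and the variable's value are assumptions
based on the description above): the monitor re-execs itself with an
environment that includes STRESTART, so the marker survives monitor restarts
just as it already did for inner-process restarts.

```go
package monitor

import (
	"os"
	"os/exec"
)

func restartMonitor() error {
	env := os.Environ()
	env = append(env, "STRESTART=yes") // previously lost when the monitor restarted
	cmd := exec.Command(os.Args[0], os.Args[1:]...)
	cmd.Env = env
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Start()
}
```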
- In the few places where we wrap errors, use the new Go 1.13 "%w"
construction instead of %s or %v.
- Where we create errors with constant strings, consistently use
errors.New and not fmt.Errorf.
- Remove capitalization from errors in the few places where we had that.
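An illustration of the conventions above (example errors, not actual call sites
from the diff):

```go
package errs

import (
	"errors"
	"fmt"
)

// Constant string: errors.New, lower-case, no fmt.Errorf.
var errNotAvailable = errors.New("not available")

// Wrapping: the Go 1.13 %w verb instead of %s or %v, so callers can use
// errors.Is and errors.As on the result.
func open(name string) error {
	if err := doOpen(name); err != nil {
		return fmt.Errorf("opening %s: %w", name, err)
	}
	return nil
}

func doOpen(string) error { return errNotAvailable }
```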
I was working on indirecting version vectors, and that resulted in some
refactoring and improving the existing block indirection stuff. We may
or may not end up doing the version vector indirection, but I think
these changes are reasonable anyhow and will simplify the diff
significantly if we do go there. The main points are:
- A bunch of renaming to make the indirection and GC not about "blocks"
but about "indirection".
- Adding a cutoff so that we don't actually indirect for small block
lists. This gets us better performance when handling small files, as it
cuts out the indirection for a quite small loss in space efficiency.
- Being paranoid and always recalculating the hash on put. This costs
some CPU, but the consequences would be bad if a buggy or malicious
implementation silently substituted the block list by lying about the
hash. Both points are sketched after this list.
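A sketch of the cutoff and the paranoid hashing (names and the cutoff value are
assumptions, not the real code): small block lists stay inline in the FileInfo,
larger ones are stored indirectly under a hash key that we always recompute
ourselves.

```go
package db

import "crypto/sha256"

const indirectionCutoff = 70 // blocks; below this, don't indirect

func indirectKey(marshalledBlocks []byte, numBlocks int) (key []byte, indirect bool) {
	if numBlocks < indirectionCutoff {
		return nil, false // small file: skip indirection, keep the list inline
	}
	// Paranoia: recalculate the hash on put rather than trusting the
	// caller, so a buggy or malicious implementation can't silently
	// substitute the block list by lying about the hash.
	h := sha256.Sum256(marshalledBlocks)
	return h[:], true
}
```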
Also retain the interval over restarts by storing the last GC time in
the database. This is to make sure that GC eventually happens even if
the interval is configured to a long time (say, a month).
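A sketch of that persistence (the key name and kv interface are assumptions):
after a restart we wait only for the remainder of the configured interval,
rather than a full new interval.

```go
package db

import (
	"encoding/binary"
	"time"
)

type kv interface {
	get(key string) ([]byte, bool)
	put(key string, val []byte)
}

// nextGCDelay returns how long to wait before the next GC run, based on
// the last GC time stored in the database.
func nextGCDelay(db kv, interval time.Duration) time.Duration {
	bs, ok := db.get("lastGCTime")
	if !ok {
		return interval // never ran: wait one full interval
	}
	last := time.Unix(int64(binary.BigEndian.Uint64(bs)), 0)
	if wait := interval - time.Since(last); wait > 0 {
		return wait
	}
	return 0 // overdue: run GC now
}

// recordGCTime persists the time of a completed GC run.
func recordGCTime(db kv, t time.Time) {
	var bs [8]byte
	binary.BigEndian.PutUint64(bs[:], uint64(t.Unix()))
	db.put("lastGCTime", bs[:])
}
```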
* lib/db: Deduplicate block lists in database (fixes #5898)
This moves the block list in the database out from being just a field on
the FileInfo to being an object of its own. When putting a FileInfo we
marshal the block list separately and store it keyed by the sha256 of
the marshalled block list. When getting, if we are not doing a
"truncated" get, we do an extra read and unmarshal for the block list.
Old block lists are cleared out by a periodic GC sweep. The alternative
would be to use refcounting, but:
- There is a larger risk of getting that wrong and either dropping a
block list in error or keeping it around forever.
- It's tricky with our current database, as we don't have dirty reads.
This means that if we update two FileInfos with identical block lists in
the same transaction, we can't just do read/modify/write on the ref
counters, as we wouldn't see our own first update. See the previous
point about the risk of getting this wrong.
GC uses a bloom filter for keys to avoid heavy RAM usage. GC can't run
concurrently with FileInfo updates, so there is a new lock around those
operations in the lowlevel.
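A sketch of that approach (the filter size, hash scheme and wiring are
assumptions): mark every block-list hash referenced by a FileInfo in a bloom
filter, then delete unmarked block lists. A false positive merely lets a
garbage list survive until a later sweep; it can never delete a live one.

```go
package db

import (
	"hash/fnv"
	"sync"
)

var gcMut sync.Mutex // FileInfo updates and the GC sweep both take this

type bloom struct{ bits []uint64 }

func newBloom(words int) *bloom { return &bloom{bits: make([]uint64, words)} }

// pos maps a key and seed to a word index and bit mask.
func (b *bloom) pos(key []byte, seed byte) (int, uint64) {
	h := fnv.New64a()
	h.Write([]byte{seed})
	h.Write(key)
	v := h.Sum64() % uint64(len(b.bits)*64)
	return int(v / 64), 1 << (v % 64)
}

func (b *bloom) add(key []byte) {
	for seed := byte(0); seed < 3; seed++ {
		i, m := b.pos(key, seed)
		b.bits[i] |= m
	}
}

func (b *bloom) has(key []byte) bool {
	for seed := byte(0); seed < 3; seed++ {
		if i, m := b.pos(key, seed); b.bits[i]&m == 0 {
			return false
		}
	}
	return true
}

// gcBlockLists deletes stored block lists whose keys are (probably) not
// referenced by any FileInfo.
func gcBlockLists(referenced, stored [][]byte, del func([]byte)) {
	gcMut.Lock()
	defer gcMut.Unlock()
	f := newBloom(1 << 16) // 512 KiB of filter regardless of database size
	for _, h := range referenced {
		f.add(h)
	}
	for _, k := range stored {
		if !f.has(k) {
			del(k)
		}
	}
}
```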
The end result is a much more compact database, especially for setups
with many peers where files get duplicated many times.
These are per-key-class stats for a large database I'm currently working
with, under the current schema:
```
0x00: 9138161 items, 870876 KB keys + 7397482 KB data, 95 B + 809 B avg, 1637651 B max
0x01: 185656 items, 10388 KB keys + 1790909 KB data, 55 B + 9646 B avg, 924525 B max
0x02: 916890 items, 84795 KB keys + 3667 KB data, 92 B + 4 B avg, 192 B max
0x03: 384 items, 27 KB keys + 5 KB data, 72 B + 15 B avg, 87 B max
0x04: 1109 items, 17 KB keys + 17 KB data, 15 B + 15 B avg, 69 B max
0x06: 383 items, 3 KB keys + 0 KB data, 9 B + 2 B avg, 18 B max
0x07: 510 items, 4 KB keys + 12 KB data, 9 B + 24 B avg, 41 B max
0x08: 1349 items, 12 KB keys + 10 KB data, 9 B + 8 B avg, 17 B max
0x09: 194 items, 0 KB keys + 123 KB data, 5 B + 634 B avg, 11484 B max
0x0a: 3 items, 0 KB keys + 0 KB data, 14 B + 7 B avg, 30 B max
0x0b: 181836 items, 2363 KB keys + 10694 KB data, 13 B + 58 B avg, 173 B max
Total 10426475 items, 968490 KB keys + 9202925 KB data.
```
Note 7.4 GB of data in class 00, total size 9.2 GB. After running the
migration we get this instead:
```
0x00: 9138161 items, 870876 KB keys + 2611392 KB data, 95 B + 285 B avg, 4788 B max
0x01: 185656 items, 10388 KB keys + 1790909 KB data, 55 B + 9646 B avg, 924525 B max
0x02: 916890 items, 84795 KB keys + 3667 KB data, 92 B + 4 B avg, 192 B max
0x03: 384 items, 27 KB keys + 5 KB data, 72 B + 15 B avg, 87 B max
0x04: 1109 items, 17 KB keys + 17 KB data, 15 B + 15 B avg, 69 B max
0x06: 383 items, 3 KB keys + 0 KB data, 9 B + 2 B avg, 18 B max
0x07: 510 items, 4 KB keys + 12 KB data, 9 B + 24 B avg, 41 B max
0x09: 194 items, 0 KB keys + 123 KB data, 5 B + 634 B avg, 11484 B max
0x0a: 3 items, 0 KB keys + 0 KB data, 14 B + 17 B avg, 51 B max
0x0b: 181836 items, 2363 KB keys + 10694 KB data, 13 B + 58 B avg, 173 B max
0x0d: 44282 items, 1461 KB keys + 61081 KB data, 33 B + 1379 B avg, 1637399 B max
Total 10469408 items, 969939 KB keys + 4477905 KB data.
```
Class 00 is now down to 2.6 GB, with just 61 MB added in class 0d.
There will be some additional reads in some cases, which theoretically
hurts performance, but this will be more than compensated for by smaller
writes and better compaction.
On my own home setup, which has just three devices and a handful of
folders, the difference is smaller in absolute numbers of course, but
the result is still less than half the old size:
```
0x00: 297122 items, 20894 KB keys + 306860 KB data, 70 B + 1032 B avg, 103237 B max
0x01: 115299 items, 7738 KB keys + 17542 KB data, 67 B + 152 B avg, 419 B max
0x02: 1430537 items, 121223 KB keys + 5722 KB data, 84 B + 4 B avg, 253 B max
...
Total 1947412 items, 151268 KB keys + 337485 KB data.
```
to:
```
0x00: 297122 items, 20894 KB keys + 37038 KB data, 70 B + 124 B avg, 520 B max
0x01: 115299 items, 7738 KB keys + 17542 KB data, 67 B + 152 B avg, 419 B max
0x02: 1430537 items, 121223 KB keys + 5722 KB data, 84 B + 4 B avg, 253 B max
...
0x0d: 18041 items, 595 KB keys + 71964 KB data, 33 B + 3988 B avg, 101109 B max
Total 1965447 items, 151863 KB keys + 139628 KB data.
```
This PR does two things, because one led to the other:
- Move the leveldb specific stuff into a small "backend" package that
defines a backend interface and the leveldb implementation. This
potentially allows switching the db implementation to another KV store
in the future, should we wish to do so.
- Add proper error handling all along the way. The db and backend
packages are now errcheck clean. However, I drew the line at modifying
the FileSet API in order to keep this manageable and not continue
refactoring all of the rest of Syncthing. As such, the FileSet methods
still panic on database errors, except for the "database is closed"
error which is instead handled by silently returning as quickly as
possible, with the assumption that we're anyway "on the way out".
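A sketch of the shape of that interface (the exact method set is an
assumption; the real definitions live in the new backend package). lib/db
programs against these interfaces, and leveldb is just one implementation
behind them:

```go
package backend

type Backend interface {
	Get(key []byte) ([]byte, error)
	NewReadTransaction() (ReadTransaction, error)
	NewWriteTransaction() (WriteTransaction, error)
	Close() error
}

type ReadTransaction interface {
	Get(key []byte) ([]byte, error)
	Release()
}

type WriteTransaction interface {
	ReadTransaction
	Put(key, val []byte) error
	Delete(key []byte) error
	Commit() error
}
```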
Since we've taken it upon ourselves to create a log file by default on
Windows, this adds proper management of that log file. There are two new
options:
-log-max-old-files="3" Number of old files to keep (zero to keep only current).
-log-max-size="10485760" Maximum size of any file (zero to disable log rotation).
The default values result in four files (syncthing.log, syncthing.0.log,
..., syncthing.2.log) each up to 10 MiB in size. To not use log rotation
at all, the user can say -log-max-size=0.
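A sketch of the rotation step those defaults describe (the function shape is
illustrative, not the actual code): drop the oldest numbered file, shift the
rest up by one, and move the current file to syncthing.0.log.

```go
package logger

import (
	"fmt"
	"os"
	"strings"
)

// rotate shifts syncthing.log into the numbered old files, keeping at
// most maxOldFiles of them.
func rotate(name string, maxOldFiles int) error {
	base := strings.TrimSuffix(name, ".log") // e.g. "syncthing"
	numbered := func(i int) string { return fmt.Sprintf("%s.%d.log", base, i) }
	if maxOldFiles <= 0 {
		return os.Remove(name) // zero: keep only the current (new) file
	}
	os.Remove(numbered(maxOldFiles - 1)) // the oldest file falls off the end
	for i := maxOldFiles - 2; i >= 0; i-- {
		os.Rename(numbered(i), numbered(i+1)) // missing files are fine to skip
	}
	return os.Rename(name, numbered(0))
}
```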
This is apparently an old benchmarking tool. I'd forgotten about it.
Since 67b8ef1f3e the build script tries to
build all binaries explicitly by default, and this fails on Windows as
this tool doesn't build on Windows.
Kill it with fire.
This makes sure addresses are sorted when coming in from the API. The
database merge operation still checks for correct ordering (which is
quick) and sorts if it isn't correct (legacy database record or
replication peer), but in that case it does a copy first.
Tested with -race in production...
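An illustrative sketch of that rule (not the exact merge code): a cheap
is-sorted check up front, and a copy-then-sort only for out-of-order records.

```go
package discosrv

import "sort"

// sortedAddresses returns addrs in sorted order, copying only when a
// sort is actually needed, so the stored slice is never mutated.
func sortedAddresses(addrs []string) []string {
	if sort.StringsAreSorted(addrs) {
		return addrs // common case: no copy, no sort
	}
	cp := append([]string(nil), addrs...) // copy first
	sort.Strings(cp)
	return cp
}
```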
This adds a certificate lifetime parameter to our certificate generation
and hard codes it to twenty years in some uninteresting places. In the
main binary there are a couple of constants but it results in twenty
years for the device certificate and 820 days for the HTTPS one. 820 is
less than the 825 maximum Apple allows nowadays.
This also means we must be prepared for certificates to expire, so I add
some handling for that and generate a new certificate when needed. For
self-signed certificates we regenerate a month ahead of time. For other
certificates we leave well enough alone.
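A sketch of that expiry check (the function shape and the self-signed
heuristic are assumptions; the "month ahead" window comes from the text
above):

```go
package tlsutil

import (
	"crypto/tls"
	"crypto/x509"
	"time"
)

// shouldRegenerate reports whether a certificate is self-signed and
// close enough to expiry that we should generate a new one.
func shouldRegenerate(cert tls.Certificate) bool {
	if len(cert.Certificate) == 0 {
		return true // no certificate at all: generate one
	}
	leaf, err := x509.ParseCertificate(cert.Certificate[0])
	if err != nil {
		return true // can't even parse it: replace
	}
	if leaf.Issuer.String() != leaf.Subject.String() {
		return false // not self-signed: leave well enough alone
	}
	return time.Until(leaf.NotAfter) < 30*24*time.Hour // a month ahead of time
}
```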
The relay and discosrv didn't use the new lib/build package; now they
do. Conversely, the lib/build package wasn't aware there might be other
users and hard coded the program name - now it's set by the build
script.
This is the result of:
- Changing build.go to take the protobuf version from the modules
instead of hardcoding it
- `go get github.com/gogo/protobuf@v1.3.0` to upgrade
- `go run build.go proto` to regenerate our code
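A sketch of the first point (build.go's actual code may differ): ask the go
tool for the version recorded in go.mod and use that when installing the proto
tools, so an upgrade via `go get` is picked up automatically.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// protobufVersion returns the gogo/protobuf version from the module
// graph, e.g. "v1.3.0".
func protobufVersion() (string, error) {
	out, err := exec.Command("go", "list", "-m", "-f", "{{.Version}}",
		"github.com/gogo/protobuf").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	v, err := protobufVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("gogo/protobuf", v)
	// ... install protoc-gen-gogofast and friends at this version ...
}
```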