Commit Graph

16 Commits

Author SHA1 Message Date
Jakob Borg
61b94b9ea5
lib/db: Drop indexes for outgoing data to force refresh (ref #9496) (#9502)
### Purpose

Resend our indexes since we fixed that index-sending issue.

I added a way to drop only the non-local-device index IDs, i.e., those
for other devices. This means we will see a mismatch and resend all our
indexes, but the other devices will not. This is somewhat cleaner, as it
avoids resending everything twice when two devices are upgraded, and in
any case we have no reason to force a resend of incoming indexes here.
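
A rough conceptual sketch of that drop, with invented names and key
layout (the real store keys index IDs by device and folder):

```go
// Rough conceptual sketch, not the actual migration code: forget the index
// IDs recorded for every device except our own, so the next exchange sees a
// mismatch and we resend our full index.
package main

import "fmt"

type deviceID string

// indexIDs maps a device to the index ID we have on record for it.
type indexIDs map[deviceID]uint64

// dropRemoteIndexIDs removes every entry except the local device's own,
// forcing the outgoing resend without touching incoming state.
func dropRemoteIndexIDs(ids indexIDs, local deviceID) {
	for dev := range ids {
		if dev != local {
			delete(ids, dev)
		}
	}
}

func main() {
	ids := indexIDs{"LOCAL": 42, "PEER-A": 7, "PEER-B": 9}
	dropRemoteIndexIDs(ids, "LOCAL")
	fmt.Println(ids) // map[LOCAL:42]
}
```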

### Testing

It happens on my computer...
2024-04-08 11:14:27 +02:00
deepsource-autofix[bot]
755d21953f
all: Remove unused method receivers (#8462)
Co-authored-by: deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
2022-07-28 17:32:45 +02:00
Audrius Butkevicius
87a0eecc31
lib/fs, lib/api, lib/model: Expose mtime remappings as part of /db/file (#7624)
* lib/fs, lib/api, lib/model: Expose mtime remappings as part of /db/file

* Fix wrong error returned by CLI

* Gofmt

* Better names

* Review comments

* Review comments
2021-05-03 11:28:25 +01:00
Simon Frei
bd0c9913cf
lib/db: Remove index ids when dropping folder (#7200) 2020-12-21 11:10:59 +01:00
André Colomb
7502997e7e
all: Store pending devices and folders in database (fixes #7178) (#6443) 2020-12-17 19:54:31 +01:00
Jakob Borg
a218a69530
lib/db: Use explicit byte type for type prefixes (#6754)
This makes Go 1.15 test/vet happy, avoiding the "conversion from untyped
int to string yields a string of one rune" warning where we do
string(KeyTypeWhatever) in namespaced.go.

It also clarifies and enforces the currently allowed range of these
numbers so I think it's fine.
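
As a hedged illustration of why the explicit byte type helps (the
constant names below are made up, not the ones actually used in
namespaced.go):

```go
// Illustrative sketch only; constant names are invented.
package main

import "fmt"

// keyType is an explicit byte-based type for the one-byte key prefixes.
type keyType byte

const (
	keyTypeMiscData keyType = iota // 0x00
	keyTypeConfig                  // 0x01
)

func main() {
	// With a byte-based constant, string(keyTypeConfig) is a byte-to-string
	// conversion yielding a one-byte prefix, which Go 1.15 vet accepts;
	// with an untyped int constant it warns about a one-rune string.
	key := string(keyTypeConfig) + "folder-id"
	fmt.Printf("%q\n", key) // "\x01folder-id"
}
```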
2020-06-17 09:37:07 +02:00
greatroar
b6a84ecb0c
lib/db: Remove unused keyer methods (#6723)
Co-authored-by: greatroar <@>
2020-06-08 19:19:18 +01:00
Jakob Borg
531ceb2b0f
Add indirection for large version vectors. (#6376)
This adds indirection of large version vectors in the same manner as we
already do for block lists. The effect is the same: less duplicated data
in some situations.

To mitigate the impact in cases where this indirection isn't needed, I've
added an indirection cutoff for both blocks and the new version vectors:
we don't do the indirection at all for small block lists or small version
vectors, instead storing them directly as before. This is faster for
small files and small setups.
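
A minimal sketch of the cutoff idea, under assumed type names and an
assumed threshold (not the actual Syncthing types or limits):

```go
// Small version vectors stay inline on the FileInfo; large ones are stored
// once under their hash and only the hash is kept on the FileInfo.
package main

import (
	"crypto/sha256"
	"fmt"
)

const versionIndirectionCutoff = 10 // assumed threshold

type counter struct {
	ID    uint64
	Value uint64
}

type fileInfo struct {
	Name        string
	Version     []counter // kept inline when the vector is small
	VersionHash []byte    // set instead when the vector is indirected
}

// indirectStore stands in for the side table keyed by content hash.
var indirectStore = map[[32]byte][]counter{}

func putFile(f *fileInfo) {
	if len(f.Version) <= versionIndirectionCutoff {
		return // small vector: store directly, no extra read on get
	}
	// Large vector: store once under its hash, so identical vectors on
	// many files collapse to a single entry.
	h := sha256.Sum256(marshal(f.Version))
	indirectStore[h] = f.Version
	f.VersionHash = h[:]
	f.Version = nil
}

// marshal is a stand-in for the real serialization.
func marshal(v []counter) []byte {
	return []byte(fmt.Sprint(v))
}

func main() {
	f := fileInfo{Name: "a", Version: make([]counter, 32)}
	putFile(&f)
	fmt.Println(len(f.Version), len(f.VersionHash), len(indirectStore)) // 0 32 1
}
```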
2020-05-13 14:28:42 +02:00
Audrius Butkevicius
decb967969
all: Reorder sequences for better rename detection (#6574) 2020-05-11 20:15:11 +02:00
Simon Frei
cc2a55892f
lib: Repair sequence inconsistencies (#6367) 2020-03-18 17:34:46 +01:00
Jakob Borg
8fc2dfad0c
lib/db: Deduplicate block lists in database (fixes #5898) (#6283)
* lib/db: Deduplicate block lists in database (fixes #5898)

This moves the block list in the database out from being just a field on
the FileInfo to being an object of its own. When putting a FileInfo we
marshal the block list separately and store it keyed by the sha256 of
the marshalled block list. When getting, if we are not doing a
"truncated" get, we do an extra read and unmarshal for the block list.

Old block lists are cleared out by a periodic GC sweep. The alternative
would be to use refcounting, but:

- There is a larger risk of getting that wrong and either dropping a
  block list in error or keeping them around forever.

- It's tricky with our current database, as we don't have dirty reads.
  This means that if we update two FileInfos with identical block lists in
  the same transaction, we can't just do read/modify/write for the ref
  counters, as we wouldn't see our own first update. See the previous
  point about the risk of getting this wrong.

GC uses a bloom filter for keys to avoid heavy RAM usage. GC can't run
concurrently with FileInfo updates, so there is a new lock around those
operations in the lowlevel.
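
A simplified sketch of that sweep, using a tiny hand-rolled Bloom filter
and a plain mutex as stand-ins (the real filter, sizes and locking differ):

```go
// Mark every block-list hash still referenced by a FileInfo, then delete
// unmarked block lists. The filter may give false positives (keeping a dead
// block list one extra sweep) but never false negatives.
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sync"
)

type bloom struct{ bits []uint64 }

func newBloom(nbits int) *bloom { return &bloom{bits: make([]uint64, (nbits+63)/64)} }

func (b *bloom) hashes(key []byte) [2]uint64 {
	s := sha256.Sum256(key)
	return [2]uint64{binary.BigEndian.Uint64(s[0:8]), binary.BigEndian.Uint64(s[8:16])}
}

func (b *bloom) add(key []byte) {
	for _, v := range b.hashes(key) {
		i := v % uint64(len(b.bits)*64)
		b.bits[i/64] |= 1 << (i % 64)
	}
}

func (b *bloom) has(key []byte) bool {
	for _, v := range b.hashes(key) {
		i := v % uint64(len(b.bits)*64)
		if b.bits[i/64]&(1<<(i%64)) == 0 {
			return false
		}
	}
	return true
}

var (
	gcMut      sync.Mutex            // FileInfo updates would take this too
	fileBlocks = map[string][]byte{} // file name -> hash of its block list
	blockLists = map[string][]byte{} // block-list hash -> marshalled blocks
)

func gcBlockLists() {
	gcMut.Lock() // no FileInfo updates while the sweep runs
	defer gcMut.Unlock()

	seen := newBloom(1 << 16)
	for _, h := range fileBlocks {
		seen.add(h) // phase 1: mark every referenced block list
	}
	for h := range blockLists {
		if !seen.has([]byte(h)) {
			delete(blockLists, h) // phase 2: drop unreferenced ones
		}
	}
}

func main() {
	fileBlocks["a"] = []byte("hash1")
	blockLists["hash1"] = []byte("...")
	blockLists["hash2"] = []byte("...") // orphaned
	gcBlockLists()
	fmt.Println(len(blockLists)) // 1
}
```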

The end result is a much more compact database, especially for setups
with many peers where files get duplicated many times.

These are per-key-class stats for a large database I'm currently working
with, under the current schema:

```
 0x00:  9138161 items, 870876 KB keys + 7397482 KB data, 95 B +  809 B avg, 1637651 B max
 0x01:   185656 items,  10388 KB keys + 1790909 KB data, 55 B + 9646 B avg,  924525 B max
 0x02:   916890 items,  84795 KB keys +    3667 KB data, 92 B +    4 B avg,     192 B max
 0x03:      384 items,     27 KB keys +       5 KB data, 72 B +   15 B avg,      87 B max
 0x04:     1109 items,     17 KB keys +      17 KB data, 15 B +   15 B avg,      69 B max
 0x06:      383 items,      3 KB keys +       0 KB data,  9 B +    2 B avg,      18 B max
 0x07:      510 items,      4 KB keys +      12 KB data,  9 B +   24 B avg,      41 B max
 0x08:     1349 items,     12 KB keys +      10 KB data,  9 B +    8 B avg,      17 B max
 0x09:      194 items,      0 KB keys +     123 KB data,  5 B +  634 B avg,   11484 B max
 0x0a:        3 items,      0 KB keys +       0 KB data, 14 B +    7 B avg,      30 B max
 0x0b:   181836 items,   2363 KB keys +   10694 KB data, 13 B +   58 B avg,     173 B max
 Total 10426475 items, 968490 KB keys + 9202925 KB data.
```

Note 7.4 GB of data in class 00, total size 9.2 GB. After running the
migration we get this instead:

```
 0x00:  9138161 items, 870876 KB keys + 2611392 KB data, 95 B +  285 B avg,    4788 B max
 0x01:   185656 items,  10388 KB keys + 1790909 KB data, 55 B + 9646 B avg,  924525 B max
 0x02:   916890 items,  84795 KB keys +    3667 KB data, 92 B +    4 B avg,     192 B max
 0x03:      384 items,     27 KB keys +       5 KB data, 72 B +   15 B avg,      87 B max
 0x04:     1109 items,     17 KB keys +      17 KB data, 15 B +   15 B avg,      69 B max
 0x06:      383 items,      3 KB keys +       0 KB data,  9 B +    2 B avg,      18 B max
 0x07:      510 items,      4 KB keys +      12 KB data,  9 B +   24 B avg,      41 B max
 0x09:      194 items,      0 KB keys +     123 KB data,  5 B +  634 B avg,   11484 B max
 0x0a:        3 items,      0 KB keys +       0 KB data, 14 B +   17 B avg,      51 B max
 0x0b:   181836 items,   2363 KB keys +   10694 KB data, 13 B +   58 B avg,     173 B max
 0x0d:    44282 items,   1461 KB keys +   61081 KB data, 33 B + 1379 B avg, 1637399 B max
 Total 10469408 items, 969939 KB keys + 4477905 KB data.
```

Class 00 is now down to 2.6 GB, with just 61 MB added in class 0d.

There will be some additional reads in some cases, which theoretically
hurts performance, but this will be more than compensated for by smaller
writes and better compaction.

On my own home setup, which has just three devices and a handful of
folders, the difference is smaller in absolute numbers of course, but the
result is still less than half the old size:

```
 0x00:  297122 items,  20894 KB keys + 306860 KB data, 70 B + 1032 B avg, 103237 B max
 0x01:  115299 items,   7738 KB keys +  17542 KB data, 67 B +  152 B avg,    419 B max
 0x02: 1430537 items, 121223 KB keys +   5722 KB data, 84 B +    4 B avg,    253 B max
 ...
 Total 1947412 items, 151268 KB keys + 337485 KB data.
```

to:

```
 0x00:  297122 items,  20894 KB keys +  37038 KB data, 70 B +  124 B avg,    520 B max
 0x01:  115299 items,   7738 KB keys +  17542 KB data, 67 B +  152 B avg,    419 B max
 0x02: 1430537 items, 121223 KB keys +   5722 KB data, 84 B +    4 B avg,    253 B max
 ...
 0x0d:   18041 items,    595 KB keys +  71964 KB data, 33 B + 3988 B avg, 101109 B max
 Total 1965447 items, 151863 KB keys + 139628 KB data.
```

* wip

* wip

* wip

* wip
2020-01-24 08:35:44 +01:00
Jakob Borg
c71116ee94
Implement database abstraction, error checking (ref #5907) (#6107)
This PR does two things, because one led to the other:

- Move the leveldb-specific stuff into a small "backend" package that
defines a backend interface and the leveldb implementation. This allows,
potentially, switching the db implementation to another KV store in the
future, should we wish to do so.

- Add proper error handling all along the way. The db and backend
packages are now errcheck clean. However, I drew the line at modifying
the FileSet API in order to keep this manageable and not continue
refactoring the rest of Syncthing. As such, the FileSet methods still
panic on database errors, except for the "database is closed" error,
which is instead handled by silently returning as quickly as possible,
on the assumption that we're on the way out anyway.
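
A minimal sketch of the shape of such a backend abstraction, with
invented method names and an in-memory implementation standing in for the
leveldb one (the real interface also covers transactions and iterators):

```go
package main

import (
	"errors"
	"fmt"
)

var errClosed = errors.New("database is closed")

// Backend is the narrow KV surface the db package programs against, so
// the underlying store can be swapped out.
type Backend interface {
	Get(key []byte) ([]byte, error)
	Put(key, value []byte) error
	Delete(key []byte) error
	Close() error
}

// memBackend is a toy implementation; a real one would wrap leveldb.
type memBackend struct {
	closed bool
	data   map[string][]byte
}

func newMemBackend() *memBackend { return &memBackend{data: map[string][]byte{}} }

func (b *memBackend) Get(key []byte) ([]byte, error) {
	if b.closed {
		return nil, errClosed
	}
	v, ok := b.data[string(key)]
	if !ok {
		return nil, errors.New("not found")
	}
	return v, nil
}

func (b *memBackend) Put(key, value []byte) error {
	if b.closed {
		return errClosed
	}
	b.data[string(key)] = append([]byte(nil), value...)
	return nil
}

func (b *memBackend) Delete(key []byte) error {
	if b.closed {
		return errClosed
	}
	delete(b.data, string(key))
	return nil
}

func (b *memBackend) Close() error {
	b.closed = true
	return nil
}

func main() {
	var db Backend = newMemBackend()
	_ = db.Put([]byte("k"), []byte("v"))
	v, err := db.Get([]byte("k"))
	fmt.Println(string(v), err) // v <nil>
	_ = db.Close()
	_, err = db.Get([]byte("k"))
	fmt.Println(errors.Is(err, errClosed)) // true
}
```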
2019-11-29 09:11:52 +01:00
Simon Frei
42bd42df5a lib/db: Do all update operations on a single item at once (#5441)
To do so, the BlockMap struct has been removed. It behaved like any other
prefixed part of the database, but was not integrated into the recent
keyer refactor. Now the database is only flushed when files are in a
consistent state.
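
A simplified sketch of the single-batch idea, with assumed key names (not
the real key schema): everything derived from one file lands in the store
together, so it never observes a half-updated file.

```go
package main

import "fmt"

type op struct{ key, value string }

// batch collects writes and applies them atomically on commit.
type batch struct{ ops []op }

func (b *batch) put(key, value string) { b.ops = append(b.ops, op{key, value}) }

func (b *batch) commit(store map[string]string) {
	for _, o := range b.ops { // applied together: one flush per file
		store[o.key] = o.value
	}
	b.ops = nil
}

func updateFile(store map[string]string, folder, name string, seq int, blocks []string) {
	b := &batch{}
	b.put("fileinfo/"+folder+"/"+name, fmt.Sprintf("seq=%d", seq))
	b.put(fmt.Sprintf("sequence/%s/%08d", folder, seq), name)
	for i, h := range blocks {
		b.put(fmt.Sprintf("blockmap/%s/%s/%d", folder, h, i), name)
	}
	b.commit(store) // everything for this file lands at once
}

func main() {
	store := map[string]string{}
	updateFile(store, "default", "a.txt", 1, []string{"h1", "h2"})
	fmt.Println(len(store)) // 4
}
```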
2019-01-23 10:22:33 +01:00
Jakob Borg
f04d054b5a lib/db: Document current keyspace 2018-12-10 09:55:21 +01:00
Jakob Borg
3bc918ff78 lib/db: Properly remove FileInfos when dropping folder (#5260) 2018-10-11 12:09:44 +02:00
Jakob Borg
6a87aac84f
lib/db: Refactor key handling (ref #5198) (#5199)
This breaks out the key generation code into a separate type. It's
cleaner on its own, and it prepares for future work.
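
A rough sketch, with invented prefixes and method names, of what such a
keyer type can look like: one place that knows the key classes and builds
the composite keys, instead of ad-hoc concatenation at the call sites.

```go
package main

import "fmt"

type keyType byte

const (
	keyTypeDevice keyType = 0
	keyTypeGlobal keyType = 1
)

// keyer centralizes key construction for the different key classes.
type keyer struct{}

func (keyer) deviceFileKey(folder, device, name []byte) []byte {
	k := []byte{byte(keyTypeDevice)}
	k = append(k, folder...)
	k = append(k, device...)
	return append(k, name...)
}

func (keyer) globalFileKey(folder, name []byte) []byte {
	k := []byte{byte(keyTypeGlobal)}
	k = append(k, folder...)
	return append(k, name...)
}

func main() {
	var k keyer
	fmt.Printf("%x\n", k.deviceFileKey([]byte{0, 1}, []byte{0, 2}, []byte("a")))
	fmt.Printf("%x\n", k.globalFileKey([]byte{0, 1}, []byte("a")))
}
```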
2018-09-18 10:41:06 +02:00