refactor: use modern Protobuf encoder (#9817)

At a high level, this is what I've done and why:

- I'm moving the protobuf generation for the `protocol`, `discovery` and
`db` packages to the modern alternatives, and using `buf` to generate
because it's nice and simple.
- After trying various approaches on how to integrate the new types with
the existing code, I opted for splitting off our own data model types
from the on-the-wire generated types. This means we can have a
`FileInfo` type with nicer ergonomics and lots of methods, while the
protobuf generated type stays clean and close to the wire protocol. It
does mean copying between the two when required, which certainly adds a
small amount of inefficiency. If we want to walk this back in the future
and use the raw generated type throughout, that's possible; this approach,
however, keeps the refactor smaller (!) as it doesn't change everything
about the type for everyone at the same time. (A minimal sketch of the
copying involved follows this list.)
- I have simply removed in cold blood a significant number of old
database migrations. These depended on previous generations of generated
messages of various kinds and were annoying to support in the new
fashion. The oldest supported database version now is the one from
Syncthing 1.9.0 from Sep 7, 2020.
- I changed config structs to be regular manually defined structs.
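
To make the copying concrete, here is a minimal sketch of the kind of
conversion the split implies. The helper names and the reduced field set
are illustrative only, not the actual code in this change:

```
package protocol

import "github.com/syncthing/syncthing/internal/gen/bep"

// FileInfo is the internal data-model type that carries all the extra
// methods; only two fields are shown here for brevity.
type FileInfo struct {
	Name string
	Size int64
}

// fileInfoFromWire copies the generated wire message into the model type.
// (Hypothetical helper name; the real conversion covers every field.)
func fileInfoFromWire(w *bep.FileInfo) FileInfo {
	return FileInfo{Name: w.Name, Size: w.Size}
}

// toWire copies the model type back into a wire message for marshalling.
func (f FileInfo) toWire() *bep.FileInfo {
	return &bep.FileInfo{Name: f.Name, Size: f.Size}
}
```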

For the sake of discussion, some things I tried that turned out not to
work...

### Embedding / wrapping

Embedding the protobuf generated structs in our existing types as a data
container and keeping our methods and stuff:

```
package protocol

type FileInfo struct {
  *generated.FileInfo
}
```

This generates a lot of problems because the internal shape of the
generated struct is quite different (different names, different types,
more pointers), because initializing it doesn't work like you'd expect
(i.e., you end up with an embedded nil pointer and a panic), and because
the types of child types don't get wrapped. That is, even if we also
have a similar wrapper around a `Vector`, that's not the type you get
when accessing `someFileInfo.Version`; you get the `*generated.Vector`,
which doesn't have methods, etc.
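
A minimal, self-contained sketch of the nil-pointer problem, using a
stand-in for the generated type rather than the real one:

```
package main

import "fmt"

// generatedFileInfo stands in for the protobuf-generated struct, which is
// normally handled via a pointer.
type generatedFileInfo struct {
	Name string
}

// FileInfo embeds the generated struct as its data container.
type FileInfo struct {
	*generatedFileInfo
}

func main() {
	var f FileInfo      // the embedded pointer is nil
	fmt.Println(f.Name) // panics with a nil pointer dereference
}
```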

### Aliasing

```
package protocol

type FileInfo = generated.FileInfo
```

Doesn't help because you can't attach methods to it, plus all the above.
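
A sketch of the compiler restriction this runs into; the `IsEmpty` method
is just an arbitrary example, and the snippet intentionally does not
compile:

```
package protocol

import "github.com/syncthing/syncthing/internal/gen/bep"

// FileInfo is only an alias; it is still the generated type.
type FileInfo = bep.FileInfo

// Does not compile: Go rejects defining new methods on the non-local
// type bep.FileInfo.
func (f *FileInfo) IsEmpty() bool {
	return f.Name == ""
}
```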

### Generating the types into the target package like we do now and attaching methods

This fails because of the different shape of the generated type (as in
the embedding case above) plus the generated struct already has a bunch
of methods that we can't necessarily override properly (like `String()`
and a bunch of getters).
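
For illustration, assuming the generated `FileInfo` were emitted into this
package as before, redefining one of its existing methods clashes at
compile time (the snippet intentionally does not compile):

```
package protocol

// protoc-gen-go already emits, roughly:
//
//	func (x *FileInfo) String() string { ... }
//	func (x *FileInfo) GetName() string { ... }
//
// so declaring our own version below is a duplicate method declaration,
// and the generated getters can't be overridden either.
func (f *FileInfo) String() string {
	return f.Name
}
```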

### Methods to functions

I considered just moving all the methods we attach to functions in a
specific package, so that for example

```
package protocol

func (f FileInfo) Equal(other FileInfo) bool
```

would become

```
package fileinfos

func Equal(a, b *generated.FileInfo) bool
```

and this would mostly work, but becomes quite verbose and cumbersome,
and somewhat limits discoverability (you can't see what methods are
available on the type in autocompletion, etc). In the end I did this in
some cases, like in the database layer, where a lot of things like
`func (fv *FileVersion) IsEmpty() bool` become `func fvIsEmpty(fv
*generated.FileVersion) bool` because they were anyway just internal
methods.

Fixes #8247
Jakob Borg 2024-12-01 16:50:17 +01:00 committed by GitHub
parent 2b8ee4c7a5
commit 77970d5113
203 changed files with 7437 additions and 28636 deletions

View File

@@ -1,26 +1,39 @@
-linters-settings:
-  maligned:
-    suggest-new: true
-
 linters:
   enable-all: true
   disable:
-    - goimports
+    - cyclop
     - depguard
-    - lll
-    - gochecknoinits
-    - gochecknoglobals
-    - gofmt
-    - scopelint
-    - gocyclo
+    - exhaustive
+    - exhaustruct
     - funlen
-    - wsl
+    - gci
+    - gochecknoglobals
+    - gochecknoinits
     - gocognit
+    - goconst
+    - gocyclo
     - godox
+    - gofmt
+    - goimports
+    - gomoddirectives
+    - inamedparam
+    - interfacebloat
+    - ireturn
+    - lll
+    - maintidx
+    - nestif
+    - nonamedreturns
+    - paralleltest
+    - protogetter
+    - scopelint
+    - tagalign
+    - tagliatelle
+    - testpackage
+    - varnamelen
+    - wsl
 
-service:
-  golangci-lint-version: 1.21.x
-  prepare:
-    - rm -f go.sum # 1.12 -> 1.13 issues with QUIC-go
-    - GO111MODULE=on go mod vendor
-    - go run build.go assets
+issues:
+  exclude-dirs:
+    - internal/gen
+    - cmd/dev
+    - repos

buf.gen.yaml (new file)

@ -0,0 +1,12 @@
version: v2
managed:
enabled: true
override:
- file_option: go_package_prefix
value: github.com/syncthing/syncthing/internal/gen
plugins:
- remote: buf.build/protocolbuffers/go:v1.35.1
out: .
opt: module=github.com/syncthing/syncthing
inputs:
- directory: proto

buf.yaml (new file)

@ -0,0 +1,10 @@
version: v2
modules:
- path: proto
name: github.com/syncthing/syncthing
lint:
use:
- STANDARD
breaking:
use:
- WIRE_JSON

View File

@@ -925,22 +925,9 @@ func updateDependencies() {
 }
 
 func proto() {
-    pv := protobufVersion()
-    repo := "https://github.com/gogo/protobuf.git"
-    path := filepath.Join("repos", "protobuf")
-
-    runPrint(goCmd, "install", fmt.Sprintf("github.com/gogo/protobuf/protoc-gen-gogofast@%v", pv))
-
-    os.MkdirAll("repos", 0o755)
-    if _, err := os.Stat(path); err != nil {
-        runPrint("git", "clone", repo, path)
-    } else {
-        runPrintInDir(path, "git", "fetch")
-    }
-    runPrintInDir(path, "git", "checkout", pv)
-
-    runPrint(goCmd, "generate", "github.com/syncthing/syncthing/cmd/stdiscosrv")
-    runPrint(goCmd, "generate", "proto/generate.go")
+    // buf needs to be installed
+    // https://buf.build/docs/installation/
+    runPrint("buf", "generate")
 }
 
 func testmocks() {

@@ -1483,14 +1470,6 @@ func (t target) BinaryName() string {
     return t.binaryName
 }
 
-func protobufVersion() string {
-    bs, err := runError(goCmd, "list", "-f", "{{.Version}}", "-m", "github.com/gogo/protobuf")
-    if err != nil {
-        log.Fatal("Getting protobuf version:", err)
-    }
-    return string(bs)
-}
-
 func currentAndLatestVersions(n int) ([]string, error) {
     bs, err := runError("git", "tag", "--sort", "taggerdate")
     if err != nil {

View File

@ -15,6 +15,9 @@ import (
"strings" "strings"
"time" "time"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/discoproto"
_ "github.com/syncthing/syncthing/lib/automaxprocs" _ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/beacon" "github.com/syncthing/syncthing/lib/beacon"
"github.com/syncthing/syncthing/lib/discover" "github.com/syncthing/syncthing/lib/discover"
@ -75,20 +78,21 @@ func recv(bc beacon.Interface) {
continue continue
} }
var ann discover.Announce var ann discoproto.Announce
ann.Unmarshal(data[4:]) proto.Unmarshal(data[4:], &ann)
if ann.ID == myID { id, _ := protocol.DeviceIDFromBytes(ann.Id)
if id == myID {
// This is one of our own fake packets, don't print it. // This is one of our own fake packets, don't print it.
continue continue
} }
// Print announcement details for the first packet from a given // Print announcement details for the first packet from a given
// device ID and source address, or if -all was given. // device ID and source address, or if -all was given.
key := ann.ID.String() + src.String() key := id.String() + src.String()
if all || !seen[key] { if all || !seen[key] {
log.Printf("Announcement from %v\n", src) log.Printf("Announcement from %v\n", src)
log.Printf(" %v at %s\n", ann.ID, strings.Join(ann.Addresses, ", ")) log.Printf(" %v at %s\n", id, strings.Join(ann.Addresses, ", "))
seen[key] = true seen[key] = true
} }
} }
@ -96,11 +100,11 @@ func recv(bc beacon.Interface) {
// sends fake discovery announcements once every second // sends fake discovery announcements once every second
func send(bc beacon.Interface) { func send(bc beacon.Interface) {
ann := discover.Announce{ ann := &discoproto.Announce{
ID: myID, Id: myID[:],
Addresses: []string{"tcp://fake.example.com:12345"}, Addresses: []string{"tcp://fake.example.com:12345"},
} }
bs, _ := ann.Marshal() bs, _ := proto.Marshal(ann)
for { for {
bc.Send(bs) bc.Send(bs)

View File

@ -23,6 +23,7 @@ import (
lru "github.com/hashicorp/golang-lru/v2" lru "github.com/hashicorp/golang-lru/v2"
"github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp" "github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/syncthing/syncthing/cmd/infra/strelaypoolsrv/auto" "github.com/syncthing/syncthing/cmd/infra/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/lib/assets" "github.com/syncthing/syncthing/lib/assets"
_ "github.com/syncthing/syncthing/lib/automaxprocs" _ "github.com/syncthing/syncthing/lib/automaxprocs"

View File

@ -17,17 +17,15 @@ import (
"log/slog" "log/slog"
"net" "net"
"net/http" "net/http"
_ "net/http/pprof"
"os" "os"
"regexp" "regexp"
"strings" "strings"
"time" "time"
_ "net/http/pprof"
"github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp" "github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/puzpuzpuz/xsync/v3" "github.com/puzpuzpuz/xsync/v3"
"github.com/syncthing/syncthing/lib/build" "github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/geoip" "github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/s3" "github.com/syncthing/syncthing/lib/s3"

View File

@ -13,8 +13,12 @@ import (
"log" "log"
amqp "github.com/rabbitmq/amqp091-go" amqp "github.com/rabbitmq/amqp091-go"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/thejerf/suture/v4" "github.com/thejerf/suture/v4"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/discosrv"
"github.com/syncthing/syncthing/internal/protoutil"
"github.com/syncthing/syncthing/lib/protocol"
) )
type amqpReplicator struct { type amqpReplicator struct {
@ -22,7 +26,7 @@ type amqpReplicator struct {
broker string broker string
sender *amqpSender sender *amqpSender
receiver *amqpReceiver receiver *amqpReceiver
outbox chan ReplicationRecord outbox chan *discosrv.ReplicationRecord
} }
func newAMQPReplicator(broker, clientID string, db database) *amqpReplicator { func newAMQPReplicator(broker, clientID string, db database) *amqpReplicator {
@ -31,7 +35,7 @@ func newAMQPReplicator(broker, clientID string, db database) *amqpReplicator {
sender := &amqpSender{ sender := &amqpSender{
broker: broker, broker: broker,
clientID: clientID, clientID: clientID,
outbox: make(chan ReplicationRecord, replicationOutboxSize), outbox: make(chan *discosrv.ReplicationRecord, replicationOutboxSize),
} }
svc.Add(sender) svc.Add(sender)
@ -47,18 +51,18 @@ func newAMQPReplicator(broker, clientID string, db database) *amqpReplicator {
broker: broker, broker: broker,
sender: sender, sender: sender,
receiver: receiver, receiver: receiver,
outbox: make(chan ReplicationRecord, replicationOutboxSize), outbox: make(chan *discosrv.ReplicationRecord, replicationOutboxSize),
} }
} }
func (s *amqpReplicator) send(key *protocol.DeviceID, ps []DatabaseAddress, seen int64) { func (s *amqpReplicator) send(key *protocol.DeviceID, ps []*discosrv.DatabaseAddress, seen int64) {
s.sender.send(key, ps, seen) s.sender.send(key, ps, seen)
} }
type amqpSender struct { type amqpSender struct {
broker string broker string
clientID string clientID string
outbox chan ReplicationRecord outbox chan *discosrv.ReplicationRecord
} }
func (s *amqpSender) Serve(ctx context.Context) error { func (s *amqpSender) Serve(ctx context.Context) error {
@ -73,12 +77,12 @@ func (s *amqpSender) Serve(ctx context.Context) error {
for { for {
select { select {
case rec := <-s.outbox: case rec := <-s.outbox:
size := rec.Size() size := proto.Size(rec)
if len(buf) < size { if len(buf) < size {
buf = make([]byte, size) buf = make([]byte, size)
} }
n, err := rec.MarshalTo(buf) n, err := protoutil.MarshalTo(buf, rec)
if err != nil { if err != nil {
replicationSendsTotal.WithLabelValues("error").Inc() replicationSendsTotal.WithLabelValues("error").Inc()
return fmt.Errorf("replication marshal: %w", err) return fmt.Errorf("replication marshal: %w", err)
@ -111,8 +115,8 @@ func (s *amqpSender) String() string {
return fmt.Sprintf("amqpSender(%q)", s.broker) return fmt.Sprintf("amqpSender(%q)", s.broker)
} }
func (s *amqpSender) send(key *protocol.DeviceID, ps []DatabaseAddress, seen int64) { func (s *amqpSender) send(key *protocol.DeviceID, ps []*discosrv.DatabaseAddress, seen int64) {
item := ReplicationRecord{ item := &discosrv.ReplicationRecord{
Key: key[:], Key: key[:],
Addresses: ps, Addresses: ps,
Seen: seen, Seen: seen,
@ -158,8 +162,8 @@ func (s *amqpReceiver) Serve(ctx context.Context) error {
continue continue
} }
var rec ReplicationRecord var rec discosrv.ReplicationRecord
if err := rec.Unmarshal(msg.Body); err != nil { if err := proto.Unmarshal(msg.Body, &rec); err != nil {
replicationRecvsTotal.WithLabelValues("error").Inc() replicationRecvsTotal.WithLabelValues("error").Inc()
return fmt.Errorf("replication unmarshal: %w", err) return fmt.Errorf("replication unmarshal: %w", err)
} }

View File

@ -28,6 +28,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/syncthing/syncthing/internal/gen/discosrv"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/stringutil" "github.com/syncthing/syncthing/lib/stringutil"
) )
@ -52,7 +53,7 @@ type apiSrv struct {
} }
type replicator interface { type replicator interface {
send(key *protocol.DeviceID, addrs []DatabaseAddress, seen int64) send(key *protocol.DeviceID, addrs []*discosrv.DatabaseAddress, seen int64)
} }
type requestID int64 type requestID int64
@ -312,7 +313,7 @@ func (s *apiSrv) handleAnnounce(deviceID protocol.DeviceID, addresses []string)
slices.Sort(addresses) slices.Sort(addresses)
addresses = slices.Compact(addresses) addresses = slices.Compact(addresses)
dbAddrs := make([]DatabaseAddress, len(addresses)) dbAddrs := make([]*discosrv.DatabaseAddress, len(addresses))
for i := range addresses { for i := range addresses {
dbAddrs[i].Address = addresses[i] dbAddrs[i].Address = addresses[i]
dbAddrs[i].Expires = expire dbAddrs[i].Expires = expire
@ -511,7 +512,7 @@ func (lrw *loggingResponseWriter) WriteHeader(code int) {
lrw.ResponseWriter.WriteHeader(code) lrw.ResponseWriter.WriteHeader(code)
} }
func addressStrs(dbAddrs []DatabaseAddress) []string { func addressStrs(dbAddrs []*discosrv.DatabaseAddress) []string {
res := make([]string, len(dbAddrs)) res := make([]string, len(dbAddrs))
for i, a := range dbAddrs { for i, a := range dbAddrs {
res[i] = a.Address res[i] = a.Address

View File

@ -4,9 +4,6 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file, // License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/. // You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate go run ../../proto/scripts/protofmt.go database.proto
//go:generate protoc -I ../../ -I . --gogofast_out=. database.proto
package main package main
import ( import (
@ -25,6 +22,10 @@ import (
"time" "time"
"github.com/puzpuzpuz/xsync/v3" "github.com/puzpuzpuz/xsync/v3"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/discosrv"
"github.com/syncthing/syncthing/internal/protoutil"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand" "github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/s3" "github.com/syncthing/syncthing/lib/s3"
@ -41,13 +42,13 @@ func (defaultClock) Now() time.Time {
} }
type database interface { type database interface {
put(key *protocol.DeviceID, rec DatabaseRecord) error put(key *protocol.DeviceID, rec *discosrv.DatabaseRecord) error
merge(key *protocol.DeviceID, addrs []DatabaseAddress, seen int64) error merge(key *protocol.DeviceID, addrs []*discosrv.DatabaseAddress, seen int64) error
get(key *protocol.DeviceID) (DatabaseRecord, error) get(key *protocol.DeviceID) (*discosrv.DatabaseRecord, error)
} }
type inMemoryStore struct { type inMemoryStore struct {
m *xsync.MapOf[protocol.DeviceID, DatabaseRecord] m *xsync.MapOf[protocol.DeviceID, *discosrv.DatabaseRecord]
dir string dir string
flushInterval time.Duration flushInterval time.Duration
s3 *s3.Session s3 *s3.Session
@ -61,7 +62,7 @@ func newInMemoryStore(dir string, flushInterval time.Duration, s3sess *s3.Sessio
hn = rand.String(8) hn = rand.String(8)
} }
s := &inMemoryStore{ s := &inMemoryStore{
m: xsync.NewMapOf[protocol.DeviceID, DatabaseRecord](), m: xsync.NewMapOf[protocol.DeviceID, *discosrv.DatabaseRecord](),
dir: dir, dir: dir,
flushInterval: flushInterval, flushInterval: flushInterval,
s3: s3sess, s3: s3sess,
@ -95,7 +96,7 @@ func newInMemoryStore(dir string, flushInterval time.Duration, s3sess *s3.Sessio
return s return s
} }
func (s *inMemoryStore) put(key *protocol.DeviceID, rec DatabaseRecord) error { func (s *inMemoryStore) put(key *protocol.DeviceID, rec *discosrv.DatabaseRecord) error {
t0 := time.Now() t0 := time.Now()
s.m.Store(*key, rec) s.m.Store(*key, rec)
databaseOperations.WithLabelValues(dbOpPut, dbResSuccess).Inc() databaseOperations.WithLabelValues(dbOpPut, dbResSuccess).Inc()
@ -103,16 +104,17 @@ func (s *inMemoryStore) put(key *protocol.DeviceID, rec DatabaseRecord) error {
return nil return nil
} }
func (s *inMemoryStore) merge(key *protocol.DeviceID, addrs []DatabaseAddress, seen int64) error { func (s *inMemoryStore) merge(key *protocol.DeviceID, addrs []*discosrv.DatabaseAddress, seen int64) error {
t0 := time.Now() t0 := time.Now()
newRec := DatabaseRecord{ newRec := &discosrv.DatabaseRecord{
Addresses: addrs, Addresses: addrs,
Seen: seen, Seen: seen,
} }
oldRec, _ := s.m.Load(*key) if oldRec, ok := s.m.Load(*key); ok {
newRec = merge(oldRec, newRec) newRec = merge(oldRec, newRec)
}
s.m.Store(*key, newRec) s.m.Store(*key, newRec)
databaseOperations.WithLabelValues(dbOpMerge, dbResSuccess).Inc() databaseOperations.WithLabelValues(dbOpMerge, dbResSuccess).Inc()
@ -121,7 +123,7 @@ func (s *inMemoryStore) merge(key *protocol.DeviceID, addrs []DatabaseAddress, s
return nil return nil
} }
func (s *inMemoryStore) get(key *protocol.DeviceID) (DatabaseRecord, error) { func (s *inMemoryStore) get(key *protocol.DeviceID) (*discosrv.DatabaseRecord, error) {
t0 := time.Now() t0 := time.Now()
defer func() { defer func() {
databaseOperationSeconds.WithLabelValues(dbOpGet).Observe(time.Since(t0).Seconds()) databaseOperationSeconds.WithLabelValues(dbOpGet).Observe(time.Since(t0).Seconds())
@ -130,7 +132,7 @@ func (s *inMemoryStore) get(key *protocol.DeviceID) (DatabaseRecord, error) {
rec, ok := s.m.Load(*key) rec, ok := s.m.Load(*key)
if !ok { if !ok {
databaseOperations.WithLabelValues(dbOpGet, dbResNotFound).Inc() databaseOperations.WithLabelValues(dbOpGet, dbResNotFound).Inc()
return DatabaseRecord{}, nil return &discosrv.DatabaseRecord{}, nil
} }
rec.Addresses = expire(rec.Addresses, s.clock.Now()) rec.Addresses = expire(rec.Addresses, s.clock.Now())
@ -176,7 +178,7 @@ func (s *inMemoryStore) expireAndCalculateStatistics() {
current, currentIPv4, currentIPv6, currentIPv6GUA, last24h, last1w := 0, 0, 0, 0, 0, 0 current, currentIPv4, currentIPv6, currentIPv6GUA, last24h, last1w := 0, 0, 0, 0, 0, 0
n := 0 n := 0
s.m.Range(func(key protocol.DeviceID, rec DatabaseRecord) bool { s.m.Range(func(key protocol.DeviceID, rec *discosrv.DatabaseRecord) bool {
if n%1000 == 0 { if n%1000 == 0 {
runtime.Gosched() runtime.Gosched()
} }
@ -261,7 +263,7 @@ func (s *inMemoryStore) write() (err error) {
now := s.clock.Now() now := s.clock.Now()
cutoff1w := now.Add(-7 * 24 * time.Hour).UnixNano() cutoff1w := now.Add(-7 * 24 * time.Hour).UnixNano()
n := 0 n := 0
s.m.Range(func(key protocol.DeviceID, value DatabaseRecord) bool { s.m.Range(func(key protocol.DeviceID, value *discosrv.DatabaseRecord) bool {
if n%1000 == 0 { if n%1000 == 0 {
runtime.Gosched() runtime.Gosched()
} }
@ -271,16 +273,16 @@ func (s *inMemoryStore) write() (err error) {
// drop the record if it's older than a week // drop the record if it's older than a week
return true return true
} }
rec := ReplicationRecord{ rec := &discosrv.ReplicationRecord{
Key: key[:], Key: key[:],
Addresses: value.Addresses, Addresses: value.Addresses,
Seen: value.Seen, Seen: value.Seen,
} }
s := rec.Size() s := proto.Size(rec)
if s+4 > len(buf) { if s+4 > len(buf) {
buf = make([]byte, s+4) buf = make([]byte, s+4)
} }
n, err := rec.MarshalTo(buf[4:]) n, err := protoutil.MarshalTo(buf[4:], rec)
if err != nil { if err != nil {
rangeErr = err rangeErr = err
return false return false
@ -349,8 +351,8 @@ func (s *inMemoryStore) read() (int, error) {
if _, err := io.ReadFull(br, buf[:n]); err != nil { if _, err := io.ReadFull(br, buf[:n]); err != nil {
return nr, err return nr, err
} }
rec := ReplicationRecord{} rec := &discosrv.ReplicationRecord{}
if err := rec.Unmarshal(buf[:n]); err != nil { if err := proto.Unmarshal(buf[:n], rec); err != nil {
return nr, err return nr, err
} }
key, err := protocol.DeviceIDFromBytes(rec.Key) key, err := protocol.DeviceIDFromBytes(rec.Key)
@ -362,9 +364,9 @@ func (s *inMemoryStore) read() (int, error) {
continue continue
} }
slices.SortFunc(rec.Addresses, DatabaseAddress.Cmp) slices.SortFunc(rec.Addresses, Cmp)
rec.Addresses = slices.CompactFunc(rec.Addresses, DatabaseAddress.Equal) rec.Addresses = slices.CompactFunc(rec.Addresses, Equal)
s.m.Store(key, DatabaseRecord{ s.m.Store(key, &discosrv.DatabaseRecord{
Addresses: expire(rec.Addresses, s.clock.Now()), Addresses: expire(rec.Addresses, s.clock.Now()),
Seen: rec.Seen, Seen: rec.Seen,
}) })
@ -377,7 +379,7 @@ func (s *inMemoryStore) read() (int, error) {
// result is the union of the two address sets, with the newer expiry time // result is the union of the two address sets, with the newer expiry time
// chosen for any duplicates. The address list in a is overwritten and // chosen for any duplicates. The address list in a is overwritten and
// reused for the result. // reused for the result.
func merge(a, b DatabaseRecord) DatabaseRecord { func merge(a, b *discosrv.DatabaseRecord) *discosrv.DatabaseRecord {
// Both lists must be sorted for this to work. // Both lists must be sorted for this to work.
a.Seen = max(a.Seen, b.Seen) a.Seen = max(a.Seen, b.Seen)
@ -396,7 +398,7 @@ func merge(a, b DatabaseRecord) DatabaseRecord {
aIdx++ aIdx++
case 1: case 1:
// a > b, insert b before a // a > b, insert b before a
a.Addresses = append(a.Addresses[:aIdx], append([]DatabaseAddress{b.Addresses[bIdx]}, a.Addresses[aIdx:]...)...) a.Addresses = append(a.Addresses[:aIdx], append([]*discosrv.DatabaseAddress{b.Addresses[bIdx]}, a.Addresses[aIdx:]...)...)
bIdx++ bIdx++
} }
} }
@ -410,7 +412,7 @@ func merge(a, b DatabaseRecord) DatabaseRecord {
// expire returns the list of addresses after removing expired entries. // expire returns the list of addresses after removing expired entries.
// Expiration happen in place, so the slice given as the parameter is // Expiration happen in place, so the slice given as the parameter is
// destroyed. Internal order is preserved. // destroyed. Internal order is preserved.
func expire(addrs []DatabaseAddress, now time.Time) []DatabaseAddress { func expire(addrs []*discosrv.DatabaseAddress, now time.Time) []*discosrv.DatabaseAddress {
cutoff := now.UnixNano() cutoff := now.UnixNano()
naddrs := addrs[:0] naddrs := addrs[:0]
for i := range addrs { for i := range addrs {
@ -428,13 +430,13 @@ func expire(addrs []DatabaseAddress, now time.Time) []DatabaseAddress {
return naddrs return naddrs
} }
func (d DatabaseAddress) Cmp(other DatabaseAddress) (n int) { func Cmp(d, other *discosrv.DatabaseAddress) (n int) {
if c := cmp.Compare(d.Address, other.Address); c != 0 { if c := cmp.Compare(d.Address, other.Address); c != 0 {
return c return c
} }
return cmp.Compare(d.Expires, other.Expires) return cmp.Compare(d.Expires, other.Expires)
} }
func (d DatabaseAddress) Equal(other DatabaseAddress) bool { func Equal(d, other *discosrv.DatabaseAddress) bool {
return d.Address == other.Address return d.Address == other.Address
} }

View File

@ -1,792 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: database.proto
package main
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type DatabaseRecord struct {
Addresses []DatabaseAddress `protobuf:"bytes,1,rep,name=addresses,proto3" json:"addresses"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
}
func (m *DatabaseRecord) Reset() { *m = DatabaseRecord{} }
func (m *DatabaseRecord) String() string { return proto.CompactTextString(m) }
func (*DatabaseRecord) ProtoMessage() {}
func (*DatabaseRecord) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{0}
}
func (m *DatabaseRecord) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *DatabaseRecord) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_DatabaseRecord.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DatabaseRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseRecord.Merge(m, src)
}
func (m *DatabaseRecord) XXX_Size() int {
return m.Size()
}
func (m *DatabaseRecord) XXX_DiscardUnknown() {
xxx_messageInfo_DatabaseRecord.DiscardUnknown(m)
}
var xxx_messageInfo_DatabaseRecord proto.InternalMessageInfo
type ReplicationRecord struct {
Key []byte `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
Addresses []DatabaseAddress `protobuf:"bytes,2,rep,name=addresses,proto3" json:"addresses"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
}
func (m *ReplicationRecord) Reset() { *m = ReplicationRecord{} }
func (m *ReplicationRecord) String() string { return proto.CompactTextString(m) }
func (*ReplicationRecord) ProtoMessage() {}
func (*ReplicationRecord) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{1}
}
func (m *ReplicationRecord) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ReplicationRecord) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ReplicationRecord.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ReplicationRecord) XXX_Merge(src proto.Message) {
xxx_messageInfo_ReplicationRecord.Merge(m, src)
}
func (m *ReplicationRecord) XXX_Size() int {
return m.Size()
}
func (m *ReplicationRecord) XXX_DiscardUnknown() {
xxx_messageInfo_ReplicationRecord.DiscardUnknown(m)
}
var xxx_messageInfo_ReplicationRecord proto.InternalMessageInfo
type DatabaseAddress struct {
Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"`
Expires int64 `protobuf:"varint,2,opt,name=expires,proto3" json:"expires,omitempty"`
}
func (m *DatabaseAddress) Reset() { *m = DatabaseAddress{} }
func (m *DatabaseAddress) String() string { return proto.CompactTextString(m) }
func (*DatabaseAddress) ProtoMessage() {}
func (*DatabaseAddress) Descriptor() ([]byte, []int) {
return fileDescriptor_b90fe3356ea5df07, []int{2}
}
func (m *DatabaseAddress) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *DatabaseAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_DatabaseAddress.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DatabaseAddress) XXX_Merge(src proto.Message) {
xxx_messageInfo_DatabaseAddress.Merge(m, src)
}
func (m *DatabaseAddress) XXX_Size() int {
return m.Size()
}
func (m *DatabaseAddress) XXX_DiscardUnknown() {
xxx_messageInfo_DatabaseAddress.DiscardUnknown(m)
}
var xxx_messageInfo_DatabaseAddress proto.InternalMessageInfo
func init() {
proto.RegisterType((*DatabaseRecord)(nil), "main.DatabaseRecord")
proto.RegisterType((*ReplicationRecord)(nil), "main.ReplicationRecord")
proto.RegisterType((*DatabaseAddress)(nil), "main.DatabaseAddress")
}
func init() { proto.RegisterFile("database.proto", fileDescriptor_b90fe3356ea5df07) }
var fileDescriptor_b90fe3356ea5df07 = []byte{
// 243 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4b, 0x49, 0x2c, 0x49,
0x4c, 0x4a, 0x2c, 0x4e, 0xd5, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0xc9, 0x4d, 0xcc, 0xcc,
0x93, 0x52, 0x2e, 0x4a, 0x2d, 0xc8, 0x2f, 0xd6, 0x07, 0x0b, 0x25, 0x95, 0xa6, 0xe9, 0xa7, 0xe7,
0xa7, 0xe7, 0x83, 0x39, 0x60, 0x16, 0x44, 0xa9, 0x52, 0x3c, 0x17, 0x9f, 0x0b, 0x54, 0x73, 0x50,
0x6a, 0x72, 0x7e, 0x51, 0x8a, 0x90, 0x25, 0x17, 0x67, 0x62, 0x4a, 0x4a, 0x51, 0x6a, 0x71, 0x71,
0x6a, 0xb1, 0x04, 0xa3, 0x02, 0xb3, 0x06, 0xb7, 0x91, 0xa8, 0x1e, 0xc8, 0x40, 0x3d, 0x98, 0x42,
0x47, 0x88, 0xb4, 0x13, 0xcb, 0x89, 0x7b, 0xf2, 0x0c, 0x41, 0x08, 0xd5, 0x42, 0x42, 0x5c, 0x2c,
0xc5, 0xa9, 0xa9, 0x79, 0x12, 0xcc, 0x0a, 0x8c, 0x1a, 0xcc, 0x41, 0x60, 0xb6, 0x52, 0x09, 0x97,
0x60, 0x50, 0x6a, 0x41, 0x4e, 0x66, 0x72, 0x62, 0x49, 0x66, 0x7e, 0x1e, 0xd4, 0x0e, 0x01, 0x2e,
0xe6, 0xec, 0xd4, 0x4a, 0x09, 0x46, 0x05, 0x46, 0x0d, 0x9e, 0x20, 0x10, 0x13, 0xd5, 0x56, 0x26,
0x8a, 0x6d, 0x75, 0xe5, 0xe2, 0x47, 0xd3, 0x27, 0x24, 0xc1, 0xc5, 0x0e, 0xd5, 0x03, 0xb6, 0x97,
0x33, 0x08, 0xc6, 0x05, 0xc9, 0xa4, 0x56, 0x14, 0x64, 0x16, 0x81, 0x6d, 0x06, 0x99, 0x01, 0xe3,
0x3a, 0xc9, 0x9c, 0x78, 0x28, 0xc7, 0x70, 0xe2, 0x91, 0x1c, 0xe3, 0x85, 0x47, 0x72, 0x8c, 0x0f,
0x1e, 0xc9, 0x31, 0x4e, 0x78, 0x2c, 0xc7, 0x70, 0xe1, 0xb1, 0x1c, 0xc3, 0x8d, 0xc7, 0x72, 0x0c,
0x49, 0x6c, 0xe0, 0x20, 0x34, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0xc6, 0x0b, 0x9b, 0x77, 0x7f,
0x01, 0x00, 0x00,
}
func (m *DatabaseRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *DatabaseRecord) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *DatabaseRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.Seen != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
i--
dAtA[i] = 0x18
}
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Addresses[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintDatabase(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func (m *ReplicationRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ReplicationRecord) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *ReplicationRecord) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.Seen != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
i--
dAtA[i] = 0x18
}
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Addresses[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintDatabase(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x12
}
}
if len(m.Key) > 0 {
i -= len(m.Key)
copy(dAtA[i:], m.Key)
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Key)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func (m *DatabaseAddress) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *DatabaseAddress) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *DatabaseAddress) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.Expires != 0 {
i = encodeVarintDatabase(dAtA, i, uint64(m.Expires))
i--
dAtA[i] = 0x10
}
if len(m.Address) > 0 {
i -= len(m.Address)
copy(dAtA[i:], m.Address)
i = encodeVarintDatabase(dAtA, i, uint64(len(m.Address)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func encodeVarintDatabase(dAtA []byte, offset int, v uint64) int {
offset -= sovDatabase(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *DatabaseRecord) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Addresses) > 0 {
for _, e := range m.Addresses {
l = e.Size()
n += 1 + l + sovDatabase(uint64(l))
}
}
if m.Seen != 0 {
n += 1 + sovDatabase(uint64(m.Seen))
}
return n
}
func (m *ReplicationRecord) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Key)
if l > 0 {
n += 1 + l + sovDatabase(uint64(l))
}
if len(m.Addresses) > 0 {
for _, e := range m.Addresses {
l = e.Size()
n += 1 + l + sovDatabase(uint64(l))
}
}
if m.Seen != 0 {
n += 1 + sovDatabase(uint64(m.Seen))
}
return n
}
func (m *DatabaseAddress) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Address)
if l > 0 {
n += 1 + l + sovDatabase(uint64(l))
}
if m.Expires != 0 {
n += 1 + sovDatabase(uint64(m.Expires))
}
return n
}
func sovDatabase(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozDatabase(x uint64) (n int) {
return sovDatabase(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: DatabaseRecord: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: DatabaseRecord: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Addresses", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Addresses = append(m.Addresses, DatabaseAddress{})
if err := m.Addresses[len(m.Addresses)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Seen", wireType)
}
m.Seen = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Seen |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipDatabase(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ReplicationRecord: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ReplicationRecord: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if byteLen < 0 {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Key = append(m.Key[:0], dAtA[iNdEx:postIndex]...)
if m.Key == nil {
m.Key = []byte{}
}
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Addresses", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Addresses = append(m.Addresses, DatabaseAddress{})
if err := m.Addresses[len(m.Addresses)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Seen", wireType)
}
m.Seen = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Seen |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipDatabase(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: DatabaseAddress: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: DatabaseAddress: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthDatabase
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDatabase
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Address = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Expires", wireType)
}
m.Expires = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDatabase
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Expires |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipDatabase(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthDatabase
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipDatabase(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowDatabase
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowDatabase
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowDatabase
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthDatabase
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupDatabase
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthDatabase
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthDatabase = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowDatabase = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupDatabase = fmt.Errorf("proto: unexpected end of group")
)

View File

@ -1,32 +0,0 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
syntax = "proto3";
package main;
import "repos/protobuf/gogoproto/gogo.proto";
option (gogoproto.goproto_getters_all) = false;
option (gogoproto.goproto_unkeyed_all) = false;
option (gogoproto.goproto_unrecognized_all) = false;
option (gogoproto.goproto_sizecache_all) = false;
message DatabaseRecord {
repeated DatabaseAddress addresses = 1 [(gogoproto.nullable) = false];
int64 seen = 3; // Unix nanos, last device announce
}
message ReplicationRecord {
bytes key = 1; // raw 32 byte device ID
repeated DatabaseAddress addresses = 2 [(gogoproto.nullable) = false];
int64 seen = 3; // Unix nanos, last device announce
}
message DatabaseAddress {
string address = 1;
int64 expires = 2; // Unix nanos
}

View File

@ -12,6 +12,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/syncthing/syncthing/internal/gen/discosrv"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
) )
@ -39,7 +40,7 @@ func TestDatabaseGetSet(t *testing.T) {
// Put a record // Put a record
rec.Addresses = []DatabaseAddress{ rec.Addresses = []*discosrv.DatabaseAddress{
{Address: "tcp://1.2.3.4:5", Expires: tc.Now().Add(time.Minute).UnixNano()}, {Address: "tcp://1.2.3.4:5", Expires: tc.Now().Add(time.Minute).UnixNano()},
} }
if err := db.put(&protocol.EmptyDeviceID, rec); err != nil { if err := db.put(&protocol.EmptyDeviceID, rec); err != nil {
@ -65,7 +66,7 @@ func TestDatabaseGetSet(t *testing.T) {
tc.wind(30 * time.Second) tc.wind(30 * time.Second)
addrs := []DatabaseAddress{ addrs := []*discosrv.DatabaseAddress{
{Address: "tcp://6.7.8.9:0", Expires: tc.Now().Add(time.Minute).UnixNano()}, {Address: "tcp://6.7.8.9:0", Expires: tc.Now().Add(time.Minute).UnixNano()},
} }
if err := db.merge(&protocol.EmptyDeviceID, addrs, tc.Now().UnixNano()); err != nil { if err := db.merge(&protocol.EmptyDeviceID, addrs, tc.Now().UnixNano()); err != nil {
@ -112,7 +113,7 @@ func TestDatabaseGetSet(t *testing.T) {
// Set an address // Set an address
addrs = []DatabaseAddress{ addrs = []*discosrv.DatabaseAddress{
{Address: "tcp://6.7.8.9:0", Expires: tc.Now().Add(time.Minute).UnixNano()}, {Address: "tcp://6.7.8.9:0", Expires: tc.Now().Add(time.Minute).UnixNano()},
} }
if err := db.merge(&protocol.GlobalDeviceID, addrs, tc.Now().UnixNano()); err != nil { if err := db.merge(&protocol.GlobalDeviceID, addrs, tc.Now().UnixNano()); err != nil {
@ -134,28 +135,28 @@ func TestDatabaseGetSet(t *testing.T) {
func TestFilter(t *testing.T) { func TestFilter(t *testing.T) {
// all cases are expired with t=10 // all cases are expired with t=10
cases := []struct { cases := []struct {
a []DatabaseAddress a []*discosrv.DatabaseAddress
b []DatabaseAddress b []*discosrv.DatabaseAddress
}{ }{
{ {
a: nil, a: nil,
b: nil, b: nil,
}, },
{ {
a: []DatabaseAddress{{Address: "a", Expires: 9}, {Address: "b", Expires: 9}, {Address: "c", Expires: 9}}, a: []*discosrv.DatabaseAddress{{Address: "a", Expires: 9}, {Address: "b", Expires: 9}, {Address: "c", Expires: 9}},
b: []DatabaseAddress{}, b: []*discosrv.DatabaseAddress{},
}, },
{ {
a: []DatabaseAddress{{Address: "a", Expires: 10}}, a: []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
b: []DatabaseAddress{{Address: "a", Expires: 10}}, b: []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
}, },
{ {
a: []DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}}, a: []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
b: []DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}}, b: []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
}, },
{ {
a: []DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}, {Address: "c", Expires: 5}, {Address: "d", Expires: 15}, {Address: "e", Expires: 5}}, a: []*discosrv.DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}, {Address: "c", Expires: 5}, {Address: "d", Expires: 15}, {Address: "e", Expires: 5}},
b: []DatabaseAddress{{Address: "b", Expires: 15}, {Address: "d", Expires: 15}}, b: []*discosrv.DatabaseAddress{{Address: "b", Expires: 15}, {Address: "d", Expires: 15}},
}, },
} }
@ -169,62 +170,62 @@ func TestFilter(t *testing.T) {
func TestMerge(t *testing.T) { func TestMerge(t *testing.T) {
cases := []struct { cases := []struct {
a, b, res []DatabaseAddress a, b, res []*discosrv.DatabaseAddress
}{ }{
{nil, nil, nil}, {nil, nil, nil},
{ {
nil, nil,
[]DatabaseAddress{{Address: "a", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
[]DatabaseAddress{{Address: "a", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
}, },
{ {
nil, nil,
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 10}, {Address: "c", Expires: 10}},
}, },
{ {
[]DatabaseAddress{{Address: "a", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
[]DatabaseAddress{{Address: "a", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 15}},
}, },
{ {
[]DatabaseAddress{{Address: "a", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}},
[]DatabaseAddress{{Address: "b", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
}, },
{ {
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 15}},
}, },
{ {
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "b", Expires: 15}, {Address: "c", Expires: 20}}, []*discosrv.DatabaseAddress{{Address: "b", Expires: 15}, {Address: "c", Expires: 20}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}},
}, },
{ {
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "b", Expires: 5}, {Address: "c", Expires: 20}}, []*discosrv.DatabaseAddress{{Address: "b", Expires: 5}, {Address: "c", Expires: 20}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}},
}, },
{ {
[]DatabaseAddress{{Address: "y", Expires: 10}, {Address: "z", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "y", Expires: 10}, {Address: "z", Expires: 10}},
[]DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}},
[]DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}, {Address: "y", Expires: 10}, {Address: "z", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 5}, {Address: "b", Expires: 15}, {Address: "y", Expires: 10}, {Address: "z", Expires: 10}},
}, },
{ {
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "d", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "d", Expires: 10}},
[]DatabaseAddress{{Address: "b", Expires: 5}, {Address: "c", Expires: 20}}, []*discosrv.DatabaseAddress{{Address: "b", Expires: 5}, {Address: "c", Expires: 20}},
[]DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}, {Address: "d", Expires: 10}}, []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}, {Address: "c", Expires: 20}, {Address: "d", Expires: 10}},
}, },
} }
for _, tc := range cases { for _, tc := range cases {
rec := merge(DatabaseRecord{Addresses: tc.a}, DatabaseRecord{Addresses: tc.b}) rec := merge(&discosrv.DatabaseRecord{Addresses: tc.a}, &discosrv.DatabaseRecord{Addresses: tc.b})
if fmt.Sprint(rec.Addresses) != fmt.Sprint(tc.res) { if fmt.Sprint(rec.Addresses) != fmt.Sprint(tc.res) {
t.Errorf("Incorrect result %v, expected %v", rec.Addresses, tc.res) t.Errorf("Incorrect result %v, expected %v", rec.Addresses, tc.res)
} }
rec = merge(DatabaseRecord{Addresses: tc.b}, DatabaseRecord{Addresses: tc.a}) rec = merge(&discosrv.DatabaseRecord{Addresses: tc.b}, &discosrv.DatabaseRecord{Addresses: tc.a})
if fmt.Sprint(rec.Addresses) != fmt.Sprint(tc.res) { if fmt.Sprint(rec.Addresses) != fmt.Sprint(tc.res) {
t.Errorf("Incorrect result %v, expected %v", rec.Addresses, tc.res) t.Errorf("Incorrect result %v, expected %v", rec.Addresses, tc.res)
} }
@ -233,9 +234,9 @@ func TestMerge(t *testing.T) {
func BenchmarkMergeEqual(b *testing.B) { func BenchmarkMergeEqual(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
ar := []DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}} ar := []*discosrv.DatabaseAddress{{Address: "a", Expires: 10}, {Address: "b", Expires: 15}}
br := []DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 10}} br := []*discosrv.DatabaseAddress{{Address: "a", Expires: 15}, {Address: "b", Expires: 10}}
res := merge(DatabaseRecord{Addresses: ar}, DatabaseRecord{Addresses: br}) res := merge(&discosrv.DatabaseRecord{Addresses: ar}, &discosrv.DatabaseRecord{Addresses: br})
if len(res.Addresses) != 2 { if len(res.Addresses) != 2 {
b.Fatal("wrong length") b.Fatal("wrong length")
} }

View File

@ -11,22 +11,22 @@ import (
"crypto/tls" "crypto/tls"
"log" "log"
"net/http" "net/http"
_ "net/http/pprof"
"os" "os"
"os/signal" "os/signal"
"runtime" "runtime"
"time" "time"
_ "net/http/pprof"
"github.com/alecthomas/kong" "github.com/alecthomas/kong"
"github.com/prometheus/client_golang/prometheus/promhttp" "github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/thejerf/suture/v4"
_ "github.com/syncthing/syncthing/lib/automaxprocs" _ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build" "github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand" "github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/s3" "github.com/syncthing/syncthing/lib/s3"
"github.com/syncthing/syncthing/lib/tlsutil" "github.com/syncthing/syncthing/lib/tlsutil"
"github.com/thejerf/suture/v4"
) )
const ( const (

View File

@ -12,9 +12,8 @@ import (
"time" "time"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol" syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syncthing/syncthing/lib/relay/protocol" "github.com/syncthing/syncthing/lib/relay/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
) )
var ( var (

View File

@ -19,6 +19,8 @@ import (
"syscall" "syscall"
"time" "time"
"golang.org/x/time/rate"
_ "github.com/syncthing/syncthing/lib/automaxprocs" _ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build" "github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config" "github.com/syncthing/syncthing/lib/config"
@ -30,7 +32,6 @@ import (
"github.com/syncthing/syncthing/lib/relay/protocol" "github.com/syncthing/syncthing/lib/relay/protocol"
"github.com/syncthing/syncthing/lib/tlsutil" "github.com/syncthing/syncthing/lib/tlsutil"
_ "github.com/syncthing/syncthing/lib/upnp" _ "github.com/syncthing/syncthing/lib/upnp"
"golang.org/x/time/rate"
) )
var ( var (

View File

@ -11,6 +11,10 @@ import (
"fmt" "fmt"
"time" "time"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db" "github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
) )
@ -33,19 +37,19 @@ func indexDump() error {
name := nulString(key[1+4+4:]) name := nulString(key[1+4+4:])
fmt.Printf("[device] F:%d D:%d N:%q", folder, device, name) fmt.Printf("[device] F:%d D:%d N:%q", folder, device, name)
var f protocol.FileInfo var f bep.FileInfo
err := f.Unmarshal(it.Value()) err := proto.Unmarshal(it.Value(), &f)
if err != nil { if err != nil {
return err return err
} }
fmt.Printf(" V:%v\n", f) fmt.Printf(" V:%v\n", &f)
case db.KeyTypeGlobal: case db.KeyTypeGlobal:
folder := binary.BigEndian.Uint32(key[1:]) folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:]) name := nulString(key[1+4:])
var flv db.VersionList var flv dbproto.VersionList
flv.Unmarshal(it.Value()) proto.Unmarshal(it.Value(), &flv)
fmt.Printf("[global] F:%d N:%q V:%s\n", folder, name, flv) fmt.Printf("[global] F:%d N:%q V:%s\n", folder, name, &flv)
case db.KeyTypeBlock: case db.KeyTypeBlock:
folder := binary.BigEndian.Uint32(key[1:]) folder := binary.BigEndian.Uint32(key[1:])
@ -94,11 +98,11 @@ func indexDump() error {
case db.KeyTypeFolderMeta:
folder := binary.BigEndian.Uint32(key[1:])
fmt.Printf("[foldermeta] F:%d", folder)
-var cs db.CountsSet
-if err := cs.Unmarshal(it.Value()); err != nil {
+var cs dbproto.CountsSet
+if err := proto.Unmarshal(it.Value(), &cs); err != nil {
fmt.Printf(" (invalid)\n")
} else {
-fmt.Printf(" V:%v\n", cs)
+fmt.Printf(" V:%v\n", &cs)
}
case db.KeyTypeMiscData:
@ -125,20 +129,20 @@ func indexDump() error {
case db.KeyTypeVersion:
fmt.Printf("[version] H:%x", key[1:])
-var v protocol.Vector
-err := v.Unmarshal(it.Value())
+var v bep.Vector
+err := proto.Unmarshal(it.Value(), &v)
if err != nil {
fmt.Printf(" (invalid)\n")
} else {
-fmt.Printf(" V:%v\n", v)
+fmt.Printf(" V:%v\n", &v)
}
case db.KeyTypePendingFolder:
device := binary.BigEndian.Uint32(key[1:])
folder := string(key[5:])
-var of db.ObservedFolder
-of.Unmarshal(it.Value())
-fmt.Printf("[pendingFolder] D:%d F:%s V:%v\n", device, folder, of)
+var of dbproto.ObservedFolder
+proto.Unmarshal(it.Value(), &of)
+fmt.Printf("[pendingFolder] D:%d F:%s V:%v\n", device, folder, &of)
case db.KeyTypePendingDevice:
device := "<invalid>"
@ -146,9 +150,9 @@ func indexDump() error {
if err == nil {
device = dev.String()
}
-var od db.ObservedDevice
-od.Unmarshal(it.Value())
-fmt.Printf("[pendingDevice] D:%v V:%v\n", device, od)
+var od dbproto.ObservedDevice
+proto.Unmarshal(it.Value(), &od)
+fmt.Printf("[pendingDevice] D:%v V:%v\n", device, &od)
default:
fmt.Printf("[??? %d]\n %x\n %x\n", key[0], key, it.Value())


@ -13,6 +13,10 @@ import (
"fmt" "fmt"
"sort" "sort"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db" "github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
) )
@ -42,12 +46,12 @@ func indexCheck() (err error) {
folders := make(map[uint32]string)
devices := make(map[uint32]string)
deviceToIDs := make(map[string]uint32)
-fileInfos := make(map[fileInfoKey]protocol.FileInfo)
-globals := make(map[globalKey]db.VersionList)
+fileInfos := make(map[fileInfoKey]*bep.FileInfo)
+globals := make(map[globalKey]*dbproto.VersionList)
sequences := make(map[sequenceKey]string)
needs := make(map[globalKey]struct{})
blocklists := make(map[string]struct{})
-versions := make(map[string]protocol.Vector)
+versions := make(map[string]*bep.Vector)
usedBlocklists := make(map[string]struct{})
usedVersions := make(map[string]struct{})
var localDeviceKey uint32
@ -74,26 +78,26 @@ func indexCheck() (err error) {
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
-var f protocol.FileInfo
-err := f.Unmarshal(it.Value())
+var f bep.FileInfo
+err := proto.Unmarshal(it.Value(), &f)
if err != nil {
fmt.Println("Unable to unmarshal FileInfo:", err)
success = false
continue
}
-fileInfos[fileInfoKey{folder, device, name}] = f
+fileInfos[fileInfoKey{folder, device, name}] = &f
case db.KeyTypeGlobal:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
-var flv db.VersionList
-if err := flv.Unmarshal(it.Value()); err != nil {
+var flv dbproto.VersionList
+if err := proto.Unmarshal(it.Value(), &flv); err != nil {
fmt.Println("Unable to unmarshal VersionList:", err)
success = false
continue
}
-globals[globalKey{folder, name}] = flv
+globals[globalKey{folder, name}] = &flv
case db.KeyTypeFolderIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
@ -124,13 +128,13 @@ func indexCheck() (err error) {
case db.KeyTypeVersion:
hash := string(key[1:])
-var v protocol.Vector
-if err := v.Unmarshal(it.Value()); err != nil {
+var v bep.Vector
+if err := proto.Unmarshal(it.Value(), &v); err != nil {
fmt.Println("Unable to unmarshal Vector:", err)
success = false
continue
}
-versions[hash] = v
+versions[hash] = &v
}
}
@ -248,25 +252,27 @@ func indexCheck() (err error) {
if fi.VersionHash != nil {
fiv = versions[string(fi.VersionHash)]
}
-if !fiv.Equal(version) {
+if !protocol.VectorFromWire(fiv).Equal(version) {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo version mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, version, fi.Version)
success = false
}
-if fi.IsInvalid() != invalid {
-fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo invalid mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, invalid, fi.IsInvalid())
+ffi := protocol.FileInfoFromDB(fi)
+if ffi.IsInvalid() != invalid {
+fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo invalid mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, invalid, ffi.IsInvalid())
success = false
}
-if fi.IsDeleted() != deleted {
-fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo deleted mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, deleted, fi.IsDeleted())
+if ffi.IsDeleted() != deleted {
+fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo deleted mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, deleted, ffi.IsDeleted())
success = false
}
}
-for i, fv := range vl.RawVersions {
+for i, fv := range vl.Versions {
+ver := protocol.VectorFromWire(fv.Version)
for _, device := range fv.Devices {
-checkGlobal(i, device, fv.Version, false, fv.Deleted)
+checkGlobal(i, device, ver, false, fv.Deleted)
}
for _, device := range fv.InvalidDevices {
-checkGlobal(i, device, fv.Version, true, fv.Deleted)
+checkGlobal(i, device, ver, true, fv.Deleted)
}
}
@ -276,10 +282,10 @@ func indexCheck() (err error) {
if needsLocally(vl) {
_, ok := needs[gk]
if !ok {
-fv, _ := vl.GetGlobal()
-devB, _ := fv.FirstDevice()
+fv, _ := vlGetGlobal(vl)
+devB, _ := fvFirstDevice(fv)
dev := deviceToIDs[string(devB)]
-fi := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
+fi := protocol.FileInfoFromDB(fileInfos[fileInfoKey{gk.folder, dev, gk.name}])
if !fi.IsDeleted() && !fi.IsIgnored() {
fmt.Printf("Missing need entry for needed file %q, folder %q\n", gk.name, folder)
}
@ -345,11 +351,84 @@ func indexCheck() (err error) {
return nil
}
-func needsLocally(vl db.VersionList) bool {
-gfv, gok := vl.GetGlobal()
+func needsLocally(vl *dbproto.VersionList) bool {
+gfv, gok := vlGetGlobal(vl)
if !gok { // That's weird, but we hardly need something non-existent
return false
}
-fv, ok := vl.Get(protocol.LocalDeviceID[:])
-return db.Need(gfv, ok, fv.Version)
+fv, ok := vlGet(vl, protocol.LocalDeviceID[:])
+return db.Need(gfv, ok, protocol.VectorFromWire(fv.Version))
}
// Get returns a FileVersion that contains the given device and whether it has
// been found at all.
func vlGet(vl *dbproto.VersionList, device []byte) (*dbproto.FileVersion, bool) {
_, i, _, ok := vlFindDevice(vl, device)
if !ok {
return &dbproto.FileVersion{}, false
}
return vl.Versions[i], true
}
// GetGlobal returns the current global FileVersion. The returned FileVersion
// may be invalid, if all FileVersions are invalid. Returns false only if
// VersionList is empty.
func vlGetGlobal(vl *dbproto.VersionList) (*dbproto.FileVersion, bool) {
i := vlFindGlobal(vl)
if i == -1 {
return nil, false
}
return vl.Versions[i], true
}
// findGlobal returns the first version that isn't invalid, or if all versions are
// invalid just the first version (i.e. 0) or -1, if there's no versions at all.
func vlFindGlobal(vl *dbproto.VersionList) int {
for i := range vl.Versions {
if !fvIsInvalid(vl.Versions[i]) {
return i
}
}
if len(vl.Versions) == 0 {
return -1
}
return 0
}
// findDevice returns whether the device is in InvalidVersions or Versions and
// in InvalidDevices or Devices (true for invalid), the positions in the version
// and device slices and whether it has been found at all.
func vlFindDevice(vl *dbproto.VersionList, device []byte) (bool, int, int, bool) {
for i, v := range vl.Versions {
if j := deviceIndex(v.Devices, device); j != -1 {
return false, i, j, true
}
if j := deviceIndex(v.InvalidDevices, device); j != -1 {
return true, i, j, true
}
}
return false, -1, -1, false
}
func deviceIndex(devices [][]byte, device []byte) int {
for i, dev := range devices {
if bytes.Equal(device, dev) {
return i
}
}
return -1
}
func fvFirstDevice(fv *dbproto.FileVersion) ([]byte, bool) {
if len(fv.Devices) != 0 {
return fv.Devices[0], true
}
if len(fv.InvalidDevices) != 0 {
return fv.InvalidDevices[0], true
}
return nil, false
}
func fvIsInvalid(fv *dbproto.FileVersion) bool {
return fv == nil || len(fv.Devices) == 0
}
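
To illustrate how these package-level helpers stand in for the old `VersionList` methods, here is a hedged usage sketch that composes the functions above; the `globalDevice` name is illustrative and not part of the commit:

```
// Sketch: combine the helpers above to find the first device announcing the
// global version of a decoded dbproto.VersionList, as the check code does.
func globalDevice(vl *dbproto.VersionList) ([]byte, bool) {
	fv, ok := vlGetGlobal(vl)
	if !ok {
		return nil, false
	}
	return fvFirstDevice(fv)
}
```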


@ -17,6 +17,9 @@ import (
"os" "os"
"path/filepath" "path/filepath"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/lib/config" "github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/fs" "github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/osutil" "github.com/syncthing/syncthing/lib/osutil"
@ -280,10 +283,11 @@ func loadEncryptedFileInfo(fd fs.File) (*protocol.FileInfo, error) {
return nil, err
}
-var encFi protocol.FileInfo
-if err := encFi.Unmarshal(trailer); err != nil {
+var encFi bep.FileInfo
+if err := proto.Unmarshal(trailer, &encFi); err != nil {
return nil, err
}
-return &encFi, nil
+fi := protocol.FileInfoFromWire(&encFi)
+return &fi, nil
} }


@ -30,7 +30,6 @@ import (
"time" "time"
"github.com/alecthomas/kong" "github.com/alecthomas/kong"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/thejerf/suture/v4" "github.com/thejerf/suture/v4"
"github.com/willabides/kongplete" "github.com/willabides/kongplete"
@ -38,6 +37,7 @@ import (
"github.com/syncthing/syncthing/cmd/syncthing/cmdutil" "github.com/syncthing/syncthing/cmd/syncthing/cmdutil"
"github.com/syncthing/syncthing/cmd/syncthing/decrypt" "github.com/syncthing/syncthing/cmd/syncthing/decrypt"
"github.com/syncthing/syncthing/cmd/syncthing/generate" "github.com/syncthing/syncthing/cmd/syncthing/generate"
_ "github.com/syncthing/syncthing/lib/automaxprocs"
"github.com/syncthing/syncthing/lib/build" "github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config" "github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db" "github.com/syncthing/syncthing/lib/db"

go.mod

@ -14,7 +14,6 @@ require (
github.com/getsentry/raven-go v0.2.0
github.com/go-ldap/ldap/v3 v3.4.8
github.com/gobwas/glob v0.2.3
-github.com/gogo/protobuf v1.3.2
github.com/greatroar/blobloom v0.8.0
github.com/hashicorp/golang-lru/v2 v2.0.7
github.com/jackpal/gateway v1.0.15

go.sum

@ -66,8 +66,6 @@ github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1v
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E=
github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0=
-github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
-github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
@ -135,8 +133,6 @@ github.com/julienschmidt/httprouter v1.3.0 h1:U0609e9tgbseu3rBINet9P48AI/D3oJs4d
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=
-github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
-github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
@ -252,7 +248,6 @@ github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0 h1:okhMind4q9H1OxF44gN
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0/go.mod h1:TTbGUfE+cXXceWtbTHq6lqcTvYPBKLNejBEbnUsQJtU=
github.com/willabides/kongplete v0.4.0 h1:eivXxkp5ud5+4+NVN9e4goxC5mSh3n1RHov+gsblM2g=
github.com/willabides/kongplete v0.4.0/go.mod h1:0P0jtWD9aTsqPSUAl4de35DLghrr57XcayPyvqSi2X8=
-github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
@ -274,7 +269,6 @@ golang.org/x/crypto v0.29.0 h1:L5SG1JTTXupVV3n6sUqMTeWbjAyfPwoda2DLX8J8FrQ=
golang.org/x/crypto v0.29.0/go.mod h1:+F4F4N5hv6v38hfeYwTdx20oUvLLc+QfrE9Ax9HtgRg=
golang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c h1:7dEasQXItcW1xKJ2+gg5VOiBnqWrJc+rq0DPKyvvdbY=
golang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c/go.mod h1:NQtJDoLvd6faHhE7m4T/1IY708gDefGGjR/iUW8yQQ8=
-golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
@ -284,7 +278,6 @@ golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
-golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
@ -301,7 +294,6 @@ golang.org/x/net v0.31.0 h1:68CPQngjLL0r2AlUKiSxtQFKvzRVbnzLwMUn5SzcLHo=
golang.org/x/net v0.31.0/go.mod h1:P4fl1q7dY2hnZFxEk4pPSkDHF+QqjitcnDjUQyMM+pM=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -357,9 +349,7 @@ golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
-golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
-golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.27.0 h1:qEKojBykQkQ4EynWy4S8Weg69NumxKdn40Fce3uc/8o=


@ -55,7 +55,6 @@ Jakob Borg, Audrius Butkevicius, Jesse Lucas, Simon Frei, Tomasz Wilczyński, Al
<li><a href="https://github.com/chmduquesne/rollinghash">chmduquesne/rollinghash</a>, Copyright &copy; 2015 Christophe-Marie Duquesne.</li> <li><a href="https://github.com/chmduquesne/rollinghash">chmduquesne/rollinghash</a>, Copyright &copy; 2015 Christophe-Marie Duquesne.</li>
<li><a href="https://github.com/d4l3k/messagediff">d4l3k/messagediff</a>, Copyright &copy; 2015 Tristan Rice.</li> <li><a href="https://github.com/d4l3k/messagediff">d4l3k/messagediff</a>, Copyright &copy; 2015 Tristan Rice.</li>
<li><a href="https://github.com/gobwas/glob">gobwas/glob</a>, Copyright &copy; 2016 Sergey Kamardin.</li> <li><a href="https://github.com/gobwas/glob">gobwas/glob</a>, Copyright &copy; 2016 Sergey Kamardin.</li>
<li><a href="https://github.com/gogo/protobuf">gogo/protobuf</a>, Copyright &copy; 2013 The GoGo Authors.</li>
<li><a href="https://github.com/golang/groupcache">golang/groupcache</a>, Copyright &copy; 2013 Google Inc.</li> <li><a href="https://github.com/golang/groupcache">golang/groupcache</a>, Copyright &copy; 2013 Google Inc.</li>
<li><a href="https://github.com/golang/protobuf">golang/protobuf</a>, Copyright &copy; 2010 The Go Authors.</li> <li><a href="https://github.com/golang/protobuf">golang/protobuf</a>, Copyright &copy; 2010 The Go Authors.</li>
<li><a href="https://github.com/golang/snappy">golang/snappy</a>, Copyright &copy; 2011 The Snappy-Go Authors.</li> <li><a href="https://github.com/golang/snappy">golang/snappy</a>, Copyright &copy; 2011 The Snappy-Go Authors.</li>


@ -0,0 +1,143 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.35.1
// protoc (unknown)
// source: apiproto/tokenset.proto
package apiproto
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type TokenSet struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// token -> expiry time (epoch nanoseconds)
Tokens map[string]int64 `protobuf:"bytes,1,rep,name=tokens,proto3" json:"tokens,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"varint,2,opt,name=value,proto3"`
}
func (x *TokenSet) Reset() {
*x = TokenSet{}
mi := &file_apiproto_tokenset_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *TokenSet) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*TokenSet) ProtoMessage() {}
func (x *TokenSet) ProtoReflect() protoreflect.Message {
mi := &file_apiproto_tokenset_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use TokenSet.ProtoReflect.Descriptor instead.
func (*TokenSet) Descriptor() ([]byte, []int) {
return file_apiproto_tokenset_proto_rawDescGZIP(), []int{0}
}
func (x *TokenSet) GetTokens() map[string]int64 {
if x != nil {
return x.Tokens
}
return nil
}
var File_apiproto_tokenset_proto protoreflect.FileDescriptor
var file_apiproto_tokenset_proto_rawDesc = []byte{
0x0a, 0x17, 0x61, 0x70, 0x69, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x74, 0x6f, 0x6b, 0x65, 0x6e,
0x73, 0x65, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x08, 0x61, 0x70, 0x69, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x22, 0x7d, 0x0a, 0x08, 0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x53, 0x65, 0x74, 0x12,
0x36, 0x0a, 0x06, 0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32,
0x1e, 0x2e, 0x61, 0x70, 0x69, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x54, 0x6f, 0x6b, 0x65, 0x6e,
0x53, 0x65, 0x74, 0x2e, 0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52,
0x06, 0x74, 0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x1a, 0x39, 0x0a, 0x0b, 0x54, 0x6f, 0x6b, 0x65, 0x6e,
0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20,
0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75,
0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x03, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02,
0x38, 0x01, 0x42, 0x93, 0x01, 0x0a, 0x0c, 0x63, 0x6f, 0x6d, 0x2e, 0x61, 0x70, 0x69, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x42, 0x0d, 0x54, 0x6f, 0x6b, 0x65, 0x6e, 0x73, 0x65, 0x74, 0x50, 0x72, 0x6f,
0x74, 0x6f, 0x50, 0x01, 0x5a, 0x34, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
0x2f, 0x73, 0x79, 0x6e, 0x63, 0x74, 0x68, 0x69, 0x6e, 0x67, 0x2f, 0x73, 0x79, 0x6e, 0x63, 0x74,
0x68, 0x69, 0x6e, 0x67, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x67, 0x65,
0x6e, 0x2f, 0x61, 0x70, 0x69, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0xa2, 0x02, 0x03, 0x41, 0x58, 0x58,
0xaa, 0x02, 0x08, 0x41, 0x70, 0x69, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0xca, 0x02, 0x08, 0x41, 0x70,
0x69, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0xe2, 0x02, 0x14, 0x41, 0x70, 0x69, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x5c, 0x47, 0x50, 0x42, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0xea, 0x02, 0x08,
0x41, 0x70, 0x69, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_apiproto_tokenset_proto_rawDescOnce sync.Once
file_apiproto_tokenset_proto_rawDescData = file_apiproto_tokenset_proto_rawDesc
)
func file_apiproto_tokenset_proto_rawDescGZIP() []byte {
file_apiproto_tokenset_proto_rawDescOnce.Do(func() {
file_apiproto_tokenset_proto_rawDescData = protoimpl.X.CompressGZIP(file_apiproto_tokenset_proto_rawDescData)
})
return file_apiproto_tokenset_proto_rawDescData
}
var file_apiproto_tokenset_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_apiproto_tokenset_proto_goTypes = []any{
(*TokenSet)(nil), // 0: apiproto.TokenSet
nil, // 1: apiproto.TokenSet.TokensEntry
}
var file_apiproto_tokenset_proto_depIdxs = []int32{
1, // 0: apiproto.TokenSet.tokens:type_name -> apiproto.TokenSet.TokensEntry
1, // [1:1] is the sub-list for method output_type
1, // [1:1] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_apiproto_tokenset_proto_init() }
func file_apiproto_tokenset_proto_init() {
if File_apiproto_tokenset_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_apiproto_tokenset_proto_rawDesc,
NumEnums: 0,
NumMessages: 2,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_apiproto_tokenset_proto_goTypes,
DependencyIndexes: file_apiproto_tokenset_proto_depIdxs,
MessageInfos: file_apiproto_tokenset_proto_msgTypes,
}.Build()
File_apiproto_tokenset_proto = out.File
file_apiproto_tokenset_proto_rawDesc = nil
file_apiproto_tokenset_proto_goTypes = nil
file_apiproto_tokenset_proto_depIdxs = nil
}
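
For reference, the generated `TokenSet` above carries no marshalling methods of its own and is meant to be used through the `google.golang.org/protobuf/proto` package. A hedged round-trip sketch; the token value and the `roundTrip` helper are illustrative, not from the commit:

```
// Sketch: round-trip an apiproto.TokenSet with the google.golang.org/protobuf API.
package example

import (
	"log"

	"google.golang.org/protobuf/proto"

	"github.com/syncthing/syncthing/internal/gen/apiproto"
)

func roundTrip() {
	// Illustrative token and expiry (epoch nanoseconds).
	in := &apiproto.TokenSet{Tokens: map[string]int64{"sometoken": 1700000000000000000}}
	bs, err := proto.Marshal(in)
	if err != nil {
		log.Fatal(err)
	}
	var out apiproto.TokenSet
	if err := proto.Unmarshal(bs, &out); err != nil {
		log.Fatal(err)
	}
	log.Println(out.GetTokens())
}
```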

internal/gen/bep/bep.pb.go

File diff suppressed because it is too large


@ -0,0 +1,938 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.35.1
// protoc (unknown)
// source: dbproto/structs.proto
package dbproto
import (
bep "github.com/syncthing/syncthing/internal/gen/bep"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// Same as bep.FileInfo, but without blocks
type FileInfoTruncated struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Size int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"`
ModifiedS int64 `protobuf:"varint,5,opt,name=modified_s,json=modifiedS,proto3" json:"modified_s,omitempty"`
ModifiedBy uint64 `protobuf:"varint,12,opt,name=modified_by,json=modifiedBy,proto3" json:"modified_by,omitempty"`
Version *bep.Vector `protobuf:"bytes,9,opt,name=version,proto3" json:"version,omitempty"`
Sequence int64 `protobuf:"varint,10,opt,name=sequence,proto3" json:"sequence,omitempty"`
SymlinkTarget string `protobuf:"bytes,17,opt,name=symlink_target,json=symlinkTarget,proto3" json:"symlink_target,omitempty"`
BlocksHash []byte `protobuf:"bytes,18,opt,name=blocks_hash,json=blocksHash,proto3" json:"blocks_hash,omitempty"`
Encrypted []byte `protobuf:"bytes,19,opt,name=encrypted,proto3" json:"encrypted,omitempty"`
Type bep.FileInfoType `protobuf:"varint,2,opt,name=type,proto3,enum=bep.FileInfoType" json:"type,omitempty"`
Permissions uint32 `protobuf:"varint,4,opt,name=permissions,proto3" json:"permissions,omitempty"`
ModifiedNs int32 `protobuf:"varint,11,opt,name=modified_ns,json=modifiedNs,proto3" json:"modified_ns,omitempty"`
BlockSize int32 `protobuf:"varint,13,opt,name=block_size,json=blockSize,proto3" json:"block_size,omitempty"`
Platform *bep.PlatformData `protobuf:"bytes,14,opt,name=platform,proto3" json:"platform,omitempty"`
// The local_flags fields stores flags that are relevant to the local
// host only. It is not part of the protocol, doesn't get sent or
// received (we make sure to zero it), nonetheless we need it on our
// struct and to be able to serialize it to/from the database.
LocalFlags uint32 `protobuf:"varint,1000,opt,name=local_flags,json=localFlags,proto3" json:"local_flags,omitempty"`
// The version_hash is an implementation detail and not part of the wire
// format.
VersionHash []byte `protobuf:"bytes,1001,opt,name=version_hash,json=versionHash,proto3" json:"version_hash,omitempty"`
// The time when the inode was last changed (i.e., permissions, xattrs
// etc changed). This is host-local, not sent over the wire.
InodeChangeNs int64 `protobuf:"varint,1002,opt,name=inode_change_ns,json=inodeChangeNs,proto3" json:"inode_change_ns,omitempty"`
// The size of the data appended to the encrypted file on disk. This is
// host-local, not sent over the wire.
EncryptionTrailerSize int32 `protobuf:"varint,1003,opt,name=encryption_trailer_size,json=encryptionTrailerSize,proto3" json:"encryption_trailer_size,omitempty"`
Deleted bool `protobuf:"varint,6,opt,name=deleted,proto3" json:"deleted,omitempty"`
Invalid bool `protobuf:"varint,7,opt,name=invalid,proto3" json:"invalid,omitempty"`
NoPermissions bool `protobuf:"varint,8,opt,name=no_permissions,json=noPermissions,proto3" json:"no_permissions,omitempty"`
}
func (x *FileInfoTruncated) Reset() {
*x = FileInfoTruncated{}
mi := &file_dbproto_structs_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *FileInfoTruncated) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*FileInfoTruncated) ProtoMessage() {}
func (x *FileInfoTruncated) ProtoReflect() protoreflect.Message {
mi := &file_dbproto_structs_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use FileInfoTruncated.ProtoReflect.Descriptor instead.
func (*FileInfoTruncated) Descriptor() ([]byte, []int) {
return file_dbproto_structs_proto_rawDescGZIP(), []int{0}
}
func (x *FileInfoTruncated) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *FileInfoTruncated) GetSize() int64 {
if x != nil {
return x.Size
}
return 0
}
func (x *FileInfoTruncated) GetModifiedS() int64 {
if x != nil {
return x.ModifiedS
}
return 0
}
func (x *FileInfoTruncated) GetModifiedBy() uint64 {
if x != nil {
return x.ModifiedBy
}
return 0
}
func (x *FileInfoTruncated) GetVersion() *bep.Vector {
if x != nil {
return x.Version
}
return nil
}
func (x *FileInfoTruncated) GetSequence() int64 {
if x != nil {
return x.Sequence
}
return 0
}
func (x *FileInfoTruncated) GetSymlinkTarget() string {
if x != nil {
return x.SymlinkTarget
}
return ""
}
func (x *FileInfoTruncated) GetBlocksHash() []byte {
if x != nil {
return x.BlocksHash
}
return nil
}
func (x *FileInfoTruncated) GetEncrypted() []byte {
if x != nil {
return x.Encrypted
}
return nil
}
func (x *FileInfoTruncated) GetType() bep.FileInfoType {
if x != nil {
return x.Type
}
return bep.FileInfoType(0)
}
func (x *FileInfoTruncated) GetPermissions() uint32 {
if x != nil {
return x.Permissions
}
return 0
}
func (x *FileInfoTruncated) GetModifiedNs() int32 {
if x != nil {
return x.ModifiedNs
}
return 0
}
func (x *FileInfoTruncated) GetBlockSize() int32 {
if x != nil {
return x.BlockSize
}
return 0
}
func (x *FileInfoTruncated) GetPlatform() *bep.PlatformData {
if x != nil {
return x.Platform
}
return nil
}
func (x *FileInfoTruncated) GetLocalFlags() uint32 {
if x != nil {
return x.LocalFlags
}
return 0
}
func (x *FileInfoTruncated) GetVersionHash() []byte {
if x != nil {
return x.VersionHash
}
return nil
}
func (x *FileInfoTruncated) GetInodeChangeNs() int64 {
if x != nil {
return x.InodeChangeNs
}
return 0
}
func (x *FileInfoTruncated) GetEncryptionTrailerSize() int32 {
if x != nil {
return x.EncryptionTrailerSize
}
return 0
}
func (x *FileInfoTruncated) GetDeleted() bool {
if x != nil {
return x.Deleted
}
return false
}
func (x *FileInfoTruncated) GetInvalid() bool {
if x != nil {
return x.Invalid
}
return false
}
func (x *FileInfoTruncated) GetNoPermissions() bool {
if x != nil {
return x.NoPermissions
}
return false
}
type FileVersion struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Version *bep.Vector `protobuf:"bytes,1,opt,name=version,proto3" json:"version,omitempty"`
Deleted bool `protobuf:"varint,2,opt,name=deleted,proto3" json:"deleted,omitempty"`
Devices [][]byte `protobuf:"bytes,3,rep,name=devices,proto3" json:"devices,omitempty"`
InvalidDevices [][]byte `protobuf:"bytes,4,rep,name=invalid_devices,json=invalidDevices,proto3" json:"invalid_devices,omitempty"`
}
func (x *FileVersion) Reset() {
*x = FileVersion{}
mi := &file_dbproto_structs_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *FileVersion) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*FileVersion) ProtoMessage() {}
func (x *FileVersion) ProtoReflect() protoreflect.Message {
mi := &file_dbproto_structs_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use FileVersion.ProtoReflect.Descriptor instead.
func (*FileVersion) Descriptor() ([]byte, []int) {
return file_dbproto_structs_proto_rawDescGZIP(), []int{1}
}
func (x *FileVersion) GetVersion() *bep.Vector {
if x != nil {
return x.Version
}
return nil
}
func (x *FileVersion) GetDeleted() bool {
if x != nil {
return x.Deleted
}
return false
}
func (x *FileVersion) GetDevices() [][]byte {
if x != nil {
return x.Devices
}
return nil
}
func (x *FileVersion) GetInvalidDevices() [][]byte {
if x != nil {
return x.InvalidDevices
}
return nil
}
type VersionList struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Versions []*FileVersion `protobuf:"bytes,1,rep,name=versions,proto3" json:"versions,omitempty"`
}
func (x *VersionList) Reset() {
*x = VersionList{}
mi := &file_dbproto_structs_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *VersionList) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*VersionList) ProtoMessage() {}
func (x *VersionList) ProtoReflect() protoreflect.Message {
mi := &file_dbproto_structs_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use VersionList.ProtoReflect.Descriptor instead.
func (*VersionList) Descriptor() ([]byte, []int) {
return file_dbproto_structs_proto_rawDescGZIP(), []int{2}
}
func (x *VersionList) GetVersions() []*FileVersion {
if x != nil {
return x.Versions
}
return nil
}
// BlockList is the structure used to store block lists
type BlockList struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Blocks []*bep.BlockInfo `protobuf:"bytes,1,rep,name=blocks,proto3" json:"blocks,omitempty"`
}
func (x *BlockList) Reset() {
*x = BlockList{}
mi := &file_dbproto_structs_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *BlockList) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*BlockList) ProtoMessage() {}
func (x *BlockList) ProtoReflect() protoreflect.Message {
mi := &file_dbproto_structs_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use BlockList.ProtoReflect.Descriptor instead.
func (*BlockList) Descriptor() ([]byte, []int) {
return file_dbproto_structs_proto_rawDescGZIP(), []int{3}
}
func (x *BlockList) GetBlocks() []*bep.BlockInfo {
if x != nil {
return x.Blocks
}
return nil
}
// IndirectionHashesOnly is used to only unmarshal the indirection hashes
// from a FileInfo
type IndirectionHashesOnly struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
BlocksHash []byte `protobuf:"bytes,18,opt,name=blocks_hash,json=blocksHash,proto3" json:"blocks_hash,omitempty"`
VersionHash []byte `protobuf:"bytes,1001,opt,name=version_hash,json=versionHash,proto3" json:"version_hash,omitempty"`
}
func (x *IndirectionHashesOnly) Reset() {
*x = IndirectionHashesOnly{}
mi := &file_dbproto_structs_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *IndirectionHashesOnly) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*IndirectionHashesOnly) ProtoMessage() {}
func (x *IndirectionHashesOnly) ProtoReflect() protoreflect.Message {
mi := &file_dbproto_structs_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use IndirectionHashesOnly.ProtoReflect.Descriptor instead.
func (*IndirectionHashesOnly) Descriptor() ([]byte, []int) {
return file_dbproto_structs_proto_rawDescGZIP(), []int{4}
}
func (x *IndirectionHashesOnly) GetBlocksHash() []byte {
if x != nil {
return x.BlocksHash
}
return nil
}
func (x *IndirectionHashesOnly) GetVersionHash() []byte {
if x != nil {
return x.VersionHash
}
return nil
}
// For each folder and device we keep one of these to track the current
// counts and sequence. We also keep one for the global state of the folder.
type Counts struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Files int32 `protobuf:"varint,1,opt,name=files,proto3" json:"files,omitempty"`
Directories int32 `protobuf:"varint,2,opt,name=directories,proto3" json:"directories,omitempty"`
Symlinks int32 `protobuf:"varint,3,opt,name=symlinks,proto3" json:"symlinks,omitempty"`
Deleted int32 `protobuf:"varint,4,opt,name=deleted,proto3" json:"deleted,omitempty"`
Bytes int64 `protobuf:"varint,5,opt,name=bytes,proto3" json:"bytes,omitempty"`
Sequence int64 `protobuf:"varint,6,opt,name=sequence,proto3" json:"sequence,omitempty"` // zero for the global state
DeviceId []byte `protobuf:"bytes,17,opt,name=device_id,json=deviceId,proto3" json:"device_id,omitempty"` // device ID for remote devices, or special values for local/global
LocalFlags uint32 `protobuf:"varint,18,opt,name=local_flags,json=localFlags,proto3" json:"local_flags,omitempty"` // the local flag for this count bucket
}
func (x *Counts) Reset() {
*x = Counts{}
mi := &file_dbproto_structs_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Counts) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Counts) ProtoMessage() {}
func (x *Counts) ProtoReflect() protoreflect.Message {
mi := &file_dbproto_structs_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Counts.ProtoReflect.Descriptor instead.
func (*Counts) Descriptor() ([]byte, []int) {
return file_dbproto_structs_proto_rawDescGZIP(), []int{5}
}
func (x *Counts) GetFiles() int32 {
if x != nil {
return x.Files
}
return 0
}
func (x *Counts) GetDirectories() int32 {
if x != nil {
return x.Directories
}
return 0
}
func (x *Counts) GetSymlinks() int32 {
if x != nil {
return x.Symlinks
}
return 0
}
func (x *Counts) GetDeleted() int32 {
if x != nil {
return x.Deleted
}
return 0
}
func (x *Counts) GetBytes() int64 {
if x != nil {
return x.Bytes
}
return 0
}
func (x *Counts) GetSequence() int64 {
if x != nil {
return x.Sequence
}
return 0
}
func (x *Counts) GetDeviceId() []byte {
if x != nil {
return x.DeviceId
}
return nil
}
func (x *Counts) GetLocalFlags() uint32 {
if x != nil {
return x.LocalFlags
}
return 0
}
type CountsSet struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Counts []*Counts `protobuf:"bytes,1,rep,name=counts,proto3" json:"counts,omitempty"`
Created int64 `protobuf:"varint,2,opt,name=created,proto3" json:"created,omitempty"` // unix nanos
}
func (x *CountsSet) Reset() {
*x = CountsSet{}
mi := &file_dbproto_structs_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CountsSet) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CountsSet) ProtoMessage() {}
func (x *CountsSet) ProtoReflect() protoreflect.Message {
mi := &file_dbproto_structs_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CountsSet.ProtoReflect.Descriptor instead.
func (*CountsSet) Descriptor() ([]byte, []int) {
return file_dbproto_structs_proto_rawDescGZIP(), []int{6}
}
func (x *CountsSet) GetCounts() []*Counts {
if x != nil {
return x.Counts
}
return nil
}
func (x *CountsSet) GetCreated() int64 {
if x != nil {
return x.Created
}
return 0
}
type ObservedFolder struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Time *timestamppb.Timestamp `protobuf:"bytes,1,opt,name=time,proto3" json:"time,omitempty"`
Label string `protobuf:"bytes,2,opt,name=label,proto3" json:"label,omitempty"`
ReceiveEncrypted bool `protobuf:"varint,3,opt,name=receive_encrypted,json=receiveEncrypted,proto3" json:"receive_encrypted,omitempty"`
RemoteEncrypted bool `protobuf:"varint,4,opt,name=remote_encrypted,json=remoteEncrypted,proto3" json:"remote_encrypted,omitempty"`
}
func (x *ObservedFolder) Reset() {
*x = ObservedFolder{}
mi := &file_dbproto_structs_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ObservedFolder) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ObservedFolder) ProtoMessage() {}
func (x *ObservedFolder) ProtoReflect() protoreflect.Message {
mi := &file_dbproto_structs_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ObservedFolder.ProtoReflect.Descriptor instead.
func (*ObservedFolder) Descriptor() ([]byte, []int) {
return file_dbproto_structs_proto_rawDescGZIP(), []int{7}
}
func (x *ObservedFolder) GetTime() *timestamppb.Timestamp {
if x != nil {
return x.Time
}
return nil
}
func (x *ObservedFolder) GetLabel() string {
if x != nil {
return x.Label
}
return ""
}
func (x *ObservedFolder) GetReceiveEncrypted() bool {
if x != nil {
return x.ReceiveEncrypted
}
return false
}
func (x *ObservedFolder) GetRemoteEncrypted() bool {
if x != nil {
return x.RemoteEncrypted
}
return false
}
type ObservedDevice struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Time *timestamppb.Timestamp `protobuf:"bytes,1,opt,name=time,proto3" json:"time,omitempty"`
Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
Address string `protobuf:"bytes,3,opt,name=address,proto3" json:"address,omitempty"`
}
func (x *ObservedDevice) Reset() {
*x = ObservedDevice{}
mi := &file_dbproto_structs_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ObservedDevice) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ObservedDevice) ProtoMessage() {}
func (x *ObservedDevice) ProtoReflect() protoreflect.Message {
mi := &file_dbproto_structs_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ObservedDevice.ProtoReflect.Descriptor instead.
func (*ObservedDevice) Descriptor() ([]byte, []int) {
return file_dbproto_structs_proto_rawDescGZIP(), []int{8}
}
func (x *ObservedDevice) GetTime() *timestamppb.Timestamp {
if x != nil {
return x.Time
}
return nil
}
func (x *ObservedDevice) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *ObservedDevice) GetAddress() string {
if x != nil {
return x.Address
}
return ""
}
var File_dbproto_structs_proto protoreflect.FileDescriptor
var file_dbproto_structs_proto_rawDesc = []byte{
0x0a, 0x15, 0x64, 0x62, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x73, 0x74, 0x72, 0x75, 0x63, 0x74,
0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x07, 0x64, 0x62, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x1a, 0x0d, 0x62, 0x65, 0x70, 0x2f, 0x62, 0x65, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a,
0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66,
0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x22, 0xe5, 0x05, 0x0a, 0x11, 0x46, 0x69, 0x6c, 0x65, 0x49, 0x6e, 0x66, 0x6f, 0x54, 0x72, 0x75,
0x6e, 0x63, 0x61, 0x74, 0x65, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01,
0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x69,
0x7a, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x52, 0x04, 0x73, 0x69, 0x7a, 0x65, 0x12, 0x1d,
0x0a, 0x0a, 0x6d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x65, 0x64, 0x5f, 0x73, 0x18, 0x05, 0x20, 0x01,
0x28, 0x03, 0x52, 0x09, 0x6d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x65, 0x64, 0x53, 0x12, 0x1f, 0x0a,
0x0b, 0x6d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x65, 0x64, 0x5f, 0x62, 0x79, 0x18, 0x0c, 0x20, 0x01,
0x28, 0x04, 0x52, 0x0a, 0x6d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x65, 0x64, 0x42, 0x79, 0x12, 0x25,
0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x0b, 0x2e, 0x62, 0x65, 0x70, 0x2e, 0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x52, 0x07, 0x76, 0x65,
0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x1a, 0x0a, 0x08, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x63,
0x65, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x63,
0x65, 0x12, 0x25, 0x0a, 0x0e, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x5f, 0x74, 0x61, 0x72,
0x67, 0x65, 0x74, 0x18, 0x11, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x73, 0x79, 0x6d, 0x6c, 0x69,
0x6e, 0x6b, 0x54, 0x61, 0x72, 0x67, 0x65, 0x74, 0x12, 0x1f, 0x0a, 0x0b, 0x62, 0x6c, 0x6f, 0x63,
0x6b, 0x73, 0x5f, 0x68, 0x61, 0x73, 0x68, 0x18, 0x12, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0a, 0x62,
0x6c, 0x6f, 0x63, 0x6b, 0x73, 0x48, 0x61, 0x73, 0x68, 0x12, 0x1c, 0x0a, 0x09, 0x65, 0x6e, 0x63,
0x72, 0x79, 0x70, 0x74, 0x65, 0x64, 0x18, 0x13, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x65, 0x6e,
0x63, 0x72, 0x79, 0x70, 0x74, 0x65, 0x64, 0x12, 0x25, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18,
0x02, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x11, 0x2e, 0x62, 0x65, 0x70, 0x2e, 0x46, 0x69, 0x6c, 0x65,
0x49, 0x6e, 0x66, 0x6f, 0x54, 0x79, 0x70, 0x65, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x20,
0x0a, 0x0b, 0x70, 0x65, 0x72, 0x6d, 0x69, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x04, 0x20,
0x01, 0x28, 0x0d, 0x52, 0x0b, 0x70, 0x65, 0x72, 0x6d, 0x69, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x73,
0x12, 0x1f, 0x0a, 0x0b, 0x6d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x65, 0x64, 0x5f, 0x6e, 0x73, 0x18,
0x0b, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0a, 0x6d, 0x6f, 0x64, 0x69, 0x66, 0x69, 0x65, 0x64, 0x4e,
0x73, 0x12, 0x1d, 0x0a, 0x0a, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x5f, 0x73, 0x69, 0x7a, 0x65, 0x18,
0x0d, 0x20, 0x01, 0x28, 0x05, 0x52, 0x09, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x53, 0x69, 0x7a, 0x65,
0x12, 0x2d, 0x0a, 0x08, 0x70, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d, 0x18, 0x0e, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x11, 0x2e, 0x62, 0x65, 0x70, 0x2e, 0x50, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72,
0x6d, 0x44, 0x61, 0x74, 0x61, 0x52, 0x08, 0x70, 0x6c, 0x61, 0x74, 0x66, 0x6f, 0x72, 0x6d, 0x12,
0x20, 0x0a, 0x0b, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x5f, 0x66, 0x6c, 0x61, 0x67, 0x73, 0x18, 0xe8,
0x07, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a, 0x6c, 0x6f, 0x63, 0x61, 0x6c, 0x46, 0x6c, 0x61, 0x67,
0x73, 0x12, 0x22, 0x0a, 0x0c, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x68, 0x61, 0x73,
0x68, 0x18, 0xe9, 0x07, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0b, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f,
0x6e, 0x48, 0x61, 0x73, 0x68, 0x12, 0x27, 0x0a, 0x0f, 0x69, 0x6e, 0x6f, 0x64, 0x65, 0x5f, 0x63,
0x68, 0x61, 0x6e, 0x67, 0x65, 0x5f, 0x6e, 0x73, 0x18, 0xea, 0x07, 0x20, 0x01, 0x28, 0x03, 0x52,
0x0d, 0x69, 0x6e, 0x6f, 0x64, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x4e, 0x73, 0x12, 0x37,
0x0a, 0x17, 0x65, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x74, 0x72, 0x61,
0x69, 0x6c, 0x65, 0x72, 0x5f, 0x73, 0x69, 0x7a, 0x65, 0x18, 0xeb, 0x07, 0x20, 0x01, 0x28, 0x05,
0x52, 0x15, 0x65, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x54, 0x72, 0x61, 0x69,
0x6c, 0x65, 0x72, 0x53, 0x69, 0x7a, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x64, 0x65, 0x6c, 0x65, 0x74,
0x65, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28, 0x08, 0x52, 0x07, 0x64, 0x65, 0x6c, 0x65, 0x74, 0x65,
0x64, 0x12, 0x18, 0x0a, 0x07, 0x69, 0x6e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x18, 0x07, 0x20, 0x01,
0x28, 0x08, 0x52, 0x07, 0x69, 0x6e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x12, 0x25, 0x0a, 0x0e, 0x6e,
0x6f, 0x5f, 0x70, 0x65, 0x72, 0x6d, 0x69, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x08, 0x20,
0x01, 0x28, 0x08, 0x52, 0x0d, 0x6e, 0x6f, 0x50, 0x65, 0x72, 0x6d, 0x69, 0x73, 0x73, 0x69, 0x6f,
0x6e, 0x73, 0x4a, 0x04, 0x08, 0x10, 0x10, 0x11, 0x22, 0x91, 0x01, 0x0a, 0x0b, 0x46, 0x69, 0x6c,
0x65, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x25, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73,
0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0b, 0x2e, 0x62, 0x65, 0x70, 0x2e,
0x56, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12,
0x18, 0x0a, 0x07, 0x64, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08,
0x52, 0x07, 0x64, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x12, 0x18, 0x0a, 0x07, 0x64, 0x65, 0x76,
0x69, 0x63, 0x65, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0c, 0x52, 0x07, 0x64, 0x65, 0x76, 0x69,
0x63, 0x65, 0x73, 0x12, 0x27, 0x0a, 0x0f, 0x69, 0x6e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x5f, 0x64,
0x65, 0x76, 0x69, 0x63, 0x65, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0c, 0x52, 0x0e, 0x69, 0x6e,
0x76, 0x61, 0x6c, 0x69, 0x64, 0x44, 0x65, 0x76, 0x69, 0x63, 0x65, 0x73, 0x22, 0x3f, 0x0a, 0x0b,
0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x4c, 0x69, 0x73, 0x74, 0x12, 0x30, 0x0a, 0x08, 0x76,
0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x14, 0x2e,
0x64, 0x62, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2e, 0x46, 0x69, 0x6c, 0x65, 0x56, 0x65, 0x72, 0x73,
0x69, 0x6f, 0x6e, 0x52, 0x08, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0x33, 0x0a,
0x09, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x4c, 0x69, 0x73, 0x74, 0x12, 0x26, 0x0a, 0x06, 0x62, 0x6c,
0x6f, 0x63, 0x6b, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x62, 0x65, 0x70,
0x2e, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x06, 0x62, 0x6c, 0x6f, 0x63,
0x6b, 0x73, 0x22, 0x5c, 0x0a, 0x15, 0x49, 0x6e, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x69, 0x6f,
0x6e, 0x48, 0x61, 0x73, 0x68, 0x65, 0x73, 0x4f, 0x6e, 0x6c, 0x79, 0x12, 0x1f, 0x0a, 0x0b, 0x62,
0x6c, 0x6f, 0x63, 0x6b, 0x73, 0x5f, 0x68, 0x61, 0x73, 0x68, 0x18, 0x12, 0x20, 0x01, 0x28, 0x0c,
0x52, 0x0a, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x73, 0x48, 0x61, 0x73, 0x68, 0x12, 0x22, 0x0a, 0x0c,
0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x68, 0x61, 0x73, 0x68, 0x18, 0xe9, 0x07, 0x20,
0x01, 0x28, 0x0c, 0x52, 0x0b, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x48, 0x61, 0x73, 0x68,
0x22, 0xe6, 0x01, 0x0a, 0x06, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x66,
0x69, 0x6c, 0x65, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x05, 0x52, 0x05, 0x66, 0x69, 0x6c, 0x65,
0x73, 0x12, 0x20, 0x0a, 0x0b, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x69, 0x65, 0x73,
0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0b, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6f, 0x72,
0x69, 0x65, 0x73, 0x12, 0x1a, 0x0a, 0x08, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x73, 0x18,
0x03, 0x20, 0x01, 0x28, 0x05, 0x52, 0x08, 0x73, 0x79, 0x6d, 0x6c, 0x69, 0x6e, 0x6b, 0x73, 0x12,
0x18, 0x0a, 0x07, 0x64, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x18, 0x04, 0x20, 0x01, 0x28, 0x05,
0x52, 0x07, 0x64, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x62, 0x79, 0x74,
0x65, 0x73, 0x18, 0x05, 0x20, 0x01, 0x28, 0x03, 0x52, 0x05, 0x62, 0x79, 0x74, 0x65, 0x73, 0x12,
0x1a, 0x0a, 0x08, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x63, 0x65, 0x18, 0x06, 0x20, 0x01, 0x28,
0x03, 0x52, 0x08, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x63, 0x65, 0x12, 0x1b, 0x0a, 0x09, 0x64,
0x65, 0x76, 0x69, 0x63, 0x65, 0x5f, 0x69, 0x64, 0x18, 0x11, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x08,
0x64, 0x65, 0x76, 0x69, 0x63, 0x65, 0x49, 0x64, 0x12, 0x1f, 0x0a, 0x0b, 0x6c, 0x6f, 0x63, 0x61,
0x6c, 0x5f, 0x66, 0x6c, 0x61, 0x67, 0x73, 0x18, 0x12, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x0a, 0x6c,
0x6f, 0x63, 0x61, 0x6c, 0x46, 0x6c, 0x61, 0x67, 0x73, 0x22, 0x4e, 0x0a, 0x09, 0x43, 0x6f, 0x75,
0x6e, 0x74, 0x73, 0x53, 0x65, 0x74, 0x12, 0x27, 0x0a, 0x06, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73,
0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0f, 0x2e, 0x64, 0x62, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x2e, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x52, 0x06, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x12,
0x18, 0x0a, 0x07, 0x63, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x03,
0x52, 0x07, 0x63, 0x72, 0x65, 0x61, 0x74, 0x65, 0x64, 0x22, 0xae, 0x01, 0x0a, 0x0e, 0x4f, 0x62,
0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x46, 0x6f, 0x6c, 0x64, 0x65, 0x72, 0x12, 0x2e, 0x0a, 0x04,
0x74, 0x69, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f,
0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x04, 0x74, 0x69, 0x6d, 0x65, 0x12, 0x14, 0x0a, 0x05,
0x6c, 0x61, 0x62, 0x65, 0x6c, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x6c, 0x61, 0x62,
0x65, 0x6c, 0x12, 0x2b, 0x0a, 0x11, 0x72, 0x65, 0x63, 0x65, 0x69, 0x76, 0x65, 0x5f, 0x65, 0x6e,
0x63, 0x72, 0x79, 0x70, 0x74, 0x65, 0x64, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x52, 0x10, 0x72,
0x65, 0x63, 0x65, 0x69, 0x76, 0x65, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x65, 0x64, 0x12,
0x29, 0x0a, 0x10, 0x72, 0x65, 0x6d, 0x6f, 0x74, 0x65, 0x5f, 0x65, 0x6e, 0x63, 0x72, 0x79, 0x70,
0x74, 0x65, 0x64, 0x18, 0x04, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0f, 0x72, 0x65, 0x6d, 0x6f, 0x74,
0x65, 0x45, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x65, 0x64, 0x22, 0x6e, 0x0a, 0x0e, 0x4f, 0x62,
0x73, 0x65, 0x72, 0x76, 0x65, 0x64, 0x44, 0x65, 0x76, 0x69, 0x63, 0x65, 0x12, 0x2e, 0x0a, 0x04,
0x74, 0x69, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f,
0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x04, 0x74, 0x69, 0x6d, 0x65, 0x12, 0x12, 0x0a, 0x04,
0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65,
0x12, 0x18, 0x0a, 0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28,
0x09, 0x52, 0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x42, 0x8c, 0x01, 0x0a, 0x0b, 0x63,
0x6f, 0x6d, 0x2e, 0x64, 0x62, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x42, 0x0c, 0x53, 0x74, 0x72, 0x75,
0x63, 0x74, 0x73, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x33, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x73, 0x79, 0x6e, 0x63, 0x74, 0x68, 0x69, 0x6e, 0x67,
0x2f, 0x73, 0x79, 0x6e, 0x63, 0x74, 0x68, 0x69, 0x6e, 0x67, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72,
0x6e, 0x61, 0x6c, 0x2f, 0x67, 0x65, 0x6e, 0x2f, 0x64, 0x62, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0xa2,
0x02, 0x03, 0x44, 0x58, 0x58, 0xaa, 0x02, 0x07, 0x44, 0x62, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0xca,
0x02, 0x07, 0x44, 0x62, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0xe2, 0x02, 0x13, 0x44, 0x62, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x5c, 0x47, 0x50, 0x42, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0xea,
0x02, 0x07, 0x44, 0x62, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x33,
}
var (
file_dbproto_structs_proto_rawDescOnce sync.Once
file_dbproto_structs_proto_rawDescData = file_dbproto_structs_proto_rawDesc
)
func file_dbproto_structs_proto_rawDescGZIP() []byte {
file_dbproto_structs_proto_rawDescOnce.Do(func() {
file_dbproto_structs_proto_rawDescData = protoimpl.X.CompressGZIP(file_dbproto_structs_proto_rawDescData)
})
return file_dbproto_structs_proto_rawDescData
}
var file_dbproto_structs_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_dbproto_structs_proto_goTypes = []any{
(*FileInfoTruncated)(nil), // 0: dbproto.FileInfoTruncated
(*FileVersion)(nil), // 1: dbproto.FileVersion
(*VersionList)(nil), // 2: dbproto.VersionList
(*BlockList)(nil), // 3: dbproto.BlockList
(*IndirectionHashesOnly)(nil), // 4: dbproto.IndirectionHashesOnly
(*Counts)(nil), // 5: dbproto.Counts
(*CountsSet)(nil), // 6: dbproto.CountsSet
(*ObservedFolder)(nil), // 7: dbproto.ObservedFolder
(*ObservedDevice)(nil), // 8: dbproto.ObservedDevice
(*bep.Vector)(nil), // 9: bep.Vector
(bep.FileInfoType)(0), // 10: bep.FileInfoType
(*bep.PlatformData)(nil), // 11: bep.PlatformData
(*bep.BlockInfo)(nil), // 12: bep.BlockInfo
(*timestamppb.Timestamp)(nil), // 13: google.protobuf.Timestamp
}
var file_dbproto_structs_proto_depIdxs = []int32{
9, // 0: dbproto.FileInfoTruncated.version:type_name -> bep.Vector
10, // 1: dbproto.FileInfoTruncated.type:type_name -> bep.FileInfoType
11, // 2: dbproto.FileInfoTruncated.platform:type_name -> bep.PlatformData
9, // 3: dbproto.FileVersion.version:type_name -> bep.Vector
1, // 4: dbproto.VersionList.versions:type_name -> dbproto.FileVersion
12, // 5: dbproto.BlockList.blocks:type_name -> bep.BlockInfo
5, // 6: dbproto.CountsSet.counts:type_name -> dbproto.Counts
13, // 7: dbproto.ObservedFolder.time:type_name -> google.protobuf.Timestamp
13, // 8: dbproto.ObservedDevice.time:type_name -> google.protobuf.Timestamp
9, // [9:9] is the sub-list for method output_type
9, // [9:9] is the sub-list for method input_type
9, // [9:9] is the sub-list for extension type_name
9, // [9:9] is the sub-list for extension extendee
0, // [0:9] is the sub-list for field type_name
}
func init() { file_dbproto_structs_proto_init() }
func file_dbproto_structs_proto_init() {
if File_dbproto_structs_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_dbproto_structs_proto_rawDesc,
NumEnums: 0,
NumMessages: 9,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_dbproto_structs_proto_goTypes,
DependencyIndexes: file_dbproto_structs_proto_depIdxs,
MessageInfos: file_dbproto_structs_proto_msgTypes,
}.Build()
File_dbproto_structs_proto = out.File
file_dbproto_structs_proto_rawDesc = nil
file_dbproto_structs_proto_goTypes = nil
file_dbproto_structs_proto_depIdxs = nil
}
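
For orientation, here is a minimal, illustrative sketch of how the regenerated dbproto types are used with the modern runtime: serialization goes through `proto.Marshal`/`proto.Unmarshal` rather than methods on the message, and the generated getters remain. The field names (Time, Label, ReceiveEncrypted) are read off the descriptor above; the package-main wrapper is an assumption for the sake of a runnable example.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/timestamppb"

	"github.com/syncthing/syncthing/internal/gen/dbproto"
)

func main() {
	// Build a database-side record and round-trip it with the standard API.
	of := &dbproto.ObservedFolder{
		Time:  timestamppb.Now(),
		Label: "Photos",
	}
	bs, err := proto.Marshal(of)
	if err != nil {
		panic(err)
	}

	var back dbproto.ObservedFolder
	if err := proto.Unmarshal(bs, &back); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d bytes\n", back.GetLabel(), len(bs))
}
```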

View File

@ -0,0 +1,155 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.35.1
// protoc (unknown)
// source: discoproto/local.proto
package discoproto
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Announce struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Id []byte `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
Addresses []string `protobuf:"bytes,2,rep,name=addresses,proto3" json:"addresses,omitempty"`
InstanceId int64 `protobuf:"varint,3,opt,name=instance_id,json=instanceId,proto3" json:"instance_id,omitempty"`
}
func (x *Announce) Reset() {
*x = Announce{}
mi := &file_discoproto_local_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Announce) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Announce) ProtoMessage() {}
func (x *Announce) ProtoReflect() protoreflect.Message {
mi := &file_discoproto_local_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Announce.ProtoReflect.Descriptor instead.
func (*Announce) Descriptor() ([]byte, []int) {
return file_discoproto_local_proto_rawDescGZIP(), []int{0}
}
func (x *Announce) GetId() []byte {
if x != nil {
return x.Id
}
return nil
}
func (x *Announce) GetAddresses() []string {
if x != nil {
return x.Addresses
}
return nil
}
func (x *Announce) GetInstanceId() int64 {
if x != nil {
return x.InstanceId
}
return 0
}
var File_discoproto_local_proto protoreflect.FileDescriptor
var file_discoproto_local_proto_rawDesc = []byte{
0x0a, 0x16, 0x64, 0x69, 0x73, 0x63, 0x6f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x6c, 0x6f, 0x63,
0x61, 0x6c, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x0a, 0x64, 0x69, 0x73, 0x63, 0x6f, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x22, 0x59, 0x0a, 0x08, 0x41, 0x6e, 0x6e, 0x6f, 0x75, 0x6e, 0x63, 0x65,
0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x02, 0x69, 0x64,
0x12, 0x1c, 0x0a, 0x09, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x65, 0x73, 0x18, 0x02, 0x20,
0x03, 0x28, 0x09, 0x52, 0x09, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x65, 0x73, 0x12, 0x1f,
0x0a, 0x0b, 0x69, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x5f, 0x69, 0x64, 0x18, 0x03, 0x20,
0x01, 0x28, 0x03, 0x52, 0x0a, 0x69, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x49, 0x64, 0x42,
0x9c, 0x01, 0x0a, 0x0e, 0x63, 0x6f, 0x6d, 0x2e, 0x64, 0x69, 0x73, 0x63, 0x6f, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x42, 0x0a, 0x4c, 0x6f, 0x63, 0x61, 0x6c, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01,
0x5a, 0x36, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x73, 0x79, 0x6e,
0x63, 0x74, 0x68, 0x69, 0x6e, 0x67, 0x2f, 0x73, 0x79, 0x6e, 0x63, 0x74, 0x68, 0x69, 0x6e, 0x67,
0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x67, 0x65, 0x6e, 0x2f, 0x64, 0x69,
0x73, 0x63, 0x6f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0xa2, 0x02, 0x03, 0x44, 0x58, 0x58, 0xaa, 0x02,
0x0a, 0x44, 0x69, 0x73, 0x63, 0x6f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0xca, 0x02, 0x0a, 0x44, 0x69,
0x73, 0x63, 0x6f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0xe2, 0x02, 0x16, 0x44, 0x69, 0x73, 0x63, 0x6f,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x5c, 0x47, 0x50, 0x42, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74,
0x61, 0xea, 0x02, 0x0a, 0x44, 0x69, 0x73, 0x63, 0x6f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_discoproto_local_proto_rawDescOnce sync.Once
file_discoproto_local_proto_rawDescData = file_discoproto_local_proto_rawDesc
)
func file_discoproto_local_proto_rawDescGZIP() []byte {
file_discoproto_local_proto_rawDescOnce.Do(func() {
file_discoproto_local_proto_rawDescData = protoimpl.X.CompressGZIP(file_discoproto_local_proto_rawDescData)
})
return file_discoproto_local_proto_rawDescData
}
var file_discoproto_local_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_discoproto_local_proto_goTypes = []any{
(*Announce)(nil), // 0: discoproto.Announce
}
var file_discoproto_local_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
0, // [0:0] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_discoproto_local_proto_init() }
func file_discoproto_local_proto_init() {
if File_discoproto_local_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_discoproto_local_proto_rawDesc,
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_discoproto_local_proto_goTypes,
DependencyIndexes: file_discoproto_local_proto_depIdxs,
MessageInfos: file_discoproto_local_proto_msgTypes,
}.Build()
File_discoproto_local_proto = out.File
file_discoproto_local_proto_rawDesc = nil
file_discoproto_local_proto_goTypes = nil
file_discoproto_local_proto_depIdxs = nil
}
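
Same idea for the local discovery message; the Announce fields (Id, Addresses, InstanceId) are exactly those declared above. This sketch covers only the protobuf payload, not whatever framing the discovery code wraps around it.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"

	"github.com/syncthing/syncthing/internal/gen/discoproto"
)

func main() {
	ann := &discoproto.Announce{
		Id:         make([]byte, 32), // a real device ID goes here
		Addresses:  []string{"tcp://192.0.2.1:22000"},
		InstanceId: 1,
	}
	bs, err := proto.Marshal(ann)
	if err != nil {
		panic(err)
	}
	fmt.Printf("announce payload: %d bytes\n", len(bs))
}
```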

View File

@ -0,0 +1,276 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.35.1
// protoc (unknown)
// source: discosrv/discosrv.proto
package discosrv
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type DatabaseRecord struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Addresses []*DatabaseAddress `protobuf:"bytes,1,rep,name=addresses,proto3" json:"addresses,omitempty"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"` // Unix nanos, last device announce
}
func (x *DatabaseRecord) Reset() {
*x = DatabaseRecord{}
mi := &file_discosrv_discosrv_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DatabaseRecord) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DatabaseRecord) ProtoMessage() {}
func (x *DatabaseRecord) ProtoReflect() protoreflect.Message {
mi := &file_discosrv_discosrv_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DatabaseRecord.ProtoReflect.Descriptor instead.
func (*DatabaseRecord) Descriptor() ([]byte, []int) {
return file_discosrv_discosrv_proto_rawDescGZIP(), []int{0}
}
func (x *DatabaseRecord) GetAddresses() []*DatabaseAddress {
if x != nil {
return x.Addresses
}
return nil
}
func (x *DatabaseRecord) GetSeen() int64 {
if x != nil {
return x.Seen
}
return 0
}
type ReplicationRecord struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Key []byte `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` // raw 32 byte device ID
Addresses []*DatabaseAddress `protobuf:"bytes,2,rep,name=addresses,proto3" json:"addresses,omitempty"`
Seen int64 `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"` // Unix nanos, last device announce
}
func (x *ReplicationRecord) Reset() {
*x = ReplicationRecord{}
mi := &file_discosrv_discosrv_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *ReplicationRecord) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ReplicationRecord) ProtoMessage() {}
func (x *ReplicationRecord) ProtoReflect() protoreflect.Message {
mi := &file_discosrv_discosrv_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ReplicationRecord.ProtoReflect.Descriptor instead.
func (*ReplicationRecord) Descriptor() ([]byte, []int) {
return file_discosrv_discosrv_proto_rawDescGZIP(), []int{1}
}
func (x *ReplicationRecord) GetKey() []byte {
if x != nil {
return x.Key
}
return nil
}
func (x *ReplicationRecord) GetAddresses() []*DatabaseAddress {
if x != nil {
return x.Addresses
}
return nil
}
func (x *ReplicationRecord) GetSeen() int64 {
if x != nil {
return x.Seen
}
return 0
}
type DatabaseAddress struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"`
Expires int64 `protobuf:"varint,2,opt,name=expires,proto3" json:"expires,omitempty"` // Unix nanos
}
func (x *DatabaseAddress) Reset() {
*x = DatabaseAddress{}
mi := &file_discosrv_discosrv_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DatabaseAddress) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DatabaseAddress) ProtoMessage() {}
func (x *DatabaseAddress) ProtoReflect() protoreflect.Message {
mi := &file_discosrv_discosrv_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DatabaseAddress.ProtoReflect.Descriptor instead.
func (*DatabaseAddress) Descriptor() ([]byte, []int) {
return file_discosrv_discosrv_proto_rawDescGZIP(), []int{2}
}
func (x *DatabaseAddress) GetAddress() string {
if x != nil {
return x.Address
}
return ""
}
func (x *DatabaseAddress) GetExpires() int64 {
if x != nil {
return x.Expires
}
return 0
}
var File_discosrv_discosrv_proto protoreflect.FileDescriptor
var file_discosrv_discosrv_proto_rawDesc = []byte{
0x0a, 0x17, 0x64, 0x69, 0x73, 0x63, 0x6f, 0x73, 0x72, 0x76, 0x2f, 0x64, 0x69, 0x73, 0x63, 0x6f,
0x73, 0x72, 0x76, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x08, 0x64, 0x69, 0x73, 0x63, 0x6f,
0x73, 0x72, 0x76, 0x22, 0x5d, 0x0a, 0x0e, 0x44, 0x61, 0x74, 0x61, 0x62, 0x61, 0x73, 0x65, 0x52,
0x65, 0x63, 0x6f, 0x72, 0x64, 0x12, 0x37, 0x0a, 0x09, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73,
0x65, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x64, 0x69, 0x73, 0x63, 0x6f,
0x73, 0x72, 0x76, 0x2e, 0x44, 0x61, 0x74, 0x61, 0x62, 0x61, 0x73, 0x65, 0x41, 0x64, 0x64, 0x72,
0x65, 0x73, 0x73, 0x52, 0x09, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x65, 0x73, 0x12, 0x12,
0x0a, 0x04, 0x73, 0x65, 0x65, 0x6e, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x52, 0x04, 0x73, 0x65,
0x65, 0x6e, 0x22, 0x72, 0x0a, 0x11, 0x52, 0x65, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f,
0x6e, 0x52, 0x65, 0x63, 0x6f, 0x72, 0x64, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01,
0x20, 0x01, 0x28, 0x0c, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x37, 0x0a, 0x09, 0x61, 0x64, 0x64,
0x72, 0x65, 0x73, 0x73, 0x65, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x64,
0x69, 0x73, 0x63, 0x6f, 0x73, 0x72, 0x76, 0x2e, 0x44, 0x61, 0x74, 0x61, 0x62, 0x61, 0x73, 0x65,
0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x52, 0x09, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73,
0x65, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x65, 0x65, 0x6e, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03,
0x52, 0x04, 0x73, 0x65, 0x65, 0x6e, 0x22, 0x45, 0x0a, 0x0f, 0x44, 0x61, 0x74, 0x61, 0x62, 0x61,
0x73, 0x65, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x18, 0x0a, 0x07, 0x61, 0x64, 0x64,
0x72, 0x65, 0x73, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x61, 0x64, 0x64, 0x72,
0x65, 0x73, 0x73, 0x12, 0x18, 0x0a, 0x07, 0x65, 0x78, 0x70, 0x69, 0x72, 0x65, 0x73, 0x18, 0x02,
0x20, 0x01, 0x28, 0x03, 0x52, 0x07, 0x65, 0x78, 0x70, 0x69, 0x72, 0x65, 0x73, 0x42, 0x93, 0x01,
0x0a, 0x0c, 0x63, 0x6f, 0x6d, 0x2e, 0x64, 0x69, 0x73, 0x63, 0x6f, 0x73, 0x72, 0x76, 0x42, 0x0d,
0x44, 0x69, 0x73, 0x63, 0x6f, 0x73, 0x72, 0x76, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a,
0x34, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x73, 0x79, 0x6e, 0x63,
0x74, 0x68, 0x69, 0x6e, 0x67, 0x2f, 0x73, 0x79, 0x6e, 0x63, 0x74, 0x68, 0x69, 0x6e, 0x67, 0x2f,
0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x67, 0x65, 0x6e, 0x2f, 0x64, 0x69, 0x73,
0x63, 0x6f, 0x73, 0x72, 0x76, 0xa2, 0x02, 0x03, 0x44, 0x58, 0x58, 0xaa, 0x02, 0x08, 0x44, 0x69,
0x73, 0x63, 0x6f, 0x73, 0x72, 0x76, 0xca, 0x02, 0x08, 0x44, 0x69, 0x73, 0x63, 0x6f, 0x73, 0x72,
0x76, 0xe2, 0x02, 0x14, 0x44, 0x69, 0x73, 0x63, 0x6f, 0x73, 0x72, 0x76, 0x5c, 0x47, 0x50, 0x42,
0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0xea, 0x02, 0x08, 0x44, 0x69, 0x73, 0x63, 0x6f,
0x73, 0x72, 0x76, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_discosrv_discosrv_proto_rawDescOnce sync.Once
file_discosrv_discosrv_proto_rawDescData = file_discosrv_discosrv_proto_rawDesc
)
func file_discosrv_discosrv_proto_rawDescGZIP() []byte {
file_discosrv_discosrv_proto_rawDescOnce.Do(func() {
file_discosrv_discosrv_proto_rawDescData = protoimpl.X.CompressGZIP(file_discosrv_discosrv_proto_rawDescData)
})
return file_discosrv_discosrv_proto_rawDescData
}
var file_discosrv_discosrv_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
var file_discosrv_discosrv_proto_goTypes = []any{
(*DatabaseRecord)(nil), // 0: discosrv.DatabaseRecord
(*ReplicationRecord)(nil), // 1: discosrv.ReplicationRecord
(*DatabaseAddress)(nil), // 2: discosrv.DatabaseAddress
}
var file_discosrv_discosrv_proto_depIdxs = []int32{
2, // 0: discosrv.DatabaseRecord.addresses:type_name -> discosrv.DatabaseAddress
2, // 1: discosrv.ReplicationRecord.addresses:type_name -> discosrv.DatabaseAddress
2, // [2:2] is the sub-list for method output_type
2, // [2:2] is the sub-list for method input_type
2, // [2:2] is the sub-list for extension type_name
2, // [2:2] is the sub-list for extension extendee
0, // [0:2] is the sub-list for field type_name
}
func init() { file_discosrv_discosrv_proto_init() }
func file_discosrv_discosrv_proto_init() {
if File_discosrv_discosrv_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_discosrv_discosrv_proto_rawDesc,
NumEnums: 0,
NumMessages: 3,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_discosrv_discosrv_proto_goTypes,
DependencyIndexes: file_discosrv_discosrv_proto_depIdxs,
MessageInfos: file_discosrv_discosrv_proto_msgTypes,
}.Build()
File_discosrv_discosrv_proto = out.File
file_discosrv_discosrv_proto_rawDesc = nil
file_discosrv_discosrv_proto_goTypes = nil
file_discosrv_discosrv_proto_depIdxs = nil
}
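
And a corresponding illustrative sketch for the discovery server records. The DatabaseRecord and DatabaseAddress fields, including the Unix-nanos convention noted in their comments, are taken from the declarations above; the usage itself is hypothetical.

```go
package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/proto"

	"github.com/syncthing/syncthing/internal/gen/discosrv"
)

func main() {
	rec := &discosrv.DatabaseRecord{
		Addresses: []*discosrv.DatabaseAddress{{
			Address: "tcp://203.0.113.7:22000",
			Expires: time.Now().Add(time.Hour).UnixNano(), // Unix nanos, per the field comment
		}},
		Seen: time.Now().UnixNano(),
	}

	bs, err := proto.Marshal(rec)
	if err != nil {
		panic(err)
	}

	var back discosrv.DatabaseRecord
	if err := proto.Unmarshal(bs, &back); err != nil {
		panic(err)
	}
	fmt.Println(len(back.GetAddresses()), "address(es),", len(bs), "bytes on the wire")
}
```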

View File

@ -0,0 +1,24 @@
package protoutil
import (
"fmt"
"google.golang.org/protobuf/proto"
)
func MarshalTo(buf []byte, pb proto.Message) (int, error) {
if sz := proto.Size(pb); len(buf) < sz {
return 0, fmt.Errorf("buffer too small")
} else if sz == 0 {
return 0, nil
}
opts := proto.MarshalOptions{}
bs, err := opts.MarshalAppend(buf[:0], pb)
if err != nil {
return 0, err
}
if &buf[0] != &bs[0] {
panic("can't happen: slice was reallocated")
}
return len(bs), nil
}
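
A usage sketch for the new MarshalTo helper: the caller pre-sizes the buffer via `proto.Size` and the helper marshals into it without reallocating (the panic above guards exactly that invariant). Only the function signature is taken from the file; the `internal/protoutil` import path is an assumption here.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"

	"github.com/syncthing/syncthing/internal/gen/discoproto"
	"github.com/syncthing/syncthing/internal/protoutil" // assumed import path
)

func main() {
	msg := &discoproto.Announce{InstanceId: 42}

	// The caller owns and pre-sizes the buffer; MarshalTo never grows it.
	buf := make([]byte, proto.Size(msg))
	n, err := protoutil.MarshalTo(buf, msg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d of %d bytes\n", n, len(buf))
}
```

Presumably this lets call sites that previously used gogo's MarshalTo keep their existing buffer management unchanged.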

View File

@ -1748,10 +1748,10 @@ func (*service) getSystemBrowse(w http.ResponseWriter, r *http.Request) {
current := qs.Get("current")
// Default value or in case of error unmarshalling ends up being basic fs.
-var fsType fs.FilesystemType
+var fsType config.FilesystemType
fsType.UnmarshalText([]byte(qs.Get("filesystem")))
-sendJSON(w, browse(fsType, current))
+sendJSON(w, browse(fsType.ToFS(), current))
}
func browse(fsType fs.FilesystemType, current string) []string {
@ -1870,10 +1870,10 @@ func (*service) getHeapProf(w http.ResponseWriter, _ *http.Request) {
pprof.WriteHeapProfile(w)
}
-func toJsonFileInfoSlice(fs []db.FileInfoTruncated) []jsonFileInfoTrunc {
-res := make([]jsonFileInfoTrunc, len(fs))
+func toJsonFileInfoSlice(fs []protocol.FileInfo) []jsonFileInfo {
+res := make([]jsonFileInfo, len(fs))
for i, f := range fs {
-res[i] = jsonFileInfoTrunc(f)
+res[i] = jsonFileInfo(f)
}
return res
}
@ -1888,15 +1888,7 @@ func (f jsonFileInfo) MarshalJSON() ([]byte, error) {
return json.Marshal(m)
}
-type jsonFileInfoTrunc db.FileInfoTruncated
-func (f jsonFileInfoTrunc) MarshalJSON() ([]byte, error) {
-m := fileIntfJSONMap(db.FileInfoTruncated(f))
-m["numBlocks"] = nil // explicitly unknown
-return json.Marshal(m)
-}
-func fileIntfJSONMap(f protocol.FileIntf) map[string]interface{} {
+func fileIntfJSONMap(f protocol.FileInfo) map[string]interface{} {
out := map[string]interface{}{
"name": f.FileName(),
"type": f.FileType().String(),

View File

@ -25,6 +25,8 @@ import (
"time" "time"
"github.com/d4l3k/messagediff" "github.com/d4l3k/messagediff"
"github.com/thejerf/suture/v4"
"github.com/syncthing/syncthing/lib/assets" "github.com/syncthing/syncthing/lib/assets"
"github.com/syncthing/syncthing/lib/build" "github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config" "github.com/syncthing/syncthing/lib/config"
@ -46,7 +48,6 @@ import (
"github.com/syncthing/syncthing/lib/sync" "github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil" "github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syncthing/syncthing/lib/ur" "github.com/syncthing/syncthing/lib/ur"
"github.com/thejerf/suture/v4"
) )
var ( var (

View File

@ -12,6 +12,9 @@ import (
"strings" "strings"
"time" "time"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/apiproto"
"github.com/syncthing/syncthing/lib/config" "github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db" "github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/events" "github.com/syncthing/syncthing/lib/events"
@ -28,16 +31,16 @@ type tokenManager struct {
timeNow func() time.Time // can be overridden for testing
mut sync.Mutex
-tokens *TokenSet
+tokens *apiproto.TokenSet
saveTimer *time.Timer
}
func newTokenManager(key string, miscDB *db.NamespacedKV, lifetime time.Duration, maxItems int) *tokenManager {
-tokens := &TokenSet{
+tokens := &apiproto.TokenSet{
Tokens: make(map[string]int64),
}
if bs, ok, _ := miscDB.Bytes(key); ok {
-_ = tokens.Unmarshal(bs) // best effort
+_ = proto.Unmarshal(bs, tokens) // best effort
}
return &tokenManager{
key: key,
@ -136,7 +139,7 @@ func (m *tokenManager) scheduledSave() {
m.saveTimer = nil
-bs, _ := m.tokens.Marshal() // can't fail
+bs, _ := proto.Marshal(m.tokens) // can't fail
_ = m.miscDB.PutBytes(m.key, bs) // can fail, but what are we going to do?
}
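
The tokenmanager hunk above shows the general migration pattern in this PR: gogo's message methods (`m.Marshal()`, `m.Unmarshal(bs)`) become package-level `proto.Marshal(m)` and `proto.Unmarshal(bs, m)` calls against the generated apiproto type. A self-contained sketch of the same round trip (the package-main wrapper and the token value are illustrative only):

```go
package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/proto"

	"github.com/syncthing/syncthing/internal/gen/apiproto"
)

func main() {
	tokens := &apiproto.TokenSet{
		Tokens: map[string]int64{"abc123": time.Now().Add(24 * time.Hour).UnixNano()},
	}

	// gogo style was bs, err := tokens.Marshal(); the message now goes to the proto package.
	bs, err := proto.Marshal(tokens)
	if err != nil {
		panic(err)
	}

	var loaded apiproto.TokenSet
	if err := proto.Unmarshal(bs, &loaded); err != nil {
		panic(err)
	}
	fmt.Println(len(loaded.Tokens), "token(s) restored")
}
```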

View File

@ -1,411 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/api/tokenset.proto
package api
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type TokenSet struct {
// token -> expiry time (epoch nanoseconds)
Tokens map[string]int64 `protobuf:"bytes,1,rep,name=tokens,proto3" json:"tokens" xml:"token" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"varint,2,opt,name=value,proto3"`
}
func (m *TokenSet) Reset() { *m = TokenSet{} }
func (m *TokenSet) String() string { return proto.CompactTextString(m) }
func (*TokenSet) ProtoMessage() {}
func (*TokenSet) Descriptor() ([]byte, []int) {
return fileDescriptor_9ea8707737c33b38, []int{0}
}
func (m *TokenSet) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *TokenSet) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_TokenSet.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *TokenSet) XXX_Merge(src proto.Message) {
xxx_messageInfo_TokenSet.Merge(m, src)
}
func (m *TokenSet) XXX_Size() int {
return m.ProtoSize()
}
func (m *TokenSet) XXX_DiscardUnknown() {
xxx_messageInfo_TokenSet.DiscardUnknown(m)
}
var xxx_messageInfo_TokenSet proto.InternalMessageInfo
func init() {
proto.RegisterType((*TokenSet)(nil), "api.TokenSet")
proto.RegisterMapType((map[string]int64)(nil), "api.TokenSet.TokensEntry")
}
func init() { proto.RegisterFile("lib/api/tokenset.proto", fileDescriptor_9ea8707737c33b38) }
var fileDescriptor_9ea8707737c33b38 = []byte{
// 260 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0xcb, 0xc9, 0x4c, 0xd2,
0x4f, 0x2c, 0xc8, 0xd4, 0x2f, 0xc9, 0xcf, 0x4e, 0xcd, 0x2b, 0x4e, 0x2d, 0xd1, 0x2b, 0x28, 0xca,
0x2f, 0xc9, 0x17, 0x62, 0x4e, 0x2c, 0xc8, 0x54, 0x3a, 0xce, 0xc8, 0xc5, 0x11, 0x02, 0x12, 0x0f,
0x4e, 0x2d, 0x11, 0x0a, 0xe0, 0x62, 0x83, 0xa8, 0x91, 0x60, 0x54, 0x60, 0xd6, 0xe0, 0x36, 0x92,
0xd4, 0x4b, 0x2c, 0xc8, 0xd4, 0x83, 0x49, 0x43, 0x18, 0xc5, 0xae, 0x79, 0x25, 0x45, 0x95, 0x4e,
0xb2, 0x27, 0xee, 0xc9, 0x33, 0xbc, 0xba, 0x27, 0x0f, 0xd5, 0xf0, 0xe9, 0x9e, 0x3c, 0x77, 0x45,
0x6e, 0x8e, 0x95, 0x12, 0x98, 0xab, 0x14, 0x04, 0x15, 0x96, 0xca, 0xe4, 0xe2, 0x46, 0xd2, 0x25,
0xa4, 0xc6, 0xc5, 0x9c, 0x9d, 0x5a, 0x29, 0xc1, 0xa8, 0xc0, 0xa8, 0xc1, 0xe9, 0x24, 0xf2, 0xea,
0x9e, 0x3c, 0x88, 0xfb, 0xe9, 0x9e, 0x3c, 0x27, 0x58, 0x6f, 0x76, 0x6a, 0xa5, 0x52, 0x10, 0x48,
0x44, 0x48, 0x8f, 0x8b, 0xb5, 0x2c, 0x31, 0xa7, 0x34, 0x55, 0x82, 0x49, 0x81, 0x51, 0x83, 0xd9,
0x49, 0xe2, 0xd5, 0x3d, 0x79, 0x88, 0x00, 0xdc, 0x1e, 0x30, 0x4f, 0x29, 0x08, 0x22, 0x6a, 0xc5,
0x64, 0xc1, 0xe8, 0xe4, 0x71, 0xe2, 0xa1, 0x1c, 0xc3, 0x85, 0x87, 0x72, 0x0c, 0x27, 0x1e, 0xc9,
0x31, 0x5e, 0x78, 0x24, 0xc7, 0x38, 0xe1, 0xb1, 0x1c, 0xc3, 0x82, 0xc7, 0x72, 0x8c, 0x17, 0x1e,
0xcb, 0x31, 0xdc, 0x78, 0x2c, 0xc7, 0x10, 0xa5, 0x96, 0x9e, 0x59, 0x92, 0x51, 0x9a, 0xa4, 0x97,
0x9c, 0x9f, 0xab, 0x5f, 0x5c, 0x99, 0x97, 0x5c, 0x92, 0x91, 0x99, 0x97, 0x8e, 0xc4, 0x82, 0x86,
0x53, 0x12, 0x1b, 0x38, 0x7c, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0xfe, 0x25, 0x31, 0x49,
0x39, 0x01, 0x00, 0x00,
}
func (m *TokenSet) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *TokenSet) MarshalTo(dAtA []byte) (int, error) {
size := m.ProtoSize()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *TokenSet) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Tokens) > 0 {
for k := range m.Tokens {
v := m.Tokens[k]
baseI := i
i = encodeVarintTokenset(dAtA, i, uint64(v))
i--
dAtA[i] = 0x10
i -= len(k)
copy(dAtA[i:], k)
i = encodeVarintTokenset(dAtA, i, uint64(len(k)))
i--
dAtA[i] = 0xa
i = encodeVarintTokenset(dAtA, i, uint64(baseI-i))
i--
dAtA[i] = 0xa
}
}
return len(dAtA) - i, nil
}
func encodeVarintTokenset(dAtA []byte, offset int, v uint64) int {
offset -= sovTokenset(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *TokenSet) ProtoSize() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Tokens) > 0 {
for k, v := range m.Tokens {
_ = k
_ = v
mapEntrySize := 1 + len(k) + sovTokenset(uint64(len(k))) + 1 + sovTokenset(uint64(v))
n += mapEntrySize + 1 + sovTokenset(uint64(mapEntrySize))
}
}
return n
}
func sovTokenset(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozTokenset(x uint64) (n int) {
return sovTokenset(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *TokenSet) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTokenset
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: TokenSet: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: TokenSet: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Tokens", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTokenset
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthTokenset
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTokenset
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Tokens == nil {
m.Tokens = make(map[string]int64)
}
var mapkey string
var mapvalue int64
for iNdEx < postIndex {
entryPreIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTokenset
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
if fieldNum == 1 {
var stringLenmapkey uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTokenset
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapkey |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLenmapkey := int(stringLenmapkey)
if intStringLenmapkey < 0 {
return ErrInvalidLengthTokenset
}
postStringIndexmapkey := iNdEx + intStringLenmapkey
if postStringIndexmapkey < 0 {
return ErrInvalidLengthTokenset
}
if postStringIndexmapkey > l {
return io.ErrUnexpectedEOF
}
mapkey = string(dAtA[iNdEx:postStringIndexmapkey])
iNdEx = postStringIndexmapkey
} else if fieldNum == 2 {
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTokenset
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
mapvalue |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
} else {
iNdEx = entryPreIndex
skippy, err := skipTokenset(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTokenset
}
if (iNdEx + skippy) > postIndex {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
m.Tokens[mapkey] = mapvalue
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipTokenset(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTokenset
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipTokenset(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowTokenset
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowTokenset
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowTokenset
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthTokenset
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupTokenset
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthTokenset
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthTokenset = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowTokenset = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupTokenset = fmt.Errorf("proto: unexpected end of group")
)

View File

@ -6,6 +6,13 @@
package config
+type AuthMode int32
+const (
+AuthModeStatic AuthMode = 0
+AuthModeLDAP AuthMode = 1
+)
func (t AuthMode) String() string {
switch t {
case AuthModeStatic:

View File

@ -1,69 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/authmode.proto
package config
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
_ "github.com/syncthing/syncthing/proto/ext"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type AuthMode int32
const (
AuthModeStatic AuthMode = 0
AuthModeLDAP AuthMode = 1
)
var AuthMode_name = map[int32]string{
0: "AUTH_MODE_STATIC",
1: "AUTH_MODE_LDAP",
}
var AuthMode_value = map[string]int32{
"AUTH_MODE_STATIC": 0,
"AUTH_MODE_LDAP": 1,
}
func (AuthMode) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_8e30b562e1bcea1e, []int{0}
}
func init() {
proto.RegisterEnum("config.AuthMode", AuthMode_name, AuthMode_value)
}
func init() { proto.RegisterFile("lib/config/authmode.proto", fileDescriptor_8e30b562e1bcea1e) }
var fileDescriptor_8e30b562e1bcea1e = []byte{
// 234 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0xcc, 0xc9, 0x4c, 0xd2,
0x4f, 0xce, 0xcf, 0x4b, 0xcb, 0x4c, 0xd7, 0x4f, 0x2c, 0x2d, 0xc9, 0xc8, 0xcd, 0x4f, 0x49, 0xd5,
0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x83, 0x08, 0x4b, 0x29, 0x17, 0xa5, 0x16, 0xe4, 0x17,
0xeb, 0x83, 0x05, 0x93, 0x4a, 0xd3, 0xf4, 0xd3, 0xf3, 0xd3, 0xf3, 0xc1, 0x1c, 0x30, 0x0b, 0xa2,
0x58, 0x8a, 0x33, 0xb5, 0xa2, 0x04, 0xc2, 0xd4, 0x2a, 0xe0, 0xe2, 0x70, 0x2c, 0x2d, 0xc9, 0xf0,
0xcd, 0x4f, 0x49, 0x15, 0xd2, 0xe0, 0x12, 0x70, 0x0c, 0x0d, 0xf1, 0x88, 0xf7, 0xf5, 0x77, 0x71,
0x8d, 0x0f, 0x0e, 0x71, 0x0c, 0xf1, 0x74, 0x16, 0x60, 0x90, 0x12, 0xea, 0x9a, 0xab, 0xc0, 0x07,
0x53, 0x13, 0x5c, 0x92, 0x58, 0x92, 0x99, 0x2c, 0x64, 0xc2, 0xc5, 0x87, 0x50, 0xe9, 0xe3, 0xe2,
0x18, 0x20, 0xc0, 0x28, 0xa5, 0xd0, 0x35, 0x57, 0x81, 0x07, 0xa6, 0x0e, 0x24, 0x76, 0xa9, 0x4f,
0x15, 0x85, 0x2f, 0xc5, 0xb2, 0x62, 0x89, 0x1c, 0x83, 0x93, 0xf7, 0x89, 0x87, 0x72, 0x0c, 0x17,
0x1e, 0xca, 0x31, 0x9c, 0x78, 0x24, 0xc7, 0x78, 0xe1, 0x91, 0x1c, 0xe3, 0x84, 0xc7, 0x72, 0x0c,
0x0b, 0x1e, 0xcb, 0x31, 0x5e, 0x78, 0x2c, 0xc7, 0x70, 0xe3, 0xb1, 0x1c, 0x43, 0x94, 0x66, 0x7a,
0x66, 0x49, 0x46, 0x69, 0x92, 0x5e, 0x72, 0x7e, 0xae, 0x7e, 0x71, 0x65, 0x5e, 0x72, 0x49, 0x46,
0x66, 0x5e, 0x3a, 0x12, 0x0b, 0x11, 0x0a, 0x49, 0x6c, 0x60, 0x5f, 0x18, 0x03, 0x02, 0x00, 0x00,
0xff, 0xff, 0x48, 0x80, 0x1f, 0x0c, 0x1a, 0x01, 0x00, 0x00,
}

View File

@ -6,6 +6,14 @@
package config
+type BlockPullOrder int32
+const (
+BlockPullOrderStandard BlockPullOrder = 0
+BlockPullOrderRandom BlockPullOrder = 1
+BlockPullOrderInOrder BlockPullOrder = 2
+)
func (o BlockPullOrder) String() string {
switch o {
case BlockPullOrderStandard:

View File

@ -1,73 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/blockpullorder.proto
package config
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type BlockPullOrder int32
const (
BlockPullOrderStandard BlockPullOrder = 0
BlockPullOrderRandom BlockPullOrder = 1
BlockPullOrderInOrder BlockPullOrder = 2
)
var BlockPullOrder_name = map[int32]string{
0: "BLOCK_PULL_ORDER_STANDARD",
1: "BLOCK_PULL_ORDER_RANDOM",
2: "BLOCK_PULL_ORDER_IN_ORDER",
}
var BlockPullOrder_value = map[string]int32{
"BLOCK_PULL_ORDER_STANDARD": 0,
"BLOCK_PULL_ORDER_RANDOM": 1,
"BLOCK_PULL_ORDER_IN_ORDER": 2,
}
func (BlockPullOrder) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_3c46a5289006da6c, []int{0}
}
func init() {
proto.RegisterEnum("config.BlockPullOrder", BlockPullOrder_name, BlockPullOrder_value)
}
func init() { proto.RegisterFile("lib/config/blockpullorder.proto", fileDescriptor_3c46a5289006da6c) }
var fileDescriptor_3c46a5289006da6c = []byte{
// 271 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0xcf, 0xc9, 0x4c, 0xd2,
0x4f, 0xce, 0xcf, 0x4b, 0xcb, 0x4c, 0xd7, 0x4f, 0xca, 0xc9, 0x4f, 0xce, 0x2e, 0x28, 0xcd, 0xc9,
0xc9, 0x2f, 0x4a, 0x49, 0x2d, 0xd2, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x83, 0x48, 0x4a,
0x29, 0x17, 0xa5, 0x16, 0xe4, 0x17, 0xeb, 0x83, 0x05, 0x93, 0x4a, 0xd3, 0xf4, 0xd3, 0xf3, 0xd3,
0xf3, 0xc1, 0x1c, 0x30, 0x0b, 0xa2, 0x58, 0xeb, 0x10, 0x23, 0x17, 0x9f, 0x13, 0xc8, 0x94, 0x80,
0xd2, 0x9c, 0x1c, 0x7f, 0x90, 0x29, 0x42, 0x96, 0x5c, 0x92, 0x4e, 0x3e, 0xfe, 0xce, 0xde, 0xf1,
0x01, 0xa1, 0x3e, 0x3e, 0xf1, 0xfe, 0x41, 0x2e, 0xae, 0x41, 0xf1, 0xc1, 0x21, 0x8e, 0x7e, 0x2e,
0x8e, 0x41, 0x2e, 0x02, 0x0c, 0x52, 0x52, 0x5d, 0x73, 0x15, 0xc4, 0x50, 0xb5, 0x04, 0x97, 0x24,
0xe6, 0xa5, 0x24, 0x16, 0xa5, 0x08, 0x99, 0x72, 0x89, 0x63, 0x68, 0x0d, 0x72, 0xf4, 0x73, 0xf1,
0xf7, 0x15, 0x60, 0x94, 0x92, 0xe8, 0x9a, 0xab, 0x20, 0x82, 0xaa, 0x31, 0x28, 0x31, 0x2f, 0x25,
0x3f, 0x57, 0xc8, 0x02, 0x8b, 0x8d, 0x9e, 0x7e, 0x10, 0x86, 0x00, 0x93, 0x94, 0x64, 0xd7, 0x5c,
0x05, 0x51, 0x54, 0x8d, 0x9e, 0x79, 0x60, 0x4a, 0x8a, 0x65, 0xc5, 0x12, 0x39, 0x06, 0x27, 0xef,
0x13, 0x0f, 0xe5, 0x18, 0x2e, 0x3c, 0x94, 0x63, 0x38, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39,
0xc6, 0x09, 0x8f, 0xe5, 0x18, 0x16, 0x3c, 0x96, 0x63, 0xbc, 0xf0, 0x58, 0x8e, 0xe1, 0xc6, 0x63,
0x39, 0x86, 0x28, 0xcd, 0xf4, 0xcc, 0x92, 0x8c, 0xd2, 0x24, 0xbd, 0xe4, 0xfc, 0x5c, 0xfd, 0xe2,
0xca, 0xbc, 0xe4, 0x92, 0x8c, 0xcc, 0xbc, 0x74, 0x24, 0x16, 0x22, 0x4c, 0x93, 0xd8, 0xc0, 0x01,
0x63, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0x8c, 0x0c, 0xb7, 0x46, 0x68, 0x01, 0x00, 0x00,
}

lib/config/compression.go (new file, 58 lines)
View File

@ -0,0 +1,58 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package config
import (
"github.com/syncthing/syncthing/lib/protocol"
)
type Compression int32
const (
CompressionMetadata Compression = 0
CompressionNever Compression = 1
CompressionAlways Compression = 2
)
var compressionMarshal = map[Compression]string{
CompressionNever: "never",
CompressionMetadata: "metadata",
CompressionAlways: "always",
}
var compressionUnmarshal = map[string]Compression{
// Legacy
"false": CompressionNever,
"true": CompressionMetadata,
// Current
"never": CompressionNever,
"metadata": CompressionMetadata,
"always": CompressionAlways,
}
func (c Compression) MarshalText() ([]byte, error) {
return []byte(compressionMarshal[c]), nil
}
func (c *Compression) UnmarshalText(bs []byte) error {
*c = compressionUnmarshal[string(bs)]
return nil
}
func (c Compression) ToProtocol() protocol.Compression {
switch c {
case CompressionNever:
return protocol.CompressionNever
case CompressionAlways:
return protocol.CompressionAlways
case CompressionMetadata:
return protocol.CompressionMetadata
default:
return protocol.CompressionMetadata
}
}
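
A short sketch of the new config.Compression type in use: the legacy "true"/"false" strings still unmarshal, and ToProtocol() bridges to the protocol-level enum at the call sites that need it. The main function is illustrative only.

```go
package main

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/config"
	"github.com/syncthing/syncthing/lib/protocol"
)

func main() {
	var c config.Compression

	// Legacy values from old configs still parse; "true" means metadata compression.
	_ = c.UnmarshalText([]byte("true"))
	txt, _ := c.MarshalText()
	fmt.Println(string(txt))                                    // metadata
	fmt.Println(c.ToProtocol() == protocol.CompressionMetadata) // true

	_ = c.UnmarshalText([]byte("always"))
	fmt.Println(c.ToProtocol() == protocol.CompressionAlways) // true
}
```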

View File

@ -1,6 +1,6 @@
// Copyright (C) 2015 The Protocol Authors.
-package protocol
+package config
import "testing"

View File

@ -99,6 +99,28 @@ var (
errFolderPathEmpty = errors.New("folder has empty path")
)
type Configuration struct {
Version int `json:"version" xml:"version,attr"`
Folders []FolderConfiguration `json:"folders" xml:"folder"`
Devices []DeviceConfiguration `json:"devices" xml:"device"`
GUI GUIConfiguration `json:"gui" xml:"gui"`
LDAP LDAPConfiguration `json:"ldap" xml:"ldap"`
Options OptionsConfiguration `json:"options" xml:"options"`
IgnoredDevices []ObservedDevice `json:"remoteIgnoredDevices" xml:"remoteIgnoredDevice"`
DeprecatedPendingDevices []ObservedDevice `json:"-" xml:"pendingDevice,omitempty"` // Deprecated: Do not use.
Defaults Defaults `json:"defaults" xml:"defaults"`
}
type Defaults struct {
Folder FolderConfiguration `json:"folder" xml:"folder"`
Device DeviceConfiguration `json:"device" xml:"device"`
Ignores Ignores `json:"ignores" xml:"ignores"`
}
type Ignores struct {
Lines []string `json:"lines" xml:"line"`
}
func New(myID protocol.DeviceID) Configuration {
var cfg Configuration
cfg.Version = CurrentVersion

File diff suppressed because it is too large Load Diff

View File

@ -100,7 +100,7 @@ func TestDefaultValues(t *testing.T) {
},
Defaults: Defaults{
Folder: FolderConfiguration{
-FilesystemType: fs.FilesystemTypeBasic,
+FilesystemType: FilesystemTypeBasic,
Path: "~",
Type: FolderTypeSendReceive,
Devices: []FolderDeviceConfiguration{{DeviceID: device1}},
@ -127,7 +127,7 @@ func TestDefaultValues(t *testing.T) {
Device: DeviceConfiguration{
Addresses: []string{"dynamic"},
AllowedNetworks: []string{},
-Compression: protocol.CompressionMetadata,
+Compression: CompressionMetadata,
IgnoredFolders: []ObservedFolder{},
},
Ignores: Ignores{
@ -175,7 +175,7 @@ func TestDeviceConfig(t *testing.T) {
expectedFolders := []FolderConfiguration{
{
ID: "test",
-FilesystemType: fs.FilesystemTypeBasic,
+FilesystemType: FilesystemTypeBasic,
Path: "testdata",
Devices: []FolderDeviceConfiguration{{DeviceID: device1}, {DeviceID: device4}},
Type: FolderTypeSendOnly,
@ -205,7 +205,7 @@ func TestDeviceConfig(t *testing.T) {
DeviceID: device1,
Name: "node one",
Addresses: []string{"tcp://a"},
-Compression: protocol.CompressionMetadata,
+Compression: CompressionMetadata,
AllowedNetworks: []string{},
IgnoredFolders: []ObservedFolder{},
},
@ -213,7 +213,7 @@ func TestDeviceConfig(t *testing.T) {
DeviceID: device4,
Name: "node two",
Addresses: []string{"tcp://b"},
-Compression: protocol.CompressionMetadata,
+Compression: CompressionMetadata,
AllowedNetworks: []string{},
IgnoredFolders: []ObservedFolder{},
},
@ -344,7 +344,7 @@ func TestDeviceAddressesDynamic(t *testing.T) {
DeviceID: device4,
Name: name, // Set when auto created
Addresses: []string{"dynamic"},
-Compression: protocol.CompressionMetadata,
+Compression: CompressionMetadata,
AllowedNetworks: []string{},
IgnoredFolders: []ObservedFolder{},
},
@ -368,21 +368,21 @@ func TestDeviceCompression(t *testing.T) {
device1: {
DeviceID: device1,
Addresses: []string{"dynamic"},
-Compression: protocol.CompressionMetadata,
+Compression: CompressionMetadata,
AllowedNetworks: []string{},
IgnoredFolders: []ObservedFolder{},
},
device2: {
DeviceID: device2,
Addresses: []string{"dynamic"},
-Compression: protocol.CompressionMetadata,
+Compression: CompressionMetadata,
AllowedNetworks: []string{},
IgnoredFolders: []ObservedFolder{},
},
device3: {
DeviceID: device3,
Addresses: []string{"dynamic"},
-Compression: protocol.CompressionNever,
+Compression: CompressionNever,
AllowedNetworks: []string{},
IgnoredFolders: []ObservedFolder{},
},
@ -390,7 +390,7 @@ func TestDeviceCompression(t *testing.T) {
DeviceID: device4,
Name: name, // Set when auto created
Addresses: []string{"dynamic"},
-Compression: protocol.CompressionMetadata,
+Compression: CompressionMetadata,
AllowedNetworks: []string{},
IgnoredFolders: []ObservedFolder{},
},
@ -433,7 +433,7 @@ func TestDeviceAddressesStatic(t *testing.T) {
DeviceID: device4,
Name: name, // Set when auto created
Addresses: []string{"dynamic"},
-Compression: protocol.CompressionMetadata,
+Compression: CompressionMetadata,
AllowedNetworks: []string{},
IgnoredFolders: []ObservedFolder{},
},
@ -556,7 +556,7 @@ func TestFolderCheckPath(t *testing.T) {
for _, testcase := range testcases {
cfg := FolderConfiguration{
-FilesystemType: fs.FilesystemTypeFake,
+FilesystemType: FilesystemTypeFake,
MarkerName: DefaultMarkerName,
}
@ -1281,7 +1281,7 @@ func adjustDeviceConfiguration(cfg *DeviceConfiguration, id protocol.DeviceID, n
func adjustFolderConfiguration(cfg *FolderConfiguration, id, label string, fsType fs.FilesystemType, path string) {
cfg.ID = id
cfg.Label = label
-cfg.FilesystemType = fsType
+cfg.FilesystemType = FilesystemType(fsType)
cfg.Path = path
}

View File

@ -0,0 +1,86 @@
// Copyright (C) 2020 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package config
import "github.com/syncthing/syncthing/lib/fs"
type CopyRangeMethod int32
const (
CopyRangeMethodStandard CopyRangeMethod = 0
CopyRangeMethodIoctl CopyRangeMethod = 1
CopyRangeMethodCopyFileRange CopyRangeMethod = 2
CopyRangeMethodSendFile CopyRangeMethod = 3
CopyRangeMethodDuplicateExtents CopyRangeMethod = 4
CopyRangeMethodAllWithFallback CopyRangeMethod = 5
)
func (o CopyRangeMethod) String() string {
switch o {
case CopyRangeMethodStandard:
return "standard"
case CopyRangeMethodIoctl:
return "ioctl"
case CopyRangeMethodCopyFileRange:
return "copy_file_range"
case CopyRangeMethodSendFile:
return "sendfile"
case CopyRangeMethodDuplicateExtents:
return "duplicate_extents"
case CopyRangeMethodAllWithFallback:
return "all"
default:
return "unknown"
}
}
func (o CopyRangeMethod) ToFS() fs.CopyRangeMethod {
switch o {
case CopyRangeMethodStandard:
return fs.CopyRangeMethodStandard
case CopyRangeMethodIoctl:
return fs.CopyRangeMethodIoctl
case CopyRangeMethodCopyFileRange:
return fs.CopyRangeMethodCopyFileRange
case CopyRangeMethodSendFile:
return fs.CopyRangeMethodSendFile
case CopyRangeMethodDuplicateExtents:
return fs.CopyRangeMethodDuplicateExtents
case CopyRangeMethodAllWithFallback:
return fs.CopyRangeMethodAllWithFallback
default:
return fs.CopyRangeMethodStandard
}
}
func (o CopyRangeMethod) MarshalText() ([]byte, error) {
return []byte(o.String()), nil
}
func (o *CopyRangeMethod) UnmarshalText(bs []byte) error {
switch string(bs) {
case "standard":
*o = CopyRangeMethodStandard
case "ioctl":
*o = CopyRangeMethodIoctl
case "copy_file_range":
*o = CopyRangeMethodCopyFileRange
case "sendfile":
*o = CopyRangeMethodSendFile
case "duplicate_extents":
*o = CopyRangeMethodDuplicateExtents
case "all":
*o = CopyRangeMethodAllWithFallback
default:
*o = CopyRangeMethodStandard
}
return nil
}
func (o *CopyRangeMethod) ParseDefault(str string) error {
return o.UnmarshalText([]byte(str))
}
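
The same pattern for CopyRangeMethod: text round-trips via MarshalText/UnmarshalText, unknown strings fall back to the standard method, and ToFS() maps to the lib/fs enum where the value is actually consumed. A sketch (the main function is illustrative only):

```go
package main

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/config"
	"github.com/syncthing/syncthing/lib/fs"
)

func main() {
	var m config.CopyRangeMethod
	_ = m.UnmarshalText([]byte("copy_file_range"))
	fmt.Println(m.String())                                  // copy_file_range
	fmt.Println(m.ToFS() == fs.CopyRangeMethodCopyFileRange) // true

	// Unknown strings fall back to the standard method.
	_ = m.UnmarshalText([]byte("bogus"))
	fmt.Println(m.String()) // standard
}
```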

View File

@ -9,10 +9,34 @@ package config
import (
"fmt"
"sort"
+"github.com/syncthing/syncthing/lib/protocol"
)
const defaultNumConnections = 1 // number of connections to use by default; may change in the future.
type DeviceConfiguration struct {
DeviceID protocol.DeviceID `json:"deviceID" xml:"id,attr" nodefault:"true"`
Name string `json:"name" xml:"name,attr,omitempty"`
Addresses []string `json:"addresses" xml:"address,omitempty"`
Compression Compression `json:"compression" xml:"compression,attr"`
CertName string `json:"certName" xml:"certName,attr,omitempty"`
Introducer bool `json:"introducer" xml:"introducer,attr"`
SkipIntroductionRemovals bool `json:"skipIntroductionRemovals" xml:"skipIntroductionRemovals,attr"`
IntroducedBy protocol.DeviceID `json:"introducedBy" xml:"introducedBy,attr" nodefault:"true"`
Paused bool `json:"paused" xml:"paused"`
AllowedNetworks []string `json:"allowedNetworks" xml:"allowedNetwork,omitempty"`
AutoAcceptFolders bool `json:"autoAcceptFolders" xml:"autoAcceptFolders"`
MaxSendKbps int `json:"maxSendKbps" xml:"maxSendKbps"`
MaxRecvKbps int `json:"maxRecvKbps" xml:"maxRecvKbps"`
IgnoredFolders []ObservedFolder `json:"ignoredFolders" xml:"ignoredFolder"`
DeprecatedPendingFolders []ObservedFolder `json:"-" xml:"pendingFolder,omitempty"` // Deprecated: Do not use.
MaxRequestKiB int `json:"maxRequestKiB" xml:"maxRequestKiB"`
Untrusted bool `json:"untrusted" xml:"untrusted"`
RemoteGUIPort int `json:"remoteGUIPort" xml:"remoteGUIPort"`
RawNumConnections int `json:"numConnections" xml:"numConnections"`
}
func (cfg DeviceConfiguration) Copy() DeviceConfiguration {
c := cfg
c.Addresses = make([]string, len(cfg.Addresses))

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,54 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package config
import "github.com/syncthing/syncthing/lib/fs"
type FilesystemType int32
const (
FilesystemTypeBasic FilesystemType = 0
FilesystemTypeFake FilesystemType = 1
)
func (t FilesystemType) String() string {
switch t {
case FilesystemTypeBasic:
return "basic"
case FilesystemTypeFake:
return "fake"
default:
return "unknown"
}
}
func (t FilesystemType) ToFS() fs.FilesystemType {
switch t {
case FilesystemTypeBasic:
return fs.FilesystemTypeBasic
case FilesystemTypeFake:
return fs.FilesystemTypeFake
default:
return fs.FilesystemTypeBasic
}
}
func (t FilesystemType) MarshalText() ([]byte, error) {
return []byte(t.String()), nil
}
func (t *FilesystemType) UnmarshalText(bs []byte) error {
switch string(bs) {
case "basic":
*t = FilesystemTypeBasic
case "fake":
*t = FilesystemTypeFake
default:
*t = FilesystemTypeBasic
}
return nil
}
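
And for FilesystemType, which is what the getSystemBrowse change earlier in this diff relies on: the config layer parses the user-facing string and ToFS() converts it where a real filesystem is constructed. A minimal sketch (illustrative only):

```go
package main

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/config"
	"github.com/syncthing/syncthing/lib/fs"
)

func main() {
	// The config layer parses the user-facing string...
	var t config.FilesystemType
	_ = t.UnmarshalText([]byte("fake"))

	// ...and ToFS() bridges to the lib/fs type where a filesystem is actually needed,
	// mirroring the browse(fsType.ToFS(), ...) call in the API hunk earlier.
	fmt.Println(t.String())                        // fake
	fmt.Println(t.ToFS() == fs.FilesystemTypeFake) // true
}
```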

View File

@ -33,11 +33,82 @@ var (
const (
DefaultMarkerName = ".stfolder"
-EncryptionTokenName = "syncthing-encryption_password_token"
+EncryptionTokenName = "syncthing-encryption_password_token" //nolint: gosec
maxConcurrentWritesDefault = 2
maxConcurrentWritesLimit = 64
)
type FolderDeviceConfiguration struct {
DeviceID protocol.DeviceID `json:"deviceID" xml:"id,attr"`
IntroducedBy protocol.DeviceID `json:"introducedBy" xml:"introducedBy,attr"`
EncryptionPassword string `json:"encryptionPassword" xml:"encryptionPassword"`
}
type FolderConfiguration struct {
ID string `json:"id" xml:"id,attr" nodefault:"true"`
Label string `json:"label" xml:"label,attr" restart:"false"`
FilesystemType FilesystemType `json:"filesystemType" xml:"filesystemType"`
Path string `json:"path" xml:"path,attr" default:"~"`
Type FolderType `json:"type" xml:"type,attr"`
Devices []FolderDeviceConfiguration `json:"devices" xml:"device"`
RescanIntervalS int `json:"rescanIntervalS" xml:"rescanIntervalS,attr" default:"3600"`
FSWatcherEnabled bool `json:"fsWatcherEnabled" xml:"fsWatcherEnabled,attr" default:"true"`
FSWatcherDelayS float64 `json:"fsWatcherDelayS" xml:"fsWatcherDelayS,attr" default:"10"`
FSWatcherTimeoutS float64 `json:"fsWatcherTimeoutS" xml:"fsWatcherTimeoutS,attr"`
IgnorePerms bool `json:"ignorePerms" xml:"ignorePerms,attr"`
AutoNormalize bool `json:"autoNormalize" xml:"autoNormalize,attr" default:"true"`
MinDiskFree Size `json:"minDiskFree" xml:"minDiskFree" default:"1 %"`
Versioning VersioningConfiguration `json:"versioning" xml:"versioning"`
Copiers int `json:"copiers" xml:"copiers"`
PullerMaxPendingKiB int `json:"pullerMaxPendingKiB" xml:"pullerMaxPendingKiB"`
Hashers int `json:"hashers" xml:"hashers"`
Order PullOrder `json:"order" xml:"order"`
IgnoreDelete bool `json:"ignoreDelete" xml:"ignoreDelete"`
ScanProgressIntervalS int `json:"scanProgressIntervalS" xml:"scanProgressIntervalS"`
PullerPauseS int `json:"pullerPauseS" xml:"pullerPauseS"`
MaxConflicts int `json:"maxConflicts" xml:"maxConflicts" default:"10"`
DisableSparseFiles bool `json:"disableSparseFiles" xml:"disableSparseFiles"`
DisableTempIndexes bool `json:"disableTempIndexes" xml:"disableTempIndexes"`
Paused bool `json:"paused" xml:"paused"`
WeakHashThresholdPct int `json:"weakHashThresholdPct" xml:"weakHashThresholdPct"`
MarkerName string `json:"markerName" xml:"markerName"`
CopyOwnershipFromParent bool `json:"copyOwnershipFromParent" xml:"copyOwnershipFromParent"`
RawModTimeWindowS int `json:"modTimeWindowS" xml:"modTimeWindowS"`
MaxConcurrentWrites int `json:"maxConcurrentWrites" xml:"maxConcurrentWrites" default:"2"`
DisableFsync bool `json:"disableFsync" xml:"disableFsync"`
BlockPullOrder BlockPullOrder `json:"blockPullOrder" xml:"blockPullOrder"`
CopyRangeMethod CopyRangeMethod `json:"copyRangeMethod" xml:"copyRangeMethod" default:"standard"`
CaseSensitiveFS bool `json:"caseSensitiveFS" xml:"caseSensitiveFS"`
JunctionsAsDirs bool `json:"junctionsAsDirs" xml:"junctionsAsDirs"`
SyncOwnership bool `json:"syncOwnership" xml:"syncOwnership"`
SendOwnership bool `json:"sendOwnership" xml:"sendOwnership"`
SyncXattrs bool `json:"syncXattrs" xml:"syncXattrs"`
SendXattrs bool `json:"sendXattrs" xml:"sendXattrs"`
XattrFilter XattrFilter `json:"xattrFilter" xml:"xattrFilter"`
// Legacy deprecated
DeprecatedReadOnly bool `json:"-" xml:"ro,attr,omitempty"` // Deprecated: Do not use.
DeprecatedMinDiskFreePct float64 `json:"-" xml:"minDiskFreePct,omitempty"` // Deprecated: Do not use.
DeprecatedPullers int `json:"-" xml:"pullers,omitempty"` // Deprecated: Do not use.
DeprecatedScanOwnership bool `json:"-" xml:"scanOwnership,omitempty"` // Deprecated: Do not use.
}
// Extended attribute filter. This is a list of patterns to match (glob
// style), each with an action (permit or deny). First match is used. If the
// filter is empty, all strings are permitted. If the filter is non-empty,
// the default action becomes deny. To counter this, you can use the "*"
// pattern to match all strings at the end of the filter. There are also
// limits on the size of accepted attributes.
type XattrFilter struct {
Entries []XattrFilterEntry `json:"entries" xml:"entry"`
MaxSingleEntrySize int `json:"maxSingleEntrySize" xml:"maxSingleEntrySize" default:"1024"`
MaxTotalSize int `json:"maxTotalSize" xml:"maxTotalSize" default:"4096"`
}
type XattrFilterEntry struct {
Match string `json:"match" xml:"match,attr"`
Permit bool `json:"permit" xml:"permit,attr"`
}
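The comment above describes first-match, default-deny semantics for non-empty filters. A hypothetical standalone sketch of that evaluation order (assuming glob matching via path.Match; the real matching code lives elsewhere in the tree):

```go
// Hypothetical sketch of first-match xattr filtering; not the actual implementation.
package main

import (
	"fmt"
	"path"
)

type entry struct {
	match  string
	permit bool
}

// permits: an empty filter allows everything; a non-empty filter denies by
// default, so a trailing {"*", true} entry is needed to re-allow the rest.
func permits(entries []entry, name string) bool {
	if len(entries) == 0 {
		return true
	}
	for _, e := range entries {
		if ok, _ := path.Match(e.match, name); ok {
			return e.permit
		}
	}
	return false
}

func main() {
	filter := []entry{
		{match: "user.secret.*", permit: false},
		{match: "*", permit: true},
	}
	fmt.Println(permits(filter, "user.secret.token")) // false, first entry matches
	fmt.Println(permits(filter, "user.mime_type"))    // true, caught by the trailing "*"
}
```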
func (f FolderConfiguration) Copy() FolderConfiguration {
c := f
c.Devices = make([]FolderDeviceConfiguration, len(f.Devices))
@ -53,7 +124,7 @@ func (f FolderConfiguration) Filesystem(fset *db.FileSet) fs.Filesystem {
// This is intentionally not a pointer method, because things like
// cfg.Folders["default"].Filesystem(nil) should be valid.
opts := make([]fs.Option, 0, 3)
- if f.FilesystemType == fs.FilesystemTypeBasic && f.JunctionsAsDirs {
+ if f.FilesystemType == FilesystemTypeBasic && f.JunctionsAsDirs {
opts = append(opts, new(fs.OptionJunctionsAsDirs))
}
if !f.CaseSensitiveFS {
@ -62,7 +133,7 @@ func (f FolderConfiguration) Filesystem(fset *db.FileSet) fs.Filesystem {
if fset != nil {
opts = append(opts, fset.MtimeOption())
}
- return fs.NewFilesystem(f.FilesystemType, f.Path, opts...)
+ return fs.NewFilesystem(f.FilesystemType.ToFS(), f.Path, opts...)
}
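The value receiver mentioned in the comment above matters because Go map index expressions are not addressable, so a pointer-receiver method could not be called on cfg.Folders["default"] directly. A small standalone illustration (hypothetical names, not Syncthing code):

```go
package main

import "fmt"

type folder struct{ path string }

// Value receiver: callable on non-addressable values such as map elements.
func (f folder) describe() string { return "folder at " + f.path }

// Pointer receiver: requires an addressable value.
func (f *folder) mutate() { f.path += "/changed" }

func main() {
	folders := map[string]folder{"default": {path: "~"}}
	fmt.Println(folders["default"].describe()) // fine
	// folders["default"].mutate() // compile error: cannot call a pointer method on a map index expression
}
```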
func (f FolderConfiguration) ModTimeWindow() time.Duration {
@ -300,7 +371,7 @@ func (f *FolderConfiguration) CheckAvailableSpace(req uint64) error {
fs := f.Filesystem(nil)
usage, err := fs.Usage(".")
if err != nil {
- return nil
+ return nil //nolint: nilerr
}
if err := checkAvailableSpace(req, f.MinDiskFree, usage); err != nil {
return fmt.Errorf("insufficient space in folder %v (%v): %w", f.Description(), fs.URI(), err)

File diff suppressed because it is too large


@ -6,6 +6,15 @@
package config
type FolderType int32
const (
FolderTypeSendReceive FolderType = 0
FolderTypeSendOnly FolderType = 1
FolderTypeReceiveOnly FolderType = 2
FolderTypeReceiveEncrypted FolderType = 3
)
func (t FolderType) String() string {
switch t {
case FolderTypeSendReceive:


@ -1,77 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/foldertype.proto
package config
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type FolderType int32
const (
FolderTypeSendReceive FolderType = 0
FolderTypeSendOnly FolderType = 1
FolderTypeReceiveOnly FolderType = 2
FolderTypeReceiveEncrypted FolderType = 3
)
var FolderType_name = map[int32]string{
0: "FOLDER_TYPE_SEND_RECEIVE",
1: "FOLDER_TYPE_SEND_ONLY",
2: "FOLDER_TYPE_RECEIVE_ONLY",
3: "FOLDER_TYPE_RECEIVE_ENCRYPTED",
}
var FolderType_value = map[string]int32{
"FOLDER_TYPE_SEND_RECEIVE": 0,
"FOLDER_TYPE_SEND_ONLY": 1,
"FOLDER_TYPE_RECEIVE_ONLY": 2,
"FOLDER_TYPE_RECEIVE_ENCRYPTED": 3,
}
func (FolderType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_ea6ddb20c0633575, []int{0}
}
func init() {
proto.RegisterEnum("config.FolderType", FolderType_name, FolderType_value)
}
func init() { proto.RegisterFile("lib/config/foldertype.proto", fileDescriptor_ea6ddb20c0633575) }
var fileDescriptor_ea6ddb20c0633575 = []byte{
// 287 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0xce, 0xc9, 0x4c, 0xd2,
0x4f, 0xce, 0xcf, 0x4b, 0xcb, 0x4c, 0xd7, 0x4f, 0xcb, 0xcf, 0x49, 0x49, 0x2d, 0x2a, 0xa9, 0x2c,
0x48, 0xd5, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x83, 0x48, 0x48, 0x29, 0x17, 0xa5, 0x16,
0xe4, 0x17, 0xeb, 0x83, 0x05, 0x93, 0x4a, 0xd3, 0xf4, 0xd3, 0xf3, 0xd3, 0xf3, 0xc1, 0x1c, 0x30,
0x0b, 0xa2, 0x58, 0xeb, 0x17, 0x23, 0x17, 0x97, 0x1b, 0xd8, 0x84, 0x90, 0xca, 0x82, 0x54, 0x21,
0x73, 0x2e, 0x09, 0x37, 0x7f, 0x1f, 0x17, 0xd7, 0xa0, 0xf8, 0x90, 0xc8, 0x00, 0xd7, 0xf8, 0x60,
0x57, 0x3f, 0x97, 0xf8, 0x20, 0x57, 0x67, 0x57, 0xcf, 0x30, 0x57, 0x01, 0x06, 0x29, 0xc9, 0xae,
0xb9, 0x0a, 0xa2, 0x08, 0xd5, 0xc1, 0xa9, 0x79, 0x29, 0x41, 0xa9, 0xc9, 0xa9, 0x99, 0x65, 0xa9,
0x42, 0x86, 0x5c, 0xa2, 0x18, 0x1a, 0xfd, 0xfd, 0x7c, 0x22, 0x05, 0x18, 0xa5, 0xc4, 0xba, 0xe6,
0x2a, 0x08, 0xa1, 0xea, 0xf2, 0xcf, 0xcb, 0xa9, 0x44, 0xb7, 0x0b, 0x6a, 0x0d, 0x44, 0x17, 0x13,
0xba, 0x5d, 0x50, 0x7b, 0xc0, 0x1a, 0x1d, 0xb9, 0x64, 0xb1, 0x69, 0x74, 0xf5, 0x73, 0x0e, 0x8a,
0x0c, 0x08, 0x71, 0x75, 0x11, 0x60, 0x96, 0x92, 0xeb, 0x9a, 0xab, 0x20, 0x85, 0xa1, 0xdb, 0x35,
0x2f, 0xb9, 0xa8, 0xb2, 0xa0, 0x24, 0x35, 0x45, 0x8a, 0x65, 0xc5, 0x12, 0x39, 0x06, 0x27, 0xef,
0x13, 0x0f, 0xe5, 0x18, 0x2e, 0x3c, 0x94, 0x63, 0x38, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39,
0xc6, 0x09, 0x8f, 0xe5, 0x18, 0x16, 0x3c, 0x96, 0x63, 0xbc, 0xf0, 0x58, 0x8e, 0xe1, 0xc6, 0x63,
0x39, 0x86, 0x28, 0xcd, 0xf4, 0xcc, 0x92, 0x8c, 0xd2, 0x24, 0xbd, 0xe4, 0xfc, 0x5c, 0xfd, 0xe2,
0xca, 0xbc, 0xe4, 0x92, 0x8c, 0xcc, 0xbc, 0x74, 0x24, 0x16, 0x22, 0x1e, 0x92, 0xd8, 0xc0, 0x01,
0x6a, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0xc9, 0x87, 0xbe, 0x2d, 0x9c, 0x01, 0x00, 0x00,
}


@ -18,6 +18,23 @@ import (
"github.com/syncthing/syncthing/lib/rand" "github.com/syncthing/syncthing/lib/rand"
) )
type GUIConfiguration struct {
Enabled bool `json:"enabled" xml:"enabled,attr" default:"true"`
RawAddress string `json:"address" xml:"address" default:"127.0.0.1:8384"`
RawUnixSocketPermissions string `json:"unixSocketPermissions" xml:"unixSocketPermissions,omitempty"`
User string `json:"user" xml:"user,omitempty"`
Password string `json:"password" xml:"password,omitempty"`
AuthMode AuthMode `json:"authMode" xml:"authMode,omitempty"`
RawUseTLS bool `json:"useTLS" xml:"tls,attr"`
APIKey string `json:"apiKey" xml:"apikey,omitempty"`
InsecureAdminAccess bool `json:"insecureAdminAccess" xml:"insecureAdminAccess,omitempty"`
Theme string `json:"theme" xml:"theme" default:"default"`
Debugging bool `json:"debugging" xml:"debugging,attr"`
InsecureSkipHostCheck bool `json:"insecureSkipHostcheck" xml:"insecureSkipHostcheck,omitempty"`
InsecureAllowFrameLoading bool `json:"insecureAllowFrameLoading" xml:"insecureAllowFrameLoading,omitempty"`
SendBasicAuthPrompt bool `json:"sendBasicAuthPrompt" xml:"sendBasicAuthPrompt,attr"`
}
func (c GUIConfiguration) IsAuthEnabled() bool {
// This function should match isAuthEnabled() in syncthingController.js
return c.AuthMode == AuthModeLDAP || (len(c.User) > 0 && len(c.Password) > 0)


@ -1,840 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/guiconfiguration.proto
package config
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
_ "github.com/syncthing/syncthing/proto/ext"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type GUIConfiguration struct {
Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled" xml:"enabled,attr" default:"true"`
RawAddress string `protobuf:"bytes,2,opt,name=address,proto3" json:"address" xml:"address" default:"127.0.0.1:8384"`
RawUnixSocketPermissions string `protobuf:"bytes,3,opt,name=unix_socket_permissions,json=unixSocketPermissions,proto3" json:"unixSocketPermissions" xml:"unixSocketPermissions,omitempty"`
User string `protobuf:"bytes,4,opt,name=user,proto3" json:"user" xml:"user,omitempty"`
Password string `protobuf:"bytes,5,opt,name=password,proto3" json:"password" xml:"password,omitempty"`
AuthMode AuthMode `protobuf:"varint,6,opt,name=auth_mode,json=authMode,proto3,enum=config.AuthMode" json:"authMode" xml:"authMode,omitempty"`
RawUseTLS bool `protobuf:"varint,7,opt,name=use_tls,json=useTls,proto3" json:"useTLS" xml:"tls,attr"`
APIKey string `protobuf:"bytes,8,opt,name=api_key,json=apiKey,proto3" json:"apiKey" xml:"apikey,omitempty"`
InsecureAdminAccess bool `protobuf:"varint,9,opt,name=insecure_admin_access,json=insecureAdminAccess,proto3" json:"insecureAdminAccess" xml:"insecureAdminAccess,omitempty"`
Theme string `protobuf:"bytes,10,opt,name=theme,proto3" json:"theme" xml:"theme" default:"default"`
Debugging bool `protobuf:"varint,11,opt,name=debugging,proto3" json:"debugging" xml:"debugging,attr"`
InsecureSkipHostCheck bool `protobuf:"varint,12,opt,name=insecure_skip_host_check,json=insecureSkipHostCheck,proto3" json:"insecureSkipHostcheck" xml:"insecureSkipHostcheck,omitempty"`
InsecureAllowFrameLoading bool `protobuf:"varint,13,opt,name=insecure_allow_frame_loading,json=insecureAllowFrameLoading,proto3" json:"insecureAllowFrameLoading" xml:"insecureAllowFrameLoading,omitempty"`
SendBasicAuthPrompt bool `protobuf:"varint,14,opt,name=send_basic_auth_prompt,json=sendBasicAuthPrompt,proto3" json:"sendBasicAuthPrompt" xml:"sendBasicAuthPrompt,attr"`
}
func (m *GUIConfiguration) Reset() { *m = GUIConfiguration{} }
func (m *GUIConfiguration) String() string { return proto.CompactTextString(m) }
func (*GUIConfiguration) ProtoMessage() {}
func (*GUIConfiguration) Descriptor() ([]byte, []int) {
return fileDescriptor_2a9586d611855d64, []int{0}
}
func (m *GUIConfiguration) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *GUIConfiguration) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_GUIConfiguration.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *GUIConfiguration) XXX_Merge(src proto.Message) {
xxx_messageInfo_GUIConfiguration.Merge(m, src)
}
func (m *GUIConfiguration) XXX_Size() int {
return m.ProtoSize()
}
func (m *GUIConfiguration) XXX_DiscardUnknown() {
xxx_messageInfo_GUIConfiguration.DiscardUnknown(m)
}
var xxx_messageInfo_GUIConfiguration proto.InternalMessageInfo
func init() {
proto.RegisterType((*GUIConfiguration)(nil), "config.GUIConfiguration")
}
func init() { proto.RegisterFile("lib/config/guiconfiguration.proto", fileDescriptor_2a9586d611855d64) }
var fileDescriptor_2a9586d611855d64 = []byte{
// 888 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x55, 0xcd, 0x6e, 0xdb, 0x46,
0x10, 0x16, 0x5b, 0x47, 0xb2, 0xb6, 0x89, 0x60, 0xb0, 0x4d, 0xca, 0x04, 0x0d, 0xd7, 0x51, 0xd8,
0xc2, 0x01, 0x02, 0x39, 0x71, 0x5a, 0x24, 0xf0, 0xa1, 0x80, 0x1c, 0x20, 0x4d, 0x60, 0x17, 0x30,
0xe8, 0xfa, 0x92, 0x0b, 0xb1, 0x22, 0xd7, 0xd2, 0x42, 0xfc, 0x2b, 0x77, 0x09, 0x5b, 0x87, 0xf6,
0x01, 0x7a, 0x2a, 0xdc, 0x73, 0x81, 0x3e, 0x43, 0x2f, 0x7d, 0x85, 0xdc, 0xa4, 0x53, 0xd1, 0xd3,
0x02, 0x91, 0xd1, 0x0b, 0x8f, 0x3c, 0xe6, 0x54, 0xec, 0xf2, 0x47, 0xa2, 0x4c, 0x37, 0xb9, 0xed,
0x7c, 0xf3, 0xcd, 0x7c, 0x33, 0xc3, 0x19, 0x10, 0xdc, 0x73, 0xc9, 0x60, 0xdb, 0x0e, 0xfc, 0x13,
0x32, 0xdc, 0x1e, 0xc6, 0x24, 0x7b, 0xc5, 0x11, 0x62, 0x24, 0xf0, 0x7b, 0x61, 0x14, 0xb0, 0x40,
0x6d, 0x66, 0xe0, 0x9d, 0xdb, 0x4b, 0x54, 0x14, 0xb3, 0x91, 0x17, 0x38, 0x38, 0xa3, 0xdc, 0x69,
0xe3, 0x33, 0x96, 0x3d, 0xbb, 0xff, 0xde, 0x00, 0x1b, 0xdf, 0x1d, 0xbf, 0x7a, 0xbe, 0x9c, 0x48,
0x1d, 0x80, 0x16, 0xf6, 0xd1, 0xc0, 0xc5, 0x8e, 0xa6, 0x6c, 0x2a, 0x5b, 0xeb, 0x7b, 0x2f, 0x13,
0x0e, 0x0b, 0x28, 0xe5, 0xf0, 0xde, 0x99, 0xe7, 0xee, 0x76, 0x73, 0xfb, 0x21, 0x62, 0x2c, 0xea,
0x6e, 0x3a, 0xf8, 0x04, 0xc5, 0x2e, 0xdb, 0xed, 0xb2, 0x28, 0xc6, 0xdd, 0x64, 0x6a, 0x5c, 0x5f,
0xf6, 0xbf, 0x9b, 0x1a, 0x6b, 0xc2, 0x61, 0x16, 0x59, 0xd4, 0x9f, 0x40, 0x0b, 0x39, 0x4e, 0x84,
0x29, 0xd5, 0x3e, 0xda, 0x54, 0xb6, 0xda, 0x7b, 0xf6, 0x9c, 0x43, 0x60, 0xa2, 0xd3, 0x7e, 0x86,
0x0a, 0xc5, 0x9c, 0x90, 0x72, 0xf8, 0x95, 0x54, 0xcc, 0xed, 0x25, 0xb1, 0xc7, 0x3b, 0x4f, 0x7b,
0x8f, 0x7a, 0x8f, 0x7a, 0x8f, 0x77, 0x9f, 0x3d, 0x79, 0xf6, 0x75, 0xf7, 0xdd, 0xd4, 0xe8, 0x54,
0xa1, 0xf3, 0x99, 0xb1, 0x94, 0xd4, 0x2c, 0x52, 0xaa, 0x7f, 0x2b, 0xe0, 0xf3, 0xd8, 0x27, 0x67,
0x16, 0x0d, 0xec, 0x31, 0x66, 0x56, 0x88, 0x23, 0x8f, 0x50, 0x4a, 0x02, 0x9f, 0x6a, 0x1f, 0xcb,
0x7a, 0x7e, 0x57, 0xe6, 0x1c, 0x6a, 0x26, 0x3a, 0x3d, 0xf6, 0xc9, 0xd9, 0x91, 0x64, 0x1d, 0x2e,
0x48, 0x09, 0x87, 0x37, 0xe3, 0x3a, 0x47, 0xca, 0xe1, 0x97, 0xb2, 0xd8, 0x5a, 0xef, 0xc3, 0xc0,
0x23, 0x0c, 0x7b, 0x21, 0x9b, 0x88, 0x11, 0xc1, 0xf7, 0x70, 0xce, 0x67, 0xc6, 0x95, 0x05, 0x98,
0xf5, 0xf2, 0xea, 0x0b, 0xb0, 0x16, 0x53, 0x1c, 0x69, 0x6b, 0xb2, 0x89, 0x9d, 0x84, 0x43, 0x69,
0xa7, 0x1c, 0x7e, 0x96, 0x95, 0x45, 0x71, 0x54, 0xad, 0xa2, 0x53, 0x85, 0x4c, 0xc9, 0x57, 0x5f,
0x83, 0xf5, 0x10, 0x51, 0x7a, 0x1a, 0x44, 0x8e, 0x76, 0x4d, 0xe6, 0xfa, 0x36, 0xe1, 0xb0, 0xc4,
0x52, 0x0e, 0x35, 0x99, 0xaf, 0x00, 0xaa, 0x39, 0xd5, 0xcb, 0xb0, 0x59, 0xc6, 0xaa, 0x1e, 0x68,
0x8b, 0x8d, 0xb4, 0xc4, 0x4a, 0x6a, 0xcd, 0x4d, 0x65, 0xab, 0xb3, 0xb3, 0xd1, 0xcb, 0x56, 0xb5,
0xd7, 0x8f, 0xd9, 0xe8, 0xfb, 0xc0, 0xc1, 0x99, 0x1c, 0xca, 0xad, 0x52, 0xae, 0x00, 0x56, 0xe4,
0x2e, 0xc3, 0x66, 0x19, 0xab, 0x62, 0xd0, 0x8a, 0x29, 0xb6, 0x98, 0x4b, 0xb5, 0x96, 0x5c, 0xe7,
0x83, 0x39, 0x87, 0x6d, 0x31, 0x58, 0x8a, 0x7f, 0x38, 0x38, 0x4a, 0x38, 0x6c, 0xc6, 0xf2, 0x95,
0x72, 0xd8, 0x91, 0x2a, 0xcc, 0xa5, 0xd9, 0x5a, 0x27, 0x53, 0x63, 0xbd, 0x30, 0xd2, 0xa9, 0x91,
0xf3, 0xce, 0x67, 0xc6, 0x22, 0xdc, 0x94, 0xa0, 0x4b, 0x85, 0x0c, 0x0a, 0x89, 0x35, 0xc6, 0x13,
0x6d, 0x5d, 0x0e, 0x4c, 0xc8, 0x34, 0xfb, 0x87, 0xaf, 0xf6, 0xf1, 0x44, 0x68, 0xa0, 0x90, 0xec,
0xe3, 0x49, 0xca, 0xe1, 0xad, 0xac, 0x93, 0x90, 0x8c, 0xf1, 0xa4, 0xda, 0xc7, 0xc6, 0x2a, 0x78,
0x3e, 0x33, 0xf2, 0x0c, 0x66, 0x1e, 0xaf, 0xfe, 0xa6, 0x80, 0x9b, 0xc4, 0xa7, 0xd8, 0x8e, 0x23,
0x6c, 0x21, 0xc7, 0x23, 0xbe, 0x85, 0x6c, 0x5b, 0xdc, 0x51, 0x5b, 0x36, 0x67, 0x25, 0x1c, 0x7e,
0x5a, 0x10, 0xfa, 0xc2, 0xdf, 0x97, 0xee, 0x94, 0xc3, 0xfb, 0x52, 0xb8, 0xc6, 0x57, 0xad, 0xe2,
0xee, 0xff, 0x32, 0xcc, 0xba, 0xe4, 0xea, 0x3e, 0xb8, 0xc6, 0x46, 0xd8, 0xc3, 0x1a, 0x90, 0xad,
0x7f, 0x93, 0x70, 0x98, 0x01, 0x29, 0x87, 0x77, 0xb3, 0x99, 0x0a, 0x6b, 0xe9, 0x74, 0xf3, 0x87,
0xb8, 0xd9, 0x56, 0xfe, 0x36, 0xb3, 0x10, 0xf5, 0x18, 0xb4, 0x1d, 0x3c, 0x88, 0x87, 0x43, 0xe2,
0x0f, 0xb5, 0x4f, 0x64, 0x57, 0x4f, 0x13, 0x0e, 0x17, 0x60, 0xb9, 0xcd, 0x25, 0x52, 0x7e, 0xae,
0x4e, 0x15, 0x32, 0x17, 0x41, 0xea, 0x5f, 0x0a, 0xd0, 0xca, 0xc9, 0xd1, 0x31, 0x09, 0xad, 0x51,
0x40, 0x99, 0x65, 0x8f, 0xb0, 0x3d, 0xd6, 0xae, 0x4b, 0x99, 0x9f, 0xc5, 0x5d, 0x17, 0x9c, 0xa3,
0x31, 0x09, 0x5f, 0x06, 0x94, 0x49, 0x42, 0x79, 0xd7, 0xb5, 0xde, 0x95, 0xbb, 0x7e, 0x0f, 0x27,
0x9d, 0x1a, 0xf5, 0x22, 0xe6, 0x25, 0xf8, 0xb9, 0x80, 0xd5, 0x3f, 0x15, 0xf0, 0xc5, 0xe2, 0x9b,
0xbb, 0x6e, 0x70, 0x6a, 0x9d, 0x44, 0xc8, 0xc3, 0x96, 0x1b, 0x20, 0x47, 0x0c, 0xe9, 0x86, 0xac,
0xfe, 0xc7, 0x84, 0xc3, 0xdb, 0xe5, 0xd7, 0x11, 0xb4, 0x17, 0x82, 0x75, 0x90, 0x91, 0x52, 0x0e,
0x1f, 0x54, 0x17, 0x60, 0x95, 0x51, 0xed, 0xe2, 0xfe, 0x07, 0xf0, 0xcc, 0xab, 0xe5, 0xd4, 0x5f,
0x14, 0x70, 0x8b, 0x62, 0xdf, 0xb1, 0x06, 0x88, 0x12, 0xdb, 0x92, 0x17, 0x1f, 0x46, 0x81, 0x17,
0x32, 0xad, 0x23, 0xcb, 0x3d, 0x16, 0x9b, 0x2a, 0x18, 0x7b, 0x82, 0x20, 0x0e, 0xff, 0x50, 0xba,
0x53, 0x0e, 0x75, 0x59, 0x68, 0x8d, 0xaf, 0xfc, 0xce, 0xda, 0x55, 0x4e, 0xb3, 0x2e, 0xe5, 0xde,
0xfe, 0x9b, 0xb7, 0x7a, 0x63, 0xf6, 0x56, 0x6f, 0xbc, 0x99, 0xeb, 0xca, 0x6c, 0xae, 0x2b, 0xbf,
0x5e, 0xe8, 0x8d, 0x3f, 0x2e, 0x74, 0x65, 0x76, 0xa1, 0x37, 0xfe, 0xb9, 0xd0, 0x1b, 0xaf, 0x1f,
0x0c, 0x09, 0x1b, 0xc5, 0x83, 0x9e, 0x1d, 0x78, 0xdb, 0x74, 0xe2, 0xdb, 0x6c, 0x44, 0xfc, 0xe1,
0xd2, 0x6b, 0xf1, 0x3b, 0x1d, 0x34, 0xe5, 0xbf, 0xf3, 0xc9, 0x7f, 0x01, 0x00, 0x00, 0xff, 0xff,
0xfe, 0x19, 0xb2, 0x3c, 0x8e, 0x07, 0x00, 0x00,
}
func (m *GUIConfiguration) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *GUIConfiguration) MarshalTo(dAtA []byte) (int, error) {
size := m.ProtoSize()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *GUIConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.SendBasicAuthPrompt {
i--
if m.SendBasicAuthPrompt {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
dAtA[i] = 0x70
}
if m.InsecureAllowFrameLoading {
i--
if m.InsecureAllowFrameLoading {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
dAtA[i] = 0x68
}
if m.InsecureSkipHostCheck {
i--
if m.InsecureSkipHostCheck {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
dAtA[i] = 0x60
}
if m.Debugging {
i--
if m.Debugging {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
dAtA[i] = 0x58
}
if len(m.Theme) > 0 {
i -= len(m.Theme)
copy(dAtA[i:], m.Theme)
i = encodeVarintGuiconfiguration(dAtA, i, uint64(len(m.Theme)))
i--
dAtA[i] = 0x52
}
if m.InsecureAdminAccess {
i--
if m.InsecureAdminAccess {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
dAtA[i] = 0x48
}
if len(m.APIKey) > 0 {
i -= len(m.APIKey)
copy(dAtA[i:], m.APIKey)
i = encodeVarintGuiconfiguration(dAtA, i, uint64(len(m.APIKey)))
i--
dAtA[i] = 0x42
}
if m.RawUseTLS {
i--
if m.RawUseTLS {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
dAtA[i] = 0x38
}
if m.AuthMode != 0 {
i = encodeVarintGuiconfiguration(dAtA, i, uint64(m.AuthMode))
i--
dAtA[i] = 0x30
}
if len(m.Password) > 0 {
i -= len(m.Password)
copy(dAtA[i:], m.Password)
i = encodeVarintGuiconfiguration(dAtA, i, uint64(len(m.Password)))
i--
dAtA[i] = 0x2a
}
if len(m.User) > 0 {
i -= len(m.User)
copy(dAtA[i:], m.User)
i = encodeVarintGuiconfiguration(dAtA, i, uint64(len(m.User)))
i--
dAtA[i] = 0x22
}
if len(m.RawUnixSocketPermissions) > 0 {
i -= len(m.RawUnixSocketPermissions)
copy(dAtA[i:], m.RawUnixSocketPermissions)
i = encodeVarintGuiconfiguration(dAtA, i, uint64(len(m.RawUnixSocketPermissions)))
i--
dAtA[i] = 0x1a
}
if len(m.RawAddress) > 0 {
i -= len(m.RawAddress)
copy(dAtA[i:], m.RawAddress)
i = encodeVarintGuiconfiguration(dAtA, i, uint64(len(m.RawAddress)))
i--
dAtA[i] = 0x12
}
if m.Enabled {
i--
if m.Enabled {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
func encodeVarintGuiconfiguration(dAtA []byte, offset int, v uint64) int {
offset -= sovGuiconfiguration(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *GUIConfiguration) ProtoSize() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Enabled {
n += 2
}
l = len(m.RawAddress)
if l > 0 {
n += 1 + l + sovGuiconfiguration(uint64(l))
}
l = len(m.RawUnixSocketPermissions)
if l > 0 {
n += 1 + l + sovGuiconfiguration(uint64(l))
}
l = len(m.User)
if l > 0 {
n += 1 + l + sovGuiconfiguration(uint64(l))
}
l = len(m.Password)
if l > 0 {
n += 1 + l + sovGuiconfiguration(uint64(l))
}
if m.AuthMode != 0 {
n += 1 + sovGuiconfiguration(uint64(m.AuthMode))
}
if m.RawUseTLS {
n += 2
}
l = len(m.APIKey)
if l > 0 {
n += 1 + l + sovGuiconfiguration(uint64(l))
}
if m.InsecureAdminAccess {
n += 2
}
l = len(m.Theme)
if l > 0 {
n += 1 + l + sovGuiconfiguration(uint64(l))
}
if m.Debugging {
n += 2
}
if m.InsecureSkipHostCheck {
n += 2
}
if m.InsecureAllowFrameLoading {
n += 2
}
if m.SendBasicAuthPrompt {
n += 2
}
return n
}
func sovGuiconfiguration(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozGuiconfiguration(x uint64) (n int) {
return sovGuiconfiguration(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *GUIConfiguration) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: GUIConfiguration: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: GUIConfiguration: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Enabled", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.Enabled = bool(v != 0)
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field RawAddress", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthGuiconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthGuiconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.RawAddress = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field RawUnixSocketPermissions", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthGuiconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthGuiconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.RawUnixSocketPermissions = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field User", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthGuiconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthGuiconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.User = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Password", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthGuiconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthGuiconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Password = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 6:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field AuthMode", wireType)
}
m.AuthMode = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.AuthMode |= AuthMode(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 7:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field RawUseTLS", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.RawUseTLS = bool(v != 0)
case 8:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field APIKey", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthGuiconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthGuiconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.APIKey = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 9:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field InsecureAdminAccess", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.InsecureAdminAccess = bool(v != 0)
case 10:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Theme", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthGuiconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthGuiconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Theme = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 11:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Debugging", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.Debugging = bool(v != 0)
case 12:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field InsecureSkipHostCheck", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.InsecureSkipHostCheck = bool(v != 0)
case 13:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field InsecureAllowFrameLoading", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.InsecureAllowFrameLoading = bool(v != 0)
case 14:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field SendBasicAuthPrompt", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.SendBasicAuthPrompt = bool(v != 0)
default:
iNdEx = preIndex
skippy, err := skipGuiconfiguration(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthGuiconfiguration
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipGuiconfiguration(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowGuiconfiguration
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthGuiconfiguration
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupGuiconfiguration
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthGuiconfiguration
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthGuiconfiguration = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowGuiconfiguration = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupGuiconfiguration = fmt.Errorf("proto: unexpected end of group")
)
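The encodeVarint*/sov*/soz* helpers deleted above are gogo's hand-rolled base-128 varint framing (soz* additionally applies zigzag encoding for signed values); the replacement code generated with buf leans on the standard protobuf runtime instead. For orientation only, a minimal standalone sketch of the same varint encoding and size calculation:

```go
// Standalone sketch of protobuf base-128 varints; for illustration, not part of this change.
package main

import (
	"fmt"
	"math/bits"
)

// appendUvarint writes v in little-endian base-128: seven payload bits per
// byte, with the high bit set on every byte except the last.
func appendUvarint(buf []byte, v uint64) []byte {
	for v >= 0x80 {
		buf = append(buf, byte(v)|0x80)
		v >>= 7
	}
	return append(buf, byte(v))
}

// uvarintSize mirrors the deleted sov* helpers: the encoded length in bytes.
func uvarintSize(v uint64) int {
	return (bits.Len64(v|1) + 6) / 7
}

func main() {
	b := appendUvarint(nil, 300)
	fmt.Printf("% x, size %d\n", b, uvarintSize(300)) // ac 02, size 2
}
```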


@ -6,6 +6,15 @@
package config
type LDAPConfiguration struct {
Address string `json:"address" xml:"address,omitempty"`
BindDN string `json:"bindDN" xml:"bindDN,omitempty"`
Transport LDAPTransport `json:"transport" xml:"transport,omitempty"`
InsecureSkipVerify bool `json:"insecureSkipVerify" xml:"insecureSkipVerify,omitempty" default:"false"`
SearchBaseDN string `json:"searchBaseDN" xml:"searchBaseDN,omitempty"`
SearchFilter string `json:"searchFilter" xml:"searchFilter,omitempty"`
}
func (c LDAPConfiguration) Copy() LDAPConfiguration {
return c
}


@ -1,526 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/ldapconfiguration.proto
package config
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
_ "github.com/syncthing/syncthing/proto/ext"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type LDAPConfiguration struct {
Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address" xml:"address,omitempty"`
BindDN string `protobuf:"bytes,2,opt,name=bind_dn,json=bindDn,proto3" json:"bindDN" xml:"bindDN,omitempty"`
Transport LDAPTransport `protobuf:"varint,3,opt,name=transport,proto3,enum=config.LDAPTransport" json:"transport" xml:"transport,omitempty"`
InsecureSkipVerify bool `protobuf:"varint,4,opt,name=insecure_skip_verify,json=insecureSkipVerify,proto3" json:"insecureSkipVerify" xml:"insecureSkipVerify,omitempty" default:"false"`
SearchBaseDN string `protobuf:"bytes,5,opt,name=search_base_dn,json=searchBaseDn,proto3" json:"searchBaseDN" xml:"searchBaseDN,omitempty"`
SearchFilter string `protobuf:"bytes,6,opt,name=search_filter,json=searchFilter,proto3" json:"searchFilter" xml:"searchFilter,omitempty"`
}
func (m *LDAPConfiguration) Reset() { *m = LDAPConfiguration{} }
func (m *LDAPConfiguration) String() string { return proto.CompactTextString(m) }
func (*LDAPConfiguration) ProtoMessage() {}
func (*LDAPConfiguration) Descriptor() ([]byte, []int) {
return fileDescriptor_9681ad7e41c73956, []int{0}
}
func (m *LDAPConfiguration) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *LDAPConfiguration) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_LDAPConfiguration.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *LDAPConfiguration) XXX_Merge(src proto.Message) {
xxx_messageInfo_LDAPConfiguration.Merge(m, src)
}
func (m *LDAPConfiguration) XXX_Size() int {
return m.ProtoSize()
}
func (m *LDAPConfiguration) XXX_DiscardUnknown() {
xxx_messageInfo_LDAPConfiguration.DiscardUnknown(m)
}
var xxx_messageInfo_LDAPConfiguration proto.InternalMessageInfo
func init() {
proto.RegisterType((*LDAPConfiguration)(nil), "config.LDAPConfiguration")
}
func init() {
proto.RegisterFile("lib/config/ldapconfiguration.proto", fileDescriptor_9681ad7e41c73956)
}
var fileDescriptor_9681ad7e41c73956 = []byte{
// 500 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x53, 0xbf, 0x6f, 0xd3, 0x40,
0x14, 0xb6, 0x81, 0xba, 0xc4, 0x2a, 0x15, 0x35, 0x50, 0x4c, 0x55, 0xf9, 0x22, 0xcb, 0x43, 0x90,
0x50, 0x22, 0x95, 0xad, 0x4c, 0x35, 0x15, 0x03, 0x20, 0x84, 0x5c, 0xe8, 0xc0, 0x12, 0xf9, 0xc7,
0x39, 0x39, 0xd5, 0x39, 0x5b, 0x77, 0xe7, 0xaa, 0xe1, 0xaf, 0x80, 0xfe, 0x05, 0xdd, 0xf8, 0x57,
0xba, 0xc5, 0x23, 0xd3, 0x49, 0x4d, 0x36, 0x8f, 0x1e, 0x99, 0x50, 0xce, 0x4e, 0x63, 0xa7, 0x51,
0xb7, 0xf7, 0xbe, 0xef, 0xbd, 0xef, 0x7d, 0x77, 0x4f, 0x4f, 0x35, 0x23, 0xe4, 0xf5, 0xfc, 0x18,
0x87, 0x68, 0xd0, 0x8b, 0x02, 0x37, 0x29, 0xc3, 0x94, 0xb8, 0x0c, 0xc5, 0xb8, 0x9b, 0x90, 0x98,
0xc5, 0x9a, 0x52, 0x82, 0x7b, 0xc6, 0x4a, 0x2d, 0x23, 0x2e, 0xa6, 0x49, 0x4c, 0x58, 0x59, 0xb7,
0xd7, 0x82, 0x17, 0x55, 0x68, 0xfe, 0x56, 0xd4, 0x9d, 0xcf, 0xc7, 0x47, 0x5f, 0xdf, 0xd7, 0xe5,
0xb4, 0xef, 0xea, 0xa6, 0x1b, 0x04, 0x04, 0x52, 0xaa, 0xcb, 0x6d, 0xb9, 0xd3, 0xb2, 0xdf, 0xe5,
0x1c, 0x2c, 0xa0, 0x82, 0x83, 0x97, 0x17, 0xa3, 0xe8, 0xd0, 0xac, 0xf2, 0x37, 0xf1, 0x08, 0x31,
0x38, 0x4a, 0xd8, 0xd8, 0xcc, 0x27, 0xd6, 0xce, 0x1d, 0xd4, 0x59, 0x34, 0x6a, 0xb1, 0xba, 0xe9,
0x21, 0x1c, 0xf4, 0x03, 0xac, 0x3f, 0x10, 0xb2, 0xa7, 0x53, 0x0e, 0x14, 0x1b, 0xe1, 0xe0, 0xf8,
0x4b, 0xce, 0x81, 0xe2, 0x89, 0xa8, 0xe0, 0x60, 0x57, 0xe8, 0x97, 0x69, 0x53, 0xfe, 0xe9, 0x2a,
0x58, 0x4c, 0xac, 0xaa, 0xef, 0x32, 0xb3, 0x2a, 0x2d, 0xa7, 0x44, 0xb0, 0x76, 0xae, 0xb6, 0x6e,
0xdf, 0xae, 0x3f, 0x6c, 0xcb, 0x9d, 0xed, 0x83, 0x17, 0xdd, 0xf2, 0x63, 0xba, 0xf3, 0x57, 0x7f,
0x5b, 0x90, 0xf6, 0x51, 0xce, 0xc1, 0xb2, 0xb6, 0xe0, 0xe0, 0x95, 0xb0, 0x70, 0x8b, 0x34, 0x5d,
0x3c, 0x5b, 0x83, 0x3b, 0xcb, 0x76, 0xed, 0x8f, 0xac, 0x3e, 0x47, 0x98, 0x42, 0x3f, 0x25, 0xb0,
0x4f, 0xcf, 0x50, 0xd2, 0x3f, 0x87, 0x04, 0x85, 0x63, 0xfd, 0x51, 0x5b, 0xee, 0x3c, 0xb6, 0xd3,
0x9c, 0x03, 0x6d, 0xc1, 0x9f, 0x9c, 0xa1, 0xe4, 0x54, 0xb0, 0x05, 0x07, 0x07, 0x62, 0xea, 0x5d,
0xaa, 0x36, 0xbe, 0x1d, 0xc0, 0xd0, 0x4d, 0x23, 0x76, 0x68, 0x86, 0x6e, 0x44, 0xe1, 0xdc, 0xce,
0xfe, 0x7d, 0x0d, 0xff, 0x26, 0xd6, 0x86, 0xa8, 0x74, 0xd6, 0x8c, 0xd4, 0xae, 0x64, 0x75, 0x9b,
0x42, 0x97, 0xf8, 0xc3, 0xbe, 0xe7, 0x52, 0x38, 0x5f, 0xcd, 0x86, 0x58, 0xcd, 0xcf, 0x29, 0x07,
0x5b, 0x27, 0x82, 0xb1, 0x5d, 0x0a, 0xc5, 0x82, 0xb6, 0x68, 0x2d, 0x2f, 0x38, 0xd8, 0x17, 0x6e,
0xeb, 0x60, 0xf3, 0x9b, 0x76, 0xd7, 0x53, 0xc5, 0xc4, 0x6a, 0x28, 0x5d, 0x66, 0x56, 0x63, 0x92,
0x53, 0x67, 0xb1, 0x16, 0xab, 0x4f, 0x2a, 0x87, 0x21, 0x8a, 0x18, 0x24, 0xba, 0x22, 0x0c, 0x7e,
0x5c, 0x1a, 0xfa, 0x20, 0xf0, 0x15, 0x43, 0x25, 0xb8, 0xd6, 0xd0, 0x2a, 0xe5, 0x34, 0x74, 0xec,
0x4f, 0xd7, 0x37, 0x86, 0x94, 0xdd, 0x18, 0xd2, 0xf5, 0xd4, 0x90, 0xb3, 0xa9, 0x21, 0xff, 0x9a,
0x19, 0xd2, 0xd5, 0xcc, 0x90, 0xb3, 0x99, 0x21, 0xfd, 0x9d, 0x19, 0xd2, 0x8f, 0xd7, 0x03, 0xc4,
0x86, 0xa9, 0xd7, 0xf5, 0xe3, 0x51, 0x8f, 0x8e, 0xb1, 0xcf, 0x86, 0x08, 0x0f, 0x6a, 0xd1, 0xf2,
0xfe, 0x3c, 0x45, 0xdc, 0xd9, 0xdb, 0xff, 0x01, 0x00, 0x00, 0xff, 0xff, 0xbf, 0x64, 0x4b, 0x05,
0xc0, 0x03, 0x00, 0x00,
}
func (m *LDAPConfiguration) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *LDAPConfiguration) MarshalTo(dAtA []byte) (int, error) {
size := m.ProtoSize()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *LDAPConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.SearchFilter) > 0 {
i -= len(m.SearchFilter)
copy(dAtA[i:], m.SearchFilter)
i = encodeVarintLdapconfiguration(dAtA, i, uint64(len(m.SearchFilter)))
i--
dAtA[i] = 0x32
}
if len(m.SearchBaseDN) > 0 {
i -= len(m.SearchBaseDN)
copy(dAtA[i:], m.SearchBaseDN)
i = encodeVarintLdapconfiguration(dAtA, i, uint64(len(m.SearchBaseDN)))
i--
dAtA[i] = 0x2a
}
if m.InsecureSkipVerify {
i--
if m.InsecureSkipVerify {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
dAtA[i] = 0x20
}
if m.Transport != 0 {
i = encodeVarintLdapconfiguration(dAtA, i, uint64(m.Transport))
i--
dAtA[i] = 0x18
}
if len(m.BindDN) > 0 {
i -= len(m.BindDN)
copy(dAtA[i:], m.BindDN)
i = encodeVarintLdapconfiguration(dAtA, i, uint64(len(m.BindDN)))
i--
dAtA[i] = 0x12
}
if len(m.Address) > 0 {
i -= len(m.Address)
copy(dAtA[i:], m.Address)
i = encodeVarintLdapconfiguration(dAtA, i, uint64(len(m.Address)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func encodeVarintLdapconfiguration(dAtA []byte, offset int, v uint64) int {
offset -= sovLdapconfiguration(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *LDAPConfiguration) ProtoSize() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Address)
if l > 0 {
n += 1 + l + sovLdapconfiguration(uint64(l))
}
l = len(m.BindDN)
if l > 0 {
n += 1 + l + sovLdapconfiguration(uint64(l))
}
if m.Transport != 0 {
n += 1 + sovLdapconfiguration(uint64(m.Transport))
}
if m.InsecureSkipVerify {
n += 2
}
l = len(m.SearchBaseDN)
if l > 0 {
n += 1 + l + sovLdapconfiguration(uint64(l))
}
l = len(m.SearchFilter)
if l > 0 {
n += 1 + l + sovLdapconfiguration(uint64(l))
}
return n
}
func sovLdapconfiguration(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozLdapconfiguration(x uint64) (n int) {
return sovLdapconfiguration(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *LDAPConfiguration) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: LDAPConfiguration: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: LDAPConfiguration: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthLdapconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthLdapconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Address = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field BindDN", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthLdapconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthLdapconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.BindDN = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Transport", wireType)
}
m.Transport = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Transport |= LDAPTransport(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 4:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field InsecureSkipVerify", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.InsecureSkipVerify = bool(v != 0)
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field SearchBaseDN", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthLdapconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthLdapconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.SearchBaseDN = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 6:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field SearchFilter", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthLdapconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthLdapconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.SearchFilter = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipLdapconfiguration(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthLdapconfiguration
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipLdapconfiguration(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLdapconfiguration
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthLdapconfiguration
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupLdapconfiguration
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthLdapconfiguration
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthLdapconfiguration = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowLdapconfiguration = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupLdapconfiguration = fmt.Errorf("proto: unexpected end of group")
)


@ -6,6 +6,14 @@
package config
type LDAPTransport int32
const (
LDAPTransportPlain LDAPTransport = 0
LDAPTransportTLS LDAPTransport = 2
LDAPTransportStartTLS LDAPTransport = 3
)
func (t LDAPTransport) String() string {
switch t {
case LDAPTransportPlain:


@ -1,75 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/ldaptransport.proto
package config
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
_ "github.com/syncthing/syncthing/proto/ext"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type LDAPTransport int32
const (
LDAPTransportPlain LDAPTransport = 0
LDAPTransportTLS LDAPTransport = 2
LDAPTransportStartTLS LDAPTransport = 3
)
var LDAPTransport_name = map[int32]string{
0: "LDAP_TRANSPORT_PLAIN",
2: "LDAP_TRANSPORT_TLS",
3: "LDAP_TRANSPORT_START_TLS",
}
var LDAPTransport_value = map[string]int32{
"LDAP_TRANSPORT_PLAIN": 0,
"LDAP_TRANSPORT_TLS": 2,
"LDAP_TRANSPORT_START_TLS": 3,
}
func (LDAPTransport) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_79795fc8505b82bf, []int{0}
}
func init() {
proto.RegisterEnum("config.LDAPTransport", LDAPTransport_name, LDAPTransport_value)
}
func init() { proto.RegisterFile("lib/config/ldaptransport.proto", fileDescriptor_79795fc8505b82bf) }
var fileDescriptor_79795fc8505b82bf = []byte{
// 273 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0xcb, 0xc9, 0x4c, 0xd2,
0x4f, 0xce, 0xcf, 0x4b, 0xcb, 0x4c, 0xd7, 0xcf, 0x49, 0x49, 0x2c, 0x28, 0x29, 0x4a, 0xcc, 0x2b,
0x2e, 0xc8, 0x2f, 0x2a, 0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x83, 0xc8, 0x49, 0x29,
0x17, 0xa5, 0x16, 0xe4, 0x17, 0xeb, 0x83, 0x05, 0x93, 0x4a, 0xd3, 0xf4, 0xd3, 0xf3, 0xd3, 0xf3,
0xc1, 0x1c, 0x30, 0x0b, 0xa2, 0x58, 0x8a, 0x33, 0xb5, 0x02, 0xaa, 0x4f, 0xeb, 0x23, 0x23, 0x17,
0xaf, 0x8f, 0x8b, 0x63, 0x40, 0x08, 0xcc, 0x3c, 0x21, 0x37, 0x2e, 0x11, 0x90, 0x40, 0x7c, 0x48,
0x90, 0xa3, 0x5f, 0x70, 0x80, 0x7f, 0x50, 0x48, 0x7c, 0x80, 0x8f, 0xa3, 0xa7, 0x9f, 0x00, 0x83,
0x94, 0x4e, 0xd7, 0x5c, 0x05, 0x21, 0x14, 0xc5, 0x01, 0x39, 0x89, 0x99, 0x79, 0x97, 0xfa, 0x54,
0xb1, 0x88, 0x0a, 0x39, 0x70, 0x09, 0xa1, 0x99, 0x13, 0xe2, 0x13, 0x2c, 0xc0, 0x24, 0xa5, 0xd1,
0x35, 0x57, 0x41, 0x00, 0x45, 0x7d, 0x88, 0x4f, 0xf0, 0xa5, 0x3e, 0x55, 0x0c, 0x31, 0xa1, 0x00,
0x2e, 0x09, 0x34, 0x13, 0x82, 0x43, 0x1c, 0xa1, 0xe6, 0x30, 0x4b, 0x19, 0x75, 0xcd, 0x55, 0x10,
0x45, 0xd1, 0x13, 0x5c, 0x92, 0x08, 0x33, 0x0c, 0xbb, 0x84, 0x14, 0xcb, 0x8a, 0x25, 0x72, 0x0c,
0x4e, 0xde, 0x27, 0x1e, 0xca, 0x31, 0x5c, 0x78, 0x28, 0xc7, 0x70, 0xe2, 0x91, 0x1c, 0xe3, 0x85,
0x47, 0x72, 0x8c, 0x13, 0x1e, 0xcb, 0x31, 0x2c, 0x78, 0x2c, 0xc7, 0x78, 0xe1, 0xb1, 0x1c, 0xc3,
0x8d, 0xc7, 0x72, 0x0c, 0x51, 0x9a, 0xe9, 0x99, 0x25, 0x19, 0xa5, 0x49, 0x7a, 0xc9, 0xf9, 0xb9,
0xfa, 0xc5, 0x95, 0x79, 0xc9, 0x25, 0x19, 0x99, 0x79, 0xe9, 0x48, 0x2c, 0x44, 0x64, 0x24, 0xb1,
0x81, 0xc3, 0xd1, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0xc1, 0x56, 0xde, 0x17, 0xa1, 0x01, 0x00,
0x00,
}


@ -114,7 +114,7 @@ func migrateToConfigV35(cfg *Configuration) {
for i, fcfg := range cfg.Folders {
params := fcfg.Versioning.Params
if params["fsType"] != "" {
- var fsType fs.FilesystemType
+ var fsType FilesystemType
_ = fsType.UnmarshalText([]byte(params["fsType"]))
cfg.Folders[i].Versioning.FSType = fsType
}
@ -228,7 +228,7 @@ func migrateToConfigV23(cfg *Configuration) {
func migrateToConfigV22(cfg *Configuration) {
for i := range cfg.Folders {
- cfg.Folders[i].FilesystemType = fs.FilesystemTypeBasic
+ cfg.Folders[i].FilesystemType = FilesystemTypeBasic
// Migrate to templated external versioner commands
if cfg.Folders[i].Versioning.Type == "external" {
cfg.Folders[i].Versioning.Params["command"] += " %FOLDER_PATH% %FILE_PATH%"
@ -238,7 +238,7 @@ func migrateToConfigV22(cfg *Configuration) {
func migrateToConfigV21(cfg *Configuration) {
for _, folder := range cfg.Folders {
- if folder.FilesystemType != fs.FilesystemTypeBasic {
+ if folder.FilesystemType != FilesystemTypeBasic {
continue
}
switch folder.Versioning.Type {

lib/config/observed.go (new file, 26 lines)

@ -0,0 +1,26 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package config
import (
"time"
"github.com/syncthing/syncthing/lib/protocol"
)
type ObservedFolder struct {
Time time.Time `json:"time" xml:"time,attr"`
ID string `json:"id" xml:"id,attr"`
Label string `json:"label" xml:"label,attr"`
}
type ObservedDevice struct {
Time time.Time `json:"time" xml:"time,attr"`
ID protocol.DeviceID `json:"deviceID" xml:"id,attr"`
Name string `json:"name" xml:"name,attr"`
Address string `json:"address" xml:"address,attr"`
}


@ -1,716 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/observed.proto
package config
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
github_com_gogo_protobuf_types "github.com/gogo/protobuf/types"
github_com_syncthing_syncthing_lib_protocol "github.com/syncthing/syncthing/lib/protocol"
_ "github.com/syncthing/syncthing/proto/ext"
_ "google.golang.org/protobuf/types/known/timestamppb"
io "io"
math "math"
math_bits "math/bits"
time "time"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
var _ = time.Kitchen
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type ObservedFolder struct {
Time time.Time `protobuf:"bytes,1,opt,name=time,proto3,stdtime" json:"time" xml:"time,attr"`
ID string `protobuf:"bytes,2,opt,name=id,proto3" json:"id" xml:"id,attr"`
Label string `protobuf:"bytes,3,opt,name=label,proto3" json:"label" xml:"label,attr"`
}
func (m *ObservedFolder) Reset() { *m = ObservedFolder{} }
func (m *ObservedFolder) String() string { return proto.CompactTextString(m) }
func (*ObservedFolder) ProtoMessage() {}
func (*ObservedFolder) Descriptor() ([]byte, []int) {
return fileDescriptor_49f68ff7b178722f, []int{0}
}
func (m *ObservedFolder) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ObservedFolder) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ObservedFolder.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ObservedFolder) XXX_Merge(src proto.Message) {
xxx_messageInfo_ObservedFolder.Merge(m, src)
}
func (m *ObservedFolder) XXX_Size() int {
return m.ProtoSize()
}
func (m *ObservedFolder) XXX_DiscardUnknown() {
xxx_messageInfo_ObservedFolder.DiscardUnknown(m)
}
var xxx_messageInfo_ObservedFolder proto.InternalMessageInfo
type ObservedDevice struct {
Time time.Time `protobuf:"bytes,1,opt,name=time,proto3,stdtime" json:"time" xml:"time,attr"`
ID github_com_syncthing_syncthing_lib_protocol.DeviceID `protobuf:"bytes,2,opt,name=id,proto3,customtype=github.com/syncthing/syncthing/lib/protocol.DeviceID" json:"deviceID" xml:"id,attr"`
Name string `protobuf:"bytes,3,opt,name=name,proto3" json:"name" xml:"name,attr"`
Address string `protobuf:"bytes,4,opt,name=address,proto3" json:"address" xml:"address,attr"`
}
func (m *ObservedDevice) Reset() { *m = ObservedDevice{} }
func (m *ObservedDevice) String() string { return proto.CompactTextString(m) }
func (*ObservedDevice) ProtoMessage() {}
func (*ObservedDevice) Descriptor() ([]byte, []int) {
return fileDescriptor_49f68ff7b178722f, []int{1}
}
func (m *ObservedDevice) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ObservedDevice) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ObservedDevice.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ObservedDevice) XXX_Merge(src proto.Message) {
xxx_messageInfo_ObservedDevice.Merge(m, src)
}
func (m *ObservedDevice) XXX_Size() int {
return m.ProtoSize()
}
func (m *ObservedDevice) XXX_DiscardUnknown() {
xxx_messageInfo_ObservedDevice.DiscardUnknown(m)
}
var xxx_messageInfo_ObservedDevice proto.InternalMessageInfo
func init() {
proto.RegisterType((*ObservedFolder)(nil), "config.ObservedFolder")
proto.RegisterType((*ObservedDevice)(nil), "config.ObservedDevice")
}
func init() { proto.RegisterFile("lib/config/observed.proto", fileDescriptor_49f68ff7b178722f) }
var fileDescriptor_49f68ff7b178722f = []byte{
// 440 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x93, 0x3f, 0x6f, 0xd4, 0x30,
0x00, 0xc5, 0xe3, 0xf4, 0x68, 0x39, 0x53, 0xfe, 0x28, 0xd3, 0x71, 0x83, 0x5d, 0x9d, 0x32, 0x1c,
0x02, 0x25, 0xfc, 0x9b, 0x10, 0x42, 0xe2, 0x74, 0x42, 0x3a, 0x75, 0x40, 0x8a, 0x98, 0x98, 0x48,
0x62, 0x37, 0xb5, 0x94, 0x9c, 0xab, 0xc4, 0xad, 0xca, 0xc6, 0xc8, 0xd8, 0xf2, 0x09, 0xf8, 0x38,
0xb7, 0x5d, 0x46, 0xc4, 0x60, 0xd4, 0xcb, 0x96, 0x31, 0x12, 0x3b, 0x8a, 0x9d, 0xf8, 0x6e, 0x42,
0x4c, 0x6c, 0x7e, 0x4f, 0xcf, 0x3f, 0xf9, 0xbd, 0x28, 0xf0, 0x61, 0xca, 0x22, 0x3f, 0xe6, 0xcb,
0x13, 0x96, 0xf8, 0x3c, 0x2a, 0x68, 0x7e, 0x41, 0x89, 0x77, 0x96, 0x73, 0xc1, 0x9d, 0x7d, 0x6d,
0x8f, 0x71, 0xc2, 0x79, 0x92, 0x52, 0x5f, 0xb9, 0xd1, 0xf9, 0x89, 0x2f, 0x58, 0x46, 0x0b, 0x11,
0x66, 0x67, 0x3a, 0x38, 0x1e, 0xd2, 0x4b, 0xa1, 0x8f, 0x93, 0xdf, 0x00, 0xde, 0x7b, 0xdf, 0x61,
0xde, 0xf1, 0x94, 0xd0, 0xdc, 0xf9, 0x04, 0x07, 0xed, 0x85, 0x11, 0x38, 0x02, 0xd3, 0x3b, 0xcf,
0xc7, 0x9e, 0xa6, 0x79, 0x3d, 0xcd, 0xfb, 0xd0, 0xd3, 0x66, 0x4f, 0x57, 0x12, 0x5b, 0xb5, 0xc4,
0x2a, 0xdf, 0x48, 0x7c, 0xff, 0x32, 0x4b, 0x5f, 0x4d, 0x5a, 0xf1, 0x24, 0x14, 0x22, 0x9f, 0x5c,
0xfd, 0xc2, 0xa0, 0x5e, 0xbb, 0x43, 0xe3, 0x04, 0x2a, 0xe9, 0xbc, 0x81, 0x36, 0x23, 0x23, 0xfb,
0x08, 0x4c, 0x87, 0x33, 0x6f, 0x23, 0xb1, 0xbd, 0x98, 0xd7, 0x12, 0xdb, 0x8c, 0x34, 0x12, 0xdf,
0x55, 0x0c, 0x46, 0x34, 0xa1, 0x5e, 0xbb, 0x07, 0xdd, 0xf9, 0x5b, 0xe9, 0xda, 0x8b, 0x79, 0x60,
0x33, 0xe2, 0xbc, 0x85, 0xb7, 0xd2, 0x30, 0xa2, 0xe9, 0x68, 0x4f, 0x21, 0x1e, 0xd7, 0x12, 0x6b,
0xa3, 0x91, 0xf8, 0x81, 0xba, 0xaf, 0x94, 0x41, 0xc0, 0xad, 0x0c, 0x74, 0x70, 0x72, 0xbd, 0xb7,
0xed, 0x3d, 0xa7, 0x17, 0x2c, 0xa6, 0xff, 0xa1, 0xf7, 0x35, 0x30, 0xc5, 0x0f, 0x67, 0x5f, 0x40,
0x4b, 0xf9, 0x29, 0xf1, 0xcb, 0x84, 0x89, 0xd3, 0xf3, 0xc8, 0x8b, 0x79, 0xe6, 0x17, 0x9f, 0x97,
0xb1, 0x38, 0x65, 0xcb, 0x64, 0xe7, 0xd4, 0x7e, 0x70, 0xf5, 0x88, 0x98, 0xa7, 0x9e, 0x7e, 0xeb,
0x62, 0x6e, 0x56, 0xbb, 0x4d, 0x3a, 0xe7, 0x6f, 0xdb, 0x35, 0x6b, 0xd7, 0xe4, 0xbe, 0x96, 0x2e,
0xd8, 0xd9, 0xf2, 0x35, 0x1c, 0x2c, 0xc3, 0x8c, 0x76, 0x53, 0x4e, 0xdb, 0x56, 0xad, 0x36, 0xad,
0x5a, 0x61, 0x78, 0x43, 0xa3, 0x02, 0x95, 0x72, 0x8e, 0xe1, 0x41, 0x48, 0x48, 0x4e, 0x8b, 0x62,
0x34, 0x50, 0x80, 0x67, 0xb5, 0xc4, 0xbd, 0xd5, 0x48, 0xec, 0x28, 0x46, 0xa7, 0x0d, 0xe6, 0x70,
0xd7, 0x08, 0xfa, 0xf8, 0xec, 0x78, 0x75, 0x83, 0xac, 0xf2, 0x06, 0x59, 0xab, 0x0d, 0x02, 0xe5,
0x06, 0x81, 0xab, 0x0a, 0x59, 0xdf, 0x2b, 0x04, 0xca, 0x0a, 0x59, 0x3f, 0x2a, 0x64, 0x7d, 0x7c,
0xf4, 0x0f, 0x53, 0xe9, 0x9f, 0x20, 0xda, 0x57, 0x93, 0xbd, 0xf8, 0x13, 0x00, 0x00, 0xff, 0xff,
0xd0, 0xf0, 0x82, 0x78, 0x30, 0x03, 0x00, 0x00,
}
func (m *ObservedFolder) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ObservedFolder) MarshalTo(dAtA []byte) (int, error) {
size := m.ProtoSize()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *ObservedFolder) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Label) > 0 {
i -= len(m.Label)
copy(dAtA[i:], m.Label)
i = encodeVarintObserved(dAtA, i, uint64(len(m.Label)))
i--
dAtA[i] = 0x1a
}
if len(m.ID) > 0 {
i -= len(m.ID)
copy(dAtA[i:], m.ID)
i = encodeVarintObserved(dAtA, i, uint64(len(m.ID)))
i--
dAtA[i] = 0x12
}
n1, err1 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):])
if err1 != nil {
return 0, err1
}
i -= n1
i = encodeVarintObserved(dAtA, i, uint64(n1))
i--
dAtA[i] = 0xa
return len(dAtA) - i, nil
}
func (m *ObservedDevice) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ObservedDevice) MarshalTo(dAtA []byte) (int, error) {
size := m.ProtoSize()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *ObservedDevice) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Address) > 0 {
i -= len(m.Address)
copy(dAtA[i:], m.Address)
i = encodeVarintObserved(dAtA, i, uint64(len(m.Address)))
i--
dAtA[i] = 0x22
}
if len(m.Name) > 0 {
i -= len(m.Name)
copy(dAtA[i:], m.Name)
i = encodeVarintObserved(dAtA, i, uint64(len(m.Name)))
i--
dAtA[i] = 0x1a
}
{
size := m.ID.ProtoSize()
i -= size
if _, err := m.ID.MarshalTo(dAtA[i:]); err != nil {
return 0, err
}
i = encodeVarintObserved(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x12
n2, err2 := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Time, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdTime(m.Time):])
if err2 != nil {
return 0, err2
}
i -= n2
i = encodeVarintObserved(dAtA, i, uint64(n2))
i--
dAtA[i] = 0xa
return len(dAtA) - i, nil
}
func encodeVarintObserved(dAtA []byte, offset int, v uint64) int {
offset -= sovObserved(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *ObservedFolder) ProtoSize() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time)
n += 1 + l + sovObserved(uint64(l))
l = len(m.ID)
if l > 0 {
n += 1 + l + sovObserved(uint64(l))
}
l = len(m.Label)
if l > 0 {
n += 1 + l + sovObserved(uint64(l))
}
return n
}
func (m *ObservedDevice) ProtoSize() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Time)
n += 1 + l + sovObserved(uint64(l))
l = m.ID.ProtoSize()
n += 1 + l + sovObserved(uint64(l))
l = len(m.Name)
if l > 0 {
n += 1 + l + sovObserved(uint64(l))
}
l = len(m.Address)
if l > 0 {
n += 1 + l + sovObserved(uint64(l))
}
return n
}
func sovObserved(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozObserved(x uint64) (n int) {
return sovObserved(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *ObservedFolder) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowObserved
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ObservedFolder: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ObservedFolder: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowObserved
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthObserved
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthObserved
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Time, dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowObserved
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthObserved
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthObserved
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.ID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Label", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowObserved
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthObserved
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthObserved
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Label = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipObserved(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthObserved
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *ObservedDevice) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowObserved
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ObservedDevice: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ObservedDevice: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Time", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowObserved
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthObserved
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthObserved
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Time, dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowObserved
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if byteLen < 0 {
return ErrInvalidLengthObserved
}
postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthObserved
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := m.ID.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowObserved
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthObserved
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthObserved
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Name = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowObserved
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthObserved
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthObserved
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Address = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipObserved(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthObserved
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipObserved(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowObserved
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowObserved
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowObserved
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthObserved
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupObserved
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthObserved
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthObserved = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowObserved = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupObserved = fmt.Errorf("proto: unexpected end of group")
)


@ -16,6 +16,81 @@ import (
"github.com/syncthing/syncthing/lib/structutil"
)
type OptionsConfiguration struct {
RawListenAddresses []string `json:"listenAddresses" xml:"listenAddress" default:"default"`
RawGlobalAnnServers []string `json:"globalAnnounceServers" xml:"globalAnnounceServer" default:"default"`
GlobalAnnEnabled bool `json:"globalAnnounceEnabled" xml:"globalAnnounceEnabled" default:"true"`
LocalAnnEnabled bool `json:"localAnnounceEnabled" xml:"localAnnounceEnabled" default:"true"`
LocalAnnPort int `json:"localAnnouncePort" xml:"localAnnouncePort" default:"21027"`
LocalAnnMCAddr string `json:"localAnnounceMCAddr" xml:"localAnnounceMCAddr" default:"[ff12::8384]:21027"`
MaxSendKbps int `json:"maxSendKbps" xml:"maxSendKbps"`
MaxRecvKbps int `json:"maxRecvKbps" xml:"maxRecvKbps"`
ReconnectIntervalS int `json:"reconnectionIntervalS" xml:"reconnectionIntervalS" default:"60"`
RelaysEnabled bool `json:"relaysEnabled" xml:"relaysEnabled" default:"true"`
RelayReconnectIntervalM int `json:"relayReconnectIntervalM" xml:"relayReconnectIntervalM" default:"10"`
StartBrowser bool `json:"startBrowser" xml:"startBrowser" default:"true"`
NATEnabled bool `json:"natEnabled" xml:"natEnabled" default:"true"`
NATLeaseM int `json:"natLeaseMinutes" xml:"natLeaseMinutes" default:"60"`
NATRenewalM int `json:"natRenewalMinutes" xml:"natRenewalMinutes" default:"30"`
NATTimeoutS int `json:"natTimeoutSeconds" xml:"natTimeoutSeconds" default:"10"`
URAccepted int `json:"urAccepted" xml:"urAccepted"`
URSeen int `json:"urSeen" xml:"urSeen"`
URUniqueID string `json:"urUniqueId" xml:"urUniqueID"`
URURL string `json:"urURL" xml:"urURL" default:"https://data.syncthing.net/newdata"`
URPostInsecurely bool `json:"urPostInsecurely" xml:"urPostInsecurely" default:"false"`
URInitialDelayS int `json:"urInitialDelayS" xml:"urInitialDelayS" default:"1800"`
AutoUpgradeIntervalH int `json:"autoUpgradeIntervalH" xml:"autoUpgradeIntervalH" default:"12"`
UpgradeToPreReleases bool `json:"upgradeToPreReleases" xml:"upgradeToPreReleases"`
KeepTemporariesH int `json:"keepTemporariesH" xml:"keepTemporariesH" default:"24"`
CacheIgnoredFiles bool `json:"cacheIgnoredFiles" xml:"cacheIgnoredFiles" default:"false"`
ProgressUpdateIntervalS int `json:"progressUpdateIntervalS" xml:"progressUpdateIntervalS" default:"5"`
LimitBandwidthInLan bool `json:"limitBandwidthInLan" xml:"limitBandwidthInLan" default:"false"`
MinHomeDiskFree Size `json:"minHomeDiskFree" xml:"minHomeDiskFree" default:"1 %"`
ReleasesURL string `json:"releasesURL" xml:"releasesURL" default:"https://upgrades.syncthing.net/meta.json"`
AlwaysLocalNets []string `json:"alwaysLocalNets" xml:"alwaysLocalNet"`
OverwriteRemoteDevNames bool `json:"overwriteRemoteDeviceNamesOnConnect" xml:"overwriteRemoteDeviceNamesOnConnect" default:"false"`
TempIndexMinBlocks int `json:"tempIndexMinBlocks" xml:"tempIndexMinBlocks" default:"10"`
UnackedNotificationIDs []string `json:"unackedNotificationIDs" xml:"unackedNotificationID"`
TrafficClass int `json:"trafficClass" xml:"trafficClass"`
DeprecatedDefaultFolderPath string `json:"-" xml:"defaultFolderPath,omitempty"` // Deprecated: Do not use.
SetLowPriority bool `json:"setLowPriority" xml:"setLowPriority" default:"true"`
RawMaxFolderConcurrency int `json:"maxFolderConcurrency" xml:"maxFolderConcurrency"`
CRURL string `json:"crURL" xml:"crashReportingURL" default:"https://crash.syncthing.net/newcrash"`
CREnabled bool `json:"crashReportingEnabled" xml:"crashReportingEnabled" default:"true"`
StunKeepaliveStartS int `json:"stunKeepaliveStartS" xml:"stunKeepaliveStartS" default:"180"`
StunKeepaliveMinS int `json:"stunKeepaliveMinS" xml:"stunKeepaliveMinS" default:"20"`
RawStunServers []string `json:"stunServers" xml:"stunServer" default:"default"`
DatabaseTuning Tuning `json:"databaseTuning" xml:"databaseTuning" restart:"true"`
RawMaxCIRequestKiB int `json:"maxConcurrentIncomingRequestKiB" xml:"maxConcurrentIncomingRequestKiB"`
AnnounceLANAddresses bool `json:"announceLANAddresses" xml:"announceLANAddresses" default:"true"`
SendFullIndexOnUpgrade bool `json:"sendFullIndexOnUpgrade" xml:"sendFullIndexOnUpgrade"`
FeatureFlags []string `json:"featureFlags" xml:"featureFlag"`
// The number of connections at which we stop trying to connect to more
// devices, zero meaning no limit. Does not affect incoming connections.
ConnectionLimitEnough int `json:"connectionLimitEnough" xml:"connectionLimitEnough"`
// The maximum number of connections which we will allow in total, zero
// meaning no limit. Affects incoming connections and prevents
// attempting outgoing connections.
ConnectionLimitMax int `json:"connectionLimitMax" xml:"connectionLimitMax"`
// When set, this allows TLS 1.2 on sync connections, where we otherwise
// default to TLS 1.3+ only.
InsecureAllowOldTLSVersions bool `json:"insecureAllowOldTLSVersions" xml:"insecureAllowOldTLSVersions"`
ConnectionPriorityTCPLAN int `json:"connectionPriorityTcpLan" xml:"connectionPriorityTcpLan" default:"10"`
ConnectionPriorityQUICLAN int `json:"connectionPriorityQuicLan" xml:"connectionPriorityQuicLan" default:"20"`
ConnectionPriorityTCPWAN int `json:"connectionPriorityTcpWan" xml:"connectionPriorityTcpWan" default:"30"`
ConnectionPriorityQUICWAN int `json:"connectionPriorityQuicWan" xml:"connectionPriorityQuicWan" default:"40"`
ConnectionPriorityRelay int `json:"connectionPriorityRelay" xml:"connectionPriorityRelay" default:"50"`
ConnectionPriorityUpgradeThreshold int `json:"connectionPriorityUpgradeThreshold" xml:"connectionPriorityUpgradeThreshold" default:"0"`
// Legacy deprecated
DeprecatedUPnPEnabled bool `json:"-" xml:"upnpEnabled,omitempty"` // Deprecated: Do not use.
DeprecatedUPnPLeaseM int `json:"-" xml:"upnpLeaseMinutes,omitempty"` // Deprecated: Do not use.
DeprecatedUPnPRenewalM int `json:"-" xml:"upnpRenewalMinutes,omitempty"` // Deprecated: Do not use.
DeprecatedUPnPTimeoutS int `json:"-" xml:"upnpTimeoutSeconds,omitempty"` // Deprecated: Do not use.
DeprecatedRelayServers []string `json:"-" xml:"relayServer,omitempty"` // Deprecated: Do not use.
DeprecatedMinHomeDiskFreePct float64 `json:"-" xml:"minHomeDiskFreePct,omitempty"` // Deprecated: Do not use.
DeprecatedMaxConcurrentScans int `json:"-" xml:"maxConcurrentScans,omitempty"` // Deprecated: Do not use.
}
func (opts OptionsConfiguration) Copy() OptionsConfiguration {
optsCopy := opts
optsCopy.RawListenAddresses = make([]string, len(opts.RawListenAddresses))
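
The struct above gets its initial values from the `default:"..."` tags. The sketch below illustrates the general tag-driven mechanism using only the standard reflect package; it is not the structutil implementation, whose API isn't shown in this diff.

```
package main

import (
	"fmt"
	"reflect"
	"strconv"
)

// fillDefaults is an illustrative sketch: it walks a struct and fills
// zero-valued string/bool/int fields from their `default:"..."` tags.
func fillDefaults(v any) error {
	rv := reflect.ValueOf(v).Elem()
	rt := rv.Type()
	for i := 0; i < rt.NumField(); i++ {
		tag, ok := rt.Field(i).Tag.Lookup("default")
		if !ok || !rv.Field(i).IsZero() {
			continue
		}
		f := rv.Field(i)
		switch f.Kind() {
		case reflect.String:
			f.SetString(tag)
		case reflect.Bool:
			b, err := strconv.ParseBool(tag)
			if err != nil {
				return err
			}
			f.SetBool(b)
		case reflect.Int:
			n, err := strconv.Atoi(tag)
			if err != nil {
				return err
			}
			f.SetInt(int64(n))
		}
	}
	return nil
}

func main() {
	var o struct {
		ReconnectIntervalS int  `default:"60"`
		RelaysEnabled      bool `default:"true"`
	}
	_ = fillDefaults(&o)
	fmt.Printf("%+v\n", o) // {ReconnectIntervalS:60 RelaysEnabled:true}
}
```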

File diff suppressed because it is too large


@ -6,6 +6,17 @@
package config
type PullOrder int32
const (
PullOrderRandom PullOrder = 0
PullOrderAlphabetic PullOrder = 1
PullOrderSmallestFirst PullOrder = 2
PullOrderLargestFirst PullOrder = 3
PullOrderOldestFirst PullOrder = 4
PullOrderNewestFirst PullOrder = 5
)
func (o PullOrder) String() string {
switch o {
case PullOrderRandom:
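
The generated file removed below also carried the PULL_ORDER_* name tables. With the enum defined by hand, any string round-tripping has to be spelled out explicitly; a hedged sketch of what that can look like (the method set and the specific names are illustrative, not taken from the config package):

```
package main

import (
	"fmt"
	"strings"
)

type PullOrder int32

const (
	PullOrderRandom PullOrder = iota
	PullOrderAlphabetic
	PullOrderSmallestFirst
	PullOrderLargestFirst
	PullOrderOldestFirst
	PullOrderNewestFirst
)

// pullOrderNames replaces the generated name table; the strings are illustrative.
var pullOrderNames = map[PullOrder]string{
	PullOrderRandom:        "random",
	PullOrderAlphabetic:    "alphabetic",
	PullOrderSmallestFirst: "smallestFirst",
	PullOrderLargestFirst:  "largestFirst",
	PullOrderOldestFirst:   "oldestFirst",
	PullOrderNewestFirst:   "newestFirst",
}

// MarshalText / UnmarshalText let encoding/json and encoding/xml treat the
// enum as its string form.
func (o PullOrder) MarshalText() ([]byte, error) {
	return []byte(pullOrderNames[o]), nil
}

func (o *PullOrder) UnmarshalText(b []byte) error {
	for v, n := range pullOrderNames {
		if strings.EqualFold(n, string(b)) {
			*o = v
			return nil
		}
	}
	return fmt.Errorf("unknown pull order %q", b)
}

func main() {
	var o PullOrder
	_ = o.UnmarshalText([]byte("oldestFirst"))
	fmt.Println(o == PullOrderOldestFirst) // true
}
```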


@ -1,87 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/pullorder.proto
package config
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type PullOrder int32
const (
PullOrderRandom PullOrder = 0
PullOrderAlphabetic PullOrder = 1
PullOrderSmallestFirst PullOrder = 2
PullOrderLargestFirst PullOrder = 3
PullOrderOldestFirst PullOrder = 4
PullOrderNewestFirst PullOrder = 5
)
var PullOrder_name = map[int32]string{
0: "PULL_ORDER_RANDOM",
1: "PULL_ORDER_ALPHABETIC",
2: "PULL_ORDER_SMALLEST_FIRST",
3: "PULL_ORDER_LARGEST_FIRST",
4: "PULL_ORDER_OLDEST_FIRST",
5: "PULL_ORDER_NEWEST_FIRST",
}
var PullOrder_value = map[string]int32{
"PULL_ORDER_RANDOM": 0,
"PULL_ORDER_ALPHABETIC": 1,
"PULL_ORDER_SMALLEST_FIRST": 2,
"PULL_ORDER_LARGEST_FIRST": 3,
"PULL_ORDER_OLDEST_FIRST": 4,
"PULL_ORDER_NEWEST_FIRST": 5,
}
func (PullOrder) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_2fa3f5222a7755bf, []int{0}
}
func init() {
proto.RegisterEnum("config.PullOrder", PullOrder_name, PullOrder_value)
}
func init() { proto.RegisterFile("lib/config/pullorder.proto", fileDescriptor_2fa3f5222a7755bf) }
var fileDescriptor_2fa3f5222a7755bf = []byte{
// 347 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0xd1, 0xbf, 0x4a, 0xc3, 0x40,
0x1c, 0xc0, 0xf1, 0xa4, 0xd6, 0x82, 0xb7, 0x58, 0x53, 0x6b, 0xdb, 0x1b, 0x8e, 0x80, 0x93, 0x1d,
0x1a, 0x50, 0x44, 0x1c, 0x53, 0x9b, 0x6a, 0xf1, 0xda, 0x94, 0xa4, 0x22, 0xb8, 0x94, 0x24, 0x4d,
0xd3, 0xc0, 0x35, 0x17, 0xf2, 0x07, 0xf1, 0x15, 0x32, 0xf9, 0x02, 0x01, 0x07, 0x07, 0x1f, 0xa5,
0x63, 0xc1, 0xc5, 0xb5, 0xcd, 0x8b, 0x88, 0x29, 0x26, 0x41, 0x70, 0xfb, 0xdd, 0x8f, 0xef, 0xe7,
0x96, 0x1f, 0x80, 0xc4, 0xd6, 0x05, 0x83, 0x3a, 0x73, 0xdb, 0x12, 0xdc, 0x90, 0x10, 0xea, 0xcd,
0x4c, 0xaf, 0xe3, 0x7a, 0x34, 0xa0, 0x5c, 0x65, 0xb7, 0x87, 0xa7, 0x9e, 0xe9, 0x52, 0x5f, 0x48,
0x97, 0x7a, 0x38, 0x17, 0x2c, 0x6a, 0xd1, 0xf4, 0x91, 0x4e, 0xbb, 0xb8, 0xfd, 0x59, 0x02, 0x07,
0xe3, 0x90, 0x10, 0xf9, 0xe7, 0x03, 0xae, 0x0d, 0x8e, 0xc6, 0x0f, 0x18, 0x4f, 0x65, 0xa5, 0x27,
0x29, 0x53, 0x45, 0x1c, 0xf5, 0xe4, 0x61, 0x95, 0x81, 0xb5, 0x28, 0xe6, 0x0f, 0xb3, 0x4a, 0xd1,
0x9c, 0x19, 0x5d, 0x72, 0xe7, 0xa0, 0x5e, 0x68, 0x45, 0x3c, 0xbe, 0x13, 0xbb, 0xd2, 0x64, 0x70,
0x53, 0x65, 0x61, 0x23, 0x8a, 0xf9, 0x5a, 0xd6, 0x8b, 0xc4, 0x5d, 0x68, 0xba, 0x19, 0xd8, 0x06,
0x77, 0x0d, 0x5a, 0x05, 0xa3, 0x0e, 0x45, 0x8c, 0x25, 0x75, 0x32, 0xed, 0x0f, 0x14, 0x75, 0x52,
0x2d, 0x41, 0x18, 0xc5, 0xfc, 0x49, 0xe6, 0xd4, 0xa5, 0x46, 0x88, 0xe9, 0x07, 0x7d, 0xdb, 0xf3,
0x03, 0xee, 0x0a, 0x34, 0x0b, 0x14, 0x8b, 0xca, 0x6d, 0x2e, 0xf7, 0x60, 0x2b, 0x8a, 0xf9, 0x7a,
0x26, 0xb1, 0xe6, 0x59, 0x19, 0xbc, 0x04, 0x8d, 0x02, 0x94, 0x71, 0x2f, 0x77, 0x65, 0xd8, 0x8c,
0x62, 0xfe, 0x38, 0x73, 0x32, 0x99, 0xfd, 0xc3, 0x46, 0xd2, 0x63, 0xce, 0xf6, 0xff, 0xb0, 0x91,
0xf9, 0xfc, 0xcb, 0x60, 0xf9, 0xe3, 0x1d, 0x31, 0xdd, 0xfb, 0xd5, 0x06, 0x31, 0xeb, 0x0d, 0x62,
0x56, 0x5b, 0xc4, 0xae, 0xb7, 0x88, 0x7d, 0x4d, 0x10, 0xf3, 0x96, 0x20, 0x76, 0x9d, 0x20, 0xe6,
0x2b, 0x41, 0xcc, 0xd3, 0x99, 0x65, 0x07, 0x8b, 0x50, 0xef, 0x18, 0x74, 0x29, 0xf8, 0x2f, 0x8e,
0x11, 0x2c, 0x6c, 0xc7, 0x2a, 0x4c, 0xf9, 0x7d, 0xf5, 0x4a, 0x7a, 0xa9, 0x8b, 0xef, 0x00, 0x00,
0x00, 0xff, 0xff, 0x09, 0x68, 0xa0, 0x7d, 0xf4, 0x01, 0x00, 0x00,
}


@ -14,6 +14,11 @@ import (
"github.com/syncthing/syncthing/lib/fs"
)
type Size struct {
Value float64 `json:"value" xml:",chardata"`
Unit string `json:"unit" xml:"unit,attr"`
}
func ParseSize(s string) (Size, error) {
s = strings.TrimSpace(s)
if s == "" {
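
Size keeps its value as the XML element's character data and its unit as an attribute. A small sketch of what encoding/xml makes of those tags (the element name is chosen for illustration):

```
package main

import (
	"encoding/xml"
	"fmt"
)

// size mirrors config.Size above: the value is the element's character data,
// the unit an attribute.
type size struct {
	XMLName xml.Name `xml:"minHomeDiskFree"`
	Value   float64  `xml:",chardata"`
	Unit    string   `xml:"unit,attr"`
}

func main() {
	out, _ := xml.Marshal(size{Value: 1, Unit: "%"})
	fmt.Println(string(out)) // <minHomeDiskFree unit="%">1</minHomeDiskFree>
}
```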


@ -1,336 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/size.proto
package config
import (
encoding_binary "encoding/binary"
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
_ "github.com/syncthing/syncthing/proto/ext"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type Size struct {
Value float64 `protobuf:"fixed64,1,opt,name=value,proto3" json:"value" xml:",chardata"`
Unit string `protobuf:"bytes,2,opt,name=unit,proto3" json:"unit" xml:"unit,attr"`
}
func (m *Size) Reset() { *m = Size{} }
func (*Size) ProtoMessage() {}
func (*Size) Descriptor() ([]byte, []int) {
return fileDescriptor_4d75cb8f619bd299, []int{0}
}
func (m *Size) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Size) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Size.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Size) XXX_Merge(src proto.Message) {
xxx_messageInfo_Size.Merge(m, src)
}
func (m *Size) XXX_Size() int {
return m.ProtoSize()
}
func (m *Size) XXX_DiscardUnknown() {
xxx_messageInfo_Size.DiscardUnknown(m)
}
var xxx_messageInfo_Size proto.InternalMessageInfo
func init() {
proto.RegisterType((*Size)(nil), "config.Size")
}
func init() { proto.RegisterFile("lib/config/size.proto", fileDescriptor_4d75cb8f619bd299) }
var fileDescriptor_4d75cb8f619bd299 = []byte{
// 251 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0xcd, 0xc9, 0x4c, 0xd2,
0x4f, 0xce, 0xcf, 0x4b, 0xcb, 0x4c, 0xd7, 0x2f, 0xce, 0xac, 0x4a, 0xd5, 0x2b, 0x28, 0xca, 0x2f,
0xc9, 0x17, 0x62, 0x83, 0x08, 0x49, 0x29, 0x17, 0xa5, 0x16, 0xe4, 0x17, 0xeb, 0x83, 0x05, 0x93,
0x4a, 0xd3, 0xf4, 0xd3, 0xf3, 0xd3, 0xf3, 0xc1, 0x1c, 0x30, 0x0b, 0xa2, 0x58, 0x8a, 0x33, 0xb5,
0xa2, 0x04, 0xc2, 0x54, 0xea, 0x66, 0xe4, 0x62, 0x09, 0xce, 0xac, 0x4a, 0x15, 0xb2, 0xe7, 0x62,
0x2d, 0x4b, 0xcc, 0x29, 0x4d, 0x95, 0x60, 0x54, 0x60, 0xd4, 0x60, 0x74, 0xd2, 0x7c, 0x75, 0x4f,
0x1e, 0x22, 0xf0, 0xe9, 0x9e, 0x3c, 0x7f, 0x45, 0x6e, 0x8e, 0x95, 0x92, 0x4e, 0x72, 0x46, 0x62,
0x51, 0x4a, 0x62, 0x49, 0xa2, 0xd2, 0xab, 0xf3, 0x2a, 0x9c, 0x70, 0x5e, 0x10, 0x44, 0x99, 0x90,
0x0d, 0x17, 0x4b, 0x69, 0x5e, 0x66, 0x89, 0x04, 0x93, 0x02, 0xa3, 0x06, 0xa7, 0x93, 0xc6, 0xab,
0x7b, 0xf2, 0x60, 0x3e, 0x5c, 0x3b, 0x88, 0xa3, 0x93, 0x58, 0x52, 0x52, 0x04, 0xd6, 0x0e, 0xe7,
0x05, 0x81, 0x55, 0x59, 0xb1, 0xcc, 0x58, 0x20, 0xcf, 0xe0, 0xe4, 0x7d, 0xe2, 0xa1, 0x1c, 0xc3,
0x85, 0x87, 0x72, 0x0c, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24, 0xc7, 0x38, 0xe1, 0xb1, 0x1c,
0xc3, 0x82, 0xc7, 0x72, 0x8c, 0x17, 0x1e, 0xcb, 0x31, 0xdc, 0x78, 0x2c, 0xc7, 0x10, 0xa5, 0x99,
0x9e, 0x59, 0x92, 0x51, 0x9a, 0xa4, 0x97, 0x9c, 0x9f, 0xab, 0x5f, 0x5c, 0x99, 0x97, 0x5c, 0x92,
0x91, 0x99, 0x97, 0x8e, 0xc4, 0x42, 0x84, 0x4e, 0x12, 0x1b, 0xd8, 0x87, 0xc6, 0x80, 0x00, 0x00,
0x00, 0xff, 0xff, 0x65, 0x1e, 0xa3, 0x25, 0x32, 0x01, 0x00, 0x00,
}
func (m *Size) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Size) MarshalTo(dAtA []byte) (int, error) {
size := m.ProtoSize()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Size) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Unit) > 0 {
i -= len(m.Unit)
copy(dAtA[i:], m.Unit)
i = encodeVarintSize(dAtA, i, uint64(len(m.Unit)))
i--
dAtA[i] = 0x12
}
if m.Value != 0 {
i -= 8
encoding_binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Value))))
i--
dAtA[i] = 0x9
}
return len(dAtA) - i, nil
}
func encodeVarintSize(dAtA []byte, offset int, v uint64) int {
offset -= sovSize(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *Size) ProtoSize() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Value != 0 {
n += 9
}
l = len(m.Unit)
if l > 0 {
n += 1 + l + sovSize(uint64(l))
}
return n
}
func sovSize(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozSize(x uint64) (n int) {
return sovSize(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *Size) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSize
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: Size: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: Size: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 1 {
return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
}
var v uint64
if (iNdEx + 8) > l {
return io.ErrUnexpectedEOF
}
v = uint64(encoding_binary.LittleEndian.Uint64(dAtA[iNdEx:]))
iNdEx += 8
m.Value = float64(math.Float64frombits(v))
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Unit", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowSize
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthSize
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthSize
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Unit = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipSize(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthSize
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipSize(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowSize
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowSize
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowSize
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthSize
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupSize
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthSize
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthSize = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowSize = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupSize = fmt.Errorf("proto: unexpected end of group")
)


@ -6,6 +6,14 @@
package config
type Tuning int32
const (
TuningAuto Tuning = 0
TuningSmall Tuning = 1
TuningLarge Tuning = 2
)
func (t Tuning) String() string {
switch t {
case TuningAuto:


@ -1,71 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/tuning.proto
package config
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type Tuning int32
const (
TuningAuto Tuning = 0
TuningSmall Tuning = 1
TuningLarge Tuning = 2
)
var Tuning_name = map[int32]string{
0: "TUNING_AUTO",
1: "TUNING_SMALL",
2: "TUNING_LARGE",
}
var Tuning_value = map[string]int32{
"TUNING_AUTO": 0,
"TUNING_SMALL": 1,
"TUNING_LARGE": 2,
}
func (Tuning) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_204cfa1615fdfefd, []int{0}
}
func init() {
proto.RegisterEnum("config.Tuning", Tuning_name, Tuning_value)
}
func init() { proto.RegisterFile("lib/config/tuning.proto", fileDescriptor_204cfa1615fdfefd) }
var fileDescriptor_204cfa1615fdfefd = []byte{
// 228 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0xcf, 0xc9, 0x4c, 0xd2,
0x4f, 0xce, 0xcf, 0x4b, 0xcb, 0x4c, 0xd7, 0x2f, 0x29, 0xcd, 0xcb, 0xcc, 0x4b, 0xd7, 0x2b, 0x28,
0xca, 0x2f, 0xc9, 0x17, 0x62, 0x83, 0x08, 0x4a, 0x29, 0x17, 0xa5, 0x16, 0xe4, 0x17, 0xeb, 0x83,
0x05, 0x93, 0x4a, 0xd3, 0xf4, 0xd3, 0xf3, 0xd3, 0xf3, 0xc1, 0x1c, 0x30, 0x0b, 0xa2, 0x58, 0xab,
0x94, 0x8b, 0x2d, 0x04, 0xac, 0x59, 0x48, 0x9e, 0x8b, 0x3b, 0x24, 0xd4, 0xcf, 0xd3, 0xcf, 0x3d,
0xde, 0x31, 0x34, 0xc4, 0x5f, 0x80, 0x41, 0x8a, 0xaf, 0x6b, 0xae, 0x02, 0x17, 0x44, 0xd2, 0xb1,
0xb4, 0x24, 0x5f, 0x48, 0x91, 0x8b, 0x07, 0xaa, 0x20, 0xd8, 0xd7, 0xd1, 0xc7, 0x47, 0x80, 0x51,
0x8a, 0xbf, 0x6b, 0xae, 0x02, 0x37, 0x44, 0x45, 0x70, 0x6e, 0x62, 0x4e, 0x0e, 0x92, 0x12, 0x1f,
0xc7, 0x20, 0x77, 0x57, 0x01, 0x26, 0x64, 0x25, 0x3e, 0x89, 0x45, 0xe9, 0xa9, 0x52, 0x2c, 0x2b,
0x96, 0xc8, 0x31, 0x38, 0x79, 0x9f, 0x78, 0x28, 0xc7, 0x70, 0xe1, 0xa1, 0x1c, 0xc3, 0x89, 0x47,
0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0x4e, 0x78, 0x2c, 0xc7, 0xb0, 0xe0, 0xb1, 0x1c, 0xe3, 0x85,
0xc7, 0x72, 0x0c, 0x37, 0x1e, 0xcb, 0x31, 0x44, 0x69, 0xa6, 0x67, 0x96, 0x64, 0x94, 0x26, 0xe9,
0x25, 0xe7, 0xe7, 0xea, 0x17, 0x57, 0xe6, 0x25, 0x97, 0x64, 0x64, 0xe6, 0xa5, 0x23, 0xb1, 0x10,
0xbe, 0x4f, 0x62, 0x03, 0x7b, 0xc5, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0x9f, 0x69, 0xc8, 0xbc,
0x12, 0x01, 0x00, 0x00,
}


@ -11,17 +11,29 @@ import (
"encoding/xml"
"sort"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/structutil"
)
// VersioningConfiguration is used in the code and for JSON serialization
type VersioningConfiguration struct {
Type string `json:"type" xml:"type,attr"`
Params map[string]string `json:"params" xml:"parameter" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
CleanupIntervalS int `json:"cleanupIntervalS" xml:"cleanupIntervalS" default:"3600"`
FSPath string `json:"fsPath" xml:"fsPath"`
FSType FilesystemType `json:"fsType" xml:"fsType"`
}
func (c *VersioningConfiguration) Reset() {
*c = VersioningConfiguration{}
}
// internalVersioningConfiguration is used in XML serialization
type internalVersioningConfiguration struct {
Type string `xml:"type,attr,omitempty"`
Params []internalParam `xml:"param"`
CleanupIntervalS int `xml:"cleanupIntervalS" default:"3600"`
FSPath string `xml:"fsPath"`
FSType fs.FilesystemType `xml:"fsType"`
FSType FilesystemType `xml:"fsType"`
}
type internalParam struct {
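
The public struct keeps Params as a map, while the internal XML form uses a list of key/value elements. A hedged sketch of the map-to-list flattening (the helper name, field tags, and sort order are illustrative, not the config package's code):

```
package main

import (
	"fmt"
	"sort"
)

type internalParam struct {
	Key string `xml:"key,attr"`
	Val string `xml:"val,attr"`
}

// paramsToList flattens the Params map into the list shape used for XML,
// sorted by key so the output is deterministic.
func paramsToList(params map[string]string) []internalParam {
	out := make([]internalParam, 0, len(params))
	for k, v := range params {
		out = append(out, internalParam{Key: k, Val: v})
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Key < out[j].Key })
	return out
}

func main() {
	fmt.Println(paramsToList(map[string]string{"keep": "5", "cleanoutDays": "30"}))
	// [{cleanoutDays 30} {keep 5}]
}
```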


@ -1,591 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/config/versioningconfiguration.proto
package config
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
fs "github.com/syncthing/syncthing/lib/fs"
_ "github.com/syncthing/syncthing/proto/ext"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
// VersioningConfiguration is used in the code and for JSON serialization
type VersioningConfiguration struct {
Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type" xml:"type,attr"`
Params map[string]string `protobuf:"bytes,2,rep,name=parameters,proto3" json:"params" xml:"parameter" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
CleanupIntervalS int `protobuf:"varint,3,opt,name=cleanup_interval_s,json=cleanupIntervalS,proto3,casttype=int" json:"cleanupIntervalS" xml:"cleanupIntervalS" default:"3600"`
FSPath string `protobuf:"bytes,4,opt,name=fs_path,json=fsPath,proto3" json:"fsPath" xml:"fsPath"`
FSType fs.FilesystemType `protobuf:"varint,5,opt,name=fs_type,json=fsType,proto3,enum=fs.FilesystemType" json:"fsType" xml:"fsType"`
}
func (m *VersioningConfiguration) Reset() { *m = VersioningConfiguration{} }
func (m *VersioningConfiguration) String() string { return proto.CompactTextString(m) }
func (*VersioningConfiguration) ProtoMessage() {}
func (*VersioningConfiguration) Descriptor() ([]byte, []int) {
return fileDescriptor_95ba6bdb22ffea81, []int{0}
}
func (m *VersioningConfiguration) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *VersioningConfiguration) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_VersioningConfiguration.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *VersioningConfiguration) XXX_Merge(src proto.Message) {
xxx_messageInfo_VersioningConfiguration.Merge(m, src)
}
func (m *VersioningConfiguration) XXX_Size() int {
return m.ProtoSize()
}
func (m *VersioningConfiguration) XXX_DiscardUnknown() {
xxx_messageInfo_VersioningConfiguration.DiscardUnknown(m)
}
var xxx_messageInfo_VersioningConfiguration proto.InternalMessageInfo
func init() {
proto.RegisterType((*VersioningConfiguration)(nil), "config.VersioningConfiguration")
proto.RegisterMapType((map[string]string)(nil), "config.VersioningConfiguration.ParametersEntry")
}
func init() {
proto.RegisterFile("lib/config/versioningconfiguration.proto", fileDescriptor_95ba6bdb22ffea81)
}
var fileDescriptor_95ba6bdb22ffea81 = []byte{
// 514 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x53, 0x4f, 0x6b, 0xdb, 0x4e,
0x10, 0x95, 0xfc, 0xef, 0x87, 0x95, 0x1f, 0x4d, 0x59, 0x0a, 0x15, 0x3e, 0x68, 0x8d, 0x70, 0x8b,
0x0a, 0x45, 0x0e, 0x09, 0x94, 0x62, 0x0a, 0x05, 0x97, 0xa6, 0x94, 0xf6, 0x10, 0x94, 0xd0, 0x43,
0x7b, 0x30, 0x6b, 0x77, 0x65, 0x2f, 0x91, 0x57, 0x42, 0xbb, 0x36, 0x51, 0x3f, 0x45, 0xe8, 0x27,
0xe8, 0xc7, 0xf1, 0xcd, 0x3e, 0xf6, 0xb4, 0x10, 0xfb, 0xa6, 0xa3, 0x8e, 0xe9, 0xa5, 0xec, 0xae,
0xa2, 0x9a, 0x94, 0xde, 0xe6, 0xcd, 0x7b, 0xf3, 0x66, 0x46, 0xb3, 0xb2, 0xbc, 0x88, 0x8c, 0xfb,
0x93, 0x98, 0x86, 0x64, 0xda, 0x5f, 0xe2, 0x94, 0x91, 0x98, 0x12, 0x3a, 0xd5, 0x89, 0x45, 0x8a,
0x38, 0x89, 0xa9, 0x9f, 0xa4, 0x31, 0x8f, 0x41, 0x4b, 0x27, 0x3b, 0x40, 0x56, 0x84, 0xac, 0xcf,
0xb3, 0x04, 0x33, 0xcd, 0x75, 0xda, 0xf8, 0x8a, 0xeb, 0xd0, 0xfd, 0xd5, 0xb0, 0x1e, 0x7f, 0xaa,
0x8c, 0xde, 0xec, 0x1b, 0x81, 0x57, 0x56, 0x43, 0x56, 0xd9, 0x66, 0xd7, 0xf4, 0xda, 0x43, 0x2f,
0x17, 0x50, 0xe1, 0x42, 0xc0, 0xc3, 0xab, 0x79, 0x34, 0x70, 0x25, 0x78, 0x8e, 0x38, 0x4f, 0xdd,
0x7c, 0xdd, 0x6b, 0x57, 0x28, 0x50, 0x2a, 0x70, 0x6d, 0x5a, 0x56, 0x82, 0x52, 0x34, 0xc7, 0x1c,
0xa7, 0xcc, 0xae, 0x75, 0xeb, 0xde, 0xc1, 0x71, 0xdf, 0xd7, 0x63, 0xf9, 0xff, 0xe8, 0xe9, 0x9f,
0x55, 0x15, 0x6f, 0x29, 0x4f, 0xb3, 0xe1, 0xeb, 0x95, 0x80, 0xc6, 0x56, 0xc0, 0x96, 0x22, 0x58,
0x2e, 0x60, 0x4b, 0x99, 0xb2, 0x6a, 0x8a, 0xaa, 0x87, 0x5b, 0xac, 0x7b, 0x25, 0xf9, 0x7d, 0xd3,
0x2b, 0x0b, 0x82, 0xbd, 0x19, 0xc0, 0x37, 0x0b, 0x4c, 0x22, 0x8c, 0xe8, 0x22, 0x19, 0x11, 0xca,
0x71, 0xba, 0x44, 0xd1, 0x88, 0xd9, 0xf5, 0xae, 0xe9, 0x35, 0x87, 0x1f, 0x73, 0x01, 0x1f, 0x96,
0xec, 0xfb, 0x92, 0x3c, 0x2f, 0x04, 0x7c, 0xa2, 0x9a, 0xdc, 0x27, 0xdc, 0xee, 0x57, 0x1c, 0xa2,
0x45, 0xc4, 0x07, 0xee, 0xc9, 0x8b, 0xa3, 0x23, 0xf7, 0x56, 0xc0, 0x3a, 0xa1, 0xfc, 0x76, 0xdd,
0x6b, 0x48, 0x1c, 0xfc, 0xe5, 0x04, 0xde, 0x59, 0xff, 0x85, 0x6c, 0x94, 0x20, 0x3e, 0xb3, 0x1b,
0xea, 0x7b, 0xfa, 0x72, 0xab, 0xd3, 0xf3, 0x33, 0xc4, 0x67, 0x72, 0xab, 0x90, 0xc9, 0xa8, 0x10,
0xf0, 0x7f, 0xd5, 0x50, 0x43, 0x57, 0x2e, 0xa2, 0x35, 0x41, 0xa9, 0x00, 0x5f, 0x94, 0x91, 0x3a,
0x4c, 0xb3, 0x6b, 0x7a, 0x0f, 0x8e, 0x81, 0x1f, 0x32, 0xff, 0x94, 0x44, 0x98, 0x65, 0x8c, 0xe3,
0xf9, 0x45, 0x96, 0xe0, 0x3b, 0x73, 0x19, 0x6b, 0xf3, 0x0b, 0x7d, 0xb8, 0x3b, 0x73, 0x09, 0x4b,
0x73, 0x19, 0x06, 0xa5, 0xa2, 0x33, 0xb7, 0x0e, 0xef, 0x5d, 0x00, 0x3c, 0xb5, 0xea, 0x97, 0x38,
0x2b, 0x1f, 0xc1, 0xa3, 0x5c, 0x40, 0x09, 0x0b, 0x01, 0xdb, 0xca, 0xea, 0x12, 0x67, 0x6e, 0x20,
0x33, 0xc0, 0xb7, 0x9a, 0x4b, 0x14, 0x2d, 0xb0, 0x5d, 0x53, 0x4a, 0x3b, 0x17, 0x50, 0x27, 0x0a,
0x01, 0x0f, 0x94, 0x56, 0x21, 0x37, 0xd0, 0xd9, 0x41, 0xed, 0xa5, 0x39, 0xfc, 0xb0, 0xba, 0x71,
0x8c, 0xcd, 0x8d, 0x63, 0xac, 0xb6, 0x8e, 0xb9, 0xd9, 0x3a, 0xe6, 0xf5, 0xce, 0x31, 0x7e, 0xec,
0x1c, 0x73, 0xb3, 0x73, 0x8c, 0x9f, 0x3b, 0xc7, 0xf8, 0xfc, 0x6c, 0x4a, 0xf8, 0x6c, 0x31, 0xf6,
0x27, 0xf1, 0xbc, 0xcf, 0x32, 0x3a, 0xe1, 0x33, 0x42, 0xa7, 0x7b, 0xd1, 0x9f, 0xff, 0x61, 0xdc,
0x52, 0x2f, 0xfa, 0xe4, 0x77, 0x00, 0x00, 0x00, 0xff, 0xff, 0xd1, 0x04, 0xd0, 0xd5, 0x24, 0x03,
0x00, 0x00,
}
func (m *VersioningConfiguration) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *VersioningConfiguration) MarshalTo(dAtA []byte) (int, error) {
size := m.ProtoSize()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *VersioningConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.FSType != 0 {
i = encodeVarintVersioningconfiguration(dAtA, i, uint64(m.FSType))
i--
dAtA[i] = 0x28
}
if len(m.FSPath) > 0 {
i -= len(m.FSPath)
copy(dAtA[i:], m.FSPath)
i = encodeVarintVersioningconfiguration(dAtA, i, uint64(len(m.FSPath)))
i--
dAtA[i] = 0x22
}
if m.CleanupIntervalS != 0 {
i = encodeVarintVersioningconfiguration(dAtA, i, uint64(m.CleanupIntervalS))
i--
dAtA[i] = 0x18
}
if len(m.Params) > 0 {
for k := range m.Params {
v := m.Params[k]
baseI := i
i -= len(v)
copy(dAtA[i:], v)
i = encodeVarintVersioningconfiguration(dAtA, i, uint64(len(v)))
i--
dAtA[i] = 0x12
i -= len(k)
copy(dAtA[i:], k)
i = encodeVarintVersioningconfiguration(dAtA, i, uint64(len(k)))
i--
dAtA[i] = 0xa
i = encodeVarintVersioningconfiguration(dAtA, i, uint64(baseI-i))
i--
dAtA[i] = 0x12
}
}
if len(m.Type) > 0 {
i -= len(m.Type)
copy(dAtA[i:], m.Type)
i = encodeVarintVersioningconfiguration(dAtA, i, uint64(len(m.Type)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func encodeVarintVersioningconfiguration(dAtA []byte, offset int, v uint64) int {
offset -= sovVersioningconfiguration(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *VersioningConfiguration) ProtoSize() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Type)
if l > 0 {
n += 1 + l + sovVersioningconfiguration(uint64(l))
}
if len(m.Params) > 0 {
for k, v := range m.Params {
_ = k
_ = v
mapEntrySize := 1 + len(k) + sovVersioningconfiguration(uint64(len(k))) + 1 + len(v) + sovVersioningconfiguration(uint64(len(v)))
n += mapEntrySize + 1 + sovVersioningconfiguration(uint64(mapEntrySize))
}
}
if m.CleanupIntervalS != 0 {
n += 1 + sovVersioningconfiguration(uint64(m.CleanupIntervalS))
}
l = len(m.FSPath)
if l > 0 {
n += 1 + l + sovVersioningconfiguration(uint64(l))
}
if m.FSType != 0 {
n += 1 + sovVersioningconfiguration(uint64(m.FSType))
}
return n
}
func sovVersioningconfiguration(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozVersioningconfiguration(x uint64) (n int) {
return sovVersioningconfiguration(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *VersioningConfiguration) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: VersioningConfiguration: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: VersioningConfiguration: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthVersioningconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Type = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Params", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthVersioningconfiguration
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Params == nil {
m.Params = make(map[string]string)
}
var mapkey string
var mapvalue string
for iNdEx < postIndex {
entryPreIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
if fieldNum == 1 {
var stringLenmapkey uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapkey |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLenmapkey := int(stringLenmapkey)
if intStringLenmapkey < 0 {
return ErrInvalidLengthVersioningconfiguration
}
postStringIndexmapkey := iNdEx + intStringLenmapkey
if postStringIndexmapkey < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if postStringIndexmapkey > l {
return io.ErrUnexpectedEOF
}
mapkey = string(dAtA[iNdEx:postStringIndexmapkey])
iNdEx = postStringIndexmapkey
} else if fieldNum == 2 {
var stringLenmapvalue uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapvalue |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLenmapvalue := int(stringLenmapvalue)
if intStringLenmapvalue < 0 {
return ErrInvalidLengthVersioningconfiguration
}
postStringIndexmapvalue := iNdEx + intStringLenmapvalue
if postStringIndexmapvalue < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if postStringIndexmapvalue > l {
return io.ErrUnexpectedEOF
}
mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue])
iNdEx = postStringIndexmapvalue
} else {
iNdEx = entryPreIndex
skippy, err := skipVersioningconfiguration(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if (iNdEx + skippy) > postIndex {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
m.Params[mapkey] = mapvalue
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field CleanupIntervalS", wireType)
}
m.CleanupIntervalS = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.CleanupIntervalS |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field FSPath", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthVersioningconfiguration
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.FSPath = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 5:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field FSType", wireType)
}
m.FSType = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.FSType |= fs.FilesystemType(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipVersioningconfiguration(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthVersioningconfiguration
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipVersioningconfiguration(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowVersioningconfiguration
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthVersioningconfiguration
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupVersioningconfiguration
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthVersioningconfiguration
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthVersioningconfiguration = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowVersioningconfiguration = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupVersioningconfiguration = fmt.Errorf("proto: unexpected end of group")
)


@ -17,12 +17,13 @@ import (
"sync/atomic"
"time"
"github.com/thejerf/suture/v4"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sliceutil"
"github.com/syncthing/syncthing/lib/sync"
"github.com/thejerf/suture/v4"
)
const (


@ -12,10 +12,11 @@ import (
"io"
"sync/atomic"
"golang.org/x/time/rate"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"golang.org/x/time/rate"
)
// limiter manages a read and write rate limit, reacting to config changes


@ -15,10 +15,11 @@ import (
"sync/atomic"
"testing"
"golang.org/x/time/rate"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol"
"golang.org/x/time/rate"
)
var (


@ -28,6 +28,9 @@ import (
stdsync "sync"
"time"
"github.com/thejerf/suture/v4"
"golang.org/x/time/rate"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/connections/registry"
@ -45,9 +48,6 @@ import (
// Registers NAT service providers
_ "github.com/syncthing/syncthing/lib/pmp"
_ "github.com/syncthing/syncthing/lib/upnp"
"github.com/thejerf/suture/v4"
"golang.org/x/time/rate"
)
var (
@ -445,7 +445,7 @@ func (s *service) handleHellos(ctx context.Context) error {
// connections are limited.
rd, wr := s.limiter.getLimiters(remoteID, c, c.IsLocal())
protoConn := protocol.NewConnection(remoteID, rd, wr, c, s.model, c, deviceCfg.Compression, s.cfg.FolderPasswords(remoteID), s.keyGen)
protoConn := protocol.NewConnection(remoteID, rd, wr, c, s.model, c, deviceCfg.Compression.ToProtocol(), s.cfg.FolderPasswords(remoteID), s.keyGen)
s.accountAddedConnection(protoConn, hello, s.cfg.Options().ConnectionPriorityUpgradeThreshold)
go func() {
<-protoConn.Closed()
@ -1284,6 +1284,7 @@ func (r nextDialRegistry) redialDevice(device protocol.DeviceID, now time.Time)
} }
dev.attempts++ dev.attempts++
dev.nextDial = make(map[string]time.Time) dev.nextDial = make(map[string]time.Time)
r[device] = dev
return return
} }
if dev.attempts >= dialCoolDownMaxAttempts && now.Before(dev.coolDownIntervalStart.Add(dialCoolDownDelay)) { if dev.attempts >= dialCoolDownMaxAttempts && now.Before(dev.coolDownIntervalStart.Add(dialCoolDownDelay)) {
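
Besides the import shuffle, two substantive changes sit in this file: connection setup now passes `deviceCfg.Compression.ToProtocol()` instead of the config value directly, and `redialDevice` gains an `r[device] = dev` write-back. The write-back matters because indexing a Go map of struct values yields a copy, so mutations are lost unless stored back. A self-contained toy (the `dialState` type and field names are hypothetical, not the real `nextDialRegistry`) showing why:

```
package main

import "fmt"

type dialState struct {
	attempts int
}

func main() {
	r := map[string]dialState{"device": {}}

	// Indexing a map of struct values returns a copy; mutating the copy
	// does not change the map entry.
	dev := r["device"]
	dev.attempts++
	r["device"] = dev // without this write-back the increment would be lost

	fmt.Println(r["device"].attempts) // 1
}
```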

View File

@ -15,14 +15,14 @@ import (
"net/url" "net/url"
"time" "time"
"github.com/thejerf/suture/v4"
"github.com/syncthing/syncthing/lib/config" "github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/connections/registry" "github.com/syncthing/syncthing/lib/connections/registry"
"github.com/syncthing/syncthing/lib/nat" "github.com/syncthing/syncthing/lib/nat"
"github.com/syncthing/syncthing/lib/osutil" "github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/stats" "github.com/syncthing/syncthing/lib/stats"
"github.com/thejerf/suture/v4"
) )
type tlsConn interface { type tlsConn interface {

View File

@ -185,7 +185,7 @@ func BenchmarkNeedHalf(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
count := 0 count := 0
snap := snapshot(b, benchS) snap := snapshot(b, benchS)
snap.WithNeed(protocol.LocalDeviceID, func(fi protocol.FileIntf) bool { snap.WithNeed(protocol.LocalDeviceID, func(fi protocol.FileInfo) bool {
count++ count++
return true return true
}) })
@ -209,7 +209,7 @@ func BenchmarkNeedHalfRemote(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
count := 0 count := 0
snap := snapshot(b, fset) snap := snapshot(b, fset)
snap.WithNeed(remoteDevice0, func(fi protocol.FileIntf) bool { snap.WithNeed(remoteDevice0, func(fi protocol.FileInfo) bool {
count++ count++
return true return true
}) })
@ -230,7 +230,7 @@ func BenchmarkHave(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
count := 0 count := 0
snap := snapshot(b, benchS) snap := snapshot(b, benchS)
snap.WithHave(protocol.LocalDeviceID, func(fi protocol.FileIntf) bool { snap.WithHave(protocol.LocalDeviceID, func(fi protocol.FileInfo) bool {
count++ count++
return true return true
}) })
@ -251,7 +251,7 @@ func BenchmarkGlobal(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
count := 0 count := 0
snap := snapshot(b, benchS) snap := snapshot(b, benchS)
snap.WithGlobal(func(fi protocol.FileIntf) bool { snap.WithGlobal(func(fi protocol.FileInfo) bool {
count++ count++
return true return true
}) })
@ -272,7 +272,7 @@ func BenchmarkNeedHalfTruncated(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
count := 0 count := 0
snap := snapshot(b, benchS) snap := snapshot(b, benchS)
snap.WithNeedTruncated(protocol.LocalDeviceID, func(fi protocol.FileIntf) bool { snap.WithNeedTruncated(protocol.LocalDeviceID, func(fi protocol.FileInfo) bool {
count++ count++
return true return true
}) })
@ -293,7 +293,7 @@ func BenchmarkHaveTruncated(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
count := 0 count := 0
snap := snapshot(b, benchS) snap := snapshot(b, benchS)
snap.WithHaveTruncated(protocol.LocalDeviceID, func(fi protocol.FileIntf) bool { snap.WithHaveTruncated(protocol.LocalDeviceID, func(fi protocol.FileInfo) bool {
count++ count++
return true return true
}) })
@ -314,7 +314,7 @@ func BenchmarkGlobalTruncated(b *testing.B) {
for i := 0; i < b.N; i++ { for i := 0; i < b.N; i++ {
count := 0 count := 0
snap := snapshot(b, benchS) snap := snapshot(b, benchS)
snap.WithGlobalTruncated(func(fi protocol.FileIntf) bool { snap.WithGlobalTruncated(func(fi protocol.FileInfo) bool {
count++ count++
return true return true
}) })
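
Throughout these benchmarks the snapshot iterators now hand the callback a concrete `protocol.FileInfo` instead of the `protocol.FileIntf` interface, so the `fi.(protocol.FileInfo)` assertions seen elsewhere in this diff disappear. A self-contained toy illustrating the difference in callback shape (the names only mimic the real types, which are far richer):

```
package main

import "fmt"

// Toy stand-ins for protocol.FileInfo and the old protocol.FileIntf.
type FileInfo struct{ Name string }

func (f FileInfo) FileName() string { return f.Name }

type FileIntf interface{ FileName() string }

// Old style: the iterator passes an interface, so callers type-assert.
func withNeedOld(files []FileInfo, fn func(FileIntf) bool) {
	for _, f := range files {
		if !fn(f) {
			return
		}
	}
}

// New style: the iterator passes the concrete type directly.
func withNeedNew(files []FileInfo, fn func(FileInfo) bool) {
	for _, f := range files {
		if !fn(f) {
			return
		}
	}
}

func main() {
	files := []FileInfo{{Name: "a"}, {Name: "b"}}

	withNeedOld(files, func(fi FileIntf) bool {
		f := fi.(FileInfo) // assertion the new signature makes unnecessary
		fmt.Println("old:", f.Name)
		return true
	})

	withNeedNew(files, func(fi FileInfo) bool {
		fmt.Println("new:", fi.Name)
		return true
	})
}
```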

View File

@ -30,107 +30,10 @@ func genBlocks(n int) []protocol.BlockInfo {
return b return b
} }
func TestIgnoredFiles(t *testing.T) {
ldb, err := openJSONS("testdata/v0.14.48-ignoredfiles.db.jsons")
if err != nil {
t.Fatal(err)
}
db := newLowlevel(t, ldb)
defer db.Close()
if err := UpdateSchema(db); err != nil {
t.Fatal(err)
}
fs := newFileSet(t, "test", db)
// The contents of the database are like this:
//
// fs := newFileSet(t, "test", db)
// fs.Update(protocol.LocalDeviceID, []protocol.FileInfo{
// { // invalid (ignored) file
// Name: "foo",
// Type: protocol.FileInfoTypeFile,
// Invalid: true,
// Version: protocol.Vector{Counters: []protocol.Counter{{ID: 1, Value: 1000}}},
// },
// { // regular file
// Name: "bar",
// Type: protocol.FileInfoTypeFile,
// Version: protocol.Vector{Counters: []protocol.Counter{{ID: 1, Value: 1001}}},
// },
// })
// fs.Update(protocol.DeviceID{42}, []protocol.FileInfo{
// { // invalid file
// Name: "baz",
// Type: protocol.FileInfoTypeFile,
// Invalid: true,
// Version: protocol.Vector{Counters: []protocol.Counter{{ID: 42, Value: 1000}}},
// },
// { // regular file
// Name: "quux",
// Type: protocol.FileInfoTypeFile,
// Version: protocol.Vector{Counters: []protocol.Counter{{ID: 42, Value: 1002}}},
// },
// })
// Local files should have the "ignored" bit in addition to just being
// generally invalid if we want to look at the simulation of that bit.
snap := snapshot(t, fs)
defer snap.Release()
fi, ok := snap.Get(protocol.LocalDeviceID, "foo")
if !ok {
t.Fatal("foo should exist")
}
if !fi.IsInvalid() {
t.Error("foo should be invalid")
}
if !fi.IsIgnored() {
t.Error("foo should be ignored")
}
fi, ok = snap.Get(protocol.LocalDeviceID, "bar")
if !ok {
t.Fatal("bar should exist")
}
if fi.IsInvalid() {
t.Error("bar should not be invalid")
}
if fi.IsIgnored() {
t.Error("bar should not be ignored")
}
// Remote files have the invalid bit as usual, and the IsInvalid() method
// should pick this up too.
fi, ok = snap.Get(protocol.DeviceID{42}, "baz")
if !ok {
t.Fatal("baz should exist")
}
if !fi.IsInvalid() {
t.Error("baz should be invalid")
}
if !fi.IsInvalid() {
t.Error("baz should be invalid")
}
fi, ok = snap.Get(protocol.DeviceID{42}, "quux")
if !ok {
t.Fatal("quux should exist")
}
if fi.IsInvalid() {
t.Error("quux should not be invalid")
}
if fi.IsInvalid() {
t.Error("quux should not be invalid")
}
}
const myID = 1 const myID = 1
var ( var (
remoteDevice0, remoteDevice1 protocol.DeviceID remoteDevice0, remoteDevice1 protocol.DeviceID
update0to3Folder = "UpdateSchema0to3"
invalid = "invalid" invalid = "invalid"
slashPrefixed = "/notgood" slashPrefixed = "/notgood"
haveUpdate0to3 map[protocol.DeviceID][]protocol.FileInfo haveUpdate0to3 map[protocol.DeviceID][]protocol.FileInfo
@ -157,142 +60,6 @@ func init() {
} }
} }
func TestUpdate0to3(t *testing.T) {
ldb, err := openJSONS("testdata/v0.14.45-update0to3.db.jsons")
if err != nil {
t.Fatal(err)
}
db := newLowlevel(t, ldb)
defer db.Close()
updater := schemaUpdater{db}
folder := []byte(update0to3Folder)
if err := updater.updateSchema0to1(0); err != nil {
t.Fatal(err)
}
trans, err := db.newReadOnlyTransaction()
if err != nil {
t.Fatal(err)
}
defer trans.Release()
if _, ok, err := trans.getFile(folder, protocol.LocalDeviceID[:], []byte(slashPrefixed)); err != nil {
t.Fatal(err)
} else if ok {
t.Error("File prefixed by '/' was not removed during transition to schema 1")
}
var key []byte
key, err = db.keyer.GenerateGlobalVersionKey(nil, folder, []byte(invalid))
if err != nil {
t.Fatal(err)
}
if _, err := db.Get(key); err != nil {
t.Error("Invalid file wasn't added to global list")
}
if err := updater.updateSchema1to2(1); err != nil {
t.Fatal(err)
}
found := false
trans, err = db.newReadOnlyTransaction()
if err != nil {
t.Fatal(err)
}
defer trans.Release()
_ = trans.withHaveSequence(folder, 0, func(fi protocol.FileIntf) bool {
f := fi.(protocol.FileInfo)
l.Infoln(f)
if found {
t.Error("Unexpected additional file via sequence", f.FileName())
return true
}
if e := haveUpdate0to3[protocol.LocalDeviceID][0]; f.IsEquivalentOptional(e, protocol.FileInfoComparison{IgnorePerms: true, IgnoreBlocks: true}) {
found = true
} else {
t.Errorf("Wrong file via sequence, got %v, expected %v", f, e)
}
return true
})
if !found {
t.Error("Local file wasn't added to sequence bucket", err)
}
if err := updater.updateSchema2to3(2); err != nil {
t.Fatal(err)
}
need := map[string]protocol.FileInfo{
haveUpdate0to3[remoteDevice0][0].Name: haveUpdate0to3[remoteDevice0][0],
haveUpdate0to3[remoteDevice1][0].Name: haveUpdate0to3[remoteDevice1][0],
haveUpdate0to3[remoteDevice0][2].Name: haveUpdate0to3[remoteDevice0][2],
}
trans, err = db.newReadOnlyTransaction()
if err != nil {
t.Fatal(err)
}
defer trans.Release()
key, err = trans.keyer.GenerateNeedFileKey(nil, folder, nil)
if err != nil {
t.Fatal(err)
}
dbi, err := trans.NewPrefixIterator(key)
if err != nil {
t.Fatal(err)
}
defer dbi.Release()
for dbi.Next() {
name := trans.keyer.NameFromGlobalVersionKey(dbi.Key())
key, err = trans.keyer.GenerateGlobalVersionKey(key, folder, name)
bs, err := trans.Get(key)
if err != nil {
t.Fatal(err)
}
var vl VersionListDeprecated
if err := vl.Unmarshal(bs); err != nil {
t.Fatal(err)
}
key, err = trans.keyer.GenerateDeviceFileKey(key, folder, vl.Versions[0].Device, name)
if err != nil {
t.Fatal(err)
}
fi, ok, err := trans.getFileTrunc(key, false)
if err != nil {
t.Fatal(err)
}
if !ok {
device := "<invalid>"
if dev, err := protocol.DeviceIDFromBytes(vl.Versions[0].Device); err != nil {
device = dev.String()
}
t.Fatal("surprise missing global file", string(name), device)
}
e, ok := need[fi.FileName()]
if !ok {
t.Error("Got unexpected needed file:", fi.FileName())
}
f := fi.(protocol.FileInfo)
delete(need, f.Name)
if !f.IsEquivalentOptional(e, protocol.FileInfoComparison{IgnorePerms: true, IgnoreBlocks: true}) {
t.Errorf("Wrong needed file, got %v, expected %v", f, e)
}
}
if dbi.Error() != nil {
t.Fatal(err)
}
for n := range need {
t.Errorf(`Missing needed file "%v"`, n)
}
}
// TestRepairSequence checks that a few hand-crafted messed-up sequence entries get fixed. // TestRepairSequence checks that a few hand-crafted messed-up sequence entries get fixed.
func TestRepairSequence(t *testing.T) { func TestRepairSequence(t *testing.T) {
db := newLowlevelMemory(t) db := newLowlevelMemory(t)
@ -446,11 +213,10 @@ func TestRepairSequence(t *testing.T) {
} }
defer it.Release() defer it.Release()
for it.Next() { for it.Next() {
intf, ok, err := ro.getFileTrunc(it.Value(), false) fi, ok, err := ro.getFileTrunc(it.Value(), false)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
fi := intf.(protocol.FileInfo)
seq := ro.keyer.SequenceFromSequenceKey(it.Key()) seq := ro.keyer.SequenceFromSequenceKey(it.Key())
if !ok { if !ok {
t.Errorf("Sequence entry %v points at nothing", seq) t.Errorf("Sequence entry %v points at nothing", seq)
@ -531,81 +297,6 @@ func TestCheckGlobals(t *testing.T) {
} }
} }
func TestUpdateTo10(t *testing.T) {
ldb, err := openJSONS("./testdata/v1.4.0-updateTo10.json")
if err != nil {
t.Fatal(err)
}
db := newLowlevel(t, ldb)
defer db.Close()
UpdateSchema(db)
folder := "test"
meta, err := db.getMetaAndCheck(folder)
if err != nil {
t.Fatal(err)
}
empty := Counts{}
c := meta.Counts(protocol.LocalDeviceID, needFlag)
if c.Files != 1 {
t.Error("Expected 1 needed file locally, got", c.Files)
}
c.Files = 0
if c.Deleted != 1 {
t.Error("Expected 1 needed deletion locally, got", c.Deleted)
}
c.Deleted = 0
if !c.Equal(empty) {
t.Error("Expected all counts to be zero, got", c)
}
c = meta.Counts(remoteDevice0, needFlag)
if !c.Equal(empty) {
t.Error("Expected all counts to be zero, got", c)
}
trans, err := db.newReadOnlyTransaction()
if err != nil {
t.Fatal(err)
}
defer trans.Release()
// a
vl, err := trans.getGlobalVersions(nil, []byte(folder), []byte("a"))
if err != nil {
t.Fatal(err)
}
for _, v := range vl.RawVersions {
if !v.Deleted {
t.Error("Unexpected undeleted global version for a")
}
}
// b
vl, err = trans.getGlobalVersions(nil, []byte(folder), []byte("b"))
if err != nil {
t.Fatal(err)
}
if !vl.RawVersions[0].Deleted {
t.Error("vl.Versions[0] not deleted for b")
}
if vl.RawVersions[1].Deleted {
t.Error("vl.Versions[1] deleted for b")
}
// c
vl, err = trans.getGlobalVersions(nil, []byte(folder), []byte("c"))
if err != nil {
t.Fatal(err)
}
if vl.RawVersions[0].Deleted {
t.Error("vl.Versions[0] deleted for c")
}
if !vl.RawVersions[1].Deleted {
t.Error("vl.Versions[1] not deleted for c")
}
}
func TestDropDuplicates(t *testing.T) { func TestDropDuplicates(t *testing.T) {
names := []string{ names := []string{
"foo", "foo",
@ -769,7 +460,7 @@ func TestUpdateTo14(t *testing.T) {
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
fiBs := mustMarshal(&fileWOBlocks) fiBs := mustMarshal(fileWOBlocks.ToWire(true))
if err := trans.Put(key, fiBs); err != nil { if err := trans.Put(key, fiBs); err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -799,9 +490,9 @@ func TestUpdateTo14(t *testing.T) {
if vl, err := ro.getGlobalVersions(nil, folder, name); err != nil { if vl, err := ro.getGlobalVersions(nil, folder, name); err != nil {
t.Fatal(err) t.Fatal(err)
} else if fv, ok := vl.GetGlobal(); !ok { } else if fv, ok := vlGetGlobal(vl); !ok {
t.Error("missing global") t.Error("missing global")
} else if !fv.IsInvalid() { } else if !fvIsInvalid(fv) {
t.Error("global not marked as invalid") t.Error("global not marked as invalid")
} }
} }
@ -874,8 +565,8 @@ func TestCheckLocalNeed(t *testing.T) {
t.Errorf("Expected 2 needed files locally, got %v in meta", c.Files) t.Errorf("Expected 2 needed files locally, got %v in meta", c.Files)
} }
needed := make([]protocol.FileInfo, 0, 2) needed := make([]protocol.FileInfo, 0, 2)
snap.WithNeed(protocol.LocalDeviceID, func(fi protocol.FileIntf) bool { snap.WithNeed(protocol.LocalDeviceID, func(fi protocol.FileInfo) bool {
needed = append(needed, fi.(protocol.FileInfo)) needed = append(needed, fi)
return true return true
}) })
if l := len(needed); l != 2 { if l := len(needed); l != 2 {
@ -939,7 +630,7 @@ func TestDuplicateNeedCount(t *testing.T) {
fs = newFileSet(t, folder, db) fs = newFileSet(t, folder, db)
found := false found := false
for _, c := range fs.meta.counts.Counts { for _, c := range fs.meta.counts.Counts {
if bytes.Equal(protocol.LocalDeviceID[:], c.DeviceID) && c.LocalFlags == needFlag { if protocol.LocalDeviceID == c.DeviceID && c.LocalFlags == needFlag {
if found { if found {
t.Fatal("second need count for local device encountered") t.Fatal("second need count for local device encountered")
} }

View File

@ -19,6 +19,10 @@ import (
"time" "time"
"github.com/greatroar/blobloom" "github.com/greatroar/blobloom"
"github.com/thejerf/suture/v4"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db/backend" "github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/events" "github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs" "github.com/syncthing/syncthing/lib/fs"
@ -26,7 +30,6 @@ import (
"github.com/syncthing/syncthing/lib/stringutil" "github.com/syncthing/syncthing/lib/stringutil"
"github.com/syncthing/syncthing/lib/svcutil" "github.com/syncthing/syncthing/lib/svcutil"
"github.com/syncthing/syncthing/lib/sync" "github.com/syncthing/syncthing/lib/sync"
"github.com/thejerf/suture/v4"
) )
const ( const (
@ -521,8 +524,8 @@ func (db *Lowlevel) checkGlobals(folderStr string) (int, error) {
var dk []byte var dk []byte
ro := t.readOnlyTransaction ro := t.readOnlyTransaction
for dbi.Next() { for dbi.Next() {
var vl VersionList var vl dbproto.VersionList
if err := vl.Unmarshal(dbi.Value()); err != nil || vl.Empty() { if err := proto.Unmarshal(dbi.Value(), &vl); err != nil || len(vl.Versions) == 0 {
if err := t.Delete(dbi.Key()); err != nil && !backend.IsNotFound(err) { if err := t.Delete(dbi.Key()); err != nil && !backend.IsNotFound(err) {
return 0, err return 0, err
} }
@ -535,9 +538,9 @@ func (db *Lowlevel) checkGlobals(folderStr string) (int, error) {
// we find those and clear them out. // we find those and clear them out.
name := db.keyer.NameFromGlobalVersionKey(dbi.Key()) name := db.keyer.NameFromGlobalVersionKey(dbi.Key())
newVL := &VersionList{} newVL := &dbproto.VersionList{}
var changed, changedHere bool var changed, changedHere bool
for _, fv := range vl.RawVersions { for _, fv := range vl.Versions {
changedHere, err = checkGlobalsFilterDevices(dk, folder, name, fv.Devices, newVL, ro) changedHere, err = checkGlobalsFilterDevices(dk, folder, name, fv.Devices, newVL, ro)
if err != nil { if err != nil {
return 0, err return 0, err
@ -551,7 +554,7 @@ func (db *Lowlevel) checkGlobals(folderStr string) (int, error) {
changed = changed || changedHere changed = changed || changedHere
} }
if newVL.Empty() { if len(newVL.Versions) == 0 {
if err := t.Delete(dbi.Key()); err != nil && !backend.IsNotFound(err) { if err := t.Delete(dbi.Key()); err != nil && !backend.IsNotFound(err) {
return 0, err return 0, err
} }
@ -572,7 +575,7 @@ func (db *Lowlevel) checkGlobals(folderStr string) (int, error) {
return fixed, t.Commit() return fixed, t.Commit()
} }
func checkGlobalsFilterDevices(dk, folder, name []byte, devices [][]byte, vl *VersionList, t readOnlyTransaction) (bool, error) { func checkGlobalsFilterDevices(dk, folder, name []byte, devices [][]byte, vl *dbproto.VersionList, t readOnlyTransaction) (bool, error) {
var changed bool var changed bool
var err error var err error
for _, device := range devices { for _, device := range devices {
@ -588,7 +591,7 @@ func checkGlobalsFilterDevices(dk, folder, name []byte, devices [][]byte, vl *Ve
changed = true changed = true
continue continue
} }
_, _, _, _, _, _, err = vl.update(folder, device, f, t) _, _, _, _, _, _, err = vlUpdate(vl, folder, device, f, t)
if err != nil { if err != nil {
return false, err return false, err
} }
@ -610,7 +613,7 @@ func (db *Lowlevel) getIndexID(device, folder []byte) (protocol.IndexID, error)
var id protocol.IndexID var id protocol.IndexID
if err := id.Unmarshal(cur); err != nil { if err := id.Unmarshal(cur); err != nil {
return 0, nil return 0, nil //nolint: nilerr
} }
return id, nil return id, nil
@ -816,11 +819,11 @@ func (db *Lowlevel) gcIndirect(ctx context.Context) (err error) {
default: default:
} }
var hashes IndirectionHashesOnly var hashes dbproto.IndirectionHashesOnly
if err := hashes.Unmarshal(it.Value()); err != nil { if err := proto.Unmarshal(it.Value(), &hashes); err != nil {
return err return err
} }
db.recordIndirectionHashes(hashes) db.recordIndirectionHashes(&hashes)
} }
it.Release() it.Release()
if err := it.Error(); err != nil { if err := it.Error(); err != nil {
@ -924,10 +927,10 @@ func (db *Lowlevel) gcIndirect(ctx context.Context) (err error) {
} }
func (db *Lowlevel) recordIndirectionHashesForFile(f *protocol.FileInfo) { func (db *Lowlevel) recordIndirectionHashesForFile(f *protocol.FileInfo) {
db.recordIndirectionHashes(IndirectionHashesOnly{BlocksHash: f.BlocksHash, VersionHash: f.VersionHash}) db.recordIndirectionHashes(&dbproto.IndirectionHashesOnly{BlocksHash: f.BlocksHash, VersionHash: f.VersionHash})
} }
func (db *Lowlevel) recordIndirectionHashes(hs IndirectionHashesOnly) { func (db *Lowlevel) recordIndirectionHashes(hs *dbproto.IndirectionHashesOnly) {
// must be called with gcMut held (at least read-held) // must be called with gcMut held (at least read-held)
if db.blockFilter != nil && len(hs.BlocksHash) > 0 { if db.blockFilter != nil && len(hs.BlocksHash) > 0 {
db.blockFilter.add(hs.BlocksHash) db.blockFilter.add(hs.BlocksHash)
@ -966,7 +969,7 @@ func (b *bloomFilter) hash(id []byte) uint64 {
} }
var h maphash.Hash var h maphash.Hash
h.SetSeed(b.seed) h.SetSeed(b.seed)
h.Write(id) _, _ = h.Write(id)
return h.Sum64() return h.Sum64()
} }
@ -1035,7 +1038,7 @@ func (db *Lowlevel) getMetaAndCheckGCLocked(folder string) (*metadataTracker, er
func (db *Lowlevel) loadMetadataTracker(folder string) (*metadataTracker, error) { func (db *Lowlevel) loadMetadataTracker(folder string) (*metadataTracker, error) {
meta := newMetadataTracker(db.keyer, db.evLogger) meta := newMetadataTracker(db.keyer, db.evLogger)
if err := meta.fromDB(db, []byte(folder)); err != nil { if err := meta.fromDB(db, []byte(folder)); err != nil {
if err == errMetaInconsistent { if errors.Is(err, errMetaInconsistent) {
l.Infof("Stored folder metadata for %q is inconsistent; recalculating", folder) l.Infof("Stored folder metadata for %q is inconsistent; recalculating", folder)
} else { } else {
l.Infof("No stored folder metadata for %q; recalculating", folder) l.Infof("No stored folder metadata for %q; recalculating", folder)
@ -1071,7 +1074,7 @@ func (db *Lowlevel) recalcMeta(folderStr string) (*metadataTracker, error) {
defer t.close() defer t.close()
var deviceID protocol.DeviceID var deviceID protocol.DeviceID
err = t.withAllFolderTruncated(folder, func(device []byte, f FileInfoTruncated) bool { err = t.withAllFolderTruncated(folder, func(device []byte, f protocol.FileInfo) bool {
copy(deviceID[:], device) copy(deviceID[:], device)
meta.addFile(deviceID, f) meta.addFile(deviceID, f)
return true return true
@ -1080,7 +1083,7 @@ func (db *Lowlevel) recalcMeta(folderStr string) (*metadataTracker, error) {
return nil, err return nil, err
} }
err = t.withGlobal(folder, nil, true, func(f protocol.FileIntf) bool { err = t.withGlobal(folder, nil, true, func(f protocol.FileInfo) bool {
meta.addFile(protocol.GlobalDeviceID, f) meta.addFile(protocol.GlobalDeviceID, f)
return true return true
}) })
@ -1089,7 +1092,7 @@ func (db *Lowlevel) recalcMeta(folderStr string) (*metadataTracker, error) {
} }
meta.emptyNeeded(protocol.LocalDeviceID) meta.emptyNeeded(protocol.LocalDeviceID)
err = t.withNeed(folder, protocol.LocalDeviceID[:], true, func(f protocol.FileIntf) bool { err = t.withNeed(folder, protocol.LocalDeviceID[:], true, func(f protocol.FileInfo) bool {
meta.addNeeded(protocol.LocalDeviceID, f) meta.addNeeded(protocol.LocalDeviceID, f)
return true return true
}) })
@ -1098,7 +1101,7 @@ func (db *Lowlevel) recalcMeta(folderStr string) (*metadataTracker, error) {
} }
for _, device := range meta.devices() { for _, device := range meta.devices() {
meta.emptyNeeded(device) meta.emptyNeeded(device)
err = t.withNeed(folder, device[:], true, func(f protocol.FileIntf) bool { err = t.withNeed(folder, device[:], true, func(f protocol.FileInfo) bool {
meta.addNeeded(device, f) meta.addNeeded(device, f)
return true return true
}) })
@ -1132,7 +1135,7 @@ func (db *Lowlevel) verifyLocalSequence(curSeq int64, folder string) (bool, erro
return false, err return false, err
} }
ok := true ok := true
if err := t.withHaveSequence([]byte(folder), curSeq+1, func(fi protocol.FileIntf) bool { if err := t.withHaveSequence([]byte(folder), curSeq+1, func(_ protocol.FileInfo) bool {
ok = false // we got something, which we should not have ok = false // we got something, which we should not have
return false return false
}); err != nil { }); err != nil {
@ -1204,8 +1207,7 @@ func (db *Lowlevel) repairSequenceGCLocked(folderStr string, meta *metadataTrack
} }
return 0, err return 0, err
} }
fi := intf.(protocol.FileInfo) if sk, err = t.keyer.GenerateSequenceKey(sk, folder, intf.Sequence); err != nil {
if sk, err = t.keyer.GenerateSequenceKey(sk, folder, fi.Sequence); err != nil {
return 0, err return 0, err
} }
switch dk, err = t.Get(sk); { switch dk, err = t.Get(sk); {
@ -1216,14 +1218,14 @@ func (db *Lowlevel) repairSequenceGCLocked(folderStr string, meta *metadataTrack
fallthrough fallthrough
case !bytes.Equal(it.Key(), dk): case !bytes.Equal(it.Key(), dk):
fixed++ fixed++
fi.Sequence = meta.nextLocalSeq() intf.Sequence = meta.nextLocalSeq()
if sk, err = t.keyer.GenerateSequenceKey(sk, folder, fi.Sequence); err != nil { if sk, err = t.keyer.GenerateSequenceKey(sk, folder, intf.Sequence); err != nil {
return 0, err return 0, err
} }
if err := t.Put(sk, it.Key()); err != nil { if err := t.Put(sk, it.Key()); err != nil {
return 0, err return 0, err
} }
if err := t.putFile(it.Key(), fi); err != nil { if err := t.putFile(it.Key(), intf); err != nil {
return 0, err return 0, err
} }
} }
@ -1307,9 +1309,8 @@ func (db *Lowlevel) checkLocalNeed(folder []byte) (int, error) {
} }
} }
next() next()
t.withNeedIteratingGlobal(folder, protocol.LocalDeviceID[:], true, func(fi protocol.FileIntf) bool { itErr := t.withNeedIteratingGlobal(folder, protocol.LocalDeviceID[:], true, func(fi protocol.FileInfo) bool {
f := fi.(FileInfoTruncated) for !needDone && needName < fi.Name {
for !needDone && needName < f.Name {
repaired++ repaired++
if err = t.Delete(dbi.Key()); err != nil && !backend.IsNotFound(err) { if err = t.Delete(dbi.Key()); err != nil && !backend.IsNotFound(err) {
return false return false
@ -1317,24 +1318,27 @@ func (db *Lowlevel) checkLocalNeed(folder []byte) (int, error) {
l.Debugln("check local need: removing", needName) l.Debugln("check local need: removing", needName)
next() next()
} }
if needName == f.Name { if needName == fi.Name {
next() next()
} else { } else {
repaired++ repaired++
key, err = t.keyer.GenerateNeedFileKey(key, folder, []byte(f.Name)) key, err = t.keyer.GenerateNeedFileKey(key, folder, []byte(fi.Name))
if err != nil { if err != nil {
return false return false
} }
if err = t.Put(key, nil); err != nil { if err = t.Put(key, nil); err != nil {
return false return false
} }
l.Debugln("check local need: adding", f.Name) l.Debugln("check local need: adding", fi.Name)
} }
return true return true
}) })
if err != nil { if err != nil {
return 0, err return 0, err
} }
if itErr != nil {
return 0, itErr
}
for !needDone { for !needDone {
repaired++ repaired++
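
In this file the version lists and indirection hashes become plain generated messages (`dbproto.VersionList`, `dbproto.IndirectionHashesOnly`) handled with `google.golang.org/protobuf`'s `proto.Marshal`/`proto.Unmarshal` rather than gogo-style `vl.Unmarshal(...)` methods, and former `VersionList` methods become package functions such as `vlUpdate`. A runnable sketch of the same marshaling pattern, using the well-known `timestamppb.Timestamp` message as a stand-in for the internal `dbproto` types:

```
package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	// Marshal with the modern API: proto.Marshal(msg) instead of msg.Marshal().
	ts := timestamppb.New(time.Date(2020, 9, 7, 0, 0, 0, 0, time.UTC))
	bs, err := proto.Marshal(ts)
	if err != nil {
		panic(err)
	}

	// Unmarshal into a zero value, as checkGlobals now does with
	// dbproto.VersionList: proto.Unmarshal(bs, &vl) instead of vl.Unmarshal(bs).
	var out timestamppb.Timestamp
	if err := proto.Unmarshal(bs, &out); err != nil {
		panic(err)
	}
	fmt.Println(out.AsTime().UTC())
}
```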

View File

@ -7,12 +7,14 @@
package db package db
import ( import (
"bytes"
"errors" "errors"
"fmt" "fmt"
"math/bits" "math/bits"
"time" "time"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db/backend" "github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/events" "github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
@ -56,24 +58,34 @@ func newMetadataTracker(keyer keyer, evLogger events.Logger) *metadataTracker {
// Unmarshal loads a metadataTracker from the corresponding protobuf // Unmarshal loads a metadataTracker from the corresponding protobuf
// representation // representation
func (m *metadataTracker) Unmarshal(bs []byte) error { func (m *metadataTracker) Unmarshal(bs []byte) error {
if err := m.counts.Unmarshal(bs); err != nil { var dbc dbproto.CountsSet
if err := proto.Unmarshal(bs, &dbc); err != nil {
return err return err
} }
m.counts.Created = dbc.Created
m.counts.Counts = make([]Counts, len(dbc.Counts))
for i, c := range dbc.Counts {
m.counts.Counts[i] = countsFromWire(c)
}
// Initialize the index map // Initialize the index map
m.indexes = make(map[metaKey]int)
for i, c := range m.counts.Counts { for i, c := range m.counts.Counts {
dev, err := protocol.DeviceIDFromBytes(c.DeviceID) m.indexes[metaKey{c.DeviceID, c.LocalFlags}] = i
if err != nil {
return err
}
m.indexes[metaKey{dev, c.LocalFlags}] = i
} }
return nil return nil
} }
// Marshal returns the protobuf representation of the metadataTracker // Marshal returns the protobuf representation of the metadataTracker
func (m *metadataTracker) Marshal() ([]byte, error) { func (m *metadataTracker) Marshal() ([]byte, error) {
return m.counts.Marshal() dbc := &dbproto.CountsSet{
Counts: make([]*dbproto.Counts, len(m.counts.Counts)),
Created: m.Created().UnixNano(),
}
for i, c := range m.counts.Counts {
dbc.Counts[i] = c.toWire()
}
return proto.Marshal(dbc)
} }
func (m *metadataTracker) CommitHook(folder []byte) backend.CommitHook { func (m *metadataTracker) CommitHook(folder []byte) backend.CommitHook {
@ -142,7 +154,7 @@ func (m *metadataTracker) countsPtr(dev protocol.DeviceID, flag uint32) *Counts
idx, ok := m.indexes[key] idx, ok := m.indexes[key]
if !ok { if !ok {
idx = len(m.counts.Counts) idx = len(m.counts.Counts)
m.counts.Counts = append(m.counts.Counts, Counts{DeviceID: dev[:], LocalFlags: flag}) m.counts.Counts = append(m.counts.Counts, Counts{DeviceID: dev, LocalFlags: flag})
m.indexes[key] = idx m.indexes[key] = idx
// Need bucket must be initialized when a device first occurs in // Need bucket must be initialized when a device first occurs in
// the metadatatracker, even if there's no change to the need // the metadatatracker, even if there's no change to the need
@ -160,19 +172,19 @@ func (m *metadataTracker) countsPtr(dev protocol.DeviceID, flag uint32) *Counts
// allNeeded makes sure there is a counts in case the device needs everything. // allNeeded makes sure there is a counts in case the device needs everything.
func (m *countsMap) allNeededCounts(dev protocol.DeviceID) Counts { func (m *countsMap) allNeededCounts(dev protocol.DeviceID) Counts {
counts := Counts{} var counts Counts
if idx, ok := m.indexes[metaKey{protocol.GlobalDeviceID, 0}]; ok { if idx, ok := m.indexes[metaKey{protocol.GlobalDeviceID, 0}]; ok {
counts = m.counts.Counts[idx] counts = m.counts.Counts[idx]
counts.Deleted = 0 // Don't need deletes if having nothing counts.Deleted = 0 // Don't need deletes if having nothing
} }
counts.DeviceID = dev[:] counts.DeviceID = dev
counts.LocalFlags = needFlag counts.LocalFlags = needFlag
return counts return counts
} }
// addFile adds a file to the counts, adjusting the sequence number as // addFile adds a file to the counts, adjusting the sequence number as
// appropriate // appropriate
func (m *metadataTracker) addFile(dev protocol.DeviceID, f protocol.FileIntf) { func (m *metadataTracker) addFile(dev protocol.DeviceID, f protocol.FileInfo) {
m.mut.Lock() m.mut.Lock()
defer m.mut.Unlock() defer m.mut.Unlock()
@ -181,7 +193,7 @@ func (m *metadataTracker) addFile(dev protocol.DeviceID, f protocol.FileIntf) {
m.updateFileLocked(dev, f, m.addFileLocked) m.updateFileLocked(dev, f, m.addFileLocked)
} }
func (m *metadataTracker) updateFileLocked(dev protocol.DeviceID, f protocol.FileIntf, fn func(protocol.DeviceID, uint32, protocol.FileIntf)) { func (m *metadataTracker) updateFileLocked(dev protocol.DeviceID, f protocol.FileInfo, fn func(protocol.DeviceID, uint32, protocol.FileInfo)) {
m.dirty = true m.dirty = true
if f.IsInvalid() && (f.FileLocalFlags() == 0 || dev == protocol.GlobalDeviceID) { if f.IsInvalid() && (f.FileLocalFlags() == 0 || dev == protocol.GlobalDeviceID) {
@ -209,7 +221,7 @@ func (m *metadataTracker) emptyNeeded(dev protocol.DeviceID) {
m.dirty = true m.dirty = true
empty := Counts{ empty := Counts{
DeviceID: dev[:], DeviceID: dev,
LocalFlags: needFlag, LocalFlags: needFlag,
} }
key := metaKey{dev, needFlag} key := metaKey{dev, needFlag}
@ -222,7 +234,7 @@ func (m *metadataTracker) emptyNeeded(dev protocol.DeviceID) {
} }
// addNeeded adds a file to the needed counts // addNeeded adds a file to the needed counts
func (m *metadataTracker) addNeeded(dev protocol.DeviceID, f protocol.FileIntf) { func (m *metadataTracker) addNeeded(dev protocol.DeviceID, f protocol.FileInfo) {
m.mut.Lock() m.mut.Lock()
defer m.mut.Unlock() defer m.mut.Unlock()
@ -237,7 +249,7 @@ func (m *metadataTracker) Sequence(dev protocol.DeviceID) int64 {
return m.countsPtr(dev, 0).Sequence return m.countsPtr(dev, 0).Sequence
} }
func (m *metadataTracker) updateSeqLocked(dev protocol.DeviceID, f protocol.FileIntf) { func (m *metadataTracker) updateSeqLocked(dev protocol.DeviceID, f protocol.FileInfo) {
if dev == protocol.GlobalDeviceID { if dev == protocol.GlobalDeviceID {
return return
} }
@ -246,7 +258,7 @@ func (m *metadataTracker) updateSeqLocked(dev protocol.DeviceID, f protocol.File
} }
} }
func (m *metadataTracker) addFileLocked(dev protocol.DeviceID, flag uint32, f protocol.FileIntf) { func (m *metadataTracker) addFileLocked(dev protocol.DeviceID, flag uint32, f protocol.FileInfo) {
cp := m.countsPtr(dev, flag) cp := m.countsPtr(dev, flag)
switch { switch {
@ -263,7 +275,7 @@ func (m *metadataTracker) addFileLocked(dev protocol.DeviceID, flag uint32, f pr
} }
// removeFile removes a file from the counts // removeFile removes a file from the counts
func (m *metadataTracker) removeFile(dev protocol.DeviceID, f protocol.FileIntf) { func (m *metadataTracker) removeFile(dev protocol.DeviceID, f protocol.FileInfo) {
m.mut.Lock() m.mut.Lock()
defer m.mut.Unlock() defer m.mut.Unlock()
@ -271,7 +283,7 @@ func (m *metadataTracker) removeFile(dev protocol.DeviceID, f protocol.FileIntf)
} }
// removeNeeded removes a file from the needed counts // removeNeeded removes a file from the needed counts
func (m *metadataTracker) removeNeeded(dev protocol.DeviceID, f protocol.FileIntf) { func (m *metadataTracker) removeNeeded(dev protocol.DeviceID, f protocol.FileInfo) {
m.mut.Lock() m.mut.Lock()
defer m.mut.Unlock() defer m.mut.Unlock()
@ -280,7 +292,7 @@ func (m *metadataTracker) removeNeeded(dev protocol.DeviceID, f protocol.FileInt
m.removeFileLocked(dev, needFlag, f) m.removeFileLocked(dev, needFlag, f)
} }
func (m *metadataTracker) removeFileLocked(dev protocol.DeviceID, flag uint32, f protocol.FileIntf) { func (m *metadataTracker) removeFileLocked(dev protocol.DeviceID, flag uint32, f protocol.FileInfo) {
cp := m.countsPtr(dev, flag) cp := m.countsPtr(dev, flag)
switch { switch {
@ -325,7 +337,7 @@ func (m *metadataTracker) resetAll(dev protocol.DeviceID) {
m.mut.Lock() m.mut.Lock()
m.dirty = true m.dirty = true
for i, c := range m.counts.Counts { for i, c := range m.counts.Counts {
if bytes.Equal(c.DeviceID, dev[:]) { if c.DeviceID == dev {
if c.LocalFlags != needFlag { if c.LocalFlags != needFlag {
m.counts.Counts[i] = Counts{ m.counts.Counts[i] = Counts{
DeviceID: c.DeviceID, DeviceID: c.DeviceID,
@ -346,7 +358,7 @@ func (m *metadataTracker) resetCounts(dev protocol.DeviceID) {
m.dirty = true m.dirty = true
for i, c := range m.counts.Counts { for i, c := range m.counts.Counts {
if bytes.Equal(c.DeviceID, dev[:]) { if c.DeviceID == dev {
m.counts.Counts[i] = Counts{ m.counts.Counts[i] = Counts{
DeviceID: c.DeviceID, DeviceID: c.DeviceID,
Sequence: c.Sequence, Sequence: c.Sequence,
@ -419,14 +431,10 @@ func (m *countsMap) devices() []protocol.DeviceID {
for _, dev := range m.counts.Counts { for _, dev := range m.counts.Counts {
if dev.Sequence > 0 { if dev.Sequence > 0 {
id, err := protocol.DeviceIDFromBytes(dev.DeviceID) if dev.DeviceID == protocol.GlobalDeviceID || dev.DeviceID == protocol.LocalDeviceID {
if err != nil {
panic(err)
}
if id == protocol.GlobalDeviceID || id == protocol.LocalDeviceID {
continue continue
} }
devs = append(devs, id) devs = append(devs, dev.DeviceID)
} }
} }
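
The `bytes.Equal` comparisons on device IDs disappear in this file because `Counts.DeviceID` is now a `protocol.DeviceID` value rather than a `[]byte`. `DeviceID` is a fixed-size array type, so it is comparable with `==` and usable directly as part of the `metaKey` map key. A tiny self-contained illustration (the type below only mimics the real `protocol.DeviceID`):

```
package main

import "fmt"

// Stand-in for protocol.DeviceID, which is a fixed-size byte array type.
type DeviceID [32]byte

func main() {
	var a, b DeviceID
	a[0], b[0] = 1, 1

	// Arrays are comparable, so == replaces bytes.Equal(a[:], b[:]).
	fmt.Println(a == b) // true

	// And they can serve as (part of) a map key, as in metaKey.
	idx := map[DeviceID]int{a: 7}
	fmt.Println(idx[b]) // 7
}
```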

View File

@ -10,21 +10,56 @@ import (
"fmt" "fmt"
"time" "time"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/timestamppb"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
) )
type ObservedFolder struct {
Time time.Time
Label string
ReceiveEncrypted bool
RemoteEncrypted bool
}
func (o *ObservedFolder) toWire() *dbproto.ObservedFolder {
return &dbproto.ObservedFolder{
Time: timestamppb.New(o.Time),
Label: o.Label,
ReceiveEncrypted: o.ReceiveEncrypted,
RemoteEncrypted: o.RemoteEncrypted,
}
}
func (o *ObservedFolder) fromWire(w *dbproto.ObservedFolder) {
o.Time = w.GetTime().AsTime()
o.Label = w.GetLabel()
o.ReceiveEncrypted = w.GetReceiveEncrypted()
o.RemoteEncrypted = w.GetRemoteEncrypted()
}
type ObservedDevice struct {
Time time.Time
Name string
Address string
}
func (o *ObservedDevice) fromWire(w *dbproto.ObservedDevice) {
o.Time = w.GetTime().AsTime()
o.Name = w.GetName()
o.Address = w.GetAddress()
}
func (db *Lowlevel) AddOrUpdatePendingDevice(device protocol.DeviceID, name, address string) error { func (db *Lowlevel) AddOrUpdatePendingDevice(device protocol.DeviceID, name, address string) error {
key := db.keyer.GeneratePendingDeviceKey(nil, device[:]) key := db.keyer.GeneratePendingDeviceKey(nil, device[:])
od := ObservedDevice{ od := &dbproto.ObservedDevice{
Time: time.Now().Truncate(time.Second), Time: timestamppb.New(time.Now().Truncate(time.Second)),
Name: name, Name: name,
Address: address, Address: address,
} }
bs, err := od.Marshal() return db.Put(key, mustMarshal(od))
if err != nil {
return err
}
return db.Put(key, bs)
} }
func (db *Lowlevel) RemovePendingDevice(device protocol.DeviceID) error { func (db *Lowlevel) RemovePendingDevice(device protocol.DeviceID) error {
@ -44,13 +79,15 @@ func (db *Lowlevel) PendingDevices() (map[protocol.DeviceID]ObservedDevice, erro
for iter.Next() { for iter.Next() {
keyDev := db.keyer.DeviceFromPendingDeviceKey(iter.Key()) keyDev := db.keyer.DeviceFromPendingDeviceKey(iter.Key())
deviceID, err := protocol.DeviceIDFromBytes(keyDev) deviceID, err := protocol.DeviceIDFromBytes(keyDev)
var protoD dbproto.ObservedDevice
var od ObservedDevice var od ObservedDevice
if err != nil { if err != nil {
goto deleteKey goto deleteKey
} }
if err = od.Unmarshal(iter.Value()); err != nil { if err = proto.Unmarshal(iter.Value(), &protoD); err != nil {
goto deleteKey goto deleteKey
} }
od.fromWire(&protoD)
res[deviceID] = od res[deviceID] = od
continue continue
deleteKey: deleteKey:
@ -70,11 +107,7 @@ func (db *Lowlevel) AddOrUpdatePendingFolder(id string, of ObservedFolder, devic
if err != nil { if err != nil {
return err return err
} }
bs, err := of.Marshal() return db.Put(key, mustMarshal(of.toWire()))
if err != nil {
return err
}
return db.Put(key, bs)
} }
// RemovePendingFolderForDevice removes entries for specific folder / device combinations. // RemovePendingFolderForDevice removes entries for specific folder / device combinations.
@ -139,6 +172,7 @@ func (db *Lowlevel) PendingFoldersForDevice(device protocol.DeviceID) (map[strin
for iter.Next() { for iter.Next() {
keyDev, ok := db.keyer.DeviceFromPendingFolderKey(iter.Key()) keyDev, ok := db.keyer.DeviceFromPendingFolderKey(iter.Key())
deviceID, err := protocol.DeviceIDFromBytes(keyDev) deviceID, err := protocol.DeviceIDFromBytes(keyDev)
var protoF dbproto.ObservedFolder
var of ObservedFolder var of ObservedFolder
var folderID string var folderID string
if !ok || err != nil { if !ok || err != nil {
@ -147,7 +181,7 @@ func (db *Lowlevel) PendingFoldersForDevice(device protocol.DeviceID) (map[strin
if folderID = string(db.keyer.FolderFromPendingFolderKey(iter.Key())); len(folderID) < 1 { if folderID = string(db.keyer.FolderFromPendingFolderKey(iter.Key())); len(folderID) < 1 {
goto deleteKey goto deleteKey
} }
if err = of.Unmarshal(iter.Value()); err != nil { if err = proto.Unmarshal(iter.Value(), &protoF); err != nil {
goto deleteKey goto deleteKey
} }
if _, ok := res[folderID]; !ok { if _, ok := res[folderID]; !ok {
@ -155,6 +189,7 @@ func (db *Lowlevel) PendingFoldersForDevice(device protocol.DeviceID) (map[strin
OfferedBy: map[protocol.DeviceID]ObservedFolder{}, OfferedBy: map[protocol.DeviceID]ObservedFolder{},
} }
} }
of.fromWire(&protoF)
res[folderID].OfferedBy[deviceID] = of res[folderID].OfferedBy[deviceID] = of
continue continue
deleteKey: deleteKey:

View File

@ -7,12 +7,11 @@
package db package db
import ( import (
"bytes"
"fmt" "fmt"
"sort"
"strings"
"github.com/syncthing/syncthing/lib/db/backend" "google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
) )
@ -65,6 +64,12 @@ func (db *schemaUpdater) updateSchema() error {
return err return err
} }
if prevVersion > 0 && prevVersion < 14 {
// This is a database version that is too old to be upgraded directly.
// The user will have to upgrade via an older Syncthing version first.
return fmt.Errorf("database version %d is too old to be upgraded directly; step via Syncthing v1.27.0 to upgrade", prevVersion)
}
if prevVersion > dbVersion { if prevVersion > dbVersion {
err := &databaseDowngradeError{} err := &databaseDowngradeError{}
if minSyncthingVersion, ok, dbErr := miscDB.String("dbMinSyncthingVersion"); dbErr != nil { if minSyncthingVersion, ok, dbErr := miscDB.String("dbMinSyncthingVersion"); dbErr != nil {
@ -89,16 +94,6 @@ func (db *schemaUpdater) updateSchema() error {
} }
migrations := []migration{ migrations := []migration{
{1, 1, "v0.14.0", db.updateSchema0to1},
{2, 2, "v0.14.46", db.updateSchema1to2},
{3, 3, "v0.14.48", db.updateSchema2to3},
{5, 5, "v0.14.49", db.updateSchemaTo5},
{6, 6, "v0.14.50", db.updateSchema5to6},
{7, 7, "v0.14.53", db.updateSchema6to7},
{9, 9, "v1.4.0", db.updateSchemaTo9},
{10, 10, "v1.6.0", db.updateSchemaTo10},
{11, 11, "v1.6.0", db.updateSchemaTo11},
{13, 13, "v1.7.0", db.updateSchemaTo13},
{14, 14, "v1.9.0", db.updateSchemaTo14}, {14, 14, "v1.9.0", db.updateSchemaTo14},
{14, 16, "v1.9.0", db.checkRepairMigration}, {14, 16, "v1.9.0", db.checkRepairMigration},
{14, 17, "v1.9.0", db.migration17}, {14, 17, "v1.9.0", db.migration17},
@ -143,571 +138,6 @@ func (*schemaUpdater) writeVersions(m migration, miscDB *NamespacedKV) error {
return nil return nil
} }
func (db *schemaUpdater) updateSchema0to1(_ int) error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
dbi, err := t.NewPrefixIterator([]byte{KeyTypeDevice})
if err != nil {
return err
}
defer dbi.Release()
symlinkConv := 0
changedFolders := make(map[string]struct{})
ignAdded := 0
var gk []byte
ro := t.readOnlyTransaction
for dbi.Next() {
folder, ok := db.keyer.FolderFromDeviceFileKey(dbi.Key())
if !ok {
// not having the folder in the index is bad; delete and continue
if err := t.Delete(dbi.Key()); err != nil {
return err
}
continue
}
device, ok := db.keyer.DeviceFromDeviceFileKey(dbi.Key())
if !ok {
// not having the device in the index is bad; delete and continue
if err := t.Delete(dbi.Key()); err != nil {
return err
}
continue
}
name := db.keyer.NameFromDeviceFileKey(dbi.Key())
// Remove files with absolute path (see #4799)
if strings.HasPrefix(string(name), "/") {
if _, ok := changedFolders[string(folder)]; !ok {
changedFolders[string(folder)] = struct{}{}
}
if err := t.Delete(dbi.Key()); err != nil {
return err
}
gk, err = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
if err != nil {
return err
}
fl, err := getGlobalVersionsByKeyBefore11(gk, ro)
if backend.IsNotFound(err) {
// Shouldn't happen, but not critical.
continue
} else if err != nil {
return err
}
_, _ = fl.pop(device)
if len(fl.Versions) == 0 {
err = t.Delete(gk)
} else {
err = t.Put(gk, mustMarshal(&fl))
}
if err != nil {
return err
}
continue
}
// Change SYMLINK_FILE and SYMLINK_DIRECTORY types to the current SYMLINK
// type (previously SYMLINK_UNKNOWN). It does this for all devices, both
// local and remote, and does not reset delta indexes. It shouldn't really
// matter what the symlink type is, but this cleans it up for a possible
// future when SYMLINK_FILE and SYMLINK_DIRECTORY are no longer understood.
var f protocol.FileInfo
if err := f.Unmarshal(dbi.Value()); err != nil {
// probably can't happen
continue
}
if f.Type == protocol.FileInfoTypeSymlinkDirectory || f.Type == protocol.FileInfoTypeSymlinkFile {
f.Type = protocol.FileInfoTypeSymlink
bs, err := f.Marshal()
if err != nil {
panic("can't happen: " + err.Error())
}
if err := t.Put(dbi.Key(), bs); err != nil {
return err
}
symlinkConv++
}
// Add invalid files to global list
if f.IsInvalid() {
gk, err = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
if err != nil {
return err
}
fl, err := getGlobalVersionsByKeyBefore11(gk, ro)
if err != nil && !backend.IsNotFound(err) {
return err
}
i := sort.Search(len(fl.Versions), func(j int) bool {
return fl.Versions[j].Invalid
})
for ; i < len(fl.Versions); i++ {
ordering := fl.Versions[i].Version.Compare(f.Version)
shouldInsert := ordering == protocol.Equal
if !shouldInsert {
shouldInsert, err = shouldInsertBefore(ordering, folder, fl.Versions[i].Device, true, f, ro)
if err != nil {
return err
}
}
if shouldInsert {
nv := FileVersionDeprecated{
Device: device,
Version: f.Version,
Invalid: true,
}
fl.insertAt(i, nv)
if err := t.Put(gk, mustMarshal(&fl)); err != nil {
return err
}
if _, ok := changedFolders[string(folder)]; !ok {
changedFolders[string(folder)] = struct{}{}
}
ignAdded++
break
}
}
}
if err := t.Checkpoint(); err != nil {
return err
}
}
dbi.Release()
if err != dbi.Error() {
return err
}
return t.Commit()
}
// updateSchema1to2 introduces a sequenceKey->deviceKey bucket for local items
// to allow iteration in sequence order (simplifies sending indexes).
func (db *schemaUpdater) updateSchema1to2(_ int) error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var sk []byte
var dk []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
var putErr error
err := t.withHave(folder, protocol.LocalDeviceID[:], nil, true, func(f protocol.FileIntf) bool {
sk, putErr = db.keyer.GenerateSequenceKey(sk, folder, f.SequenceNo())
if putErr != nil {
return false
}
dk, putErr = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], []byte(f.FileName()))
if putErr != nil {
return false
}
putErr = t.Put(sk, dk)
return putErr == nil
})
if putErr != nil {
return putErr
}
if err != nil {
return err
}
}
return t.Commit()
}
// updateSchema2to3 introduces a needKey->nil bucket for locally needed files.
func (db *schemaUpdater) updateSchema2to3(_ int) error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var nk []byte
var dk []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
var putErr error
err := withGlobalBefore11(folder, true, func(f protocol.FileIntf) bool {
name := []byte(f.FileName())
dk, putErr = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], name)
if putErr != nil {
return false
}
var v protocol.Vector
haveFile, ok, err := t.getFileTrunc(dk, true)
if err != nil {
putErr = err
return false
}
if ok {
v = haveFile.FileVersion()
}
fv := FileVersionDeprecated{
Version: f.FileVersion(),
Invalid: f.IsInvalid(),
Deleted: f.IsDeleted(),
}
if !needDeprecated(fv, ok, v) {
return true
}
nk, putErr = t.keyer.GenerateNeedFileKey(nk, folder, []byte(f.FileName()))
if putErr != nil {
return false
}
putErr = t.Put(nk, nil)
return putErr == nil
}, t.readOnlyTransaction)
if putErr != nil {
return putErr
}
if err != nil {
return err
}
}
return t.Commit()
}
// updateSchemaTo5 resets the need bucket due to bugs existing in the v0.14.49
// release candidates (dbVersion 3 and 4)
// https://github.com/syncthing/syncthing/issues/5007
// https://github.com/syncthing/syncthing/issues/5053
func (db *schemaUpdater) updateSchemaTo5(prevVersion int) error {
if prevVersion != 3 && prevVersion != 4 {
return nil
}
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
var nk []byte
for _, folderStr := range db.ListFolders() {
nk, err = db.keyer.GenerateNeedFileKey(nk, []byte(folderStr), nil)
if err != nil {
return err
}
if err := t.deleteKeyPrefix(nk[:keyPrefixLen+keyFolderLen]); err != nil {
return err
}
}
if err := t.Commit(); err != nil {
return err
}
return db.updateSchema2to3(2)
}
func (db *schemaUpdater) updateSchema5to6(_ int) error {
// For every local file with the Invalid bit set, clear the Invalid bit and
// set LocalFlags = FlagLocalIgnored.
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var dk []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
var iterErr error
err := t.withHave(folder, protocol.LocalDeviceID[:], nil, false, func(f protocol.FileIntf) bool {
if !f.IsInvalid() {
return true
}
fi := f.(protocol.FileInfo)
fi.RawInvalid = false
fi.LocalFlags = protocol.FlagLocalIgnored
bs, _ := fi.Marshal()
dk, iterErr = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], []byte(fi.Name))
if iterErr != nil {
return false
}
if iterErr = t.Put(dk, bs); iterErr != nil {
return false
}
iterErr = t.Checkpoint()
return iterErr == nil
})
if iterErr != nil {
return iterErr
}
if err != nil {
return err
}
}
return t.Commit()
}
// updateSchema6to7 checks whether all currently locally needed files are really
// needed and removes them if not.
func (db *schemaUpdater) updateSchema6to7(_ int) error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var gk []byte
var nk []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
var delErr error
err := withNeedLocalBefore11(folder, false, func(f protocol.FileIntf) bool {
name := []byte(f.FileName())
gk, delErr = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
if delErr != nil {
return false
}
svl, err := t.Get(gk)
if err != nil {
// If there is no global list, we hardly need it.
key, err := t.keyer.GenerateNeedFileKey(nk, folder, name)
if err != nil {
delErr = err
return false
}
delErr = t.Delete(key)
return delErr == nil
}
var fl VersionListDeprecated
err = fl.Unmarshal(svl)
if err != nil {
// This can't happen, but it's ignored everywhere else too,
// so let's not act on it.
return true
}
globalFV := FileVersionDeprecated{
Version: f.FileVersion(),
Invalid: f.IsInvalid(),
Deleted: f.IsDeleted(),
}
if localFV, haveLocalFV := fl.Get(protocol.LocalDeviceID[:]); !needDeprecated(globalFV, haveLocalFV, localFV.Version) {
key, err := t.keyer.GenerateNeedFileKey(nk, folder, name)
if err != nil {
delErr = err
return false
}
delErr = t.Delete(key)
}
return delErr == nil
}, t.readOnlyTransaction)
if delErr != nil {
return delErr
}
if err != nil {
return err
}
if err := t.Checkpoint(); err != nil {
return err
}
}
return t.Commit()
}
func (db *schemaUpdater) updateSchemaTo9(_ int) error {
// Loads and rewrites all files with blocks, to deduplicate block lists.
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
if err := rewriteFiles(t); err != nil {
return err
}
db.recordTime(indirectGCTimeKey)
return t.Commit()
}
func rewriteFiles(t readWriteTransaction) error {
it, err := t.NewPrefixIterator([]byte{KeyTypeDevice})
if err != nil {
return err
}
defer it.Release()
for it.Next() {
intf, err := t.unmarshalTrunc(it.Value(), false)
if backend.IsNotFound(err) {
// Unmarshal error due to missing parts (block list), probably
// due to a bad migration in a previous RC. Drop this key, as
// getFile would anyway return this as a "not found" in the
// normal flow of things.
if err := t.Delete(it.Key()); err != nil {
return err
}
continue
} else if err != nil {
return err
}
fi := intf.(protocol.FileInfo)
if fi.Blocks == nil {
continue
}
if err := t.putFile(it.Key(), fi); err != nil {
return err
}
if err := t.Checkpoint(); err != nil {
return err
}
}
it.Release()
return it.Error()
}
func (db *schemaUpdater) updateSchemaTo10(_ int) error {
// Rewrites global lists to include a Deleted flag.
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var buf []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
buf, err = t.keyer.GenerateGlobalVersionKey(buf, folder, nil)
if err != nil {
return err
}
buf = globalVersionKey(buf).WithoutName()
dbi, err := t.NewPrefixIterator(buf)
if err != nil {
return err
}
defer dbi.Release()
for dbi.Next() {
var vl VersionListDeprecated
if err := vl.Unmarshal(dbi.Value()); err != nil {
return err
}
changed := false
name := t.keyer.NameFromGlobalVersionKey(dbi.Key())
for i, fv := range vl.Versions {
buf, err = t.keyer.GenerateDeviceFileKey(buf, folder, fv.Device, name)
if err != nil {
return err
}
f, ok, err := t.getFileTrunc(buf, true)
if !ok {
return errEntryFromGlobalMissing
}
if err != nil {
return err
}
if f.IsDeleted() {
vl.Versions[i].Deleted = true
changed = true
}
}
if changed {
if err := t.Put(dbi.Key(), mustMarshal(&vl)); err != nil {
return err
}
if err := t.Checkpoint(); err != nil {
return err
}
}
}
dbi.Release()
}
// Trigger metadata recalc
if err := t.deleteKeyPrefix([]byte{KeyTypeFolderMeta}); err != nil {
return err
}
return t.Commit()
}
func (db *schemaUpdater) updateSchemaTo11(_ int) error {
// Populates block list map for every folder.
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var dk []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
var putErr error
err := t.withHave(folder, protocol.LocalDeviceID[:], nil, true, func(fi protocol.FileIntf) bool {
f := fi.(FileInfoTruncated)
if f.IsDirectory() || f.IsDeleted() || f.IsSymlink() || f.IsInvalid() || f.BlocksHash == nil {
return true
}
name := []byte(f.FileName())
dk, putErr = db.keyer.GenerateBlockListMapKey(dk, folder, f.BlocksHash, name)
if putErr != nil {
return false
}
if putErr = t.Put(dk, nil); putErr != nil {
return false
}
putErr = t.Checkpoint()
return putErr == nil
})
if putErr != nil {
return putErr
}
if err != nil {
return err
}
}
return t.Commit()
}
func (db *schemaUpdater) updateSchemaTo13(prev int) error {
// Loads and rewrites all files, to deduplicate version vectors.
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
if prev < 12 {
if err := rewriteFiles(t); err != nil {
return err
}
}
if err := rewriteGlobals(t); err != nil {
return err
}
return t.Commit()
}
func (db *schemaUpdater) updateSchemaTo14(_ int) error { func (db *schemaUpdater) updateSchemaTo14(_ int) error {
// Checks for missing blocks and marks those entries as requiring a // Checks for missing blocks and marks those entries as requiring a
// rehash/being invalid. The db is checked/repaired afterwards, i.e. // rehash/being invalid. The db is checked/repaired afterwards, i.e.
@ -737,10 +167,11 @@ func (db *schemaUpdater) updateSchemaTo14(_ int) error {
} }
defer it.Release() defer it.Release()
for it.Next() { for it.Next() {
var fi protocol.FileInfo var bepf bep.FileInfo
if err := fi.Unmarshal(it.Value()); err != nil { if err := proto.Unmarshal(it.Value(), &bepf); err != nil {
return err return err
} }
fi := protocol.FileInfoFromDB(&bepf)
if len(fi.Blocks) > 0 || len(fi.BlocksHash) == 0 { if len(fi.Blocks) > 0 || len(fi.BlocksHash) == 0 {
continue continue
} }
@ -808,12 +239,11 @@ func (db *schemaUpdater) migration17(prev int) error {
return db.updateLocalFiles(folder, fs, meta) return db.updateLocalFiles(folder, fs, meta)
}) })
var innerErr error var innerErr error
err = t.withHave(folder, protocol.LocalDeviceID[:], nil, false, func(fi protocol.FileIntf) bool { err = t.withHave(folder, protocol.LocalDeviceID[:], nil, false, func(fi protocol.FileInfo) bool {
if fi.IsInvalid() && fi.FileLocalFlags() == 0 { if fi.IsInvalid() && fi.FileLocalFlags() == 0 {
f := fi.(protocol.FileInfo) fi.SetMustRescan()
f.SetMustRescan() fi.Version = protocol.Vector{}
f.Version = protocol.Vector{} batch.Append(fi)
batch.Append(f)
innerErr = batch.FlushIfFull() innerErr = batch.FlushIfFull()
return innerErr == nil return innerErr == nil
} }
@ -839,263 +269,3 @@ func (db *schemaUpdater) dropAllIndexIDsMigration(_ int) error {
func (db *schemaUpdater) dropOutgoingIndexIDsMigration(_ int) error { func (db *schemaUpdater) dropOutgoingIndexIDsMigration(_ int) error {
return db.dropOtherDeviceIndexIDs() return db.dropOtherDeviceIndexIDs()
} }
func rewriteGlobals(t readWriteTransaction) error {
it, err := t.NewPrefixIterator([]byte{KeyTypeGlobal})
if err != nil {
return err
}
defer it.Release()
for it.Next() {
var vl VersionListDeprecated
if err := vl.Unmarshal(it.Value()); err != nil {
// If we crashed during an earlier migration, some version
// lists might already be in the new format: Skip those.
var nvl VersionList
if nerr := nvl.Unmarshal(it.Value()); nerr == nil {
continue
}
return err
}
if len(vl.Versions) == 0 {
if err := t.Delete(it.Key()); err != nil {
return err
}
}
newVl := convertVersionList(vl)
if err := t.Put(it.Key(), mustMarshal(&newVl)); err != nil {
return err
}
if err := t.Checkpoint(); err != nil {
return err
}
}
return it.Error()
}
func convertVersionList(vl VersionListDeprecated) VersionList {
var newVl VersionList
var newPos, oldPos int
var lastVersion protocol.Vector
for _, fv := range vl.Versions {
if fv.Invalid {
break
}
oldPos++
if len(newVl.RawVersions) > 0 && lastVersion.Equal(fv.Version) {
newVl.RawVersions[newPos].Devices = append(newVl.RawVersions[newPos].Devices, fv.Device)
continue
}
newPos = len(newVl.RawVersions)
newVl.RawVersions = append(newVl.RawVersions, newFileVersion(fv.Device, fv.Version, false, fv.Deleted))
lastVersion = fv.Version
}
if oldPos == len(vl.Versions) {
return newVl
}
if len(newVl.RawVersions) == 0 {
fv := vl.Versions[oldPos]
newVl.RawVersions = []FileVersion{newFileVersion(fv.Device, fv.Version, true, fv.Deleted)}
oldPos++
}
newPos = 0
outer:
for _, fv := range vl.Versions[oldPos:] {
for _, nfv := range newVl.RawVersions[newPos:] {
switch nfv.Version.Compare(fv.Version) {
case protocol.Equal:
newVl.RawVersions[newPos].InvalidDevices = append(newVl.RawVersions[newPos].InvalidDevices, fv.Device)
continue outer
case protocol.Lesser:
newVl.insertAt(newPos, newFileVersion(fv.Device, fv.Version, true, fv.Deleted))
continue outer
case protocol.ConcurrentLesser, protocol.ConcurrentGreater:
// The version is invalid, i.e. it loses anyway,
// no need to check/get the conflicting file.
}
newPos++
}
// Couldn't insert into any existing versions
newVl.RawVersions = append(newVl.RawVersions, newFileVersion(fv.Device, fv.Version, true, fv.Deleted))
newPos++
}
return newVl
}
func getGlobalVersionsByKeyBefore11(key []byte, t readOnlyTransaction) (VersionListDeprecated, error) {
bs, err := t.Get(key)
if err != nil {
return VersionListDeprecated{}, err
}
var vl VersionListDeprecated
if err := vl.Unmarshal(bs); err != nil {
return VersionListDeprecated{}, err
}
return vl, nil
}
func withGlobalBefore11(folder []byte, truncate bool, fn Iterator, t readOnlyTransaction) error {
key, err := t.keyer.GenerateGlobalVersionKey(nil, folder, nil)
if err != nil {
return err
}
dbi, err := t.NewPrefixIterator(key)
if err != nil {
return err
}
defer dbi.Release()
var dk []byte
for dbi.Next() {
name := t.keyer.NameFromGlobalVersionKey(dbi.Key())
var vl VersionListDeprecated
if err := vl.Unmarshal(dbi.Value()); err != nil {
return err
}
dk, err = t.keyer.GenerateDeviceFileKey(dk, folder, vl.Versions[0].Device, name)
if err != nil {
return err
}
f, ok, err := t.getFileTrunc(dk, truncate)
if err != nil {
return err
}
if !ok {
continue
}
if !fn(f) {
return nil
}
}
if err != nil {
return err
}
return dbi.Error()
}
func withNeedLocalBefore11(folder []byte, truncate bool, fn Iterator, t readOnlyTransaction) error {
key, err := t.keyer.GenerateNeedFileKey(nil, folder, nil)
if err != nil {
return err
}
dbi, err := t.NewPrefixIterator(key.WithoutName())
if err != nil {
return err
}
defer dbi.Release()
var keyBuf []byte
var f protocol.FileIntf
var ok bool
for dbi.Next() {
keyBuf, f, ok, err = getGlobalBefore11(keyBuf, folder, t.keyer.NameFromGlobalVersionKey(dbi.Key()), truncate, t)
if err != nil {
return err
}
if !ok {
continue
}
if !fn(f) {
return nil
}
}
return dbi.Error()
}
func getGlobalBefore11(keyBuf, folder, file []byte, truncate bool, t readOnlyTransaction) ([]byte, protocol.FileIntf, bool, error) {
keyBuf, err := t.keyer.GenerateGlobalVersionKey(keyBuf, folder, file)
if err != nil {
return nil, nil, false, err
}
bs, err := t.Get(keyBuf)
if backend.IsNotFound(err) {
return keyBuf, nil, false, nil
} else if err != nil {
return nil, nil, false, err
}
var vl VersionListDeprecated
if err := vl.Unmarshal(bs); err != nil {
return nil, nil, false, err
}
if len(vl.Versions) == 0 {
return nil, nil, false, nil
}
keyBuf, err = t.keyer.GenerateDeviceFileKey(keyBuf, folder, vl.Versions[0].Device, file)
if err != nil {
return nil, nil, false, err
}
fi, ok, err := t.getFileTrunc(keyBuf, truncate)
if err != nil || !ok {
return keyBuf, nil, false, err
}
return keyBuf, fi, true, nil
}
func (vl *VersionListDeprecated) String() string {
var b bytes.Buffer
var id protocol.DeviceID
b.WriteString("{")
for i, v := range vl.Versions {
if i > 0 {
b.WriteString(", ")
}
copy(id[:], v.Device)
fmt.Fprintf(&b, "{%v, %v}", v.Version, id)
}
b.WriteString("}")
return b.String()
}
func (vl *VersionListDeprecated) pop(device []byte) (FileVersionDeprecated, int) {
for i, v := range vl.Versions {
if bytes.Equal(v.Device, device) {
vl.Versions = append(vl.Versions[:i], vl.Versions[i+1:]...)
return v, i
}
}
return FileVersionDeprecated{}, -1
}
func (vl *VersionListDeprecated) Get(device []byte) (FileVersionDeprecated, bool) {
for _, v := range vl.Versions {
if bytes.Equal(v.Device, device) {
return v, true
}
}
return FileVersionDeprecated{}, false
}
func (vl *VersionListDeprecated) insertAt(i int, v FileVersionDeprecated) {
vl.Versions = append(vl.Versions, FileVersionDeprecated{})
copy(vl.Versions[i+1:], vl.Versions[i:])
vl.Versions[i] = v
}
func needDeprecated(global FileVersionDeprecated, haveLocal bool, localVersion protocol.Vector) bool {
// We never need an invalid file.
if global.Invalid {
return false
}
// We don't need a deleted file if we don't have it.
if global.Deleted && !haveLocal {
return false
}
// We don't need the global file if we already have the same version.
if haveLocal && localVersion.GreaterEqual(global.Version) {
return false
}
return true
}
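
As a side note on the removed migration above: `convertVersionList` is the one-shot conversion from the deprecated version list (one entry per device) to the current format (one entry per distinct version, carrying lists of valid and invalid devices). A self-contained sketch of that shape change, using stand-in types and plain ints instead of `protocol.Vector`, and covering only the simple case where all entries share a version:

```
package main

import "fmt"

// Stand-ins for the deleted FileVersionDeprecated/FileVersion types,
// for illustration only.
type fvOld struct {
	device  string
	version int
	invalid bool
}

type fvNew struct {
	version        int
	devices        []string
	invalidDevices []string
}

// convert groups one-entry-per-device records into one-entry-per-version
// records, sorting invalid devices into their own list. This is the shape
// change convertVersionList performs, minus its ordering rules.
func convert(old []fvOld) []fvNew {
	var out []fvNew
	for _, o := range old {
		i := -1
		for j := range out {
			if out[j].version == o.version {
				i = j
				break
			}
		}
		if i == -1 {
			out = append(out, fvNew{version: o.version})
			i = len(out) - 1
		}
		if o.invalid {
			out[i].invalidDevices = append(out[i].invalidDevices, o.device)
		} else {
			out[i].devices = append(out[i].devices, o.device)
		}
	}
	return out
}

func main() {
	fmt.Println(convert([]fvOld{
		{device: "A", version: 1},
		{device: "B", version: 1},
		{device: "C", version: 1, invalid: true},
	}))
	// Prints: [{1 [A B] [C]}]
}
```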

View File

@ -13,8 +13,10 @@
package db package db
import ( import (
"bytes"
"fmt" "fmt"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db/backend" "github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/fs" "github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/osutil" "github.com/syncthing/syncthing/lib/osutil"
@ -33,7 +35,7 @@ type FileSet struct {
// The Iterator is called with either a protocol.FileInfo or a // The Iterator is called with either a protocol.FileInfo or a
// FileInfoTruncated (depending on the method) and returns true to // FileInfoTruncated (depending on the method) and returns true to
// continue iteration, false to stop. // continue iteration, false to stop.
type Iterator func(f protocol.FileIntf) bool type Iterator func(f protocol.FileInfo) bool
func NewFileSet(folder string, db *Lowlevel) (*FileSet, error) { func NewFileSet(folder string, db *Lowlevel) (*FileSet, error) {
select { select {
@ -292,26 +294,24 @@ func (s *Snapshot) GetGlobal(file string) (protocol.FileInfo, bool) {
if !ok { if !ok {
return protocol.FileInfo{}, false return protocol.FileInfo{}, false
} }
f := fi.(protocol.FileInfo) fi.Name = osutil.NativeFilename(fi.Name)
f.Name = osutil.NativeFilename(f.Name) return fi, true
return f, true
} }
func (s *Snapshot) GetGlobalTruncated(file string) (FileInfoTruncated, bool) { func (s *Snapshot) GetGlobalTruncated(file string) (protocol.FileInfo, bool) {
opStr := fmt.Sprintf("%s GetGlobalTruncated(%v)", s.folder, file) opStr := fmt.Sprintf("%s GetGlobalTruncated(%v)", s.folder, file)
l.Debugf(opStr) l.Debugf(opStr)
_, fi, ok, err := s.t.getGlobal(nil, []byte(s.folder), []byte(osutil.NormalizedFilename(file)), true) _, fi, ok, err := s.t.getGlobal(nil, []byte(s.folder), []byte(osutil.NormalizedFilename(file)), true)
if backend.IsClosed(err) { if backend.IsClosed(err) {
return FileInfoTruncated{}, false return protocol.FileInfo{}, false
} else if err != nil { } else if err != nil {
s.fatalError(err, opStr) s.fatalError(err, opStr)
} }
if !ok { if !ok {
return FileInfoTruncated{}, false return protocol.FileInfo{}, false
} }
f := fi.(FileInfoTruncated) fi.Name = osutil.NativeFilename(fi.Name)
f.Name = osutil.NativeFilename(f.Name) return fi, true
return f, true
} }
func (s *Snapshot) Availability(file string) []protocol.DeviceID { func (s *Snapshot) Availability(file string) []protocol.DeviceID {
@ -326,16 +326,16 @@ func (s *Snapshot) Availability(file string) []protocol.DeviceID {
return av return av
} }
func (s *Snapshot) DebugGlobalVersions(file string) VersionList { func (s *Snapshot) DebugGlobalVersions(file string) *DebugVersionList {
opStr := fmt.Sprintf("%s DebugGlobalVersions(%v)", s.folder, file) opStr := fmt.Sprintf("%s DebugGlobalVersions(%v)", s.folder, file)
l.Debugf(opStr) l.Debugf(opStr)
vl, err := s.t.getGlobalVersions(nil, []byte(s.folder), []byte(osutil.NormalizedFilename(file))) vl, err := s.t.getGlobalVersions(nil, []byte(s.folder), []byte(osutil.NormalizedFilename(file)))
if backend.IsClosed(err) || backend.IsNotFound(err) { if backend.IsClosed(err) || backend.IsNotFound(err) {
return VersionList{} return nil
} else if err != nil { } else if err != nil {
s.fatalError(err, opStr) s.fatalError(err, opStr)
} }
return vl return &DebugVersionList{vl}
} }
func (s *Snapshot) Sequence(device protocol.DeviceID) int64 { func (s *Snapshot) Sequence(device protocol.DeviceID) int64 {
@ -503,17 +503,9 @@ func normalizeFilenamesAndDropDuplicates(fs []protocol.FileInfo) []protocol.File
} }
func nativeFileIterator(fn Iterator) Iterator { func nativeFileIterator(fn Iterator) Iterator {
return func(fi protocol.FileIntf) bool { return func(fi protocol.FileInfo) bool {
switch f := fi.(type) { fi.Name = osutil.NativeFilename(fi.Name)
case protocol.FileInfo: return fn(fi)
f.Name = osutil.NativeFilename(f.Name)
return fn(f)
case FileInfoTruncated:
f.Name = osutil.NativeFilename(f.Name)
return fn(f)
default:
panic("unknown interface type")
}
} }
} }
@ -522,3 +514,40 @@ func fatalError(err error, opStr string, db *Lowlevel) {
l.Warnf("Fatal error: %v: %v", opStr, err) l.Warnf("Fatal error: %v: %v", opStr, err)
panic(ldbPathRe.ReplaceAllString(err.Error(), "$1 x: ")) panic(ldbPathRe.ReplaceAllString(err.Error(), "$1 x: "))
} }
// DebugVersionList is the database-internal representation of a file's
// version list, with a nicer string representation, used only by API debug
// methods.
type DebugVersionList struct {
*dbproto.VersionList
}
func (vl DebugVersionList) String() string {
var b bytes.Buffer
var id protocol.DeviceID
b.WriteString("[")
for i, v := range vl.Versions {
if i > 0 {
b.WriteString(", ")
}
fmt.Fprintf(&b, "{Version:%v, Deleted:%v, Devices:[", protocol.VectorFromWire(v.Version), v.Deleted)
for j, dev := range v.Devices {
if j > 0 {
b.WriteString(", ")
}
copy(id[:], dev)
fmt.Fprint(&b, id.Short())
}
b.WriteString("], Invalid:[")
for j, dev := range v.InvalidDevices {
if j > 0 {
b.WriteString(", ")
}
copy(id[:], dev)
fmt.Fprint(&b, id.Short())
}
fmt.Fprint(&b, "]}")
}
b.WriteString("]")
return b.String()
}
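
For anyone poking at the new debug type from the API layer, a minimal usage sketch (not part of the diff; the helper and package names are made up) showing how the nil return and the stringer fit together:

```
package apidebug // hypothetical package, for illustration only

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/db"
)

// dumpGlobalVersions prints the debug version list for one file, given an
// existing snapshot. The nil check matters: DebugGlobalVersions now returns
// nil rather than an empty value when the key is missing or the DB is closed.
func dumpGlobalVersions(snap *db.Snapshot, name string) {
	if vl := snap.DebugGlobalVersions(name); vl != nil {
		fmt.Println(vl) // DebugVersionList.String() prints device IDs in short form
	}
}
```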

View File

@ -16,6 +16,7 @@ import (
"time" "time"
"github.com/d4l3k/messagediff" "github.com/d4l3k/messagediff"
"github.com/syncthing/syncthing/lib/db" "github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend" "github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/events" "github.com/syncthing/syncthing/lib/events"
@ -48,21 +49,19 @@ func globalList(t testing.TB, s *db.FileSet) []protocol.FileInfo {
var fs []protocol.FileInfo var fs []protocol.FileInfo
snap := snapshot(t, s) snap := snapshot(t, s)
defer snap.Release() defer snap.Release()
snap.WithGlobal(func(fi protocol.FileIntf) bool { snap.WithGlobal(func(fi protocol.FileInfo) bool {
f := fi.(protocol.FileInfo) fs = append(fs, fi)
fs = append(fs, f)
return true return true
}) })
return fs return fs
} }
func globalListPrefixed(t testing.TB, s *db.FileSet, prefix string) []db.FileInfoTruncated { func globalListPrefixed(t testing.TB, s *db.FileSet, prefix string) []protocol.FileInfo {
var fs []db.FileInfoTruncated var fs []protocol.FileInfo
snap := snapshot(t, s) snap := snapshot(t, s)
defer snap.Release() defer snap.Release()
snap.WithPrefixedGlobalTruncated(prefix, func(fi protocol.FileIntf) bool { snap.WithPrefixedGlobalTruncated(prefix, func(fi protocol.FileInfo) bool {
f := fi.(db.FileInfoTruncated) fs = append(fs, fi)
fs = append(fs, f)
return true return true
}) })
return fs return fs
@ -72,21 +71,19 @@ func haveList(t testing.TB, s *db.FileSet, n protocol.DeviceID) []protocol.FileI
var fs []protocol.FileInfo var fs []protocol.FileInfo
snap := snapshot(t, s) snap := snapshot(t, s)
defer snap.Release() defer snap.Release()
snap.WithHave(n, func(fi protocol.FileIntf) bool { snap.WithHave(n, func(fi protocol.FileInfo) bool {
f := fi.(protocol.FileInfo) fs = append(fs, fi)
fs = append(fs, f)
return true return true
}) })
return fs return fs
} }
func haveListPrefixed(t testing.TB, s *db.FileSet, n protocol.DeviceID, prefix string) []db.FileInfoTruncated { func haveListPrefixed(t testing.TB, s *db.FileSet, n protocol.DeviceID, prefix string) []protocol.FileInfo {
var fs []db.FileInfoTruncated var fs []protocol.FileInfo
snap := snapshot(t, s) snap := snapshot(t, s)
defer snap.Release() defer snap.Release()
snap.WithPrefixedHaveTruncated(n, prefix, func(fi protocol.FileIntf) bool { snap.WithPrefixedHaveTruncated(n, prefix, func(fi protocol.FileInfo) bool {
f := fi.(db.FileInfoTruncated) fs = append(fs, fi)
fs = append(fs, f)
return true return true
}) })
return fs return fs
@ -96,9 +93,8 @@ func needList(t testing.TB, s *db.FileSet, n protocol.DeviceID) []protocol.FileI
var fs []protocol.FileInfo var fs []protocol.FileInfo
snap := snapshot(t, s) snap := snapshot(t, s)
defer snap.Release() defer snap.Release()
snap.WithNeed(n, func(fi protocol.FileIntf) bool { snap.WithNeed(n, func(fi protocol.FileInfo) bool {
f := fi.(protocol.FileInfo) fs = append(fs, fi)
fs = append(fs, f)
return true return true
}) })
return fs return fs
@ -1011,9 +1007,9 @@ func TestWithHaveSequence(t *testing.T) {
i := 2 i := 2
snap := snapshot(t, s) snap := snapshot(t, s)
defer snap.Release() defer snap.Release()
snap.WithHaveSequence(int64(i), func(fi protocol.FileIntf) bool { snap.WithHaveSequence(int64(i), func(fi protocol.FileInfo) bool {
if f := fi.(protocol.FileInfo); !f.IsEquivalent(localHave[i-1], 0) { if !fi.IsEquivalent(localHave[i-1], 0) {
t.Fatalf("Got %v\nExpected %v", f, localHave[i-1]) t.Fatalf("Got %v\nExpected %v", fi, localHave[i-1])
} }
i++ i++
return true return true
@ -1062,7 +1058,7 @@ loop:
default: default:
} }
snap := snapshot(t, s) snap := snapshot(t, s)
snap.WithHaveSequence(prevSeq+1, func(fi protocol.FileIntf) bool { snap.WithHaveSequence(prevSeq+1, func(fi protocol.FileInfo) bool {
if fi.SequenceNo() < prevSeq+1 { if fi.SequenceNo() < prevSeq+1 {
t.Fatal("Skipped ", prevSeq+1, fi.SequenceNo()) t.Fatal("Skipped ", prevSeq+1, fi.SequenceNo())
} }
@ -1540,8 +1536,8 @@ func TestSequenceIndex(t *testing.T) {
// Start a routine to walk the sequence index and inspect the result. // Start a routine to walk the sequence index and inspect the result.
seen := make(map[string]protocol.FileIntf) seen := make(map[string]protocol.FileInfo)
latest := make([]protocol.FileIntf, 0, len(local)) latest := make([]protocol.FileInfo, 0, len(local))
var seq int64 var seq int64
t0 := time.Now() t0 := time.Now()
@ -1552,7 +1548,7 @@ func TestSequenceIndex(t *testing.T) {
// update has happened since our last iteration. // update has happened since our last iteration.
latest = latest[:0] latest = latest[:0]
snap := snapshot(t, s) snap := snapshot(t, s)
snap.WithHaveSequence(seq+1, func(f protocol.FileIntf) bool { snap.WithHaveSequence(seq+1, func(f protocol.FileInfo) bool {
seen[f.FileName()] = f seen[f.FileName()] = f
latest = append(latest, f) latest = append(latest, f)
seq = f.SequenceNo() seq = f.SequenceNo()
@ -1657,7 +1653,7 @@ func TestUpdateWithOneFileTwice(t *testing.T) {
snap := snapshot(t, s) snap := snapshot(t, s)
defer snap.Release() defer snap.Release()
count := 0 count := 0
snap.WithHaveSequence(0, func(f protocol.FileIntf) bool { snap.WithHaveSequence(0, func(_ protocol.FileInfo) bool {
count++ count++
return true return true
}) })
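
Callbacks now take a plain `protocol.FileInfo` everywhere, so the type switches and assertions above disappear; the truncated variants still skip loading block lists, they just no longer use a separate type. A minimal caller sketch (helper and package names made up):

```
package example

import (
	"github.com/syncthing/syncthing/lib/db"
	"github.com/syncthing/syncthing/lib/protocol"
)

// namesUnder collects the global file names below a prefix. No type
// assertion is needed on the callback argument anymore.
func namesUnder(snap *db.Snapshot, prefix string) []string {
	var names []string
	snap.WithPrefixedGlobalTruncated(prefix, func(f protocol.FileInfo) bool {
		names = append(names, f.Name)
		return true
	})
	return names
}
```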

View File

@ -10,172 +10,52 @@ import (
"bytes" "bytes"
"fmt" "fmt"
"strings" "strings"
"time"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
) )
func (f FileInfoTruncated) String() string { type CountsSet struct {
switch f.Type { Counts []Counts
case protocol.FileInfoTypeDirectory: Created int64 // unix nanos
return fmt.Sprintf("Directory{Name:%q, Sequence:%d, Permissions:0%o, ModTime:%v, Version:%v, Deleted:%v, Invalid:%v, LocalFlags:0x%x, NoPermissions:%v}", }
f.Name, f.Sequence, f.Permissions, f.ModTime(), f.Version, f.Deleted, f.RawInvalid, f.LocalFlags, f.NoPermissions)
case protocol.FileInfoTypeFile: type Counts struct {
return fmt.Sprintf("File{Name:%q, Sequence:%d, Permissions:0%o, ModTime:%v, Version:%v, Length:%d, Deleted:%v, Invalid:%v, LocalFlags:0x%x, NoPermissions:%v, BlockSize:%d}", Files int
f.Name, f.Sequence, f.Permissions, f.ModTime(), f.Version, f.Size, f.Deleted, f.RawInvalid, f.LocalFlags, f.NoPermissions, f.RawBlockSize) Directories int
case protocol.FileInfoTypeSymlink, protocol.FileInfoTypeSymlinkDirectory, protocol.FileInfoTypeSymlinkFile: Symlinks int
return fmt.Sprintf("Symlink{Name:%q, Type:%v, Sequence:%d, Version:%v, Deleted:%v, Invalid:%v, LocalFlags:0x%x, NoPermissions:%v, SymlinkTarget:%q}", Deleted int
f.Name, f.Type, f.Sequence, f.Version, f.Deleted, f.RawInvalid, f.LocalFlags, f.NoPermissions, f.SymlinkTarget) Bytes int64
default: Sequence int64 // zero for the global state
panic("mystery file type detected") DeviceID protocol.DeviceID // device ID for remote devices, or special values for local/global
LocalFlags uint32 // the local flag for this count bucket
}
func (c Counts) toWire() *dbproto.Counts {
return &dbproto.Counts{
Files: int32(c.Files),
Directories: int32(c.Directories),
Symlinks: int32(c.Symlinks),
Deleted: int32(c.Deleted),
Bytes: c.Bytes,
Sequence: c.Sequence,
DeviceId: c.DeviceID[:],
LocalFlags: c.LocalFlags,
} }
} }
func (f FileInfoTruncated) IsDeleted() bool { func countsFromWire(w *dbproto.Counts) Counts {
return f.Deleted return Counts{
} Files: int(w.Files),
Directories: int(w.Directories),
func (f FileInfoTruncated) IsInvalid() bool { Symlinks: int(w.Symlinks),
return f.RawInvalid || f.LocalFlags&protocol.LocalInvalidFlags != 0 Deleted: int(w.Deleted),
} Bytes: w.Bytes,
Sequence: w.Sequence,
func (f FileInfoTruncated) IsUnsupported() bool { DeviceID: protocol.DeviceID(w.DeviceId),
return f.LocalFlags&protocol.FlagLocalUnsupported != 0 LocalFlags: w.LocalFlags,
}
func (f FileInfoTruncated) IsIgnored() bool {
return f.LocalFlags&protocol.FlagLocalIgnored != 0
}
func (f FileInfoTruncated) MustRescan() bool {
return f.LocalFlags&protocol.FlagLocalMustRescan != 0
}
func (f FileInfoTruncated) IsReceiveOnlyChanged() bool {
return f.LocalFlags&protocol.FlagLocalReceiveOnly != 0
}
func (f FileInfoTruncated) IsDirectory() bool {
return f.Type == protocol.FileInfoTypeDirectory
}
func (f FileInfoTruncated) IsSymlink() bool {
switch f.Type {
case protocol.FileInfoTypeSymlink, protocol.FileInfoTypeSymlinkDirectory, protocol.FileInfoTypeSymlinkFile:
return true
default:
return false
}
}
func (f FileInfoTruncated) ShouldConflict() bool {
return f.LocalFlags&protocol.LocalConflictFlags != 0
}
func (f FileInfoTruncated) HasPermissionBits() bool {
return !f.NoPermissions
}
func (f FileInfoTruncated) FileSize() int64 {
if f.Deleted {
return 0
}
if f.IsDirectory() || f.IsSymlink() {
return protocol.SyntheticDirectorySize
}
return f.Size
}
func (f FileInfoTruncated) BlockSize() int {
if f.RawBlockSize == 0 {
return protocol.MinBlockSize
}
return int(f.RawBlockSize)
}
func (f FileInfoTruncated) FileName() string {
return f.Name
}
func (f FileInfoTruncated) FileLocalFlags() uint32 {
return f.LocalFlags
}
func (f FileInfoTruncated) ModTime() time.Time {
return time.Unix(f.ModifiedS, int64(f.ModifiedNs))
}
func (f FileInfoTruncated) SequenceNo() int64 {
return f.Sequence
}
func (f FileInfoTruncated) FileVersion() protocol.Vector {
return f.Version
}
func (f FileInfoTruncated) FileType() protocol.FileInfoType {
return f.Type
}
func (f FileInfoTruncated) FilePermissions() uint32 {
return f.Permissions
}
func (f FileInfoTruncated) FileModifiedBy() protocol.ShortID {
return f.ModifiedBy
}
func (f FileInfoTruncated) PlatformData() protocol.PlatformData {
return f.Platform
}
func (f FileInfoTruncated) InodeChangeTime() time.Time {
return time.Unix(0, f.InodeChangeNs)
}
func (f FileInfoTruncated) FileBlocksHash() []byte {
return f.BlocksHash
}
func (f FileInfoTruncated) ConvertToIgnoredFileInfo() protocol.FileInfo {
file := f.copyToFileInfo()
file.SetIgnored()
return file
}
func (f FileInfoTruncated) ConvertToDeletedFileInfo(by protocol.ShortID) protocol.FileInfo {
file := f.copyToFileInfo()
file.SetDeleted(by)
return file
}
// ConvertDeletedToFileInfo converts a deleted truncated file info to a regular file info
func (f FileInfoTruncated) ConvertDeletedToFileInfo() protocol.FileInfo {
if !f.Deleted {
panic("ConvertDeletedToFileInfo must only be called on deleted items")
}
return f.copyToFileInfo()
}
// copyToFileInfo just copies all members of FileInfoTruncated to protocol.FileInfo
func (f FileInfoTruncated) copyToFileInfo() protocol.FileInfo {
return protocol.FileInfo{
Name: f.Name,
Size: f.Size,
ModifiedS: f.ModifiedS,
ModifiedBy: f.ModifiedBy,
Version: f.Version,
Sequence: f.Sequence,
SymlinkTarget: f.SymlinkTarget,
BlocksHash: f.BlocksHash,
Type: f.Type,
Permissions: f.Permissions,
ModifiedNs: f.ModifiedNs,
RawBlockSize: f.RawBlockSize,
LocalFlags: f.LocalFlags,
Deleted: f.Deleted,
RawInvalid: f.RawInvalid,
NoPermissions: f.NoPermissions,
} }
} }
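
The `toWire`/`countsFromWire` pair above is the pattern the whole refactor leans on: the hand-written struct is what the rest of the code works with, and the generated `dbproto` message only appears at the (de)serialization boundary. A sketch of the round trip, assuming package `db` context and the `proto`/`dbproto` imports added earlier in this file (both helper names are made up):

```
func marshalCounts(c Counts) ([]byte, error) {
	// Convert to the generated message only at the point of marshalling.
	return proto.Marshal(c.toWire())
}

func unmarshalCounts(bs []byte) (Counts, error) {
	var w dbproto.Counts
	if err := proto.Unmarshal(bs, &w); err != nil {
		return Counts{}, err
	}
	// Convert back to the hand-written struct immediately after unmarshalling.
	return countsFromWire(&w), nil
}
```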
@ -187,7 +67,7 @@ func (c Counts) Add(other Counts) Counts {
Deleted: c.Deleted + other.Deleted, Deleted: c.Deleted + other.Deleted,
Bytes: c.Bytes + other.Bytes, Bytes: c.Bytes + other.Bytes,
Sequence: c.Sequence + other.Sequence, Sequence: c.Sequence + other.Sequence,
DeviceID: protocol.EmptyDeviceID[:], DeviceID: protocol.EmptyDeviceID,
LocalFlags: c.LocalFlags | other.LocalFlags, LocalFlags: c.LocalFlags | other.LocalFlags,
} }
} }
@ -197,7 +77,6 @@ func (c Counts) TotalItems() int {
} }
func (c Counts) String() string { func (c Counts) String() string {
dev, _ := protocol.DeviceIDFromBytes(c.DeviceID)
var flags strings.Builder var flags strings.Builder
if c.LocalFlags&needFlag != 0 { if c.LocalFlags&needFlag != 0 {
flags.WriteString("Need") flags.WriteString("Need")
@ -220,7 +99,7 @@ func (c Counts) String() string {
if flags.Len() == 0 { if flags.Len() == 0 {
flags.WriteString("---") flags.WriteString("---")
} }
return fmt.Sprintf("{Device:%v, Files:%d, Dirs:%d, Symlinks:%d, Del:%d, Bytes:%d, Seq:%d, Flags:%s}", dev, c.Files, c.Directories, c.Symlinks, c.Deleted, c.Bytes, c.Sequence, flags.String()) return fmt.Sprintf("{Device:%v, Files:%d, Dirs:%d, Symlinks:%d, Del:%d, Bytes:%d, Seq:%d, Flags:%s}", c.DeviceID, c.Files, c.Directories, c.Symlinks, c.Deleted, c.Bytes, c.Sequence, flags.String())
} }
// Equal compares the numbers only, not sequence/dev/flags. // Equal compares the numbers only, not sequence/dev/flags.
@ -228,80 +107,50 @@ func (c Counts) Equal(o Counts) bool {
return c.Files == o.Files && c.Directories == o.Directories && c.Symlinks == o.Symlinks && c.Deleted == o.Deleted && c.Bytes == o.Bytes return c.Files == o.Files && c.Directories == o.Directories && c.Symlinks == o.Symlinks && c.Deleted == o.Deleted && c.Bytes == o.Bytes
} }
func (vl VersionList) String() string {
var b bytes.Buffer
var id protocol.DeviceID
b.WriteString("{")
for i, v := range vl.RawVersions {
if i > 0 {
b.WriteString(", ")
}
fmt.Fprintf(&b, "{Version:%v, Deleted:%v, Devices:{", v.Version, v.Deleted)
for j, dev := range v.Devices {
if j > 0 {
b.WriteString(", ")
}
copy(id[:], dev)
fmt.Fprint(&b, id.Short())
}
b.WriteString("}, Invalid:{")
for j, dev := range v.InvalidDevices {
if j > 0 {
b.WriteString(", ")
}
copy(id[:], dev)
fmt.Fprint(&b, id.Short())
}
fmt.Fprint(&b, "}}")
}
b.WriteString("}")
return b.String()
}
// update brings the VersionList up to date with file. It returns the updated // update brings the VersionList up to date with file. It returns the updated
// VersionList, a device that has the global/newest version, a device that previously // VersionList, a device that has the global/newest version, a device that previously
// had the global/newest version, a boolean indicating if the global version has // had the global/newest version, a boolean indicating if the global version has
// changed and if any error occurred (only possible in db interaction). // changed and if any error occurred (only possible in db interaction).
func (vl *VersionList) update(folder, device []byte, file protocol.FileIntf, t readOnlyTransaction) (FileVersion, FileVersion, FileVersion, bool, bool, bool, error) { func vlUpdate(vl *dbproto.VersionList, folder, device []byte, file protocol.FileInfo, t readOnlyTransaction) (*dbproto.FileVersion, *dbproto.FileVersion, *dbproto.FileVersion, bool, bool, bool, error) {
if len(vl.RawVersions) == 0 { if len(vl.Versions) == 0 {
nv := newFileVersion(device, file.FileVersion(), file.IsInvalid(), file.IsDeleted()) nv := newFileVersion(device, file.FileVersion(), file.IsInvalid(), file.IsDeleted())
vl.RawVersions = append(vl.RawVersions, nv) vl.Versions = append(vl.Versions, nv)
return nv, FileVersion{}, FileVersion{}, false, false, true, nil return nv, nil, nil, false, false, true, nil
} }
// Get the current global (before updating) // Get the current global (before updating)
oldFV, haveOldGlobal := vl.GetGlobal() oldFV, haveOldGlobal := vlGetGlobal(vl)
oldFV = oldFV.copy() oldFV = fvCopy(oldFV)
// Remove ourselves first // Remove ourselves first
removedFV, haveRemoved, _ := vl.pop(device) removedFV, haveRemoved, _ := vlPop(vl, device)
// Find position and insert the file // Find position and insert the file
err := vl.insert(folder, device, file, t) err := vlInsert(vl, folder, device, file, t)
if err != nil { if err != nil {
return FileVersion{}, FileVersion{}, FileVersion{}, false, false, false, err return nil, nil, nil, false, false, false, err
} }
newFV, _ := vl.GetGlobal() // We just inserted something above, can't be empty newFV, _ := vlGetGlobal(vl) // We just inserted something above, can't be empty
if !haveOldGlobal { if !haveOldGlobal {
return newFV, FileVersion{}, removedFV, false, haveRemoved, true, nil return newFV, nil, removedFV, false, haveRemoved, true, nil
} }
globalChanged := true globalChanged := true
if oldFV.IsInvalid() == newFV.IsInvalid() && oldFV.Version.Equal(newFV.Version) { if fvIsInvalid(oldFV) == fvIsInvalid(newFV) && protocol.VectorFromWire(oldFV.Version).Equal(protocol.VectorFromWire(newFV.Version)) {
globalChanged = false globalChanged = false
} }
return newFV, oldFV, removedFV, true, haveRemoved, globalChanged, nil return newFV, oldFV, removedFV, true, haveRemoved, globalChanged, nil
} }
func (vl *VersionList) insert(folder, device []byte, file protocol.FileIntf, t readOnlyTransaction) error { func vlInsert(vl *dbproto.VersionList, folder, device []byte, file protocol.FileInfo, t readOnlyTransaction) error {
var added bool var added bool
var err error var err error
i := 0 i := 0
for ; i < len(vl.RawVersions); i++ { for ; i < len(vl.Versions); i++ {
// Insert our new version // Insert our new version
added, err = vl.checkInsertAt(i, folder, device, file, t) added, err = vlCheckInsertAt(vl, i, folder, device, file, t)
if err != nil { if err != nil {
return err return err
} }
@ -309,80 +158,76 @@ func (vl *VersionList) insert(folder, device []byte, file protocol.FileIntf, t r
break break
} }
} }
if i == len(vl.RawVersions) { if i == len(vl.Versions) {
// Append to the end // Append to the end
vl.RawVersions = append(vl.RawVersions, newFileVersion(device, file.FileVersion(), file.IsInvalid(), file.IsDeleted())) vl.Versions = append(vl.Versions, newFileVersion(device, file.FileVersion(), file.IsInvalid(), file.IsDeleted()))
} }
return nil return nil
} }
func (vl *VersionList) insertAt(i int, v FileVersion) { func vlInsertAt(vl *dbproto.VersionList, i int, v *dbproto.FileVersion) {
vl.RawVersions = append(vl.RawVersions, FileVersion{}) vl.Versions = append(vl.Versions, &dbproto.FileVersion{})
copy(vl.RawVersions[i+1:], vl.RawVersions[i:]) copy(vl.Versions[i+1:], vl.Versions[i:])
vl.RawVersions[i] = v vl.Versions[i] = v
} }
// pop removes the given device from the VersionList and returns the FileVersion // pop removes the given device from the VersionList and returns the FileVersion
// before removing the device, whether it was found/removed at all and whether // before removing the device, whether it was found/removed at all and whether
// the global changed in the process. // the global changed in the process.
func (vl *VersionList) pop(device []byte) (FileVersion, bool, bool) { func vlPop(vl *dbproto.VersionList, device []byte) (*dbproto.FileVersion, bool, bool) {
invDevice, i, j, ok := vl.findDevice(device) invDevice, i, j, ok := vlFindDevice(vl, device)
if !ok { if !ok {
return FileVersion{}, false, false return nil, false, false
} }
globalPos := vl.findGlobal() globalPos := vlFindGlobal(vl)
if vl.RawVersions[i].deviceCount() == 1 { fv := vl.Versions[i]
fv := vl.RawVersions[i] if fvDeviceCount(fv) == 1 {
vl.popVersionAt(i) vlPopVersionAt(vl, i)
return fv, true, globalPos == i return fv, true, globalPos == i
} }
oldFV := vl.RawVersions[i].copy() oldFV := fvCopy(fv)
if invDevice { if invDevice {
vl.RawVersions[i].InvalidDevices = popDeviceAt(vl.RawVersions[i].InvalidDevices, j) vl.Versions[i].InvalidDevices = popDeviceAt(vl.Versions[i].InvalidDevices, j)
return oldFV, true, false return oldFV, true, false
} }
vl.RawVersions[i].Devices = popDeviceAt(vl.RawVersions[i].Devices, j) vl.Versions[i].Devices = popDeviceAt(vl.Versions[i].Devices, j)
// If the last valid device of the previous global was removed above, // If the last valid device of the previous global was removed above,
// the global changed. // the global changed.
return oldFV, true, len(vl.RawVersions[i].Devices) == 0 && globalPos == i return oldFV, true, len(vl.Versions[i].Devices) == 0 && globalPos == i
} }
// Get returns a FileVersion that contains the given device and whether it has // Get returns a FileVersion that contains the given device and whether it has
// been found at all. // been found at all.
func (vl *VersionList) Get(device []byte) (FileVersion, bool) { func vlGet(vl *dbproto.VersionList, device []byte) (*dbproto.FileVersion, bool) {
_, i, _, ok := vl.findDevice(device) _, i, _, ok := vlFindDevice(vl, device)
if !ok { if !ok {
return FileVersion{}, false return &dbproto.FileVersion{}, false
} }
return vl.RawVersions[i], true return vl.Versions[i], true
} }
// GetGlobal returns the current global FileVersion. The returned FileVersion // GetGlobal returns the current global FileVersion. The returned FileVersion
// may be invalid, if all FileVersions are invalid. Returns false only if // may be invalid, if all FileVersions are invalid. Returns false only if
// VersionList is empty. // VersionList is empty.
func (vl *VersionList) GetGlobal() (FileVersion, bool) { func vlGetGlobal(vl *dbproto.VersionList) (*dbproto.FileVersion, bool) {
i := vl.findGlobal() i := vlFindGlobal(vl)
if i == -1 { if i == -1 {
return FileVersion{}, false return nil, false
} }
return vl.RawVersions[i], true return vl.Versions[i], true
}
func (vl *VersionList) Empty() bool {
return len(vl.RawVersions) == 0
} }
// findGlobal returns the first version that isn't invalid, or if all versions are // findGlobal returns the first version that isn't invalid, or if all versions are
// invalid just the first version (i.e. 0) or -1, if there are no versions at all. // invalid just the first version (i.e. 0) or -1, if there are no versions at all.
func (vl *VersionList) findGlobal() int { func vlFindGlobal(vl *dbproto.VersionList) int {
for i, fv := range vl.RawVersions { for i := range vl.Versions {
if !fv.IsInvalid() { if !fvIsInvalid(vl.Versions[i]) {
return i return i
} }
} }
if len(vl.RawVersions) == 0 { if len(vl.Versions) == 0 {
return -1 return -1
} }
return 0 return 0
@ -391,8 +236,8 @@ func (vl *VersionList) findGlobal() int {
// findDevice returns whether the device is in InvalidVersions or Versions and // findDevice returns whether the device is in InvalidVersions or Versions and
// in InvalidDevices or Devices (true for invalid), the positions in the version // in InvalidDevices or Devices (true for invalid), the positions in the version
// and device slices and whether it has been found at all. // and device slices and whether it has been found at all.
func (vl *VersionList) findDevice(device []byte) (bool, int, int, bool) { func vlFindDevice(vl *dbproto.VersionList, device []byte) (bool, int, int, bool) {
for i, v := range vl.RawVersions { for i, v := range vl.Versions {
if j := deviceIndex(v.Devices, device); j != -1 { if j := deviceIndex(v.Devices, device); j != -1 {
return false, i, j, true return false, i, j, true
} }
@ -403,30 +248,31 @@ func (vl *VersionList) findDevice(device []byte) (bool, int, int, bool) {
return false, -1, -1, false return false, -1, -1, false
} }
func (vl *VersionList) popVersionAt(i int) { func vlPopVersionAt(vl *dbproto.VersionList, i int) {
vl.RawVersions = append(vl.RawVersions[:i], vl.RawVersions[i+1:]...) vl.Versions = append(vl.Versions[:i], vl.Versions[i+1:]...)
} }
// checkInsertAt determines if the given device and associated file should be // checkInsertAt determines if the given device and associated file should be
// inserted into the FileVersion at position i or into a new FileVersion at // inserted into the FileVersion at position i or into a new FileVersion at
// position i. // position i.
func (vl *VersionList) checkInsertAt(i int, folder, device []byte, file protocol.FileIntf, t readOnlyTransaction) (bool, error) { func vlCheckInsertAt(vl *dbproto.VersionList, i int, folder, device []byte, file protocol.FileInfo, t readOnlyTransaction) (bool, error) {
ordering := vl.RawVersions[i].Version.Compare(file.FileVersion()) fv := vl.Versions[i]
ordering := protocol.VectorFromWire(fv.Version).Compare(file.FileVersion())
if ordering == protocol.Equal { if ordering == protocol.Equal {
if !file.IsInvalid() { if !file.IsInvalid() {
vl.RawVersions[i].Devices = append(vl.RawVersions[i].Devices, device) fv.Devices = append(fv.Devices, device)
} else { } else {
vl.RawVersions[i].InvalidDevices = append(vl.RawVersions[i].InvalidDevices, device) fv.InvalidDevices = append(fv.InvalidDevices, device)
} }
return true, nil return true, nil
} }
existingDevice, _ := vl.RawVersions[i].FirstDevice() existingDevice, _ := fvFirstDevice(fv)
insert, err := shouldInsertBefore(ordering, folder, existingDevice, vl.RawVersions[i].IsInvalid(), file, t) insert, err := shouldInsertBefore(ordering, folder, existingDevice, fvIsInvalid(fv), file, t)
if err != nil { if err != nil {
return false, err return false, err
} }
if insert { if insert {
vl.insertAt(i, newFileVersion(device, file.FileVersion(), file.IsInvalid(), file.IsDeleted())) vlInsertAt(vl, i, newFileVersion(device, file.FileVersion(), file.IsInvalid(), file.IsDeleted()))
return true, nil return true, nil
} }
return false, nil return false, nil
@ -435,7 +281,7 @@ func (vl *VersionList) checkInsertAt(i int, folder, device []byte, file protocol
// shouldInsertBefore determines whether the file comes before an existing // shouldInsertBefore determines whether the file comes before an existing
// entry, given the version ordering (existing compared to new one), existing // entry, given the version ordering (existing compared to new one), existing
// device and if the existing version is invalid. // device and if the existing version is invalid.
func shouldInsertBefore(ordering protocol.Ordering, folder, existingDevice []byte, existingInvalid bool, file protocol.FileIntf, t readOnlyTransaction) (bool, error) { func shouldInsertBefore(ordering protocol.Ordering, folder, existingDevice []byte, existingInvalid bool, file protocol.FileInfo, t readOnlyTransaction) (bool, error) {
switch ordering { switch ordering {
case protocol.Lesser: case protocol.Lesser:
// The version at this point in the list is lesser // The version at this point in the list is lesser
@ -461,10 +307,7 @@ func shouldInsertBefore(ordering protocol.Ordering, folder, existingDevice []byt
if !ok { if !ok {
return true, nil return true, nil
} }
if err != nil { if file.WinsConflict(of) {
return false, err
}
if protocol.WinsConflict(file, of) {
return true, nil return true, nil
} }
} }
@ -484,9 +327,9 @@ func popDeviceAt(devices [][]byte, i int) [][]byte {
return append(devices[:i], devices[i+1:]...) return append(devices[:i], devices[i+1:]...)
} }
func newFileVersion(device []byte, version protocol.Vector, invalid, deleted bool) FileVersion { func newFileVersion(device []byte, version protocol.Vector, invalid, deleted bool) *dbproto.FileVersion {
fv := FileVersion{ fv := &dbproto.FileVersion{
Version: version, Version: version.ToWire(),
Deleted: deleted, Deleted: deleted,
} }
if invalid { if invalid {
@ -497,7 +340,7 @@ func newFileVersion(device []byte, version protocol.Vector, invalid, deleted boo
return fv return fv
} }
func (fv FileVersion) FirstDevice() ([]byte, bool) { func fvFirstDevice(fv *dbproto.FileVersion) ([]byte, bool) {
if len(fv.Devices) != 0 { if len(fv.Devices) != 0 {
return fv.Devices[0], true return fv.Devices[0], true
} }
@ -507,18 +350,14 @@ func (fv FileVersion) FirstDevice() ([]byte, bool) {
return nil, false return nil, false
} }
func (fv FileVersion) IsInvalid() bool { func fvIsInvalid(fv *dbproto.FileVersion) bool {
return len(fv.Devices) == 0 return fv == nil || len(fv.Devices) == 0
} }
func (fv FileVersion) deviceCount() int { func fvDeviceCount(fv *dbproto.FileVersion) int {
return len(fv.Devices) + len(fv.InvalidDevices) return len(fv.Devices) + len(fv.InvalidDevices)
} }
func (fv FileVersion) copy() FileVersion { func fvCopy(fv *dbproto.FileVersion) *dbproto.FileVersion {
n := fv return proto.Clone(fv).(*dbproto.FileVersion)
n.Version = fv.Version.Copy()
n.Devices = append([][]byte{}, fv.Devices...)
n.InvalidDevices = append([][]byte{}, fv.InvalidDevices...)
return n
} }
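
The hand-rolled `copy()` above (duplicating the version vector and both device slices) becomes `proto.Clone`, which deep-copies the whole generated message. A tiny sketch of the property being relied on (package `db` context assumed; the function is made up):

```
func exampleFvCopy() {
	orig := &dbproto.FileVersion{Devices: [][]byte{{0x01}}}
	cp := fvCopy(orig)
	cp.Devices = append(cp.Devices, []byte{0x02})
	// orig is untouched: len(orig.Devices) == 1, len(cp.Devices) == 2.
}
```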

File diff suppressed because it is too large

View File

@ -1 +0,0 @@
index.db

View File

@ -1,22 +0,0 @@
{"k":"AAAAAAAAAAABL25vdGdvb2Q=","v":"Cggvbm90Z29vZEoHCgUIARDoB1ACggEiGiAAAQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHw=="}
{"k":"AAAAAAAAAAABYQ==","v":"CgFhSgcKBQgBEOgHUAGCASIaIAABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4f"}
{"k":"AAAAAAAAAAACYg==","v":"CgFiSgcKBQgBEOkHggEiGiAAAQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eH4IBJBABGiABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fIA=="}
{"k":"AAAAAAAAAAACYw==","v":"CgFjOAFKBwoFCAEQ6geCASIaIAABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fggEkEAEaIAECAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gggEkEAIaIAIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyAhggEkEAMaIAMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fICEiggEkEAQaIAQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gISIj"}
{"k":"AAAAAAAAAAACZA==","v":"CgFkSgcKBQgBEOsHggEiGiAAAQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eH4IBJBABGiABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fIIIBJBACGiACAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gIYIBJBADGiADBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyAhIoIBJBAEGiAEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fICEiI4IBJBAFGiAFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gISIjJIIBJBAGGiAGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyAhIiMkJQ=="}
{"k":"AAAAAAAAAAADYw==","v":"CgFjSgcKBQgBEOoHggEiGiAAAQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eH4IBJBABGiABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fIIIBJBACGiACAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gIYIBJBADGiADBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyAhIoIBJBAEGiAEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fICEiI4IBJBAFGiAFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gISIjJIIBJBAGGiAGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyAhIiMkJQ=="}
{"k":"AAAAAAAAAAADZA==","v":"CgFkOAFKBwoFCAEQ6weCASIaIAABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fggEkEAEaIAECAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gggEkEAIaIAIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyAhggEkEAMaIAMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fICEiggEkEAQaIAQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gISIj"}
{"k":"AAAAAAAAAAADaW52YWxpZA==","v":"CgdpbnZhbGlkOAFKBwoFCAEQ7AeCASIaIAABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fggEkEAEaIAECAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gggEkEAIaIAIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHyAhggEkEAMaIAMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fICEiggEkEAQaIAQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gISIj"}
{"k":"AQAAAAAvbm90Z29vZA==","v":"CisKBwoFCAEQ6AcSIP//////////////////////////////////////////"}
{"k":"AQAAAABh","v":"CisKBwoFCAEQ6AcSIP//////////////////////////////////////////"}
{"k":"AQAAAABi","v":"CisKBwoFCAEQ6QcSIAIj5b8/Vx850vCTKUE+HcWcQZUIgmhv//rEL3j3A/At"}
{"k":"AQAAAABj","v":"CisKBwoFCAEQ6gcSIEeUA//e9Ja19eW8nAoVIh5wBzFkUJ+jB2GvYwlPb5RcCi0KBwoFCAEQ6gcSIAIj5b8/Vx850vCTKUE+HcWcQZUIgmhv//rEL3j3A/AtGAE="}
{"k":"AQAAAABk","v":"CisKBwoFCAEQ6wcSIAIj5b8/Vx850vCTKUE+HcWcQZUIgmhv//rEL3j3A/AtCi0KBwoFCAEQ6wcSIEeUA//e9Ja19eW8nAoVIh5wBzFkUJ+jB2GvYwlPb5RcGAE="}
{"k":"AQAAAABpbnZhbGlk","v":"Ci0KBwoFCAEQ7AcSIEeUA//e9Ja19eW8nAoVIh5wBzFkUJ+jB2GvYwlPb5RcGAE="}
{"k":"AgAAAAAAAQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eHy9ub3Rnb29k","v":"AAAAAA=="}
{"k":"AgAAAAAAAQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eH2E=","v":"AAAAAA=="}
{"k":"BgAAAAAAAAAA","v":"VXBkYXRlU2NoZW1hMHRvMw=="}
{"k":"BwAAAAAAAAAA","v":""}
{"k":"BwAAAAEAAAAA","v":"//////////////////////////////////////////8="}
{"k":"BwAAAAIAAAAA","v":"AiPlvz9XHznS8JMpQT4dxZxBlQiCaG//+sQvePcD8C0="}
{"k":"BwAAAAMAAAAA","v":"R5QD/970lrX15bycChUiHnAHMWRQn6MHYa9jCU9vlFw="}
{"k":"CQAAAAA=","v":"CicIAjACigEg//////////////////////////////////////////8KJwgFMAKKASD4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+AolCAKKASACI+W/P1cfOdLwkylBPh3FnEGVCIJob//6xC949wPwLQolCAGKASBHlAP/3vSWtfXlvJwKFSIecAcxZFCfowdhr2MJT2+UXBCyn6iaw4HGlxU="}

View File

@ -1,15 +0,0 @@
{"k":"AAAAAAAAAAABYmFy","v":"CgNiYXJKBwoFCAEQ6QdQAg=="}
{"k":"AAAAAAAAAAABZm9v","v":"CgNmb284AUoHCgUIARDoB1AB"}
{"k":"AAAAAAAAAAACYmF6","v":"CgNiYXo4AUoHCgUIKhDoBw=="}
{"k":"AAAAAAAAAAACcXV1eA==","v":"CgRxdXV4SgcKBQgqEOoH"}
{"k":"AQAAAABiYXI=","v":"CisKBwoFCAEQ6QcSIP//////////////////////////////////////////"}
{"k":"AQAAAABiYXo=","v":"Ci0KBwoFCCoQ6AcSICoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGAE="}
{"k":"AQAAAABmb28=","v":"Ci0KBwoFCAEQ6AcSIP//////////////////////////////////////////GAE="}
{"k":"AQAAAABxdXV4","v":"CisKBwoFCCoQ6gcSICoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"}
{"k":"BgAAAAAAAAAA","v":"dGVzdA=="}
{"k":"BwAAAAAAAAAA","v":""}
{"k":"BwAAAAEAAAAA","v":"//////////////////////////////////////////8="}
{"k":"BwAAAAIAAAAA","v":"KgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="}
{"k":"CQAAAAA=","v":"CicIATACigEg//////////////////////////////////////////8KJwgCMAKKASD4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+AolCAGKASAqAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABCoudnCuM+vlxU="}
{"k":"CwAAAAAAAAAAAAAAAQ==","v":"AAAAAAAAAAABZm9v"}
{"k":"CwAAAAAAAAAAAAAAAg==","v":"AAAAAAAAAAABYmFy"}

View File

@ -1,24 +0,0 @@
{"k":"AAAAAAAAAAABYQ==","v":"CgFhMAFKBwoFCAEQ6AdQAQ=="}
{"k":"AAAAAAAAAAABYg==","v":"CgFiSgcKBQgBEOgHUAKCASIaIAABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fggEkEAEaIAECAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8gkgEgC8XkepY1E4woWwAAyi81YItXr5CMuwY6mfvf2iLupTo="}
{"k":"AAAAAAAAAAABYw==","v":"CgFjMAFKBwoFCAEQ6AdQAw=="}
{"k":"AAAAAAAAAAACYQ==","v":"CgFhMAFKBwoFCAEQ6AdQAQ=="}
{"k":"AAAAAAAAAAACYg==","v":"CgFiMAFKFQoFCAEQ6AcKDAi5vtz687f5kQIQAVACggEiGiAAAQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eH4IBJBABGiABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fIJIBIAvF5HqWNROMKFsAAMovNWCLV6+QjLsGOpn739oi7qU6"}
{"k":"AAAAAAAAAAACYw==","v":"CgFjShUKBQgBEOgHCgwIub7c+vO3+ZECEAFQA4IBIhogAAECAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh+SASBjDc0pZsQzZpESVEi7sltP9BKknHMtssirwbhYG9cQ3Q=="}
{"k":"AQAAAABh","v":"CisKBwoFCAEQ6AcSIAIj5b8/Vx850vCTKUE+HcWcQZUIgmhv//rEL3j3A/AtCisKBwoFCAEQ6AcSIP//////////////////////////////////////////"}
{"k":"AQAAAABi","v":"CjkKFQoFCAEQ6AcKDAi5vtz687f5kQIQARIgAiPlvz9XHznS8JMpQT4dxZxBlQiCaG//+sQvePcD8C0KKwoHCgUIARDoBxIg//////////////////////////////////////////8="}
{"k":"AQAAAABj","v":"CjkKFQoFCAEQ6AcKDAi5vtz687f5kQIQARIgAiPlvz9XHznS8JMpQT4dxZxBlQiCaG//+sQvePcD8C0KKwoHCgUIARDoBxIg//////////////////////////////////////////8="}
{"k":"AgAAAAAAAQIDBAUGBwgJCgsMDQ4PEBESExQVFhcYGRobHB0eH2I=","v":"AAAAAA=="}
{"k":"AgAAAAABAgMEBQYHCAkKCwwNDg8QERITFBUWFxgZGhscHR4fIGI=","v":"AAAAAQ=="}
{"k":"BgAAAAAAAAAA","v":"dGVzdA=="}
{"k":"BwAAAAAAAAAA","v":""}
{"k":"BwAAAAEAAAAA","v":"//////////////////////////////////////////8="}
{"k":"BwAAAAIAAAAA","v":"AiPlvz9XHznS8JMpQT4dxZxBlQiCaG//+sQvePcD8C0="}
{"k":"CQAAAAA=","v":"CikIASACMAOKASD//////////////////////////////////////////wonCAEgAooBIPj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4+Pj4CikIASACMAOKASACI+W/P1cfOdLwkylBPh3FnEGVCIJob//6xC949wPwLRDE7Jrik+mdhhY="}
{"k":"CmRiTWluU3luY3RoaW5nVmVyc2lvbg==","v":"djEuNC4w"}
{"k":"CmRiVmVyc2lvbg==","v":"AAAAAAAAAAk="}
{"k":"Cmxhc3RJbmRpcmVjdEdDVGltZQ==","v":"AAAAAF6yy/Q="}
{"k":"CwAAAAAAAAAAAAAAAQ==","v":"AAAAAAAAAAABYQ=="}
{"k":"CwAAAAAAAAAAAAAAAg==","v":"AAAAAAAAAAABYg=="}
{"k":"CwAAAAAAAAAAAAAAAw==","v":"AAAAAAAAAAABYw=="}
{"k":"DAAAAABi","v":""}
{"k":"DAAAAABj","v":""}

View File

@ -11,10 +11,15 @@ import (
"errors" "errors"
"fmt" "fmt"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/bep"
"github.com/syncthing/syncthing/internal/gen/dbproto"
"github.com/syncthing/syncthing/lib/db/backend" "github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/events" "github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/osutil" "github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sliceutil"
) )
var ( var (
@ -63,48 +68,49 @@ func (t readOnlyTransaction) getFileByKey(key []byte) (protocol.FileInfo, bool,
if err != nil || !ok { if err != nil || !ok {
return protocol.FileInfo{}, false, err return protocol.FileInfo{}, false, err
} }
return f.(protocol.FileInfo), true, nil return f, true, nil
} }
func (t readOnlyTransaction) getFileTrunc(key []byte, trunc bool) (protocol.FileIntf, bool, error) { func (t readOnlyTransaction) getFileTrunc(key []byte, trunc bool) (protocol.FileInfo, bool, error) {
bs, err := t.Get(key) bs, err := t.Get(key)
if backend.IsNotFound(err) { if backend.IsNotFound(err) {
return nil, false, nil return protocol.FileInfo{}, false, nil
} }
if err != nil { if err != nil {
return nil, false, err return protocol.FileInfo{}, false, err
} }
f, err := t.unmarshalTrunc(bs, trunc) f, err := t.unmarshalTrunc(bs, trunc)
if backend.IsNotFound(err) { if backend.IsNotFound(err) {
return nil, false, nil return protocol.FileInfo{}, false, nil
} }
if err != nil { if err != nil {
return nil, false, err return protocol.FileInfo{}, false, err
} }
return f, true, nil return f, true, nil
} }
func (t readOnlyTransaction) unmarshalTrunc(bs []byte, trunc bool) (protocol.FileIntf, error) { func (t readOnlyTransaction) unmarshalTrunc(bs []byte, trunc bool) (protocol.FileInfo, error) {
if trunc { if trunc {
var tf FileInfoTruncated var bfi dbproto.FileInfoTruncated
err := tf.Unmarshal(bs) err := proto.Unmarshal(bs, &bfi)
if err != nil { if err != nil {
return nil, err return protocol.FileInfo{}, err
} }
if err := t.fillTruncated(&tf); err != nil { if err := t.fillTruncated(&bfi); err != nil {
return nil, err return protocol.FileInfo{}, err
} }
return tf, nil return protocol.FileInfoFromDBTruncated(&bfi), nil
} }
var fi protocol.FileInfo var bfi bep.FileInfo
if err := fi.Unmarshal(bs); err != nil { err := proto.Unmarshal(bs, &bfi)
return nil, err if err != nil {
return protocol.FileInfo{}, err
} }
if err := t.fillFileInfo(&fi); err != nil { if err := t.fillFileInfo(&bfi); err != nil {
return nil, err return protocol.FileInfo{}, err
} }
return fi, nil return protocol.FileInfoFromDB(&bfi), nil
} }
type blocksIndirectionError struct { type blocksIndirectionError struct {
@ -121,7 +127,7 @@ func (e *blocksIndirectionError) Unwrap() error {
// fillFileInfo follows the (possible) indirection of blocks and version // fillFileInfo follows the (possible) indirection of blocks and version
// vector and fills it out. // vector and fills it out.
func (t readOnlyTransaction) fillFileInfo(fi *protocol.FileInfo) error { func (t readOnlyTransaction) fillFileInfo(fi *bep.FileInfo) error {
var key []byte var key []byte
if len(fi.Blocks) == 0 && len(fi.BlocksHash) != 0 { if len(fi.Blocks) == 0 && len(fi.BlocksHash) != 0 {
@ -131,8 +137,8 @@ func (t readOnlyTransaction) fillFileInfo(fi *protocol.FileInfo) error {
if err != nil { if err != nil {
return &blocksIndirectionError{err} return &blocksIndirectionError{err}
} }
var bl BlockList var bl dbproto.BlockList
if err := bl.Unmarshal(bs); err != nil { if err := proto.Unmarshal(bs, &bl); err != nil {
return err return err
} }
fi.Blocks = bl.Blocks fi.Blocks = bl.Blocks
@ -144,11 +150,11 @@ func (t readOnlyTransaction) fillFileInfo(fi *protocol.FileInfo) error {
if err != nil { if err != nil {
return fmt.Errorf("filling Version: %w", err) return fmt.Errorf("filling Version: %w", err)
} }
var v protocol.Vector var v bep.Vector
if err := v.Unmarshal(bs); err != nil { if err := proto.Unmarshal(bs, &v); err != nil {
return err return err
} }
fi.Version = v fi.Version = &v
} }
return nil return nil
@ -156,7 +162,7 @@ func (t readOnlyTransaction) fillFileInfo(fi *protocol.FileInfo) error {
// fillTruncated follows the (possible) indirection of version vector and // fillTruncated follows the (possible) indirection of version vector and
// fills it. // fills it.
func (t readOnlyTransaction) fillTruncated(fi *FileInfoTruncated) error { func (t readOnlyTransaction) fillTruncated(fi *dbproto.FileInfoTruncated) error {
var key []byte var key []byte
if len(fi.VersionHash) == 0 { if len(fi.VersionHash) == 0 {
@ -168,73 +174,72 @@ func (t readOnlyTransaction) fillTruncated(fi *FileInfoTruncated) error {
if err != nil { if err != nil {
return err return err
} }
var v protocol.Vector var v bep.Vector
if err := v.Unmarshal(bs); err != nil { if err := proto.Unmarshal(bs, &v); err != nil {
return err return err
} }
fi.Version = v fi.Version = &v
return nil return nil
} }
func (t readOnlyTransaction) getGlobalVersions(keyBuf, folder, file []byte) (VersionList, error) { func (t readOnlyTransaction) getGlobalVersions(keyBuf, folder, file []byte) (*dbproto.VersionList, error) {
var err error var err error
keyBuf, err = t.keyer.GenerateGlobalVersionKey(keyBuf, folder, file) keyBuf, err = t.keyer.GenerateGlobalVersionKey(keyBuf, folder, file)
if err != nil { if err != nil {
return VersionList{}, err return nil, err
} }
return t.getGlobalVersionsByKey(keyBuf) return t.getGlobalVersionsByKey(keyBuf)
} }
func (t readOnlyTransaction) getGlobalVersionsByKey(key []byte) (VersionList, error) { func (t readOnlyTransaction) getGlobalVersionsByKey(key []byte) (*dbproto.VersionList, error) {
bs, err := t.Get(key) bs, err := t.Get(key)
if err != nil { if err != nil {
return VersionList{}, err return nil, err
} }
var vl VersionList var vl dbproto.VersionList
if err := vl.Unmarshal(bs); err != nil { if err := proto.Unmarshal(bs, &vl); err != nil {
return VersionList{}, err return nil, err
} }
return vl, nil return &vl, nil
} }
func (t readOnlyTransaction) getGlobal(keyBuf, folder, file []byte, truncate bool) ([]byte, protocol.FileIntf, bool, error) { func (t readOnlyTransaction) getGlobal(keyBuf, folder, file []byte, truncate bool) ([]byte, protocol.FileInfo, bool, error) {
vl, err := t.getGlobalVersions(keyBuf, folder, file) vl, err := t.getGlobalVersions(keyBuf, folder, file)
if backend.IsNotFound(err) { if backend.IsNotFound(err) {
return keyBuf, nil, false, nil return keyBuf, protocol.FileInfo{}, false, nil
} else if err != nil { } else if err != nil {
return nil, nil, false, err return nil, protocol.FileInfo{}, false, err
} }
var fi protocol.FileIntf keyBuf, fi, err := t.getGlobalFromVersionList(keyBuf, folder, file, truncate, vl)
keyBuf, fi, err = t.getGlobalFromVersionList(keyBuf, folder, file, truncate, vl)
return keyBuf, fi, true, err return keyBuf, fi, true, err
} }
func (t readOnlyTransaction) getGlobalFromVersionList(keyBuf, folder, file []byte, truncate bool, vl VersionList) ([]byte, protocol.FileIntf, error) { func (t readOnlyTransaction) getGlobalFromVersionList(keyBuf, folder, file []byte, truncate bool, vl *dbproto.VersionList) ([]byte, protocol.FileInfo, error) {
fv, ok := vl.GetGlobal() fv, ok := vlGetGlobal(vl)
if !ok { if !ok {
return keyBuf, nil, errEmptyGlobal return keyBuf, protocol.FileInfo{}, errEmptyGlobal
} }
keyBuf, fi, err := t.getGlobalFromFileVersion(keyBuf, folder, file, truncate, fv) keyBuf, fi, err := t.getGlobalFromFileVersion(keyBuf, folder, file, truncate, fv)
return keyBuf, fi, err return keyBuf, fi, err
} }
func (t readOnlyTransaction) getGlobalFromFileVersion(keyBuf, folder, file []byte, truncate bool, fv FileVersion) ([]byte, protocol.FileIntf, error) { func (t readOnlyTransaction) getGlobalFromFileVersion(keyBuf, folder, file []byte, truncate bool, fv *dbproto.FileVersion) ([]byte, protocol.FileInfo, error) {
dev, ok := fv.FirstDevice() dev, ok := fvFirstDevice(fv)
if !ok { if !ok {
return keyBuf, nil, errEmptyFileVersion return keyBuf, protocol.FileInfo{}, errEmptyFileVersion
} }
keyBuf, err := t.keyer.GenerateDeviceFileKey(keyBuf, folder, dev, file) keyBuf, err := t.keyer.GenerateDeviceFileKey(keyBuf, folder, dev, file)
if err != nil { if err != nil {
return keyBuf, nil, err return keyBuf, protocol.FileInfo{}, err
} }
fi, ok, err := t.getFileTrunc(keyBuf, truncate) fi, ok, err := t.getFileTrunc(keyBuf, truncate)
if err != nil { if err != nil {
return keyBuf, nil, err return keyBuf, protocol.FileInfo{}, err
} }
if !ok { if !ok {
return keyBuf, nil, errEntryFromGlobalMissing return keyBuf, protocol.FileInfo{}, errEntryFromGlobalMissing
} }
return keyBuf, fi, nil return keyBuf, fi, nil
} }
@ -357,13 +362,13 @@ func (t *readOnlyTransaction) withGlobal(folder, prefix []byte, truncate bool, f
return nil return nil
} }
var vl VersionList var vl dbproto.VersionList
if err := vl.Unmarshal(dbi.Value()); err != nil { if err := proto.Unmarshal(dbi.Value(), &vl); err != nil {
return err return err
} }
var f protocol.FileIntf var f protocol.FileInfo
dk, f, err = t.getGlobalFromVersionList(dk, folder, name, truncate, vl) dk, f, err = t.getGlobalFromVersionList(dk, folder, name, truncate, &vl)
if err != nil { if err != nil {
return err return err
} }
@ -432,7 +437,7 @@ func (t *readOnlyTransaction) availability(folder, file []byte) ([]protocol.Devi
return nil, err return nil, err
} }
fv, ok := vl.GetGlobal() fv, ok := vlGetGlobal(vl)
if !ok { if !ok {
return nil, nil return nil, nil
} }
@ -472,32 +477,32 @@ func (t *readOnlyTransaction) withNeedIteratingGlobal(folder, device []byte, tru
return err return err
} }
for dbi.Next() { for dbi.Next() {
var vl VersionList var vl dbproto.VersionList
if err := vl.Unmarshal(dbi.Value()); err != nil { if err := proto.Unmarshal(dbi.Value(), &vl); err != nil {
return err return err
} }
globalFV, ok := vl.GetGlobal() globalFV, ok := vlGetGlobal(&vl)
if !ok { if !ok {
return errEmptyGlobal return errEmptyGlobal
} }
haveFV, have := vl.Get(device) haveFV, have := vlGet(&vl, device)
if !Need(globalFV, have, haveFV.Version) { if !Need(globalFV, have, protocol.VectorFromWire(haveFV.Version)) {
continue continue
} }
name := t.keyer.NameFromGlobalVersionKey(dbi.Key()) name := t.keyer.NameFromGlobalVersionKey(dbi.Key())
var gf protocol.FileIntf var gf protocol.FileInfo
dk, gf, err = t.getGlobalFromFileVersion(dk, folder, name, truncate, globalFV) dk, gf, err = t.getGlobalFromFileVersion(dk, folder, name, truncate, globalFV)
if err != nil { if err != nil {
return err return err
} }
if shouldDebug() { if shouldDebug() {
if globalDev, ok := globalFV.FirstDevice(); ok { if globalDev, ok := fvFirstDevice(globalFV); ok {
globalID, _ := protocol.DeviceIDFromBytes(globalDev) globalID, _ := protocol.DeviceIDFromBytes(globalDev)
l.Debugf("need folder=%q device=%v name=%q have=%v invalid=%v haveV=%v haveDeleted=%v globalV=%v globalDeleted=%v globalDev=%v", folder, devID, name, have, haveFV.IsInvalid(), haveFV.Version, haveFV.Deleted, gf.FileVersion(), globalFV.Deleted, globalID) l.Debugf("need folder=%q device=%v name=%q have=%v invalid=%v haveV=%v haveDeleted=%v globalV=%v globalDeleted=%v globalDev=%v", folder, devID, name, have, fvIsInvalid(haveFV), haveFV.Version, haveFV.Deleted, gf.FileVersion(), globalFV.Deleted, globalID)
} }
} }
if !fn(gf) { if !fn(gf) {
@ -519,7 +524,7 @@ func (t *readOnlyTransaction) withNeedLocal(folder []byte, truncate bool, fn Ite
defer dbi.Release() defer dbi.Release()
var keyBuf []byte var keyBuf []byte
var f protocol.FileIntf var f protocol.FileInfo
var ok bool var ok bool
for dbi.Next() { for dbi.Next() {
keyBuf, f, ok, err = t.getGlobal(keyBuf, folder, t.keyer.NameFromGlobalVersionKey(dbi.Key()), truncate) keyBuf, f, ok, err = t.getGlobal(keyBuf, folder, t.keyer.NameFromGlobalVersionKey(dbi.Key()), truncate)
@ -589,7 +594,8 @@ func (t readWriteTransaction) putFile(fkey []byte, fi protocol.FileInfo) error {
bkey = t.keyer.GenerateBlockListKey(bkey, fi.BlocksHash) bkey = t.keyer.GenerateBlockListKey(bkey, fi.BlocksHash)
if _, err := t.Get(bkey); backend.IsNotFound(err) { if _, err := t.Get(bkey); backend.IsNotFound(err) {
// Marshal the block list and save it // Marshal the block list and save it
blocksBs := mustMarshal(&BlockList{Blocks: fi.Blocks}) blocks := sliceutil.Map(fi.Blocks, protocol.BlockInfo.ToWire)
blocksBs := mustMarshal(&dbproto.BlockList{Blocks: blocks})
if err := t.Put(bkey, blocksBs); err != nil { if err := t.Put(bkey, blocksBs); err != nil {
return err return err
} }
@ -605,7 +611,7 @@ func (t readWriteTransaction) putFile(fkey []byte, fi protocol.FileInfo) error {
bkey = t.keyer.GenerateVersionKey(bkey, fi.VersionHash) bkey = t.keyer.GenerateVersionKey(bkey, fi.VersionHash)
if _, err := t.Get(bkey); backend.IsNotFound(err) { if _, err := t.Get(bkey); backend.IsNotFound(err) {
// Marshal the version vector and save it // Marshal the version vector and save it
versionBs := mustMarshal(&fi.Version) versionBs := mustMarshal(fi.Version.ToWire())
if err := t.Put(bkey, versionBs); err != nil { if err := t.Put(bkey, versionBs); err != nil {
return err return err
} }
@ -619,7 +625,7 @@ func (t readWriteTransaction) putFile(fkey []byte, fi protocol.FileInfo) error {
t.indirectionTracker.recordIndirectionHashesForFile(&fi) t.indirectionTracker.recordIndirectionHashesForFile(&fi)
fiBs := mustMarshal(&fi) fiBs := mustMarshal(fi.ToWire(true))
return t.Put(fkey, fiBs) return t.Put(fkey, fiBs)
} }
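The block-list indirection keeps working the same way, but the conversion to wire messages is now explicit. A minimal sketch using the names from the hunk above (the `blockListBytes` helper itself is illustrative, not part of the commit):

```
// blockListBytes converts the internal block slice to its generated wire
// form and marshals the containing BlockList, as putFile does before
// storing it under the block-list key.
func blockListBytes(fi protocol.FileInfo) []byte {
	blocks := sliceutil.Map(fi.Blocks, protocol.BlockInfo.ToWire)
	return mustMarshal(&dbproto.BlockList{Blocks: blocks})
}
```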
@ -638,8 +644,11 @@ func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, fi
if err != nil && !backend.IsNotFound(err) { if err != nil && !backend.IsNotFound(err) {
return nil, err return nil, err
} }
if fl == nil {
fl = &dbproto.VersionList{}
}
globalFV, oldGlobalFV, removedFV, haveOldGlobal, haveRemoved, globalChanged, err := fl.update(folder, device, file, t.readOnlyTransaction) globalFV, oldGlobalFV, removedFV, haveOldGlobal, haveRemoved, globalChanged, err := vlUpdate(fl, folder, device, file, t.readOnlyTransaction)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -647,20 +656,20 @@ func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, fi
name := []byte(file.Name) name := []byte(file.Name)
l.Debugf(`new global for "%v" after update: %v`, file.Name, fl) l.Debugf(`new global for "%v" after update: %v`, file.Name, fl)
if err := t.Put(gk, mustMarshal(&fl)); err != nil { if err := t.Put(gk, mustMarshal(fl)); err != nil {
return nil, err return nil, err
} }
// Only load those from db if actually needed // Only load those from db if actually needed
var gotGlobal, gotOldGlobal bool var gotGlobal, gotOldGlobal bool
var global, oldGlobal protocol.FileIntf var global, oldGlobal protocol.FileInfo
// Check the need of the device that was updated // Check the need of the device that was updated
// Must happen before updating global meta: If this is the first // Must happen before updating global meta: If this is the first
// item from this device, it will be initialized with the global state. // item from this device, it will be initialized with the global state.
needBefore := haveOldGlobal && Need(oldGlobalFV, haveRemoved, removedFV.Version) needBefore := haveOldGlobal && Need(oldGlobalFV, haveRemoved, protocol.VectorFromWire(removedFV.GetVersion()))
needNow := Need(globalFV, true, file.Version) needNow := Need(globalFV, true, file.Version)
if needBefore { if needBefore {
if keyBuf, oldGlobal, err = t.getGlobalFromFileVersion(keyBuf, folder, name, true, oldGlobalFV); err != nil { if keyBuf, oldGlobal, err = t.getGlobalFromFileVersion(keyBuf, folder, name, true, oldGlobalFV); err != nil {
@ -709,7 +718,7 @@ func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, fi
// Add the new global to the global size counter // Add the new global to the global size counter
if !gotGlobal { if !gotGlobal {
if globalFV.Version.Equal(file.Version) { if protocol.VectorFromWire(globalFV.Version).Equal(file.Version) {
// The inserted file is the global file // The inserted file is the global file
global = file global = file
} else { } else {
@ -718,15 +727,15 @@ func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, fi
return nil, err return nil, err
} }
} }
gotGlobal = true
} }
meta.addFile(protocol.GlobalDeviceID, global) meta.addFile(protocol.GlobalDeviceID, global)
// check for local (if not already done before) // check for local (if not already done before)
if !bytes.Equal(device, protocol.LocalDeviceID[:]) { if !bytes.Equal(device, protocol.LocalDeviceID[:]) {
localFV, haveLocal := fl.Get(protocol.LocalDeviceID[:]) localFV, haveLocal := vlGet(fl, protocol.LocalDeviceID[:])
needBefore := haveOldGlobal && Need(oldGlobalFV, haveLocal, localFV.Version) localVersion := protocol.VectorFromWire(localFV.Version)
needNow := Need(globalFV, haveLocal, localFV.Version) needBefore := haveOldGlobal && Need(oldGlobalFV, haveLocal, localVersion)
needNow := Need(globalFV, haveLocal, localVersion)
if needBefore { if needBefore {
meta.removeNeeded(protocol.LocalDeviceID, oldGlobal) meta.removeNeeded(protocol.LocalDeviceID, oldGlobal)
if !needNow { if !needNow {
@ -750,11 +759,12 @@ func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, fi
// Already handled above // Already handled above
continue continue
} }
fv, have := fl.Get(dev[:]) fv, have := vlGet(fl, dev[:])
if haveOldGlobal && Need(oldGlobalFV, have, fv.Version) { fvVersion := protocol.VectorFromWire(fv.Version)
if haveOldGlobal && Need(oldGlobalFV, have, fvVersion) {
meta.removeNeeded(dev, oldGlobal) meta.removeNeeded(dev, oldGlobal)
} }
if Need(globalFV, have, fv.Version) { if Need(globalFV, have, fvVersion) {
meta.addNeeded(dev, global) meta.addNeeded(dev, global)
} }
} }
@ -778,11 +788,12 @@ func (t readWriteTransaction) updateLocalNeed(keyBuf, folder, name []byte, add b
return keyBuf, err return keyBuf, err
} }
func Need(global FileVersion, haveLocal bool, localVersion protocol.Vector) bool { func Need(global *dbproto.FileVersion, haveLocal bool, localVersion protocol.Vector) bool {
// We never need an invalid file or a file without a valid version (just // We never need an invalid file or a file without a valid version (just
// another way of expressing "invalid", really, until we fix that // another way of expressing "invalid", really, until we fix that
// part...). // part...).
if global.IsInvalid() || global.Version.IsEmpty() { globalVersion := protocol.VectorFromWire(global.Version)
if fvIsInvalid(global) || globalVersion.IsEmpty() {
return false return false
} }
// We don't need a deleted file if we don't have it. // We don't need a deleted file if we don't have it.
@ -790,7 +801,7 @@ func Need(global FileVersion, haveLocal bool, localVersion protocol.Vector) bool
return false return false
} }
// We don't need the global file if we already have the same version. // We don't need the global file if we already have the same version.
if haveLocal && localVersion.GreaterEqual(global.Version) { if haveLocal && localVersion.GreaterEqual(globalVersion) {
return false return false
} }
return true return true
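A recurring pattern in these hunks: the generated `dbproto.FileVersion` carries a raw wire vector, so callers convert it with `protocol.VectorFromWire` before handing it to `Need`. A hedged sketch of a typical call site (the `deviceNeeds` helper is hypothetical; `vlGet` and `Need` are the functions shown above):

```
// deviceNeeds reports whether the given device still needs the current
// global version, converting the stored wire vector exactly once.
func deviceNeeds(fl *dbproto.VersionList, dev protocol.DeviceID, globalFV *dbproto.FileVersion) bool {
	fv, have := vlGet(fl, dev[:])
	return Need(globalFV, have, protocol.VectorFromWire(fv.GetVersion()))
}
```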
@ -816,8 +827,8 @@ func (t readWriteTransaction) removeFromGlobal(gk, keyBuf, folder, device, file
return nil, err return nil, err
} }
oldGlobalFV, haveOldGlobal := fl.GetGlobal() oldGlobalFV, haveOldGlobal := vlGetGlobal(fl)
oldGlobalFV = oldGlobalFV.copy() oldGlobalFV = fvCopy(oldGlobalFV)
if !haveOldGlobal { if !haveOldGlobal {
// Shouldn't ever happen, but doesn't hurt to handle. // Shouldn't ever happen, but doesn't hurt to handle.
@ -825,18 +836,18 @@ func (t readWriteTransaction) removeFromGlobal(gk, keyBuf, folder, device, file
return keyBuf, t.Delete(gk) return keyBuf, t.Delete(gk)
} }
removedFV, haveRemoved, globalChanged := fl.pop(device) removedFV, haveRemoved, globalChanged := vlPop(fl, device)
if !haveRemoved { if !haveRemoved {
// There is no version for the given device // There is no version for the given device
return keyBuf, nil return keyBuf, nil
} }
var global protocol.FileIntf var global protocol.FileInfo
var gotGlobal bool var gotGlobal bool
globalFV, haveGlobal := fl.GetGlobal() globalFV, haveGlobal := vlGetGlobal(fl)
// Add potential needs of the removed device // Add potential needs of the removed device
if haveGlobal && !globalFV.IsInvalid() && Need(globalFV, false, protocol.Vector{}) && !Need(oldGlobalFV, haveRemoved, removedFV.Version) { if haveGlobal && !fvIsInvalid(globalFV) && Need(globalFV, false, protocol.Vector{}) && !Need(oldGlobalFV, haveRemoved, protocol.VectorFromWire(removedFV.Version)) {
keyBuf, global, err = t.getGlobalFromVersionList(keyBuf, folder, file, true, fl) keyBuf, global, err = t.getGlobalFromVersionList(keyBuf, folder, file, true, fl)
if err != nil { if err != nil {
return nil, err return nil, err
@ -853,13 +864,13 @@ func (t readWriteTransaction) removeFromGlobal(gk, keyBuf, folder, device, file
// Global hasn't changed, abort early // Global hasn't changed, abort early
if !globalChanged { if !globalChanged {
l.Debugf("new global after remove: %v", fl) l.Debugf("new global after remove: %v", fl)
if err := t.Put(gk, mustMarshal(&fl)); err != nil { if err := t.Put(gk, mustMarshal(fl)); err != nil {
return nil, err return nil, err
} }
return keyBuf, nil return keyBuf, nil
} }
var oldGlobal protocol.FileIntf var oldGlobal protocol.FileInfo
keyBuf, oldGlobal, err = t.getGlobalFromFileVersion(keyBuf, folder, file, true, oldGlobalFV) keyBuf, oldGlobal, err = t.getGlobalFromFileVersion(keyBuf, folder, file, true, oldGlobalFV)
if err != nil { if err != nil {
return nil, err return nil, err
@ -868,11 +879,12 @@ func (t readWriteTransaction) removeFromGlobal(gk, keyBuf, folder, device, file
// Remove potential device needs // Remove potential device needs
shouldRemoveNeed := func(dev protocol.DeviceID) bool { shouldRemoveNeed := func(dev protocol.DeviceID) bool {
fv, have := fl.Get(dev[:]) fv, have := vlGet(fl, dev[:])
if !Need(oldGlobalFV, have, fv.Version) { fvVersion := protocol.VectorFromWire(fv.Version)
if !Need(oldGlobalFV, have, fvVersion) {
return false // Didn't need it before return false // Didn't need it before
} }
return !haveGlobal || !Need(globalFV, have, fv.Version) return !haveGlobal || !Need(globalFV, have, fvVersion)
} }
if shouldRemoveNeed(protocol.LocalDeviceID) { if shouldRemoveNeed(protocol.LocalDeviceID) {
meta.removeNeeded(protocol.LocalDeviceID, oldGlobal) meta.removeNeeded(protocol.LocalDeviceID, oldGlobal)
@ -890,7 +902,7 @@ func (t readWriteTransaction) removeFromGlobal(gk, keyBuf, folder, device, file
} }
// Nothing left, i.e. nothing to add to the global counter below. // Nothing left, i.e. nothing to add to the global counter below.
if fl.Empty() { if len(fl.Versions) == 0 {
if err := t.Delete(gk); err != nil { if err := t.Delete(gk); err != nil {
return nil, err return nil, err
} }
@ -907,7 +919,7 @@ func (t readWriteTransaction) removeFromGlobal(gk, keyBuf, folder, device, file
meta.addFile(protocol.GlobalDeviceID, global) meta.addFile(protocol.GlobalDeviceID, global)
l.Debugf(`new global for "%s" after remove: %v`, file, fl) l.Debugf(`new global for "%s" after remove: %v`, file, fl)
if err := t.Put(gk, mustMarshal(&fl)); err != nil { if err := t.Put(gk, mustMarshal(fl)); err != nil {
return nil, err return nil, err
} }
@ -935,7 +947,7 @@ func (t readWriteTransaction) deleteKeyPrefixMatching(prefix []byte, match func(
return dbi.Error() return dbi.Error()
} }
func (t *readWriteTransaction) withAllFolderTruncated(folder []byte, fn func(device []byte, f FileInfoTruncated) bool) error { func (t *readWriteTransaction) withAllFolderTruncated(folder []byte, fn func(device []byte, f protocol.FileInfo) bool) error {
key, err := t.keyer.GenerateDeviceFileKey(nil, folder, nil, nil) key, err := t.keyer.GenerateDeviceFileKey(nil, folder, nil, nil)
if err != nil { if err != nil {
return err return err
@ -957,11 +969,10 @@ func (t *readWriteTransaction) withAllFolderTruncated(folder []byte, fn func(dev
continue continue
} }
intf, err := t.unmarshalTrunc(dbi.Value(), true) f, err := t.unmarshalTrunc(dbi.Value(), true)
if err != nil { if err != nil {
return err return err
} }
f := intf.(FileInfoTruncated)
switch f.Name { switch f.Name {
case "", ".", "..", "/": // A few obviously invalid filenames case "", ".", "..", "/": // A few obviously invalid filenames
@ -988,12 +999,8 @@ func (t *readWriteTransaction) withAllFolderTruncated(folder []byte, fn func(dev
return dbi.Error() return dbi.Error()
} }
type marshaller interface { func mustMarshal(f proto.Message) []byte {
Marshal() ([]byte, error) bs, err := proto.Marshal(f)
}
func mustMarshal(f marshaller) []byte {
bs, err := f.Marshal()
if err != nil { if err != nil {
panic(err) panic(err)
} }
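With the per-type `Marshal()` methods gone, `mustMarshal` accepts any `proto.Message`, and reading back goes through `proto.Unmarshal` as in the transaction code above. A small round-trip sketch using `dbproto.VersionList` (the `roundTrip` helper is illustrative only):

```
// roundTrip marshals a generated message with the new helper and decodes
// it again, mirroring how version lists are stored and loaded.
func roundTrip(vl *dbproto.VersionList) (*dbproto.VersionList, error) {
	bs := mustMarshal(vl) // wraps proto.Marshal and panics on error
	var out dbproto.VersionList
	if err := proto.Unmarshal(bs, &out); err != nil {
		return nil, err
	}
	return &out, nil
}
```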


@ -6,7 +6,10 @@
package db package db
import "github.com/syncthing/syncthing/lib/protocol" import (
"github.com/syncthing/syncthing/lib/protocol"
"google.golang.org/protobuf/proto"
)
// How many files to send in each Index/IndexUpdate message. // How many files to send in each Index/IndexUpdate message.
const ( const (
@ -44,7 +47,7 @@ func (b *FileInfoBatch) Append(f protocol.FileInfo) {
b.infos = make([]protocol.FileInfo, 0, MaxBatchSizeFiles) b.infos = make([]protocol.FileInfo, 0, MaxBatchSizeFiles)
} }
b.infos = append(b.infos, f) b.infos = append(b.infos, f)
b.size += f.ProtoSize() b.size += proto.Size(f.ToWire(true))
} }
func (b *FileInfoBatch) Full() bool { func (b *FileInfoBatch) Full() bool {
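Batch size accounting also goes through the wire form, since the internal `protocol.FileInfo` no longer has a `ProtoSize` method. A one-line sketch of the idea (the helper name is made up):

```
// wireSize returns the encoded protobuf size of a file including its
// blocks, which is what Append adds to the running batch size above.
func wireSize(f protocol.FileInfo) int {
	return proto.Size(f.ToWire(true))
}
```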


@ -10,9 +10,8 @@ import (
stdsync "sync" stdsync "sync"
"time" "time"
"github.com/thejerf/suture/v4"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
"github.com/thejerf/suture/v4"
) )
// A cachedFinder is a Finder with associated cache timeouts. // A cachedFinder is a Finder with associated cache timeouts.


@ -21,11 +21,12 @@ import (
stdsync "sync" stdsync "sync"
"time" "time"
"golang.org/x/net/http2"
"github.com/syncthing/syncthing/lib/connections/registry" "github.com/syncthing/syncthing/lib/connections/registry"
"github.com/syncthing/syncthing/lib/dialer" "github.com/syncthing/syncthing/lib/dialer"
"github.com/syncthing/syncthing/lib/events" "github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
"golang.org/x/net/http2"
) )
type globalClient struct { type globalClient struct {


@ -7,6 +7,7 @@
package discover package discover
import ( import (
"bytes"
"context" "context"
"encoding/binary" "encoding/binary"
"encoding/hex" "encoding/hex"
@ -17,12 +18,15 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/thejerf/suture/v4"
"google.golang.org/protobuf/proto"
"github.com/syncthing/syncthing/internal/gen/discoproto"
"github.com/syncthing/syncthing/lib/beacon" "github.com/syncthing/syncthing/lib/beacon"
"github.com/syncthing/syncthing/lib/events" "github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand" "github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/svcutil" "github.com/syncthing/syncthing/lib/svcutil"
"github.com/thejerf/suture/v4"
) )
type localClient struct { type localClient struct {
@ -121,12 +125,12 @@ func (c *localClient) announcementPkt(instanceID int64, msg []byte) ([]byte, boo
return msg, false return msg, false
} }
pkt := Announce{ pkt := &discoproto.Announce{
ID: c.myID, Id: c.myID[:],
Addresses: addrs, Addresses: addrs,
InstanceID: instanceID, InstanceId: instanceID,
} }
bs, _ := pkt.Marshal() bs, _ := proto.Marshal(pkt)
if pktLen := 4 + len(bs); cap(msg) < pktLen { if pktLen := 4 + len(bs); cap(msg) < pktLen {
msg = make([]byte, 0, pktLen) msg = make([]byte, 0, pktLen)
@ -193,18 +197,19 @@ func (c *localClient) recvAnnouncements(ctx context.Context) error {
continue continue
} }
var pkt Announce var pkt discoproto.Announce
err := pkt.Unmarshal(buf[4:]) err := proto.Unmarshal(buf[4:], &pkt)
if err != nil && err != io.EOF { if err != nil && err != io.EOF {
l.Debugf("discover: Failed to unmarshal local announcement from %s:\n%s", addr, hex.Dump(buf)) l.Debugf("discover: Failed to unmarshal local announcement from %s:\n%s", addr, hex.Dump(buf))
continue continue
} }
l.Debugf("discover: Received local announcement from %s for %s", addr, pkt.ID) id, _ := protocol.DeviceIDFromBytes(pkt.Id)
l.Debugf("discover: Received local announcement from %s for %s", addr, id)
var newDevice bool var newDevice bool
if pkt.ID != c.myID { if !bytes.Equal(pkt.Id, c.myID[:]) {
newDevice = c.registerDevice(addr, pkt) newDevice = c.registerDevice(addr, &pkt)
} }
if newDevice { if newDevice {
@ -218,18 +223,24 @@ func (c *localClient) recvAnnouncements(ctx context.Context) error {
} }
} }
func (c *localClient) registerDevice(src net.Addr, device Announce) bool { func (c *localClient) registerDevice(src net.Addr, device *discoproto.Announce) bool {
// Remember whether we already had a valid cache entry for this device. // Remember whether we already had a valid cache entry for this device.
// If the instance ID has changed the remote device has restarted since // If the instance ID has changed the remote device has restarted since
// we last heard from it, so we should treat it as a new device. // we last heard from it, so we should treat it as a new device.
ce, existsAlready := c.Get(device.ID) id, err := protocol.DeviceIDFromBytes(device.Id)
isNewDevice := !existsAlready || time.Since(ce.when) > CacheLifeTime || ce.instanceID != device.InstanceID if err != nil {
l.Debugf("discover: Failed to parse device ID %x: %v", device.Id, err)
return false
}
ce, existsAlready := c.Get(id)
isNewDevice := !existsAlready || time.Since(ce.when) > CacheLifeTime || ce.instanceID != device.InstanceId
// Any empty or unspecified addresses should be set to the source address // Any empty or unspecified addresses should be set to the source address
// of the announcement. We also skip any addresses we can't parse. // of the announcement. We also skip any addresses we can't parse.
l.Debugln("discover: Registering addresses for", device.ID) l.Debugln("discover: Registering addresses for", id)
var validAddresses []string var validAddresses []string
for _, addr := range device.Addresses { for _, addr := range device.Addresses {
u, err := url.Parse(addr) u, err := url.Parse(addr)
@ -272,16 +283,16 @@ func (c *localClient) registerDevice(src net.Addr, device Announce) bool {
} }
} }
c.Set(device.ID, CacheEntry{ c.Set(id, CacheEntry{
Addresses: validAddresses, Addresses: validAddresses,
when: time.Now(), when: time.Now(),
found: true, found: true,
instanceID: device.InstanceID, instanceID: device.InstanceId,
}) })
if isNewDevice { if isNewDevice {
c.evLogger.Log(events.DeviceDiscovered, map[string]interface{}{ c.evLogger.Log(events.DeviceDiscovered, map[string]interface{}{
"device": device.ID.String(), "device": id.String(),
"addrs": validAddresses, "addrs": validAddresses,
}) })
} }
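On the local discovery side, the announcement is a plain `discoproto.Announce` marshalled with `proto.Marshal`, prefixed by four header bytes in each datagram (hence the `4 + len(bs)` and `buf[4:]` above). A hedged sketch of the decode path (the `decodeAnnounce` helper is illustrative, not in the commit):

```
// decodeAnnounce skips the 4-byte packet header, unmarshals the
// announcement and parses the raw device ID bytes, mirroring
// recvAnnouncements and registerDevice above.
func decodeAnnounce(buf []byte) (*discoproto.Announce, protocol.DeviceID, error) {
	var pkt discoproto.Announce
	if err := proto.Unmarshal(buf[4:], &pkt); err != nil {
		return nil, protocol.DeviceID{}, err
	}
	id, err := protocol.DeviceIDFromBytes(pkt.Id)
	if err != nil {
		return nil, protocol.DeviceID{}, err
	}
	return &pkt, id, nil
}
```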


@ -1,399 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/discover/local.proto
package discover
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
github_com_syncthing_syncthing_lib_protocol "github.com/syncthing/syncthing/lib/protocol"
_ "github.com/syncthing/syncthing/proto/ext"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type Announce struct {
ID github_com_syncthing_syncthing_lib_protocol.DeviceID `protobuf:"bytes,1,opt,name=id,proto3,customtype=github.com/syncthing/syncthing/lib/protocol.DeviceID" json:"id" xml:"id"`
Addresses []string `protobuf:"bytes,2,rep,name=addresses,proto3" json:"addresses" xml:"address"`
InstanceID int64 `protobuf:"varint,3,opt,name=instance_id,json=instanceId,proto3" json:"instanceId" xml:"instanceId"`
}
func (m *Announce) Reset() { *m = Announce{} }
func (m *Announce) String() string { return proto.CompactTextString(m) }
func (*Announce) ProtoMessage() {}
func (*Announce) Descriptor() ([]byte, []int) {
return fileDescriptor_18afca46562fdaf4, []int{0}
}
func (m *Announce) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Announce) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Announce.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Announce) XXX_Merge(src proto.Message) {
xxx_messageInfo_Announce.Merge(m, src)
}
func (m *Announce) XXX_Size() int {
return m.ProtoSize()
}
func (m *Announce) XXX_DiscardUnknown() {
xxx_messageInfo_Announce.DiscardUnknown(m)
}
var xxx_messageInfo_Announce proto.InternalMessageInfo
func init() {
proto.RegisterType((*Announce)(nil), "discover.Announce")
}
func init() { proto.RegisterFile("lib/discover/local.proto", fileDescriptor_18afca46562fdaf4) }
var fileDescriptor_18afca46562fdaf4 = []byte{
// 334 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x91, 0x31, 0x6b, 0xe3, 0x30,
0x18, 0x86, 0x2d, 0x05, 0x8e, 0x44, 0x77, 0x07, 0x87, 0x27, 0x93, 0x41, 0x0a, 0xbe, 0x0c, 0x81,
0x42, 0x3c, 0xb4, 0x53, 0x29, 0x85, 0x1a, 0x2f, 0x1e, 0xba, 0x64, 0xec, 0xd0, 0x10, 0x4b, 0xaa,
0x23, 0x70, 0xa4, 0x60, 0x39, 0x21, 0xfd, 0x07, 0x1d, 0x4b, 0xb6, 0x6e, 0xfd, 0x39, 0x19, 0x3d,
0x96, 0x0e, 0x82, 0xd8, 0x5b, 0xc6, 0xfc, 0x82, 0x12, 0x25, 0x69, 0x32, 0x76, 0x7b, 0xf5, 0xe8,
0x95, 0x78, 0xf8, 0x3e, 0xe4, 0x65, 0x22, 0x09, 0x98, 0xd0, 0x54, 0xcd, 0x79, 0x1e, 0x64, 0x8a,
0x8e, 0xb2, 0xfe, 0x34, 0x57, 0x85, 0x72, 0x9b, 0x47, 0xda, 0xfe, 0x9f, 0xf3, 0xa9, 0xd2, 0x81,
0xc5, 0xc9, 0xec, 0x29, 0x48, 0x55, 0xaa, 0xec, 0xc1, 0xa6, 0x7d, 0xbd, 0xdd, 0xe2, 0x8b, 0x62,
0x1f, 0xfd, 0x37, 0x88, 0x9a, 0x77, 0x52, 0xaa, 0x99, 0xa4, 0xdc, 0x95, 0x08, 0x0a, 0xe6, 0x81,
0x0e, 0xe8, 0xfd, 0x09, 0x1f, 0x57, 0x86, 0x38, 0x9f, 0x86, 0x5c, 0xa5, 0xa2, 0x18, 0xcf, 0x92,
0x3e, 0x55, 0x93, 0x40, 0x3f, 0x4b, 0x5a, 0x8c, 0x85, 0x4c, 0xcf, 0xd2, 0xce, 0xc9, 0x7e, 0x45,
0x55, 0xd6, 0x8f, 0xf8, 0x5c, 0x50, 0x1e, 0x47, 0x95, 0x21, 0x30, 0x8e, 0x36, 0x86, 0x40, 0xc1,
0xb6, 0x86, 0x34, 0x17, 0x93, 0xec, 0xda, 0x17, 0xcc, 0x7f, 0x29, 0xbb, 0x60, 0x59, 0x76, 0x61,
0x1c, 0x0d, 0xa0, 0x60, 0xee, 0x0d, 0x6a, 0x8d, 0x18, 0xcb, 0xb9, 0xd6, 0x5c, 0x7b, 0xb0, 0xd3,
0xe8, 0xb5, 0x42, 0xbc, 0x31, 0xe4, 0x04, 0xb7, 0x86, 0xfc, 0xb5, 0x6f, 0x0f, 0xc4, 0x1f, 0x9c,
0xee, 0xdc, 0x21, 0xfa, 0x2d, 0xa4, 0x2e, 0x46, 0x92, 0xf2, 0xa1, 0x60, 0x5e, 0xa3, 0x03, 0x7a,
0x8d, 0xf0, 0xb6, 0x32, 0x04, 0xc5, 0x07, 0x6c, 0x15, 0xd0, 0xb1, 0x14, 0xef, 0x54, 0xfe, 0xed,
0x55, 0xbe, 0x91, 0xbf, 0x2c, 0xbb, 0x67, 0xfd, 0xc1, 0x59, 0x3b, 0xbc, 0x5f, 0xad, 0xb1, 0x53,
0xae, 0xb1, 0xb3, 0xaa, 0x30, 0x28, 0x2b, 0x0c, 0x5e, 0x6b, 0xec, 0xbc, 0xd7, 0x18, 0x94, 0x35,
0x76, 0x3e, 0x6a, 0xec, 0x3c, 0x5c, 0xfc, 0x60, 0x38, 0xc7, 0xd5, 0x24, 0xbf, 0xec, 0x98, 0x2e,
0xbf, 0x02, 0x00, 0x00, 0xff, 0xff, 0xdc, 0xe8, 0x9a, 0xd0, 0xc7, 0x01, 0x00, 0x00,
}
func (m *Announce) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *Announce) MarshalTo(dAtA []byte) (int, error) {
size := m.ProtoSize()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Announce) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.InstanceID != 0 {
i = encodeVarintLocal(dAtA, i, uint64(m.InstanceID))
i--
dAtA[i] = 0x18
}
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
i -= len(m.Addresses[iNdEx])
copy(dAtA[i:], m.Addresses[iNdEx])
i = encodeVarintLocal(dAtA, i, uint64(len(m.Addresses[iNdEx])))
i--
dAtA[i] = 0x12
}
}
{
size := m.ID.ProtoSize()
i -= size
if _, err := m.ID.MarshalTo(dAtA[i:]); err != nil {
return 0, err
}
i = encodeVarintLocal(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0xa
return len(dAtA) - i, nil
}
func encodeVarintLocal(dAtA []byte, offset int, v uint64) int {
offset -= sovLocal(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *Announce) ProtoSize() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = m.ID.ProtoSize()
n += 1 + l + sovLocal(uint64(l))
if len(m.Addresses) > 0 {
for _, s := range m.Addresses {
l = len(s)
n += 1 + l + sovLocal(uint64(l))
}
}
if m.InstanceID != 0 {
n += 1 + sovLocal(uint64(m.InstanceID))
}
return n
}
func sovLocal(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozLocal(x uint64) (n int) {
return sovLocal(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *Announce) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLocal
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: Announce: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: Announce: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLocal
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
byteLen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if byteLen < 0 {
return ErrInvalidLengthLocal
}
postIndex := iNdEx + byteLen
if postIndex < 0 {
return ErrInvalidLengthLocal
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := m.ID.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Addresses", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLocal
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthLocal
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthLocal
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Addresses = append(m.Addresses, string(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field InstanceID", wireType)
}
m.InstanceID = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLocal
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.InstanceID |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipLocal(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthLocal
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipLocal(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLocal
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLocal
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLocal
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthLocal
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupLocal
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthLocal
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthLocal = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowLocal = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupLocal = fmt.Errorf("proto: unexpected end of group")
)


@ -13,6 +13,7 @@ import (
"net" "net"
"testing" "testing"
"github.com/syncthing/syncthing/internal/gen/discoproto"
"github.com/syncthing/syncthing/lib/events" "github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol" "github.com/syncthing/syncthing/lib/protocol"
) )
@ -50,40 +51,40 @@ func TestLocalInstanceIDShouldTriggerNew(t *testing.T) {
lc := c.(*localClient) lc := c.(*localClient)
src := &net.UDPAddr{IP: []byte{10, 20, 30, 40}, Port: 50} src := &net.UDPAddr{IP: []byte{10, 20, 30, 40}, Port: 50}
new := lc.registerDevice(src, Announce{ new := lc.registerDevice(src, &discoproto.Announce{
ID: protocol.DeviceID{10, 20, 30, 40, 50, 60, 70, 80, 90}, Id: padDeviceID(10),
Addresses: []string{"tcp://0.0.0.0:22000"}, Addresses: []string{"tcp://0.0.0.0:22000"},
InstanceID: 1234567890, InstanceId: 1234567890,
}) })
if !new { if !new {
t.Fatal("first register should be new") t.Fatal("first register should be new")
} }
new = lc.registerDevice(src, Announce{ new = lc.registerDevice(src, &discoproto.Announce{
ID: protocol.DeviceID{10, 20, 30, 40, 50, 60, 70, 80, 90}, Id: padDeviceID(10),
Addresses: []string{"tcp://0.0.0.0:22000"}, Addresses: []string{"tcp://0.0.0.0:22000"},
InstanceID: 1234567890, InstanceId: 1234567890,
}) })
if new { if new {
t.Fatal("second register should not be new") t.Fatal("second register should not be new")
} }
new = lc.registerDevice(src, Announce{ new = lc.registerDevice(src, &discoproto.Announce{
ID: protocol.DeviceID{42, 10, 20, 30, 40, 50, 60, 70, 80, 90}, Id: padDeviceID(42),
Addresses: []string{"tcp://0.0.0.0:22000"}, Addresses: []string{"tcp://0.0.0.0:22000"},
InstanceID: 1234567890, InstanceId: 1234567890,
}) })
if !new { if !new {
t.Fatal("new device ID should be new") t.Fatal("new device ID should be new")
} }
new = lc.registerDevice(src, Announce{ new = lc.registerDevice(src, &discoproto.Announce{
ID: protocol.DeviceID{10, 20, 30, 40, 50, 60, 70, 80, 90}, Id: padDeviceID(10),
Addresses: []string{"tcp://0.0.0.0:22000"}, Addresses: []string{"tcp://0.0.0.0:22000"},
InstanceID: 91234567890, InstanceId: 91234567890,
}) })
if !new { if !new {
@ -91,6 +92,12 @@ func TestLocalInstanceIDShouldTriggerNew(t *testing.T) {
} }
} }
func padDeviceID(bs ...byte) []byte {
var padded [32]byte
copy(padded[:], bs)
return padded[:]
}
func TestFilterUndialable(t *testing.T) { func TestFilterUndialable(t *testing.T) {
addrs := []string{ addrs := []string{
"quic://[2001:db8::1]:22000", // OK "quic://[2001:db8::1]:22000", // OK


@ -9,9 +9,7 @@
package fs package fs
import ( import "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/protocol"
)
func (f *BasicFilesystem) PlatformData(name string, scanOwnership, scanXattrs bool, xattrFilter XattrFilter) (protocol.PlatformData, error) { func (f *BasicFilesystem) PlatformData(name string, scanOwnership, scanXattrs bool, xattrFilter XattrFilter) (protocol.PlatformData, error) {
return unixPlatformData(f, name, f.userCache, f.groupCache, scanOwnership, scanXattrs, xattrFilter) return unixPlatformData(f, name, f.userCache, f.groupCache, scanOwnership, scanXattrs, xattrFilter)


@ -9,6 +9,6 @@
package fs package fs
func reachedMaxUserWatches(err error) bool { func reachedMaxUserWatches(_ error) bool {
return false return false
} }


@ -9,9 +9,7 @@
package fs package fs
import ( import "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/protocol"
)
func (f *BasicFilesystem) GetXattr(path string, xattrFilter XattrFilter) ([]protocol.Xattr, error) { func (f *BasicFilesystem) GetXattr(path string, xattrFilter XattrFilter) ([]protocol.Xattr, error) {
return nil, ErrXattrsNotSupported return nil, ErrXattrsNotSupported


@ -6,6 +6,17 @@
package fs package fs
type CopyRangeMethod int32
const (
CopyRangeMethodStandard CopyRangeMethod = 0
CopyRangeMethodIoctl CopyRangeMethod = 1
CopyRangeMethodCopyFileRange CopyRangeMethod = 2
CopyRangeMethodSendFile CopyRangeMethod = 3
CopyRangeMethodDuplicateExtents CopyRangeMethod = 4
CopyRangeMethodAllWithFallback CopyRangeMethod = 5
)
func (o CopyRangeMethod) String() string { func (o CopyRangeMethod) String() string {
switch o { switch o {
case CopyRangeMethodStandard: case CopyRangeMethodStandard:
@ -24,31 +35,3 @@ func (o CopyRangeMethod) String() string {
return "unknown" return "unknown"
} }
} }
func (o CopyRangeMethod) MarshalText() ([]byte, error) {
return []byte(o.String()), nil
}
func (o *CopyRangeMethod) UnmarshalText(bs []byte) error {
switch string(bs) {
case "standard":
*o = CopyRangeMethodStandard
case "ioctl":
*o = CopyRangeMethodIoctl
case "copy_file_range":
*o = CopyRangeMethodCopyFileRange
case "sendfile":
*o = CopyRangeMethodSendFile
case "duplicate_extents":
*o = CopyRangeMethodDuplicateExtents
case "all":
*o = CopyRangeMethodAllWithFallback
default:
*o = CopyRangeMethodStandard
}
return nil
}
func (o *CopyRangeMethod) ParseDefault(str string) error {
return o.UnmarshalText([]byte(str))
}


@ -1,90 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: lib/fs/copyrangemethod.proto
package fs
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type CopyRangeMethod int32
const (
CopyRangeMethodStandard CopyRangeMethod = 0
CopyRangeMethodIoctl CopyRangeMethod = 1
CopyRangeMethodCopyFileRange CopyRangeMethod = 2
CopyRangeMethodSendFile CopyRangeMethod = 3
CopyRangeMethodDuplicateExtents CopyRangeMethod = 4
CopyRangeMethodAllWithFallback CopyRangeMethod = 5
)
var CopyRangeMethod_name = map[int32]string{
0: "COPY_RANGE_METHOD_STANDARD",
1: "COPY_RANGE_METHOD_IOCTL",
2: "COPY_RANGE_METHOD_COPY_FILE_RANGE",
3: "COPY_RANGE_METHOD_SEND_FILE",
4: "COPY_RANGE_METHOD_DUPLICATE_EXTENTS",
5: "COPY_RANGE_METHOD_ALL_WITH_FALLBACK",
}
var CopyRangeMethod_value = map[string]int32{
"COPY_RANGE_METHOD_STANDARD": 0,
"COPY_RANGE_METHOD_IOCTL": 1,
"COPY_RANGE_METHOD_COPY_FILE_RANGE": 2,
"COPY_RANGE_METHOD_SEND_FILE": 3,
"COPY_RANGE_METHOD_DUPLICATE_EXTENTS": 4,
"COPY_RANGE_METHOD_ALL_WITH_FALLBACK": 5,
}
func (CopyRangeMethod) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_78e1061c3022e87e, []int{0}
}
func init() {
proto.RegisterEnum("fs.CopyRangeMethod", CopyRangeMethod_name, CopyRangeMethod_value)
}
func init() { proto.RegisterFile("lib/fs/copyrangemethod.proto", fileDescriptor_78e1061c3022e87e) }
var fileDescriptor_78e1061c3022e87e = []byte{
// 391 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x92, 0xbd, 0x8e, 0xd3, 0x40,
0x14, 0x85, 0xed, 0xdd, 0x85, 0xc2, 0x0d, 0x96, 0x85, 0xb4, 0x68, 0x76, 0x35, 0x18, 0x22, 0x1a,
0x8a, 0x75, 0x81, 0xa8, 0xa0, 0x99, 0xb5, 0xc7, 0x59, 0x6b, 0x27, 0x4e, 0x94, 0x18, 0x05, 0x68,
0x2c, 0xff, 0xc5, 0xb6, 0x98, 0x78, 0x2c, 0x7b, 0x22, 0x91, 0x57, 0x70, 0xc5, 0x0b, 0x58, 0xa2,
0xa0, 0xa0, 0xe1, 0x3d, 0x52, 0xa6, 0xa4, 0x4d, 0xfc, 0x22, 0x28, 0x93, 0x06, 0x39, 0xe9, 0xee,
0x3d, 0x9a, 0xef, 0xd3, 0x91, 0xe6, 0x2a, 0xb7, 0x34, 0x0f, 0x8d, 0x45, 0x6d, 0x44, 0xac, 0x5c,
0x57, 0x41, 0x91, 0x26, 0xcb, 0x84, 0x67, 0x2c, 0xbe, 0x2b, 0x2b, 0xc6, 0x99, 0x76, 0xb1, 0xa8,
0xc1, 0xa0, 0x4a, 0x4a, 0x56, 0x1b, 0x22, 0x08, 0x57, 0x0b, 0x23, 0x65, 0x29, 0x13, 0x8b, 0x98,
0x8e, 0x0f, 0xdf, 0xfe, 0xb9, 0x54, 0x9e, 0x99, 0xac, 0x5c, 0x4f, 0x0f, 0x8a, 0x91, 0x50, 0x68,
0x1f, 0x14, 0x60, 0x8e, 0x27, 0x5f, 0xfc, 0x29, 0x72, 0x87, 0xd8, 0x1f, 0x61, 0xef, 0x61, 0x6c,
0xf9, 0x33, 0x0f, 0xb9, 0x16, 0x9a, 0x5a, 0xaa, 0x04, 0x6e, 0x9a, 0x56, 0xbf, 0xee, 0x41, 0x33,
0x1e, 0x14, 0x71, 0x50, 0xc5, 0xda, 0x7b, 0xe5, 0xfa, 0x14, 0x76, 0xc6, 0xa6, 0x47, 0x54, 0x19,
0xbc, 0x68, 0x5a, 0xfd, 0x79, 0x8f, 0x74, 0x58, 0xc4, 0xa9, 0x36, 0x54, 0x5e, 0x9d, 0x62, 0x22,
0xb1, 0x1d, 0x82, 0x8f, 0xb1, 0x7a, 0x01, 0xf4, 0xa6, 0xd5, 0x6f, 0x7b, 0x82, 0xc3, 0x6a, 0xe7,
0x34, 0x11, 0x91, 0xf6, 0x51, 0xb9, 0x39, 0x53, 0x1e, 0xbb, 0x96, 0x10, 0xa9, 0x97, 0xe7, 0xdb,
0x27, 0x45, 0x7c, 0x50, 0x68, 0x44, 0x19, 0x9c, 0xd2, 0xd6, 0xa7, 0x09, 0x71, 0x4c, 0xe4, 0x61,
0x1f, 0x7f, 0xf6, 0xb0, 0xeb, 0xcd, 0xd4, 0x2b, 0x30, 0x68, 0x5a, 0xfd, 0x65, 0xcf, 0x62, 0xad,
0x4a, 0x9a, 0x47, 0x01, 0x4f, 0xf0, 0x77, 0x9e, 0x14, 0xbc, 0xd6, 0x1e, 0xcf, 0xd9, 0x10, 0x21,
0xfe, 0xdc, 0xf1, 0x1e, 0x7c, 0x1b, 0x11, 0x72, 0x8f, 0xcc, 0x47, 0xf5, 0x09, 0x78, 0xdd, 0xb4,
0x3a, 0xec, 0xd9, 0x10, 0xa5, 0xf3, 0x9c, 0x67, 0x76, 0x40, 0x69, 0x18, 0x44, 0xdf, 0xc0, 0xd5,
0xef, 0x5f, 0x50, 0xba, 0x1f, 0x6e, 0x76, 0x50, 0xda, 0xee, 0xa0, 0xb4, 0xd9, 0x43, 0x79, 0xbb,
0x87, 0xf2, 0x8f, 0x0e, 0x4a, 0x3f, 0x3b, 0x28, 0x6f, 0x3b, 0x28, 0xfd, 0xed, 0xa0, 0xf4, 0xf5,
0x4d, 0x9a, 0xf3, 0x6c, 0x15, 0xde, 0x45, 0x6c, 0x69, 0xd4, 0xeb, 0x22, 0xe2, 0x59, 0x5e, 0xa4,
0xff, 0x4d, 0xc7, 0xbb, 0x09, 0x9f, 0x8a, 0xff, 0x7f, 0xf7, 0x2f, 0x00, 0x00, 0xff, 0xff, 0xb6,
0x5d, 0x2e, 0x16, 0x48, 0x02, 0x00, 0x00,
}

Some files were not shown because too many files have changed in this diff.