// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.

package scanner

import (
    "context"
    "errors"
    "fmt"
    "path/filepath"
    "strings"
    "sync/atomic"
    "time"
    "unicode/utf8"

    metrics "github.com/rcrowley/go-metrics"

    "golang.org/x/text/unicode/norm"

    "github.com/syncthing/syncthing/lib/build"
    "github.com/syncthing/syncthing/lib/events"
    "github.com/syncthing/syncthing/lib/fs"
    "github.com/syncthing/syncthing/lib/ignore"
    "github.com/syncthing/syncthing/lib/osutil"
    "github.com/syncthing/syncthing/lib/protocol"
)

type Config struct {
    // Folder for which the walker has been created
    Folder string
    // Limit walking to these paths within the folder, or no limit if Subs is empty
    Subs []string
    // If Matcher is not nil, it is used to identify files to ignore which were specified by the user.
    Matcher *ignore.Matcher
    // How long to keep temporary files before removing them
    TempLifetime time.Duration
    // If CurrentFiler is not nil, it is queried for the current file before rescanning.
    CurrentFiler CurrentFiler
    // The Filesystem provides an abstraction on top of the actual filesystem.
    Filesystem fs.Filesystem
    // If IgnorePerms is true, changes to permission bits will not be
    // detected.
    IgnorePerms bool
    // When AutoNormalize is set, file names that are in UTF8 but an incorrect
    // normalization form will be corrected.
    AutoNormalize bool
    // Number of routines to use for hashing
    Hashers int
    // Our vector clock id
    ShortID protocol.ShortID
    // Optional progress tick interval which defines how often FolderScanProgress
    // events are emitted. A negative number means disabled.
    ProgressTickIntervalS int
    // Local flags to set on scanned files
    LocalFlags uint32
    // Modification time is to be considered unchanged if the difference is lower.
    ModTimeWindow time.Duration
    // Event logger to which the scan progress events are sent
    EventLogger events.Logger
    // If ScanOwnership is true, we pick up ownership information on files while scanning.
    ScanOwnership bool
    // If ScanXattrs is true, we pick up extended attributes on files while scanning.
    ScanXattrs bool
    // Filter for extended attributes
    XattrFilter XattrFilter
}

type CurrentFiler interface {
    // CurrentFile returns the file as seen at last scan.
    CurrentFile(name string) (protocol.FileInfo, bool)
}

type XattrFilter interface {
    Permit(string) bool
    GetMaxSingleEntrySize() int
    GetMaxTotalSize() int
}

type ScanResult struct {
    File protocol.FileInfo
    Err  error
    Path string // set when Err != nil and File is not set
}

func Walk(ctx context.Context, cfg Config) chan ScanResult {
    return newWalker(cfg).walk(ctx)
}

func WalkWithoutHashing(ctx context.Context, cfg Config) chan ScanResult {
    return newWalker(cfg).walkWithoutHashing(ctx)
}

func newWalker(cfg Config) *walker {
    w := &walker{cfg}

    if w.CurrentFiler == nil {
        w.CurrentFiler = noCurrentFiler{}
    }
    if w.Filesystem == nil {
        panic("no filesystem specified")
    }
    if w.Matcher == nil {
        w.Matcher = ignore.New(w.Filesystem)
    }

    registerFolderMetrics(w.Folder)
    return w
}

var (
    errUTF8Invalid       = errors.New("item is not in UTF8 encoding")
    errUTF8Normalization = errors.New("item is not in the correct UTF8 normalization form")
    errUTF8Conflict      = errors.New("item has UTF8 encoding conflict with another item")
)

type walker struct {
    Config
}

// walk returns the list of files found in the local folder by scanning the
// file system. Files are blockwise hashed.
func (w *walker) walk(ctx context.Context) chan ScanResult {
    l.Debugln(w, "Walk", w.Subs, w.Matcher)

    toHashChan := make(chan protocol.FileInfo)
    finishedChan := make(chan ScanResult)

    // A routine which walks the filesystem tree, and sends files which have
    // been modified to the counter routine.
    go w.scan(ctx, toHashChan, finishedChan)

    // We're not required to emit scan progress events; just kick off hashers,
    // and feed inputs directly from the walker.
    if w.ProgressTickIntervalS < 0 {
        newParallelHasher(ctx, w.Folder, w.Filesystem, w.Hashers, finishedChan, toHashChan, nil, nil)
        return finishedChan
    }

    // Defaults to every 2 seconds.
    if w.ProgressTickIntervalS == 0 {
        w.ProgressTickIntervalS = 2
    }

    // We need to emit progress events, hence we create a routine which buffers
    // the list of files to be hashed, counts the total number of
    // bytes to hash, and once no more files need to be hashed (chan gets closed),
    // starts a routine which periodically emits FolderScanProgress events,
    // until a stop signal is sent by the parallel hasher.
    // The parallel hasher is stopped by this routine when we close the channel
    // over which it receives the files we ask it to hash.
    go func() {
        var filesToHash []protocol.FileInfo
        var total int64 = 1

        for file := range toHashChan {
            filesToHash = append(filesToHash, file)
            total += file.Size
        }

        if len(filesToHash) == 0 {
            close(finishedChan)
            return
        }

        realToHashChan := make(chan protocol.FileInfo)
        done := make(chan struct{})
        progress := newByteCounter()

        newParallelHasher(ctx, w.Folder, w.Filesystem, w.Hashers, finishedChan, realToHashChan, progress, done)

        // A routine which actually emits the FolderScanProgress events
        // every w.ProgressTicker ticks, until the hasher routines terminate.
        go func() {
            defer progress.Close()

            emitProgressEvent := func() {
                current := progress.Total()
                rate := progress.Rate()
                l.Debugf("%v: Walk %s %s current progress %d/%d at %.01f MiB/s (%d%%)", w, w.Folder, w.Subs, current, total, rate/1024/1024, current*100/total)
                w.EventLogger.Log(events.FolderScanProgress, map[string]interface{}{
                    "folder":  w.Folder,
                    "current": current,
                    "total":   total,
                    "rate":    rate, // bytes per second
                })
            }

            ticker := time.NewTicker(time.Duration(w.ProgressTickIntervalS) * time.Second)
            defer ticker.Stop()
            for {
                select {
                case <-done:
                    emitProgressEvent()
                    l.Debugln(w, "Walk progress done", w.Folder, w.Subs, w.Matcher)
                    return
                case <-ticker.C:
                    emitProgressEvent()
                case <-ctx.Done():
                    return
                }
            }
        }()

    loop:
        for _, file := range filesToHash {
            l.Debugln(w, "real to hash:", file.Name)
            select {
            case realToHashChan <- file:
            case <-ctx.Done():
                break loop
            }
        }
        close(realToHashChan)
    }()

    return finishedChan
}

func (w *walker) walkWithoutHashing(ctx context.Context) chan ScanResult {
    l.Debugln(w, "Walk without hashing", w.Subs, w.Matcher)

    toHashChan := make(chan protocol.FileInfo)
    finishedChan := make(chan ScanResult)

    // A routine which walks the filesystem tree, and sends files which have
    // been modified to the counter routine.
    go w.scan(ctx, toHashChan, finishedChan)

    go func() {
        for file := range toHashChan {
            finishedChan <- ScanResult{File: file}
        }
        close(finishedChan)
    }()

    return finishedChan
}

func (w *walker) scan(ctx context.Context, toHashChan chan<- protocol.FileInfo, finishedChan chan<- ScanResult) {
    hashFiles := w.walkAndHashFiles(ctx, toHashChan, finishedChan)
    if len(w.Subs) == 0 {
        w.Filesystem.Walk(".", hashFiles)
    } else {
        for _, sub := range w.Subs {
            if err := osutil.TraversesSymlink(w.Filesystem, filepath.Dir(sub)); err != nil {
                l.Debugf("%v: Skip walking %v as it is below a symlink", w, sub)
                continue
            }
            w.Filesystem.Walk(sub, hashFiles)
        }
    }
    close(toHashChan)
}

func (w *walker) walkAndHashFiles(ctx context.Context, toHashChan chan<- protocol.FileInfo, finishedChan chan<- ScanResult) fs.WalkFunc {
    now := time.Now()
    ignoredParent := ""

    return func(path string, info fs.FileInfo, err error) error {
        select {
        case <-ctx.Done():
            return ctx.Err()
        default:
        }

        metricScannedItems.WithLabelValues(w.Folder).Inc()

        // Return value used when we are returning early and don't want to
        // process the item. For directories, this means do-not-descend.
        var skip error // nil
        // info is nil when err is not nil
        if info != nil && info.IsDir() {
            skip = fs.SkipDir
        }

        if !utf8.ValidString(path) {
            handleError(ctx, "scan", path, errUTF8Invalid, finishedChan)
            return skip
        }

        if fs.IsTemporary(path) {
            l.Debugln(w, "temporary:", path, "err:", err)
            if err == nil && info.IsRegular() && info.ModTime().Add(w.TempLifetime).Before(now) {
                w.Filesystem.Remove(path)
                l.Debugln(w, "removing temporary:", path, info.ModTime())
            }
            return nil
        }

        if fs.IsInternal(path) {
            l.Debugln(w, "ignored (internal):", path)
            return skip
        }

        // Just in case the filesystem doesn't produce the normalization the OS
        // uses, and we use internally.
        nonNormPath := path
        path = normalizePath(path)

        if m := w.Matcher.Match(path); m.IsIgnored() {
            l.Debugln(w, "ignored (patterns):", path)
            // Only descend if the matcher says so and the current file is not a symlink.
            if err != nil || m.CanSkipDir() || info.IsSymlink() {
                return skip
            }
            // If the parent wasn't ignored already, set this path as the "highest" ignored parent
            if info.IsDir() && (ignoredParent == "" || !fs.IsParent(path, ignoredParent)) {
                ignoredParent = path
            }
            return nil
        }

        if err != nil {
            // No need to report errors for files that don't exist (e.g. scan
            // due to filesystem watcher)
            if !fs.IsNotExist(err) {
                handleError(ctx, "scan", path, err, finishedChan)
            }
            return skip
        }

        if path == "." {
            return nil
        }

        if path != nonNormPath {
            if !w.AutoNormalize {
                // We're not authorized to do anything about it, so complain and skip.
                handleError(ctx, "normalizing path", nonNormPath, errUTF8Normalization, finishedChan)
                return skip
            }

            path, err = w.applyNormalization(nonNormPath, path, info)
            if err != nil {
                handleError(ctx, "normalizing path", nonNormPath, err, finishedChan)
                return skip
            }
        }

        if ignoredParent == "" {
            // Parent isn't ignored, nothing special.
            return w.handleItem(ctx, path, info, toHashChan, finishedChan)
        }

        // Part of the current path below the ignored (potential) parent
        rel := strings.TrimPrefix(path, ignoredParent+string(fs.PathSeparator))

        // The ignored path isn't actually a parent of the current path
        if rel == path {
            ignoredParent = ""
            return w.handleItem(ctx, path, info, toHashChan, finishedChan)
        }

        // The previously ignored parent directories of the current, not
        // ignored path need to be handled as well.
        // Prepend an empty string to handle ignoredParent without anything
        // appended in the first iteration.
        for _, name := range append([]string{""}, fs.PathComponents(rel)...) {
            ignoredParent = filepath.Join(ignoredParent, name)
            info, err = w.Filesystem.Lstat(ignoredParent)
            // An error here would be weird as we've already gotten to this point, but act on it nonetheless
            if err != nil {
                handleError(ctx, "scan", ignoredParent, err, finishedChan)
                return skip
            }
            if err = w.handleItem(ctx, ignoredParent, info, toHashChan, finishedChan); err != nil {
                return err
            }
        }
        ignoredParent = ""

        return nil
    }
}

func (w *walker) handleItem(ctx context.Context, path string, info fs.FileInfo, toHashChan chan<- protocol.FileInfo, finishedChan chan<- ScanResult) error {
    switch {
    case info.IsSymlink():
        if err := w.walkSymlink(ctx, path, info, finishedChan); err != nil {
            return err
        }
        if info.IsDir() {
            // Under no circumstances shall we descend into a symlink
            return fs.SkipDir
        }
        return nil

    case info.IsDir():
        return w.walkDir(ctx, path, info, finishedChan)

    case info.IsRegular():
        return w.walkRegular(ctx, path, info, toHashChan)

    default:
        // A special file, socket, fifo, etc. -- do nothing but return
        // success so we continue the walk.
        l.Debugf("Skipping non-regular file %s (%s)", path, info.Mode())
        return nil
    }
}

|
2018-11-07 10:04:41 +00:00
|
|
|
func (w *walker) walkRegular(ctx context.Context, relPath string, info fs.FileInfo, toHashChan chan<- protocol.FileInfo) error {
|
2018-05-29 06:01:23 +00:00
|
|
|
curFile, hasCurFile := w.CurrentFiler.CurrentFile(relPath)
|
|
|
|
|
2019-06-06 14:57:38 +00:00
|
|
|
blockSize := protocol.BlockSize(info.Size())
|
|
|
|
|
|
|
|
if hasCurFile {
|
|
|
|
// Check if we should retain current block size.
|
|
|
|
curBlockSize := curFile.BlockSize()
|
|
|
|
if blockSize > curBlockSize && blockSize/curBlockSize <= 2 {
|
|
|
|
// New block size is larger, but not more than twice larger.
|
|
|
|
// Retain.
|
|
|
|
blockSize = curBlockSize
|
|
|
|
} else if curBlockSize > blockSize && curBlockSize/blockSize <= 2 {
|
|
|
|
// Old block size is larger, but not more than twice larger.
|
|
|
|
// Retain.
|
|
|
|
blockSize = curBlockSize
|
2018-04-16 18:08:50 +00:00
|
|
|
}
|
|
|
|
}
|
2016-05-09 18:25:39 +00:00
|
|
|
|
2022-09-14 07:50:55 +00:00
|
|
|
f, err := CreateFileInfo(info, relPath, w.Filesystem, w.ScanOwnership, w.ScanXattrs, w.XattrFilter)
|
2022-07-26 06:24:58 +00:00
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
2018-09-18 13:34:17 +00:00
|
|
|
f = w.updateFileInfo(f, curFile)
|
|
|
|
f.NoPermissions = w.IgnorePerms
|
refactor: use modern Protobuf encoder (#9817)
At a high level, this is what I've done and why:
- I'm moving the protobuf generation for the `protocol`, `discovery` and
`db` packages to the modern alternatives, and using `buf` to generate
because it's nice and simple.
- After trying various approaches on how to integrate the new types with
the existing code, I opted for splitting off our own data model types
from the on-the-wire generated types. This means we can have a
`FileInfo` type with nicer ergonomics and lots of methods, while the
protobuf generated type stays clean and close to the wire protocol. It
does mean copying between the two when required, which certainly adds a
small amount of inefficiency. If we want to walk this back in the future
and use the raw generated type throughout, that's possible, this however
makes the refactor smaller (!) as it doesn't change everything about the
type for everyone at the same time.
- I have simply removed in cold blood a significant number of old
database migrations. These depended on previous generations of generated
messages of various kinds and were annoying to support in the new
fashion. The oldest supported database version now is the one from
Syncthing 1.9.0 from Sep 7, 2020.
- I changed config structs to be regular manually defined structs.
For the sake of discussion, some things I tried that turned out not to
work...
### Embedding / wrapping
Embedding the protobuf generated structs in our existing types as a data
container and keeping our methods and stuff:
```
package protocol
type FileInfo struct {
*generated.FileInfo
}
```
This generates a lot of problems because the internal shape of the
generated struct is quite different (different names, different types,
more pointers), because initializing it doesn't work like you'd expect
(i.e., you end up with an embedded nil pointer and a panic), and because
the types of child types don't get wrapped. That is, even if we also
have a similar wrapper around a `Vector`, that's not the type you get
when accessing `someFileInfo.Version`, you get the `*generated.Vector`
that doesn't have methods, etc.
### Aliasing
```
package protocol
type FileInfo = generated.FileInfo
```
Doesn't help because you can't attach methods to it, plus all the above.
### Generating the types into the target package like we do now and
attaching methods
This fails because of the different shape of the generated type (as in
the embedding case above) plus the generated struct already has a bunch
of methods that we can't necessarily override properly (like `String()`
and a bunch of getters).
### Methods to functions
I considered just moving all the methods we attach to functions in a
specific package, so that for example
```
package protocol
func (f FileInfo) Equal(other FileInfo) bool
```
would become
```
package fileinfos
func Equal(a, b *generated.FileInfo) bool
```
and this would mostly work, but becomes quite verbose and cumbersome,
and somewhat limits discoverability (you can't see what methods are
available on the type in auto completions, etc). In the end I did this
in some cases, like in the database layer where a lot of things like
`func (fv *FileVersion) IsEmpty() bool` becomes `func fvIsEmpty(fv
*generated.FileVersion)` because they were anyway just internal methods.
Fixes #8247
	f.RawBlockSize = int32(blockSize)
	l.Debugln(w, "checking:", f)

	if hasCurFile {
		if curFile.IsEquivalentOptional(f, protocol.FileInfoComparison{
			ModTimeWindow:   w.ModTimeWindow,
			IgnorePerms:     w.IgnorePerms,
			IgnoreBlocks:    true,
			IgnoreFlags:     w.LocalFlags,
			IgnoreOwnership: !w.ScanOwnership,
			IgnoreXattrs:    !w.ScanXattrs,
		}) {
			l.Debugln(w, "unchanged:", curFile)
			return nil
		}
		if curFile.ShouldConflict() && !f.ShouldConflict() {
			// The old file was invalid for whatever reason and probably not
			// up to date with what was out there in the cluster. Drop all
			// others from the version vector to indicate that we haven't
			// taken their version into account, and possibly cause a
			// conflict. However, only do this if the new file is not also
			// invalid. This would indicate that the new file is not part
			// of the cluster, but e.g. a local change.
			f.Version = f.Version.DropOthers(w.ShortID)
		}
		l.Debugln(w, "rescan:", curFile)
	}

	l.Debugln(w, "to hash:", relPath, f)

	select {
	case toHashChan <- f:
	case <-ctx.Done():
		return ctx.Err()
	}

	return nil
}

func (w *walker) walkDir(ctx context.Context, relPath string, info fs.FileInfo, finishedChan chan<- ScanResult) error {
	curFile, hasCurFile := w.CurrentFiler.CurrentFile(relPath)

	f, err := CreateFileInfo(info, relPath, w.Filesystem, w.ScanOwnership, w.ScanXattrs, w.XattrFilter)
	if err != nil {
		return err
	}
	f = w.updateFileInfo(f, curFile)
	f.NoPermissions = w.IgnorePerms
	l.Debugln(w, "checking:", f)

	if hasCurFile {
		if curFile.IsEquivalentOptional(f, protocol.FileInfoComparison{
			ModTimeWindow:   w.ModTimeWindow,
			IgnorePerms:     w.IgnorePerms,
			IgnoreBlocks:    true,
			IgnoreFlags:     w.LocalFlags,
			IgnoreOwnership: !w.ScanOwnership,
			IgnoreXattrs:    !w.ScanXattrs,
		}) {
			l.Debugln(w, "unchanged:", curFile)
			return nil
		}
		if curFile.ShouldConflict() && !f.ShouldConflict() {
			// The old file was invalid for whatever reason and probably not
			// up to date with what was out there in the cluster. Drop all
			// others from the version vector to indicate that we haven't
			// taken their version into account, and possibly cause a
			// conflict. However, only do this if the new file is not also
			// invalid. This would indicate that the new file is not part
			// of the cluster, but e.g. a local change.
			f.Version = f.Version.DropOthers(w.ShortID)
		}
		l.Debugln(w, "rescan:", curFile)
	}

	l.Debugln(w, "dir:", relPath, f)

	select {
	case finishedChan <- ScanResult{File: f}:
	case <-ctx.Done():
		return ctx.Err()
	}

	return nil
}

// walkSymlink returns nil or an error, if the error is of the nature that
// it should stop the entire walk.
func (w *walker) walkSymlink(ctx context.Context, relPath string, info fs.FileInfo, finishedChan chan<- ScanResult) error {
	// Symlinks are not supported on Windows. We ignore instead of returning
	// an error.
	if build.IsWindows {
		return nil
	}

	f, err := CreateFileInfo(info, relPath, w.Filesystem, w.ScanOwnership, w.ScanXattrs, w.XattrFilter)
	if err != nil {
		handleError(ctx, "reading link", relPath, err, finishedChan)
		return nil
	}

	curFile, hasCurFile := w.CurrentFiler.CurrentFile(relPath)
	f = w.updateFileInfo(f, curFile)
	l.Debugln(w, "checking:", f)

	if hasCurFile {
		if curFile.IsEquivalentOptional(f, protocol.FileInfoComparison{
			ModTimeWindow:   w.ModTimeWindow,
			IgnorePerms:     w.IgnorePerms,
			IgnoreBlocks:    true,
			IgnoreFlags:     w.LocalFlags,
			IgnoreOwnership: !w.ScanOwnership,
			IgnoreXattrs:    !w.ScanXattrs,
		}) {
			l.Debugln(w, "unchanged:", curFile, info.ModTime().Unix(), info.Mode()&fs.ModePerm)
			return nil
		}
		if curFile.ShouldConflict() && !f.ShouldConflict() {
			// The old file was invalid for whatever reason and probably not
			// up to date with what was out there in the cluster. Drop all
			// others from the version vector to indicate that we haven't
			// taken their version into account, and possibly cause a
			// conflict. However, only do this if the new file is not also
			// invalid. This would indicate that the new file is not part
			// of the cluster, but e.g. a local change.
			f.Version = f.Version.DropOthers(w.ShortID)
		}
		l.Debugln(w, "rescan:", curFile)
	}

	l.Debugln(w, "symlink:", relPath, f)

	select {
	case finishedChan <- ScanResult{File: f}:
	case <-ctx.Done():
		return ctx.Err()
	}

	return nil
}

lib/scanner: Fix UTF-8 normalization on ZFS (fixes #4649)

It turns out that ZFS doesn't do any normalization when storing files,
but does do normalization "as part of any comparison process".
In practice, this seems to mean that if you LStat a normalized filename,
ZFS will return the FileInfo for the un-normalized version of that
filename. This meant that our test to see whether a separate file with a
normalized version of the filename already exists was failing, as we
were detecting the same file. The fix is to use os.SameFile, to see
whether we're getting the same FileInfo from the normalized and
un-normalized versions of the same filename.

One complication is that ZFS also seems to apply its magic to os.Rename,
meaning that we can't use it to rename an un-normalized file to its
normalized filename. Instead we have to move via a temporary object. If
the move to the temporary object fails, that's OK, we can skip it and
move on. If the move from the temporary object fails however, I'm not
sure of the best approach: the current one is to leave the temporary
file name as-is, and get Syncthing to synchronize it, so at least we
don't lose the file. I'm not sure if there are any implications of this
however.

As part of reworking normalizePath, I spotted that it appeared to be
returning the wrong thing: the doc and the surrounding code expected it
to return the normalized filename, but it was returning the
un-normalized one. I fixed this, but it seems suspicious that, if the
previous behaviour was incorrect, no one ever ran afoul of it. Maybe all
filesystems will do some searching and give you a normalized filename if
you request an unnormalized one.

As part of this, I found that TestNormalization was broken: it was
passing, when in fact one of the files it should have verified as
present was missing. Maybe this was related to the above issue with
normalizePath's return value, I'm not sure. Fixed en route.

Kindly tested by @khinsen on the forum, and it appears to work.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4646

func normalizePath(path string) string {
	if build.IsDarwin || build.IsIOS {
		// Mac OS X file names should always be NFD normalized.
		return norm.NFD.String(path)
	}
	// Every other OS in the known universe uses NFC or just plain
	// doesn't bother to define an encoding. In our case *we* do care,
	// so we enforce NFC regardless.
	return norm.NFC.String(path)
}

// applyNormalization fixes the normalization of the file on disk, i.e. ensures
// the file at path ends up named normPath. It shouldn't but may happen that the
// file ends up with a different name, in which case that one should be scanned.
func (w *walker) applyNormalization(path, normPath string, info fs.FileInfo) (string, error) {
	// We will attempt to normalize it.
	normInfo, err := w.Filesystem.Lstat(normPath)
	if fs.IsNotExist(err) {
		// Nothing exists with the normalized filename. Good.
		if err = w.Filesystem.Rename(path, normPath); err != nil {
			return "", err
		}
		l.Infof(`Normalized UTF8 encoding of file name "%s".`, path)
		return normPath, nil
	}
	if w.Filesystem.SameFile(info, normInfo) {
		// With some filesystems (ZFS), if there is an un-normalized path and you ask whether the normalized
		// version exists, it responds with true. Therefore we need to check fs.SameFile as well.
		// In this case, a call to Rename won't do anything, so we have to rename via a temp file.

		// We don't want to use the standard syncthing prefix here, as that will result in the file being ignored
		// and eventually deleted by Syncthing if the rename back fails.

		tempPath := fs.TempNameWithPrefix(normPath, "")
		if err = w.Filesystem.Rename(path, tempPath); err != nil {
			return "", err
		}
		if err = w.Filesystem.Rename(tempPath, normPath); err != nil {
			// I don't ever expect this to happen, but if it does, we should probably tell our caller that the normalized
			// path is the temp path: that way at least the user's data still gets synced.
			l.Warnf(`Error renaming "%s" to "%s" while normalizing UTF8 encoding: %v. You will want to rename this file back manually`, tempPath, normPath, err)
			return tempPath, nil
		}
		return normPath, nil
	}

	// There is something already in the way at the normalized
	// file name.
	return "", errUTF8Conflict
}

// updateFileInfo updates walker specific members of protocol.FileInfo that
// do not depend on type, and things that should be preserved from the
// previous version of the FileInfo.
func (w *walker) updateFileInfo(dst, src protocol.FileInfo) protocol.FileInfo {
	if dst.Type == protocol.FileInfoTypeFile && build.IsWindows {
		// If we have an existing index entry, copy the executable bits
		// from there.
		dst.Permissions |= (src.Permissions & 0o111)
	}
	dst.Version = src.Version.Update(w.ShortID)
	dst.ModifiedBy = w.ShortID
	dst.LocalFlags = w.LocalFlags

	// Copy OS data from src to dst, unless it was already set on dst.
	dst.Platform.MergeWith(&src.Platform)

	return dst
}

func handleError(ctx context.Context, context, path string, err error, finishedChan chan<- ScanResult) {
	select {
	case finishedChan <- ScanResult{
		Err:  fmt.Errorf("%s: %w", context, err),
		Path: path,
	}:
	case <-ctx.Done():
	}
}

func (w *walker) String() string {
	return fmt.Sprintf("walker/%s@%p", w.Folder, w)
}

// A byteCounter gets bytes added to it via Update() and then provides the
// Total() and one minute moving average Rate() in bytes per second.
type byteCounter struct {
	total atomic.Int64
	metrics.EWMA
	stop chan struct{}
}

func newByteCounter() *byteCounter {
	c := &byteCounter{
		EWMA: metrics.NewEWMA1(), // a one minute exponentially weighted moving average
		stop: make(chan struct{}),
	}
	go c.ticker()
	return c
}

func (c *byteCounter) ticker() {
	// The metrics.EWMA expects clock ticks every five seconds in order to
	// decay the average properly.
	t := time.NewTicker(5 * time.Second)
	for {
		select {
		case <-t.C:
			c.Tick()
		case <-c.stop:
			t.Stop()
			return
		}
	}
}

func (c *byteCounter) Update(bytes int64) {
	c.total.Add(bytes)
	c.EWMA.Update(bytes)
}

func (c *byteCounter) Total() int64 { return c.total.Load() }

func (c *byteCounter) Close() {
	close(c.stop)
}

// A no-op CurrentFiler
type noCurrentFiler struct{}

func (noCurrentFiler) CurrentFile(_ string) (protocol.FileInfo, bool) {
	return protocol.FileInfo{}, false
}

func CreateFileInfo(fi fs.FileInfo, name string, filesystem fs.Filesystem, scanOwnership bool, scanXattrs bool, xattrFilter XattrFilter) (protocol.FileInfo, error) {
	f := protocol.FileInfo{Name: name}
	if scanOwnership || scanXattrs {
		if plat, err := filesystem.PlatformData(name, scanOwnership, scanXattrs, xattrFilter); err == nil {
			f.Platform = plat
		} else {
			return protocol.FileInfo{}, fmt.Errorf("reading platform data: %w", err)
		}
	}

	if ct := fi.InodeChangeTime(); !ct.IsZero() {
		f.InodeChangeNs = ct.UnixNano()
	} else {
		f.InodeChangeNs = 0
	}

	if fi.IsSymlink() {
		f.Type = protocol.FileInfoTypeSymlink
		target, err := filesystem.ReadSymlink(name)
		if err != nil {
			return protocol.FileInfo{}, err
		}
		f.SymlinkTarget = target
		f.NoPermissions = true // Symlinks don't have permissions of their own
		return f, nil
	}

	f.Permissions = uint32(fi.Mode() & fs.ModePerm)
	f.ModifiedS = fi.ModTime().Unix()
	f.ModifiedNs = int32(fi.ModTime().Nanosecond())

	if fi.IsDir() {
		f.Type = protocol.FileInfoTypeDirectory
		return f, nil
	}

	f.Size = fi.Size()
	f.Type = protocol.FileInfoTypeFile

	return f, nil
}