Merge pull request #583 from ggtakec/master
Updated limit object size in s3fs man page
Commit 42cdcbc2dc
@@ -277,7 +277,7 @@ Most of the generic mount options described in 'man mount' are supported (ro, rw
 There are many FUSE specific mount options that can be specified. e.g. allow_other. See the FUSE README for the full set.
 .SH NOTES
 .TP
-Maximum file size=64GB (limited by s3fs, not Amazon).
+The maximum size of objects that s3fs can handle depends on Amazon S3. For example, up to 5 GB when using the single PUT API, and up to 5 TB when the Multipart Upload API is used.
 .TP
 If enabled via the "use_cache" option, s3fs automatically maintains a local cache of files in the folder specified by use_cache. Whenever s3fs needs to read or write a file on S3, it first downloads the entire file locally to the folder specified by use_cache and operates on it. When fuse_release() is called, s3fs will re-upload the file to S3 if it has been changed. s3fs uses md5 checksums to minimize downloads from S3.
 .TP
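
The first context line above notes that FUSE-specific mount options such as allow_other can be passed straight through s3fs. A minimal sketch of doing so (the bucket name and mount point are placeholders; allow_other is a standard FUSE option that lets users other than the mounter access the mount):

    # hypothetical bucket and mount point; when run as a non-root user,
    # allow_other also requires user_allow_other in /etc/fuse.conf
    s3fs mybucket /mnt/s3 -o allow_other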
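The replaced man-page line reflects Amazon S3's own limits rather than a limit in s3fs: a single PUT is capped at 5 GB, while the Multipart Upload API supports objects up to 5 TB. A minimal sketch of a mount that relies on multipart uploads (bucket name, mount point, and part size are placeholders; multipart_size is an s3fs option expressed in MB, assumed available in the s3fs build in use):

    # larger parts reduce the number of requests for big uploads;
    # disabling multipart (-o nomultipart) would reinstate the 5 GB ceiling
    s3fs mybucket /mnt/s3 -o multipart_size=64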
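The unchanged paragraph in the hunk describes the local-cache behavior enabled by "use_cache". A minimal sketch of turning it on (bucket name, mount point, and cache directory are placeholders):

    # files read from or written through the mount are staged in full
    # under /tmp/s3fs-cache; changed files are re-uploaded on release
    mkdir -p /tmp/s3fs-cache
    s3fs mybucket /mnt/s3 -o use_cache=/tmp/s3fs-cache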