Commit Graph

175 Commits

Author SHA1 Message Date
Takeshi Nakatani
3b1cc3b197
Merge pull request #933 from gaul/cache/remove-mirror-path
Remove mirror path when deleting cache
2019-01-27 16:15:49 +09:00
Andrew Gaul
a93e500b44 Remove mirror path when deleting cache
Fixes #827.
2019-01-25 18:10:03 -08:00
Andrew Gaul
84b421d6ef Prefer empty over size checks
Found and fixed via clang-tidy.
2019-01-23 11:30:28 -08:00
Takeshi Nakatani
010276ceab
Merge pull request #921 from gaul/clang-tidy/redundant-string-init
Remove redundant string initializations
2019-01-23 19:44:59 +09:00
Andrew Gaul
1fc25e8c3f Remove redundant string initializations
Found and fixed via clang-tidy.
2019-01-22 23:16:37 -08:00
Andrew Gaul
5f5da4b2cb Load tail range during overwrite
Previously s3fs experienced data loss when writing to the middle of a
file.  Corrupt files would have the expected data from 0..offset+size
but unexpected NUL bytes from offset+size..EOF.  References #808.
2019-01-22 22:02:40 -08:00
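A minimal sketch of the idea behind this fix (names are illustrative, not the actual s3fs classes): before writing into the middle of a partially loaded local copy, load the bytes from the end of the write to the known end of the object, so the later upload does not replace them with NUL bytes.

    // Hypothetical sketch, not the real FdEntity code: make sure the tail of the
    // object is present locally before an overwrite at [offset, offset+size).
    #include <sys/types.h>

    struct CacheFile {
        off_t object_size;                        // size of the object on S3
        bool  LoadRange(off_t start, off_t len)   // download [start, start+len) if missing
        {
            (void)start; (void)len;               // stubbed for the sketch
            return true;
        }
    };

    bool PrepareOverwrite(CacheFile& file, off_t offset, off_t size)
    {
        off_t write_end = offset + size;
        if (write_end < file.object_size) {
            // Without this, bytes in [write_end, object_size) remain NUL holes in
            // the local file and are uploaded as corruption.
            return file.LoadRange(write_end, file.object_size - write_end);
        }
        return true;
    }
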
Andrew Gaul
40ba3b44a1 Prefer abort over assert(false)
The compiler can remove the latter when compiled with NDEBUG which may
cause unintended control flow.
2019-01-20 12:30:27 -08:00
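A small illustration of the hazard (not taken from s3fs): with -DNDEBUG, assert() compiles away and a "can't happen" branch falls through, while abort() always terminates.

    #include <cassert>
    #include <cstdlib>

    int handle(int state)
    {
        switch (state) {
        case 0: return 1;
        case 1: return 2;
        default:
            // assert(false);   // removed under NDEBUG; control would fall out of the switch
            abort();            // always stops the program on an impossible state
        }
    }
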
Andrew Gaul
2c43b1e12b Store and retrieve file change time
This introduces a new header with the change time; existing objects
will report modification time.  Fixes #771.
2019-01-14 10:05:11 -08:00
Andrew Gaul
a2f8ac535e Address cppcheck 1.86 errors
Lifetime, shadowing, and unused variables.  Found via the Travis macOS
builder.
2018-12-20 14:56:31 -08:00
Takeshi Nakatani
f71a28f9b9
Merge pull request #714 from orozery/reduce_lock_contention
reduce lock contention on file open
2018-03-04 13:36:08 +09:00
Or Ozeri
8b657eee41 add disk space reservation 2018-02-28 19:20:23 +02:00
Takeshi Nakatani
c494e54320 Fixed cppcheck error on osx 2018-02-28 12:06:06 +00:00
Or Ozeri
0edf056e95 reduce lock contention on file open 2018-02-04 17:13:58 +02:00
Takeshi Nakatani
6b58220009
Merge pull request #697 from pwulff/master
Fixing race condition in FdEntity::GetStats
2017-12-17 15:46:48 +09:00
Paul Wulff
ee6abea956 Race condition in FdManager::Rename because no mutex is used. 2017-12-15 15:27:51 +01:00
Paul Wulff
cea7d44717 Fixing race condition in FdEntity::GetStats 2017-12-13 10:49:00 +01:00
Or Ozeri
0da87e75fe fix condition for parallel download 2017-12-04 16:07:33 +02:00
Takeshi Nakatani
026260e7a1 Improved use of temporary files 2017-11-23 09:18:11 +00:00
Andrew Gaul
0418e53b3c Reduce use of preprocessor
This provides type-safety and avoids token expansion side effects.
2017-11-18 22:40:06 -08:00
Andrew Gaul
40501a7a73 Lock FdEntity when mutating orgmeta
References #654.
2017-10-23 22:41:42 -07:00
Scott Talbert
20da0e4dd3 Fix intermittent upload failures on macOS
There were multiple problems with the FdManager::GetFreeDiskSpace() function
on macOS:
1) When calling statvfs(), f_frsize should be used instead of f_bsize when
converting available blocks to bytes.  This was causing the free space
calculation to be incorrect.
2) On macOS, fsblkcnt_t is a 32-bit integer.  Thus, when calculating available
disk space, there were frequently overflows.  This caused s3fs to incorrectly
determine that the cache location was out of space in the middle of a transfer
which caused uploads to fail.  Changing this to a uint64_t resolves the
problem.
2017-09-08 15:23:10 -04:00
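A hedged sketch of the corrected calculation (a simplified stand-in for FdManager::GetFreeDiskSpace(), not the exact s3fs code): use f_frsize and widen the operands to 64 bits before multiplying.

    #include <sys/statvfs.h>
    #include <stdint.h>

    uint64_t GetFreeDiskSpaceBytes(const char* path)
    {
        struct statvfs vfs;
        if (0 != statvfs(path, &vfs)) {
            return 0;
        }
        // Use f_frsize (fragment size) rather than f_bsize, and widen to 64 bits
        // so the product cannot overflow the 32-bit fsblkcnt_t used on macOS.
        return static_cast<uint64_t>(vfs.f_bavail) * static_cast<uint64_t>(vfs.f_frsize);
    }
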
Takeshi Nakatani
9d10a5aa70 Changed copyright year format for debian pkg 2017-05-07 11:24:17 +00:00
Takeshi Nakatani
28efff5986 Merge pull request #554 from orozery/cache_cleanup
cleanup cache directory when running out of disk space
2017-04-16 19:13:11 +09:00
Takeshi Nakatani
fef3fbc225 Added check_cache_dir_exist option(refixed #347) - #538 2017-04-02 08:10:16 +00:00
Or Ozeri
95578cad43 cleanup cache directory when running out of disk space 2017-04-02 10:22:12 +03:00
Takeshi Nakatani
7b307601b5 Merge pull request #511 from s3fs-fuse/issue#435
Fixed a bug about uploading NULL to some part of the file contents
2016-12-04 17:20:50 +09:00
Andrew Gaul
d375bca0d0 Correct typos 2016-11-19 15:57:41 -08:00
Takeshi Nakatani
81e209bdd1 Fixed issue#435 codes 2016-11-19 20:09:35 +00:00
Takeshi Nakatani
a688df813e Fixed a bug at read symlink 2016-10-11 13:32:08 +00:00
Takeshi Nakatani
716baada22 Testing patch codes for issue#435 2016-10-10 12:16:09 +00:00
Takeshi Nakatani
e8a8019a71 Add mirror file logic for removing cache file 2016-07-03 03:37:08 +00:00
Andrew Gaul
95cb5d201f Correct search and replace typo 2016-06-13 10:25:33 -07:00
Takeshi Nakatani
090c37a1c1 Fixed writing sparsed file - #375,#379,#394 2016-04-12 18:24:36 +00:00
Takeshi Nakatani
fff40bbff3 Revert "Fixed a bug about writing sparsed file - #375" 2016-04-13 01:24:24 +09:00
Takeshi Nakatani
ded4faf2e4 Fixed a bug about writing sparsed file - #375 2016-03-22 05:44:14 +00:00
Takeshi Nakatani
150b83f61e Remove stat file cache dir if del_cache is specified - #337 2016-02-06 18:59:13 +00:00
Takeshi Nakatani
c5a94cfc0c Check cache directory path and attributes - #347 2016-02-06 13:38:48 +00:00
haoran.yanghr
e3765ad497 Tune the code indent. 2016-01-28 11:16:06 +08:00
haoran.yanghr
dd9f3aed36 Fix the memory leak issue in fdcache. See issue #340 2016-01-28 11:11:53 +08:00
Takeshi Nakatani
759b44135a Check pthread portability in configure as an additional change for #307 2015-12-03 07:47:17 +00:00
Takeshi Nakatani
8b53e0d931 Merge pull request #307 from rockuw/master
Fix pthread portability problem
2015-12-03 16:35:30 +09:00
Tianlong Wu
3e655bad3b PTHREAD_MUTEX_RECURSIVE_NP is an enum, not a macro 2015-12-03 13:44:11 +08:00
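A sketch of the portable initialization implied by this fix (illustrative, not the exact s3fs change): the commit notes that PTHREAD_MUTEX_RECURSIVE_NP is an enumerator rather than a macro, so it cannot be detected with #ifdef; the standard name PTHREAD_MUTEX_RECURSIVE works across POSIX systems.

    #include <pthread.h>

    // Illustrative: initialize a recursive mutex with the portable constant.
    int init_recursive_mutex(pthread_mutex_t* mtx)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        int rc = pthread_mutex_init(mtx, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }
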
Takeshi Nakatani
5e97cb0f48 Changed ensure free disk space as additional change for #306 2015-12-03 05:40:26 +00:00
Tianlong Wu
f44b61c403 Fix pthread portability problem 2015-12-03 10:44:38 +08:00
Guy
6067af6ef1 Fix read concurrency to work in parallel count
When the prefetch size is limited to the multipart size, the parallel logic of the read flow never gets a chance to issue parallel GETs.
This fix increases read performance significantly against our own on-premise S3 solution.
2015-11-30 18:38:15 +02:00
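A rough sketch of the sizing logic implied above (names are illustrative, not actual s3fs variables): a read-ahead window of only one multipart chunk leaves a single GET per read, while a window of parallel_count chunks gives the parallel path several ranged GETs to issue at once.

    #include <sys/types.h>

    // Illustrative only: choose a prefetch window large enough that the parallel
    // download path has several parts to fetch concurrently.
    off_t PrefetchWindow(off_t multipart_size, int parallel_count)
    {
        // e.g. 10MB * 5 = 50MB -> five ranged GETs can run in parallel
        return multipart_size * static_cast<off_t>(parallel_count);
    }
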
Takeshi Nakatani
7b62de80f6 Fixed a bug about mtime - #299 2015-11-29 15:53:53 +00:00
Takeshi Nakatani
8dd234dd8f Fixed bugs about cppcheck error 2015-10-20 15:47:07 +00:00
Takeshi Nakatani
83d46ef8c6 Fixed bugs about an object larger than free disk space 2015-10-20 15:19:04 +00:00
Takeshi Nakatani
d102eb752d Supported an object which is larger than free disk space 2015-10-18 17:03:41 +00:00
Takeshi Nakatani
92e52dadd4 Changed and cleaned the logic for debug message. 2015-09-30 19:41:27 +00:00
Takeshi Nakatani
ce66430fac Added checking of cache dir perms at startup. 2015-08-23 03:57:34 +00:00
Takeshi Nakatani
756d1e5e81 Configure cppcheck #224 2015-08-12 15:04:16 +00:00
Andrew Gaul
8ee71caabb Address Coverity errors
Fixed an uninitialized member, misordered NULL check, resource leak,
and unconsumed return value.
2015-08-05 23:28:06 -07:00
Jamie Alessio
912bc58df0 Fixed a few small spelling issues. 2015-07-10 11:50:40 -07:00
Takeshi Nakatani
d06b6d7d41 Fixed a bug in the no-use_cache case related to the fix for #138 - issue#141 2015-03-08 16:41:14 +00:00
Takeshi Nakatani
114966e7c0 Fixed bugs about not turning use_cache off and trying to load to the end - issue#97 2015-03-04 08:48:37 +00:00
Ka-Hing Cheung
03d84a07d1 fix rename before close
nautilus does this when you drag and drop to overwrite a file:

1) create .goutputstream-XXXXXX to write to
2) fsync the fd for .goutputstream-XXXXXX
3) rename .goutputstream-XXXXXX to target file
4) close the fd for .goutputstream-XXXXXX

previously, doing this on s3fs would result in an empty target file,
because after the rename s3fs would not flush the content of
.goutputstream-XXXXXX to the target file.

this change moves the FdEntity from the old path to the new path
whenever a rename happens. On flush, s3fs now writes the correct
content to the rename target.
2015-01-12 15:05:54 -08:00
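A hedged sketch of the approach (FdEntity is a real s3fs class, but the table below is a simplified stand-in): on rename, re-key the open entity from the old path to the new one so a later flush targets the new name.

    #include <map>
    #include <string>

    // Simplified stand-in: keep open entities keyed by path and move the entry
    // when a file is renamed, so a flush after rename writes to the new object
    // instead of leaving it empty.
    class EntityTable {
    public:
        void Rename(const std::string& from, const std::string& to)
        {
            std::map<std::string, int>::iterator it = entities.find(from);
            if (it != entities.end()) {
                entities[to] = it->second;   // same open descriptor, new key
                entities.erase(it);
            }
        }
    private:
        std::map<std::string, int> entities;   // path -> file descriptor (illustrative)
    };
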
Takeshi Nakatani
7a7c7572ea Cleaned up codes for next packaging. 2014-09-07 15:08:27 +00:00
Takeshi Nakatani
20b1c207be fixed issue #39 2014-08-26 17:11:10 +00:00
Takeshi Nakatani
cd27f0aa54 Supported other crypto libraries (GnuTLS and NSS) and added configure options 2014-05-06 14:23:05 +00:00
Takeshi Nakatani
157612e7e7 Merge pull request #28 from andrewgaul/fdcache-init-signedness-warning
Address signedness warning in FdCache::Init
2014-04-05 01:02:13 +09:00
Andrew Gaul
d475e22774 Address signedness warning in FdCache::Init
This commit allows GCC 4.8 to compile s3fs without warnings.
2014-04-01 23:50:08 -07:00
Takeshi Nakatani
4762e53b5d Added multipart_size option for #16 2014-03-30 07:53:41 +00:00
ggtakec@gmail.com
40b9f0a408 Changes codes
1) Changed buffer size for file size
   Changed an internal buffer size from size_t to off_t.
   This was a bug in 32-bit OS environments.

git-svn-id: http://s3fs.googlecode.com/svn/trunk@496 df820570-a93a-0410-bd06-b72b767a4274
2013-11-17 08:50:41 +00:00
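The point of the type change, as a short illustration (assuming a 32-bit build with _FILE_OFFSET_BITS=64, so off_t is 64-bit while size_t is 32-bit):

    #include <sys/types.h>

    // Illustrative: a byte offset past 4GB wraps in a 32-bit size_t but is
    // preserved in a 64-bit off_t.
    off_t FiveGigabytes()
    {
        return static_cast<off_t>(5) * 1024 * 1024 * 1024;   // 5368709120
        // A 32-bit size_t would truncate this to 1073741824 (1GB).
    }
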
ggtakec@gmail.com
882f13020e Fixed bugs(overflow)
1) Overflow
   For files over 4GB, st_size (a member of the stat structure)
   overflowed. Fixed this bug, and similar bugs, in all sources.

2) Changed retrying request
   If s3fs gets a 500 HTTP status for a multipart request, s3fs
   retries the same request.

git-svn-id: http://s3fs.googlecode.com/svn/trunk@495 df820570-a93a-0410-bd06-b72b767a4274
2013-11-13 16:26:50 +00:00
ggtakec@gmail.com
42b74c9d2e Changes codes
1) Changed codes about memory leak
   To address memory leaks, the following changes were made:
   * calls the malloc_trim function
   * calls the NSS initialization function, and adds the configure
     option "--enable-nss-init".
     If libcurl is built with NSS, s3fs initializes NSS manually.
     This NSS initialization is enabled by the "--enable-nss-init"
     option at configure time; if this option is specified, you
     need the "nss-devel" package.
   * calls libxml2 initialization (xmlInitParser).
   * BIO functions leak memory, so calls CRYPTO_free_ex_data.
   * changes the cache structure.
   * changes the cache eviction logic to LRU.
   * sets alignment for allocated memory in the body data structure.
   * adds the SSL session to the share handle, and adds the nosscache option.
   * deletes unused allocated memory (bug).
   * changes the default parallel count of HEAD requests in readdir
     (500 -> 20)
   * fixes some bugs.

git-svn-id: http://s3fs.googlecode.com/svn/trunk@482 df820570-a93a-0410-bd06-b72b767a4274
2013-09-14 21:50:39 +00:00
ggtakec@gmail.com
d45f4707ea Fixed bugs( Issue 368 )
1) Fixed Issue 368
   Fixed a bug where s3fs could not update the local cache.

   (Issue 368) 1.73: Updating existing file on server 'a' does not change length of file on server 'b'

git-svn-id: http://s3fs.googlecode.com/svn/trunk@481 df820570-a93a-0410-bd06-b72b767a4274
2013-08-30 02:25:27 +00:00
ggtakec@gmail.com
b3682f87d2 Fixed bugs(Issue 363)
1) Fixed Issue 363
   Fixed a bug with the following cause:
   FUSE does not wait for the "release file descriptor" handler to
   finish before it runs (calls) the next operation (command).
   As a result, s3fs could not clear the stat cache entry for that file
   before the next operation, and used stale stat cache information.
   So s3fs now clears the stat cache first, in the release handler.

   Also found two pieces of bad code (which do not affect normal
   operation) in fdcache.cpp and fixed them.

   Issue 363: make check failing inconsistently

git-svn-id: http://s3fs.googlecode.com/svn/trunk@471 df820570-a93a-0410-bd06-b72b767a4274
2013-08-22 09:36:16 +00:00
ggtakec@gmail.com
ee01c91e02 Fixed bugs for compiling
1) Fixed bugs
   Fixes the bugs below (a format error and an undefined function).

   * 1.72 Will not compile on Ubuntu 12.04.2 (precise) i686 (Issue 360)
   * compile time error after running #make (Issue 361)

   I'll close these issues once I can confirm the problems are solved.

git-svn-id: http://s3fs.googlecode.com/svn/trunk@466 df820570-a93a-0410-bd06-b72b767a4274
2013-08-19 06:29:24 +00:00
ggtakec@gmail.com
d7689151ab Fixed Issue 229 and Changes codes
1) Set metadata "Content-Encoding" automatically (Issue 292)
   For this issue, s3fs adds a new option "ahbe_conf".

   The new option specifies a configuration file path; this file defines
   additional HTTP headers by file (object) extension.
   Thus you can specify any HTTP header for each object by extension.

   * ahbe_conf file format:
     -----------
     line                = [file suffix] HTTP-header [HTTP-header-values]
     file suffix         = file (object) suffix; if this field is empty,
                           it means "*" (all objects).
     HTTP-header         = additional HTTP header name
     HTTP-header-values  = additional HTTP header value
     -----------

   * Example:
     -----------
     .gz      Content-Encoding     gzip
     .Z       Content-Encoding     compress
              X-S3FS-MYHTTPHEAD    myvalue
     -----------
     A sample configuration file is uploaded in the "test" directory.

   If the ahbe_conf parameter is specified, s3fs loads its configuration
   and compares the extension (suffix) of each object (file) when uploading
   (PUT/POST) it. If the extension matches, s3fs adds/sends the specified
   HTTP header and value.

   With the sample configuration file, if an object (with extension
   ".gz") which already has a Content-Encoding HTTP header is renamed
   to a ".txt" extension, s3fs does not set Content-Encoding, because
   ".txt" does not match any line in the configuration file.
   So s3fs matches the extension on each PUT/POST action.

   * Please take care with "Content-Encoding".
   This new option allows setting ANY HTTP header by object extension.
   For example, you can specify "Content-Encoding" for ".gz"/etc.
   extensions in the configuration. But this means that S3 always returns
   "Content-Encoding: gzip" even when a client requests a different
   "Accept-Encoding:" header, which is usually not desirable.
   Please see RFC 2616.

2) Changes to the allow_other/uid/gid options for the mount point
   I reviewed the mount point permissions and the allow_other/uid/gid
   options, and found bugs in these.
   s3fs fixes these bugs and changes to the following specification:

   * s3fs only allows the uid (gid) option to be 0 (root) when the
     effective user is 0 (root).
   * A mount point (directory) must have permissions that allow
     access by the effective user/group.
   * If the allow_other option is specified, the mount point permission
     is set to 0777 (all users are allowed all access).
     Otherwise, the mount point is set to 0700 (only the effective
     user is allowed).
   * When the uid/gid option is specified, the mount point owner/group
     is set to the uid/gid option value.
     If uid/gid is not set, the effective user/group id is used.

   This change may also fix some issues (321, 338).

3) Changed the logic for Issue 229
   The chmod command returns -EIO when changing the mount point.
   This is correct: s3fs cannot change the owner/group/mtime of the
   mount point, but it still sent a request to change the bucket.
   This revision does not send the request and returns EIO as soon
   as possible.

git-svn-id: http://s3fs.googlecode.com/svn/trunk@465 df820570-a93a-0410-bd06-b72b767a4274
2013-08-16 19:24:01 +00:00
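A minimal sketch of the suffix matching described above (the ahbe_conf option and file format are from the commit; the code below is an illustration, not the actual s3fs implementation):

    #include <map>
    #include <string>
    #include <vector>

    // Illustrative ahbe_conf matcher: each rule maps a file suffix to one
    // additional HTTP header; an empty suffix matches every object.
    struct AhbeRule {
        std::string suffix;   // e.g. ".gz", empty means "*"
        std::string header;   // e.g. "Content-Encoding"
        std::string value;    // e.g. "gzip"
    };

    void AddExtraHeaders(const std::string& path,
                         const std::vector<AhbeRule>& rules,
                         std::map<std::string, std::string>& headers)
    {
        for (size_t i = 0; i < rules.size(); ++i) {
            const AhbeRule& r = rules[i];
            bool match = r.suffix.empty() ||
                         (path.size() >= r.suffix.size() &&
                          0 == path.compare(path.size() - r.suffix.size(),
                                            r.suffix.size(), r.suffix));
            if (match) {
                headers[r.header] = r.value;   // sent with the PUT/POST request
            }
        }
    }
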
ggtakec@gmail.com
02c3accb5b Changes codes
1) Changes macros for debugging
   Changed macros for debugging messages.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@461 df820570-a93a-0410-bd06-b72b767a4274
2013-08-10 15:29:39 +00:00
ggtakec@gmail.com
bf11a0444f Fixed a bug
1) Fixed a bug
    Since r458, s3fs uses stat files for its cache files, but forgot to remove
    these stat files when it removed the cache files.
    Fixed this bug.

git-svn-id: http://s3fs.googlecode.com/svn/trunk@459 df820570-a93a-0410-bd06-b72b767a4274
2013-07-29 08:20:19 +00:00
ggtakec@gmail.com
3274f58948 Changes codes for performance(part 3)
* Summary
   This revision includes a big change to the temporary file and local cache file
   handling. With this change, s3fs performs well when it opens/closes/syncs/
   reads an object.
   I made a big change to how the temporary file and the local cache file are
   handled in order to implement this.

* Detail
1) About the temporary file (local file)
   s3fs uses a temporary file on the local file system when it downloads/
   uploads/opens/seeks an object on S3.
   After this revision, s3fs calls the ftruncate() function when it creates the
   temporary file.
   This way s3fs can set the file to precisely the right length without downloading.
   (Notice - the ftruncate function is for XSI-compliant systems, so you may
    have a problem on non-XSI-compliant systems.)

   With this change, s3fs can download part of an object by requesting it with
   a "Range" HTTP header. It behaves like downloading by block units.
   The default block (part) size is 50MB, which is the default parallel request
   count (5) times the default multipart upload size (10MB).
   If you need to change this block size, you can do so with the new option
   "fd_page_size". This option can take any value from 1MB (1024 * 1024) upward.

   Note that fdcache.cpp (and fdcache.h) were changed a lot.

2) About the local cache
   Local cache files in the directory specified by the "use_cache" option do
   not always hold all of the object data.
   This is because s3fs uses the ftruncate function and reads (writes) each
   block unit of a temporary file.
   s3fs manages each block unit's status as "downloaded area" or not.
   For this status, s3fs makes a new temporary file in the cache directory
   specified by the "use_cache" option. These status files are in a directory
   named "<use_cache directory>/.<bucket_name>/".

   When s3fs opens a status file, it locks the file for exclusive control by
   calling the flock function. Take care that the status files cannot
   be placed on a network drive (like NFS).

   This revision also changes the file open mode: s3fs always opens a local
   cache file and each status file in writable mode.
   Last, this revision adds a new option "del_cache", which means that s3fs
   deletes all local cache files when it starts and exits.

3) Uploading
   When s3fs writes data to a file descriptor through a FUSE request, the old
   revision downloaded all of the object. The new revision does not; it
   downloads only a small partial area (some block units) including the area
   being written.
   When s3fs closes or flushes the file descriptor, it downloads the remaining
   area that has not yet been downloaded from the server, and then uploads all
   of the data.
   Revision r456 already added a parallel upload function, so this revision
   together with r456 and r457 is a very big change for performance.

4) Downloading
   With the changes to the temporary file and local cache file, when s3fs
   downloads an object it downloads only the required range (some block units).
   s3fs downloads these units with parallel GET requests, the same as for
   uploading. (The maximum parallel request count and each download size use
   the same parameters as for uploading.)

   In the new revision, when s3fs opens a file, it returns the file descriptor
   immediately, because it only opens (creates) the file descriptor without
   downloading any data. When s3fs reads data, it downloads only the block
   units covering the requested area.
   This is good for performance.

5) Changed option name
   The option "parallel_upload", which was added at r456, is renamed to
   "parallel_count", because this option is used not only for uploading
   objects but also for downloading them. (For a while, you can still use
   the old option name "parallel_upload" for compatibility.)

git-svn-id: http://s3fs.googlecode.com/svn/trunk@458 df820570-a93a-0410-bd06-b72b767a4274
2013-07-23 16:01:48 +00:00
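A hedged sketch of the two mechanisms described above (simplified, not the actual fdcache.cpp code; libcurl global initialization and the write callback are omitted): pre-size the temporary file with ftruncate() so it has the object's length without any download, then fetch a single block with an HTTP Range header.

    #include <curl/curl.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sstream>
    #include <string>

    bool FetchBlock(const std::string& url, const char* tmp_path,
                    off_t object_size, off_t block_start, off_t block_len)
    {
        int fd = open(tmp_path, O_RDWR | O_CREAT, 0600);
        if (fd < 0 || 0 != ftruncate(fd, object_size)) {   // correct length, no download
            if (fd >= 0) close(fd);
            return false;
        }
        close(fd);

        CURL* curl = curl_easy_init();
        if (!curl) return false;
        std::ostringstream range;
        range << block_start << "-" << (block_start + block_len - 1);
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_RANGE, range.str().c_str());   // "start-end"
        CURLcode rc = curl_easy_perform(curl);   // write callback omitted for brevity
        curl_easy_cleanup(curl);
        return CURLE_OK == rc;
    }
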
ggtakec@gmail.com
5a035a33f0 Changes codes
1) Changes FdCache class(cleanup codes)
    The FdCache class is for caching file descriptors.
    This class is modified to add a reference count for each file descriptor
    and to remove the pid for each path.

git-svn-id: http://s3fs.googlecode.com/svn/trunk@450 df820570-a93a-0410-bd06-b72b767a4274
2013-06-21 06:07:22 +00:00
ggtakec@gmail.com
c686a3b2c0 Fixed Issue 241
1) problems with fseek and s3fs (Issue 241)
   The problem is that s3fs returns the file stat (size) from the object even when the client already has an open file descriptor and has modified the file before flushing it.
   So the client adds bytes to the file, but the s3fs_getattr() function returns the original size from before the change.
   Changed the code so that s3fs returns the size from the opened file descriptor if the client already has the file open.
   * Changes s3fs.cpp
   * Adds fdcache.cpp fdcache.h

git-svn-id: http://s3fs.googlecode.com/svn/trunk@432 df820570-a93a-0410-bd06-b72b767a4274
2013-05-28 05:54:09 +00:00
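A minimal sketch of the approach (illustrative; the actual change added fdcache.cpp/fdcache.h): if the file is already open locally, report the size from the descriptor via fstat() instead of the cached object metadata.

    #include <sys/stat.h>
    #include <sys/types.h>

    // Illustrative: prefer the size of the locally opened (possibly modified)
    // file over the last-known object size from S3.
    off_t EffectiveSize(int opened_fd, off_t object_size_from_s3)
    {
        if (opened_fd >= 0) {
            struct stat st;
            if (0 == fstat(opened_fd, &st)) {
                return st.st_size;   // reflects bytes written but not yet flushed to S3
            }
        }
        return object_size_from_s3;
    }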