Commit Graph

1944 Commits

ggtakec@gmail.com
ad19ffa458 Changes codes
1) Adds new S3fsCurl class
   Added a new S3fsCurl class instead of calling curl functions directly.
   This class wraps the curl functions that s3fs uses for the AWS S3 API.

2) Changes codes for the new S3fsCurl class
    Changed and deleted the curl-related classes and structures in curl.cpp/curl.h.
    Changed the code in s3fs.cpp that calls the S3 API through curl.

3) Deletes YIKES macro
    Deleted the YIKES macro, because it is no longer used.

4) Changes a code path for performance
    s3fs performed poorly because it copied downloaded data one byte at a time.
    The copy loop was replaced with a bulk copy, which improves performance considerably.

5) Fixes a bug
    When s3fs renamed a file, it did not use the value specified by the servicepath option.
    Fixed this bug.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@451 df820570-a93a-0410-bd06-b72b767a4274
2013-07-05 02:28:31 +00:00
ggtakec@gmail.com
5a035a33f0 Changes codes
1) Changes FdCache class (cleanup)
    The FdCache class caches file descriptors.
    It now keeps a reference count for each file descriptor and
    no longer stores a pid for each path.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@450 df820570-a93a-0410-bd06-b72b767a4274
2013-06-21 06:07:22 +00:00
ggtakec@gmail.com
45950044f7 Changes codes
1) Changes codes for performance and request costs
    s3fs gets an object's attributes with a HEAD request.
    A directory object takes one of four forms:
      a) named "dir", with meta information
      b) named "dir", without meta information (but with files under it)
      c) named "dir/", with or without meta information
      d) named "dir_$folder$", with or without meta information
    The code now checks these directory object forms in order,
    so s3fs issues fewer requests when identifying an object.

    The previous version had a bug: it could not reliably recognize
    type b) when checking the object directly (though it could when
    finding the object by listing).
    This change fixes that bug.

2) Adds "multireq_max" option
    The new "multireq_max" option sets the maximum number of parallel
    requests used when listing objects.
    This change may resolve CURLE_COULDNT_CONNECT errors.
    Even when it does not, the option is useful for tuning
    performance for each environment.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@449 df820570-a93a-0410-bd06-b72b767a4274
2013-06-19 14:53:58 +00:00
ggtakec@gmail.com
b65c3e195e Fixed Issue 346
1) Not recognizing group permissions (Issue 346)
    Fixed the umask option so that it works correctly.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@448 df820570-a93a-0410-bd06-b72b767a4274
2013-06-18 01:17:32 +00:00
ggtakec@gmail.com
e9ec680f1e Summary of Changes(1.70 -> 1.71)
==========================
List of Changes
==========================
1) Adds use_sse option(Issue 226) - r438, r439
    Supports SSE (Server-Side Encryption) and adds the "use_sse" option. (Issue 226)

2) Fixes a bug(Issue 342) - r440
    Fixed the bug "Segmentation fault on connect on ppc64".
    The third parameter of curl_easy_getinfo() was wrong.

3) Fixes a bug(Issue 343, 235, 257, 265) - r441
    Fixed the bug "SSL connect error".
    r434 had not fixed it completely (the earlier fix was mistaken).

4) Fixes bugs and changes codes - r442, r444
    - Fixes a bug where temporary files were not removed on error.
    - Fixes the curl_share function prototype.
    - Changes one code path for the "-d" option.
    - Changes struct head_data members.
    - Fixes the calling position of the curl_global_init and curl_share_init functions.
    - Fixes an uninitialized struct tm.
    - Fixes access to a freed variable.
    - Changes code that used curl_slist directly to use the auto_curl_slist class.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@445 df820570-a93a-0410-bd06-b72b767a4274
2013-06-15 17:21:21 +00:00
ggtakec@gmail.com
b3fb37a87e Changed codes
1) Don't use curl_slist directly
    s3fs has an auto_curl_slist struct, but some functions in s3fs.cpp used curl_slist directly.
    Those functions now use auto_curl_slist.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@444 df820570-a93a-0410-bd06-b72b767a4274
2013-06-15 16:47:17 +00:00
ggtakec@gmail.com
f7e1a2a37f Fixed bugs
1) Fixed a bug (temporary file not removed)
    When s3fs got an error from fwrite in the multipart upload function,
    it did not remove the temporary file.

2) Fixed a bug (wrong function prototype)
    The prototype of the callback function for CURLSHOPT_UNLOCKFUNC
    was wrong.

3) Changed codes
    - In the my_curl_easy_perform function, the debug-message code is
      changed so that it does no work when the "-d" option is not
      specified.
    - Changed struct head_data's member variables, and the code that
      uses them.
    - Moved the calls to the curl_global_init and curl_share_init
      functions into main, because they must be called from the main thread.

4) Fixed a bug (use of uninitialized memory)
    The get_lastmodified function did not initialize its value
    (a struct tm).

5) Fixed a bug (access to freed memory)
    The readdir_multi_head function accessed a variable that had already been freed.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@442 df820570-a93a-0410-bd06-b72b767a4274
2013-06-15 15:29:08 +00:00
ggtakec@gmail.com
a285716f87 Fixed Issue 343 (Issue 235 , Issue 257 , Issue 265)
1) Fixes "SSL connect error" (curl error 35)
    Re-fixed the "SSL connect error"; r434 had not fixed it completely (the earlier fix was mistaken).



git-svn-id: http://s3fs.googlecode.com/svn/trunk@441 df820570-a93a-0410-bd06-b72b767a4274
2013-06-06 03:01:21 +00:00
ggtakec@gmail.com
b921844ff8 Fixed Issue 342
1) Segmentation fault on connect on ppc64 (Issue 342)
    The third parameter of curl_easy_getinfo() was wrong:
    it must point to a "long", but a "CURLcode" was passed.
    Fixes this issue.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@440 df820570-a93a-0410-bd06-b72b767a4274
2013-06-05 02:13:39 +00:00
ggtakec@gmail.com
8ef1c7aa3a Fixed Issue 226(+)
1) Patch adding support for SSE (Issue 226)
    Forgot to change the error-handling code for when both the use_sse and
    use_rrs options are specified.
    (r438 plus this fix completes Issue 226.)




git-svn-id: http://s3fs.googlecode.com/svn/trunk@439 df820570-a93a-0410-bd06-b72b767a4274
2013-06-04 06:12:22 +00:00
ggtakec@gmail.com
9833c7e589 Fixed Issue 226
1) Patch adding support for SSE (Issue 226)
    Supports SSE (Server-Side Encryption) and adds the "use_sse" option.
    * Specification
       When the "use_sse" option is set to "1", s3fs adds the
       "x-amz-server-side-encryption" header with the value "AES256",
       but only when uploading (writing) an object.
       For chmod/chown/chgrp/touch/mv commands, s3fs does not add
       this header and instead inherits the SSE mode from the original object.
    * Notice
       The "use_sse" option cannot be combined with "use_rrs", because S3
       returns a signature error.





git-svn-id: http://s3fs.googlecode.com/svn/trunk@438 df820570-a93a-0410-bd06-b72b767a4274
2013-06-04 06:04:04 +00:00
ggtakec@gmail.com
129a279fc5 Summary of Changes(1.69 -> 1.70)
==========================
List of Changes
==========================
1) Fixes bugs - r428
    * Fixes a bug where the object owner/group was set to a wrong id.
    * The permission of the mount point when the allow_other option is
       specified.
    * Fixes a bug about permissions:
       when a directory's permission is 0557, another user (neither the
       owner nor in the group) got a permission error when creating a
       file or directory in that directory.
    * Fixes a bug( Issue 340 )
       Fixes a compile error about "blkcnt_t".

2) Fixes a bug( Issue 304 ) - r429
    s3fs now always uses its own DNS cache, and adds the "nodnscache"
    option. If "nodnscache" is specified, s3fs does not use its own DNS
    cache, as before; s3fs then keeps DNS entries for 60 seconds, libcurl's default.

3) Fixes a bug( Issue 235 ) - r430
    Fixes a CURLE_COULDNT_CONNECT error when s3fs reads
    object header information.
    * At most 500 requests per curl_multi request; s3fs loops to call
       curl_multi.
    * Requests that fail with CURLE_COULDNT_CONNECT are retried.

4) Fixes a bug - r431
    Fixed a bug where all multi head requests failed when mounting bucket+path.

5) Fixes a bug( Issue 241 ) - r432
    s3fs now returns the size from the opened file descriptor if a
    client has already opened the file.

6) Fixes a bug - r433
    The package tarball included doc/Makefile, which is unnecessary in the
    tarball.

7) Fixes bugs( Issue 235 , Issue 257, Issue 265 ) - r434
    Fixed the "SSL connect error", so s3fs can connect over SSL without problems.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@435 df820570-a93a-0410-bd06-b72b767a4274
2013-06-01 15:55:40 +00:00
ggtakec@gmail.com
1758bc59f4 Fixed Issue 235, Issue 257, Issue 265
1) Fixes "SSL connect error" (curl error 35)
    Fixed the "SSL connect error", so s3fs can connect over SSL without problems.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@434 df820570-a93a-0410-bd06-b72b767a4274
2013-06-01 15:31:31 +00:00
ggtakec@gmail.com
29326b048e Fixed a bug(doc/Makefile in tarball)
1) Fixes a bug
    The package tarball included doc/Makefile, which is unnecessary in the tarball.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@433 df820570-a93a-0410-bd06-b72b767a4274
2013-05-31 02:40:05 +00:00
ggtakec@gmail.com
c686a3b2c0 Fixed Issue 241
1) Problems with fseek and s3fs (Issue 241)
   The problem: s3fs returned the object's stat (size) even when a client already had the file open and had modified it before the fd was flushed.
   So after the client appended bytes to the file, s3fs_getattr() still returned the original size.
   s3fs now returns the size from the opened file descriptor if a client has already opened the file.
   * Changes s3fs.cpp
   * Adds fdcache.cpp fdcache.h



git-svn-id: http://s3fs.googlecode.com/svn/trunk@432 df820570-a93a-0410-bd06-b72b767a4274
2013-05-28 05:54:09 +00:00
ggtakec@gmail.com
2d51439dcb Fixed a bug(failed all multi head request when mounting bucket+path)
1) Fixes a bug
    When the mount point was specified with a sub-directory (mounting
    with "bucket:/path"), every curl_multi head request issued from the
    s3fs_readdir() function failed internally.
    The reason is that the head curl_multi requests did not include the
    mount path.
    This bug is fixed.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@431 df820570-a93a-0410-bd06-b72b767a4274
2013-05-27 02:22:47 +00:00
ggtakec@gmail.com
7aa11f389a Fixed Issue 235
1) Problems using encrypted connection to S3 (Issue 235)
    In the s3fs_readdir() function, s3fs got CURLE_COULDNT_CONNECT errors while reading object header information.
    The likely cause is too many requests queued in one curl_multi request.
    The s3fs code is changed as follows:
    * at most 500 requests per curl_multi request; s3fs loops to call curl_multi.
    * requests that fail with CURLE_COULDNT_CONNECT are retried.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@430 df820570-a93a-0410-bd06-b72b767a4274
2013-05-27 01:15:48 +00:00
ggtakec@gmail.com
7477224d02 Fixed Issue 304
1) s3fs should cache DNS lookups? (Issue 304)
   s3fs now always uses its own DNS cache, and adds the "nodnscache" option.
   If "nodnscache" is specified, s3fs does not use its own DNS cache, as before;
   s3fs then keeps DNS entries for 60 seconds, libcurl's default.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@429 df820570-a93a-0410-bd06-b72b767a4274
2013-05-22 08:49:23 +00:00
ggtakec@gmail.com
be5fa78032 Fixed Issue 340 and some bugs.
1) Fixes a bug where the object owner/group was set to a wrong id.
    When running a chown (chgrp) command without a group (owner), s3fs set
    the wrong id (-1) for the group (owner) id.
    Fixes this bug.

2) The permission of the mount point when the allow_other option is specified.
    When the allow_other option is specified, s3fs forces executable
    permission on the mount point directory (mode | 0111).

3) Fixes a bug about permissions
    For example, when a directory's permission is 0557, another user (neither
    the owner nor in the group) got a permission error when creating a file
    or directory in that directory.
    Fixes this bug.

4) Compile error: blkcnt_t (Issue 340)
    Fixes the bug "Compile error: blkcnt_t" (Issue 340).



git-svn-id: http://s3fs.googlecode.com/svn/trunk@428 df820570-a93a-0410-bd06-b72b767a4274
2013-05-21 05:29:07 +00:00
ggtakec@gmail.com
cd21f567a1 Summary of Changes(1.68 -> 1.69)
==========================
List of Changes
==========================
1) Fixes a bug - r418, r419
   Fixed a bug where s3fs failed to get stats when the max_stat_cache_size option was set to 0.

2) Fixed a bug( Issue 291) - r420
   Fixes ( Issue 291 ) "Man file has wrong permissions for passwd file",
   and changes the man page accordingly.
   Checks passwd file permissions strictly.
   Fixes a bug where s3fs continued to run after finding invalid passwd
   file permissions.

3) Added enable_noobj_cache option for non-existent objects - r420, r421
   Adds the enable_noobj_cache option (disabled by default) for performance.
   When this option is specified, s3fs records in the stat cache that an
   object (file or directory) does not exist.

4) Fixed a bug( Issue 240 ) - r421
   Fixes ( Issue 240 ) "Cannot Mount Path in Bucket".
   Changes the man page.

5) Supported s3sync'ed objects( Issue 31 ) - r422
   Supports the HTTP headers made by s3sync.
   The supported headers are x-amz-meta-owner,
   x-amz-meta-permissions and x-amz-meta-group.

6) Added enable_content_md5 option - r423
   Adds the enable_content_md5 option (disabled by default).
   If "enable_content_md5" is specified, s3fs puts objects with a
   "Content-MD5" header when sending small objects (under 20MB).

7) Supported uid/gid options - r424
   Supports the uid/gid mount (fuse) options.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@425 df820570-a93a-0410-bd06-b72b767a4274
2013-05-16 06:46:35 +00:00
ggtakec@gmail.com
fb30405aef Supported uid/gid options
1) Supports uid/gid options
    The fuse (mount) options "uid" and "gid" are now supported.

*) Fixes some issues
    With these options supported, several permission and access problems are solved,
    e.g. "File permissions 000" (Issue 337).




git-svn-id: http://s3fs.googlecode.com/svn/trunk@424 df820570-a93a-0410-bd06-b72b767a4274
2013-05-16 04:53:30 +00:00
ggtakec@gmail.com
9da497af45 Added enable_content_md5 option
1) Adds enable_content_md5 option
   When s3fs uploads a large object (over 20MB), it always checks the ETag (MD5) in each multipart response.
   But for small objects, s3fs did not check the MD5.
   This new option enables an MD5 check for uploaded objects:
   if "enable_content_md5" is specified, s3fs puts the object with a "Content-MD5" header.

   The MD5 check is off by default because it increases the user's CPU usage somewhat.
   (The default may change in the future.)



git-svn-id: http://s3fs.googlecode.com/svn/trunk@423 df820570-a93a-0410-bd06-b72b767a4274
2013-05-16 02:02:55 +00:00
ggtakec@gmail.com
715b837a2b Fixed Issue 31 and Cleanup codes
1) s3sync'ed files not supported (Issue 31)
    Supports the HTTP headers made by s3sync.
    The newly supported headers are x-amz-meta-owner,
    x-amz-meta-permissions and x-amz-meta-group.
    s3fs reads and understands these headers, but it
    gives priority to its own headers over them.

2) Cleans up codes
    Cleans up some code related to Issue 31.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@422 df820570-a93a-0410-bd06-b72b767a4274
2013-05-09 08:35:17 +00:00
ggtakec@gmail.com
c862ee40ea Fixed issue 240 and some bugs.
1) Cannot Mount Path in Bucket (Issue 240)
    Changes the man page for this issue ("bucket[:path]" -> "bucket[:/path]").
    s3fs also did not work with a mount path; fixed it.

2) Fixes another bug about renaming a directory
    Fixes a bug introduced by r420 that made renaming a directory fail.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@421 df820570-a93a-0410-bd06-b72b767a4274
2013-05-09 04:25:18 +00:00
ggtakec@gmail.com
6b78bfdf4b Fixed issue 291, and Adds "disable_noobj_cache" option.
1) Man file has wrong permissions for passwd file (Issue 291)
    Fixes the man page's wrong permissions for the passwd file.

2) Fixes a bug and strictly checks passwd file permissions.
    * Fixes a bug in checking passwd file permissions.
       The bug was that s3fs continued to run after finding invalid
       passwd file permissions.
    * Checks the passwd file strictly.
       Before this revision, s3fs allowed executable permission on a
       passwd file, and allowed group-writable permission on a passwd
       file other than "/etc/passwd-s3fs".
       The new s3fs checks permissions strictly: /etc/passwd-s3fs may be
       owner readable/writable and group readable, and any passwd file
       other than "/etc/passwd-s3fs" may only be owner readable/writable.

3) Adds disable_noobj_cache option for non-existent objects.
    s3fs v1.68 always had to check whether a file (or sub-directory) exists
    under an object (path) when running a command, because s3fs recognizes
    directories that do not exist as objects but have files or
    sub-directories under them.
    This increases ListBucket requests and hurts performance.
    For performance, if the disable_noobj_cache option is specified, s3fs
    records in the stat cache that an object (file or directory) does not exist.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@420 df820570-a93a-0410-bd06-b72b767a4274
2013-05-08 07:51:22 +00:00
ggtakec@gmail.com
5de37a6807 Cleanup codes
* Cleans up code after committing r418.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@419 df820570-a93a-0410-bd06-b72b767a4274
2013-05-07 02:35:30 +00:00
ggtakec@gmail.com
6ba609eb66 Fixed a bug(max_stat_cache_size=0)
1) Fixes a bug
    When the option max_stat_cache_size=0 is specified, s3fs fails to get file attributes.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@418 df820570-a93a-0410-bd06-b72b767a4274
2013-05-03 13:33:49 +00:00
ggtakec@gmail.com
ae2d1eda84 Summary of Changes(1.67 -> 1.68)
==========================
List of Changes
==========================
1) Fixes a bug(Issue 320) - r408, r411, r412
   Fixes (Issue 320) "Spurious I/O errors"
   (http://code.google.com/p/s3fs/issues/detail?id=320)

   When s3fs got an error in the upload loop, it tried to re-upload without
   seeking the fd. This was the bug behind this issue.
   Please see detail in r408, r411 and r412.

2) Fixes a bug(Issue 293) - r409
   Fixes (Issue 293) "Command line argument bucket: causes segv"
   (http://code.google.com/p/s3fs/issues/detail?id=293)

   If the specified bucket name ends with ":", s3fs crashes.
   Please see detail in r409.

3) Supports a option(Issue 265) - r410
   Supports (Issue 265) "Unable to mount to a non empty directory"
   (http://code.google.com/p/s3fs/issues/detail?id=265)

   Supported "nonempty" fuse/mount option.
   Please see detail in r410.

4) Supports other S3 clients(Issue 27) - r413, r414
   Supports (Issue 27) "Compatability with other S3FS clients"

   *** "_$folder$" dir object
    Supports the directory objects made by s3fox, whose names have a "_$folder$"
    suffix. On s3fs, such a directory object is listed under its normal directory
    name without "_$folder$".
    Please be careful when you change object attributes (rename, chmod, chown,
    touch), because s3fs remakes the directory object without the "_$folder$"
    suffix. This means the object is re-made by s3fs.

   *** no dir object
    Supports directories that do not exist as objects.
    If there is an object whose name contains "/" (e.g. "<bucket>/dir/file"), the
    directory ("dir") may not be an object itself.
    For example, you can upload an object named "s3://bucket/dir/file" with
    s3cmd (or another S3 client like it).
    In this case, "dir" is not an object in the bucket.
    This s3fs version understands this case.
    Please be careful when you change object attributes (rename, chmod, chown,
    touch), because s3fs makes a new directory object.

   Please see detail in r413 and r414.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@415 df820570-a93a-0410-bd06-b72b767a4274
2013-05-01 03:45:42 +00:00
ggtakec@gmail.com
b973eaae44 Changed codes for issue: 27(+)
1) Feature Request: Compatability with other S3FS clients(Issue: 27)
    Reworks the source code.

2) For other S3 clients
    Supports directories that do not exist as objects.
    If there is an object whose name contains "/" (e.g. "<bucket>/dir/file"), the directory ("dir") may not be an object itself.
    For example, you can upload an object named "s3://bucket/dir/file" with s3cmd.
    In this case, "dir" is not an object in the bucket.
    This s3fs code understands this case.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@414 df820570-a93a-0410-bd06-b72b767a4274
2013-04-29 14:31:10 +00:00
ggtakec@gmail.com
36447a23eb Fixed issue: 27
1) Feature Request: Compatability with other S3FS clients(Issue: 27)
    Supports the directory objects made by s3fox, whose names have a "_$folder$" suffix.
    On s3fs, such a directory object is listed under its normal directory name without "_$folder$".
    Be careful when you change object attributes (rename, chmod, chown, touch), because s3fs remakes the directory object (the one with the "_$folder$" suffix).
    After changing the object's attributes, the object name no longer has the "_$folder$" suffix.
    This means the object has been re-made by s3fs.

2) Other
    Fixes bugs found while fixing this issue.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@413 df820570-a93a-0410-bd06-b72b767a4274
2013-04-20 19:17:28 +00:00
ggtakec@gmail.com
eaf43e6f59 Fixed issue: 320(++)
1) Changes the fread/fwrite calling logic (Issue: 320)
    In conjunction with this issue, the opened file descriptor is rewound after reading/writing.
    The put_local_fd() and get_local_fd() functions now always return a rewound fd.





git-svn-id: http://s3fs.googlecode.com/svn/trunk@412 df820570-a93a-0410-bd06-b72b767a4274
2013-04-17 06:39:02 +00:00
ggtakec@gmail.com
a92a4c0a4f Fixed issue: 320
1) Changes the fread/fwrite calling logic (Issue: 320)
    When the rsync command runs, the s3fs functions are called in the order s3fs_create, s3fs_truncate, s3fs_flush.
    After s3fs_truncate uploaded the file, s3fs_flush uploaded it again without rewinding the fd.
    That was this issue's bug: s3fs_flush read EOF and raised an error.
    The code now calls lseek to seek the fd to the head of the file before fread/fwrite.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@411 df820570-a93a-0410-bd06-b72b767a4274
2013-04-17 04:50:13 +00:00
ggtakec@gmail.com
9641d07806 Fixed issue: 265
1) Unable to mount to a non-empty directory (Issue: 265)
    Supported the "nonempty" fuse option.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@410 df820570-a93a-0410-bd06-b72b767a4274
2013-04-16 08:05:24 +00:00
ggtakec@gmail.com
cec9bc5f3a Fixed issue: 293
1) Command line argument bucket: causes segv (Issue: 293)
    If the specified bucket name ends with ":", s3fs crashes (segv) at run time.
    This bug is fixed.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@409 df820570-a93a-0410-bd06-b72b767a4274
2013-04-15 19:47:09 +00:00
ggtakec@gmail.com
a632af8f90 Fixed issue: 320
1) Changes the fread/fwrite calling logic (Issue: 320)
    The fread/fwrite calls are now made in a loop:
    if fread/fwrite returns 0 bytes without an error, s3fs continues (retries) the read/write.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@408 df820570-a93a-0410-bd06-b72b767a4274
2013-04-15 16:00:43 +00:00
ggtakec@gmail.com
9310c0b653 Summary of Changes(1.66 -> 1.67)
==========================
List of Changes
==========================
1) Fixes a bug(r403)
    Fixes (Issue: 326) "segfault using s3fs 1.6.5".
    (http://code.google.com/p/s3fs/issues/detail?id=326)

    The my_curl_easy_perform() function was not clearing the buffer (struct BodyStruct body) before retrying the request.
    The "struct BodyStruct" is also changed to "class BodyData".
    The new class is the same as BodyStruct, but it manages its memory automatically.
    An argument was also added to my_curl_easy_perform():
    the function needs buffer pointers, but it took only the body buffer,
    so a pointer for the header buffer was added.

2) Fixes a bug(r404)
    Fixes (Issue: 328) "Problem with allow_other option".
    (http://code.google.com/p/s3fs/issues/detail?id=328)

    The return value of the get_username() function was wrong (NULL) when the user id is not in the passwd file.

3) Fixes a memory leak(r403)
  There was a memory leak in the get_object_name() function.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@405 df820570-a93a-0410-bd06-b72b767a4274
2013-04-13 17:08:00 +00:00
ggtakec@gmail.com
edadbe86d8 Fixed issue: 328
1) Changes for fixing a bug (Issue 328)
  The return value of the get_username() function was wrong (NULL) when the user id is not in the passwd file.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@404 df820570-a93a-0410-bd06-b72b767a4274
2013-04-11 02:37:25 +00:00
ggtakec@gmail.com
f002cdb9b2 Fixed issue: 326
1) Changes for fixing a bug (Issue 326)
  The my_curl_easy_perform() function was not clearing the buffer (struct BodyStruct body) before retrying the request.

2) Other changes
  In conjunction with this issue, the "struct BodyStruct" is changed to "class BodyData".
  The new class is the same as BodyStruct, but it manages its memory automatically.
  An argument was also added to my_curl_easy_perform():
  the function needs buffer pointers, but it took only the body buffer,
  so a pointer for the header buffer was added.

3) Fixed a memory leak
  There was a memory leak in the get_object_name() function.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@403 df820570-a93a-0410-bd06-b72b767a4274
2013-04-11 01:49:00 +00:00
ggtakec@gmail.com
8bd1483374 Summary of Changes(1.65 -> 1.66)
==========================
List of Changes
==========================
1) Fixes bugs
    Fixes Issue 321: "no write permission for non-root user".
    (http://code.google.com/p/s3fs/issues/detail?id=321)
    Fixes a bug where s3fs did not set the uid/gid headers when making a symlink.

2) Cleans up code.
    Adds a common function which converts the Last-Modified header to utime.
    Deletes useless code and tidies the rest.

3) xmlns
    s3fs now decides the xmlns URL automatically.
    The noxmlns option is therefore no longer needed, but it is kept.

4) Changes cache for performance
    Changes the stat cache so that it accumulates stat information and some headers.
    With those headers in the cache, s3fs does not need to call the curl_get_headers function.
    After this change, one cache entry grows to about 500 bytes from about 144 bytes.

    Adds one condition for evicting a cache entry: the object's ETag is checked.
    This works well for noticing changes to objects.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@400 df820570-a93a-0410-bd06-b72b767a4274
2013-04-06 17:39:22 +00:00
ggtakec@gmail.com
a35cdc73b7 Summary of Changes(1.64 -> 1.65)
1) Fixed a bug(r397)
    After deleting a directory object, s3fs could not make a directory with the same name.
    It was a bug in the cache logic for compatibility with other S3 clients.

2) Cleaned up source codes(r396)
    No logic changes; only the layout of functions and variables across files changed.
    Adds s3fs_util.cpp/s3fs_util.h/common.h




git-svn-id: http://s3fs.googlecode.com/svn/trunk@397 df820570-a93a-0410-bd06-b72b767a4274
2013-03-30 14:03:06 +00:00
ggtakec@gmail.com
953aedd7ad Cleaned up source codes
No logic changes; only the layout of functions and variables across files changed.
    Adds s3fs_util.cpp/s3fs_util.h/common.h



git-svn-id: http://s3fs.googlecode.com/svn/trunk@396 df820570-a93a-0410-bd06-b72b767a4274
2013-03-30 13:37:14 +00:00
ggtakec@gmail.com
00b735beaa For v1.64
Please see r392 log message(http://code.google.com/p/s3fs/source/detail?r=392)



git-svn-id: http://s3fs.googlecode.com/svn/trunk@393 df820570-a93a-0410-bd06-b72b767a4274
2013-03-23 14:16:18 +00:00
ggtakec@gmail.com
9af16df61e Summary of Changes(1.63 -> 1.64)
* This new version was made to fix a big issue with directory objects.
  Please be careful and review the new s3fs.

==========================
List of Changes
==========================
1) Fixed bugs
    Fixed some memory leaks and an un-freed curl handle.
    Fixed code with a bug that had not yet been observed.
    Fixed a bug where s3fs could not update an object's mtime while it held an open file descriptor.

    Please let us know when you find a new memory-leak bug.

2) Changed codes
    Changed the code of s3fs_readdir() and list_bucket() etc.
    Changed the get_realpath() function to return std::string.
    Changed the exit() handling: because exit() was called directly from many fuse callback functions, those functions now call fuse_exit() and return an error.
    Changed the code so that the character case of the "x-amz-meta" response headers is ignored.

3) Added an option
    Added the norenameapi option for storage compatible with S3 but without the copy API.
    This option is a subset of the nocopyapi option.
    Please read the man page or call with the --help option.

4) Object for directory
    This is a very big and important change.

    The directory object is now "dir/" instead of "dir", for compatibility with other S3 client applications.
    This version also understands directory objects made by the old version.
    If the new s3fs changes the attributes, owner/group or mtime of a directory object, it automatically converts the object from the old name ("dir") to the new one ("dir/").
    If you need to change an old object name ("dir") to a new one ("dir/") manually, you can use the shell script (mergedir.sh) in the test directory.

    * About the directory object name
        AWS S3 allows an object name to be either "dir" or "dir/".
        s3fs before this version understood only "dir" as a directory object name; it did not understand the "dir/" object name.
        The new version understands both the "dir" and "dir/" object names.
        The s3fs user needs to be aware of the special situations mentioned below.

        The new version deletes the old "dir" object and makes a new "dir/" object when the user operates on the directory object to change its permission, owner/group or mtime.
        This happens in the background and automatically.

        If you need to merge manually, you can use the mergedir.sh shell script in the test directory.
        This script runs chmod/chown/touch commands after finding a directory.
        When another S3 client application has made a directory object ("dir/") without the meta information that s3fs needs, this script can add that meta information to the directory object.
        If this script is insufficient for you, you can read and modify the code yourself.
        Please use the shell script carefully, because it changes the objects.
        If you find a bug in this script, please let me know.

    * Details
    ** The directory object made by old version
        A directory object made by an old version is not understood by other S3 client applications.
        The new s3fs version was updated to keep compatibility with other clients.
        You can use mergedir.sh in the test directory for merging from old directory objects("dir") to new ones("dir/").
        After mergedir.sh runs, the directory object name is changed from "dir" to "dir/", and the changed "dir/" object is understood by other S3 clients.
        The script runs chmod/chown/chgrp/touch/etc. commands against the old directory object("dir"), and the new s3fs then merges that directory automatically.

        If you need to change a directory object from old to new manually, you can do so by running any command which changes the directory attributes(mode/owner/group/mtime).
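
        The manual conversion above can be sketched in a few lines: re-applying each directory's existing mode and mtime is enough to make the new s3fs rewrite an old "dir" object as "dir/". This is only an illustration of what mergedir.sh does on a mounted file system (the function name `remerge` is made up), not the script itself:

```python
import os

def remerge(mountpoint):
    """Re-apply each directory's own mode and mtime. On an s3fs mount
    running the new version, these no-op attribute changes trigger the
    automatic rewrite of old "dir" objects into "dir/" objects."""
    for root, dirs, _files in os.walk(mountpoint):
        for d in dirs:
            path = os.path.join(root, d)
            st = os.stat(path)
            os.chmod(path, st.st_mode & 0o7777)          # write the same mode back
            os.utime(path, (st.st_atime, st.st_mtime))   # write the same times back
```

        On a local directory the calls are harmless no-ops, which is why the script is safe to re-run.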

    ** The directory object made by new version
        The directory object name made by the new version is "dir/".
        Because the name ends with "/", other S3 client applications understand it as a directory.
        I tested the new directory objects with s3cmd/tntDrive/DragonDisk/Gladinet as other S3 clients, and the result was good compatibility.
        You need to know that there are small compatibility problems caused by differences in specifications between clients.
        And you need to be careful: old s3fs versions cannot understand directory objects made by the new s3fs.
        You should update every s3fs which accesses the same bucket.

    ** The directory object made by other S3 client application
        When the s3fs determines an object as a directory, the s3fs makes and uses special meta information in the "x-amz-meta-***" and "Content-Type" HTTP headers.
        The s3fs sets and uses the following HTTP headers for the directory object:
            Content-Type: application/x-directory
            x-amz-meta-mode: <mode>
            x-amz-meta-uid: <UID>
            x-amz-meta-gid: <GID>
            x-amz-meta-mtime: <unix time of modified file>

        Other S3 client applications build directory objects without the attributes which are needed by the s3fs.
        When the "ls" command is run on the s3fs-fuse file system which has directories/files made by other S3 clients, the result is shown below.
            d---------  1 root     root           0 Feb 27 11:21 dir
            ----------  1 root     root        1024 Mar 14 02:15 file
        Because the objects don't have meta information("x-amz-meta-mode"), the mode is treated as 0000.
        In this case, the directory object is shown with only "d", because the s3fs determines an object as a directory when its name ends with "/" or it has a "Content-Type: application/x-directory" header.
        (The s3fs sets "Content-Type: application/x-directory" on the directory object, but other S3 clients set "binary/octet-stream".)
        As a result, nobody but root is allowed to operate on the object.

        The owner and group are "root"(UID=0) because the object doesn't have "x-amz-meta-uid/gid".
        If the object doesn't have "x-amz-meta-mtime", the s3fs uses the "Last-Modified" HTTP header,
        so the object's mtime is the "Last-Modified" value. (This logic is the same as in the old version.)
        As already explained, if you need to change the object attributes, you can do it manually or with mergedir.sh.
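
        The fallback rules above (missing mode means 0000, missing uid/gid means root, missing mtime falls back to "Last-Modified") can be sketched like this; `headers_to_stat` is a hypothetical helper for illustration, not an s3fs function:

```python
from email.utils import parsedate_to_datetime

def headers_to_stat(headers):
    """Derive stat-like fields from an object's HTTP headers, using the
    header names and fallbacks described in this commit message."""
    is_dir = headers.get("Content-Type") == "application/x-directory"
    mode = int(headers.get("x-amz-meta-mode", "0"))   # no meta -> mode 0000
    uid = int(headers.get("x-amz-meta-uid", "0"))     # no meta -> root
    gid = int(headers.get("x-amz-meta-gid", "0"))
    if "x-amz-meta-mtime" in headers:
        mtime = int(headers["x-amz-meta-mtime"])
    else:  # fall back to the Last-Modified response header
        mtime = int(parsedate_to_datetime(headers["Last-Modified"]).timestamp())
    return {"dir": is_dir, "mode": mode, "uid": uid, "gid": gid, "mtime": mtime}
```

        With only "Content-Type" and "Last-Modified" present, the result is exactly the "d---------  root root" listing shown above.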

    * Example of the compatibility with s3cmd etc
    ** Case A) Only "dir/file" object
        In one case, there is only a "dir/file" object without a "dir/" object; such an object is made by s3cmd or etc.
        In this case, the response of the REST API(list bucket) with the "delimiter=/" parameter has "CommonPrefixes", and "dir/" is listed in "CommonPrefixes/Prefix", but the "dir/" object is not a real object.
        The s3fs needs to determine this object as a directory, but there is no real directory object("dir" or "dir/").
        Neither the new s3fs nor the old one understands this "dir/" in "CommonPrefixes", because the s3fs fails to get meta information from "dir" or "dir/".
        In this case, the result of the "ls" command is shown below.
            ??????????? ? ?        ?        ?            ? dir
        This "dir" cannot be operated on by anyone or any process, because the s3fs cannot determine this object's permissions.
        And the "dir/file" object cannot be shown or operated on either.
        Some other S3 clients(tntDrive/Gladinet/etc.) cannot understand this object, the same as the s3fs.

        If you need to operate on the "dir/file" object, you need to make the "dir/" object a real directory.
        To make the "dir/" directory object, do the following.
        Because the "dir/" prefix already exists without a real object behind it, you cannot make the "dir/" directory directly.
        (s3cmd does not make a "dir/" object because the object name ends with "/".)
        You should make a directory with another name(ex: "dir2/") and move the "dir/file" objects into the new directory.
        Last, you can rename the directory from "dir2/" to "dir/".
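
        The situation above can be detected from the list-bucket response itself: a prefix in "CommonPrefixes" with no backing "dir" or "dir/" object is one of these phantom directories. A minimal sketch (the helper `split_listing` is hypothetical, not s3fs code):

```python
import xml.etree.ElementTree as ET

NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def split_listing(xml_text):
    """Split a list-bucket (delimiter=/) XML response into real object keys
    and prefixes with no backing object -- the phantom directories of Case A."""
    root = ET.fromstring(xml_text)
    keys = {c.findtext(NS + "Key") for c in root.iter(NS + "Contents")}
    prefixes = {p.findtext(NS + "Prefix") for p in root.iter(NS + "CommonPrefixes")}
    phantom = {p for p in prefixes
               if p not in keys and p.rstrip("/") not in keys}
    return keys, phantom
```

        For Case A the "dir/" prefix ends up in the phantom set; in Case B below, the real "dir" object keeps it out of that set.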

    ** Case B) Both "dir" and "dir/file" object
        In this case, there are "dir" and "dir/file" objects which were made by s3cmd/etc.
        s3cmd and s3fs understand the "dir" object as a normal(file) object, because this object has no meta information and its name does not end with "/".
        But the result of the REST API(list bucket) has the "dir/" name in "CommonPrefixes/Prefix".

        The s3fs checks "dir/" and "dir" as a directory, but the "dir" object is not a directory object.
        (Because the new s3fs needs to be compatible with old versions, the s3fs checks for a directory object in the order "dir/", "dir".)
        In this case, the result of the "ls" command is shown below.
            ----------  1 root     root     0 Feb 27 02:48 dir
        As a result, "dir/file" cannot be shown or operated on, because the "dir" object is a file.

        If you want the "dir" to be determined as a directory, you need to add meta information to the "dir" object by s3cmd.
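
        The check order described above ("dir/" first, then "dir") can be sketched as follows; `head` stands in for a HEAD request returning an object's headers or None, and the function name is made up for illustration:

```python
def find_directory_object(head, name):
    """Look up a directory object in the order the commit describes:
    the "dir/" name first (new style), then "dir" (old style)."""
    for candidate in (name + "/", name):
        headers = head(candidate)
        if headers is not None:
            # an object counts as a directory if its name ends with "/"
            # or it carries the directory Content-Type header
            is_dir = (candidate.endswith("/") or
                      headers.get("Content-Type") == "application/x-directory")
            return candidate, is_dir
    return None, False
```

        In Case B only "dir" exists, so the lookup returns a plain file; in Case C both exist and "dir/" wins, which is why the bare "dir" object becomes unreachable.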


    ** Case C) Both "dir" and "dir/" object
        The last case is that there are both "dir" and "dir/" objects which were made by other S3 clients.
        (Example: at first you upload an object "dir/" as a directory by the new s3fs, and then you upload an object "dir" by s3cmd.)
        The new s3fs determines "dir/" as the directory, because the s3fs searches in order of "dir/", "dir".
        As a result, the "dir" object cannot be shown or operated on.

    ** Compatibility between S3 clients 
        Neither the new nor the old s3fs understands both "dir" and "dir/" at the same time; tntDrive and Gladinet are the same as the s3fs.
        If there are "dir/" and "dir" objects, the s3fs gives priority to "dir/".
        But s3cmd and DragonDisk understand both objects.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@392 df820570-a93a-0410-bd06-b72b767a4274
2013-03-23 14:04:07 +00:00
ggtakec@gmail.com
be38de5052 Summary of Changes(1.62 -> 1.63)
1) Lifetime for the stats cache
   Added the new option "stat_cache_expire".
   This option, specified in seconds, sets the lifetime of each stat cache entry.
   If this option is not specified, stat cache entries are kept in the s3fs process until the cache grows to its maximum size. (default)
   If this option is specified, a stat cache entry is removed from memory when its lifetime expires.
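
The expiry behavior can be sketched as a tiny time-stamped cache; this is a minimal illustration of the stat_cache_expire semantics (class and names hypothetical), not the s3fs implementation:

```python
import time

class StatCache:
    """Stat cache with an optional per-entry lifetime in seconds.
    expire=None mimics the default: entries never expire by time."""
    def __init__(self, expire=None):
        self.expire = expire
        self.entries = {}             # path -> (stat, insert_time)

    def put(self, path, stat):
        self.entries[path] = (stat, time.time())

    def get(self, path):
        item = self.entries.get(path)
        if item is None:
            return None
        stat, inserted = item
        if self.expire is not None and time.time() - inserted > self.expire:
            del self.entries[path]    # lifetime passed: drop the entry
            return None
        return stat
```

A real implementation would also enforce the maximum cache size mentioned above; that part is omitted here.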

2) Enable file permission
  s3fs before 1.62 did not consider file access permissions.
  s3fs from this version on can consider them.
  For access permission checks, the s3fs_getattr() function was divided so that a sub-function can check the file access permission.
  It is like the access() function.
  The functions calling s3fs_getattr() now call this new sub-function instead of s3fs_getattr().
  Last, the s3fs_opendir() function, which is called by FUSE, was added for checking directory access permission when listing the files in a directory.
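
An access()-style check of the kind described above boils down to picking the owner, group, or other permission bits and comparing them with the requested mask. A simplified sketch (function name hypothetical; real POSIX semantics for root and supplementary groups are more involved):

```python
def check_access(st_mode, st_uid, st_gid, uid, gid, mask):
    """Return True if a caller with (uid, gid) may access an object whose
    stat fields are (st_mode, st_uid, st_gid). mask combines the bits
    read=4, write=2, execute=1, like access(R_OK|W_OK|X_OK)."""
    if uid == 0:                      # simplification: root always passes
        return True
    if uid == st_uid:
        perm = (st_mode >> 6) & 0o7   # owner permission bits
    elif gid == st_gid:
        perm = (st_mode >> 3) & 0o7   # group permission bits
    else:
        perm = st_mode & 0o7          # other permission bits
    return (perm & mask) == mask
```

This also explains the mode=0000 listings earlier in this log: with no permission bits set, every non-root check fails.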

3) UID/GID
  When a file or a directory was created, the s3fs could not set the UID/GID to the user who executed the command.
  (Mostly the UID/GID were root, because the s3fs runs as root.)
  From this version, the s3fs sets the correct UID/GID of the user who executes the command.

4) About the mtime
  If the object does not have the "x-amz-meta-mtime" meta information, the s3fs uses the "Last-Modified" header instead of it.
  But the s3fs had a bug in this code, and this version fixes it.
  When a user modified a file, the s3fs did not update the mtime of the file.
  This version fixes this bug.
  In the get_local_fd() function, the local file's mtime was changed only when the s3fs ran with the "use_cache" option.
  This version always updates the mtime whether the local cache file is used or not.
  And the s3fs_flush() function set the mtime of the local cache file from the S3 object's mtime, but that was wrong.
  In this version, s3fs_flush() changes the mtime of the S3 object from the local cache file or the tmpfile.
  The s3fs also cuts some requests, because the s3fs can always check the mtime whether it uses the local cache file or not.

5) A case of no "x-amz-meta-mode"
  If the object did not have the "x-amz-meta-mode" meta information, the s3fs did not recognize the file as a regular file.
  From this version, the s3fs recognizes such a file as a regular file.

6) "." and ".." directory
  The s3fs_readdir() did not return the "." and ".." directory names.
  From this version, the s3fs is changed so that it returns "." and "..".
  For example, the result of "ls -a" lists the "." and ".." directories.

7) Fixed a bug
  The insert_object() function had a bug, and it is fixed.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@390 df820570-a93a-0410-bd06-b72b767a4274
2013-02-24 08:58:54 +00:00
rrizun
1f3a7ff9c6 .gitignore
git-svn-id: http://s3fs.googlecode.com/svn/trunk@389 df820570-a93a-0410-bd06-b72b767a4274
2013-02-08 15:52:44 +00:00
ggtakec@gmail.com
900107f102 s3fs 1.62
git-svn-id: http://s3fs.googlecode.com/svn/trunk@385 df820570-a93a-0410-bd06-b72b767a4274
2013-01-21 06:30:31 +00:00
ggtakec@gmail.com
cc6bd2181b 1) Fixed a memory leak and an unclosed file descriptor.
2) Added a static table for file descriptors and paths which is kept until closing.
   The s3fs_xxxx functions called by FUSE are able to use a file descriptor which was already opened by the s3fs_open function.
3) The mknod was changed so that it always returns an error, because it does not work over a network.
4) The symbolic link file attribute was changed to S_IFLNK | S_IRWXU | S_IRWXG | S_IRWXO.
5) Fixed the truncate function so that it works.
6) The mkdir and clone_directory_object functions were simplified and changed to use the common create_directory_object function.
   This fixes a bug that the directory's UID/GID/mode were changed when the directory was renamed.
7) The get_object_name function was changed to check an object more precisely.
8) The s3fs_check_service function was changed to handle a "301" response code.
9) Added the noxmlns option for a case of the response without the xmlns field. (for storage compatible with S3)
10) Added the nocopyapi option for storage compatible with S3 without the copy API.

* Comments
  No. 9 and No. 10 are for storage compatible with AWS S3.
  Both options are unnecessary for AWS S3.
  In the future, for the s3fs's promotion and possibilities, I would like to add new functions.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@384 df820570-a93a-0410-bd06-b72b767a4274
2013-01-19 16:05:07 +00:00
ben.lemasurier@gmail.com
2a09e0864e Fixed a possible memory leak in the stat cache where
- items with an initial hit count of 0 would not be deleted

Added an additional integration test



git-svn-id: http://s3fs.googlecode.com/svn/trunk@383 df820570-a93a-0410-bd06-b72b767a4274
2011-09-26 15:20:14 +00:00
ben.lemasurier@gmail.com
6d12f31676 moving some repeated curl operations to a single location in curl.cpp
git-svn-id: http://s3fs.googlecode.com/svn/trunk@382 df820570-a93a-0410-bd06-b72b767a4274
2011-09-01 19:24:12 +00:00
ben.lemasurier@gmail.com
79ee801b94 cleanup HTTP DELETE operations to use the same curl interface
git-svn-id: http://s3fs.googlecode.com/svn/trunk@381 df820570-a93a-0410-bd06-b72b767a4274
2011-08-31 22:20:20 +00:00