Commit Graph

71 Commits

Author SHA1 Message Date
Andrew Gaul
699e3b3d79 Add test for multi-part upload 2015-03-02 17:17:30 -08:00
Ka-Hing Cheung
4ee32d7559 test ls after creating files and dirs 2015-02-27 10:55:25 -08:00
Takeshi Nakatani
53083202ba Merge pull request #132 from andrewgaul/s3proxy-integration-test
Use S3Proxy to run integration tests
2015-02-27 00:17:46 +09:00
Andrew Gaul
e811ae1104 Use s3proxy to run integration tests
References #129.
2015-02-24 12:08:22 -08:00
Ka-Hing Cheung
d65bf4128d refactor integration tests create/cleanup file 2015-02-23 12:08:14 -08:00
Ka-Hing Cheung
03d84a07d1 fix rename before close
nautilus does this when you drag and drop to overwrite a file:

1) create .goutputstream-XXXXXX to write to
2) fsync the fd for .goutputstream-XXXXXX
3) rename .goutputstream-XXXXXX to the target file
4) close the fd for .goutputstream-XXXXXX

previously, doing this on s3fs would result in an empty target file,
because after the rename s3fs would not flush the content of
.goutputstream-XXXXXX to the target file.

this change moves the FdEntity from the old path to the new path
whenever a rename happens. on flush, s3fs now flushes the correct
content to the rename target.
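
a minimal sketch of the idea (hypothetical names; the real s3fs FdManager
differs):

    #include <map>
    #include <memory>
    #include <string>

    struct FdEntity {
        int         fd;    // local cache file descriptor
        std::string path;  // object key this entity flushes to
    };

    class FdManager {
        std::map<std::string, std::shared_ptr<FdEntity> > entities_;
    public:
        void Open(const std::string& path, int fd) {
            entities_[path] = std::make_shared<FdEntity>(FdEntity{fd, path});
        }
        // called from the rename handler: re-key the entity so a later
        // flush uploads the buffered content under the new object name
        void Rename(const std::string& from, const std::string& to) {
            auto it = entities_.find(from);
            if (it == entities_.end()) {
                return;  // nothing open under the old name
            }
            std::shared_ptr<FdEntity> ent = it->second;
            entities_.erase(it);
            ent->path = to;
            entities_[to] = ent;
        }
    };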
2015-01-12 15:05:54 -08:00
Takeshi Nakatani
7a7c7572ea Cleaned up code for the next packaging. 2014-09-07 15:08:27 +00:00
ggtakec@gmail.com
1a4e525465 Changed test/Makefile.am
1) Changed test/Makefile.am
   Changed test/Makefile.am because test/sample_delcache.sh was added
   in r472.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@473 df820570-a93a-0410-bd06-b72b767a4274
2013-08-23 15:26:48 +00:00
ggtakec@gmail.com
3ed9a2c1e4 Added sample script
1) Added sample script for deleting cache
   Added a sample script file which removes cache files and stats files
   to keep disk usage under a limit.
   This script is based on code posted by DPeuscher; special thanks.

   Issue 364: Feature request: Control cache size
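
   A rough C++17 sketch of the same idea (the actual sample_delcache.sh is
   a shell script; names here are illustrative): delete the oldest cache
   files until total usage drops under a limit.

     #include <algorithm>
     #include <cstdint>
     #include <filesystem>
     #include <vector>

     namespace fs = std::filesystem;

     void trim_cache(const fs::path& cache_dir, std::uintmax_t limit_bytes) {
         std::vector<fs::directory_entry> files;
         std::uintmax_t total = 0;
         for (const auto& e : fs::recursive_directory_iterator(cache_dir)) {
             if (e.is_regular_file()) {
                 total += e.file_size();
                 files.push_back(e);
             }
         }
         // oldest first, so the least recently written entries go first
         std::sort(files.begin(), files.end(),
                   [](const fs::directory_entry& a, const fs::directory_entry& b) {
                       return fs::last_write_time(a) < fs::last_write_time(b);
                   });
         for (const auto& e : files) {
             if (total <= limit_bytes) break;
             total -= e.file_size();
             fs::remove(e.path());  // remove cache and stats files alike
         }
     }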



git-svn-id: http://s3fs.googlecode.com/svn/trunk@472 df820570-a93a-0410-bd06-b72b767a4274
2013-08-23 15:22:24 +00:00
ggtakec@gmail.com
d7689151ab Fixed Issue 229 and changed code
1) Set the "Content-Encoding" metadata automatically (Issue 292)
   For this issue, s3fs gains a new option, "ahbe_conf".

   The new option's value is a configuration file path; this file
   specifies additional HTTP headers by file (object) extension.
   Thus you can specify any HTTP header for each object by extension.

   * ahbe_conf file format:
     -----------
     line                = [file suffix] HTTP-header [HTTP-header-values]
     file suffix         = file (object) suffix; if this field is empty,
                           it means "*" (all objects).
     HTTP-header         = name of the additional HTTP header
     HTTP-header-values  = value of the additional HTTP header
     -----------

   * Example:
     -----------
     .gz      Content-Encoding     gzip
     .Z       Content-Encoding     compress
              X-S3FS-MYHTTPHEAD    myvalue
     -----------
     A sample configuration file is provided in the "test" directory.

   If the ahbe_conf parameter is specified, s3fs loads its configuration
   and compares the extension (suffix) of each object (file) when
   uploading it (PUT/POST). If the extension matches, s3fs adds the
   specified HTTP header and value (see the sketch after this section).

   With the sample configuration above: if an object whose extension is
   ".gz", and which already has a Content-Encoding HTTP header, is
   renamed to a ".txt" extension, s3fs does not set Content-Encoding,
   because ".txt" does not match any line in the configuration file.
   In other words, s3fs re-matches the extension on every PUT/POST action.

   * Please take care with "Content-Encoding".
   This new option allows setting ANY HTTP header by object extension.
   For example, you can specify "Content-Encoding" for the ".gz"
   extension in the configuration. But this means that S3 always returns
   "Content-Encoding: gzip" even when a client requests a different
   "Accept-Encoding:" header. That is usually not what you want.
   Please see RFC 2616.
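
   A minimal sketch of this suffix matching (hypothetical names, not the
   actual s3fs code):

     #include <string>
     #include <utility>
     #include <vector>

     struct ExtraHeader {
         std::string suffix;  // e.g. ".gz"; empty means "*" (all objects)
         std::string name;    // e.g. "Content-Encoding"
         std::string value;   // e.g. "gzip"
     };

     // collect the header/value pairs whose suffix matches the object name
     std::vector<std::pair<std::string, std::string> >
     match_extra_headers(const std::vector<ExtraHeader>& conf,
                         const std::string& object)
     {
         std::vector<std::pair<std::string, std::string> > out;
         for (const ExtraHeader& h : conf) {
             bool matched = h.suffix.empty() ||
                 (object.size() >= h.suffix.size() &&
                  object.compare(object.size() - h.suffix.size(),
                                 h.suffix.size(), h.suffix) == 0);
             if (matched) {
                 out.push_back(std::make_pair(h.name, h.value));
             }
         }
         return out;
     }

   With the example configuration, "log.gz" picks up Content-Encoding: gzip
   plus the empty-suffix X-S3FS-MYHTTPHEAD header, while "log.txt" picks up
   only the latter.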

2) Changes to the allow_other/uid/gid options for the mount point
   I reviewed the mount point permissions and the allow_other/uid/gid
   options, and found bugs in them.
   s3fs fixes these bugs and now follows the specification below
   (see the sketch after this list).

   * s3fs only allows the uid (gid) option to be 0 (root) when the
     effective user is root (uid 0).
   * The mount point (directory) must have permissions that allow
     access by the effective user/group.
   * If the allow_other option is specified, the mount point permission
     is set to 0777 (all users are allowed all access).
     Otherwise, the mount point is set to 0700 (only the effective
     user is allowed access).
   * When the uid/gid option is specified, the mount point owner/group
     is set to the option value.
     If uid/gid is not set, the effective user/group id is used.

   These changes may fix some issues (321, 338).
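
   A minimal sketch of these rules (hypothetical; the actual s3fs code
   differs):

     #include <sys/stat.h>
     #include <sys/types.h>
     #include <unistd.h>

     struct MountPointAttrs {
         mode_t mode;
         uid_t  uid;
         gid_t  gid;
     };

     MountPointAttrs mount_point_attrs(bool allow_other,
                                       bool has_uid, uid_t uid_opt,
                                       bool has_gid, gid_t gid_opt)
     {
         MountPointAttrs a;
         // allow_other => 0777 (all users); otherwise 0700 (effective user only)
         a.mode = allow_other ? (S_IFDIR | 0777) : (S_IFDIR | 0700);
         // uid/gid options win; otherwise fall back to the effective ids
         a.uid = has_uid ? uid_opt : geteuid();
         a.gid = has_gid ? gid_opt : getegid();
         return a;
     }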

3) Changed the logic for Issue 229
   The chmod command returns -EIO when changing the mount point.
   That is correct: s3fs cannot change the owner/group/mtime of the
   mount point. But s3fs still sent a request to change the bucket.
   This revision does not send the request, and returns EIO as
   early as possible (see the sketch below).
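
   A minimal sketch of the early return (hypothetical):

     #include <cerrno>
     #include <cstring>
     #include <sys/types.h>

     static int s3fs_chmod_sketch(const char* path, mode_t mode)
     {
         if (0 == std::strcmp(path, "/")) {
             return -EIO;  // the mount point cannot be changed; fail fast,
                           // without sending any request to the bucket
         }
         (void)mode;
         // ... otherwise send the usual request to update the object's mode ...
         return 0;
     }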




git-svn-id: http://s3fs.googlecode.com/svn/trunk@465 df820570-a93a-0410-bd06-b72b767a4274
2013-08-16 19:24:01 +00:00
ggtakec@gmail.com
00b735beaa For v1.64
Please see the r392 log message (http://code.google.com/p/s3fs/source/detail?r=392)



git-svn-id: http://s3fs.googlecode.com/svn/trunk@393 df820570-a93a-0410-bd06-b72b767a4274
2013-03-23 14:16:18 +00:00
ggtakec@gmail.com
9af16df61e Summary of Changes(1.63 -> 1.64)
* This new version fixes a big issue with directory objects.
  Please be careful and review the new s3fs.

==========================
List of Changes
==========================
1) Fixed bugs
    Fixed some memory leaks and an un-freed curl handle.
    Fixed code containing latent bugs that had not yet been reported.
    Fixed a bug where s3fs could not update an object's mtime while it held an open file descriptor.

    Please let us know when you find a new memory leak or other bug.

2) Changed code
    Changed the code of s3fs_readdir(), list_bucket(), etc.
    Changed the get_realpath() function to return std::string.
    Changed the use of the exit() function: because exit() was called directly from many FUSE callback functions, these functions now call fuse_exit() and return an error instead.
    Changed the comparison of the "x-amz-meta" response header to be case-insensitive.
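
    A minimal sketch of the case-insensitive comparison (hypothetical, not
    the actual s3fs code):

      #include <strings.h>  // strncasecmp (POSIX)

      // treat "X-Amz-Meta-..." and "x-amz-meta-..." response headers alike
      static bool is_x_amz_meta(const char* header)
      {
          return 0 == strncasecmp(header, "x-amz-meta", 10);
      }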

3) Added an option
    Added the norenameapi option for storage that is S3-compatible but lacks the copy API.
    This option is a subset of the nocopyapi option.
    Please read the man page or run with the --help option.

4) Directory objects
    This is a very big and important change.

    Directory objects are now stored as "dir/" instead of "dir", for compatibility with other S3 client applications.
    This version also understands directory objects made by the old version.
    If the new s3fs changes the attributes, owner/group, or mtime of a directory object, it automatically converts the object from the old name ("dir") to the new one ("dir/").
    If you need to change an old object name ("dir") to the new one ("dir/") manually, you can use the shell script (mergedir.sh) in the test directory.

    * About the directory object name
        AWS S3 allows an object name to be either "dir" or "dir/".
        s3fs before this version understood only "dir" as a directory object name; the old version did not understand the "dir/" object name.
        The new version understands both the "dir" and "dir/" object names.
        The s3fs user needs to be aware of the special situations mentioned later.

        The new version deletes the old "dir" object and makes a new "dir/" object when the user operates on the directory object to change its permission, owner/group, or mtime.
        This happens automatically and in the background.

        If you need to merge manually, you can use the shell script mergedir.sh in the test directory.
        This script runs chmod/chown/touch commands after finding a directory.
        When another S3 client application makes a directory object ("dir/") without the meta information that s3fs needs, this script can add that meta information to the directory object.
        If this script is insufficient for you, you can read and modify the code yourself.
        Please use the shell script carefully, because it changes the objects.
        If you find a bug in this script, please let me know.

    * Details
    ** Directory objects made by the old version
        Directory objects made by the old version are not understood by other S3 client applications.
        The new s3fs version was updated to keep compatibility with other clients.
        You can use mergedir.sh in the test directory to merge old directory objects ("dir") into new ones ("dir/").
        After mergedir.sh runs, the directory object name is changed from "dir" to "dir/", and the resulting "dir/" object is understood by other S3 clients.
        The script runs chmod/chown/chgrp/touch/etc commands against the old directory object ("dir"), and the new s3fs then merges that directory automatically.

        If you need to convert a directory object from old to new manually, you can do so by running any command that changes the directory's attributes (mode/owner/group/mtime).

    ** Directory objects made by the new version
        The directory object name made by the new version is "dir/".
        Because the name ends with "/", other S3 client applications recognize it as a directory.
        I tested the new directory objects against s3cmd/tntDrive/DragonDisk/Gladinet as other S3 clients, and compatibility was good.
        Note that small compatibility problems remain due to differences between the clients' specifications.
        Also note that the old s3fs cannot understand directory objects made by the new s3fs.
        You should upgrade every s3fs that accesses the same bucket.

    ** Directory objects made by other S3 client applications
        To determine that an object is a directory, s3fs makes and uses special meta information, sent as "x-amz-meta-***" and "Content-Type" HTTP headers.
        s3fs sets and uses the following HTTP headers for directory objects (see the sketch after this section):
            Content-Type: application/x-directory
            x-amz-meta-mode: <mode>
            x-amz-meta-uid: <UID>
            x-amz-meta-gid: <GID>
            x-amz-meta-mtime: <unix time of last modification>

        Other S3 client applications build directory objects without the attributes that s3fs needs.
        When the "ls" command is run on an s3fs-fuse file system that contains directories/files made by other S3 clients, the result looks like this:
            d---------  1 root     root        0 Feb 27 11:21 dir
            ----------  1 root     root     1024 Mar 14 02:15 file
        Because the objects have no meta information ("x-amz-meta-mode"), the mode is 0000.
        The directory object is still shown with a "d", because s3fs treats an object as a directory when its name ends with "/" or it has a "Content-Type: application/x-directory" header.
        (s3fs sets "Content-Type: application/x-directory" on directory objects, but other S3 clients set "binary/octet-stream".)
        With these permissions, nobody except root is allowed to operate on the object.

        The owner and group are "root" (UID=0) because the object has no "x-amz-meta-uid/gid".
        If an object has no "x-amz-meta-mtime", s3fs uses the "Last-Modified" HTTP header instead, so the object's mtime is the "Last-Modified" value. (This logic is the same as in the old version.)
        As already explained, if you need to change the object attributes, you can do so manually or with mergedir.sh.
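
        A minimal sketch of building these headers for a directory object
        (hypothetical names):

          #include <ctime>
          #include <map>
          #include <string>
          #include <sys/types.h>

          // the meta information s3fs attaches to a "dir/" object so it
          // can reconstruct mode/uid/gid/mtime later
          std::map<std::string, std::string>
          directory_meta_headers(mode_t mode, uid_t uid, gid_t gid, time_t mtime)
          {
              std::map<std::string, std::string> h;
              h["Content-Type"]     = "application/x-directory";
              h["x-amz-meta-mode"]  = std::to_string(mode);
              h["x-amz-meta-uid"]   = std::to_string(uid);
              h["x-amz-meta-gid"]   = std::to_string(gid);
              h["x-amz-meta-mtime"] = std::to_string(mtime);
              return h;
          }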

    * Examples of compatibility with s3cmd etc.
    ** Case A) Only a "dir/file" object
        In one case there is only a "dir/file" object, without a "dir/" object; such an object is made by s3cmd or similar tools.
        In this case, the response of the REST API (list bucket) with the "delimiter=/" parameter has "CommonPrefixes", and "dir/" is listed under "CommonPrefixes/Prefix", but the "dir/" object is not a real object.
        s3fs needs to treat this object as a directory, but there is no real directory object ("dir" or "dir/").
        Neither the new s3fs nor the old one understands this "dir/" in "CommonPrefixes", because s3fs fails to get meta information for "dir" or "dir/".
        In this case, the result of the "ls" command looks like this:
            ??????????? ? ?        ?        ?            ? dir
        This "dir" cannot be operated on by any user or process, because s3fs does not understand the object's permissions.
        The "dir/file" object cannot be shown or operated on either.
        Some other S3 clients (tntDrive/Gladinet/etc) fail to understand this object in the same way as s3fs.

        If you need to operate on the "dir/file" object, you need to make the "dir/" object a real directory.
        To do so, proceed as follows.
        Because the "dir" prefix already exists without a real object, you cannot make a "dir/" directory directly
        (s3cmd does not make a "dir/" object because the object name contains "/").
        Make a directory with another name (e.g. "dir2/"), move the "dir/file" objects into the new directory,
        and finally rename the directory from "dir2/" to "dir/".

    ** Case B) Both "dir" and "dir/file" objects
        In this case there are "dir" and "dir/file" objects, which were made by s3cmd or similar tools.
        s3cmd and s3fs treat the "dir" object as a normal (file) object, because it has no meta information and no trailing "/" in its name.
        But the result of the REST API (list bucket) also has the "dir/" name under "CommonPrefixes/Prefix".

        s3fs checks both "dir/" and "dir" as a directory, but the "dir" object is not a directory object.
        (Because the new s3fs must stay compatible with the old version, it checks for a directory object in the order "dir/", then "dir".)
        In this case, the result of the "ls" command looks like this:
            ----------  1 root     root     0 Feb 27 02:48 dir
        As a result, "dir/file" cannot be shown or operated on, because the "dir" object is a file.

        If you want "dir" to be treated as a directory, you need to add meta information to the "dir" object with s3cmd.


    ** Case C) Both "dir" and "dir/" objects
        The last case is that there are both "dir" and "dir/" objects, which were made by other S3 clients.
        (Example: first you upload an object "dir/" as a directory with the new s3fs, then you upload an object "dir" with s3cmd.)
        The new s3fs treats "dir/" as the directory, because it searches in the order "dir/", then "dir".
        As a result, the "dir" object cannot be shown or operated on.

    ** Compatibility between S3 clients
        Neither the new nor the old s3fs understands both "dir" and "dir/" at the same time; tntDrive and Gladinet behave the same as s3fs.
        If both "dir/" and "dir" objects exist, s3fs gives priority to "dir/" (see the sketch below).
        s3cmd and DragonDisk, however, understand both objects.
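
        A minimal sketch of that lookup order (hypothetical; the caller
        supplies the HEAD-request helper):

          #include <functional>
          #include <string>

          // head(key) issues a HEAD request and reports whether that
          // exact object exists
          bool find_directory_object(const std::string& name,
                                     const std::function<bool(const std::string&)>& head,
                                     std::string& found_key)
          {
              if (head(name + "/")) { found_key = name + "/"; return true; }  // new style
              if (head(name))       { found_key = name;       return true; }  // old style
              return false;  // only a CommonPrefixes entry (Case A): no real object
          }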




git-svn-id: http://s3fs.googlecode.com/svn/trunk@392 df820570-a93a-0410-bd06-b72b767a4274
2013-03-23 14:04:07 +00:00
ben.lemasurier@gmail.com
2a09e0864e Fixed a possible memory leak in the stat cache where
- items with an initial hit count of 0 would not be deleted

Added an additional integration test
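
A minimal sketch of the leak (hypothetical; not the actual stat cache code):
if eviction skips entries whose hit count is still 0, those entries are never
removed; considering them fixes the leak.

    #include <map>
    #include <string>

    struct StatCacheEntry {
        int hit_count;
        // ... cached stat data ...
    };

    void evict_one(std::map<std::string, StatCacheEntry>& cache)
    {
        auto victim = cache.end();
        for (auto it = cache.begin(); it != cache.end(); ++it) {
            // the bug: a guard like "if (it->second.hit_count == 0) continue;"
            // here meant zero-hit entries were never chosen for eviction
            if (victim == cache.end() ||
                it->second.hit_count < victim->second.hit_count) {
                victim = it;  // evict the least-hit entry, including zero-hit ones
            }
        }
        if (victim != cache.end()) {
            cache.erase(victim);
        }
    }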



git-svn-id: http://s3fs.googlecode.com/svn/trunk@383 df820570-a93a-0410-bd06-b72b767a4274
2011-09-26 15:20:14 +00:00
mooredan@suncup.net
c8d5b35f8f Resolves issue #154
Installed and tested fix for file permissions/cache issue


git-svn-id: http://s3fs.googlecode.com/svn/trunk@311 df820570-a93a-0410-bd06-b72b767a4274
2011-02-11 03:30:02 +00:00
mooredan@suncup.net
6f7e180133 Resolves issue #152
- added move directory test
- fixed a bug introduced while fixing issue #150



git-svn-id: http://s3fs.googlecode.com/svn/trunk@310 df820570-a93a-0410-bd06-b72b767a4274
2011-02-10 01:07:46 +00:00
mooredan@suncup.net
0f18298886 Fix for issue #145 and additional tests in make check
git-svn-id: http://s3fs.googlecode.com/svn/trunk@301 df820570-a93a-0410-bd06-b72b767a4274
2011-01-19 05:26:01 +00:00
mooredan@suncup.net
acc7363433 Checkpoint for implementation of multipart upload
Check issue #142 for details

Code is operational, but not quite ready for
prime time -- needs some cleanup
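
A rough outline of the S3 multipart upload protocol involved (a sketch, not
the s3fs code):

    #include <string>
    #include <vector>

    // 1) POST /bucket/key?uploads                -> server returns an UploadId
    // 2) PUT  /bucket/key?partNumber=N&uploadId=...   (one call per part;
    //    every part except the last must be at least 5 MB) -> ETag per part
    // 3) POST /bucket/key?uploadId=... with a CompleteMultipartUpload XML
    //    body listing every (PartNumber, ETag) pair -> S3 assembles the object
    struct Part { int number; std::string etag; };

    std::string complete_xml(const std::vector<Part>& parts)
    {
        std::string xml = "<CompleteMultipartUpload>";
        for (const Part& p : parts) {
            xml += "<Part><PartNumber>" + std::to_string(p.number) +
                   "</PartNumber><ETag>" + p.etag + "</ETag></Part>";
        }
        return xml + "</CompleteMultipartUpload>";
    }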


git-svn-id: http://s3fs.googlecode.com/svn/trunk@297 df820570-a93a-0410-bd06-b72b767a4274
2010-12-28 04:15:23 +00:00
mooredan@suncup.net
5c64ff83cf Restructuring to take care of the directory rename.
This is a checkpoint. No functional changes in this commit.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@288 df820570-a93a-0410-bd06-b72b767a4274
2010-12-20 05:26:27 +00:00
mooredan@suncup.net
99aa781701 Clean up from the last run, in case it had
an error and left the test file in the bucket


git-svn-id: http://s3fs.googlecode.com/svn/trunk@276 df820570-a93a-0410-bd06-b72b767a4274
2010-12-08 03:08:10 +00:00
mooredan@suncup.net
45b1256966 test/*.sh files were not being included in the
tarball created by "make dist"


git-svn-id: http://s3fs.googlecode.com/svn/trunk@247 df820570-a93a-0410-bd06-b72b767a4274
2010-11-22 22:30:17 +00:00
mooredan@suncup.net
e9b8216d21 In preparation for removing the unnecessary "s3fs"
directory from the trunk directory.

First do a svn cp of all of the source up to
trunk.  This is supposed to preserve change
history -- we'll see.

The source remains untouched until this gets
worked out.

Also in preparation for bringing in the source
collateral for the debian package into the
repository. I expect that the top level will
look like this:

svn/
   s3fs/
      trunk/
      tags/
      branches/
   dpkg/
      trunk/
      tags/
      branches/


So far that's how it is looking.  I'll be
very careful to ensure integrity of the data.
As a result this may be a multistep process.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@236 df820570-a93a-0410-bd06-b72b767a4274
2010-11-13 23:59:23 +00:00