FUSE-based file system backed by Amazon S3
Commit d7689151ab by ggtakec@gmail.com: Fixed Issue 229 and changed code
1) Set metadata "Content-Encoding" automatically (Issue 292)
   For this issue, a new option "ahbe_conf" is added to s3fs.

   The new option takes the path of a configuration file, and this
   file specifies additional HTTP headers by file (object) extension.
   Thus you can specify any HTTP header for each object by its
   extension.

   * ahbe_conf file format:
     -----------
     line                = [file suffix] HTTP-header [HTTP-header-values]
     file suffix         = file (object) suffix; if this field is empty,
                           it means "*" (all objects).
     HTTP-header         = additional HTTP header name
     HTTP-header-values  = additional HTTP header value
     -----------

   * Example:
     -----------
     .gz      Content-Encoding     gzip
     .Z       Content-Encoding     compress
              X-S3FS-MYHTTPHEAD    myvalue
     -----------
     A sample configuration file is included in the "test" directory.

   If the ahbe_conf parameter is specified, s3fs loads its configuration
   and compares the extension (suffix) of each object (file) when
   uploading (PUT/POST) it. If the extension matches, s3fs adds/sends
   the specified HTTP header and value.
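
   For example, a mount command using this option might look like the
   following (the configuration file path is illustrative):
     -----------
     s3fs mybucket /mnt/s3 -o ahbe_conf=/etc/ahbe.conf
     -----------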

   As a case from the sample configuration file: if an object with a
   ".gz" extension that already has a Content-Encoding HTTP header is
   renamed to a ".txt" extension, s3fs does not set Content-Encoding,
   because ".txt" does not match any line in the configuration file.
   In other words, s3fs matches the extension on each PUT/POST action.

   * Please take care with "Content-Encoding".
   This new option allows setting ANY HTTP header by object extension.
   For example, you can specify "Content-Encoding" for the ".gz" (etc.)
   extension in the configuration. But this means that S3 always
   returns "Content-Encoding: gzip" even when a client sends a
   different "Accept-Encoding:" header, which is usually not what you
   want. Please see RFC 2616.
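
   For example, you can check the stored header with a plain request
   that sends no "Accept-Encoding:" header (bucket/object names are
   illustrative):
     -----------
     $ curl -sI https://mybucket.s3.amazonaws.com/sample.gz | grep Content-Encoding
     Content-Encoding: gzip
     -----------
   S3 returns the stored "Content-Encoding: gzip" here unconditionally,
   whether or not the client can actually decompress gzip.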

2) Changes to the allow_other/uid/gid options and mount point handling
   I reviewed the mount point permissions and the allow_other/uid/gid
   options, and found bugs in them.
   These bugs are fixed, and s3fs changes to the following behavior
   (see the example after this list).

   * The uid (gid) option may be set to 0 (root) only when the
     effective user is root.
   * A mount point (directory) must have permissions that allow access
     by the effective user/group.
   * If the allow_other option is specified, the mount point permission
     is set to 0777 (all users are allowed all access).
     Otherwise, the mount point is set to 0700 (only the effective
     user is allowed).
   * When the uid/gid option is specified, the mount point owner/group
     is set to the option value.
     If uid/gid is not set, they are set to the effective user/group ID.
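
   For example (the uid/gid values are illustrative):
     -----------
     # mounting as root with allow_other: the mount point becomes 0777
     s3fs mybucket /mnt/s3 -o allow_other

     # mounting as root for a specific user: owner/group follow the options
     s3fs mybucket /mnt/s3 -o uid=1000,gid=1000
     ls -ld /mnt/s3     # expect: drwx------ owned by uid/gid 1000
     -----------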

   These changes may fix some issues (321, 338).

3) Changed logic (Issue 229)
   The chmod command returns -EIO when changing the mount point.
   That is correct: s3fs cannot change the owner/group/mtime of the
   mount point. However, s3fs used to send a change request for the
   bucket anyway. This revision does not send the request and returns
   EIO as soon as possible.
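
   For example, after this change a chmod on the mount point fails
   immediately, without a request being sent to S3 (the exact message
   depends on your chmod version):
     -----------
     $ chmod 755 /mnt/s3
     chmod: changing permissions of `/mnt/s3': Input/output error
     -----------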




git-svn-id: http://s3fs.googlecode.com/svn/trunk@465 df820570-a93a-0410-bd06-b72b767a4274
2013-08-16 19:24:01 +00:00

THIS README CONTAINS OUTDATED INFORMATION - please refer to the wiki or --help

S3FS-Fuse

S3FS is a FUSE (Filesystem in Userspace) based solution to mount/unmount Amazon S3 storage buckets and use system commands with S3 just as if it were another hard disk.

In order to compile s3fs, you'll need the following requirements:

* Kernel-devel packages (or kernel source) that are the SAME version as your running kernel
* LibXML2-devel packages
* curl-devel packages (or compile curl from source at curl.haxx.se; use 7.15.X)
* GCC, GCC-C++
* pkgconfig
* FUSE (>= 2.8.4)
* FUSE Kernel module installed and running (RHEL 4.x/CentOS 4.x users - read below)
* OpenSSL-devel (0.9.8)
* Subversion

If you're using YUM or APT to install those packages, they might pull in additional dependent packages; allow them to be installed.

Downloading & Compiling:
------------------------
In order to download s3fs, use the following command:
svn checkout http://s3fs.googlecode.com/svn/trunk/ s3fs-read-only

Go inside the directory that has been created (s3fs-read-only/s3fs) and run: ./autogen.sh
This will generate a number of scripts in the project directory, including a configure script which you should run with: ./configure
If configure succeeded, you can now run: make. If it didn't, make sure you meet the dependencies above.
This should compile the code. If everything goes OK, you'll be greeted with "ok!" at the end and you'll have a binary file called "s3fs"
in the src/ directory.

As root (you can use su, su -, or sudo), run "make install"; this will copy the "s3fs" binary to /usr/local/bin.
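
Putting the download and build steps together (run the last step as root):

svn checkout http://s3fs.googlecode.com/svn/trunk/ s3fs-read-only
cd s3fs-read-only/s3fs
./autogen.sh
./configure
make
sudo make install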

Congratulations. S3fs is now compiled and installed.

Usage:
------
In order to use s3fs, make sure you have your Access Key and Secret Key handy. (refer to the wiki)
First, create a directory on which to mount the S3 bucket you want to use.
Example (as root): mkdir -p /mnt/s3
Then run: s3fs mybucket[:path] /mnt/s3

This will mount your bucket at /mnt/s3. You can run a simple "ls -l /mnt/s3" to see the contents of your bucket.
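
s3fs reads your credentials at mount time. One common way to supply the Access Key and Secret Key (assuming your s3fs version supports the standard password file; check --help) is a ~/.passwd-s3fs file:

echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs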

If you want to allow other people on the same machine to access the same bucket, you can add "-o allow_other" so they can read/write/delete the contents of the bucket.
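
For example, mounting with this option:

s3fs mybucket /mnt/s3 -o allow_other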

You can add a fixed mount point in /etc/fstab, here's an example:

s3fs#mybucket /mnt/s3 fuse allow_other 0 0

This will mount your bucket on your machine upon reboot (or when you run: mount -a).

All other options can be read at: http://code.google.com/p/s3fs/wiki/FuseOverAmazon

Known Issues:
-------------
s3fs should work fine with S3 storage. However, there are a couple of limitations:

* There is no full UID/GID support yet; everything appears as "root", and if you allow others to access the bucket, they can erase files. There is, however, permissions support built in.
* Currently s3fs could hang the CPU if you have lots of time-outs. This is *NOT* a fault of s3fs but rather of libcurl. This happens when you try to copy thousands of files in one session; it does not happen when you upload hundreds of files or less.
* CentOS 4.x/RHEL 4.x users: if you use the kernel that shipped with your distribution and didn't upgrade to the latest kernel that RedHat/CentOS provides, you might have a problem loading the "fuse" kernel module. Please upgrade to the latest kernel (2.6.16 or above) and make sure the "fuse" kernel module is compiled and loadable, since FUSE requires this kernel module and s3fs requires it as well.
* Moving/renaming/erasing files takes time, since the whole file needs to be accessed first. A workaround is to use s3fs's cache support with the use_cache option, as shown below.
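
For example (the cache directory is illustrative):

s3fs mybucket /mnt/s3 -o use_cache=/tmp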