FUSE-based file system backed by Amazon S3
Commit ae2d1eda84

========================== List of Changes ==========================

1) Fixes a bug (Issue 320) - r408, r411, r412
   Fixes (Issue 320) "Spurious I/O errors"
   (http://code.google.com/p/s3fs/issues/detail?id=320)
   When s3fs got an error in the upload loop, it tried to re-upload without
   seeking the fd back first; this caused the issue.
   Please see r408, r411 and r412 for details.

2) Fixes a bug (Issue 293) - r409
   Fixes (Issue 293) "Command line argument bucket: causes segv"
   (http://code.google.com/p/s3fs/issues/detail?id=293)
   If a bucket name terminated with ":" was specified, s3fs crashed.
   Please see r409 for details.

3) Supports an option (Issue 265) - r410
   Supports (Issue 265) "Unable to mount to a non empty directory"
   (http://code.google.com/p/s3fs/issues/detail?id=265)
   Added support for the "nonempty" fuse/mount option.
   Please see r410 for details.

4) Supports other S3 clients (Issue 27) - r413, r414
   Supports (Issue 27) "Compatibility with other S3FS clients"

   *** "_$folder$" dir object
   Supports directory objects created by s3fox, whose names carry a
   "_$folder$" suffix. s3fs lists such a directory under its normal name,
   without "_$folder$". Please be careful when you change the object's
   attributes (rename, chmod, chown, touch), because s3fs re-creates the
   directory object without the "_$folder$" suffix; that is, the object is
   remade by s3fs.

   *** no dir object
   Supports directories that have no object of their own. If an object key
   contains a "/" character (e.g. "<bucket>/dir/file"), the directory ("dir")
   may not exist as an object. For example, you can upload an object named
   "s3://bucket/dir/file" with s3cmd (or another S3 client); in that case
   "dir" is not an object in the bucket. This version of s3fs understands
   this case. Please be careful when you change the object's attributes
   (rename, chmod, chown, touch), because s3fs creates a new directory
   object. A session illustrating this case is sketched below.

   Please see r413 and r414 for details.

git-svn-id: http://s3fs.googlecode.com/svn/trunk@415 df820570-a93a-0410-bd06-b72b767a4274
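As an illustration of item 4, the session below is a minimal sketch of how an
object uploaded by another client, with no separate directory object, shows up
under an s3fs mount. The bucket name "mybucket", the local file "./file" and
the mount point "/mnt/s3" are placeholders, and the exact listing output may
differ:

  # Upload an object whose key contains "/" using another S3 client (s3cmd);
  # only the key "dir/file" is created, no "dir" directory object.
  s3cmd put ./file s3://mybucket/dir/file

  # Mount the bucket with this version of s3fs; "dir" is still listed as a
  # directory even though no directory object exists in the bucket.
  s3fs mybucket /mnt/s3
  ls -l /mnt/s3/dir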
THIS README CONTAINS OUTDATED INFORMATION - please refer to the wiki or --help

S3FS-Fuse

S3FS is a FUSE (File System in User Space) based solution to mount/unmount
Amazon S3 storage buckets and use system commands with S3 just as if it were
another hard disk.

In order to compile s3fs, you'll need the following requirements:

* Kernel-devel packages (or kernel source) installed that are the SAME version
  as your running kernel
* LibXML2-devel packages
* CURL-devel packages (or compile curl from source at curl.haxx.se; use 7.15.x)
* GCC, GCC-C++
* pkgconfig
* FUSE (>= 2.8.4)
* FUSE kernel module installed and running (RHEL 4.x/CentOS 4.x users - read below)
* OpenSSL-devel (0.9.8)
* Subversion

If you're using YUM or APT to install those packages, additional dependency
packages may be required; allow them to be installed.

Downloading & Compiling:
------------------------
In order to download s3fs, use the following command:

  svn checkout http://s3fs.googlecode.com/svn/trunk/ s3fs-read-only

Go inside the directory that has been created (s3fs-read-only/s3fs) and run:

  ./autogen.sh

This will generate a number of scripts in the project directory, including a
configure script which you should run with:

  ./configure

If configure succeeded, you can now run "make". If it didn't, make sure you
meet the dependencies above. This should compile the code. If everything goes
OK, you'll be greeted with "ok!" at the end and you'll have a binary file
called "s3fs" in the src/ directory.

As root (you can use su, su -, or sudo) run "make install" - this will copy
the "s3fs" binary to /usr/local/bin.

Congratulations. S3fs is now compiled and installed.

Usage:
------
In order to use s3fs, make sure you have your Access Key and Secret Key handy
(refer to the wiki).

First, create a directory where the S3 bucket you want to use will be mounted.
Example (as root):

  mkdir -p /mnt/s3

Then run:

  s3fs mybucket[:path] /mnt/s3

This will mount your bucket to /mnt/s3. You can do a simple "ls -l /mnt/s3" to
see the contents of your bucket.

If you want to allow other people on the same machine to access the same
bucket, you can add "-o allow_other" so they can read/write/delete the
contents of the bucket.

You can add a fixed mount point in /etc/fstab; here's an example:

  s3fs#mybucket /mnt/s3 fuse allow_other 0 0

This will mount your bucket on your machine upon reboot (or by running:
mount -a).

All other options can be read at:
http://code.google.com/p/s3fs/wiki/FuseOverAmazon

Known Issues:
-------------
s3fs should work fine with S3 storage. However, there are a couple of
limitations:

* There is no full UID/GID support yet; everything appears as "root", and if
  you allow others to access the bucket, they can erase files. There is,
  however, permissions support built in.
* Currently s3fs can hang the CPU if you have lots of time-outs. This is *NOT*
  a fault of s3fs but rather of libcurl. It happens when you try to copy
  thousands of files in one session; it doesn't happen when you upload
  hundreds of files or fewer.
* CentOS 4.x/RHEL 4.x users: if you use the kernel that shipped with your
  distribution and didn't upgrade to the latest kernel RedHat/CentOS provides,
  you might have a problem loading the "fuse" kernel module. Please upgrade to
  the latest kernel (2.6.16 or above) and make sure the "fuse" kernel module
  is compiled and loadable, since FUSE requires this kernel module and s3fs
  requires it as well.
* Moving/renaming/erasing files takes time since the whole file needs to be
  accessed first. A workaround is to use s3fs's cache support with the
  use_cache option.
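Putting the steps above together, a minimal end-to-end session might look like
the following sketch. The bucket name "mybucket" and mount point "/mnt/s3" are
placeholders, and the credential setup (ACCESS_KEY:SECRET_KEY in
~/.passwd-s3fs) is an assumption based on the convention documented on the
wiki for this generation of s3fs - check the wiki or --help for the
authoritative details:

  # Download, build and install as described above
  svn checkout http://s3fs.googlecode.com/svn/trunk/ s3fs-read-only
  cd s3fs-read-only/s3fs
  ./autogen.sh && ./configure && make
  sudo make install

  # Supply credentials (assumed convention: ACCESS_KEY:SECRET_KEY in
  # ~/.passwd-s3fs; see the wiki for the authoritative description)
  echo "YOUR_ACCESS_KEY:YOUR_SECRET_KEY" > ~/.passwd-s3fs
  chmod 600 ~/.passwd-s3fs

  # Create the mount point and mount the bucket
  sudo mkdir -p /mnt/s3
  sudo s3fs mybucket /mnt/s3 -o allow_other
  ls -l /mnt/s3

  # Unmount when done
  sudo umount /mnt/s3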