changed for version v1.77 and fixed man page

Takeshi Nakatani 2014-04-19 16:08:10 +00:00
parent a4465105f7
commit 1a4065b0fb
4 changed files with 22 additions and 8 deletions

ChangeLog

@@ -1,6 +1,15 @@
 ChangeLog for S3FS
 ------------------
+Version 1.77 -- Apr 19, 2014
+issue 405(googlecode) - enable_content_md5 Input/output error
+issue #14 - s3fs -u should return 0 if there are no lost multiparts
+issue #16 - empty file is written to s3
+issue #18 - s3fs crashes with segfault
+issue #22 - Fix typos in docs for max_stat_cache_size
+issue #23 - curl ssl problems
+issue #28 - Address signedness warning in FdCache::Init
+
 Version 1.76 -- Jan 21, 2014
 issue #5 - du shows incorrect usage stats
 issue #8 - version in configure.ac is 1.74 for release 1.75

README

@@ -14,16 +14,18 @@ In order to compile s3fs, You'll need the following requirements:
 * FUSE (>= 2.8.4)
 * FUSE Kernel module installed and running (RHEL 4.x/CentOS 4.x users - read below)
 * OpenSSL-devel (0.9.8)
-* Subversion
+* Git
 If you're using YUM or APT to install those packages, then it might require additional packaging, allow it to be installed.
 Downloading & Compiling:
 ------------------------
-In order to download s3fs, user the following command:
-svn checkout http://s3fs.googlecode.com/svn/trunk/ s3fs-read-only
+In order to download s3fs, download from following url:
+https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip
+Or clone the following command:
+git clone git://github.com/s3fs-fuse/s3fs-fuse.git
-Go inside the directory that has been created (s3fs-read-only/s3fs) and run: ./autogen.sh
+Go inside the directory that has been created (s3fs-fuse) and run: ./autogen.sh
 This will generate a number of scripts in the project directory, including a configure script which you should run with: ./configure
 If configure succeeded, you can now run: make. If it didn't, make sure you meet the dependencies above.
 This should compile the code. If everything goes OK, you'll be greated with "ok!" at the end and you'll have a binary file called "s3fs"
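
Taken together, the new download and build instructions amount to the following shell steps (a minimal sketch based on the README text above; the final install step is an assumption and is not part of the excerpt):

    # Clone the repository (replaces the old Subversion checkout)
    git clone git://github.com/s3fs-fuse/s3fs-fuse.git
    cd s3fs-fuse
    # Generate the build scripts, configure, and compile
    ./autogen.sh
    ./configure
    make
    # Assumed follow-up: install the resulting "s3fs" binary
    sudo make install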
@@ -49,14 +51,14 @@ You can add a fixed mount point in /etc/fstab, here's an example:
 s3fs#mybucket /mnt/s3 fuse allow_other 0 0
 This will mount upon reboot (or by launching: mount -a) your bucket on your machine.
+If that does not work, probably you should specify with "_netdev" option in fstab.
-All other options can be read at: http://code.google.com/p/s3fs/wiki/FuseOverAmazon
+All other options can be read at: https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon
 Known Issues:
 -------------
 s3fs should be working fine with S3 storage. However, There are couple of limitations:
-* There is no full UID/GID support yet, everything looks as "root" and if you allow others to access the bucket, others can erase files. There is, however, permissions support built in.
 * Currently s3fs could hang the CPU if you have lots of time-outs. This is *NOT* a fault of s3fs but rather libcurl. This happends when you try to copy thousands of files in 1 session, it doesn't happend when you upload hundreds of files or less.
 * CentOS 4.x/RHEL 4.x users - if you use the kernel that shipped with your distribution and didn't upgrade to the latest kernel RedHat/CentOS gives, you might have a problem loading the "fuse" kernel. Please upgrade to the latest kernel (2.6.16 or above) and make sure "fuse" kernel module is compiled and loadable since FUSE requires this kernel module and s3fs requires it as well.
 * Moving/renaming/erasing files takes time since the whole file needs to be accessed first. A workaround could be to use s3fs's cache support with the use_cache option.
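
The "_netdev" hint added above would be applied to the README's fstab example roughly like this (a sketch; "mybucket" and "/mnt/s3" are the README's placeholders, and combining "_netdev" with "allow_other" is an assumption):

    # /etc/fstab entry; _netdev defers mounting until the network is up
    s3fs#mybucket /mnt/s3 fuse _netdev,allow_other 0 0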

configure.ac

@@ -20,7 +20,7 @@
 dnl Process this file with autoconf to produce a configure script.
 AC_PREREQ(2.59)
-AC_INIT(s3fs, 1.76)
+AC_INIT(s3fs, 1.77)
 AC_CANONICAL_SYSTEM
 AM_INIT_AUTOMAKE()

s3fs.1

@@ -8,6 +8,9 @@ S3FS \- FUSE-based file system backed by Amazon S3
 .SS unmounting
 .TP
 \fBumount mountpoint
+.SS utility mode ( remove interrupted multipart uploading objects )
+.TP
+\fBs3fs -u bucket
 .SH DESCRIPTION
 s3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket as a local filesystem. It stores files natively and transparently in S3 (i.e., you can use other programs to access the same files).
 .SH AUTHENTICATION
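
The new utility-mode entry matches the issue #14 fix listed in the ChangeLog above; from a shell it is invoked roughly like this (a sketch; "mybucket" is a placeholder bucket name):

    # Remove interrupted multipart upload objects left in the bucket;
    # with 1.77 this returns 0 when there are no lost multiparts (issue #14)
    s3fs -u mybucket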