1) Fixed bugs
Fixed wrong prototypes (mis-coding) for the md5hexsum and md5sum functions.
Issue 361: compile time error after running #make
Issue 360: 1.72 Will not compile on Ubuntu 12.04.2 (precise) i686
Also fixed the initialization of an enum member in the S3fsCurl class.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@469 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed a bug
s3fs handled request retries incorrectly.
This is now fixed.
Issue 343 ("1.7 having curl 35 + other disconnect issue") was
probably caused by this bug.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@468 df820570-a93a-0410-bd06-b72b767a4274
1) "virtual hosted-style request" for checking the bucket
Older versions issued a "path-style request" when checking the
bucket at initialization; as of this revision, s3fs issues a
"virtual hosted-style request" instead.
This change is related to
"Operation not permitted - on any operation" (Issue 362).
2) Changed debugging message level
Changed debugging message level in prepare_url() from DPRNNN
to FPRNINFO.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@467 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed bugs
Fixed the bugs below (a format error and an undefined function):
* 1.72 Will not compile on Ubuntu 12.04.2 (precise) i686 (Issue 360)
* compile time error after running #make (Issue 361)
I will close these issues once I can confirm the problems are solved.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@466 df820570-a93a-0410-bd06-b72b767a4274
1) Set metadata "Content-Encoding" automatically(Issue 292)
For this issue, s3fs adds a new option, "ahbe_conf".
The option specifies the path to a configuration file, which maps
file (object) extensions to additional HTTP headers.
Thus you can specify any HTTP header for each object by its extension.
* ahbe_conf file format:
-----------
line = [file suffix] HTTP-header [HTTP-header-values]
file suffix = file (object) suffix; if this field is empty,
it matches "*" (all objects).
HTTP-header = additional HTTP header name
HTTP-header-values = additional HTTP header value
-----------
* Example:
-----------
.gz Content-Encoding gzip
.Z Content-Encoding compress
X-S3FS-MYHTTPHEAD myvalue
-----------
A sample configuration file is available in the "test" directory.
If the ahbe_conf parameter is specified, s3fs loads the
configuration and compares the extension (suffix) of each object
(file) when uploading (PUT/POST) it. If the extension matches,
s3fs adds the specified HTTP header and value to the request.
With the sample configuration file, if an object with the ".gz"
extension (which already has a Content-Encoding HTTP header) is
renamed to a ".txt" extension, s3fs does not set Content-Encoding,
because ".txt" does not match any line in the configuration file.
That is, s3fs re-matches the extension on every PUT/POST action.
* Please take care with "Content-Encoding".
This new option allows setting ANY HTTP header by object extension.
For example, you can specify "Content-Encoding" for the ".gz" (etc.)
extension in the configuration. But this means that S3 always
returns "Content-Encoding: gzip", even when a client requests with
a different "Accept-Encoding:" header. That is probably not what
you want.
Please see RFC 2616.
2) Changes to the allow_other/uid/gid options for the mount point
I reviewed the mount point permissions and the allow_other/uid/gid
options, and found bugs in them.
These bugs are fixed, and the behavior is changed to the following
specifications:
* s3fs only allows the uid (gid) option to be 0 (root) when the
effective user is root.
* The mount point (directory) must have permissions that allow
access by the effective user/group.
* If the allow_other option is specified, the mount point
permission is set to 0777 (all users are allowed all access).
Otherwise, the mount point is set to 0700 (only the effective
user is allowed).
* When the uid/gid option is specified, the mount point owner/group
is set to the option value.
If uid/gid is not set, the effective user/group id is used.
These changes may fix some issues (321, 338).
3) Changed the logic for Issue 229
The chmod command returns -EIO when changing the mount point.
That is correct: s3fs cannot change the owner/group/mtime of the
mount point, but it still sent a request to change the bucket.
This revision does not send the request, and returns EIO as early
as possible.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@465 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixed bugs and changed code - r448, r451, r452, r453, r454,
r455, r460, r461
- Fixed the umask option so that it works correctly. (Issue 346)
- Added a new S3fsCurl class wrapping the curl functions.
- Deleted the unused YIKES macro.
- Used memcpy instead of copying byte by byte while downloading.
- Fixed a bug: s3fs did not use servicepath when renaming.
- Fixed and changed the "use_sse"/"use_rrs" options. (Issue 352)
- Fixed a memory leak in the curl_slist_sort_insert() function.
- Fixed a memory leak when a multipart upload failed.
- Supported the mknod function. (Issue 355)
- Simplified the debugging macros.
2) Changed code for performance and added "multireq_max" - r449
Changed the order of checks for directory objects.
Added the "multireq_max" option, which sets the maximum number of
parallel requests for listing objects.
3) Performance tuning - r456, r457, r458, r459
- Large objects are now uploaded/downloaded with parallel requests.
- Added the "parallel_count"/"fd_page_size" options.
- No temporary file is made when uploading a large object by
multipart uploading.
- Changed the handling of temporary files and local cache files,
and added a status file for each local cache file.
- Use the "Range" header for block downloading.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@462 df820570-a93a-0410-bd06-b72b767a4274
1) Patch in support for special file and block device types (Issue 355)
With the patched code, s3fs can create special files on S3.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@460 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed a bug
Since r458, s3fs uses stat files for its cache files, but it forgot
to remove these stat files when removing the cache files.
Fixed this bug.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@459 df820570-a93a-0410-bd06-b72b767a4274
* Summary
This revision includes a big change in the handling of temporary
files and local cache files.
With this change, s3fs performs well when it opens/closes/syncs/
reads objects.
* Detail
1) About temporary files (local files)
s3fs uses a temporary file on the local file system when it
downloads/uploads/opens/seeks an object on S3.
As of this revision, s3fs calls the ftruncate() function when it
makes the temporary file.
This way, s3fs can set the file to exactly the right length without
downloading anything.
(Notice: the ftruncate function is for XSI-compliant systems, so
you may have a problem on non-XSI-compliant systems.)
With this change, s3fs can download a part of an object by
requesting it with the "Range" HTTP header; in effect it downloads
in block units.
The default block (part) size is 50MB, which comes from the default
parallel request count (5) multiplied by the default multipart
upload size (10MB).
If you need to change this block size, use the new option
"fd_page_size". This option accepts any value from 1MB
(1024 * 1024) upward.
Note that fdcache.cpp (and fdcache.h) were changed a lot.
2) About the local cache
Local cache files in the directory specified by the "use_cache"
option do not always contain all of the object's data.
This is because s3fs uses the ftruncate function and reads (writes)
the temporary file in block units.
s3fs tracks the status of each block unit as "downloaded" or "not
downloaded".
For this status, s3fs makes a new status file in the cache
directory specified by the "use_cache" option; the status files
live in a directory named "<use_cache directory>/.<bucket_name>/".
When s3fs opens a status file, it locks the file for exclusive
control by calling the flock function. Take care: the status files
cannot be placed on a network drive (like NFS).
This revision also changes the file open mode: s3fs always opens a
local cache file and each status file in writable mode.
Finally, this revision adds the new option "del_cache", which makes
s3fs delete all local cache files when s3fs starts and exits.
3) Uploading
When s3fs writes data to a file descriptor through a FUSE request,
old revisions downloaded the whole object. The new revision does
not; it downloads only the small partial area (some block units)
covering the data being written.
When s3fs closes or flushes the file descriptor, it downloads the
remaining areas from the server and then uploads all of the data.
r456 already added the parallel upload function, so this revision
together with r456 and r457 is a very big change for performance.
4) Downloading
Thanks to the temporary file and local cache changes, when s3fs
downloads an object it fetches only the required range (some block
units), using parallel GET requests just as for uploading. (The
maximum parallel request count and each download size are
controlled by the same parameters as for uploading.)
In the new revision, when s3fs opens a file, it returns the file
descriptor immediately, because it only creates (opens) the file
descriptor without downloading any data. When s3fs reads data, it
downloads only the block units covering the requested area.
This is good for performance.
5) Changed the option name
The "parallel_upload" option added in r456 is renamed to
"parallel_count", because its value is used not only for uploading
objects but for downloading them as well. (For a while, the old
option name "parallel_upload" can still be used for compatibility.)
git-svn-id: http://s3fs.googlecode.com/svn/trunk@458 df820570-a93a-0410-bd06-b72b767a4274
1) For uploading performance (part 2)
Changed the code for uploading large objects (multipart uploading).
This revision no longer makes a temporary file when s3fs uploads a
large object by multipart uploading.
Before this revision, s3fs made a temporary file (/tmp/s3fs.XXXXX)
for multipart uploads, but this was not good for performance.
The new code does not use those files; s3fs reads the large object
directly from its cache file.
2) Some values to symbols
Changed some literal values to symbols (defines).
git-svn-id: http://s3fs.googlecode.com/svn/trunk@457 df820570-a93a-0410-bd06-b72b767a4274
1) For uploading performance (part 1)
Changed the code for large object uploading.
The new code makes s3fs send parallel requests when it uploads a
large object (over 20MB) by multipart POST.
Also added the new "parallel_upload" option, which limits the
number of parallel requests s3fs issues at once.
The default value is "5". You can change it, but the right value
depends on your CPU and network bandwidth.
s3fs performs well with this option, so please try setting a value
that suits your environment.
2) Changed debugging messages
Changed debugging messages in s3fs.cpp.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@456 df820570-a93a-0410-bd06-b72b767a4274
1) Changed code in the PutRequest function
Changed the S3fsCurl::PutRequest function to duplicate the file
descriptor inside the function.
2) Changed debugging messages
Changed the debugging messages' indentation in the curl.cpp functions.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@455 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed a bug
When an error occurred in the multipart uploading process, s3fs
forgot to free memory (introduced in r451).
Fixed this bug.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@454 df820570-a93a-0410-bd06-b72b767a4274
1) Option syntax verbosity in doc ( Issue 352 )
Before this revision, the "use_rrs" option required a parameter,
like the "use_sse" option.
But this option does not need one: specifying "use_rrs" simply
enables RRS (which is disabled by default).
As of this revision, the "use_rrs" option can be specified without
a parameter, and so can "use_sse".
Changed the code, man page and help page.
Please note that, for compatibility, "use_rrs" (and "use_sse") can
still be specified with a parameter ("1" or "0").
2) Fixed a bug in parsing the "use_sse" option.
Fixed a bug from r451: "use_sse" did not work because s3fs
mistakenly called the function for "use_rrs".
3) Fixed a memory leak.
Fixed a memory leak from r451: the curl_slist_sort_insert()
function forgot to free memory.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@452 df820570-a93a-0410-bd06-b72b767a4274
1) Added new S3fsCurl class
Added a new S3fsCurl class instead of calling the curl functions
directly.
This class wraps the curl functions for s3fs (the AWS S3 API).
2) Changed code around the new S3fsCurl class
Changed and deleted the classes and structures related to curl in
curl.cpp/curl.h.
Changed the code that calls the S3 API with curl in s3fs.cpp.
3) Deleted YIKES macro
Deleted the YIKES macro, because it is no longer used.
4) Changed code
s3fs did not get good performance because it copied data byte by
byte while downloading.
The code now uses memcpy instead, which noticeably improves s3fs
performance.
5) Fixed a bug
When s3fs renamed a file, it did not use the value specified by the
servicepath option.
Fixed this bug.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@451 df820570-a93a-0410-bd06-b72b767a4274
1) Changed FdCache class (code cleanup)
The FdCache class caches file descriptors.
The class is modified to add a reference count for each file
descriptor and to remove the pid from each path.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@450 df820570-a93a-0410-bd06-b72b767a4274
1) Changed code for performance and request costs
s3fs gets an object's attributes by using a HEAD request.
Directory objects come in the following 4 types:
a) name type is "dir", with meta information
b) name type is "dir", without meta information (but has files under itself)
c) name type is "dir/", with(out) meta information
d) name type is "dir_$folder$", with(out) meta information
The code is changed to check directory objects in this order,
so s3fs issues fewer requests when checking objects.
The previous version had a bug: s3fs could not reliably recognize
type b) when checking the object directly (though it could when
checking the object by listing).
This change fixes that bug.
2) Added "multireq_max" option
Added the "multireq_max" option, which sets the maximum number of
parallel requests for listing objects.
This change may solve CURLE_COULDNT_CONNECT errors.
If it does not, the option is still useful for tuning performance
on each system.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@449 df820570-a93a-0410-bd06-b72b767a4274
1) Not recognizing group permissions( Issue 346 )
Fixed the umask option so that it works correctly.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@448 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Added use_sse option (Issue 226) - r438, r439
Supports SSE (Server-Side Encryption) and adds the "use_sse"
option. (Issue 226)
2) Fixed a bug (Issue 342) - r440
Fixed the bug "Segmentation fault on connect on ppc64".
The third parameter of curl_easy_getinfo() was wrong.
3) Fixed a bug (Issues 343, 235, 257, 265) - r441
Fixed the "SSL connect error" bug.
r434 could not fix it completely (the fix was mistaken).
4) Fixed bugs and changed code - r442, r444
- Fixed a bug where temporary files were not removed on error.
- Fixed the curl_share function prototype.
- Changed one piece of code for the "-d" option.
- Changed the head_data struct members.
- Fixed the calling position of the curl_global_init and
curl_share_init functions.
- Fixed an uninitialized struct tm.
- Fixed access to a freed variable.
- Changed direct use of curl_slist to the auto_curl_slist class.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@445 df820570-a93a-0410-bd06-b72b767a4274
1) Don't use curl_slist directly
s3fs has the auto_curl_slist struct, but some functions in s3fs.cpp
used curl_slist directly.
Changed those functions to use auto_curl_slist.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@444 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed a bug (forgot to remove temporary files)
When s3fs got an error from fwrite in the multipart uploading
function, it did not remove the temporary file.
2) Fixed a bug (wrong function prototype)
The prototype of the function for CURLSHOPT_UNLOCKFUNC was wrong.
3) Changed code
- In the my_curl_easy_perform function, the debugging-message code
is changed so that it does not run when the "-d" option is not
specified.
- Changed the struct head_data member variables, and some code for
these changes.
- Moved the calls to the curl_global_init and curl_share_init
functions into main, because these functions must be called in the
main thread.
4) Fixed a bug (use of uninitialized memory)
The get_lastmodified function did not initialize its value
(a struct tm).
5) Fixed a bug (access to a freed variable)
The readdir_multi_head function accessed a variable that was
already freed.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@442 df820570-a93a-0410-bd06-b72b767a4274
1) Segmentation fault on connect on ppc64( Issue 342 )
The third parameter of curl_easy_getinfo() was wrong.
It must be a "long", but a "CURLcode" was passed.
Fixed this issue.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@440 df820570-a93a-0410-bd06-b72b767a4274
1) Patch adding support for SSE( Issue 226 )
Forgot to change the error-handling code for when both the use_sse
and use_rrs options are specified.
(r438 + this fix = Issue 226 fixed)
git-svn-id: http://s3fs.googlecode.com/svn/trunk@439 df820570-a93a-0410-bd06-b72b767a4274
1) Patch adding support for SSE( Issue 226 )
Supports SSE(Server-Side Encryption) and adds "use_sse" option.
* Specifications
When the "use_sse" option is specified as "1", s3fs adds the
"x-amz-server-side-encryption" header with the value "AES256".
It does so only when uploading (writing) objects.
When you run chmod/chown/chgrp/touch/mv commands, s3fs does not
add this header; it inherits the SSE mode from the original object.
* Notice
The "use_sse" option cannot be specified together with "use_rrs",
because S3 returns a signature error.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@438 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixed bugs - r428
* Fixed a bug where the object owner/group was set to a wrong id.
* Fixed the permission of the mount point when the allow_other
option is specified.
* Fixed a bug about permissions:
when the directory permission was 0557, another user (who is not
the owner and not in the same group) got a permission error when
making a file or directory in that dir.
* Fixed a bug (Issue 340)
Fixed a compile error about "blkcnt_t".
2) Fixed a bug (Issue 304) - r429
Changed s3fs to always use its own DNS cache, and added the
"nodnscache" option. If "nodnscache" is specified, s3fs does not
use the DNS cache, as before. libcurl keeps the DNS cache for
60 seconds by default.
3) Fixed a bug (Issue 235) - r430
Fixed a CURLE_COULDNT_CONNECT error when s3fs reads object
header information.
* The maximum number of requests in one curl_multi request is 500,
and s3fs loops over curl_multi calls.
* Requests that fail with CURLE_COULDNT_CONNECT are retried.
4) Fixed a bug - r431
Fixed a bug where all multi head requests failed when mounting
bucket+path.
5) Fixed a bug (Issue 241) - r432
Changed the code so that s3fs returns the size from the opened
file descriptor if the client has already opened the file.
6) Fixed a bug - r433
The package tarball included doc/Makefile, which is not necessary
for the tarball.
7) Fixed bugs (Issue 235, Issue 257, Issue 265) - r434
Fixed "SSL connect error", so s3fs can connect over SSL with no
problem.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@435 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed "SSL connect error" (curl error 35)
Fixed "SSL connect error", so s3fs can connect over SSL with no
problem.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@434 df820570-a93a-0410-bd06-b72b767a4274
1) Fixes a bug
The package tarball included doc/Makefile, which is not necessary for the tarball.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@433 df820570-a93a-0410-bd06-b72b767a4274
1) Problems with fseek and s3fs (Issue 241)
The problem: s3fs returned the file stat (size) from before a
change when the client already had an open file descriptor and
modified the file before saving the fd.
So the client added bytes to the file, but s3fs_getattr() returned
the original size from before the change.
Changed the code so that s3fs returns the size from the opened
file descriptor if the client has already opened the file.
* Changed s3fs.cpp
* Added fdcache.cpp and fdcache.h
git-svn-id: http://s3fs.googlecode.com/svn/trunk@432 df820570-a93a-0410-bd06-b72b767a4274
1) Fixes a bug
When the mount point is specified with a sub-directory (mounting
with "bucket:/path"), all curl_multi head requests in the
s3fs_readdir() function failed internally.
The reason is that the head curl_multi requests were not given the
mount path.
This bug is fixed.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@431 df820570-a93a-0410-bd06-b72b767a4274
1) Problems using encrypted connection to s3(Issue 235)
In the s3fs_readdir() function, s3fs got a CURLE_COULDNT_CONNECT
error when it read object header information.
The problem was probably too many requests in one curl_multi request.
The s3fs code is changed as follows:
* the maximum number of requests in one curl_multi request is 500,
and s3fs loops over curl_multi calls.
* requests that fail with CURLE_COULDNT_CONNECT are retried.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@430 df820570-a93a-0410-bd06-b72b767a4274
1) s3fs should cache DNS lookups?(Issue 304)
Changed s3fs to always use its own DNS cache, and added the
"nodnscache" option.
If "nodnscache" is specified, s3fs does not use the DNS cache, as
before.
libcurl keeps the DNS cache for 60 seconds by default.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@429 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed a bug where the object owner/group was set to a wrong id.
When running chown (chgrp) without a group (owner), s3fs set a
wrong id (-1) for the group (owner).
Fixed this bug.
2) The permission of the mount point when the allow_other option is
specified.
When the allow_other option is specified, s3fs forces the mount
point directory permission to include the executable bits
(mode | 0111).
3) Fixed a bug about permissions
For example, when the directory permission was 0557, another user
(who is not the owner and not in the same group) got a permission
error when making a file or directory in that dir.
Fixed this bug.
4) Compile error: blkcnt_t (Issue 340)
Fixed the compile error about blkcnt_t. (Issue 340)
git-svn-id: http://s3fs.googlecode.com/svn/trunk@428 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixed a bug - r418, r419
Fixed a bug that s3fs failed to get stats when the
max_stat_cache_size option was set to 0.
2) Fixed a bug (Issue 291) - r420
Fixed Issue 291, "Man file has wrong permissions for passwd file",
and changed the man page.
Checks the passwd file permissions strictly.
Fixed a bug where s3fs continued to run after finding invalid
passwd file permissions.
3) Added enable_noobj_cache option for non-existing objects - r420, r421
Added the enable_noobj_cache option (disabled by default) for
performance.
When this option is specified, s3fs records in the stat cache that
the object (file or directory) does not exist.
4) Fixed a bug (Issue 240) - r421
Fixed Issue 240, "Cannot Mount Path in Bucket".
Changed the man page.
5) Supported s3sync'ed objects (Issue 31) - r422
Supports the HTTP headers made by s3sync, namely x-amz-meta-owner,
x-amz-meta-permissions and x-amz-meta-group.
6) Added enable_content_md5 option - r423
Added the enable_content_md5 option (disabled by default).
If "enable_content_md5" is specified, s3fs puts the object with a
"Content-MD5" header when sending a small object (under 20MB).
7) Supported uid/gid options - r424
Supports the uid/gid mount (fuse) options.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@425 df820570-a93a-0410-bd06-b72b767a4274
1) Supports uid/gid options
The fuse (and mount) options "uid" and "gid" are supported.
2) Fixes some issues
Now that these options are supported, some permission/access
issues are solved, e.g. "File permissions 000" (Issue 337).
git-svn-id: http://s3fs.googlecode.com/svn/trunk@424 df820570-a93a-0410-bd06-b72b767a4274
1) Adds enable_content_md5 option
When s3fs uploads a large object (over 20MB), it always checks the
ETag (MD5) in each multipart response.
But for small objects, s3fs did not check the MD5.
This new option enables MD5 checking for uploaded objects.
If the "enable_content_md5" option is specified, s3fs puts the
object with a "Content-MD5" header.
MD5 checking is not the default, because it increases the user's
CPU usage somewhat.
(The default value may change in the future.)
git-svn-id: http://s3fs.googlecode.com/svn/trunk@423 df820570-a93a-0410-bd06-b72b767a4274
1) s3sync'ed files not supported(Issue 31)
Supports the HTTP headers made by s3sync.
The newly supported headers are x-amz-meta-owner,
x-amz-meta-permissions and x-amz-meta-group.
s3fs reads and understands these headers, but gives priority to
its own headers over them.
2) Cleaned up code
Cleaned up some code related to Issue 31.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@422 df820570-a93a-0410-bd06-b72b767a4274
1) Cannot Mount Path in Bucket(Issue 240)
Changed the man page for this issue ("bucket[:path]" ->
"bucket[:/path]").
Also, s3fs did not work with a mount path; this is fixed.
2) Fixed another bug about renaming directories
Fixed a bug introduced by r420 which made renaming a directory fail.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@421 df820570-a93a-0410-bd06-b72b767a4274
1) Man file has wrong permissions for passwd file (Issue 291)
Fixed the man page's wrong permissions for the passwd file.
2) Fixed a bug and strictly check the passwd file permissions.
* Fixed a bug in checking the passwd file permissions:
s3fs continued to run after finding invalid passwd file
permissions.
* Checks the passwd file strictly.
Before this revision, s3fs allowed executable permission on a
passwd file, and allowed group-writable permission on a passwd
file other than "/etc/passwd-s3fs".
The new s3fs checks permissions strictly: /etc/passwd-s3fs may be
owner readable/writable and group readable, and a passwd file
other than "/etc/passwd-s3fs" may only be owner readable/writable.
3) Added enable_noobj_cache option for non-existing objects.
s3fs v1.68 always had to check whether a file (or sub-directory)
exists under an object (path) when running commands, since s3fs
recognizes directories that do not exist themselves but have files
or sub-directories under them.
This increases ListBucket requests and hurts performance.
For performance, if the enable_noobj_cache option is specified,
s3fs records in the stat cache that the object (file or directory)
does not exist.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@420 df820570-a93a-0410-bd06-b72b767a4274
1) Fixes a bug
When the option max_stat_cache_size=0 is specified, s3fs fails to get file attributes.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@418 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixed a bug (Issue 320) - r408, r411, r412
Fixed Issue 320, "Spurious I/O errors"
(http://code.google.com/p/s3fs/issues/detail?id=320)
When s3fs got an error in the upload loop, it tried to re-upload
without seeking the fd. This was the bug behind the issue.
Please see the details in r408, r411 and r412.
2) Fixed a bug (Issue 293) - r409
Fixed Issue 293, "Command line argument bucket: causes segv"
(http://code.google.com/p/s3fs/issues/detail?id=293)
If a bucket name ending in ":" was specified, s3fs crashed.
Please see the details in r409.
3) Supported an option (Issue 265) - r410
For Issue 265, "Unable to mount to a non empty directory"
(http://code.google.com/p/s3fs/issues/detail?id=265)
Supported the "nonempty" fuse/mount option.
Please see the details in r410.
4) Supported other S3 clients (Issue 27) - r413, r414
For Issue 27, "Compatability with other S3FS clients":
*** "_$folder$" dir objects
Supports directory objects made by s3fox, whose names have a
"_$folder$" suffix. On s3fs, such a directory object is listed
under the normal directory name without "_$folder$".
Please be careful when you change object attributes (rename, chmod,
chown, touch), because s3fs remakes the directory object without
the "_$folder$" suffix. This means the object is re-made by s3fs.
*** directories with no object
Supports directories that have no object of their own.
If there is an object whose name contains "/" (e.g.
"<bucket>/dir/file"), the directory ("dir") may have no object.
For example, you can upload an object named "s3://bucket/dir/file"
with s3cmd (or another S3 client); in that case "dir" is not an
object in the bucket.
This s3fs version understands this case.
Please be careful when you change object attributes (rename, chmod,
chown, touch), because s3fs makes a new directory object.
Please see the details in r413 and r414.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@415 df820570-a93a-0410-bd06-b72b767a4274
1) Feature Request: Compatability with other S3FS clients(Issue: 27)
Reworked the source code.
2) For other S3 clients
Supports directories that have no object of their own.
If there is an object whose name contains "/" (e.g.
"<bucket>/dir/file"), the directory ("dir") may have no object.
For example, you can upload an object named "s3://bucket/dir/file"
with s3cmd; then "dir" is not an object in the bucket.
This s3fs code understands this case.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@414 df820570-a93a-0410-bd06-b72b767a4274
1) Feature Request: Compatability with other S3FS clients(Issue: 27)
Supported directory objects made by s3fox, whose names have a
"_$folder$" suffix.
On s3fs, such a directory object is listed under the normal
directory name without "_$folder$".
Be careful when you change object attributes (rename, chmod, chown,
touch): s3fs remakes the directory object that had the "_$folder$"
suffix, and after the change the object name no longer has the
suffix.
This means the object is remade by s3fs.
2) Other
Fixed bugs found while fixing this issue.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@413 df820570-a93a-0410-bd06-b72b767a4274
1) Changed fread/fwrite calling logic (Issue 320)
In conjunction with this issue, the opened file descriptor is now
rewound after reading/writing.
The put_local_fd() and get_localfd() functions always return a
rewound fd.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@412 df820570-a93a-0410-bd06-b72b767a4274
1) Changed fread/fwrite calling logic (Issue 320)
When the rsync command runs, the s3fs functions are called in the
order s3fs_create, s3fs_truncate, s3fs_flush.
After s3fs_truncate uploaded the file, s3fs_flush uploaded it again
without rewinding the fd.
This was the bug behind the issue: s3fs_flush read EOF and raised
an error.
The code is changed to call lseek and seek the fd to the head of
the file before fread/fwrite.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@411 df820570-a93a-0410-bd06-b72b767a4274
1) Unable to mount to a non empty directory(Issue: 265)
Supported the "nonempty" fuse option.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@410 df820570-a93a-0410-bd06-b72b767a4274
1) Command line argument bucket: causes segv(Issue: 293)
If a bucket name ending in ":" is specified, s3fs runs and crashes
(segv).
This bug is fixed.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@409 df820570-a93a-0410-bd06-b72b767a4274