For ListBucketResult on an empty directory, AWS S3 and S3Proxy 1.4
differ: AWS matches the directory name itself, S3Proxy does not.
Changing max-keys=1 to max-keys=2 works for both implementations;
append_objects_from_xml() swallows the directory key either way. The
log level of this message is changed from ERROR to DBG.
Fixes #345
When the prefetch size is capped at the multipart size, the parallel
logic in the read path never gets a chance to issue parallel GETs.
This fix significantly increases read performance against our own
on-premise S3 solution.
This allows retries of multipart uploads instead of discovering a
fatal error during complete multipart upload. Also enable Content-MD5
for integration tests and refactor the hexadecimal code.
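A minimal sketch of how a Content-MD5 value can be computed with
OpenSSL (the header is the base64 encoding of the raw 16-byte MD5
digest, per RFC 1864; the helper name is illustrative, not s3fs's
actual code):

    #include <openssl/md5.h>
    #include <openssl/evp.h>
    #include <cstddef>
    #include <string>

    // Base64-encode the raw MD5 digest of the request body.
    std::string make_content_md5(const unsigned char* body, size_t len)
    {
        unsigned char digest[MD5_DIGEST_LENGTH];
        MD5(body, len, digest);

        // EVP_EncodeBlock emits 4 output bytes per 3 input bytes,
        // plus a terminating NUL.
        unsigned char b64[((MD5_DIGEST_LENGTH + 2) / 3) * 4 + 1];
        EVP_EncodeBlock(b64, digest, MD5_DIGEST_LENGTH);
        return std::string(reinterpret_cast<char*>(b64));
    }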
Previously AutoLock::Lock allowed subsequent callers to proceed
without the lock. Further, is_locked was not always protected by
auto_mutex. Finally, AutoLock eagerly released auto_mutex when
recursively unlocking. s3fs does not need recursive locks, so we
rewrite and simplify AutoLock. Partially surfaced by Coverity.
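A minimal sketch of the simplified, non-recursive design (assuming
pthreads; illustrative, not the exact rewrite):

    #include <pthread.h>

    // Scoped lock: the constructor blocks until the mutex is held and
    // the destructor always releases it, so a second caller can never
    // proceed without the lock.
    class AutoLock
    {
    public:
        explicit AutoLock(pthread_mutex_t* mutex) : auto_mutex(mutex)
        {
            pthread_mutex_lock(auto_mutex);
        }
        ~AutoLock()
        {
            pthread_mutex_unlock(auto_mutex);
        }
    private:
        AutoLock(const AutoLock&);             // non-copyable
        AutoLock& operator=(const AutoLock&);
        pthread_mutex_t* auto_mutex;
    };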
These system calls take an extra 'position' parameter on OS X. A
non-zero position value is only valid for resource forks (the Darwin
VFS layer will reject anything else with EINVAL); this patch simply
adds and ignores the parameter on Apple platforms.
Allows building against OSXFUSE.
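A minimal sketch of the pattern, using setxattr as the example (the
handler body is illustrative, not the exact s3fs source):

    #include <stdint.h>
    #include <cstddef>

    #if defined(__APPLE__)
    static int s3fs_setxattr(const char* path, const char* name,
                             const char* value, size_t size,
                             int flags, uint32_t position)
    #else
    static int s3fs_setxattr(const char* path, const char* name,
                             const char* value, size_t size, int flags)
    #endif
    {
    #if defined(__APPLE__)
        // Only meaningful for resource forks; the Darwin VFS layer
        // rejects other non-zero values with EINVAL before the
        // filesystem sees them, so the parameter is simply ignored.
        (void) position;
    #endif
        /* ... common implementation ... */
        return 0;
    }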
This also encodes asterisk and tilde correctly when listing a file
with a V4 auth endpoint. Also add tests for special characters,
although s3proxy does not yet support V4 auth.
Fixes #188. Fixes #194.
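A minimal sketch of V4-safe encoding (per RFC 3986 only A-Z, a-z,
0-9, '-', '_', '.', and '~' are left unencoded, so '*' becomes %2A
while '~' passes through; illustrative, not s3fs's actual encoder):

    #include <cctype>
    #include <cstdio>
    #include <string>

    static std::string urlEncodeV4(const std::string& s)
    {
        std::string result;
        for (std::string::size_type i = 0; i < s.length(); ++i) {
            unsigned char c = s[i];
            if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~') {
                result += static_cast<char>(c);
            } else {
                char buf[4];
                snprintf(buf, sizeof(buf), "%%%02X", c);  // e.g. '*' -> "%2A"
                result += buf;
            }
        }
        return result;
    }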
Change the minimum required FUSE version, and change the configure
checks to use a variable for the minimum FUSE version instead of
hardcoding it in four different places.
src/s3fs.cpp: Use the __APPLE__ define around FUSE code that is
offensive to osxfuse. Not including that code doesn't seem to matter.
Buckets with mixed-case names can't be accessed with the virtual-hosted
style API due to DNS limitations. S3FS has an option for
pathrequeststyle which is used for the URL, but it was not applied when
building the endpoint passed through the Host header. Fix this, and
relax the validation on bucket names when using this style.
See: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
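A minimal sketch of the Host header fix (names follow the commit
message; illustrative):

    #include <string>

    // Virtual-hosted style prepends the bucket to the endpoint; path
    // style leaves the endpoint alone and carries the bucket in the
    // request path instead.
    static std::string build_host(const std::string& bucket,
                                  const std::string& host,
                                  bool pathrequeststyle)
    {
        return pathrequeststyle ? host : bucket + "." + host;
    }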
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Query parameters need a trailing = for V4 signatures. Send the correct
content-sha256, although Amazon does not seem to enforce this for
zero-length bodies. Finally, remove a stale comment. Fixes #133.
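A minimal sketch of both rules (illustrative helper; the hash constant
is the well-known SHA-256 of the empty string):

    #include <string>

    // x-amz-content-sha256 value for a zero-length body.
    static const char EMPTY_PAYLOAD_SHA256[] =
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

    // In the canonical request, a value-less parameter such as
    // "uploads" must still be written with a trailing '='.
    static std::string canonical_query(const std::string& key,
                                       const std::string& value)
    {
        return key + "=" + value;   // yields "uploads=" when value is empty
    }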
Nautilus does this when you drag and drop to overwrite a file:
1) create .goutputstream-XXXXXX to write to
2) fsync the fd for .goutputstream-XXXXXX
3) rename .goutputstream-XXXXXX to target file
4) close the fd for .goutputstream-XXXXXX
Previously, doing this on s3fs resulted in an empty target file,
because after the rename s3fs would not flush the content of
.goutputstream-XXXXXX to the target file.
This change moves the FdEntity from the old path to the new path
whenever a rename happens. On flush, s3fs now flushes the correct
content to the rename target (a sketch follows).
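A minimal sketch of the re-keying (the map layout is an assumption,
not s3fs's actual FdManager internals):

    #include <map>
    #include <string>

    class FdEntity;  // s3fs's in-memory handle for an open file

    typedef std::map<std::string, FdEntity*> fdent_map_t;

    // On rename, move the cached entity from the old path to the new
    // one so that a later flush writes the pending content to the
    // rename target.
    static void rename_cached_entity(fdent_map_t& fent,
                                     const std::string& from,
                                     const std::string& to)
    {
        fdent_map_t::iterator it = fent.find(from);
        if (it != fent.end()) {
            FdEntity* ent = it->second;
            fent.erase(it);
            fent[to] = ent;   // flush now targets the new path
        }
    }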
Subsequent commits will use this infrastructure. Also reparent
prepare_url, which relies on the unrelated bucket, foreground2, and
pathrequeststyle symbols.
The space causes a signature mismatch when using an "ahbe_conf" file
to add additional headers. When S3 uses the "x-amz-*" headers to
calculate the signature, the format is as follows:
PUT
application/octet-stream
Wed, 05 Nov 2014 03:05:08 GMT
x-amz-acl:private
x-amz-meta-gid:0
x-amz-meta-mode:33188
x-amz-meta-mtime:1415156708
x-amz-meta-uid:0
There is no space after the colon.
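A minimal sketch of the canonical form (illustrative; not s3fs's
actual signing code):

    #include <string>

    // Build an x-amz-* line for the string to sign, stripping any
    // whitespace around the value so the line is exactly "header:value".
    static std::string canonical_amz_line(const std::string& key,
                                          const std::string& value)
    {
        std::string::size_type b = value.find_first_not_of(" \t");
        std::string::size_type e = value.find_last_not_of(" \t");
        std::string trimmed = (b == std::string::npos)
                            ? std::string()
                            : value.substr(b, e - b + 1);
        return key + ":" + trimmed;
    }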
When the scheme is omitted in URL overriding (for example `example.com`
instead of `https://example.com`), s3fs modifies the URL by inserting
`s3.` in the middle of the name (`examples3..com`).
This can be a bit difficult to troubleshoot, and curl seems to handle
scheme-less requests just fine. So, just handle this case correctly.
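A minimal sketch of the check (illustrative):

    #include <string>

    // Only rewrite the endpoint when a scheme is present; a scheme-less
    // override such as "example.com" is handed to curl unchanged rather
    // than having "s3." spliced into the host name.
    static bool has_scheme(const std::string& url)
    {
        return url.compare(0, 7, "http://") == 0 ||
               url.compare(0, 8, "https://") == 0;
    }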
Previously S3fsMultiCurl::MultiRead did not report read errors since
it did not treat failed callback setup as a fatal operation error.
Failed callback setups usually result from exceeding the number of
allowed retries. Previously cp did not report an error during a
network outage but now does:
$ cp ~/s3-path/s3-file .
cp: error reading ‘/home/gaul/s3-path/s3-file’: Input/output error
cp: failed to extend ‘./s3-file’: Input/output error
Rather than using virtual host style requests, path style requests can be used
instead.
i.e., rather than bucketname.s3.amazonaws.com/..., s3fs will request
s3.amazonaws.com/bucketname/...
This is useful for S3 compatible APIs which don't support the virtual host style
request.
It is enabled with the new option, `use_path_request_style`.
Example:
/usr/bin/s3fs data ~/netcdf -o url="https://swift.rc.nectar.org.au:8888/" -o use_path_request_style -o allow_other -o uid=500 -o gid=500
1) Changed condition for retrying multipart error
When a multipart request fails, a 404 response is no longer retried;
all other error cases are retried.
2) Fixed wrong file type
The file type of fdcache.h was wrong, so it has been fixed.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@498 df820570-a93a-0410-bd06-b72b767a4274
* Fixed a bug
Fixes an infinite loop that occurred when s3fs listed a directory
containing directory objects that were not real objects (no
information).
This bug was introduced in r493 and reported in issue 389.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@497 df820570-a93a-0410-bd06-b72b767a4274
1) Changed buffer type for file size
Changes an internal buffer type from size_t to off_t.
This was a bug in 32-bit OS environments.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@496 df820570-a93a-0410-bd06-b72b767a4274
1) Overflow
For files over 4GB, the value of st_size (a member of the stat
structure) overflowed.
Fixed this bug, and fixed similar bugs throughout the sources (see
the sketch below).
2) Changed retrying request
If s3fs gets a 500 HTTP status for a multipart request, s3fs retries
the same request.
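A hedged illustration of the truncation (assuming the build obtains a
64-bit off_t, e.g. via -D_FILE_OFFSET_BITS=64):

    #include <sys/types.h>
    #include <stdint.h>

    void size_demo()
    {
        off_t    size64 = (off_t)5 * 1024 * 1024 * 1024;  // 5GB, safe in 64 bits
        uint32_t size32 = (uint32_t)size64;               // silently wraps to 1GB
        (void) size32;
    }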
git-svn-id: http://s3fs.googlecode.com/svn/trunk@495 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed bugs
* Rename objects
Fixes s3fs specifying the wrong part number in a multipart rename.
s3fs now also adds the x-amz-acl and x-amz-server-side-encryption
headers when renaming objects.
2) Changed retry logic for multipart uploading (and renaming)
Sometimes s3fs gets a 400 HTTP response for one of the parts from
S3 when uploading a large object by multipart.
The new logic retries uploading a failed part up to the "retries"
option count.
3) Added an action in utility mode
s3fs has had a utility mode for displaying the raw result of the REST
listing of multipart uploads.
That raw result (XML) is now shown as a list, after which s3fs starts
an interactive dialog for removing entries.
You can then remove objects left over from failed multipart uploads
and no longer pay for them.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@493 df820570-a93a-0410-bd06-b72b767a4274
* Fixed a bug
Fixes a bug where a retried multipart POST request was never
completed.
This was reported in Issue 371#32.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@492 df820570-a93a-0410-bd06-b72b767a4274
* Fixed a bug
Fixes a coding mistake in the retry of a multipart POST.
This was reported in Issue 371#28.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@491 df820570-a93a-0410-bd06-b72b767a4274
1) Supported IAM role
Supports using an IAM role via an option, instead of an
AccessKeyID/SecretAccessKey pair.
Adds a new option "iam_role", which takes the IAM role name
(like s3fs-c).
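For example (bucket name, mount point, and role name are hypothetical):
/usr/bin/s3fs mybucket /mnt/s3 -o iam_role=myrolename -o allow_other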
git-svn-id: http://s3fs.googlecode.com/svn/trunk@490 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed a bug (about public_bucket)
Fixes a bug where the public_bucket option did not work.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@489 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed a bug (about curl_off_t)
Fixes a missing cast from off_t (ssize_t) to curl_off_t when calling
curl_easy_setopt with CURLOPT_POSTFIELDSIZE and
CURLOPT_INFILESIZE_LARGE.
This forgotten cast may have caused issue 471 (failed multipart
uploading).
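A minimal sketch of the corrected call (handle and stat names are
illustrative):

    #include <curl/curl.h>
    #include <sys/stat.h>

    // The *_LARGE options are read through varargs as curl_off_t, so
    // the argument must be cast explicitly; an uncast off_t/ssize_t
    // can be mis-read on platforms where the widths differ.
    static void set_upload_size(CURL* hCurl, const struct stat& st)
    {
        curl_easy_setopt(hCurl, CURLOPT_INFILESIZE_LARGE,
                         static_cast<curl_off_t>(st.st_size));
    }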
git-svn-id: http://s3fs.googlecode.com/svn/trunk@488 df820570-a93a-0410-bd06-b72b767a4274
1) Changed debug message level
Changes the level and format of a debugging message about parallel
multipart upload in curl.cpp.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@487 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed Issue 371
Fixes a wrong return value in the s3fs_truncate function (a coding
mistake).
Issue 371: ftruncate failed
git-svn-id: http://s3fs.googlecode.com/svn/trunk@486 df820570-a93a-0410-bd06-b72b767a4274
1) Re-Fixed Issue 368
s3fs now always checks an object's stat information before opening
it. Then, if the object has been updated by another s3fs process or
another client, s3fs can detect it.
(Issue 368)1.73: Updating existing file on server 'a' does not change length
of file on server 'b'
git-svn-id: http://s3fs.googlecode.com/svn/trunk@485 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed a bug
Fixes code in s3fs.cpp that freed memory twice due to carelessness.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@484 df820570-a93a-0410-bd06-b72b767a4274
1) Changed codes about memory leaks
For memory leaks, the code below was changed.
* calls the malloc_trim function (see the sketch below).
* calls the NSS initializing function, and adds the configure
option "--enable-nss-init".
If libcurl is built with NSS, s3fs initializes NSS manually.
This NSS initialization is enabled by the "--enable-nss-init"
option at configure time; if this option is specified, the
"nss-devel" package is needed.
* calls libxml2 initialization (xmlInitParser).
* BIO functions leak memory, so CRYPTO_free_ex_data is called.
* changes the cache structure.
* changes the cache eviction logic to LRU.
* sets alignment for allocated memory in the body data structure.
* adds the SSL session to the share handle, and adds the nosscache
option.
* deletes unused allocated memory (a bug).
* changes the default parallel count of HEAD requests in readdir
(500 -> 20).
* fixes some bugs.
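A minimal sketch of the malloc_trim call mentioned above
(glibc-specific; illustrative):

    #include <malloc.h>

    // After large cache structures are freed, glibc may keep the
    // pages in its arena; malloc_trim(0) asks it to return as much
    // memory as possible to the OS.
    static void shrink_heap()
    {
        malloc_trim(0);
    }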
git-svn-id: http://s3fs.googlecode.com/svn/trunk@482 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed Issue 368
Fixed a bug where s3fs could not update the local cache.
(Issue 368)1.73: Updating existing file on server 'a' does not change length of file on server 'b'
git-svn-id: http://s3fs.googlecode.com/svn/trunk@481 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed Issue 321
Fixed a bug where the value of the umask option was not reliably
applied.
(Issue 321) no write permission for non-root user
2) Fixed a bug about utimens
Fixed a bug where the utimens function could not set values on
another user's object that is not writable.
3) Stricter option checking
s3fs options are now checked strictly.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@480 df820570-a93a-0410-bd06-b72b767a4274
1) Changed codes for initializing curl and OpenSSL
Before this change, s3fs called curl_global_init() twice, along with
curl_global_cleanup(). After reviewing this processing, s3fs now
calls curl_global_init() once.
The s3fs_check_service function, which checks that the user's bucket
exists, is called after FUSE starts, so the new processing causes no
problem and the code was updated accordingly.
As for initializing OpenSSL (CRYPTO), the old s3fs registered only
the static locking callback (e.g. CRYPTO_set_locking_callback()).
Calls registering the dynamic locking callbacks for CRYPTO
(e.g. CRYPTO_set_dynlock_lock_callback()) have been added; a sketch
follows.
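A minimal sketch of the dynamic-lock callbacks for pre-1.1.0 OpenSSL
(the application supplies CRYPTO_dynlock_value itself; illustrative,
not the exact s3fs code):

    #include <openssl/crypto.h>
    #include <pthread.h>

    struct CRYPTO_dynlock_value { pthread_mutex_t mutex; };

    static struct CRYPTO_dynlock_value* dyn_create(const char*, int)
    {
        struct CRYPTO_dynlock_value* v = new CRYPTO_dynlock_value;
        pthread_mutex_init(&v->mutex, NULL);
        return v;
    }

    static void dyn_lock(int mode, struct CRYPTO_dynlock_value* v,
                         const char*, int)
    {
        if (mode & CRYPTO_LOCK) pthread_mutex_lock(&v->mutex);
        else                    pthread_mutex_unlock(&v->mutex);
    }

    static void dyn_destroy(struct CRYPTO_dynlock_value* v, const char*, int)
    {
        pthread_mutex_destroy(&v->mutex);
        delete v;
    }

    // Registered once at startup, alongside the static callback:
    //   CRYPTO_set_dynlock_create_callback(dyn_create);
    //   CRYPTO_set_dynlock_lock_callback(dyn_lock);
    //   CRYPTO_set_dynlock_destroy_callback(dyn_destroy);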
git-svn-id: http://s3fs.googlecode.com/svn/trunk@479 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed codes
Fixed a compile error on 32-bit, caused by a wrong dev_t format
specifier on 32-bit.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@478 df820570-a93a-0410-bd06-b72b767a4274
1) Added debugging message in s3fs_getattr
If s3fs runs with the "f2" option for deep debugging messages,
s3fs_getattr prints a debugging message with the file's uid/gid/mode.
2) Added curldbg option
Added a new option "curldbg" for debugging curl http/https
information.
It is implemented with CURLOPT_VERBOSE via the curl_easy_setopt
function.
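As an illustration (the handle name is arbitrary), the option boils
down to:

    curl_easy_setopt(hCurl, CURLOPT_VERBOSE, 1L);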
git-svn-id: http://s3fs.googlecode.com/svn/trunk@474 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed Issue 363
Fixed a bug with the following cause:
FUSE does not wait for the "release file descriptor" function to
finish before running the next operation (command).
s3fs therefore could not clear the stat cache for that file before
the next operation ran, and used stale stat cache information.
s3fs now clears the stat cache at the start of the release function.
Also found and fixed two pieces of bad code in fdcache.cpp (they do
not affect normal operation).
Issue 363: make check failing inconsistently
git-svn-id: http://s3fs.googlecode.com/svn/trunk@471 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed Issue 321 (#30)
Fixed a bug (a coding mistake).
Issue 321 (#30): no write permission for non-root user
2) Fixed Issue 365
Fixed a bug (a coding mistake).
Issue 365: there is a logical error in s3fs-1.72 s3fs.cpp:2865
git-svn-id: http://s3fs.googlecode.com/svn/trunk@470 df820570-a93a-0410-bd06-b72b767a4274