Commit Graph

410 Commits

Author SHA1 Message Date
Richard Caunt
c238701d09
Corrected ECS headers 2017-11-08 15:21:49 +00:00
Richard Caunt
60d2ac3c7a
Adding x-amz-security-token header 2017-11-08 15:09:59 +00:00
Richard Caunt
967ef4d56b
Corrected fat finger mistakes 2017-11-08 13:14:49 +00:00
Richard Caunt
ad57bdda6c
Corrected keycount check 2017-11-08 13:06:22 +00:00
Richard Caunt
a0b69d1d3d
Corrected keyval[].c_str() 2017-11-08 13:01:52 +00:00
Richard Caunt
5df94d7e33
Add debug messages 2017-11-08 09:50:39 +00:00
Richard Caunt
1cbe9fb7a3 Gotta pass that cppcheck 2017-11-07 21:41:51 +00:00
Richard Caunt
8660abaea2 Use jsoncpp to parse AWS JSON 2017-11-07 21:20:02 +00:00
Richard Caunt
366f0705a0 ECS credentials bug fixes 2017-11-06 21:45:58 +00:00
Richard Caunt
5d54883e2f Remove commented out code 2017-11-05 19:25:34 +00:00
Richard Caunt
662f65c3c8 Add support for ECS metadata endpoint 2017-11-05 19:24:02 +00:00
Takeshi Nakatani
5db550a298 Fixed a bug in S3fsCurl::LocateBundle 2017-11-05 11:26:05 +00:00
Or Ozeri
384b4cbafa Refactor auth header insertion 2017-10-30 11:52:58 +02:00
Takeshi Nakatani
5957d9ead0 Removed an unnecessary equals sign from the POST uploads URL argument - #643 2017-09-17 10:52:28 +00:00
Takeshi Nakatani
00bc9142c4 Fixed a potential atomicity violation in S3fsCurl::AddUserAgent - #633 2017-09-17 09:16:05 +00:00
Takeshi Nakatani
a22675bafd Merge pull request #567 from andrewgaul/default-acl
Do not send ACL unless overridden
2017-05-09 23:03:27 +09:00
Andrew Gaul
7b30d5d15b Do not send canned ACL header when empty string
Some providers such as StorageGRID do not support canned ACLs.
Setting to empty allows callers to omit the header.  References #125.
2017-05-07 10:52:31 -07:00
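
(A minimal sketch of the idea in the commit above, not the actual s3fs
change: skip the canned ACL header entirely when the configured value is
empty. The variable names are illustrative.)

    #include <curl/curl.h>
    #include <string>

    // Only append the x-amz-acl header when an ACL value is configured;
    // an empty string means "send no canned ACL".
    if(!default_acl.empty()){
      std::string header = "x-amz-acl: " + default_acl;
      headers = curl_slist_append(headers, header.c_str());
    }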
Takeshi Nakatani
9d10a5aa70 Changed copyright year format for debian pkg 2017-05-07 11:24:17 +00:00
Takeshi Nakatani
a12e0d5ec4 Fixed failure to upload/copy with SSE_C and SSE_KMS 2017-05-07 09:29:08 +00:00
Takeshi Nakatani
d07c3f38b7 Check errors returned in 200 OK responses for put header request 2017-05-06 02:15:53 +00:00
Takeshi Nakatani
df0ff3a2fd Merge pull request #556 from orozery/fix_nocache_multipart_upload
fix multipart upload handling without cache
2017-04-16 19:22:15 +09:00
Takeshi Nakatani
edcf4c6218 Merge pull request #555 from orozery/dont_sign_empty_headers
don't sign empty headers (as they are discarded by libcurl)
2017-04-16 19:16:47 +09:00
Takeshi Nakatani
efba9bcbc1 Merge pull request #553 from orozery/custom_cipher_suite
add TLS cipher suites customization
2017-04-16 19:09:27 +09:00
Takeshi Nakatani
6bd179c92b Merge pull request #552 from orozery/foreground_threads
switch S3fsMultiCurl to use foreground threads
2017-04-16 19:05:16 +09:00
Or Ozeri
96764b7410 switch S3fsMultiCurl to use foreground threads 2017-04-09 16:56:49 +03:00
Takeshi Nakatani
b4c90d6957 Fixed a bug in multipart uploading when no disk space is free, related to #509 2017-04-09 04:37:20 +00:00
Or Ozeri
75b59a7c16 switch S3fsMultiCurl to use foreground threads 2017-04-04 15:32:53 +03:00
Or Ozeri
3bcca75a88 don't sign empty headers (as they are discarded by libcurl) 2017-04-04 15:24:20 +03:00
Or Ozeri
8ee95ff7ab fix multipart upload handling without cache 2017-04-02 10:27:43 +03:00
Andrew Gaul
03217baa99 Address cppcheck 1.77 warnings 2017-03-06 12:41:08 -08:00
Andrew Gaul
915a1321c7 Use server-provided ETag during complete upload
This avoids calculating the MD5 locally and enables use with object
stores which do not use MD5 as ETag.
2016-11-23 18:48:57 -08:00
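
(A hedged sketch of the technique: capture the server's ETag response
header with a libcurl header callback instead of hashing the part
locally. Names are illustrative, not the real s3fs implementation.)

    #include <curl/curl.h>
    #include <string>

    // Collect the ETag header from the upload-part response.
    static size_t header_cb(char* ptr, size_t size, size_t nmemb, void* userdata)
    {
      std::string line(ptr, size * nmemb);
      std::string* etag = static_cast<std::string*>(userdata);
      if(line.compare(0, 5, "ETag:") == 0){
        *etag = line.substr(5);  // real code would trim spaces/quotes/CRLF
      }
      return size * nmemb;
    }

    // Usage:
    //   curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);
    //   curl_easy_setopt(curl, CURLOPT_HEADERDATA, &etag);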
Andrew Gaul
d375bca0d0 Correct typos 2016-11-19 15:57:41 -08:00
Takeshi Nakatani
edd0a11fb5 Merge pull request #494 from mapreri/typo
Fix typo s/destroied/destroyed/
2016-11-20 05:53:23 +09:00
Takeshi Nakatani
67a836223a Merge pull request #495 from driskell/fix_sse_copy
Fix invalid V4 signature on multipart copy requests
2016-11-20 05:43:10 +09:00
Jason Woods
6f688770fd Fix invalid V4 signature on multipart copy requests 2016-11-13 13:22:00 +00:00
Mattia Rizzolo
8c0b1d9c5b
Fix typo s/destroied/destroyed/ 2016-11-11 23:27:17 +00:00
Takeshi Nakatani
cca217f613 Merge pull request #487 from driskell/debugging
Split header debugging onto multiple lines for easier reading
2016-10-23 21:51:38 +09:00
Jason Woods
02d7296210 Split header debugging onto multiple lines for easier reading 2016-10-22 15:11:18 +01:00
Takeshi Nakatani
6005929a96 Handle all curl errors without exiting the process - #437 2016-06-27 10:38:49 +00:00
Nathaniel W. Turner
584ea488bf Use role name instead of profile name when iam_role=auto
When using an instance with an IAM Role, transient credentials can be
found in http://169.254.169.254/latest/meta-data/ at
iam/security-credentials/role-name and s3fs tries to do this. However,
it is using the profile-name where role-name is needed. In many cases
the role and profile name are the same, but they are not always.

The simplest way to find the role name appears to be to GET
http://169.254.169.254/latest/meta-data/iam/security-credentials/
itself, which returns a listing of the role names for which temporary
credentials exist. (I think there will probably only be one, but we
probably want to split on newlines and take the first one here in case
that assumption is not valid). This is the approach the AWS SDK appears
to use (based on WireShark analysis).

Bug: https://github.com/s3fs-fuse/s3fs-fuse/issues/421
Signed-off-by: Nathaniel W. Turner <nate@houseofnate.net>
2016-05-24 13:34:19 -04:00
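
(A minimal sketch of the approach described above, assuming the listing
body has already been fetched from the metadata endpoint with libcurl;
the helper name is illustrative, not the actual s3fs code.)

    #include <string>

    // The metadata endpoint returns one role name per line; take the
    // first line as the role name.
    static std::string pick_first_role(const std::string& body)
    {
      std::string::size_type pos = body.find('\n');
      return (pos == std::string::npos) ? body : body.substr(0, pos);
    }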
Takeshi Nakatani
50f1ad51c8 Load the IAM role name automatically (iam_role option) - #387 2016-05-06 04:37:32 +00:00
Takeshi Nakatani
845fdb43f2 Merge pull request #404 from rockuw/keep-alive
Add curl handler pool to reuse connections
2016-04-26 23:40:45 +09:00
Tianlong Wu
b78adb4bb0 Add curl handler pool to reuse connections 2016-04-22 14:57:31 +08:00
Tianlong Wu
115bd51f3f Fix a bug of truncating empty file 2016-04-22 14:49:37 +08:00
Takeshi Nakatani
10589a9497 Supported User-Agent header - #383 2016-04-17 07:44:03 +00:00
Mattia Rizzolo
136ec654c2
fix typo in curl.cpp: s/returing/returning/ 2016-04-02 15:19:06 +00:00
Takeshi Nakatani
c7cf86c2ef Separated AdditionalHeader class from curl.* 2016-02-07 05:41:56 +00:00
Takeshi Nakatani
6472eedddc Supported regex type for additional header format. 2016-02-07 05:08:52 +00:00
Takeshi Nakatani
83937700dd Fixed a bug where the IAMCRED type could not be retried. 2016-01-24 05:01:50 +00:00
Takeshi Nakatani
e932583309 Merge pull request #334 from andrewgaul/bucket-host
Bucket host should include port and not path
2016-01-17 14:46:40 +09:00
Andrew Gaul
88a4f04217 Bucket host should include port and not path
This resolves issues when using v4 signing with path-style requests.
2016-01-16 15:58:54 -08:00
Andrew Gaul
ff607e1a2d Correct multiple issues with ListBucketRequest
* provide correct path
* sign query string
* URL encode query string
2016-01-16 10:17:20 -08:00
Takeshi Nakatani
43b91d3235 Merge pull request #330 from andrewgaul/pass-by-reference
Pass by const reference where possible
2016-01-16 16:14:31 +09:00
Andrew Gaul
b946b59522 Pass by const reference where possible 2016-01-10 16:58:24 -08:00
Andrew Gaul
ea6b287d1a Fix v4 signature with use_path_request_style
Previously s3fs omitted the bucket name when using path request style
causing SignatureDoesNotMatch with v4 signatures.
2016-01-10 13:41:56 -08:00
SnakeHunt2012
c04bcce206 Fix a small spelling issue. 2015-11-06 16:49:37 +08:00
Takeshi Nakatani
001206f7c1 Fixed a bug in the head request (copy) for SSE - issue #286 2015-11-01 14:05:47 +00:00
Takeshi Nakatani
2ef7f497f6 Fixed a bug in the head request (copy) for SSE - issue #286 2015-11-01 13:54:47 +00:00
Takeshi Nakatani
83d46ef8c6 Fixed bugs for an object larger than free disk space 2015-10-20 15:19:04 +00:00
Takeshi Nakatani
d102eb752d Support an object which is larger than free disk space 2015-10-18 17:03:41 +00:00
Takeshi Nakatani
f51ad1f33e Support for SSE-KMS 2015-10-06 14:46:14 +00:00
Takeshi Nakatani
92e52dadd4 Cleaned up the debug message logic. 2015-09-30 19:41:27 +00:00
Andrew Gaul
785ed642ba Add support for standard_ia storage class
This enables storage with lower at-rest prices, higher request prices,
and lower availability.  Also rework existing reduced redundancy
parsing into a more generic storage class.  More background on
standard_ia:

https://aws.amazon.com/blogs/aws/aws-storage-update-new-lower-cost-s3-storage-option-glacier-price-reduction/
2015-09-17 13:35:25 -07:00
Takeshi Nakatani
a3e820e733 Merge pull request #245 from andrewgaul/map-duplicate-lookups
Elide duplicate lookups of std::map via iterators
2015-08-20 01:22:06 +09:00
Takeshi Nakatani
4ad57bdea5 Merge pull request #240 from andrewgaul/md5
Enable Content-MD5 during multipart upload part
2015-08-20 01:19:01 +09:00
Takeshi Nakatani
fcb58aec3c Merge pull request #238 from andrewgaul/cppcheck
Enable all cppcheck rules
2015-08-20 01:06:50 +09:00
Takeshi Nakatani
026a9f2bdc Merge pull request #235 from andrewgaul/complete-mpu-leak
Plug leak during complete multipart upload
2015-08-20 00:40:00 +09:00
Andrew Gaul
67d1576dfb Elide duplicate lookups of std::map via iterators
Also remove use of C++11 std::map::at.
2015-08-18 14:00:42 -07:00
Andrew Gaul
a157ac59ca Enable Content-MD5 during multipart upload part
This allows retries of multi-part uploads instead of discovering a
fatal error during complete multipart upload.  Also enable Content-MD5
for integration tests and refactor hexadecimal code.
2015-08-18 02:54:00 -07:00
Andrew Gaul
c0b21d8808 Enable all cppcheck rules 2015-08-16 17:13:24 -07:00
Andrew Gaul
9c5bf0bb66 Plug leak during complete multipart upload 2015-08-15 22:38:24 -07:00
Andrew Gaul
0ea88a73c7 Remove IntToStr
str duplicates this functionality.  Also add unit test.
2015-08-12 08:25:09 -07:00
Takeshi Nakatani
756d1e5e81 Configure cppcheck #224 2015-08-12 15:04:16 +00:00
Takeshi Nakatani
64146f69a4 Merge pull request #221 from andrewgaul/compare
Compare idiomatically
2015-08-12 23:41:24 +09:00
Takeshi Nakatani
49e32967ec Merge pull request #219 from andrewgaul/coverity
Address Coverity errors
2015-08-12 23:40:47 +09:00
Andrew Gaul
ff8a0c2eea Parse ETag from copy multipart correctly
Previously s3fs misparsed this, preventing renames of files larger
than 5 GB.  Integration test disabled until S3Proxy 1.5.0 is released.
2015-08-11 14:43:35 -07:00
Andrew Gaul
801ca0c2d3 Compare idiomatically 2015-08-05 23:35:08 -07:00
Andrew Gaul
8ee71caabb Address Coverity errors
Fixed an uninitialized member, misordered NULL check, resource leak,
and unconsumed return value.
2015-08-05 23:28:06 -07:00
Jamie Alessio
912bc58df0 Fixed a few small spelling issues. 2015-07-10 11:50:40 -07:00
Bartlomiej Palmowski
3522e5eda3 Add no_check_certificate option which allows to ignore issues with self signed certs. 2015-05-20 17:32:36 +02:00
Peter A. Bigot
92fcee824b curl: use pathrequeststyle option when constructing Host endpoint
Buckets with mixed-case names can't be accessed with the virtual-hosted
style API due to DNS limitations.  S3FS has an option for
pathrequeststyle which is used for the URL, but it was not applied when
building the endpoint passed through the Host header.  Fix this, and
relax the validation on bucket names when using this style.

See: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2015-04-19 08:31:40 -05:00
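
(Hedged illustration of the fix's core distinction; the helper below is
a sketch under assumed names, not the s3fs code. With path-style
requests the Host header carries only the endpoint, while virtual-hosted
style prefixes the bucket name.)

    #include <string>

    // Build the Host header value for a request.
    static std::string build_host(const std::string& bucket,
                                  const std::string& endpoint,   // e.g. "s3.amazonaws.com"
                                  bool pathrequeststyle)
    {
      if(pathrequeststyle){
        return endpoint;                 // bucket goes into the URL path
      }
      return bucket + "." + endpoint;    // virtual-hosted style
    }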
Takeshi Nakatani
490ed8f689 Reviewed and fixed response code printing in curl.cpp - #157 2015-04-18 13:32:04 +00:00
Takeshi Nakatani
bd27294ab0 Increased default connecting/reading/writing timeout value 2015-04-12 02:04:13 +00:00
Ka-Hing Cheung
6e0a302f7d refactor sigv4 to reduce code duplication 2015-04-09 15:11:59 -07:00
Ka-Hing Cheung
98af055d8b send the correct Host header when using -o url
fixes #161
2015-04-09 13:53:50 -07:00
Takeshi Nakatani
0f13c8fe97 Fixed a bug in SSL session sharing with libcurl older than 7.23.0 - issue #126 2015-03-21 07:04:20 +00:00
Takeshi Nakatani
2fc3a4e91e Fixed a bug: unable to mount bucket subdirectory 2015-03-21 04:31:59 +00:00
Takeshi Nakatani
2f8ad7ace8 Merge pull request #135 from andrewgaul/mpu-v4
Correct V4 signature for initiate multipart upload
2015-03-01 22:57:10 +09:00
Andrew Gaul
a07e804f57 Include Content-Type in complete MPU V2 signature
Previously this failed with SignatureDoesNotMatch since the headers
included it but the signature did not.  Fixes #125.
2015-02-28 18:03:21 -08:00
Andrew Gaul
e9656810e3 Correct V4 signature for initiate multipart upload
Query parameters need a trailing = for V4 signatures.  Send correct
content-sha256 although Amazon does not seem to enforce this for
zero-length bodies.  Finally remove a stale comment.  Fixes #133.
2015-02-28 17:50:06 -08:00
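
(A minimal sketch of the trailing-equals rule for V4 canonical query
strings; URL-encoding is omitted for brevity and the helper is
illustrative only.)

    #include <map>
    #include <string>

    // Build a V4 canonical query string; a valueless parameter such as
    // "uploads" must still be rendered with a trailing '='.
    static std::string canonical_query_string(const std::map<std::string, std::string>& params)
    {
      std::string result;
      for(std::map<std::string, std::string>::const_iterator it = params.begin();
          it != params.end(); ++it){
        if(!result.empty()){
          result += "&";
        }
        result += it->first + "=" + it->second;   // value may be empty
      }
      return result;
    }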
Takeshi Nakatani
1424f87754 Supported signature version 4 for GnuTLS/NSS and automatically set endpoint/sigv2 2015-02-02 16:36:08 +00:00
Takeshi Nakatani
4f953f9bd7 Cleaned up signature v4 code and added a new sigv2 option 2015-01-28 17:13:11 +00:00
Takeshi Nakatani
0d2f3e2dc4 Fixed bugs: a segfault and a signature error when listing. 2015-01-24 16:36:30 +00:00
Takeshi Nakatani
bb1f1d3faa Merged manually from caxapniy/s3fs-fuse/tree/1.77v4merge for signature v4 - #102 2015-01-20 16:31:36 +00:00
Takeshi Nakatani
d0b82428d5 Merge pull request #100 from adobos/dns_ssl_switch_bugfix
CURL handles not properly initialized to use DNS or SSL session caching.
2015-01-14 00:11:46 +09:00
Takeshi Nakatani
902911765e Merge pull request #93 from andrewgaul/unit-test
Add simple unit tests for trim functions
2015-01-14 00:07:01 +09:00
Andrej Dobos
045f1e7906 CURL handles were not properly initialized to use DNS caching, or SSL session caching. 2014-12-23 22:31:54 -08:00
Andrew Gaul
a56b8db410 Add simple unit tests for trim functions
Subsequent commits will use this infrastructure.  Also reparent
prepare_url which relies on unrelated bucket, foreground2, and
pathrequeststyle symbols.
2014-12-06 18:07:14 -08:00
bupt_tengteng
b31ec5c4af Update curl.cpp
The space caused a signature mismatch when using an "ahbe_conf" file to add additional headers. When S3 uses the "x-amz" headers to calculate the signature, the format is as follows:
PUT

application/octet-stream
Wed, 05 Nov 2014 03:05:08 GMT
x-amz-acl:private
x-amz-meta-gid:0
x-amz-meta-mode:33188
x-amz-meta-mtime:1415156708
x-amz-meta-uid:0
There is no space after the colon.
2014-11-05 11:28:33 +08:00
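
(A hedged sketch of the rule shown above: each canonicalized x-amz
header is rendered as "name:value" with no space after the colon.
std::map iteration keeps the keys sorted, as the signature requires.
The helper is illustrative, not the s3fs code.)

    #include <map>
    #include <string>

    // Render the canonicalized x-amz headers for the string-to-sign.
    static std::string canonical_amz_headers(const std::map<std::string, std::string>& headers)
    {
      std::string result;
      for(std::map<std::string, std::string>::const_iterator it = headers.begin();
          it != headers.end(); ++it){
        result += it->first + ":" + it->second + "\n";   // no space after ':'
      }
      return result;
    }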
Takeshi Nakatani
651e8c3158 Merge pull request #64 from andrewgaul/failed-read-eio
Return EIO on failed read
2014-11-03 01:03:32 +09:00
Takeshi Nakatani
9237d07226 Merge pull request #63 from jollyroger/spelling
Fix spelling errors
2014-10-13 11:38:13 +09:00
Andrew Gaul
a1ca8b7124 Return EIO on failed read
Previously S3fsMultiCurl::MultiRead did not report read errors since
it did not treat failed callback setup as a fatal operation error.
Failed callback setups usually result from exceeding the number of
allowed retries.  Previously cp did not report an error during a
network outage but now does:

$ cp ~/s3-path/s3-file .
cp: error reading ‘/home/gaul/s3-path/s3-file’: Input/output error
cp: failed to extend ‘./s3-file’: Input/output error
2014-10-03 21:30:11 -07:00
Andriy Senkovych
6633366218 Fix spelling errors 2014-10-01 13:42:39 +03:00
Andrew Gaul
3d69ee0c30 Emit response on failed CheckBucket requests
This allows callers to diagnose errors like InvalidAccessKeyId and
RequestTimeTooSkewed.
2014-09-28 16:12:53 -07:00
Andrew Gaul
c88a5f38be Disable CURLOPT_FAILONERROR for CheckBucket
curl will not consume the body of a response when CURLOPT_FAILONERROR
is set.  This prevents logging of responses for failed requests.
2014-09-28 16:12:43 -07:00
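
(Hedged sketch of the behavior these two commits rely on: with
CURLOPT_FAILONERROR left unset, libcurl still delivers the error
response body to the write callback, so the S3 error document can be
logged. write_cb and body are assumed to exist.)

    #include <curl/curl.h>

    // Keep FAILONERROR disabled so the S3 error XML reaches the write
    // callback even on 4xx/5xx responses.
    curl_easy_setopt(curl, CURLOPT_FAILONERROR, 0L);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);  // assumed callback
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);         // assumed buffer

    CURLcode res = curl_easy_perform(curl);
    long code = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
    if(res == CURLE_OK && code >= 400){
      // body now holds the error document (e.g. InvalidAccessKeyId)
    }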
Takeshi Nakatani
7a7c7572ea Cleaned up code for the next packaging. 2014-09-07 15:08:27 +00:00
Takeshi Nakatani
f0c33f8ef2 Clean up code 2014-08-27 00:59:49 +00:00
Takeshi Nakatani
20b1c207be Fixed issue #39 2014-08-26 17:11:10 +00:00
Takeshi Nakatani
7a55eab399 Support for SSE-C, issue #39 2014-07-19 19:02:55 +00:00
Takeshi Nakatani
cd27f0aa54 Support additional crypto libraries (GnuTLS and NSS), and added configure options 2014-05-06 14:23:05 +00:00
Takeshi Nakatani
8bba566774 Retry sending the request on CURLE_SSL_CONNECT_ERROR 2014-04-04 16:23:56 +00:00
Takeshi Nakatani
4762e53b5d Added multipart_size option for #16 2014-03-30 07:53:41 +00:00
Takeshi Nakatani
d7563309a2 Fixed bug #18 (missing retry error check) 2014-03-30 06:40:49 +00:00
Takeshi Nakatani
52d56d15e4 Fixed a bug (Google Code issue 405): enable_content_md5 Input/output error 2014-03-03 16:19:08 +00:00
ggtakec@gmail.com
74db6748dd Code changes
1) Changed the retry condition for multipart errors
   When a multipart request fails, a 404 response is no longer retried;
   all other failures are retried.

2) Wrong file type
   The file type of fdcache.h was wrong, so it was fixed.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@498 df820570-a93a-0410-bd06-b72b767a4274
2013-11-19 01:48:53 +00:00
ggtakec@gmail.com
8acbaf7199 Fixed a bug.
* Fixed a bug
  Fixes an infinite loop that occurred when s3fs listed a directory
  containing directory entries with no backing object (no information).
  This bug was introduced by r493 and reported in issue 389.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@497 df820570-a93a-0410-bd06-b72b767a4274
2013-11-18 02:29:41 +00:00
ggtakec@gmail.com
882f13020e Fixed bugs (overflow)
1) Overflow
   For files over 4GB, the st_size member of the stat structure
   overflowed.
   Fixed this bug and similar bugs throughout the sources.

2) Changed request retrying
   If s3fs gets a 500 HTTP status for a multipart request, it now
   retries the same request.





git-svn-id: http://s3fs.googlecode.com/svn/trunk@495 df820570-a93a-0410-bd06-b72b767a4274
2013-11-13 16:26:50 +00:00
ggtakec@gmail.com
c785be917f Changed a code
* Cut an #ifdef
  Removes #ifdef'd code, cleaning up r493.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@494 df820570-a93a-0410-bd06-b72b767a4274
2013-11-11 15:03:04 +00:00
ggtakec@gmail.com
09fc2593e3 Fixed bugs and changed utility mode
1) Fixed bugs
 * Rename objects
   Fixes s3fs specifying the wrong part number during multipart
   rename. s3fs now also adds the x-amz-acl and
   x-amz-server-side-encryption headers when renaming objects.

2) Changed retry logic for multipart uploading (and renaming)
   Sometimes s3fs gets a 400 HTTP response for one of the parts
   when uploading a large object by multipart.
   The new logic retries the failed part up to the "retries"
   option count.

3) Added an action to utility mode.
   s3fs has had a utility mode for displaying the result of the REST
   multipart-upload listing.
   The raw XML result is now shown as a list, after which s3fs starts
   an interactive prompt for removing entries.
   You can then remove objects left behind by failed multipart
   uploads and no longer pay for their storage.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@493 df820570-a93a-0410-bd06-b72b767a4274
2013-11-11 13:45:35 +00:00
ggtakec@gmail.com
1bae39e21f Fixed a bug.
* Fixed a bug
   Fixes a bug where a retried multipart POST request never
   completed.
   This was reported in Issue 371#32.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@492 df820570-a93a-0410-bd06-b72b767a4274
2013-10-09 01:44:56 +00:00
ggtakec@gmail.com
33431dec46 Fixed a bug.
* Fixed a bug
   Fixes a coding mistake in the multipart POST retry logic.
   This was reported in Issue 371#28.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@491 df820570-a93a-0410-bd06-b72b767a4274
2013-10-08 08:19:10 +00:00
ggtakec@gmail.com
99db6d13af Code changes
1) Supported IAM role
   Supports using an IAM role instead of an
   AccessKeyID/SecretAccessKey pair.
   Adds a new option "iam_role" which takes the IAM role
   name (like s3fs-c).



git-svn-id: http://s3fs.googlecode.com/svn/trunk@490 df820570-a93a-0410-bd06-b72b767a4274
2013-10-06 13:45:32 +00:00
ggtakec@gmail.com
e6038f74ed Fixed a bug
1) Fixed a bug (about curl_off_t)
   Fixes a missing cast from off_t (ssize_t) to curl_off_t when
   calling curl_easy_setopt with CURLOPT_POSTFIELDSIZE and
   CURLOPT_INFILESIZE_LARGE.
   This forgotten cast probably caused issue 471 (failed multipart
   uploading).



git-svn-id: http://s3fs.googlecode.com/svn/trunk@488 df820570-a93a-0410-bd06-b72b767a4274
2013-09-27 07:39:07 +00:00
ggtakec@gmail.com
a6884f1c3a Code changes
1) Changed debug message level
   Changed the level and format of the debugging message about
   parallel multipart upload in curl.cpp.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@487 df820570-a93a-0410-bd06-b72b767a4274
2013-09-26 05:00:21 +00:00
ggtakec@gmail.com
b231081aff Code changes
1) Fixed a bug
   Fixes code in curl.cpp.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@483 df820570-a93a-0410-bd06-b72b767a4274
2013-09-14 21:53:30 +00:00
ggtakec@gmail.com
42b74c9d2e Code changes
1) Changes addressing memory leaks
   To address memory leaks, the code was changed as follows.
   * calls the malloc_trim function
   * calls the NSS initialization function, and adds the configure
     option "--enable-nss-init".
     If libcurl is built with NSS, s3fs initializes NSS manually.
     This NSS initialization is enabled by the "--enable-nss-init"
     option at configure time. If this option is specified, you
     need the "nss-devel" package.
   * calls libxml2 initialization (xmlInitParser).
   * BIO functions leak memory, so CRYPTO_free_ex_data is called.
   * changes the cache structure.
   * changes the cache eviction logic to LRU.
   * sets alignment for allocated memory in the body data structure.
   * adds the SSL session to the share handle, and adds the nosscache
     option.
   * deletes unused allocated memory (bug).
   * changes the default parallel count of head requests in readdir
     (500 -> 20).
   * fixes some bugs.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@482 df820570-a93a-0410-bd06-b72b767a4274
2013-09-14 21:50:39 +00:00
ggtakec@gmail.com
7ce8135fa9 Code changes
1) Changed initialization of curl and OpenSSL
   Before this change, s3fs called curl_global_init() twice
   along with curl_global_cleanup(). After reviewing this
   processing, s3fs now calls curl_global_init() once.
   The s3fs_check_service function, which checks that the user's
   bucket exists, is called after calling fuse.
   This new processing therefore causes no problem, and the code
   was updated.

   Regarding OpenSSL (CRYPTO) initialization, old s3fs registered
   only the static locking callback (e.g. CRYPTO_set_locking_callback()).
   Added registration of the dynamic locking callbacks for CRYPTO
   (e.g. CRYPTO_set_dynlock_lock_callback()).



git-svn-id: http://s3fs.googlecode.com/svn/trunk@479 df820570-a93a-0410-bd06-b72b767a4274
2013-08-27 08:12:01 +00:00
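
(A hedged sketch of the OpenSSL 1.0.x-era dynamic locking callbacks the
commit above describes; the dyn_* names are illustrative, not the
actual s3fs implementation.)

    #include <openssl/crypto.h>
    #include <pthread.h>

    struct CRYPTO_dynlock_value { pthread_mutex_t mutex; };

    static CRYPTO_dynlock_value* dyn_create(const char*, int)
    {
      CRYPTO_dynlock_value* v = new CRYPTO_dynlock_value();
      pthread_mutex_init(&v->mutex, NULL);
      return v;
    }

    static void dyn_lock(int mode, CRYPTO_dynlock_value* v, const char*, int)
    {
      if(mode & CRYPTO_LOCK){
        pthread_mutex_lock(&v->mutex);
      }else{
        pthread_mutex_unlock(&v->mutex);
      }
    }

    static void dyn_destroy(CRYPTO_dynlock_value* v, const char*, int)
    {
      pthread_mutex_destroy(&v->mutex);
      delete v;
    }

    // Register the dynamic locking callbacks in addition to the static one.
    CRYPTO_set_dynlock_create_callback(dyn_create);
    CRYPTO_set_dynlock_lock_callback(dyn_lock);
    CRYPTO_set_dynlock_destroy_callback(dyn_destroy);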
ggtakec@gmail.com
3dda0b20d4 Added debugging messages
1) Added a debugging message in s3fs_getattr
   If s3fs runs with the "f2" option for deep debugging, s3fs_getattr
   prints a debugging message with the file's uid/gid/mode.

2) Added curldbg option
   Added a new option "curldbg" for debugging curl http/https
   information.
   It is implemented via CURLOPT_VERBOSE on the curl_easy_setopt
   function.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@474 df820570-a93a-0410-bd06-b72b767a4274
2013-08-23 16:28:50 +00:00
ggtakec@gmail.com
2b3fb2d102 Fixed a bug (prototypes and enum initialization)
1) Fixed a bug
   Fixed bugs (mis-coding): wrong prototypes for the md5hexsum and md5sum functions.
     Issue 361: compile time error after running #make
     Issue 360: 1.72 will not compile on Ubuntu 12.04.2 (precise) i686

   Also fixed the initialization of an enum member in the S3fsCurl class.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@469 df820570-a93a-0410-bd06-b72b767a4274
2013-08-21 08:39:06 +00:00
ggtakec@gmail.com
171de649ef Fixed a bug (about request retrying)
1) Fixed a bug
   s3fs's request retry processing had been wrong; it is now fixed.
   Issue 343 (1.7 having curl 35 + other disconnect issue) was
   probably caused by this bug.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@468 df820570-a93a-0410-bd06-b72b767a4274
2013-08-21 07:43:32 +00:00
ggtakec@gmail.com
7fa1e37a28 Code changes
1) "Virtual hosted-style request" for checking bucket
   The old version issued a "path-style request" to check the bucket
   at initialization; after this revision s3fs issues a "virtual
   hosted-style request".
   This change is related to
   "Operation not permitted - on any operation (Issue 362)".

2) Changed debugging message level
   Changed the debugging message level in prepare_url() from DPRNNN
   to FPRNINFO.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@467 df820570-a93a-0410-bd06-b72b767a4274
2013-08-20 07:16:12 +00:00
ggtakec@gmail.com
ee01c91e02 Fixed compilation bugs
1) Fixed bugs
   Fixes the bugs below (format error and undefined function).

   * 1.72 will not compile on Ubuntu 12.04.2 (precise) i686 (Issue 360)
   * compile time error after running #make (Issue 361)

   I'll close these issues once I can confirm the problems are solved.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@466 df820570-a93a-0410-bd06-b72b767a4274
2013-08-19 06:29:24 +00:00
ggtakec@gmail.com
d7689151ab Fixed Issue 229 and code changes
1) Set the "Content-Encoding" metadata automatically (Issue 292)
   For this issue, s3fs adds a new option "ahbe_conf".

   The new option takes a configuration file path; the file specifies
   additional HTTP headers by file (object) extension.
   Thus you can specify any HTTP header for each object by extension.

   * ahbe_conf file format:
     -----------
     line                = [file suffix] HTTP-header [HTTP-header-values]
     file suffix         = file (object) suffix; if this field is empty,
                           it means "*" (all objects).
     HTTP-header         = additional HTTP header name
     HTTP-header-values  = additional HTTP header value
     -----------

   * Example:
     -----------
     .gz      Content-Encoding     gzip
     .Z       Content-Encoding     compress
              X-S3FS-MYHTTPHEAD    myvalue
     -----------
     A sample configuration file is uploaded in "test" directory.

   If the ahbe_conf parameter is specified, s3fs loads its configuration
   and compares the extension (suffix) of each object (file) when
   uploading (PUT/POST) it. If the extension matches, s3fs adds/sends
   the specified HTTP header and value.

   With the sample configuration file, if an object (with extension
   ".gz") that already has a Content-Encoding HTTP header is renamed
   to a ".txt" extension, s3fs does not set Content-Encoding, because
   ".txt" does not match any line in the configuration file.
   So s3fs matches the extension on each PUT/POST action.

   * Please be careful with "Content-Encoding".
   This new option allows setting ANY HTTP header by object extension.
   For example, you can specify "Content-Encoding" for the ".gz"/etc.
   extension in the configuration. But this means that S3 always returns
   "Content-Encoding: gzip" even when a client requests with a different
   "Accept-Encoding:" header, which may not be what you want.
   Please see RFC 2616.

2) Changes to the allow_other/uid/gid options for the mount point
   I reviewed the mount point permissions and the allow_other/uid/gid
   options, and found bugs in them.
   s3fs fixes these bugs and changes to the following specifications.

   * s3fs only allows the uid (gid) option to be 0 (root) when the
     effective user is zero (root).
   * A mount point (directory) must have permissions that allow
     access by the effective user/group.
   * If the allow_other option is specified, the mount point permission
     is set to 0777 (all users are allowed all access).
     Otherwise, the mount point is set to 0700 (only the effective
     user is allowed).
   * When the uid/gid option is specified, the mount point owner/group
     is set to the option value.
     If uid/gid is not set, the effective user/group id is used.

   These changes may fix some issues (321, 338).

3) Changed logic for Issue 229
   The chmod command returns -EIO when changing the mount point.
   This is correct: s3fs cannot change the owner/group/mtime of the
   mount point, yet it used to send a request to change the bucket.
   This revision does not send the request and returns EIO as
   soon as possible.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@465 df820570-a93a-0410-bd06-b72b767a4274
2013-08-16 19:24:01 +00:00
ggtakec@gmail.com
02c3accb5b Code changes
1) Changed macros for debugging
   Changed the macros used for debugging messages.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@461 df820570-a93a-0410-bd06-b72b767a4274
2013-08-10 15:29:39 +00:00
ggtakec@gmail.com
3274f58948 Code changes for performance (part 3)
* Summary
   This revision includes a big change to temporary files and local cache
   files.
   With this change, s3fs performs well when it opens/closes/syncs/reads
   an object.
   To implement this, I made a big change to the handling of temporary
   files and local cache files.

* Detail
1) About temporary files (local files)
   s3fs uses a temporary file on the local file system when it
   downloads/uploads/opens/seeks an object on S3.
   After this revision, s3fs calls the ftruncate() function when it
   creates the temporary file.
   In this way s3fs can set the file size to precisely the right length
   without downloading.
   (Notice - the ftruncate function is for XSI-compliant systems, so you
    may have a problem on non-XSI-compliant systems.)

   With this change, s3fs can download part of an object by requesting
   it with the "Range" HTTP header, effectively downloading in block
   units.
   The default block (part) size is 50MB, which results from the default
   parallel request count (5) times the default multipart upload size
   (10MB).
   If you need to change this block size, you can do so with the new
   option "fd_page_size". This option accepts any value from
   1MB (1024 * 1024) upward.

   Note that fdcache.cpp (and fdcache.h) were changed substantially.

2) About the local cache
   Local cache files in the directory specified by the "use_cache"
   option do not always contain all of an object's data.
   This is because s3fs uses the ftruncate function and reads (writes)
   each block unit of a temporary file.
   s3fs tracks each block unit's status: "downloaded area" or not.
   For this status, s3fs creates a new temporary file in the cache
   directory specified by the "use_cache" option. These status files
   live in a directory named "<use_cache directory>/.<bucket_name>/".

   When s3fs opens a status file, it locks the file for exclusive
   control by calling the flock function. Take care: the status files
   cannot be placed on a network drive (like NFS).

   This revision also changes the file open mode: s3fs always opens a
   local cache file and each status file in writable mode.
   Finally, this revision adds a new option "del_cache", which makes
   s3fs delete all local cache files when it starts and exits.

3) Uploading
   When s3fs writes data to a file descriptor through a FUSE request,
   the old revision downloaded the whole object. The new revision does
   not download everything; it downloads only a small partial area
   (some block units) covering the written data.
   When s3fs closes or flushes the file descriptor, it downloads the
   remaining areas not yet fetched from the server. After that, s3fs
   uploads all of the data.
   Revision r456 already added parallel upload, so this revision
   together with r456 and r457 is a very big change for performance.

4) Downloading
   Thanks to the changes to temporary files and local cache files, when
   s3fs downloads an object it fetches only the required range (some
   block units).
   s3fs downloads units with parallel GET requests, the same as for
   uploading. (The maximum parallel request count and each download
   size are controlled by the same parameters as for uploading.)

   In the new revision, when s3fs opens a file, it returns the file
   descriptor immediately, because it only opens (creates) the file
   descriptor without downloading any data. When s3fs reads data, it
   downloads only the block units covering the requested area.
   This is good for performance.

5) Changed option name
   The option "parallel_upload" added in r456 is renamed to
   "parallel_count", because its value is no longer used only for
   uploading objects but for downloading as well. (For a while, the
   old option name "parallel_upload" still works for compatibility.)



git-svn-id: http://s3fs.googlecode.com/svn/trunk@458 df820570-a93a-0410-bd06-b72b767a4274
2013-07-23 16:01:48 +00:00
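
(A hedged sketch of block-unit downloading with the "Range" header via
libcurl's CURLOPT_RANGE; the 50MB page size mirrors the default
described above, and the helper name is illustrative.)

    #include <curl/curl.h>
    #include <sstream>
    #include <sys/types.h>

    // Request one block (page) of an object, e.g. bytes 0-52428799 for
    // a 50MB page starting at offset 0.
    static void set_block_range(CURL* curl, off_t start, off_t size)
    {
      std::ostringstream range;
      range << start << "-" << (start + size - 1);
      curl_easy_setopt(curl, CURLOPT_RANGE, range.str().c_str());
    }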
ggtakec@gmail.com
1c93dd30c1 Code changes
1) For uploading performance (part 2)
   Changed the code for uploading large objects (multipart uploading).
   This revision no longer creates a temporary file when s3fs uploads a large object by multipart upload.
   Before this revision, s3fs made a temporary file (/tmp/s3fs.XXXXX) for multipart, which was bad for performance.
   The new code does not use those files; s3fs reads the large object directly from its cache file.

2) Some values to symbols
   Changed some literal values to symbols (defines).



git-svn-id: http://s3fs.googlecode.com/svn/trunk@457 df820570-a93a-0410-bd06-b72b767a4274
2013-07-12 00:33:36 +00:00
ggtakec@gmail.com
1095b7bc52 Code changes
1) For uploading performance (part 1)
   Changed the code for large object uploading.
   The new code makes s3fs send parallel requests when it uploads a
   large object (20MB) by multipart POST.

   Also added a new "parallel_upload" option, which limits the number
   of parallel requests s3fs issues at once.
   This option's default value is "5", and you can change it, but the
   right value depends on your CPU and network bandwidth.
   s3fs performs well with this option; please try tuning it.

2) Changed debugging messages
   Changed a debugging message in s3fs.cpp.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@456 df820570-a93a-0410-bd06-b72b767a4274
2013-07-10 06:24:06 +00:00
ggtakec@gmail.com
6e169f6bda Code changes
1) Changed code in the PutRequest function
   Changed the S3fsCurl::PutRequest function to duplicate the file
   descriptor within the function.

2) Changed debugging messages
   Changed the debugging messages' indentation in curl.cpp functions.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@455 df820570-a93a-0410-bd06-b72b767a4274
2013-07-08 01:25:11 +00:00
ggtakec@gmail.com
0c630ba2d0 Fixed a bug
1) Fixed a bug
    When an error occurred during the multipart uploading process, s3fs forgot to free memory.
    (introduced in r451)
    Fixed this bug.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@454 df820570-a93a-0410-bd06-b72b767a4274
2013-07-05 06:36:11 +00:00
ggtakec@gmail.com
d1a17cbe3d Fixed Issue 352 and bugs
1) Option syntax verbosity in doc (Issue 352)
    Before this revision (version), the "use_rrs" option required a parameter, like the "use_sse" option.
    But this option does not need a parameter: specifying "use_rrs" simply enables RRS
    (because RRS is disabled by default).
    After this revision, the "use_rrs" option can be specified without a parameter, and "use_sse" too.
    Changed the code, man page and help page.
    Note that, as in old versions, "use_rrs" (and "use_sse") can still be specified with a parameter ("1" or "0").

2) Fixed a bug in parsing the "use_sse" option.
    Fixed a bug from r451: the "use_sse" option did not work because s3fs mistakenly called the function for "use_rrs".

3) Fixed a memory leak.
    Fixed a memory leak in r451:
    the curl_slist_sort_insert() function forgot to free memory.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@452 df820570-a93a-0410-bd06-b72b767a4274
2013-07-05 05:41:46 +00:00
ggtakec@gmail.com
ad19ffa458 Code changes
1) Adds a new S3fsCurl class
   Added a new S3fsCurl class instead of calling curl functions directly.
   This class wraps the curl functions for s3fs (the AWS S3 API).

2) Code changes for the new S3fsCurl class
    Changed and deleted curl-related classes and structures in curl.cpp/curl.h.
    Changed the code in s3fs.cpp that calls the S3 API with curl.

3) Deletes the YIKES macro
    Deleted the YIKES macro, because it is no longer used.

4) Changes a code path
    s3fs did not perform well because it copied each byte while downloading.
    The code was changed to copy with memcpy instead of byte by byte, which improves s3fs performance considerably.

5) Fixes a bug
    When s3fs renamed a file, it did not use the value specified by the servicepath option.
    Fixed this bug.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@451 df820570-a93a-0410-bd06-b72b767a4274
2013-07-05 02:28:31 +00:00
ggtakec@gmail.com
f7e1a2a37f Fixed bugs
1) Fixed a bug (forgot to remove temporary files)
    When s3fs got an error from fwrite in the multipart uploading
    function, it did not remove the temporary file.

2) Fixed a bug (wrong function prototype)
    The prototype of the function for CURLSHOPT_UNLOCKFUNC
    was wrong.

3) Changed code
    - In the my_curl_easy_perform function, the debugging message code
      was changed so that it does not run when the "-d" option is
      not specified.
    - Changed struct head_data's member variables, and related code.
    - Moved the calls to curl_global_init and curl_share_init into
      main, because these functions must be called from the main thread.

4) Fixed a bug (use of uninitialized memory)
    The get_lastmodified function did not initialize a value
    (struct tm).

5) Fixed a bug (access to a freed variable)
    The readdir_multi_head function accessed a variable that had
    already been freed.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@442 df820570-a93a-0410-bd06-b72b767a4274
2013-06-15 15:29:08 +00:00
ggtakec@gmail.com
1758bc59f4 Fixed Issue 235, Issue 257, Issue 265
1) Fixes "SSL connect error"(curl 35 error)
    Fixed "SSL connect error", then s3fs can connect by SSL with no problem.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@434 df820570-a93a-0410-bd06-b72b767a4274
2013-06-01 15:31:31 +00:00
ggtakec@gmail.com
2d51439dcb Fixed a bug (all multi head requests failed when mounting bucket+path)
1) Fixes a bug
    When the mount point was specified with a sub-directory (mounting
    with "bucket:/path"), internally all curl_multi head requests in the
    s3fs_readdir() function failed.
    The reason is that the head curl_multi request did not include the
    mount path.
    This was a bug, now fixed.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@431 df820570-a93a-0410-bd06-b72b767a4274
2013-05-27 02:22:47 +00:00
ggtakec@gmail.com
7477224d02 Fixed Issue 304
1) s3fs should cache DNS lookups? (Issue 304)
   s3fs now always uses its own DNS cache, and adds a "nodnscache" option.
   If "nodnscache" is specified, s3fs does not use the DNS cache, as before.
   s3fs keeps DNS cache entries for 60 seconds, libcurl's default.



git-svn-id: http://s3fs.googlecode.com/svn/trunk@429 df820570-a93a-0410-bd06-b72b767a4274
2013-05-22 08:49:23 +00:00
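
(Hedged sketch of the libcurl knobs involved: the per-handle DNS cache
timeout, which defaults to 60 seconds, and the share-handle variant for
sharing the cache across handles. Illustrative only.)

    #include <curl/curl.h>

    // Keep resolved addresses for 60 seconds (libcurl's default).
    curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 60L);

    // Or share one DNS cache across all easy handles.
    CURLSH* share = curl_share_init();
    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
    curl_easy_setopt(curl, CURLOPT_SHARE, share);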
ggtakec@gmail.com
9da497af45 Added enable_content_md5 option
1) Adds the enable_content_md5 option
   When s3fs uploads a large object (over 20MB), it always checks the ETag (MD5) in each multipart response.
   But for small objects, s3fs does not check the MD5.
   This new option enables checking the MD5 of uploaded objects.
   If the "enable_content_md5" option is specified, s3fs puts the object with a "Content-MD5" header.

   Checking the MD5 value is not the default, because it increases the user's CPU usage somewhat.
   (The default may change in the future.)



git-svn-id: http://s3fs.googlecode.com/svn/trunk@423 df820570-a93a-0410-bd06-b72b767a4274
2013-05-16 02:02:55 +00:00
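
(A hedged sketch of producing a "Content-MD5" value as described above:
the base64-encoded MD5 digest of the request body. Uses OpenSSL; the
helper is illustrative, not the s3fs code.)

    #include <openssl/md5.h>
    #include <openssl/evp.h>
    #include <string>

    // Content-MD5 = base64(md5(body)).
    static std::string content_md5(const unsigned char* body, size_t len)
    {
      unsigned char digest[MD5_DIGEST_LENGTH];
      MD5(body, len, digest);

      unsigned char encoded[32];   // a 16-byte digest encodes to 24 chars
      int n = EVP_EncodeBlock(encoded, digest, MD5_DIGEST_LENGTH);
      return std::string(reinterpret_cast<char*>(encoded), n);
    }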
ggtakec@gmail.com
f002cdb9b2 Fixed issue: 326
1) Changes for fixing a bug (issue 326)
  The my_curl_easy_perform() function was not clearing the buffer (struct BodyStruct body) before retrying the request.

2) Other changes
  In conjunction with this issue, "struct BodyStruct" was changed to "class BodyData".
  The new class is the same as BodyStruct, but handles memory automatically.
  Also added an argument to my_curl_easy_perform().
  This function needs buffer pointers, but its arguments covered only the body buffer,
  so I added a buffer pointer for the header buffer.

3) Fixed a memory leak
  There was a memory leak in the get_object_name() function.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@403 df820570-a93a-0410-bd06-b72b767a4274
2013-04-11 01:49:00 +00:00
ggtakec@gmail.com
8bd1483374 Summary of Changes (1.65 -> 1.66)
==========================
List of Changes
==========================
1) Fixes bugs
    Fixes Issue 321: "no write permission for non-root user".
    (http://code.google.com/p/s3fs/issues/detail?id=321)
    Fixes a bug where s3fs did not set uid/gid headers when making a symlink.

2) Cleanup code.
    Adds a common function which converts the Last-Modified header to utime.
    Deletes useless code and tidies things up.

3) xmlns
    s3fs can now determine the xmlns URL automatically.
    The noxmlns option is therefore no longer needed, but it remains.

4) Changes the cache for performance
    Changes the stat cache so that it accumulates stat information and some headers.
    By adding some headers to the cache, s3fs does not need to call the curl_get_headers function.
    After the change, one cache entry grows to about 500 bytes from about 144 bytes.

    Adds one condition for evicting a cache entry: checking the object's ETag.
    This works well for noticing changes to objects.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@400 df820570-a93a-0410-bd06-b72b767a4274
2013-04-06 17:39:22 +00:00
ggtakec@gmail.com
953aedd7ad Cleaned up source code
No logic changes; only the layout of functions and variables across files changed.
    Adds s3fs_util.cpp/s3fs_util.h/common.h



git-svn-id: http://s3fs.googlecode.com/svn/trunk@396 df820570-a93a-0410-bd06-b72b767a4274
2013-03-30 13:37:14 +00:00
ggtakec@gmail.com
9af16df61e Summary of Changes (1.63 -> 1.64)
* This new version was made to fix a big issue with directory objects.
  Please be careful and review the new s3fs.

==========================
List of Changes
==========================
1) Fixed bugs
    Fixed some memory leaks and an un-freed curl handle.
    Fixed code with latent bugs not yet observed.
    Fixed a bug where s3fs could not update an object's mtime while it had an open file descriptor.

    Please let us know when you find a new memory leak bug.

2) Changed code
    Changed the code of s3fs_readdir() and list_bucket(), etc.
    Changed the get_realpath() function to return std::string.
    Changed the use of the exit() function: because exit() was called directly from many fuse callbacks, those functions now call fuse_exit() and return an error instead.
    Changed the code so that the character case of the "x-amz-meta" response header is ignored.

3) Added an option
    Added the norenameapi option for S3-compatible storage without the copy API.
    This option is a subset of the nocopyapi option.
    Please read the man page or call with the --help option.

4) Directory objects
    This is a very big and important change.

    The directory object is changed to "dir/" instead of "dir" for compatibility with other S3 client applications.
    This version also understands directory objects made by the old version.
    If the new s3fs changes the attributes, owner/group, or mtime of a directory object, it automatically renames the object from the old name ("dir") to the new one ("dir/").
    If you need to change the old object name ("dir") to the new one ("dir/") manually, you can use the shell script (mergedir.sh) in the test directory.

    * About the directory object name
        AWS S3 allows both "dir" and "dir/" as object names.
        s3fs before this version understood only "dir" as a directory object name; the old version did not understand the "dir/" object name.
        The new version understands both the "dir" and "dir/" object names.
        The s3fs user needs to be aware of the special situations mentioned later.

        The new version deletes the old "dir" object and makes a new "dir/" object when the user operates on the directory object to change its permissions, owner/group, or mtime.
        This operation happens in the background, automatically.

        If you need to merge manually, you can use the shell script mergedir.sh in the test directory.
        This script runs chmod/chown/touch commands after finding a directory.
        Other S3 client applications make a directory object ("dir/") without the meta information s3fs needs; this script can add meta information to a directory object.
        If this script's functionality is insufficient for you, you can read and modify the code yourself.
        Please use the shell script carefully, as it changes objects.
        If you find a bug in this script, please let me know.

    * Details
    ** Directory objects made by the old version
        Directory objects made by the old version are not understood by other S3 client applications.
        The new s3fs version was updated to keep compatibility with other clients.
        You can use mergedir.sh in the test directory to merge old directory objects ("dir") into new ones ("dir/").
        The directory object name is changed from "dir" to "dir/" after mergedir.sh is run; the changed "dir/" object is understood by other S3 clients.
        This script runs chmod/chown/chgrp/touch/etc. commands against the old directory object ("dir"), and the new s3fs then merges that directory automatically.

        If you need to change a directory object from old to new manually, you can do so by running commands that change the directory attributes (mode/owner/group/mtime).

    ** Directory objects made by the new version
        The directory object name made by the new version is "dir/".
        Because the name includes "/", other S3 client applications understand it as a directory.
        I tested the new directories with s3cmd/tntDrive/DragonDisk/Gladinet as other S3 clients, and compatibility was good.
        Be aware that compatibility has small problems due to differences in specifications between clients.
        Also be careful: the old s3fs cannot understand directory objects made by the new s3fs.
        You should upgrade every s3fs that accesses the same bucket.

    ** Directory objects made by other S3 client applications
        To determine an object as a directory, s3fs makes and uses special meta information: "x-amz-meta-***" and "Content-Type" HTTP headers.
        s3fs sets and uses the HTTP headers below for a directory object.
            Content-Type: application/x-directory
            x-amz-meta-mode: <mode>
            x-amz-meta-uid: <UID>
            x-amz-meta-gid: <GID>
            x-amz-meta-mtime: <unix time of modified file>

        Other S3 client applications build directory objects without the attributes s3fs needs.
        When the "ls" command is run on an s3fs-fuse file system holding directories/files made by other S3 clients, the result looks like this:
            d---------  1 root     root           0 Feb 27 11:21 dir
            ----------  1 root     root     1024 Mar 14 02:15 file
        Because the objects lack the "x-amz-meta-mode" meta information, they appear with mode=0000.
        In this case, the directory object shows only "d", because s3fs treats an object as a directory when its name ends with "/" or it has a "Content-Type: application/x-directory" header.
        (s3fs sets "Content-Type: application/x-directory" on directory objects, but other S3 clients set "binary/octet-stream".)
        As a result, nobody except root can operate on the object.

        The owner and group are "root" (UID=0) because the object doesn't have "x-amz-meta-uid/gid".
        If the object doesn't have "x-amz-meta-mtime", s3fs uses the "Last-Modified" HTTP header, so the object's mtime is the "Last-Modified" value. (This logic is the same as in the old version.)
        As already explained, if you need to change the object attributes, you can do so manually or with mergedir.sh.

    * Examples of compatibility with s3cmd etc.
    ** Case A) Only a "dir/file" object
        In one case there is only a "dir/file" object without a "dir/" object, made by s3cmd or similar.
        In this case, the REST API (list bucket) response with the "delimiter=/" parameter has "CommonPrefixes", and "dir/" is listed in "CommonPrefixes/Prefix", but the "dir/" object is not a real object.
        s3fs needs to treat this as a directory, yet there is no real directory object ("dir" or "dir/").
        Neither the new s3fs nor the old one understands this "dir/" in "CommonPrefixes", because s3fs fails to get meta information from "dir" or "dir/".
        In this case, the result of the "ls" command looks like this:
            ??????????? ? ?        ?        ?            ? dir
        This "dir" cannot be operated on by anyone or any process, because s3fs cannot determine the object's permissions.
        The "dir/file" object cannot be shown or operated on either.
        Some other S3 clients (tntDrive/Gladinet/etc.) cannot understand this object, just like s3fs.

        If you need to operate on the "dir/file" object, you need to make the "dir/" object a directory.
        To make the "dir/" directory object, do the following.
        Because the non-real "dir" entry already exists, you cannot make the "dir/" directory directly.
        (s3cmd does not make a "dir/" object because the object name contains "/".)
        You should make a directory with another name (e.g. "dir2/"), and move the "dir/file" objects into the new directory.
        Finally, you can rename the directory from "dir2/" to "dir/".

    ** Case B) Both "dir" and "dir/file" objects
        In this case there are "dir" and "dir/file" objects made by s3cmd/etc.
        s3cmd and s3fs treat the "dir" object as a normal (file) object because it has neither meta information nor a name ending in "/".
        But the REST API (list bucket) result has the "dir/" name in "CommonPrefixes/Prefix".

        s3fs checks "dir/" and "dir" as a directory, but the "dir" object is not a directory object.
        (Because the new s3fs needs to be compatible with the old version, it checks for a directory object in the order "dir/", "dir".)
        In this case, the result of the "ls" command looks like this:
            ----------  1 root     root     0 Feb 27 02:48 dir
        As a result, "dir/file" cannot be shown or operated on because the "dir" object is a file.

        To treat "dir" as a directory, you need to add meta information to the "dir" object with s3cmd.


    ** Case C) Both "dir" and "dir/" objects
        In the last case there are "dir" and "dir/" objects made by different S3 clients.
        (Example: first you upload an object "dir/" as a directory with the new s3fs, then you upload an object "dir" with s3cmd.)
        The new s3fs treats "dir/" as the directory, because it searches in the order "dir/", "dir".
        As a result, the "dir" object cannot be shown or operated on.

    ** Compatibility between S3 clients
        Neither the new nor the old s3fs understands both "dir" and "dir/" at the same time; tntDrive and Gladinet behave the same as s3fs.
        If both "dir/" and "dir" objects exist, s3fs gives priority to "dir/".
        But s3cmd and DragonDisk understand both objects.




git-svn-id: http://s3fs.googlecode.com/svn/trunk@392 df820570-a93a-0410-bd06-b72b767a4274
2013-03-23 14:04:07 +00:00
ben.lemasurier@gmail.com
2a09e0864e Fixed a possible memory leak in the stat cache where
- items with an initial hit count of 0 would not be deleted

Added an additional integration test



git-svn-id: http://s3fs.googlecode.com/svn/trunk@383 df820570-a93a-0410-bd06-b72b767a4274
2011-09-26 15:20:14 +00:00
ben.lemasurier@gmail.com
6d12f31676 moving some repeated curl operations to a single location in curl.cpp
git-svn-id: http://s3fs.googlecode.com/svn/trunk@382 df820570-a93a-0410-bd06-b72b767a4274
2011-09-01 19:24:12 +00:00
ben.lemasurier@gmail.com
79ee801b94 cleanup HTTP DELETE operations to use the same curl interface
git-svn-id: http://s3fs.googlecode.com/svn/trunk@381 df820570-a93a-0410-bd06-b72b767a4274
2011-08-31 22:20:20 +00:00
ben.lemasurier@gmail.com
9fb05fba4f moved calc_signature to curl.cpp
git-svn-id: http://s3fs.googlecode.com/svn/trunk@380 df820570-a93a-0410-bd06-b72b767a4274
2011-08-31 20:36:40 +00:00
ben.lemasurier@gmail.com
4ba385d1be return -EPERM on 403 (access forbidden) instead of -EIO
git-svn-id: http://s3fs.googlecode.com/svn/trunk@373 df820570-a93a-0410-bd06-b72b767a4274
2011-08-30 19:44:26 +00:00
ben.lemasurier@gmail.com
c933b6a9b1 Support for modifying files > 5GB (fixes issue #215)
Modified rename_object and put_headers to handle objects larger than
5GB. Files larger than 5GB are required to use the multi interface.


git-svn-id: http://s3fs.googlecode.com/svn/trunk@363 df820570-a93a-0410-bd06-b72b767a4274
2011-08-29 22:01:32 +00:00
ben.lemasurier@gmail.com
07baba972a Handle curl send and recv errors a little more gracefully
git-svn-id: http://s3fs.googlecode.com/svn/trunk@357 df820570-a93a-0410-bd06-b72b767a4274
2011-07-29 15:48:15 +00:00
ben.lemasurier@gmail.com
ee1915ff93 missed this on the last commit
git-svn-id: http://s3fs.googlecode.com/svn/trunk@349 df820570-a93a-0410-bd06-b72b767a4274
2011-07-02 18:52:44 +00:00
ben.lemasurier@gmail.com
2eafa487d7 Massive speed improvements for readdir operations
complete s3fs_readdir() refactor
    - multi interface now batches HTTP requests
      - proper HTTP KeepAlive sessions are back! (CURLOPT_FORBID_REUSE is no longer required)
    - use xpath to quickly grab xml nodes
    - lots of cleanup
    - fixes some strange stat cache behavior
    - huge readdir performance benefits (8-14x in my case) on large directories



git-svn-id: http://s3fs.googlecode.com/svn/trunk@348 df820570-a93a-0410-bd06-b72b767a4274
2011-07-02 02:11:54 +00:00
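
(Hedged sketch of the curl multi interface used to batch the readdir
HEAD requests; a minimal driving loop on a modern libcurl, illustrative
only.)

    #include <curl/curl.h>

    CURLM* multi = curl_multi_init();
    // curl_multi_add_handle(multi, easy) for each object's HEAD request...

    int running = 0;
    do{
      curl_multi_perform(multi, &running);
      if(running){
        curl_multi_wait(multi, NULL, 0, 1000, NULL);  // wait for activity
      }
    }while(running > 0);

    // curl_multi_info_read() then yields per-request results.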
ben.lemasurier@gmail.com
6cd9e9e65d moved generic curl routines to their own file
git-svn-id: http://s3fs.googlecode.com/svn/trunk@332 df820570-a93a-0410-bd06-b72b767a4274
2011-03-01 19:35:55 +00:00