1) Changed code to fix memory leaks
For the memory leaks, the following changes were made.
* Calls the malloc_trim function.
* Calls the NSS initialization function, and adds the configure
  option "--enable-nss-init".
  If libcurl is built with NSS, s3fs initializes NSS manually.
  This NSS initialization is enabled by the "--enable-nss-init"
  configure option. If this option is specified, the "nss-devel"
  package is required.
* Calls the libxml2 initialization function (xmlInitParser).
* The BIO functions leak memory, so s3fs calls CRYPTO_free_ex_data.
* Changes the cache structure.
* Changes the cache eviction logic to LRU.
* Sets alignment for allocated memory in the body data structure.
* Adds the SSL session to the share handle, and adds the nosscache
  option.
* Deletes unused allocated memory (bug fix).
* Changes the default parallel count for head requests in readdir
  (500 -> 20).
* Fixes some bugs.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@482 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Added ahbe_conf option - r465
- Added the ahbe_conf option, which specifies the path of a
  configuration file. This file defines additional HTTP headers by
  file (object) extension.( Issue 292 )
- Added a sample configuration file for the ahbe_conf option in the
  test directory.
2) Changed mount point permission - r465
- Group/other permissions are not allowed on the mount point when
  s3fs runs without allow_other.
- All users are allowed access to the mount point when the
  allow_other option is specified.
3) Fixed bugs - r465, r466, r467, r468, r470, r471
- Changed the code so that s3fs returns an error as soon as possible
  when a user tries to change the mount point.( Issue 229 )
- Fixed mis-formatted debugging output.
- Changed the request type to "virtual hosted-style" for checking
  the bucket when s3fs starts.( Issue 362 )
- Fixed a bug: when s3fs retried a request after an error occurred,
  it used the curl handle incorrectly and the retry failed.
  ( Issue 343 )
- Fixed a miscoded function prototype.( Issue 360/Issue 361 )
- Fixed a bug about umask.( Issue 321 )
- Fixed a bug in which s3fs exited even though a correct
  $HOME/.passwd-s3fs was specified.( Issue 365 )
- Fixed a bug in which stat cache information was deleted at the
  wrong position in the s3fs_release function.( Issue 363 )
4) Added sample script - r472, r473
- Added sample_delcache.sh in the test directory for deleting
  cache files.
5) Added debugging messages - r467, r474
- Changed the debugging level for the prepare_url function.
- When the f2 option is specified, s3fs_getattr prints details of
  the file attributes.
- Added a new option, curldbg, for curl http(s) debugging.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@475 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixes bugs and changes codes - r448, r451, r452, r453, r454,
   r455, r460, r461
- Fixed the umask option so that it works correctly.( Issue 346 )
- Added a new S3fsCurl class for wrapping the curl functions.
- Deleted the unused YIKES macro.
- Used memcpy instead of copying each byte while downloading.
- Fixed a bug: s3fs did not use the servicepath when renaming.
- Fixed and changed the "use_sse"/"use_rrs" options.( Issue 352 )
- Fixed a memory leak in the curl_slist_sort_insert() function.
- Fixed a memory leak when a multipart upload failed with an error.
- Supported the mknod function.( Issue 355 )
- Simplified the debugging macros.
2) Changes codes for performance and adds "multireq_max" - r449
Changed the order for checking directory objects.
Added the "multireq_max" option, which sets the maximum number of
parallel requests for listing objects.
3) Performance tuning - r456, r457, r458, r459
- Changed large object uploading/downloading to use parallel
  requests.
- Added the "parallel_count"/"fd_page_size" options.
- No longer makes a temporary file when uploading a large object by
  multipart upload.
- Changed the handling of temporary files and local cache files,
  and added a status file for each local cache file.
- Uses the "Range" header for block downloading.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@462 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Adds use_sse option(Issue 226) - r438, r439
Supports SSE(Server-Side Encryption) and adds the "use_sse"
option.(Issue 226)
2) Fixes a bug(Issue 342) - r440
Fixed a bug "Segmentation fault on connect on ppc64".
The third parameter of curl_easy_getinfo() was wrong.
3) Fixes a bug(Issue 343, 235, 257, 265) - r441
Fixed the bug "SSL connect error".
r434 did not fix it completely (the fix was mistaken).
4) Fixes bugs and changes codes - r442, r444
- Fixes a bug in which temporary files were not removed on error.
- Fixes the curl_share function prototype.
- Changes one piece of code for the "-d" option.
- Changes the head_data struct members.
- Fixes the calling positions of the curl_global_init and
  curl_share_init functions.
- Fixes an uninitialized struct tm.
- Fixes access to a freed variable.
- Changes direct use of curl_slist to the auto_curl_slist class.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@445 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixes bugs - r428
* Fixes a bug in which the object owner/group was set to the wrong
  id.
* Fixes the permission of the mount point when the allow_other
  option is specified.
* Fixes a bug about permission:
  when the directory permission is 0557, another user (who is not
  the owner and not in the same group) got a permission error when
  making a file or directory in that directory.
* Fixes a bug( Issue 340 )
  Fixes a compile error about "blkcnt_t".
2) Fixes a bug( Issue 429 ) - r429
Changes s3fs so that it always uses its own DNS cache, and adds the
"nodnscache" option. If "nodnscache" is specified, s3fs does not use
the DNS cache, as before. s3fs keeps the DNS cache for 60 seconds by
libcurl's default.
3) Fixes a bug( Issue 235 ) - r430
Fixes a CURLE_COULDNT_CONNECT error when s3fs reads the objects'
header information.
* The maximum number of requests in one curl_multi request is 500,
  and s3fs loops over calls to curl_multi.
* Retries any request that returns a CURLE_COULDNT_CONNECT error.
4) Fixes a bug - r431
Fixed a bug (all multi head requests failed when mounting
bucket+path).
5) Fixes a bug( Issue 241 ) - r432
Changes the code so that s3fs returns the size from the opened file
descriptor if the client has already opened the file.
6) Fixes a bug - r433
The package tarball included doc/Makefile; this file is not
necessary in the tarball.
7) Fixes bugs( Issue 235 , Issue 257, Issue 265 ) - r434
Fixed the "SSL connect error", so s3fs can now connect over SSL with
no problem.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@435 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixes a bug - r418, r419
Fixed a bug in which s3fs failed to get stats when the
max_stat_cache_size option was set to 0.
2) Fixed a bug( Issue 291) - r420
Fixes ( Issue 291 ) "Man file has wrong permissions for passwd
file", and changes the man page.
Checks the passwd file permissions strictly.
Fixes a bug in which s3fs continued to run after finding invalid
passwd file permissions.
3) Added enable_noobj_cache option for non-existent objects - r420, r421
Adds the enable_noobj_cache option (default: disabled) for
performance.
When this option is specified, s3fs records in the stat cache that
an object (file or directory) does not exist.
4) Fixed a bug( Issue 240 ) - r421
Fixes ( Issue 240 ) "Cannot Mount Path in Bucket".
Changes the man page.
5) Supported s3sync'ed objects( Issue 31 ) - r422
Supports the HTTP headers made by s3sync.
This means that s3fs supports the x-amz-meta-owner,
x-amz-meta-permissions and x-amz-meta-group HTTP headers.
6) Added enable_content_md5 option - r423
Adds the enable_content_md5 option (default: disabled).
If the "enable_content_md5" option is specified, s3fs puts the
object with a "Content-MD5" header when sending a small object
(under 20MB).
7) Supported uid/gid option - r424
Supports the uid/gid mount (fuse) options.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@425 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixes a bug(Issue 320) - r408, r411, r412
Fixes (Issue 320) "Spurious I/O errors"
(http://code.google.com/p/s3fs/issues/detail?id=320)
When s3fs got an error in the upload loop, it tried to re-upload
without seeking the fd. This was the bug behind this issue.
Please see the details in r408, r411 and r412.
2) Fixes a bug(Issue 293) - r409
Fixes (Issue 293) "Command line argument bucket: causes segv"
(http://code.google.com/p/s3fs/issues/detail?id=293)
If a bucket name terminated with ":" is specified, s3fs crashes.
Please see the details in r409.
3) Supports an option(Issue 265) - r410
Supports (Issue 265) "Unable to mount to a non empty directory"
(http://code.google.com/p/s3fs/issues/detail?id=265)
Supported the "nonempty" fuse/mount option.
Please see the details in r410.
4) Supports other S3 clients(Issue 27) - r413, r414
Supports (Issue 27) "Compatability with other S3FS clients"
*** "_$folder$" dir object
Supports the directory objects made by s3fox. Their names have a
"_$folder$" suffix. On s3fs, such a directory object is listed as a
normal directory name without "_$folder$".
Please be careful when you change object attributes (rename, chmod,
chown, touch), because s3fs remakes the directory object without the
"_$folder$" suffix. This means the object is re-made by s3fs.
*** no dir object
Supports directories that have no object of their own.
If there is an object whose name contains a "/" character
(e.g. "<bucket>/dir/file"), the directory ("dir") may have no
object.
For example, you can upload an object named "s3://bucket/dir/file"
with s3cmd (or other S3 clients). In this case, "dir" is not an
object in the bucket.
This s3fs version understands this case.
Please be careful when you change object attributes (rename, chmod,
chown, touch), because s3fs makes a new directory object.
Please see the details in r413 and r414.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@415 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixes a bug(r403)
Fixes (Issue: 326) "segfault using s3fs 1.6.5".
(http://code.google.com/p/s3fs/issues/detail?id=326)
The my_curl_easy_perform() function did not clear the buffer (struct
BodyStruct body) before retrying the request.
The "struct BodyStruct" has also been changed to "class BodyData".
The new class is the same as BodyStruct, but handles its memory
automatically.
An argument was also added to my_curl_easy_perform(). This function
needs buffer pointers, but its argument was only for the body
buffer, so a buffer pointer for the header buffer was added.
2) Fixes a bug(r404)
Fixes (Issue: 328) "Problem with allow_other option".
(http://code.google.com/p/s3fs/issues/detail?id=328)
The return value of the get_username() function was wrong (NULL)
when there was no user id in the passwd file.
3) Fixes memory leak(r403)
There was a memory leak in the get_object_name() function.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@405 df820570-a93a-0410-bd06-b72b767a4274
==========================
List of Changes
==========================
1) Fixes bugs
Fixes Issue 321: "no write permission for non-root user".
(http://code.google.com/p/s3fs/issues/detail?id=321)
Fixes a bug in which s3fs did not set the uid/gid headers when
making a symlink.
2) Cleanup code.
Adds a common function which converts the Last-Modified header to
utime.
Deletes useless code and tidies it up.
3) xmlns
Changes s3fs so that it decides the xmlns url automatically.
The noxmlns option is therefore no longer needed, but it is kept.
4) Changes cache for performance
Changes the stat cache so that it accumulates stat information and
some headers.
By adding some headers to the cache, s3fs does not need to call the
curl_get_headers function.
After this change, one cache entry grows to about 500 bytes from
about 144 bytes.
Adds one condition for evicting cache entries: the object's ETag is
checked.
This works well for noticing changes to objects.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@400 df820570-a93a-0410-bd06-b72b767a4274
1) Fixed a bug(r397)
After deleting a directory object, s3fs could not make a directory
with the same name.
This was a bug in the cache logic for compatibility with other S3
clients.
2) Cleaned up source code(r396)
No logic changes; only the layout of functions and variables between
files was changed.
Adds s3fs_util.cpp/s3fs_util.h/common.h
git-svn-id: http://s3fs.googlecode.com/svn/trunk@397 df820570-a93a-0410-bd06-b72b767a4274
1) Lifetime for the stat cache
Added the new option "stat_cache_expire".
This option, specified in seconds, sets the lifetime of each stat
cache entry.
If this option is not specified, stat cache entries are kept in the
s3fs process until the cache grows to its maximum size. (default)
If this option is specified, a stat cache entry is removed from
memory when its lifetime expires.
2) Enable file permission
s3fs before 1.62 did not consider file access permissions.
From this version on, s3fs can consider them.
For access permission checking, the s3fs_getattr() function was
split into a sub-function which can check file access permissions.
It is like the access() function.
The functions calling s3fs_getattr() now call this new sub-function
instead of s3fs_getattr().
Finally, the s3fs_opendir() function, which is called by FUSE, was
added for checking directory access permissions when listing the
files in a directory.
3) UID/GID
When a file or a directory was created, s3fs could not set the
UID/GID to those of the user who executed the command.
(The UID/GID were usually root, because s3fs runs as root.)
From this version on, s3fs sets the correct UID/GID of the user who
executes the command.
4) About the mtime
If the object does not have the "x-amz-meta-mtime" meta header, s3fs
uses the "Last-Modified" header instead.
But s3fs had a bug in this code, and this version fixes it.
When a user modified a file, s3fs did not update the mtime of the
file. This version fixes this bug.
In the get_local_fd() function, the local file's mtime was changed
only when s3fs ran with the "use_cache" option.
This version always updates the mtime, whether or not the local
cache file is used.
The s3fs_flush() function set the mtime of the local cache file from
the S3 object's mtime, but this was wrong.
In this version, s3fs_flush() sets the mtime of the S3 object from
the local cache file or the tmpfile.
s3fs can also skip some requests, because it can always check the
mtime whether or not it uses the local cache file.
5) A case of no "x-amz-meta-mode"
If the object did not have the "x-amz-meta-mode" meta header, s3fs
recognized the file as not being a regular file.
From this version on, s3fs recognizes such a file as a regular file.
6) "." and ".." directories
s3fs_readdir() did not return the "." and ".." directory names.
From this version on, s3fs returns "." and "..".
For example, the result of "ls" lists the "." and ".." directories.
7) Fixed a bug
The insert_object() function had a bug, and it is fixed.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@390 df820570-a93a-0410-bd06-b72b767a4274
complete s3fs_readdir() refactor
- multi interface now batches HTTP requests
- proper HTTP KeepAlive sessions are back! (CURLOPT_FORBID_REUSE is no longer required)
- use xpath to quickly grab xml nodes
- lots of cleanup
- fixes some strange stat cache behavior
- huge readdir performance benefits (8-14x in my case) on large directories
git-svn-id: http://s3fs.googlecode.com/svn/trunk@348 df820570-a93a-0410-bd06-b72b767a4274
curl error code 23 - CURLE_WRITE_ERROR
When encountered, it does a retry.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@329 df820570-a93a-0410-bd06-b72b767a4274
- s3fs_flush() now checks to see whether the file on the remote end is the same as the local copy.
- md5sum() now requires a file descriptor instead of a path.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@322 df820570-a93a-0410-bd06-b72b767a4274
Functional changes are limited to the multipart upload process. Each uploaded part is now verified against a local md5sum.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@318 df820570-a93a-0410-bd06-b72b767a4274
- increase max_keys in readdir from 50 to 500
- handle the CURLE_COULDNT_RESOLVE_HOST error better
- add the curl forbid reuse option
git-svn-id: http://s3fs.googlecode.com/svn/trunk@308 df820570-a93a-0410-bd06-b72b767a4274
Beginning of s3fs "utility" mode - initially the -u option
just reports in-progress multipart uploads for the
bucket. Eventually this mode could be used for
other S3 tasks not accessible through typical
file system operations.
For multipart upload, use the safer mkstemp() instead
of tmpnam() for the temporary file.
Increased the curl connect and readwrite timeouts
to 10 and 30 seconds respectively.
Autodetect when a big file is being uploaded, and
increase the readwrite timeout to 120 seconds. This
value was found through experimentation. When uploading
a big file, it is suspected that S3 needs time
to assemble the file before it is available
for access. It was found that when a large file
was uploaded via rsync, the final mtime and
chmod modifications were timing out, even though
the upload itself was successful.
Multipart upload is ready for use. A couple of
error checks are still needed in the function, plus
some cleanup. Feedback on how it is working would
be appreciated, though.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@298 df820570-a93a-0410-bd06-b72b767a4274
Check issue #142 for details
Code is operational, but not quite ready for
prime time -- needs some clean up
git-svn-id: http://s3fs.googlecode.com/svn/trunk@297 df820570-a93a-0410-bd06-b72b767a4274