This _should_ handle issue #153 while also simplifying s3fs_check_service;
the up/down network detection duplicated a lot of functionality already
available in my_curl_easy_perform. This will need some testing, of course.
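A rough sketch of the idea (the function names below are illustrative stand-ins, not the actual s3fs code): route the service check through the same perform-with-retry wrapper that ordinary requests use, so connectivity failures are detected in one place.

    #include <curl/curl.h>
    #include <cstdio>

    // Illustrative stand-in for my_curl_easy_perform: perform the request and
    // retry on failure, so transport-level problems are handled in one place.
    static int perform_with_retries(CURL* curl, int max_retries = 3) {
      for (int attempt = 0; attempt < max_retries; ++attempt) {
        CURLcode rc = curl_easy_perform(curl);
        if (rc == CURLE_OK)
          return 0;
        fprintf(stderr, "request failed (%s), attempt %d\n",
                curl_easy_strerror(rc), attempt + 1);
      }
      return -1;   // the network is effectively down as far as s3fs is concerned
    }

    // The service check just issues a cheap request through the same wrapper
    // instead of re-implementing its own up/down detection.
    static int check_service(const char* bucket_url) {
      CURL* curl = curl_easy_init();
      if (!curl)
        return -1;
      curl_easy_setopt(curl, CURLOPT_URL, bucket_url);
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   // headers only
      int result = perform_with_retries(curl);
      curl_easy_cleanup(curl);
      return result;
    }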
git-svn-id: http://s3fs.googlecode.com/svn/trunk@374 df820570-a93a-0410-bd06-b72b767a4274
Modified rename_object and put_headers to handle objects larger than
5GB; files that large are required to use the multi interface.
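A minimal sketch of the size-based branch; the helper names are hypothetical, and only the 5GB threshold (S3's limit for a single server-side copy) comes from the change above.

    #include <sys/types.h>
    #include <cstdio>
    #include <string>

    // 5 GB: the largest object S3 will copy with a single PUT-with-copy
    // request; anything bigger has to go through the multipart machinery.
    static const off_t FIVE_GB = 5LL * 1024 * 1024 * 1024;

    // Stand-ins for the two copy paths (illustrative only).
    static int copy_object_single(const std::string& from, const std::string& to) {
      printf("single copy %s -> %s\n", from.c_str(), to.c_str());
      return 0;
    }
    static int copy_object_multipart(const std::string& from, const std::string& to) {
      printf("multipart copy %s -> %s\n", from.c_str(), to.c_str());
      return 0;
    }

    // The rename/put_headers decision is just a size check.
    static int rename_object_sketch(const std::string& from, const std::string& to,
                                    off_t size) {
      if (size >= FIVE_GB)
        return copy_object_multipart(from, to);
      return copy_object_single(from, to);
    }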
git-svn-id: http://s3fs.googlecode.com/svn/trunk@363 df820570-a93a-0410-bd06-b72b767a4274
Running s3fs through valgrind revealed a few more memory leaks associated with
the fuse option parser; main() no longer returns the result of fuse_main()
directly. In addition, when s3fs_check_service failed, many of the curl handles
were not being cleaned up properly.
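A minimal sketch of the cleanup pattern, assuming a FUSE 2.6-style fuse_main; the point is simply to stop returning fuse_main()'s result straight out of main() so the option-parser allocations can be freed first.

    #define FUSE_USE_VERSION 26
    #include <fuse.h>

    static struct fuse_operations s3fs_oper;   // assumed filled in elsewhere

    int main(int argc, char* argv[]) {
      struct fuse_args custom_args = FUSE_ARGS_INIT(argc, argv);

      // ... fuse_opt_parse(&custom_args, ...) and other setup would go here ...

      // Capture the result instead of `return fuse_main(...);` so the memory
      // allocated by the option parser can be released before exiting.
      int fuse_res = fuse_main(custom_args.argc, custom_args.argv, &s3fs_oper, NULL);

      fuse_opt_free_args(&custom_args);   // frees the allocations valgrind flagged
      return fuse_res;
    }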
git-svn-id: http://s3fs.googlecode.com/svn/trunk@362 df820570-a93a-0410-bd06-b72b767a4274
Running s3fs through several valgrind checks uncovered a bug in s3fs_check_service().
git-svn-id: http://s3fs.googlecode.com/svn/trunk@361 df820570-a93a-0410-bd06-b72b767a4274
Simply use strstr to see if the bucket is available; removed the check
to see whether the service has any buckets at all.
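Roughly what the simplified check looks like; the exact search string is an assumption.

    #include <cstring>
    #include <string>

    // Sketch of the simplified check: look for the bucket's <Name> element in
    // the ListAllMyBuckets response body instead of fully parsing the XML.
    static bool bucket_in_listing(const std::string& response_body,
                                  const std::string& bucket) {
      std::string needle = "<Name>" + bucket + "</Name>";
      return strstr(response_body.c_str(), needle.c_str()) != NULL;
    }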
git-svn-id: http://s3fs.googlecode.com/svn/trunk@360 df820570-a93a-0410-bd06-b72b767a4274
When s3fs is linked against libcurl < 7.20.0 we'd receive CURLM_CALL_MULTI_PERFORM
from the multi interface; handle that return code properly.
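The usual compatibility loop for that return code looks like this (a sketch, not the committed code):

    #include <curl/curl.h>

    // With libcurl < 7.20.0, curl_multi_perform can return
    // CURLM_CALL_MULTI_PERFORM, which just means "call me again right away";
    // it is not an error.
    static CURLMcode multi_perform_compat(CURLM* multi_handle, int* still_running) {
      CURLMcode rc;
      do {
        rc = curl_multi_perform(multi_handle, still_running);
      } while (rc == CURLM_CALL_MULTI_PERFORM);
      return rc;
    }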
git-svn-id: http://s3fs.googlecode.com/svn/trunk@358 df820570-a93a-0410-bd06-b72b767a4274
complete s3fs_readdir() refactor
- multi interface now batches HTTP requests
- proper HTTP KeepAlive sessions are back! (CURLOPT_FORBID_REUSE is no longer required)
- use XPath to quickly grab XML nodes (sketched after this list)
- lots of cleanup
- fixes some strange stat cache behavior
- huge readdir performance benefits (8-14x in my case) on large directories
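For illustration, the XPath part of the refactor boils down to something like the following sketch (not the actual s3fs code); the namespace URI is the standard S3 2006-03-01 document namespace.

    #include <libxml/parser.h>
    #include <libxml/xpath.h>
    #include <libxml/xpathInternals.h>
    #include <string>
    #include <vector>

    // Grab every <Key> from a ListBucket response with one XPath query instead
    // of walking the tree node by node.
    static std::vector<std::string> list_keys(const std::string& xml) {
      std::vector<std::string> keys;
      xmlDocPtr doc = xmlReadMemory(xml.c_str(), (int)xml.size(), "", NULL, 0);
      if (!doc)
        return keys;

      xmlXPathContextPtr ctx = xmlXPathNewContext(doc);
      xmlXPathRegisterNs(ctx, BAD_CAST "s3",
                         BAD_CAST "http://s3.amazonaws.com/doc/2006-03-01/");
      xmlXPathObjectPtr res =
          xmlXPathEvalExpression(BAD_CAST "//s3:Contents/s3:Key", ctx);

      if (res && res->nodesetval) {
        for (int i = 0; i < res->nodesetval->nodeNr; ++i) {
          xmlChar* content = xmlNodeGetContent(res->nodesetval->nodeTab[i]);
          if (content) {
            keys.push_back((const char*)content);
            xmlFree(content);
          }
        }
      }

      if (res)
        xmlXPathFreeObject(res);
      xmlXPathFreeContext(ctx);
      xmlFreeDoc(doc);
      return keys;
    }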
git-svn-id: http://s3fs.googlecode.com/svn/trunk@348 df820570-a93a-0410-bd06-b72b767a4274
- fixed a bug in the file cache: it was attempting to set the mtime
  on symlinks (a guard along the lines of the sketch below)
- general code cleanup; moved some string functions to string_util.cpp
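The guard referenced above amounts to something like this sketch (illustrative, not the committed code):

    #include <sys/stat.h>
    #include <sys/time.h>

    // Only set the mtime for non-symlinks; the bug above came from trying to
    // set it on cached symlinks.
    static int set_cache_mtime(const char* cache_path, time_t mtime) {
      struct stat st;
      if (lstat(cache_path, &st) != 0)
        return -1;
      if (S_ISLNK(st.st_mode))
        return 0;                      // skip symlinks entirely

      struct timeval tv[2];
      tv[0].tv_sec = mtime;  tv[0].tv_usec = 0;   // atime
      tv[1].tv_sec = mtime;  tv[1].tv_usec = 0;   // mtime
      return utimes(cache_path, tv);
    }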
git-svn-id: http://s3fs.googlecode.com/svn/trunk@345 df820570-a93a-0410-bd06-b72b767a4274
Handle curl error code 23 (CURLE_WRITE_ERROR): when it is encountered, the
request is retried.
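A sketch of the retry; the retry count and back-off are assumptions.

    #include <curl/curl.h>
    #include <unistd.h>

    // Treat CURLE_WRITE_ERROR (curl error 23) as transient and attempt the
    // transfer again.
    static CURLcode perform_with_write_retry(CURL* curl, int max_retries = 3) {
      CURLcode rc = curl_easy_perform(curl);
      for (int i = 0; i < max_retries && rc == CURLE_WRITE_ERROR; ++i) {
        sleep(1);                      // brief pause before retrying
        rc = curl_easy_perform(curl);
      }
      return rc;
    }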
git-svn-id: http://s3fs.googlecode.com/svn/trunk@329 df820570-a93a-0410-bd06-b72b767a4274
- s3fs_flush() now checks to see whether the file on the remote end is the same as the local copy.
- md5sum() now requires a file descriptor instead of a path.
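A minimal sketch of an fd-based md5sum() using OpenSSL; the real implementation may differ in buffering and error handling.

    #include <openssl/md5.h>
    #include <unistd.h>
    #include <cstdio>
    #include <string>

    // Hash the already-open descriptor: no second open() of the temporary
    // file, and it works even if the file has been unlinked.
    static std::string md5sum_fd(int fd) {
      MD5_CTX ctx;
      MD5_Init(&ctx);

      // Rewind so the whole file is hashed regardless of the current offset.
      lseek(fd, 0, SEEK_SET);

      char buf[4096];
      ssize_t n;
      while ((n = read(fd, buf, sizeof(buf))) > 0)
        MD5_Update(&ctx, buf, (size_t)n);

      unsigned char digest[MD5_DIGEST_LENGTH];
      MD5_Final(digest, &ctx);

      char hex[MD5_DIGEST_LENGTH * 2 + 1];
      for (int i = 0; i < MD5_DIGEST_LENGTH; ++i)
        snprintf(hex + i * 2, 3, "%02x", digest[i]);
      return std::string(hex);
    }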
git-svn-id: http://s3fs.googlecode.com/svn/trunk@322 df820570-a93a-0410-bd06-b72b767a4274
Functional changes are limited to the multipart upload process. Each uploaded part is now verified against a local md5sum.
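For illustration, the per-part verification can lean on the fact that S3 returns the part's MD5 (hex, wrapped in quotes) as the ETag of the upload-part response; this comparison helper is a sketch, not the committed code.

    #include <string>

    // Compare the ETag returned for an uploaded part with the md5 computed
    // from the local data before accepting the part.
    static bool part_matches(const std::string& etag_header,
                             const std::string& local_md5_hex) {
      std::string etag = etag_header;
      // strip surrounding quotes if present
      if (etag.size() >= 2 && etag[0] == '"' && etag[etag.size() - 1] == '"')
        etag = etag.substr(1, etag.size() - 2);
      return etag == local_md5_hex;
    }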
git-svn-id: http://s3fs.googlecode.com/svn/trunk@318 df820570-a93a-0410-bd06-b72b767a4274
- increase max_keys in readdir from 50 to 500
- handle the CURLE_COULDNT_RESOLVE_HOST error better
- add the CURLOPT_FORBID_REUSE option (both sketched after this list)
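Roughly what the two curl-related items look like in code; the retry count and sleep are assumptions.

    #include <curl/curl.h>
    #include <unistd.h>

    static CURLcode perform_readdir_request(CURL* curl) {
      // Don't reuse connections for these requests.
      curl_easy_setopt(curl, CURLOPT_FORBID_REUSE, 1L);

      CURLcode rc = curl_easy_perform(curl);
      for (int i = 0; i < 3 && rc == CURLE_COULDNT_RESOLVE_HOST; ++i) {
        sleep(1);                      // transient DNS failure: back off and retry
        rc = curl_easy_perform(curl);
      }
      return rc;
    }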
git-svn-id: http://s3fs.googlecode.com/svn/trunk@308 df820570-a93a-0410-bd06-b72b767a4274
Beginning of s3fs "utility" mode: initially the -u option
just reports in-progress multipart uploads for the
bucket. Eventually this mode could be used for
other S3 tasks not accessible through typical
file system operations.
For multipart upload, use the safer mkstemp() instead
of tmpnam() for the temporary file.
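A sketch of the safer temp-file creation; the template path is illustrative.

    #include <cstdio>
    #include <cstdlib>
    #include <string>

    // mkstemp() creates and opens the file atomically (mode 0600), avoiding
    // the name-guessing race that makes tmpnam() unsafe.
    static int open_part_tempfile(std::string& path_out) {
      char templ[] = "/tmp/s3fs.part.XXXXXX";
      int fd = mkstemp(templ);
      if (fd < 0) {
        perror("mkstemp");
        return -1;
      }
      path_out = templ;   // keep the name so the part file can be unlinked later
      return fd;
    }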
Increased the curl connect and readwrite timeouts
to 10 and 30 seconds respectively.
Autodetect when a big file is being uploaded and
increase the readwrite timeout to 120 seconds. This
value was found through experimentation: it is
suspected that S3 needs time to assemble a large
file before it is available for access. When a
large file was uploaded via rsync, the final mtime
and chmod modifications were timing out even though
the upload itself was successful.
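A sketch of the timeout policy described above. The 400MB "big file" threshold is an assumption for illustration, and mapping the readwrite timeout onto curl's low-speed options is one plausible way to express an inactivity timeout.

    #include <curl/curl.h>
    #include <sys/types.h>

    static void set_timeouts(CURL* curl, off_t upload_size) {
      long readwrite_timeout = 30;                    // seconds
      if (upload_size > 400LL * 1024 * 1024)          // looks like a big upload
        readwrite_timeout = 120;                      // give S3 time to assemble it

      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 10L);
      // Abort if less than 1 byte/second moves for readwrite_timeout seconds.
      curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1L);
      curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, readwrite_timeout);
    }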
Multipart upload is ready for use. A couple of
error checks and some cleanup are still needed in
the function. Feedback on how it is working would
be appreciated.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@298 df820570-a93a-0410-bd06-b72b767a4274