complete s3fs_readdir() refactor
- the curl multi interface now batches HTTP requests (see the sketch below)
- proper HTTP Keep-Alive sessions are back! (CURLOPT_FORBID_REUSE is no longer required)
- use XPath to quickly grab XML nodes
- lots of cleanup
- fixes some strange stat cache behavior
- huge readdir performance benefits (8-14x in my case) on large directories
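A minimal sketch, not the s3fs code itself, of the batching idea: hand
several easy handles to a curl multi handle and drive them in one loop,
leaving CURLOPT_FORBID_REUSE unset so connections stay alive between
batches. The bucket URL and HEAD requests are placeholders.

    #include <curl/curl.h>
    #include <string>
    #include <vector>

    int main() {
      curl_global_init(CURL_GLOBAL_ALL);

      std::vector<std::string> urls = {
        "https://example-bucket.s3.amazonaws.com/a",  // placeholder objects
        "https://example-bucket.s3.amazonaws.com/b",
      };

      CURLM *multi = curl_multi_init();
      std::vector<CURL *> handles;

      for (const std::string &url : urls) {
        CURL *easy = curl_easy_init();
        curl_easy_setopt(easy, CURLOPT_URL, url.c_str());
        curl_easy_setopt(easy, CURLOPT_NOBODY, 1L);  // HEAD-style request
        // CURLOPT_FORBID_REUSE is deliberately NOT set: finished
        // connections go back to the pool for the next batch.
        curl_multi_add_handle(multi, easy);
        handles.push_back(easy);
      }

      int still_running = 0;
      do {
        curl_multi_perform(multi, &still_running);
        if (still_running)
          curl_multi_wait(multi, NULL, 0, 1000, NULL);  // block until activity
      } while (still_running);

      for (CURL *easy : handles) {
        curl_multi_remove_handle(multi, easy);
        curl_easy_cleanup(easy);
      }
      curl_multi_cleanup(multi);
      curl_global_cleanup();
      return 0;
    }

The win for readdir: one listing plus a batch of parallel header
requests replaces a long chain of serial round trips.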
git-svn-id: http://s3fs.googlecode.com/svn/trunk@348 df820570-a93a-0410-bd06-b72b767a4274
- fixed a bug in the file cache: it was attempting to set the mtime
on symlinks (see the sketch after this list)
- general code cleanup; moved some string functions to string_util.cpp
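A minimal sketch (hypothetical names, not the actual fix) of the symlink
guard: utimes() follows symlinks, so calling it for a cached symlink
would touch the wrong inode; skip the mtime update instead.

    #include <sys/stat.h>
    #include <sys/time.h>

    static int set_cache_mtime(const char *path, time_t mtime) {
      struct stat st;
      if (lstat(path, &st) == -1)   // lstat does NOT follow the link
        return -1;
      if (S_ISLNK(st.st_mode))
        return 0;                   // symlink: nothing to do
      struct timeval tv[2] = { {mtime, 0}, {mtime, 0} };
      return utimes(path, tv);      // regular file: set atime and mtime
    }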
git-svn-id: http://s3fs.googlecode.com/svn/trunk@345 df820570-a93a-0410-bd06-b72b767a4274
handle curl error code 23 (CURLE_WRITE_ERROR)
When encountered, the operation is retried.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@329 df820570-a93a-0410-bd06-b72b767a4274
- s3fs_flush() now checks whether the file on the remote end matches the local copy.
- md5sum() now requires a file descriptor instead of a path (see the sketch below).
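A minimal sketch, assuming OpenSSL's MD5 routines, of the fd-based
approach: hash whatever the descriptor already refers to instead of
reopening a path. md5sum_fd() is a hypothetical name, not the exact
s3fs signature.

    #include <openssl/md5.h>
    #include <unistd.h>

    static int md5sum_fd(int fd, unsigned char digest[MD5_DIGEST_LENGTH]) {
      MD5_CTX ctx;
      MD5_Init(&ctx);
      if (lseek(fd, 0, SEEK_SET) == -1)  // hash from the start of the file
        return -1;
      char buf[4096];
      ssize_t n;
      while ((n = read(fd, buf, sizeof(buf))) > 0)
        MD5_Update(&ctx, buf, (size_t)n);
      if (n < 0)
        return -1;                       // read error
      MD5_Final(digest, &ctx);
      return 0;                          // caller may want to restore the offset
    }

This lets s3fs_flush() hash the file it already has open when deciding
whether the remote copy needs updating.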
git-svn-id: http://s3fs.googlecode.com/svn/trunk@322 df820570-a93a-0410-bd06-b72b767a4274
Functional changes are limited to the multipart upload process. Each uploaded part is now verified against a local md5sum.
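A sketch of the verification idea, with illustrative names: for a single
uploaded part, S3's ETag response header is typically the quoted hex MD5
of the part's bytes, so it can be compared against a locally computed
digest.

    #include <cstdio>
    #include <string>

    static bool part_matches(const std::string &etag_header,
                             const unsigned char digest[16]) {
      char hex[33];
      for (int i = 0; i < 16; i++)  // render the digest as lowercase hex
        snprintf(hex + 2 * i, 3, "%02x", digest[i]);
      // The ETag arrives as "\"<hex>\""; a substring check sidesteps the quotes.
      return etag_header.find(hex) != std::string::npos;
    }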
git-svn-id: http://s3fs.googlecode.com/svn/trunk@318 df820570-a93a-0410-bd06-b72b767a4274
- increase max_keys in readdir from 50 to 500
- handle the CURLE_COULDNT_RESOLVE_HOST error better
- add the curl CURLOPT_FORBID_REUSE option (see the sketch below)
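For reference, a sketch of the option being added (the r348 entry above
removes it again once Keep-Alive sessions work properly):

    #include <curl/curl.h>

    static void disable_connection_reuse(CURL *curl) {
      // Close the connection after the transfer completes instead of
      // returning it to the pool for reuse.
      curl_easy_setopt(curl, CURLOPT_FORBID_REUSE, 1L);
    }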
git-svn-id: http://s3fs.googlecode.com/svn/trunk@308 df820570-a93a-0410-bd06-b72b767a4274
Beginning of s3fs "utility" mode - initially the -u option
just reports in-progress multipart uploads for the
bucket. Eventually this mode could be used for
other S3 tasks not accessible through typical
file system operations.
For multipart upload, use the safer mkstemp() instead
of tmpnam() for the temporary file.
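A minimal sketch of the switch: mkstemp() atomically creates and opens
the file, avoiding the race inherent in tmpnam() returning a name that
something else can claim first. The template path is illustrative.

    #include <cstdlib>
    #include <unistd.h>

    static int open_upload_tempfile(void) {
      char name[] = "/tmp/s3fs.XXXXXX";  // trailing Xs are replaced in place
      int fd = mkstemp(name);            // created with mode 0600, already open
      if (fd != -1)
        unlink(name);  // drop the name now; the fd keeps the file alive
      return fd;
    }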
Increased the curl connect and readwrite timeouts
to 10 and 30 seconds respectively.
Autodetect when a big file is being uploaded and
increase the readwrite timeout to 120 seconds. This
value was found through experimentation: it is
suspected that S3 needs time to assemble a big
file before it is available for access. When a
large file was uploaded via rsync, the final mtime
and chmod modifications were timing out, even though
the upload itself was successful.
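A minimal sketch, assuming libcurl, of the timeout policy described
above. The 10/30/120-second values come from this change; the size
threshold is a placeholder.

    #include <curl/curl.h>
    #include <sys/types.h>

    static void set_timeouts(CURL *curl, off_t file_size) {
      long readwrite_timeout = 30;        // default read/write timeout
      if (file_size > 20 * 1024 * 1024)   // "big file" heuristic (placeholder)
        readwrite_timeout = 120;          // give S3 time to assemble the file

      curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 10L);
      // Treat a stalled transfer as a timeout: abort if fewer than one
      // byte per second moves for readwrite_timeout seconds.
      curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1L);
      curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, readwrite_timeout);
    }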
Multipart upload is ready for use. A couple of
error checks and some cleanup are still needed in
the function. Feedback on how it is working would
be appreciated.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@298 df820570-a93a-0410-bd06-b72b767a4274
Check issue #142 for details
Code is operational, but not quite ready for
prime time -- needs some cleanup
git-svn-id: http://s3fs.googlecode.com/svn/trunk@297 df820570-a93a-0410-bd06-b72b767a4274
Implemented create() function - Resolves issue #18
Issues will be updated with more detail.
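For context, a minimal sketch of the shape a FUSE 2.x create() handler
takes; the staging path and body are hypothetical, not the s3fs
implementation.

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <fcntl.h>

    static int sketch_create(const char *path, mode_t mode,
                             struct fuse_file_info *fi) {
      // Stage the new object in a local file; a later flush would upload it.
      int fd = open("/tmp/s3fs.staging", O_RDWR | O_CREAT | O_TRUNC, mode);
      if (fd == -1)
        return -errno;   // FUSE expects negated errno values
      fi->fh = fd;       // handed back to subsequent read/write calls
      return 0;
    }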
git-svn-id: http://s3fs.googlecode.com/svn/trunk@289 df820570-a93a-0410-bd06-b72b767a4274
Reorganized the code, making refactoring easier and the code easier to understand (for me, anyway).
Opened up the VERIFY macro so that memory cleanup can be done
before returning from a function.
Made the file descriptor function calls a bit more robust by
checking the return codes (see the sketch below).
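A sketch, with hypothetical names, of both points: the failure path
frees what it allocated before returning (the cleanup a bare
early-return macro would skip), and descriptor calls have their return
codes checked.

    #include <cerrno>
    #include <cstdlib>
    #include <unistd.h>

    static int fetch_to_buffer(int fd) {
      char *buf = (char *)malloc(4096);
      if (buf == NULL)
        return -ENOMEM;

      ssize_t n = read(fd, buf, 4096);  // check the return code...
      if (n == -1) {
        free(buf);                      // ...and clean up before returning
        return -errno;
      }

      /* ... use the n bytes in buf ... */
      free(buf);
      return 0;
    }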
Current code tested on Debian sid, CentOS (with FUSE 2.84) and Ubuntu 10.10
git-svn-id: http://s3fs.googlecode.com/svn/trunk@286 df820570-a93a-0410-bd06-b72b767a4274
No tarball until further testing on other platforms.
Extensively tested on Debian sid 64-bit
Resolves issue #104
git-svn-id: http://s3fs.googlecode.com/svn/trunk@285 df820570-a93a-0410-bd06-b72b767a4274
re-wrote the curl write-to-memory callback function
(this helped eliminate a memory leak)
eliminated another memory leak
more debug messages
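A minimal sketch of the standard libcurl write-to-memory callback
pattern, not the s3fs rewrite itself. The leak-avoidance point: grow
one caller-owned buffer via realloc so there is exactly one thing to
free on every exit path.

    #include <curl/curl.h>
    #include <cstdlib>
    #include <cstring>

    struct MemBuf {
      char  *data;   // caller initializes to NULL
      size_t size;   // caller initializes to 0
    };

    static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp) {
      MemBuf *mem = (MemBuf *)userp;
      size_t add = size * nmemb;
      char *grown = (char *)realloc(mem->data, mem->size + add + 1);
      if (grown == NULL)
        return 0;                     // short count signals an error to libcurl
      mem->data = grown;
      memcpy(mem->data + mem->size, ptr, add);
      mem->size += add;
      mem->data[mem->size] = '\0';    // keep it usable as a C string
      return add;                     // everything consumed
    }

Install it with CURLOPT_WRITEFUNCTION and pass the MemBuf via
CURLOPT_WRITEDATA; the caller frees mem.data exactly once, whether or
not the transfer succeeded.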
git-svn-id: http://s3fs.googlecode.com/svn/trunk@281 df820570-a93a-0410-bd06-b72b767a4274
code with CURLE_OK is returned.
clean up a few debug/stdout messages
This commit doesn't really fix anything, but eliminates
the suspicious HTTP 404 errors from the syslog -- these are
usually normal.
git-svn-id: http://s3fs.googlecode.com/svn/trunk@280 df820570-a93a-0410-bd06-b72b767a4274
Rather than erroring out, it is treated like
CURLE_OPERATION_TIMEDOUT in that it doesn't exit the
timeout loop while the retry count is > 0. I
also added a short sleep so that it doesn't retry
immediately (see the sketch below).
Resolves issue #132
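A minimal sketch of the retry behavior described: transient errors
spend a retry and sleep briefly instead of failing the operation.
The error list and one-second pause are illustrative.

    #include <curl/curl.h>
    #include <unistd.h>

    static CURLcode perform_with_retries(CURL *curl, int retries) {
      CURLcode rc = CURLE_OK;
      while (retries-- > 0) {
        rc = curl_easy_perform(curl);
        if (rc == CURLE_OK)
          return rc;
        if (rc != CURLE_OPERATION_TIMEDOUT &&
            rc != CURLE_COULDNT_RESOLVE_HOST)
          break;      // non-transient error: give up immediately
        sleep(1);     // short pause so we don't retry immediately
      }
      return rc;
    }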
git-svn-id: http://s3fs.googlecode.com/svn/trunk@277 df820570-a93a-0410-bd06-b72b767a4274