Fixed issue: 320

1) Change fread/fwrite calling logic (Issue: 320)
    When the rsync command runs, the s3fs functions are called in the order s3fs_create, s3fs_truncate, s3fs_flush.
    After s3fs_truncate uploads the file, s3fs_flush uploads it again without rewinding the fd.
    This is the bug: s3fs_flush immediately reads EOF and returns an error.
    The fix calls lseek to seek the fd back to the head of the file before fread/fwrite.
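    The failure mode above can be sketched outside s3fs (this is not s3fs code; the file name and buffer sizes are illustrative): after one consumer reads a fd to the end, a second read from the same fd returns 0 (EOF) unless the offset is first rewound with lseek, which is exactly what the patch adds before the upload path reads the fd.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        char path[] = "/tmp/rewindXXXXXX";   /* illustrative temp file */
        int fd = mkstemp(path);
        if(fd == -1){
            perror("mkstemp");
            return 1;
        }
        const char data[] = "hello";
        write(fd, data, strlen(data));

        char buf[16];
        lseek(fd, 0, SEEK_SET);
        ssize_t first = read(fd, buf, sizeof(buf));   /* first pass consumes the file */

        ssize_t stale = read(fd, buf, sizeof(buf));   /* offset is at EOF: returns 0 */

        if(0 != lseek(fd, 0, SEEK_SET)){              /* the fix: rewind before re-reading */
            perror("lseek");
            return 1;
        }
        ssize_t again = read(fd, buf, sizeof(buf));   /* full content is readable again */

        printf("first=%zd stale=%zd again=%zd\n", first, stale, again);
        close(fd);
        unlink(path);
        return 0;
    }
    ```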




git-svn-id: http://s3fs.googlecode.com/svn/trunk@411 df820570-a93a-0410-bd06-b72b767a4274
ggtakec@gmail.com 2013-04-17 04:50:13 +00:00
parent 9641d07806
commit a92a4c0a4f


@@ -875,8 +875,9 @@ static int put_local_fd(const char* path, headers_t meta, int fd) {
   FGPRINT("  put_local_fd[path=%s][fd=%d]\n", path, fd);

-  if(fstat(fd, &st) == -1)
+  if(fstat(fd, &st) == -1){
     YIKES(-errno);
+  }

   /*
    * Make decision to do multi upload (or not) based upon file size
@@ -897,6 +898,13 @@ static int put_local_fd(const char* path, headers_t meta, int fd) {
     return -ENOTSUP;
   }

+  // seek to head of file.
+  if(0 != lseek(fd, 0, SEEK_SET)){
+    SYSLOGERR("line %d: lseek: %d", __LINE__, -errno);
+    FGPRINT("  put_local_fd - lseek error(%d)\n", -errno);
+    return -errno;
+  }
+
   if(st.st_size >= 20971520 && !nomultipart) { // 20MB
     // Additional time is needed for large files
     if(readwrite_timeout < 120)