Failed to upload the chunk ... operation not supported

Please describe what you are doing to trigger the bug:
On macOS Catalina, Duplicacy 2.3.0
duplicacy -verbose -debug -log backup -stats

Please describe what you expect to happen (but doesn’t):
Normal backup

Please describe what actually happens (the wrong behaviour):
Backup fails with the error below. I tried a new backup with a new id, removed the cache, and ran with and without VSS. The previous version, 2.2.3, works just fine, but without VSS.

2019-11-26 20:51:23.795 DEBUG PATTERN_INCLUDE dotfiles/.zshrc is included
2019-11-26 20:51:23.795 DEBUG PATTERN_INCLUDE dotfiles/CODEOWNERS is included
2019-11-26 20:51:23.795 DEBUG PATTERN_INCLUDE dotfiles/CODE_OF_CONDUCT.md is included
2019-11-26 20:51:23.795 DEBUG PATTERN_INCLUDE dotfiles/CONTRIBUTING.md is included
2019-11-26 20:51:23.795 DEBUG PATTERN_INCLUDE dotfiles/LICENSE is included
2019-11-26 20:51:23.795 DEBUG PATTERN_INCLUDE dotfiles/README.md is included
2019-11-26 20:51:23.795 DEBUG PATTERN_INCLUDE dotfiles/install.sh is included
2019-11-26 20:51:23.795 DEBUG PATTERN_EXCLUDE dotfiles/.git/ is excluded by pattern e:(?i)\.(svn|git|hg)/.*$
2019-11-26 20:51:24.060 TRACE PACK_START Packing .wget-hsts
2019-11-26 20:51:24.060 INFO PACK_END Packed .wget-hsts (262)
2019-11-26 20:51:24.061 TRACE PACK_START Packing .zsh_history
2019-11-26 20:51:24.062 INFO PACK_END Packed .zsh_history (99145)
2019-11-26 20:51:24.062 WARN OPEN_FAILURE Failed to open file for reading: open /Users/vlad/.dropbox/events/1-6cc2-5dddcc8e: no such file or directory
2019-11-26 20:51:24.062 WARN OPEN_FAILURE Failed to open file for reading: open /Users/vlad/.dropbox/events/1-d306-5dddcc8a: no such file or directory
2019-11-26 20:51:24.062 TRACE PACK_START Packing .dropbox/instance1/aggregation.dbx
2019-11-26 20:51:24.062 INFO PACK_END Packed .dropbox/instance1/aggregation.dbx (94208)
2019-11-26 20:51:24.062 TRACE PACK_START Packing .dropbox/instance1/config.dbx
2019-11-26 20:51:24.063 INFO PACK_END Packed .dropbox/instance1/config.dbx (203776)
2019-11-26 20:51:24.063 TRACE PACK_START Packing .dropbox/instance1/home.db
2019-11-26 20:51:24.064 INFO PACK_END Packed .dropbox/instance1/home.db (623616)
2019-11-26 20:51:24.064 TRACE PACK_START Packing .dropbox/instance1/icon.db
2019-11-26 20:51:24.175 DEBUG CHUNK_CACHE Skipped chunk 14a9344868c80dac492160cb88365aebb95f8c53f2b33ca8e4b3ae6cc14065bf in cache
2019-11-26 20:51:24.175 INFO UPLOAD_PROGRESS Skipped chunk 2 size 3123265, 2.98MB/s 00:02:06 0.7%
2019-11-26 20:51:24.184 DEBUG CHUNK_CACHE Skipped chunk d8b41e3212a001582be44e9a8c6b6ba72d8c3c471ddb901051974003a9a585f5 in cache
2019-11-26 20:51:24.184 INFO UPLOAD_PROGRESS Skipped chunk 3 size 1846057, 4.74MB/s 00:01:19 1.2%
2019-11-26 20:51:24.192 DEBUG CHUNK_CACHE Skipped chunk 05bf064217ea17c6e9158beea95cd0a561c2b3a8951a11370193b2989c693e9e in cache
2019-11-26 20:51:24.192 INFO UPLOAD_PROGRESS Skipped chunk 4 size 1651478, 6.31MB/s 00:00:59 1.6%
2019-11-26 20:51:24.201 DEBUG CHUNK_CACHE Skipped chunk d1352c5012e73fa89a3f522aa54fe7b63028189c3e5d79d0208ebac9ef72adc3 in cache
2019-11-26 20:51:24.201 INFO UPLOAD_PROGRESS Skipped chunk 5 size 2037536, 8.26MB/s 00:00:45 2.1%
2019-11-26 20:51:24.265 INFO PACK_END Packed .dropbox/instance1/icon.db (30296064)
2019-11-26 20:51:24.265 TRACE PACK_START Packing .dropbox/instance1/sync/nucleus.sqlite3-wal
2019-11-26 20:51:27.553 ERROR UPLOAD_CHUNK Failed to upload the chunk 42d16542730a704772063fed97f2a5c4b9d3a642aef8ed89cc8934939dbec880: sync /Volumes/BackupV/chunks/42/d16542730a704772063fed97f2a5c4b9d3a642aef8ed89cc8934939dbec880.sggmytcq.tmp: operation not supported

Is the storage a network share? It looks like the sync operation isn’t supported on a network share.

Is it possible for you to use a different protocol to access the network share, like sftp?

Thank you for getting back to me so fast. It is indeed a network share, which works just fine with 2.2.3 (without VSS, because of Catalina). I did a few tests with a brand-new repo, from scratch, only a few GB.

First I tested 2.3.0 with the network share as above (NFS v4):

 ~/Dropbox  duplicacy2.3.0 init -encrypt VMacTest /Volumes/BackupV
Enter storage password for /Volumes/BackupV:****************************************************************
The storage '/Volumes/BackupV' has already been initialized
Compression level: 100
Average chunk size: 4194304
Maximum chunk size: 16777216
Minimum chunk size: 1048576
Chunk seed: 897fe215233abffb006661497326c230eb1428bf36070980af629a7530d69a2f
/Users/vlad/Dropbox will be backed up to /Volumes/BackupV with id VMacTest

 ~/Dropbox  duplicacy2.3.0 -log backup -stats -vss
2019-11-27 21:15:05.471 INFO STORAGE_SET Storage set to /Volumes/BackupV
Enter storage password:****************************************************************
2019-11-27 21:15:11.547 INFO BACKUP_START No previous backup found
2019-11-27 21:15:12.075 INFO VSS_DONE Shadow copy created and mounted at /tmp/snp_064817310
2019-11-27 21:15:12.075 INFO BACKUP_INDEXING Indexing /Users/vlad/Dropbox
2019-11-27 21:15:12.075 INFO SNAPSHOT_FILTER Parsing filter file /Users/vlad/Dropbox/.duplicacy/filters
2019-11-27 21:15:12.075 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2019-11-27 21:15:14.230 ERROR UPLOAD_CHUNK Failed to upload the chunk 2df7b5a5b8932872c6c2b035c7264f8caf0aa14c6bdf5ec39c16341372028e58: sync /Volumes/BackupV/chunks/2d/f7b5a5b8932872c6c2b035c7264f8caf0aa14c6bdf5ec39c16341372028e58.cmjpxoil.tmp: operation not supported
2019-11-27 21:15:14.231 INFO INCOMPLETE_SAVE Incomplete snapshot saved to /Users/vlad/Dropbox/.duplicacy/incomplete

This failed, so I tested an SFTP connection to the same server hosting the NFS share, and that works just fine. However, the speed is much slower than the NFS share (4-5MB/s):

 ~/Dropbox  duplicacy2.3.0 init mywork sftp://vlad@192.168.1.11/test
Enter the path of the private key file:/Users/vlad/.ssh/ansible_rsa
The storage 'sftp://vlad@192.168.1.11/test' has already been initialized
Compression level: 100
Average chunk size: 4194304
Maximum chunk size: 16777216
Minimum chunk size: 1048576
Chunk seed: 6475706c6963616379
/Users/vlad/Dropbox will be backed up to sftp://vlad@192.168.1.11/test with id mywork

 ~/Dropbox  duplicacy2.3.0 -log backup -stats -vss
2019-11-27 21:21:54.647 INFO STORAGE_SET Storage set to sftp://vlad@192.168.1.11/test
2019-11-27 21:21:55.545 INFO BACKUP_START No previous backup found
2019-11-27 21:21:55.914 INFO VSS_DONE Shadow copy created and mounted at /tmp/snp_213689641
2019-11-27 21:21:55.914 INFO BACKUP_INDEXING Indexing /Users/vlad/Dropbox
2019-11-27 21:21:55.914 INFO SNAPSHOT_FILTER Parsing filter file /Users/vlad/Dropbox/.duplicacy/filters
2019-11-27 21:21:55.914 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2019-11-27 21:21:56.552 INFO BACKUP_LIST Listing all chunks
2019-11-27 21:21:57.217 INFO UPLOAD_PROGRESS Uploaded chunk 1 size 1783321, 1.70MB/s 00:27:40 0.0%
2019-11-27 21:21:57.375 INFO UPLOAD_PROGRESS Uploaded chunk 2 size 1403112, 3.04MB/s 00:15:29 0.1%
2019-11-27 21:21:58.713 INFO UPLOAD_PROGRESS Uploaded chunk 3 size 9941349, 6.26MB/s 00:07:30 0.4%
2019-11-27 21:22:01.014 INFO UPLOAD_PROGRESS Uploaded chunk 4 size 16478452, 5.65MB/s 00:08:16 0.9%
2019-11-27 21:22:02.222 INFO UPLOAD_PROGRESS Uploaded chunk 5 size 6997979, 5.82MB/s 00:08:00 1.2%
2019-11-27 21:22:03.711 INFO UPLOAD_PROGRESS Uploaded chunk 6 size 4435348, 5.59MB/s 00:08:19 1.3%
2019-11-27 21:22:03.907 INFO UPLOAD_PROGRESS Uploaded chunk 7 size 1100142, 5.74MB/s 00:08:05 1.4%
2019-11-27 21:22:04.444 INFO UPLOAD_PROGRESS Uploaded chunk 8 size 3280025, 5.41MB/s 00:08:34 1.5%
2019-11-27 21:22:04.943 INFO UPLOAD_PROGRESS Uploaded chunk 9 size 2192313, 5.68MB/s 00:08:10 1.6%
2019-11-27 21:22:05.766 INFO UPLOAD_PROGRESS Uploaded chunk 10 size 3614789, 5.43MB/s 00:08:32 1.7%
2019-11-27 21:22:07.116 INFO UPLOAD_PROGRESS Uploaded chunk 11 size 5937934, 4.96MB/s 00:09:19 1.9%

And lastly, I tested again with the old version (2.2.3) just to get a sense of speed, and it is much faster (25-30MB/s):

 ~/Dropbox  duplicacy2.2.3 init -encrypt VMacTest /Volumes/BackupV
Enter storage password for /Volumes/BackupV:****************************************************************
The storage '/Volumes/BackupV' has already been initialized
Compression level: 100
Average chunk size: 4194304
Maximum chunk size: 16777216
Minimum chunk size: 1048576
Chunk seed: 897fe215233abffb006661497326c230eb1428bf36070980af629a7530d69a2f
/Users/vlad/Dropbox will be backed up to /Volumes/BackupV with id VMacTest
 
 ✘  ~/Dropbox  duplicacy -log backup -stats
2019-11-27 21:25:19.820 INFO STORAGE_SET Storage set to /Volumes/BackupV
2019-11-27 21:25:27.443 INFO BACKUP_START No previous backup found
2019-11-27 21:25:27.443 INFO BACKUP_INDEXING Indexing /Users/vlad/Dropbox
2019-11-27 21:25:27.443 INFO SNAPSHOT_FILTER Parsing filter file /Users/vlad/Dropbox/.duplicacy/filters
2019-11-27 21:25:27.443 INFO SNAPSHOT_FILTER Loaded 0 include/exclude pattern(s)
2019-11-27 21:25:29.288 INFO UPLOAD_PROGRESS Uploaded chunk 1 size 2145302, 1.02MB/s 00:45:59 0.0%
2019-11-27 21:25:29.312 INFO UPLOAD_PROGRESS Skipped chunk 2 size 1212810, 1.60MB/s 00:29:22 0.1%
2019-11-27 21:25:29.328 INFO UPLOAD_PROGRESS Skipped chunk 3 size 2007459, 2.56MB/s 00:18:22 0.1%
2019-11-27 21:25:29.637 INFO UPLOAD_PROGRESS Skipped chunk 4 size 15443875, 9.92MB/s 00:04:43 0.7%
2019-11-27 21:25:29.643 INFO UPLOAD_PROGRESS Skipped chunk 5 size 5817797, 12.70MB/s 00:03:41 0.8%
2019-11-27 21:25:29.701 INFO UPLOAD_PROGRESS Skipped chunk 6 size 4784606, 14.98MB/s 00:03:07 1.0%
2019-11-27 21:25:29.720 INFO UPLOAD_PROGRESS Skipped chunk 7 size 1220246, 15.56MB/s 00:03:00 1.1%
2019-11-27 21:25:29.794 INFO UPLOAD_PROGRESS Skipped chunk 8 size 12561249, 21.55MB/s 00:02:10 1.5%
2019-11-27 21:25:29.799 INFO UPLOAD_PROGRESS Skipped chunk 9 size 1150653, 22.10MB/s 00:02:06 1.5%
2019-11-27 21:25:29.804 INFO UPLOAD_PROGRESS Skipped chunk 10 size 1266680, 22.70MB/s 00:02:03 1.6%
2019-11-27 21:25:29.982 INFO UPLOAD_PROGRESS Skipped chunk 11 size 16777216, 30.70MB/s 00:01:30 2.1%
2019-11-27 21:25:29.998 INFO UPLOAD_PROGRESS Skipped chunk 12 size 2326396, 31.81MB/s 00:01:27 2.2%
2019-11-27 21:25:30.044 INFO UPLOAD_PROGRESS Skipped chunk 13 size 1989702, 21.84MB/s 00:02:07 2.3%
2019-11-27 21:25:30.128 INFO UPLOAD_PROGRESS Skipped chunk 14 size 2426828, 22.61MB/s 00:02:02 2.4%
2019-11-27 21:25:30.179 INFO UPLOAD_PROGRESS Skipped chunk 15 size 1810704, 23.19MB/s 00:01:59 2.4%
2019-11-27 21:25:30.192 INFO UPLOAD_PROGRESS Skipped chunk 16 size 1895979, 23.79MB/s 00:01:56 2.5%
2019-11-27 21:25:32.170 INFO UPLOAD_PROGRESS Uploaded chunk 17 size 10194482, 16.22MB/s 00:02:50 2.8%
2019-11-27 21:25:32.196 INFO UPLOAD_PROGRESS Skipped chunk 18 size 2982936, 16.79MB/s 00:02:44 2.9%
2019-11-27 21:25:32.227 INFO UPLOAD_PROGRESS Skipped chunk 19 size 4684822, 17.68MB/s 00:02:35 3.1%
2019-11-27 21:25:32.276 INFO UPLOAD_PROGRESS Skipped chunk 20 size 3538023, 18.36MB/s 00:02:29 3.2%
2019-11-27 21:25:32.308 INFO UPLOAD_PROGRESS Skipped chunk 21 size 2850091, 18.90MB/s 00:02:25 3.3%

In conclusion, for now I will stick with duplicacy 2.2.3 until 2.3.0 works on Catalina with a network share.

When it says “Skipped chunk”, the chunk isn’t actually uploaded, but the speed is still calculated as if the chunk were uploaded, so the numbers there do not reflect the actual transfer speed.

The performance of NFS and SFTP should be similar.


That I didn’t know. I had wrongly assumed SFTP is slower due to encryption overhead, but I redid the tests and it is actually slightly faster in my case. Still, some error was introduced in 2.3.0, right?

There was no code change to the file backend (duplicacy_filestorage.go), but version 2.3.0 was built with Go 1.12. I’m not sure if that is the reason.

I’m confirming that I experience the same issue with an SMB mount to a NAS from my laptops. In the destination, I see the .tmp file, as well as a file without the .{random}.tmp extension. The two files differ, so it doesn’t look like I should mess with either of them.

From the client experiencing the issue, I’m on OS X Mojave, and this was working a couple weeks ago with what I presume was 2.2.3. The current version in ~/.duplicacy-web/bin is 2.3.0.

Are you seeing the same error message? That is, the failing operation is the sync call.

If you are seeing two files with the same hex name but one with the .tmp suffix, it could be the move operation that was failing. You can delete the .tmp one - both files should contain the same chunk, but they were encrypted with different random keys.

Yes, I see the same “operation not supported” message. Though I have full permission to the destination folder, so I don’t think it’s actually a permission issue rather than an unsupported call.

The error I receive:

2019-12-10 19:27:16.406 ERROR UPLOAD_CHUNK Failed to upload the chunk 8c394bf54eb6771037c0de9553c0a024f182df82464c15361252f955631d1a98: sync /Volumes/storage/Backups/duplicacy/chunks/8c/394bf54eb6771037c0de9553c0a024f182df82464c15361252f955631d1a98.xhbuzpnv.tmp: operation not supported
Failed to upload the chunk 8c394bf54eb6771037c0de9553c0a024f182df82464c15361252f955631d1a98: sync /Volumes/storage/Backups/duplicacy/chunks/8c/394bf54eb6771037c0de9553c0a024f182df82464c15361252f955631d1a98.xhbuzpnv.tmp: operation not supported

I’ll try cleaning up all of the .tmp files in the destination directory, and see if that changes anything. The big concern for me is whether or not the .tmp files contain any data that could be missing in the other files, and therefore could be useful for restore at some point.

After renaming the existing .{random}.tmp file, a new .{random}.tmp file appeared, and I received the same error message. I have confirmed, though, that the destination file does not exist yet on this client: the .{random}.tmp file has no counterpart with the plain hash name and no .tmp extension.

I can create a file, and rename a file from the command line, on the client I’m backing up from, in the destination folder used by Duplicacy.

$ echo "foobarbaz" > /Volumes/storage/Backups/duplicacy/chunks/8c/DELETEME.tmp
$ mv /Volumes/storage/Backups/duplicacy/chunks/8c/DELETEME.tmp /Volumes/storage/Backups/duplicacy/chunks/8c/DELETEME
$ ls -ahl /Volumes/storage/Backups/duplicacy/chunks/8c/DEL*
-rwx------ 1 tom 10 Dec 10 19:40 /Volumes/storage/Backups/duplicacy/chunks/8c/DELETEME

I’m not too sure what’s happening here at this point. I’ve tried verbose and debug mode, and the only error message received is

2019-12-10 19:39:17.284 ERROR UPLOAD_CHUNK Failed to upload the chunk 8c394bf54eb6771037c0de9553c0a024f182df82464c15361252f955631d1a98: sync /Volumes/storage/Backups/duplicacy/chunks/8c/394bf54eb6771037c0de9553c0a024f182df82464c15361252f955631d1a98.uukuqxas.tmp: operation not supported

From what I can find, this seems to be the reason.

It looks like this might be an incompatibility between macOS and most SMB servers (e.g., Samba <4.8) introduced by changes in Go 1.12.

Go 1.12 release notes: Go 1.12 Release Notes - The Go Programming Language

File.Sync now uses F_FULLFSYNC on macOS to correctly flush the file contents to permanent storage. This may cause the method to run more slowly than in previous releases.

@372800b839af73b7fc0b what is the NAS in this instance? It seems Samba didn’t add support for fullsync until Samba 4.8, so if you can upgrade Samba, that might be a potential fix.


That seems like a likely candidate. But I’m running Samba 4.10, and I have fruit:advertise_fullsync = true enabled in my Samba config with no change.

Either way, since the comment above says there was no change to the file code and the only notable change is Go 1.12, and other utilities report the same issue, this seems like a decent place to start looking for a fix.

@gchen It looks like anywhere a .Sync() is called on a file descriptor, the error check should contain code that attempts to see if Sync() is supported by the platform. If you take contributions from the community, I could probably spend some time on Thursday evening grepping through the CLI source and adding something simple.

Alternatively, we could do something I’ve had to do previously and break low-level file handling code out into platform-specific modules. I’ve got one package where I’ve done something like filestuff.go and filestuff_darwin.go, because I use O_DIRECT flags on everything except OS X.

Indeed the server I first tested on was Ubuntu 18.04 which has Samba 4.7.6+dfsg~ubuntu-0ubuntu2.1.
However, I just tested on a Debian 10 (Buster) which has Samba 4.9.5+dfsg-5+deb10u1, and I have the same issue.


Are you sure that’s the correct setting? It looks like some earlier patches for Samba may have used that, but based on the documentation I can find on samba.org the setting to advertise fullsync is fruit:time machine = yes (reference).

What about after setting fruit:time machine = yes on the share?

Still not working.

 cat /etc/samba/smb.conf
 ...
 [Docs]
 path = /home/share
 public = no
 valid users= vlad @sambashare
 force group = sambashare
 browseable = yes
 writable = yes
 create mode = 0664
 directory mode = 0775
 vfs objects = catia fruit streams_xattr
 fruit:time machine = yes
 ...

 $ duplicacy2.3.0 init VMac /Volumes/Docs
 Failed to configure the storage: sync /Volumes/Docs/config.stvxkmap.tmp: operation not supported
 
 $ duplicacy2.2.3 init VMac /Volumes/Docs
 /Users/vlad/Dropbox will be backed up to /Volumes/Docs with id VMac

Interesting. I set fruit:time machine = yes on the share, and it seems to work. @vlad did you restart or stop and start the smbd service? Using the stop/start trick worked for me, but restart didn’t seem to.


@372800b839af73b7fc0b, yes, if you can submit a PR to fix this it will be highly appreciated.

That is very strange. Stop and start seems to work just fine.
I can now confirm that with this setting duplicacy 2.3.0 works on a samba storage.


Sorry to pull up an old thread but I’m having this issue as well. Unfortunately I’m running a Windows server so I’m not sure it’s even possible to add time machine support.

Are my only options to run duplicacy 2.2.3 or run samba through WSL?

This has been fixed by Fix "Failed to upload the chunk ... sync ...: operation not supported" by fbarthez · Pull Request #590 · gilbertchen/duplicacy · GitHub, which has been merged and will be included in a new CLI release available later tonight.