Exit status 101?

I am getting exit status 101 when copying one storage to another:

Options: [-log -d copy -from GDsnapshots -to GDrive01 -threads 4]
2022-07-09 18:10:40.044 INFO STORAGE_SET Source storage set to gcd://.duplicacy-snapshots
2022-07-09 18:10:40.044 DEBUG PASSWORD_ENV_VAR Reading the environment variable DUPLICACY_GDSNAPSHOTS_GCD_TOKEN
2022-07-09 18:10:41.498 DEBUG PASSWORD_ENV_VAR Reading the environment variable DUPLICACY_GDSNAPSHOTS_PASSWORD
2022-07-09 18:10:42.280 TRACE CONFIG_ITERATIONS Using 16384 iterations for key derivation
2022-07-09 18:10:42.469 DEBUG STORAGE_NESTING Chunk read levels: [1], write level: 1
2022-07-09 18:10:42.469 INFO CONFIG_INFO Compression level: 100
2022-07-09 18:10:42.469 INFO CONFIG_INFO Average chunk size: 4194304
2022-07-09 18:10:42.469 INFO CONFIG_INFO Maximum chunk size: 16777216
2022-07-09 18:10:42.469 INFO CONFIG_INFO Minimum chunk size: 1048576
2022-07-09 18:10:42.469 TRACE CONFIG_INFO File chunks are encrypted
...
2022-07-09 18:10:42.469 TRACE CONFIG_INFO Metadata chunks are encrypted
2022-07-09 18:10:42.472 INFO STORAGE_SET Destination storage set to gcd://.duplicacy
2022-07-09 18:10:42.473 DEBUG PASSWORD_ENV_VAR Reading the environment variable DUPLICACY_GDRIVE01_GCD_TOKEN
2022-07-09 18:10:43.869 DEBUG PASSWORD_ENV_VAR Reading the environment variable DUPLICACY_GDRIVE01_PASSWORD
2022-07-09 18:10:44.556 TRACE CONFIG_ITERATIONS Using 16384 iterations for key derivation
2022-07-09 18:10:44.762 DEBUG STORAGE_NESTING Chunk read levels: [1], write level: 1
2022-07-09 18:10:44.854 INFO CONFIG_INFO Compression level: 100
2022-07-09 18:10:44.854 INFO CONFIG_INFO Average chunk size: 4194304
2022-07-09 18:10:44.854 INFO CONFIG_INFO Maximum chunk size: 16777216
2022-07-09 18:10:44.854 INFO CONFIG_INFO Minimum chunk size: 1048576
...
2022-07-09 19:33:54.683 DEBUG SNAPSHOT_COPY Copying chunk 1406a1cd59843212520d0f5b561066594145f309a59e3de66376a0d128862ab7 to 7d62dbb796d07dd7b06d49d1cbb07a2d0569ac68bc273b07a47e82f094939fca
2022-07-09 19:33:54.720 DEBUG CHUNK_UPLOAD Chunk fe33e8e9667e885f945cd28a2cb2488f367c853d593920348220152545867c45 has been uploaded
2022-07-09 19:33:54.720 INFO COPY_PROGRESS Copied chunk fe33e8e9667e885f945cd28a2cb2488f367c853d593920348220152545867c45 (1633/445468) 1.69MB/s 15 days 14:27:00 0.4%
2022-07-09 19:33:55.009 DEBUG CHUNK_UPLOAD Chunk 0dcbee9efbcc8682cb75f9c9e9a409b126ba2e368effca2ea6bb51dfb428e124 has been uploaded
2022-07-09 19:33:55.009 INFO COPY_PROGRESS Copied chunk 0dcbee9efbcc8682cb75f9c9e9a409b126ba2e368effca2ea6bb51dfb428e124 (1632/445468) 1.69MB/s 15 days 14:42:07 0.4%
2022-07-09 19:33:55.621 DEBUG CHUNK_UPLOAD Chunk 7a9db077c7d526de28898d18a326a3ecdb52fde18398691908f03e7179cb36aa has been uploaded
2022-07-09 19:33:55.621 INFO COPY_PROGRESS Copied chunk 7a9db077c7d526de28898d18a326a3ecdb52fde18398691908f03e7179cb36aa (1634/445468) 1.69MB/s 15 days 14:17:16 0.4%
exit status 101

Both storages are on the same GDrive; GDsnapshots (the source) uses a folder in the root of the drive, while GDrive01 (the destination) uses the application data folder of a service account. The copy operation works for a while (a few thousand chunks), then just dies with exit status 101. So there are no issues with the storage setup/authorization; something goes wrong afterwards.

What could be the reason for such a failure? It is repeatable: the copy fails after running for 1-2 hours and several thousand chunks.

Is duplicacy being killed due to memory exhaustion or a runtime limit?

What is in the duplicacy_web log? What is in /var/log/messages on your host system?

Hm, nothing interesting in /var/log/messages, but duplicacy_web.log shows this:

2022/07/09 21:36:20 CLI stderr: runtime error: invalid memory address or nil pointer dereference
goroutine 616 [running]:
runtime/debug.Stack()
        /usr/local/go/src/runtime/debug/stack.go:24 +0x65
runtime/debug.PrintStack()
        /usr/local/go/src/runtime/debug/stack.go:16 +0x19
github.com/gilbertchen/duplicacy/src.CatchLogException()
        /home/xxxxxx/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_log.go:233 +0xad
panic({0x10101e0, 0x1bc1940})
        /usr/local/go/src/runtime/panic.go:838 +0x207
github.com/gilbertchen/duplicacy/src.(*ChunkOperator).UploadChunk(0xc007272280, 0xc0005d1d48?, {0x1, {0xc003ae4040, 0x40}, {0xc006fd13a0, 0x20}, 0x521, {0x0, 0x0}, ...})
        /home/xxxxxx/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_chunkoperator.go:494 +0x11b
github.com/gilbertchen/duplicacy/src.(*ChunkOperator).Run(0xd0d000000000003?, 0xc000055784?, {0x1, {0xc003ae4040, 0x40}, {0xc006fd13a0, 0x20}, 0x521, {0x0, 0x0}, ...})
        /home/xxxxxx/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_chunkoperator.go:183 +0x1018
github.com/gilbertchen/duplicacy/src.CreateChunkOperator.func1(0x0?)
        /home/xxxxxx/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_chunkoperator.go:88 +0xe5
created by github.com/gilbertchen/duplicacy/src.CreateChunkOperator
        /home/xxxxxx/go/src/github.com/gilbertchen/duplicacy/src/duplicacy_chunkoperator.go:83 +0x1e9
2022/07/09 21:36:20 The CLI executable returned an error: exit status 101, exit code: 101

There shouldn’t be any issue with available memory: there is about 4G of free RAM plus 16G of almost-empty swap, and the duplicacy CLI doesn’t show memory usage above 5%.

I have to mention, though, that this CLI is the latest and greatest from the master branch (fc2386f9ccd7cf7af7829c29f0a409e4b7527566?), not the official 2.7.2, as IIRC 2.7.2 still doesn’t support appDataFolder.

Master branch is not stable. You can track down the bug and submit a PR :slight_smile:

I don’t really have the time or inclination for an in-depth analysis, but looking at the trace and the corresponding code, the failure happens in duplicacy_chunkoperator.go here:

func (operator *ChunkOperator) UploadChunk(threadIndex int, task ChunkTask) bool {

	chunk := task.chunk
	chunkID := task.chunkID
	chunkSize := chunk.GetLength()

	// For a snapshot chunk, verify that its chunk id is correct
	if task.isMetadata {
		chunk.VerifyID()
	}

	if task.isMetadata && operator.storage.IsCacheNeeded() { // < missing operator.snapshotCache check?
		// Save a copy to the local snapshot.
PANIC!>	chunkPath, exist, _, err := operator.snapshotCache.FindChunk(threadIndex, chunkID, false)
		if err != nil {
			LOG_WARN("UPLOAD_CACHE", "Failed to find the cache path for the chunk %s: %v", chunkID, err)
		} else if exist {
			LOG_DEBUG("CHUNK_CACHE", "Chunk %s already exists in the snapshot cache", chunkID)
		} else if err = operator.snapshotCache.UploadFile(threadIndex, chunkPath, chunk.GetBytes()); err != nil {
			LOG_WARN("UPLOAD_CACHE", "Failed to save the chunk %s to the snapshot cache: %v", chunkID, err)
		} else {
			LOG_DEBUG("CHUNK_CACHE", "Chunk %s has been saved to the snapshot cache", chunkID)
		}
	}

So operator.snapshotCache is nil? Other similar snippets that use operator.snapshotCache check for nil (e.g. the beginning of DownloadChunk), so perhaps the fix is as simple as adding && operator.snapshotCache != nil to the corresponding checks in UploadChunk and at the bottom of DownloadChunk. I don’t know the expected semantics of this data structure, though, so it is possible that the nil value is merely a symptom of a problem elsewhere.
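Concretely, the only change I have in mind is the condition at the top of that block, something like this (untested, just to show what I mean):

if task.isMetadata && operator.storage.IsCacheNeeded() && operator.snapshotCache != nil {
	// ... same cache-saving code as above, now only reached when the cache actually exists
}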

I’ll leave it to @gchen to take a look.

@sevimo you’re right. operator.snapshotCache is always nil during a copy operation, so this bug shows up every time a metadata chunk is copied. It wasn’t caught by the system tests because only the file storage is used there, and the file storage doesn’t have the cache enabled.
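For anyone following along, the crash itself is just standard Go behaviour: calling a method on a nil pointer is fine until the method body dereferences the receiver. A standalone illustration with made-up names (not duplicacy’s actual types):

package main

// Calling FindChunk on a nil *snapshotCache compiles and starts fine; the
// panic only happens when the method body dereferences the receiver, which
// is exactly the "invalid memory address or nil pointer dereference" above.

type snapshotCache struct {
	dir string
}

func (c *snapshotCache) FindChunk(id string) string {
	return c.dir + "/" + id // dereferences c, so a nil receiver panics here
}

func main() {
	var cache *snapshotCache // nil, like operator.snapshotCache during a copy
	cache.FindChunk("abc")   // runtime error: invalid memory address or nil pointer dereference
}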

I’ve checked in a fix: “Fixed a bug that referenced uninitialized operator.snapshotCache” (gilbertchen/duplicacy@54952ce on GitHub).

Thanks @gchen, I’ll see if it fixes it. But before I do, take a look at line 448 in the same file: there is another use of operator.snapshotCache without an (obvious) nil check. Please check whether the same guard is needed there as well.

Cheers.

P.S. It probably also makes sense to run some system tests with cloud backends; this is not exactly a marginal case :wink:

That line is ok, since it is guarded by the condition len(cachedPath) > 0, and cachedPath can only be set to a non-empty value at line 301 if operator.snapshotCache is not nil.
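In other words, the non-empty path doubles as the nil check. A tiny standalone sketch of that pattern, with hypothetical names rather than the actual code:

package main

import "fmt"

type snapshotCache struct {
	root string
}

// cachedPath only ever becomes non-empty inside the nil check, so the later
// len(cachedPath) > 0 test implies the cache is safe to use.
func lookup(cache *snapshotCache, chunkID string) {
	cachedPath := ""
	if cache != nil {
		cachedPath = cache.root + "/" + chunkID // the only assignment, guarded by the nil check
	}
	if len(cachedPath) > 0 {
		fmt.Println("using cache at", cache.root) // cache cannot be nil here
	}
}

func main() {
	lookup(nil, "abc")                          // no output, no panic
	lookup(&snapshotCache{root: "/tmp"}, "abc") // prints the cache root
}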

So far so good: the updated version has been running the copy for about a day with no issues.