Fatal error: out of memory - QNAP

Hello everyone,

Please describe what you are doing to trigger the bug:
Running a backup :slight_smile:

Please describe what you expect to happen (but doesn’t):
No out-of-memory error, since there is plenty of memory available

Please describe what actually happens (the wrong behaviour):
After some time (seconds or minutes) I get this in the log:

2024/07/03 00:11:43 The CLI executable returned an error: exit status 2, exit code: 2
2024/07/03 00:11:43 CLI stderr: runtime: out of memory: cannot allocate 268435456-byte block (1706000384 in use)
fatal error: out of memory


goroutine 580695 [running]:
runtime.throw({0xd9095c, 0xd})
        /usr/local/go/src/runtime/panic.go:1047 +0x4c fp=0x2bbd6a0 sp=0x2bbd68c pc=0x4f3d0
runtime.(*mcache).allocLarge(0x75f5d2e0, 0x10000000, 0x1)
        /usr/local/go/src/runtime/mcache.go:235 +0x218 fp=0x2bbd6c8 sp=0x2bbd6a0 pc=0x272cc
runtime.mallocgc(0x10000000, 0x0, 0x0)
        /usr/local/go/src/runtime/malloc.go:1029 +0x6b0 fp=0x2bbd708 sp=0x2bbd6c8 pc=0x1cc58
runtime.growslice(0xc2ab60, {0x0, 0x0, 0x0}, 0x10000000)
        /usr/local/go/src/runtime/slice.go:284 +0x454 fp=0x2bbd730 sp=0x2bbd708 pc=0x6b480
bytes.growSlice({0x14900000, 0x7ff1807, 0x8000000}, 0x10003)
        /usr/local/go/src/bytes/buffer.go:240 +0x98 fp=0x2bbd768 sp=0x2bbd730 pc=0x12d2ec
bytes.(*Buffer).grow(0x2b28000, 0x10003)
        /usr/local/go/src/bytes/buffer.go:142 +0x140 fp=0x2bbd78c sp=0x2bbd768 pc=0x12cc78
bytes.(*Buffer).Write(0x2b28000, {0x21182000, 0x10003, 0x20000})
        /usr/local/go/src/bytes/buffer.go:170 +0x54 fp=0x2bbd7a0 sp=0x2bbd78c pc=0x12ce64
github.com/klauspost/compress/zstd.(*Encoder).nextBlock.func1.2()
        /Users/gchen/zincbox/go/pkg/mod/github.com/klauspost/compress@v1.16.3/zstd/encoder.go:364 +0x1d0 fp=0x2bbd7ec sp=0x2bbd7a0 pc=0x48f628
runtime.goexit()

[...]

→ QNAP 431P2 with 8GB RAM
→ duplicacy_linux_arm_3.2.3

Started the repo with:

[/share/external/DEV3303_1] # /share/CACHEDEV1_DATA/.qpkg/Duplicacy/duplicacy_linux_arm_3.2.3 -d -log init -c 128M -zstd-level fastest 24TB /share/external/DEV3303_1/
2024-07-02 13:49:54.732 INFO CONFIG_INFO Compression level: 200
2024-07-02 13:49:54.732 INFO CONFIG_INFO Average chunk size: 134217728
2024-07-02 13:49:54.732 INFO CONFIG_INFO Maximum chunk size: 536870912
2024-07-02 13:49:54.732 INFO CONFIG_INFO Minimum chunk size: 33554432
2024-07-02 13:49:54.732 INFO CONFIG_INFO Chunk seed: 6475706c6963616379
2024-07-02 13:49:54.732 TRACE CONFIG_INFO Hash key: 6475706c6963616379
2024-07-02 13:49:54.732 TRACE CONFIG_INFO ID key: 6475706c6963616379
2024-07-02 13:49:54.733 INFO REPOSITORY_INIT /share/external/DEV3303_1 will be backed up to /share/external/DEV3303_1/ with id 24TB

Ran the backup with no extra settings from the UI.

2024/07/02 23:51:17 Running /share/CACHEDEV1_DATA/.qpkg/Duplicacy/.duplicacy-web/bin/duplicacy_linux_arm_3.2.3 [-log backup -storage 24TB -stats]
2024/07/02 23:51:17 Set current working directory to /share/CACHEDEV1_DATA/.qpkg/Duplicacy/.duplicacy-web/repositories/localhost/0

Data is about 16TB, with file sizes between a few KB and 100GB.
Source is the QNAP RAID-5 and target is a 24TB USB drive (hence the backup name).

What am I doing wrong?
Is it really trying to use more than 8GB?

[/mnt] # free
              total         used         free       shared      buffers
  Mem:      8291848      1621060      6670788       260104       112828
 Swap:      7232760         7980      7224780
Total:     15524608      1629040     13895568

I used a ramdisk to check whether using 6GB of the 8GB is possible without issues, and it worked: I copied a file onto the ramdisk and watched the usage in htop.

Does anyone know why this is an issue?

Best
Me :wink:

What version of the duplicacy CLI?

Depending on the number of files you are backing up, 8GB may not be enough.

duplicacy_linux_arm_3.2.3

But why is the number of files relevant?
Anyway, I have now tried with just 2 small text files (chunk size -c 128M) as well as one run with 10-50GB files. Just a second after starting:

2024/07/03 22:56:27 192.168.33.169:2010 POST /start_stop_backup
2024/07/03 22:56:27 Created log file /share/CACHEDEV1_DATA/.qpkg/Duplicacy/.duplicacy-web/logs/backup-20240703-225627.log
2024/07/03 22:56:27 Running /share/CACHEDEV1_DATA/.qpkg/Duplicacy/.duplicacy-web/bin/duplicacy_linux_arm_3.2.3 [-log backup -storage 24TB -stats]
2024/07/03 22:56:27 Set current working directory to /share/CACHEDEV1_DATA/.qpkg/Duplicacy/.duplicacy-web/repositories/localhost/0
2024/07/03 22:56:27 192.168.33.169:2010 POST /get_backup_status
2024/07/03 22:56:28 192.168.33.169:2010 POST /get_backup_status
2024/07/03 22:56:29 192.168.33.169:2010 POST /get_backup_status
2024/07/03 22:56:29 CLI stderr: runtime: out of memory: cannot allocate 1073741824-byte block (1215561728 in use)
fatal error: out of memory

The default chunk size is 4MB and you specified a chunk size of 128M when initializing the repository. Was that on purpose? I suspect that’s your problem.
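
If the 128M was not intentional, I would re-create the storage with the defaults. As far as I know the chunk parameters are baked into the storage config at init time, so this needs a storage without an existing config. Adapting your earlier init command (untested on my side), it would be the same line just without -c 128M:

/share/CACHEDEV1_DATA/.qpkg/Duplicacy/duplicacy_linux_arm_3.2.3 -d -log init -zstd-level fastest 24TB /share/external/DEV3303_1/

That gives you the default 4MB average chunk size.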

So it attempts to allocate a 1GB block and fails.
I’m not sure why it would want to allocate a 1GB contiguous block (@gchen?), but your system evidently does not have enough contiguous memory and fails to page out other stuff.
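
My guess, going by the trace: the zstd encoder writes the compressed chunk into a bytes.Buffer (the bytes.(*Buffer).Write / bytes.growSlice frames), and a bytes.Buffer roughly doubles its capacity when it needs to grow. With -c 128M your init log above shows a maximum chunk size of 536870912, so:

536870912 (max chunk) x 2 = 1073741824 = exactly the 1GB block the allocation failed on

and that comes on top of the ~1.2-1.7GB the process already holds according to the “in use” numbers in the errors.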

What happens if you specify a 256M chunk size? Will it attempt a 2GB allocation?

Also, I agree with @leftytennis: a 128MB chunk size seems high. What are you trying to accomplish?

Thank you for your info and feedback so far :slight_smile:

Just to answer this question:

-c 256M
2024/07/04 23:24:23 CLI stderr: runtime: out of memory: cannot allocate 1073741824-byte block (1215627264 in use)
-c 512M
2024/07/04 23:26:56 CLI stderr: runtime: out of memory: cannot allocate 322961408-byte block (1215823872 in use)
-c 1024M
init did not finish … it just hung

Strange results :slight_smile:

What are you trying to accomplish?

Well, I thought that it would correspond to the chunk size in the storage. Since I have mostly larger files (1GB+) and they are not really deduplicatable, a larger chunk size should reduce the CPU/HDD work for finding and linking chunks. And a folder with a lot of files was never a good thing to have :smiley:

But this larger chunk size seems not to be a good idea after all?

It looks like the implicit assumption is that the chunk size is “small” enough that allocating a bunch of them is not an issue, and that, on the contrary, allocating memory for a few of them at once is more efficient than allocating each one individually.
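
Rough numbers, just to illustrate (how many buffers are in flight at once is an implementation detail I have not checked): with the default 4MB average / 16MB maximum chunk size, even a few dozen chunk buffers plus their compression buffers stay in the low hundreds of MB. With 128MB average / 512MB maximum, a single chunk plus a doubling compression buffer can already add up to around 1.5GB, which lines up with the 1.2-1.7GB “in use” figures in your crash logs.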

(And a folder with a lot of files is no longer a problem with modern filesystems.)

Are you referring to non-compressible, non-deduplicatable, immutable files like media files? There is no reason to use duplicacy for them in the first place - you can just sync them to an immutable bucket. Bucket immutability will protect against bit rot and accidental deletions. Duplicacy is a 100% waste of resources in this case.

But if you still want to (e.g. to have everything in one place), sure, use it in the default configuration.

The only time I had to mess with the chunk size was with a Storj endpoint, to optimize performance and cost by making the average chunk size as close to the Storj segment size as possible, to reduce overhead.
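
For reference, Storj segments are 64MB, so that meant pushing the average chunk size up toward the segment size, something along the lines of (illustrative values, not my exact command):

duplicacy init -c 64M <snapshot-id> <storj-storage-url>

For a local USB target like yours there is no such constraint, so the defaults are the way to go.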

I see. Thank you. With the default 4M I have had no issues so far. :slight_smile:
I guess the topic can be considered closed :slight_smile:

Thank you everyone for your help :slight_smile:
See you next time :slight_smile:

Indeed. Not everything, but surely a large part. Most of my data is already compressed, either because it is a media file or because it has been run through some sort of compression (gz/zip/…) anyway. I could maybe use some zip-deduplication-friendly alignment, but my backup files rotate so “often” that I usually don’t care, and I have the space.

Therefore, you might be totally right and I may have chosen the wrong backup solution for my case.
Cheers!