Hello everyone,
Please describe what you are doing to trigger the bug:
Running a backup
Please describe what you expect to happen (but doesn’t):
No out-of-memory error, since there is plenty of free memory
Please describe what actually happens (the wrong behaviour):
After some time (seconds or minutes) I get this in the log:
2024/07/03 00:11:43 The CLI executable returned an error: exit status 2, exit code: 2
2024/07/03 00:11:43 CLI stderr: runtime: out of memory: cannot allocate 268435456-byte block (1706000384 in use)
fatal error: out of memory
goroutine 580695 [running]:
runtime.throw({0xd9095c, 0xd})
/usr/local/go/src/runtime/panic.go:1047 +0x4c fp=0x2bbd6a0 sp=0x2bbd68c pc=0x4f3d0
runtime.(*mcache).allocLarge(0x75f5d2e0, 0x10000000, 0x1)
/usr/local/go/src/runtime/mcache.go:235 +0x218 fp=0x2bbd6c8 sp=0x2bbd6a0 pc=0x272cc
runtime.mallocgc(0x10000000, 0x0, 0x0)
/usr/local/go/src/runtime/malloc.go:1029 +0x6b0 fp=0x2bbd708 sp=0x2bbd6c8 pc=0x1cc58
runtime.growslice(0xc2ab60, {0x0, 0x0, 0x0}, 0x10000000)
/usr/local/go/src/runtime/slice.go:284 +0x454 fp=0x2bbd730 sp=0x2bbd708 pc=0x6b480
bytes.growSlice({0x14900000, 0x7ff1807, 0x8000000}, 0x10003)
/usr/local/go/src/bytes/buffer.go:240 +0x98 fp=0x2bbd768 sp=0x2bbd730 pc=0x12d2ec
bytes.(*Buffer).grow(0x2b28000, 0x10003)
/usr/local/go/src/bytes/buffer.go:142 +0x140 fp=0x2bbd78c sp=0x2bbd768 pc=0x12cc78
bytes.(*Buffer).Write(0x2b28000, {0x21182000, 0x10003, 0x20000})
/usr/local/go/src/bytes/buffer.go:170 +0x54 fp=0x2bbd7a0 sp=0x2bbd78c pc=0x12ce64
github.com/klauspost/compress/zstd.(*Encoder).nextBlock.func1.2()
/Users/gchen/zincbox/go/pkg/mod/github.com/klauspost/compress@v1.16.3/zstd/encoder.go:364 +0x1d0 fp=0x2bbd7ec sp=0x2bbd7a0 pc=0x48f628
runtime.goexit()
[...]
→ QNAP 431P2 with 8GB RAM
→ duplicacy_linux_arm_3.2.3
I initialized the repository with:
[/share/external/DEV3303_1] # /share/CACHEDEV1_DATA/.qpkg/Duplicacy/duplicacy_linux_arm_3.2.3 -d -log init -c 128M -zstd-level fastest 24TB /share/external/DEV3303_1/
2024-07-02 13:49:54.732 INFO CONFIG_INFO Compression level: 200
2024-07-02 13:49:54.732 INFO CONFIG_INFO Average chunk size: 134217728
2024-07-02 13:49:54.732 INFO CONFIG_INFO Maximum chunk size: 536870912
2024-07-02 13:49:54.732 INFO CONFIG_INFO Minimum chunk size: 33554432
2024-07-02 13:49:54.732 INFO CONFIG_INFO Chunk seed: 6475706c6963616379
2024-07-02 13:49:54.732 TRACE CONFIG_INFO Hash key: 6475706c6963616379
2024-07-02 13:49:54.732 TRACE CONFIG_INFO ID key: 6475706c6963616379
2024-07-02 13:49:54.733 INFO REPOSITORY_INIT /share/external/DEV3303_1 will be backed up to /share/external/DEV3303_1/ with id 24TB
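If I read the CONFIG_INFO output right, -c 128M only sets the average chunk size, and the minimum/maximum default to a quarter and four times of that (my understanding of the defaults, not something I set explicitly):

average = 128 MiB = 134217728 bytes
maximum = 4 × average = 536870912 bytes (512 MiB)
minimum = average / 4 = 33554432 bytes (32 MiB)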
Then I ran the backup from the web UI with no extra settings:
2024/07/02 23:51:17 Running /share/CACHEDEV1_DATA/.qpkg/Duplicacy/.duplicacy-web/bin/duplicacy_linux_arm_3.2.3 [-log backup -storage 24TB -stats]
2024/07/02 23:51:17 Set current working directory to /share/CACHEDEV1_DATA/.qpkg/Duplicacy/.duplicacy-web/repositories/localhost/0
The data is about 16 TB, with file sizes ranging from a few KB to 100 GB.
The source is the QNAP RAID-5 volume and the target is a 24 TB USB drive (hence the storage name).
What am I doing wrong?
Is it really trying to use more than 8GB?
[/mnt] # free
             total       used       free     shared    buffers
Mem:       8291848    1621060    6670788     260104     112828
Swap:      7232760       7980    7224780
Total:    15524608    1629040   13895568
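For what it's worth, the crash message above says only about 1.7 GB (1706000384 bytes) was in use when the 256 MB allocation failed, and duplicacy_linux_arm_3.2.3 is the 32-bit ARM build, so maybe the per-process address space rather than physical RAM is what runs out? I haven't verified that; I assume something like the following would show the binary type and the shell's limits (hypothetical commands, not from my session):

[/mnt] # file /share/CACHEDEV1_DATA/.qpkg/Duplicacy/duplicacy_linux_arm_3.2.3
[/mnt] # ulimit -a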
I also used a ramdisk to check whether using 6 GB of the 8 GB is possible without issues, and it worked: I copied a file onto the ramdisk and watched the memory usage in htop.
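The test was roughly along these lines (my approximation of the commands, not an exact transcript):

[/mnt] # mkdir -p /mnt/ramtest
[/mnt] # mount -t tmpfs -o size=6G tmpfs /mnt/ramtest
[/mnt] # dd if=/dev/zero of=/mnt/ramtest/big.bin bs=1M count=6000
[/mnt] # umount /mnt/ramtest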
Does anyone know why this is an issue?
Best
Me