Memory Usage

After the file list phase, do any of the block size settings or other preferences control memory use?

Testing with default settings, I was seeing about 100MB of RAM allocated during the file list phase, and then during the backup it stayed pretty flat at 450MB. That was more RAM than I was hoping to use on some clients.

The default average chunk size is 4MB, but it is the maximum chunk size (16 MB) that determines the size of buffers to be allocated, and there could be multiple buffers. If you set the average chunk size to 1MB when initializing the storage, the default maximum size will be 4MB, and that could reduce the memory footprint a bit.
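
As a rough illustration, assuming the standard CLI options (the snapshot ID and storage path below are placeholders), initializing a storage with a 1MB average chunk size would look something like this:

    # -c sets the average chunk size; the maximum defaults to 4x the average,
    # so a 1M average gives a 4M maximum. This can only be chosen at init time.
    duplicacy init -c 1M my-backups /mnt/backup-storage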

What’s the status of this issue? I have a 1.7TB backup via another provider’s software and want to switch to Duplicacy, but memory usage is definitely going to be important.

I haven’t had a chance to work on this. However, a 1.7 TB backup may not consume too much memory if the number of files isn’t huge. I know several customers who back up millions of files totaling more than 10 TB.

Any progress on this?

I have an 800GB backup I’m trying to perform on Ubuntu 16.04 with 4GB of RAM, and the backup is getting ‘Killed’.

Sorry no progress so far. I’ll be focusing on the new web-based GUI in the next two months and hope to tackle this problem after that.

First, thanks for writing Duplicacy. I’ve had to suffer through many slow, difficult-to-use backup solutions in the past, whereas Duplicacy is quick and pretty much effortless.

I recently ran my first backup to S3, using the normal (non-RSA) encryption and a small filters list, with Duplicacy 2.3.0 on a Linux x86-64 system. By around 25% through the backup, the process was using ~1.1GiB RAM. By the end, it was using ~1.5GiB of RAM. The system barely had enough memory to finish the job.

The backup command was simply duplicacy backup -stats -threads 8. These were the final stats for the backup:

    Files: 159003 total, 44,738M bytes; 159003 new, 44,738M bytes
    File chunks: 9093 total, 44,738M bytes; 9084 new, 44,721M bytes, 42,754M bytes uploaded
    Metadata chunks: 13 total, 51,830K bytes; 13 new, 51,830K bytes, 18,672K bytes uploaded
    All chunks: 9106 total, 44,788M bytes; 9097 new, 44,772M bytes, 42,772M bytes uploaded

Assuming that memory use scales roughly linearly with the number of files, I don’t understand how anyone could back up millions of files without 16GiB or more of RAM dedicated to the Duplicacy process alone. By my calculation this is ~10KiB of memory per backed-up file. Does this seem correct?
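
The arithmetic behind that estimate:

    1.5 GiB ≈ 1,572,864 KiB
    1,572,864 KiB / 159,003 files ≈ 9.9 KiB per file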

Is this problem supposed to have been fixed already? If not, is it on the roadmap to be fixed in the next few months? Should I expect the same memory use on every incremental backup and prune operation?

Thank you again, and thanks for the response. Let me know if I can give any more information.

The number of files is only one factor; the number of threads is another. Moreover, since Go is a garbage-collected language, more memory can be used than is strictly needed. Therefore, you can’t simply extrapolate the actual memory usage from a small data set.

Thank you for a quick response! If this is a partially GC-related issue, does setting GOGC to a lower-than-default value help, in your experience? Also, how about those other questions? I have some additional follow-ups but I don’t want to bombard you with a bunch of extra questions since I am sure you are busy, and if the answer is “improvements are coming” then I don’t need to waste your time with them. :slight_smile: Thanks!

I think GOGC should help, but I don’t know by how much.

The improvements on memory usage are planned but I haven’t really started working on it.

Questions are always welcome. For those related to memory usage, there isn’t a simple formula to predict it, so the best way is to try it out yourself.
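
For example, a run with a lower GOGC and fewer threads is a reasonable first experiment (the values below are illustrative starting points, not recommendations):

    # GOGC below its default of 100 makes the Go runtime collect garbage more
    # aggressively, trading CPU time for a smaller heap; fewer threads means
    # fewer in-flight chunk buffers.
    GOGC=50 duplicacy backup -stats -threads 2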

@gchen I am also running into a VirtualAlloc failure - see the exception stack attached.
hiavi-DuplicacyMemoryExceptionDuringBackupAfter32.2percent.txt (11.5 KB)

One odd thing was that the allocation attempt was for zero bytes.

    runtime: VirtualAlloc of 0 bytes failed with errno=1455
    fatal error: runtime: failed to commit pages

My repository is ~2TB with ~553K files spread across ~52K folders.

So, what are my options here? Are they just:

  1. Retry with DUPLICACY_ATTRIBUTE_THRESHOLD set to 1
  2. Break the repository into smaller subsets and back them up to the same storage one after the other.

Is that it?

That is an out-of-memory error. Yes, you can try option 1 first and if that doesn’t help then option 2.
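
For reference, the variable just needs to be set in the environment of the CLI process before the backup runs. In a Windows Command Prompt that would be roughly (the backup options are placeholders):

    set DUPLICACY_ATTRIBUTE_THRESHOLD=1
    duplicacy backup -stats

and on Linux/macOS:

    DUPLICACY_ATTRIBUTE_THRESHOLD=1 duplicacy backup -stats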

Just FYI

  • I tried option #1 on my Windows machine with 16GB of RAM, but still ran into the same issue.
  • Then I tried it on a MacBook Pro, also with 16GB of RAM, and it ran smoothly. In both cases I used the CLI.

In both cases I closed all other apps, but there were surely services running in the background, resulting in different amounts of available RAM.

Has anyone observed significant memory usage differences between different OS platforms?

Linux/Unix seems to be more conservative with memory.

Hello, I also just got this issue (7-9TB, 2+ million files, 4GB RAM, Debian). I am using the web UI. Where do I enter “DUPLICACY_ATTRIBUTE_THRESHOLD”? I tried adding it to the globals and to the options in the backup section, but this disables the backup process…

I am really happy with Duplicacy (it works perfectly on an 8GB RAM Debian machine)! Thank you very much!

There is no easy way to set the environment variable for the CLI from the web GUI. Your best option might be to divide the big backup job into several small ones, by using different sets of filters for each smaller job.
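
As a rough sketch, each smaller job would get its own filters file covering only part of the tree. For example, a job restricted to a photos directory might use patterns along these lines (the directory name is a placeholder; the exact pattern semantics are described in the include/exclude documentation):

    +photos/
    +photos/*
    -*

Another job would then include a different subset of directories, with all jobs pointing at the same storage.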

Ok, thank you for your answer. So I have to set up 10 different backups for the subfolders to split it up? Does this reduce the deduplication?

Is there any plan/timeline for fixing this? This problem has existed for over 4 years now. :neutral_face:

Thank you very much!

I’m planning a big rewrite of the backup engine and hope to get it done in 2 months.

Awesome! If you need testers feel free to send me a message! Thank you very much!

I too am having memory issues while backing up a large number of files, currently on Web Edition 1.5.0. I understand from a post a few years ago that you’re working on changing the architecture:

There is no need to load the entire file list into memory at once. My plan is to construct the file list on the fly and upload file list chunks as soon as they have been generated.
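
If I understand that correctly, the idea is roughly the following (my own sketch in Go, with made-up names rather than Duplicacy’s actual code): walk the repository, serialize entries as they are encountered, and hand off a file-list chunk for upload whenever the buffer reaches the chunk size, instead of accumulating the whole list first.

    package main

    import (
        "bytes"
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // Hypothetical target size for a file-list chunk.
    const chunkSize = 4 * 1024 * 1024

    // uploadChunk stands in for sending a completed file-list chunk to storage.
    func uploadChunk(data []byte) {
        fmt.Printf("uploading file-list chunk of %d bytes\n", len(data))
    }

    func main() {
        var buf bytes.Buffer
        filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            info, err := d.Info()
            if err != nil {
                return err
            }
            // Serialize one file entry and flush a chunk when the buffer is full,
            // so the full file list is never held in memory at once.
            fmt.Fprintf(&buf, "%s\t%d\n", path, info.Size())
            if buf.Len() >= chunkSize {
                uploadChunk(buf.Bytes())
                buf.Reset()
            }
            return nil
        })
        if buf.Len() > 0 {
            uploadChunk(buf.Bytes())
        }
    }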

Has the rewrite been released yet?

Update: This affects restore operations as well. If a backup took 64GB of RAM to run, it also seems to take 64GB of RAM to load the file list during a restore. Is this something that’s still being actively worked on, or is the enhancement to avoid loading the entire file list into memory already live?