Win GUI Crash during backup

Attempting a backup of about 150GB on a Windows 2008 server to B2. The duplicacy_win_i386_2.0.9.exe keeps crashing with a “VirtualAlloc of [xxxxxxx] bytes failed with errno=487…fatal error: runtime: cannot map pages in arena address space…”

This sounds similar to an existing report (link); maybe just a 64-bit build would help?

Any thoughts? I'm hoping to move to Duplicacy->B2 for backups of about 20 RHEL servers and a dozen or so Windows servers, but I need to make sure it'll work for us.

Thanks!

If your server is 64-bit, please run the 64-bit build. Go programs are very memory-limited when running as 32-bit processes on Windows.
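To double-check which build is actually running, note that any Go binary knows the architecture it was compiled for. A minimal sketch (my own illustration, not part of Duplicacy):

```go
package main

import (
	"fmt"
	"runtime"
	"unsafe"
)

func main() {
	// runtime.GOARCH is fixed at compile time: "386" for the
	// 32-bit build, "amd64" for the 64-bit one.
	bits := unsafe.Sizeof(uintptr(0)) * 8
	fmt.Printf("compiled for %s (%d-bit pointers)\n", runtime.GOARCH, bits)
	if bits == 32 {
		// A 32-bit process gets at most 2-4GB of address space,
		// which the Go runtime's arena reservation can exhaust.
		fmt.Println("warning: 32-bit process, address space is tightly capped")
	}
}
```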

Hmm, maybe 32-bit shouldn't be the default (and only) download on the main page. In fact, I could only get the 64-bit Windows GUI download by guessing at the URL.

Regardless, after fully removing the old version (which required manually deleting the service, since it isn't removed by uninstalling, rebooting, etc.), installing the 64-bit GUI, and confirming that the services are listed as 64-bit, it's still crashing with the same VirtualAlloc error. Any other thoughts?
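For anyone else stuck with the leftover service: `sc delete <name>` from an elevated prompt is the quickest fix, but it can also be done programmatically. A minimal Go sketch using golang.org/x/sys (the service name "Duplicacy" is my guess; check the real one with `sc query`):

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/windows/svc/mgr"
)

func main() {
	// Connect to the local service control manager (requires admin rights).
	m, err := mgr.Connect()
	if err != nil {
		log.Fatalf("cannot connect to service manager: %v", err)
	}
	defer m.Disconnect()

	// "Duplicacy" is an assumed service name; yours may differ.
	s, err := m.OpenService("Duplicacy")
	if err != nil {
		log.Fatalf("cannot open service: %v", err)
	}
	defer s.Close()

	// Mark the service for deletion; it disappears once stopped.
	if err := s.Delete(); err != nil {
		log.Fatalf("cannot delete service: %v", err)
	}
	fmt.Println("service marked for deletion")
}
```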

How many files are under the repository directory, and how much memory does this computer have? Memory usage is closely tied to the number of files to be backed up.

It's got 3GB of RAM (it's a VM) and about 160GB across 250,000 files. This is a fairly conservative scenario for a file server. Is Duplicacy simply not going to work without throwing tons of RAM at it? It seems like there should be a way to write a backup app that doesn't need to keep everything resident in RAM. CrashPlan Pro was always awful with memory faults, but TSM doesn't even flinch at the 2.5+ million files on one of my other file servers.

I'd really love to move away from TSM (due to cost and other support issues), and I'd also love to use something open source and cross-platform. We've already got a sizable contract with Backblaze for endpoint backups, and B2 is really attractive on a cost/billing basis. So Duplicacy seems like a great fit, but I've got to get it working reliably across 25+ Windows and RHEL servers.

I suppose I could create repositories lower in the hierarchy instead of at the root of the storage drive, but that complicates configuration… and even going one level down and creating a repo for each subdirectory, Duplicacy would still need to handle a directory of about 100GB and 200,000 files. I really couldn't go any deeper, or I'd have to create hundreds of repos… and frankly, backup software shouldn't require that. I don't think it's unreasonable to expect to be able to point backup software at a drive and have it more or less just work.
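If it ever came to that, the per-subdirectory setup could at least be scripted. A rough Go sketch that walks the top level of a drive and prints a `duplicacy init` command per subdirectory (the drive letter, the fileserver-<name> snapshot-ID scheme, and the b2://my-bucket storage URL are all placeholders):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	root := `D:\` // placeholder storage drive
	entries, err := os.ReadDir(root)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		dir := filepath.Join(root, e.Name())
		// One repository per top-level directory, named after it.
		fmt.Printf("cd %s && duplicacy init fileserver-%s b2://my-bucket\n",
			dir, e.Name())
	}
}
```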

Any advice?

Currently Duplicacy isn't optimized for memory usage: the entire snapshot needs to be loaded into memory. However, I would be surprised if 250,000 files took more than 1GB. The log also indicates that the first two attempts failed with a file read error, which means the snapshot had already been loaded into memory by that point. The memory error occurred only on the third attempt, so perhaps other applications running at the time, together with Duplicacy, used up all available memory. Increasing the size of the paging file would definitely help in this case.
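As a rough sanity check on that 1GB figure, here is a back-of-envelope estimate, assuming each in-memory file entry carries a path, some metadata, a hash, and chunk references (the per-entry sizes below are guesses for illustration, not Duplicacy's actual structs):

```go
package main

import "fmt"

func main() {
	const files = 250_000

	// Guessed per-entry costs, not measured values:
	const (
		avgPathBytes  = 120 // average full path length
		hashBytes     = 32  // one content hash per file
		metaBytes     = 48  // size, timestamps, mode, etc.
		chunkRefBytes = 64  // references into the chunk list
		goOverhead    = 3   // string headers, pointers, GC slack
	)

	perEntry := (avgPathBytes + hashBytes + metaBytes + chunkRefBytes) * goOverhead
	total := files * perEntry
	fmt.Printf("~%d bytes/entry x %d files = ~%d MB\n",
		perEntry, files, total/(1<<20))
}
```

Even with the generous 3x overhead multiplier this comes to well under 200MB, so the snapshot alone should not exhaust 3GB of RAM.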

Just wanted to follow up, because this seems to be resolved (avoided?). I realized that this server had all page files disabled due to an earlier issue trying to defrag and shrink a volume. After turning system-managed paging back on and rebooting, Duplicacy finished a backup successfully. Obviously, it would still be great if/when Duplicacy becomes more memory-efficient (I haven't yet tried backing up a primary file server with ~6TB of data across god knows how many files).

Also, to clarify: the two file read errors in the most recent image above have nothing to do with the VirtualAlloc error. They were just two locked files, and I wasn't using VSS.

Thanks for reporting back. Memory optimization will be the next thing to do after GUI version 2.1.0 is released.