New to Duplicacy, question about interpreting the log

Hi,

I’m trying out Duplicacy with the goal of backing up my Docker containers’ persistent storage folders as well as a few other random things.

I set up the schedule to allow a maximum of 4 hours for it to work while the containers are stopped via cron.

But the logs I keep getting just say “aborted” and don’t seem to indicate that any complete backups were made.

Here are the logs…

2023/11/29 02:05:01 Starting schedule schedule1 at scheduled time 2023-1129 02:05
2023/11/29 02:05:01 Schedule schedule1 max run time: 13500 seconds
2023/11/29 02:05:01 Created log file /var/log/backup-20231129-020501.log
2023/11/29 02:05:01 Running /root/.duplicacy-web/bin/duplicacy_linux_x64_3.2.3 [-log backup -storage duplicacydocker2 -stats]
2023/11/29 02:05:01 Set current working directory to /var/cache/duplicacy/repositories/localhost/0
2023/11/29 05:50:02 Stopped schedule schedule1 due to max run time exceeded
2023/11/29 05:50:02 Sending email to myemail@gmail.com
2023/11/29 05:50:04 Schedule completion email has been sent to myemail@gmail.com
2023/11/29 05:50:07 Schedule schedule1 next run time: 2023-1130 02:05

Now in the Duplicacy GUI, under Storage it shows “Size -”, “Backup IDs -”, “Chunks -”, “Status -”.

So I’m guessing what all of this means is that Duplicacy was never able to complete a full backup? And therefore it is being forced to start over again next time? And that I should probably give it more than 4 hours to complete the first one?

It’s around 200 GB of files to back up and I have a 1000 Mbps upload connection (backing up to B2), so I assumed 4 hours would be enough, but I’m guessing not?

A few things here. B2 provides roughly 10 Mbps of throughput per thread, so to saturate your gigabit connection you would need about 100 threads. Add the -threads 50 parameter to the backup job and it will go roughly 50x faster.
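To put rough numbers on it: 200 GB is about 1,600 gigabits, so a single ~10 Mbps thread would need well over 40 hours, which is why the 4-hour window was never going to be enough; with 50 threads the transfer itself should fit in roughly an hour. On the CLI side the invocation would look something like this (the storage name is taken from your log; the web GUI builds this command for you once you add the option):

duplicacy backup -storage duplicacydocker2 -stats -threads 50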

Second, there’s the compute performance of your host, which has to actually compress and encrypt the data. I assume it’s adequate to keep up.

Lastly, does your host filesystem support snapshots? The better approach, compared to stopping services for the duration of the backup, is to stop them for just a moment to take a filesystem snapshot, resume the services, and then back up that snapshot for however long it takes.
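As a rough sketch of what that could look like with ZFS, for example (the dataset and container names here are just placeholders for whatever your setup uses):

docker stop my-app                            # stop only long enough for a consistent state
zfs snapshot tank/docker-data@pre-backup      # point-in-time snapshot, effectively instant
docker start my-app                           # services are back within seconds
# then let Duplicacy back up the snapshot contents
# (visible under the dataset’s .zfs/snapshot/pre-backup directory)
# for however long it takes, while the containers keep running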

There may be other ways to do that, specific to your scenario: for example, if the data is a database, export it and back up the export without stopping the containers. Or if the data does not need to be coherent, then it’s not an issue in the first place.
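A minimal sketch of the export approach, assuming a Postgres container (the container, user, and database names are placeholders):

docker exec my-postgres pg_dump -U myuser mydb > /backups/mydb.sql
# Duplicacy then backs up /backups; no container downtime needed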

Thanks for the response. I’ll try adding the -threads parameter. I assume that can be done via the GUI?

Running on an Intel NUC with an i5 processor, so I assume it can handle it.

I’m not sure about the host filesystem. I’m running Ubuntu with whatever default filesystem it installs? Sorry, I’m fairly new at Linux/Docker/etc.

For your last question, I assume exporting the database would be something I’d have to manually set up for each application? Or are you saying I should briefly stop the containers, set up some kind of script to copy their databases to another location, and then let Duplicacy back them up from that location? I don’t know if the data needs to be coherent. I’m mostly backing up things like Plex, qBittorrent, Sonarr, Radarr, Calibre-web, etc.

Yes, if you click on the - in that line (it’s really tiny) a dialog will open where you can put the options.

Definitely.

Probably ext4. It does not support snapshots; for that you would want zfs or btrfs.
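You can check what it actually is with something like this (adjust the second path if your container data lives somewhere other than the Docker default):

findmnt -no FSTYPE /
df -T /var/lib/docker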

Depending on the complexity of the app, and whether it perhaps supports some way to create backups internally, this may or may not be worth the time.

This is actually not a bad idea, given the lack of proper snapshotting. Some filesystems (again, not ext4) support almost instant copies of massive amounts of data by not actually copying anything until it’s modified (so-called COW filesystems, Copy-On-Write). This is almost like snapshotting for our purposes here.
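For example, on btrfs (or XFS with reflink support) a plain cp can make such an instant copy-on-write clone; the paths below are just placeholders:

cp -a --reflink=always /srv/docker-data /srv/docker-data-copy
# near-instant regardless of size; point Duplicacy at the copy and delete it afterwards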

Ah, I would not bother even stopping the containers for those. Just back them up live (except maybe calibre; not sure how it works, haven’t used that one).

Thank you! Appreciate the support.

So if I have a maximum amount of time that a scheduled backup can run for and it doesn’t complete the backup in that amount of time, does it somehow pick up where it left off the next time? Or am I starting over completely if it doesn’t finish in the allotted time?

In a way, yes. Chunks that have already been uploaded won’t need to be re-uploaded, but chunk generation will still need to happen from scratch.
