Run parallel in schedule jobs

Hi,

I have searched the forum but can’t understand this config in my web GUI.

I have like 15 backup jobs that I want to run at a specific time.

Shall I set all 15 jobs to start at 07:00, and must I choose “run parallel” or will they run one at a time?

I guess I could time them once and then set different start times for them, but that feels bad.

What is the best setting (in the web GUI) for this?

Have a nice day

Hej Peter,

To first answer your question, I also started out with a lot of backups (one for each folder I wanted to back up) and did indeed run them one after another in one schedule. They will run sequentially unless you mark them as parallel.

From here on I’ll go increasingly off-topic :slight_smile:

However, I then found out that Duplicacy will follow top-level symlinks in your backed-up folder. This allowed me to consolidate several backup folders into one backup ID. In my case, it’s one ID per machine and it contains several folders. Perhaps this doesn’t apply to you.
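Roughly like this (the paths here are made-up examples, not my real setup): I keep one folder per machine that contains nothing but symlinks, and point a single backup ID at it.

    # one repository folder per machine, containing only first-level symlinks
    mkdir -p /backups/machine1
    ln -s /srv/documents /backups/machine1/documents
    ln -s /srv/photos    /backups/machine1/photos
    # point one Duplicacy backup (one ID) at /backups/machine1;
    # Duplicacy follows these top-level symlinks into the real folders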

For my server I run duplicacy-web in @saspus’s docker container and map all folders I wish to back up as volumes into one directory. This server also does all checking and pruning so clients don’t have to.
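As a rough sketch (the image name, port and container paths are from memory, so double-check against the container’s documentation; the host paths are just examples):

    # run duplicacy-web in docker and map every folder to back up
    # as a read-only volume under one directory inside the container
    docker run -d --name duplicacy-web \
      -p 3875:3875 \
      -v /srv/duplicacy/config:/config \
      -v /srv/documents:/backuproot/documents:ro \
      -v /srv/photos:/backuproot/photos:ro \
      saspus/duplicacy-web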

I then discovered the RSA feature that allows clients to back up sensitive data without having to trust each other or the checking/pruning server. The private key required to decrypt the data is stored securely elsewhere.
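If you want to try it, the key pair is a normal RSA pair generated with openssl; the public key goes to the clients and the private key stays somewhere safe (the Duplicacy flag name below is from memory, so verify it against the docs):

    # generate an RSA key pair; keep private.pem (and its passphrase) off the clients
    openssl genrsa -aes256 -out private.pem 2048
    openssl rsa -in private.pem -pubout -out public.pem
    # clients encrypt new chunks with public.pem only; restoring later
    # needs private.pem (in the CLI this is, as far as I recall, the -key option)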

Duplicacy has a bit of a learning curve but it’s also very satisfying to tinker with…

-Alex

Hi @alind and thank you for a quick and great answer,

English is not my native language, so I find some stuff a bit hard to grasp.

So about having many separate backups and schedules:

If I don’t check “run parallel” and set the same schedule time on them, will they run one at a time until every backup is done?

About not having separate backups:

I have two collections of things I want to back up from my Unraid server:

  • VMs that are all separate files like file1.tar.gz, file2.tar.gz… in a folder
  • Dockers that are all separate files like file1.tar.gz, file2.tar.gz… in another folder

Of course I first tried (as it’s much easier) to have only two backup jobs, one for each collection, but then I couldn’t figure out how to restore only one VM or one Docker. I don’t want to restore everything in my two backups.

Also (but this might be doable) I don’t want to schedule every VM, just the ones I’m currently working on, so I don’t bloat Duplicacy with versions of an unchanged VM.

I really would like to have only two backups and then two schedules containing separate jobs for each VM/Docker.

I have the files backed up on my Unraid server so I’m not that worried, but I want to be able to tinker with my job schedule for a specific VM I’m currently working with. Maybe this is unnecessary, as Duplicacy might not do anything if the file is unchanged?

But I still need to be able to restore separate files, not whole folders.

Well, maybe I can put each file.tar.gz in a separate folder on my Unraid server. Will this make it possible to choose that folder in Duplicacy for a separate restore?

You are in good company.

If I understand you correctly, you can create one “schedule” and add several “jobs” to it. The jobs could be the individual backups you wish to perform. In this case, the parallel flag indicates whether you want the jobs to run sequentially or concurrently.

If, however, you have several schedules, each with one job in them, the parallel flag should not do anything.

I saw your Unraid thread and scripts, and it looks like you rsync data into a backup folder, then tar and compress it before backing it up with Duplicacy.

Unfortunately this might not be ideal for Duplicacy, as you will always have modified files (fresh tar archives) and already-compressed data (gz).

While I’m not familiar with Unraid, I would rather create symbolic links from your backup folder into the data you’re rsync-ing. That would let Duplicacy scan the actual source data for changes and deduplicate properly.

Duplicacy will also compress the data itself so you shouldn’t have to.

Duplicacy can restore individual folders or files from a backup. You should not need to restore all files in a backup. As such, lumping multiple files into one backup should not be an issue.

While it takes a bit of getting used to, the Web-UI does allow you to specify which files you wish to restore and to where.
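For reference, with the CLI it looks roughly like this (the revision number and file path are made up, and I’m writing the syntax from memory):

    # restore a single file from revision 25 of this repository's backup,
    # run from inside the repository folder (the Web-UI restore dialog does the same)
    duplicacy restore -r 25 -- domains/vm1/vm1.tar.gz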

Good luck!

-Alex


Makes sense.

My nightly local Unraid backups with rsync and tar do make the VM look like a new file, but it might not actually have changed, right?

So in Unraid my scripts fill a folder with tarred backups, and I want to keep this since all VMs and Dockers are on a cache disk for speed and I feel more secure having them backed up on the RAID.

BUT: for backing up with Duplicacy I can have symlinked folders inside my connected backups folder that point to the actual folders in Unraid.

My structure:

Real source on Unraid:

  • Dockers: /mnt/user/appdata
  • VMs: /mnt/user/domains

Backups (where Duplicacy has to fetch from):

  • /mnt/user/backups

So will this work?
/mnt/user/appdata --(symlinked/mirrored to)--> /mnt/user/backups/duplicacy/dockers

and

/mnt/user/domains --(symlinked/mirrored to)--> /mnt/user/backups/duplicacy/vms

Then have two backup jobs in Duplicacy, one for each folder (dockers/vms).

This way my “local backups” won’t interfere with Duplicacy’s backup process?

Aha, I think I understand your setup a bit better now.

  1. You want the local rsync+tar to happen anyway.
  2. You wish to back up these tars somewhere else in an efficient and deduplicated fashion using Duplicacy.
  3. All of this while minimizing downtime so the docker and VMs don’t have to wait for duplicacy to finish.

If this is accurate, I think I would stick with your original solution. :+1:

Duplicacy would indeed have to scan every new tar but deduplication should minimize the actual uploaded data.

My only uncertainty is whether gzip has cascading file differences, like an entropy-coded format (e.g. 7zip) where a small change at the beginning of the file changes the rest of the file, or whether small changes in the tar also produce only small changes in the tar.gz.
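One thing that might help, if your gzip build supports it, is the --rsyncable flag (newer GNU gzip and pigz have it), which resets the compressor at intervals so a small change in the tar should only change a small part of the .tar.gz. I haven’t tested this together with Duplicacy’s chunking, so treat it as something to experiment with:

    # sketch: compress with periodic flush points so small input changes
    # stay local in the compressed output (vm1 is a made-up example name)
    tar -cf - /mnt/user/domains/vm1 | gzip --rsyncable > vm1.tar.gz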

I guess you will find out very quickly when you run it. :slight_smile:

Hi @alind ,

Well, the symlink didn’t work. Both I (in Unraid) and Duplicacy only saw the symlinked folder; when I click on it in Unraid’s file manager I get taken directly to the “real” folder, and Duplicacy saw the folder too, but when I tried to do a backup I got an error.

Also, in Duplicacy, when I tried to open the folder to include/exclude it, it said that it was empty.

I don’t want Duplicacy to have any direct access to my “live folders”, because I’m not skilled enough to be sure I won’t mess something up in Duplicacy.

Sure, the gain with tar is smaller uploads to Duplicacy, but everything I want to back up is like 460 GB, well under GDrive’s daily limit of 750 GB, and if my two folders grow I can do one each day.

The most important thing for me is, like you said, minimum downtime for my Dockers and VMs. How long it takes for Duplicacy to do the backups I don’t care about.

In fact, if I skip the tar part in my script and only rsync the exact folder structure, I will have much less downtime.

Also, if rsync just copies the folders to my local backup folder, they might keep their modified date/time and Duplicacy can act like it should with versions?
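Something like this is what I have in mind (using my folder structure from above; the exact rsync flags I’d have to test):

    # mirror the live folders into the local backup folder, preserving
    # modification times so Duplicacy only sees real changes
    rsync -a --delete /mnt/user/appdata/ /mnt/user/backups/duplicacy/dockers/
    rsync -a --delete /mnt/user/domains/ /mnt/user/backups/duplicacy/vms/
    # the trailing slashes mean "copy the contents of the source folder"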

The restore part I will do in steps for safety. I will restore a selected Docker folder or VM folder to a dedicated “restore folder” in Unraid, and then I can inspect the restore and copy it into the “live folder” manually.

So in short, I want a daily local backup on my Unraid server and a Duplicacy backup in case my house burns down.