Automatic revision reduction

In reference to: Prune command details

I am using the web version, which is currently /duplicacy_web_linux_x64_1.5.0.

OneDrive is used as the backend.

My prune settings are -keep 0:30 1:21 1:14 1:7 1:1, which should work out to about one revision per day, and nothing kept after 30 days.
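For reference, on the CLI each interval needs its own -keep flag, and the intervals must be sorted by their second number in decreasing order; a sketch of the equivalent prune invocation, assuming those exact settings:

    # One -keep per interval, sorted by the day threshold in decreasing order:
    # delete everything older than 30 days, keep one revision per day otherwise.
    duplicacy prune -keep 0:30 -keep 1:21 -keep 1:14 -keep 1:7 -keep 1:1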

The weird thing is: because I have a 5-minute backup schedule for /etc, I generate thousands of revisions, which is nice, but today I needed a complete disaster recovery and I was unable to fetch it due to timeouts.

Does anyone have an idea how I can SAFELY reduce the revisions so I can go ahead with this backup storage?

There are many ways to approach this. What is your backend?

I updated my initial post to answer your question: it's OneDrive. Thanks for the prompt response.

What I would do is make a temporary backup of the snapshots\id folder directly on the storage, or instead create a subdirectory (snapshots\id\tmp) and move all but the important snapshot files into it. These are just small files with revision numbers as file names - e.g. 1, 288, 576, 577, etc.

It's very important not to run any prune operations before moving them back.
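If you'd rather script that shuffle than click through the OneDrive web UI, here is a minimal sketch with rclone, assuming a remote named myOneDriveEndpoint: with the storage at its root, a snapshot ID of etc, and revision 577 as the one you need (all placeholders):

    # Park every revision file except 577 in a sibling folder,
    # outside snapshots/ so Duplicacy never has to list them.
    rclone mkdir myOneDriveEndpoint:snapshots-parked/etc
    rclone move myOneDriveEndpoint:snapshots/etc \
        myOneDriveEndpoint:snapshots-parked/etc \
        --exclude 577 --max-depth 1

    # After the restore, move everything back BEFORE any prune runs:
    rclone move myOneDriveEndpoint:snapshots-parked/etc \
        myOneDriveEndpoint:snapshots/etc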

I will give it a shot - let's see.

@Droolio it seems that sped up my seeking, but restore is a pain, I see :frowning:
I will have to create new backups, involving more specific folders… next time :wink:

OneDrive really does not like a lot of requests and may be throttling hard. Make sure you only use one thread.
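If you're driving the restore from the CLI, the thread count can be pinned explicitly (revision 577 is a placeholder):

    # Restore revision 577 with a single thread to stay under
    # OneDrive's request throttling.
    duplicacy restore -r 577 -threads 1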

You can also try to decouple Duplicacy from the OneDrive API entirely:

  1. use rclone mount with a local cache and the --fast-list option to mount OneDrive locally, something like so:

    # Create a mount point and mount the remote with a full VFS cache,
    # so file data is cached on local disk once fetched.
    mkdir /tmp/oneDrive
    ./rclone mount \
        myOneDriveEndpoint: /tmp/oneDrive \
        --volname OneDrive \
        --vfs-cache-mode full \
        --daemon \
        --fast-list
    

    You will need to have FUSE working. You can add the --read-only flag to prevent inadvertently messing up the cloud datastore.

  2. In Duplicacy, add a local storage at /tmp/oneDrive (see the CLI sketch after this list)

  3. Restore your stuff from there.
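From the CLI, steps 2 and 3 might look like this (the storage name local-od, snapshot ID etc, and revision 577 are placeholders):

    # Register the mounted path as an additional storage for this repository;
    # the storage is already initialized on OneDrive, so no init is needed.
    duplicacy add local-od etc /tmp/oneDrive

    # Restore from the mounted storage instead of the OneDrive API.
    duplicacy restore -storage local-od -r 577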

This will accomplish several things:

  1. rclone will use optimized batched listing, which reduces API pressure and increases the chances of success when fetching directories with a massive number of files.
  2. Even if OneDrive fails some calls, retrying is handled by rclone, and each attempt makes incremental progress by adding more data to the cache, so it will eventually succeed; therefore Duplicacy's requests will ultimately succeed too.

Either way, I would not alter your backup policies and cadence because of deficiencies of the target storage.

Note: I would only use Duplicacy with mounted storage like that for restore, never for backup, because in the latter case Duplicacy completing the backup does not mean the data is safe in the cloud (unless you configure the cache as write-through, but then it's no different from backing up directly).


Well, 5 threads is working well; I reduced it from 30 to 10, but I think 5 is enough.
