Hi! I’m trying to understand why some of my backups are taking a comparatively long time, so I ran them with
duplicacy -d backup -stats | ts '[%Y-%m-%d %H:%M:%S]' > /tmp/file
first with a single filter to say I don’t care about Plex log files:
[2020-04-13 14:51:06] There are 0 compiled regular expressions stored
[2020-04-13 14:51:06] Loaded 1 include/exclude pattern(s)
[2020-04-13 14:51:06] Pattern: -var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs/
[2020-04-13 14:51:06] Listing
[2020-04-13 15:13:04] Packing bin/journalctl
21 minutes 58 seconds with the filter applied
[2020-04-13 15:25:14] There are 0 compiled regular expressions stored
[2020-04-13 15:25:14] Loaded 0 include/exclude pattern(s)
[2020-04-13 15:25:14] Listing
[2020-04-13 15:27:51] Packing bin/journalctl
2 minutes 37 seconds without any filters
As you can see, applying even the most specific filter I could create added 19 minutes 21 seconds to the run.
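For reference, the elapsed times can be recomputed directly from the log timestamps above (a quick Python sketch, just to show the arithmetic):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def elapsed(start: str, end: str):
    """Difference between two timestamps copied from the log."""
    return datetime.strptime(end, FMT) - datetime.strptime(start, FMT)

# Filtered run: "Loaded 1 include/exclude pattern(s)" -> "Packing bin/journalctl"
filtered = elapsed("2020-04-13 14:51:06", "2020-04-13 15:13:04")
# Unfiltered run: "Loaded 0 include/exclude pattern(s)" -> "Packing bin/journalctl"
unfiltered = elapsed("2020-04-13 15:25:14", "2020-04-13 15:27:51")

print(filtered)              # 0:21:58
print(unfiltered)            # 0:02:37
print(filtered - unfiltered) # 0:19:21
```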
Is there a better filter I can use? Is such a large variation in time expected? My intention is to filter at the path level: if a directory matches a pattern, everything inside it should be treated the same way, without testing each file individually. Does duplicacy have an optimization like that?
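To make the behaviour I'm hoping for concrete, here's a hypothetical sketch (this is my own illustration, not duplicacy's actual implementation; the EXCLUDES list and excluded() helper are names I made up):

```python
# Hypothetical path-level exclusion: one prefix test per path. A walker
# built on this could prune a whole subtree as soon as the directory
# itself matches, instead of testing every file underneath it.
EXCLUDES = [
    "var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs/",
]

def excluded(path: str) -> bool:
    # path is repository-relative and '/'-separated; directories end in '/'
    return any(path.startswith(prefix) for prefix in EXCLUDES)
```

With this shape, checking the Logs/ directory once would be enough to skip its entire subtree, so the cost of the filter would not grow with the number of log files inside it.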