Feature Suggestion: Limit CPU usage

If that is the case, then it is the file indexing phase that causes the burst. I’m not sure it is CPU-intensive – more likely disk-I/O-intensive. The only solution I can think of is to add some intentional delay to slow it down.


Well, it is CPU utilization that heats up the cores, so it is code that runs on a CPU, regardless of what subsystem or driver it belongs to.

Looking at a spindump taken during the indexing stage: out of 10 seconds, this thread unsurprisingly took about 7 seconds of CPU time:

   Thread 0x1dc823           1000 samples (1-1000)     priority 31-46 (base 31)  cpu time 7.107s (24.7G cycles, 33.6G instructions, 0.74c/i)
  992  runtime.main + 519 (duplicacy + 196007) [0x402fda7]
    992  main.main + 21694 (duplicacy + 8262430) [0x47e131e]
      992  github.com/gilbertchen/cli.(*App).Run + 1617 (duplicacy + 3513441) [0x4359c61]
        992  github.com/gilbertchen/cli.Command.Run + 1905 (duplicacy + 3523649) [0x435c441]
          992  main.backupRepository + 1471 (duplicacy + 8218735) [0x47d686f]
            992  github.com/gilbertchen/duplicacy/src.(*BackupManager).Backup + 1236 (duplicacy + 7707156) [0x4759a14]
              992  github.com/gilbertchen/duplicacy/src.CreateSnapshotFromDirectory + 988 (duplicacy + 7973420) [0x479aa2c]

But how that time is used is what’s interesting. Going further down the stack, those 992 samples are split among the following stacks: roughly 30% regex matching and 70% APFS kernel code:

One

            566  github.com/gilbertchen/duplicacy/src.ListEntries + 2665 (duplicacy + 7844649) [0x477b329]
              416  github.com/gilbertchen/duplicacy/src.(*Entry).ReadAttributes + 201 (duplicacy + 8139065) [0x47c3139]
                376  github.com/gilbertchen/xattr.Listxattr + 77 (duplicacy + 7633837) [0x4747bad]
                  372  syscall.Syscall6 + 48 (duplicacy + 729408) [0x40b2140]
                    [ truncated calls into kernel and APFS driver ]

Two

            226  github.com/gilbertchen/duplicacy/src.ListEntries + 4279 (duplicacy + 7846263) [0x477b977]
              208  github.com/gilbertchen/duplicacy/src.MatchPath + 746 (duplicacy + 8132506) [0x47c179a]
                208  regexp.(*Regexp).MatchString + 85 (duplicacy + 1250789) [0x41315e5]
                  208  regexp.(*Regexp).doMatch + 177 (duplicacy + 1237281) [0x412e121]
                    [ truncated user-mode stack all the way ]

Three

            195  github.com/gilbertchen/duplicacy/src.ListEntries + 462 (duplicacy + 7842446) [0x477aa8e]
              190  io/ioutil.ReadDir + 124 (duplicacy + 1757084) [0x41acf9c]
                190  os.(*File).Readdir + 62 (duplicacy + 831966) [0x40cb1de]
                  155  os.(*File).readdir + 446 (duplicacy + 832734) [0x40cb4de]
                    155  os.Lstat + 77 (duplicacy + 864125) [0x40d2f7d]
                      155  os.lstatNolog + 92 (duplicacy + 865740) [0x40d35cc]
                        155  syscall.Syscall + 54 (duplicacy + 729302) [0x40b20d6]
                          [ another call into APFS, also truncated ]

So yes, I guess if we could insert throttling into the loop in ListEntries it would tremendously improve the user experience: there is absolutely no hurry to burn CPU to back up ASAP, especially with filesystem snapshot support already in place. (If anything, even Time Machine takes ages to back up – I think that is deliberate, for similar reasons.)
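To illustrate (a minimal sketch only – the function name and constants are made up, and the real ListEntries is more involved): the pacing could be as simple as sleeping every N entries, which caps the scan’s duty cycle without changing its behavior.

    // Hypothetical sketch: pace a directory-scan loop by sleeping every few
    // entries instead of processing them as fast as the CPU allows.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        entriesPerTick = 100                   // handle this many entries...
        tickDelay      = 50 * time.Millisecond // ...then yield the CPU briefly
    )

    func listEntriesPaced(entries []string, process func(string)) {
        for i, e := range entries {
            process(e) // stat, xattr listing, exclude-pattern matching, etc.
            if (i+1)%entriesPerTick == 0 {
                time.Sleep(tickDelay)
            }
        }
    }

    func main() {
        listEntriesPaced([]string{"a", "b", "c"}, func(p string) {
            fmt.Println("scanning", p)
        })
    }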


Sounds like an easy, global option to add: go - how to change process (application) priority from Normal to Low Programmatically in Golang - Stack Overflow
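For reference, a minimal sketch of what that answer boils down to on Unix-like systems (Windows would need SetPriorityClass from golang.org/x/sys/windows instead):

    // Sketch: renice the current process to the weakest scheduling priority.
    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // who == 0 means "the calling process"; 19 is the lowest priority.
        if err := syscall.Setpriority(syscall.PRIO_PROCESS, 0, 19); err != nil {
            fmt.Println("setpriority failed:", err)
        }
        // ... continue with the backup at low priority ...
    }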

I think this would be worthwhile in general. But I realize that if your laptop is not busy with other tasks, lowering the priority could still consume a lot of CPU (and kick in the cooling fans).

Process or thread priority only affects OS scheduler behavior; it does not throttle CPU usage for anyone.

If there were other processes in the system competing for CPU, lowering Duplicacy’s priority would result in those processes being scheduled more often and for longer, making them progress faster, but it would not affect total CPU utilization in any way. That should be the default for backup tools in general. For the record, I run my copy of Duplicacy under nice for the exact same reason – I don’t want it to interfere with anything else I do on the machine.

If there are no other processes - and some process wants to do work - well, it’s the one that gets scheduled. There is nobody else to compete with so it gets scheduled every time.

So yes, while it is useful to lower Duplicacy’s priority:

  1. It does nothing to address the issue at hand.
  2. I don’t think Duplicacy should manage priorities itself in the first place. Every task scheduler supports setting priority, and some go as far as managing not only CPU time but also IO (see launchd, and the sketch below), and every decent OS has tools to run these things interactively outside of schedulers (see nice). Duplicacy needs to focus on backing up; it just needs to learn to pace itself :slight_smile:
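For illustration, roughly what that looks like as a launchd job (a sketch – the label, path, and interval are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>local.duplicacy.backup</string>
        <key>ProgramArguments</key>
        <array>
            <string>/usr/local/bin/duplicacy</string>
            <string>backup</string>
        </array>
        <key>StartInterval</key>
        <integer>3600</integer>
        <!-- schedule at background priority and throttle its disk IO too -->
        <key>ProcessType</key>
        <string>Background</string>
        <key>LowPriorityIO</key>
        <true/>
        <key>Nice</key>
        <integer>19</integer>
    </dict>
    </plist>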

I agree, backup tools should all default to run at “idle priority”, though there may be times when you would want to adjust it.

I would rather think that forcing a backup utility to “slow down” is contrary to its best usage/intentions. The idea is to reduce the risk of data loss, and that partially equates to getting files backed up sooner rather than later.

It sounds like you are a pretty savvy PC user. Wouldn’t it be better to only schedule backups when you are not watching a video, or when the laptop is plugged in, etc.? You are probably increasing your risk a little, but it should easily produce the results you are after, no?

This would just relocate the problem to the user’s shoulders: the user would still have to “throttle” duplicacy, but instead of it happening automatically at the OS scheduler level, it would now be manual and global: effectively pausing it when the user wants silence and unpausing it afterwards. This will be forgotten 50% of the time, as is any menial manual task.

This must be an automatic, set-it-and-forget-it type of thing. Especially when maintenance tasks such as backup are concerned.

A very good example is Time Machine. I don’t know (in the sense that I don’t feel any impact of it on my workflow) when it runs, but every time I check on it, the last backup was completed within the past hour. That’s the behavior I’m looking for.

I would rather think that forcing a backup utility to “slow down” is contrary to its best usage/intentions. The idea is to reduce the risk of data loss, and that partially equates to getting files backed up sooner rather than later.

I sort of disagree. Backup is usually scheduled every hour, or every four hours.

Duplicacy, however, completes a backup pass in about 40 seconds (thumbs up for that!), burning 100% CPU on the way.

If it instead took 50 minutes, the backup frequency would remain the same (so the possibility of data loss does not change) – it would still run every hour – but it would avoid the burst CPU workload: the machine would not heat up, the fans would not spin, and there would be additional power savings (a hot CPU burns more power at the same load – that’s how semiconductors work – so even though it accomplishes the same task over a longer period, slowing it down saves power).

Well, we’re still talking about YOUR use-case. Backups do not have to be every 4 hours or every hour. If that’s how you prefer it, it’s fine, and your example fits.

You’re also referring to some specific hardware. I have laptops that do not have ANY fans. I have PCs and servers that could use 1-2 cores and not materially increase fan usage. (I’m also not sure how many additional watt-hours you’d use over the course of the year for “power savings”; I would be surprised if it were material… it would seem academic.)

I will respectfully bow out of your thread, as you have a Feature Request for the devs. I didn’t mean to troll or be negative; I was trying to offer an alternative that turned out not to be acceptable to you. Best of luck to you.

On the contrary, thank you for contributing to the discussion. It’s always good to have diverse opinions and experiences.

Still, I’ll clarify:

Backups do not have to be every 4 hours or every hour.

Not really. In the GUI version the hourly backup is the default (as it should be), and most users will leave it at that, so this use case must be optimized the most. Power users who run it on servers can and should configure things per their requirements, but the out-of-the-box configuration should be polished.

Well, we’re still talking about YOUR use-case.
I have laptops that do not have ANY fans.
I’m also not sure how many additional watt-hours you’d use over the course of the year for “power savings”; I would be surprised if it were material… it would seem academic.

Yes, this is about a specific use case, but it is not unique to me in any way: I feel there are plenty of people using laptops on battery who don’t care at all about the cost of power per kWh, but absolutely treasure battery life.

Also consider this: when the battery level is low, the user could have worked on the remaining few percent for a while (editing text or watching a video consumes very little power), but a 100% CPU hog kicking in for no reason will force an almost immediate low-power alarm and sleep (as the usable capacity depends on the current draw). Not ideal, to say the least – a UX disaster, in my view.

I’m more paranoid than most about feature creep, but I strongly feel that [controllable] throttling is paramount to a good user experience. (And user experience is arguably above everything else on the importance scale.)

P.S. Actually, for myself I’ve concocted an lldb script that injects a sleep after each call to “regexp.(*Regexp).doMatch” and a few other places discussed above, and I launch duplicacy that way (I did not want to modify the source and then have to maintain my changes). While this solves the issue for me, the solution is obnoxious, and users should not need to jump through these hoops.
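The gist of that hack, as a hypothetical reconstruction (the 1 ms delay is arbitrary, and breaking on every match makes the constant stop/resume itself quite costly):

    # attach to a running duplicacy and sleep 1 ms on every regex match
    (lldb) process attach --name duplicacy
    (lldb) breakpoint set --name 'regexp.(*Regexp).doMatch' --auto-continue true
    (lldb) breakpoint command add --script-type python --one-liner "import time; time.sleep(0.001)" 1
    (lldb) continue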

That’s all my [strong] opinion, of course, based on my personal experience, and I do not deny the existence of other views – hence I’m expressing it on a public forum and not in an email to duplicacy staff :slight_smile:


All good then.

I definitely second a FR for a “Do not run on battery” option (it would not completely fix your problem, but could help some use cases).

As I said, best of luck to you! :slight_smile:

Small update: I just stumbled upon a wonderful little utility, GitHub - opsengine/cpulimit: CPU usage limiter for Linux, that works by monitoring the process (group) CPU usage and sending SIGSTOP followed by SIGCONT in rapid succession to pause/unpause the process, thereby limiting its CPU time.

I’m not sure how safe/reliable/appropriate it is to do that to software that actively touches the network – but I’ll try to stress-test it for a few weeks. It would still be safer to throttle in the app itself, at controlled barriers (i.e., I don’t think it would be a good idea to pause the process in the middle of a network transaction), but if this works it may be a viable solution without any need to change Duplicacy, which is awesome.
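The core of cpulimit’s technique is small; here is a hedged Go sketch of the same duty-cycle idea (Unix-only; the pid, ratio, and period are placeholders):

    // Sketch of the SIGSTOP/SIGCONT "PWM" approach: let the target process run
    // for a fraction of each period, then pause it for the remainder.
    package main

    import (
        "syscall"
        "time"
    )

    func throttle(pid int, dutyCycle float64, period time.Duration) {
        run := time.Duration(dutyCycle * float64(period))
        for {
            syscall.Kill(pid, syscall.SIGCONT) // let it run...
            time.Sleep(run)
            syscall.Kill(pid, syscall.SIGSTOP) // ...then pause it
            time.Sleep(period - run)
        }
    }

    func main() {
        // cap hypothetical pid 12345 at ~10% of one core, 100 ms period
        throttle(12345, 0.10, 100*time.Millisecond)
    }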

You’d think that the amount of resources the utility would need to monitor a second process and then issue the signals to the OS would be almost as much as just letting the original process run!

Interesting find though.

It takes about 2.8-3% of the CPU on my machine (iMac Pro, 3.2 GHz). Fairly acceptable.
Also, my concerns about possible network implications were unfounded – the delays introduced are insignificant.

I’ll start using it immediately, and will report if anything goes wrong.

So what is the state of this feature request? If I understand things correctly, there is the cpulimit tool as a workaround on Linux, but nothing for Windows or OSX. In any case, we agree that duplicacy should discipline itself, right? So is @gchen planning anything like that?

Ping.

cpulimit is not an option with DuplicacyWeb… we do need CPU throttling for the scanning phase, please: it runs at full speed, saturating a single core at 100%, since the SSD stopped being the bottleneck ages ago.


Ping.

Apparently the forum does not allow a one-word comment, so I had to write this sentence, even though all I meant to do was bring back this discussion and remind everyone about it…

Fans flaring up periodically are annoying, to the point that I’m considering moving back to the command-line utility run via launchd under a throttler.

I agree. The way I solve it for now on my Linux servers is to add CPU-limiting flags to my Duplicacy Web Docker container. Not an ideal solution, but it has worked quite well for me after a week of usage.

For my Windows and Mac machines I have yet to find a way to limit the CPU usage properly.

@saspus do you think this would help: Add a -max-list-rate option to backup to slow down the listing · gilbertchen/duplicacy@67a3103 · GitHub?


    // From the linked commit: if more files have been listed than the
    // per-second budget accumulated since the start of the backup allows,
    // sleep off the difference.
    if listRateLimit > 0 {
        maxFiles := int(time.Now().Sub(startTime).Seconds() * float64(listRateLimit))
        if len(snapshot.Files) > maxFiles {
            delay := float64(len(snapshot.Files)-maxFiles) / float64(listRateLimit)
            time.Sleep(time.Duration(delay*1000.0) * time.Millisecond)
        }
    }
  • Shouldn’t this be a running average over a fixed, few-seconds-long window rather than over an ever-increasing one? (See the sketch after this list.)
  • It may indeed have the side effect of somewhat accomplishing what we need, but the experience would not be smooth, because the delay is variable, the control input is coarse (on the order of seconds), and there is other variable CPU work involved (like regex matching) that differs from file to file. In other words, this will not result in a sustained use of, say, 10% CPU; instead it will jump all over the place, likely even triggering Turbo Boost.
  • Any solution that attempts to guess and adjust code execution timing will be convoluted, and I would argue that it’s not the job of a command-line utility to moderate itself; there are other utilities available to do that. Why not instead add functionality similar to what cpulimit does (PWM of SIGSTOP/SIGCONT) to duplicacy_web? That would avoid cluttering the CLI engine code (whose users already have cpulimit available to them).
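For the first point above, a hedged sketch of what a sliding-window limiter could look like (hypothetical; names and constants are made up):

    // Sketch: limit files/second over a short sliding window instead of over
    // the whole run, so pacing stays smooth even late into a long backup.
    package main

    import "time"

    type windowLimiter struct {
        rate   int           // max files per second
        window time.Duration // averaging window, e.g. 5s
        stamps []time.Time   // times of recently processed files
    }

    func (l *windowLimiter) wait() {
        now := time.Now()
        // drop timestamps that have fallen out of the window
        for len(l.stamps) > 0 && now.Sub(l.stamps[0]) > l.window {
            l.stamps = l.stamps[1:]
        }
        limit := int(l.window.Seconds()) * l.rate
        if len(l.stamps) >= limit {
            // sleep until the oldest timestamp ages out of the window
            time.Sleep(l.window - now.Sub(l.stamps[0]))
        }
        l.stamps = append(l.stamps, time.Now())
    }

    func main() {
        l := &windowLimiter{rate: 100, window: 5 * time.Second}
        for i := 0; i < 1000; i++ {
            l.wait()
            // ... process one file here ...
        }
    }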

Pretty sure you’re a bot, me ol’ chum. What does that link have to do with the thread – other than the fact that the word ‘Ping’ appears in the context of bumping the topic… :wink:

Yep, likely it was (I mean, likely a bot, but definitely spam). Looks like someone is testing various strategies to bypass Discourse’s antispam. This approach (replying to an old thread with keywords) seems to work more often than others recently.