Suggestion: bypass the Linux page cache, or free already-read pages, for big files

I am backing up a 350GB MySQL database, which includes one big 250GB file, on a server with 32GB of RAM, and I'm running into a lot of issues because swap gets used during the duplicacy backup.

After spending a lot of time thinking duplicacy was directly responsible for the RAM usage, it turns out the main issue is that the Linux page cache is always holding 10-15GB of RAM while the 250GB file is being backed up (this one file actually accounts for 99% of the cache). That by itself should not be a problem, but when MySQL suddenly needs several gigabytes of RAM, the kernel does not free the cache quickly enough and MySQL ends up swapping, which is really bad for performance.
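For anyone who wants to watch this happen, the cache figure comes straight from /proc/meminfo. A minimal sketch (Linux only; the helper name `cached_mb` is mine, not from any tool):

```python
# Report how much RAM the Linux page cache is currently using, in MB.
# cached_mb is a hypothetical helper name, purely for illustration.
def cached_mb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Cached:"):
                # the value is reported in kB, e.g. "Cached: 12345678 kB"
                return int(line.split()[1]) // 1024
    return None

if __name__ == "__main__":
    print(f"page cache: {cached_mb()} MB")
```

Running it before, during, and after the backup makes the 10-15GB cache growth easy to see without any extra tooling.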

The cache is actually doing very little good during the backup: hit ratios stay below 15%, and dropping the cache manually made no difference in performance and caused no spike in io_wait for either reads or writes. So I think it would make sense, for files bigger than some percentage of RAM, to open them either bypassing the cache entirely or freeing the pages after they have been read.
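One way the "free the pages after use" idea could look is calling posix_fadvise(POSIX_FADV_DONTNEED) after each chunk is read. A minimal sketch in Python under my own assumptions (the function name and chunk size are mine; duplicacy itself is written in Go, so this is just to illustrate the syscall pattern, not its actual code):

```python
import os

def read_without_polluting_cache(path, chunk_size=4 * 1024 * 1024):
    """Read a file sequentially, telling the kernel to drop each
    chunk from the page cache once it has been consumed (Linux only)."""
    fd = os.open(path, os.O_RDONLY)
    try:
        offset = 0
        while True:
            data = os.read(fd, chunk_size)
            if not data:
                break
            # ... hash / chunk / upload `data` here ...

            # Advise the kernel that these pages will not be needed
            # again, so it can reclaim them immediately instead of
            # evicting MySQL's working set later.
            os.posix_fadvise(fd, offset, len(data), os.POSIX_FADV_DONTNEED)
            offset += len(data)
        return offset  # total bytes read
    finally:
        os.close(fd)
```

The other option mentioned above, opening with O_DIRECT, bypasses the cache entirely but imposes alignment requirements on buffers and offsets, so the fadvise approach is usually the less invasive change.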

Tools I ended up using to diagnose this particular problem, in case they help anyone:
- GitHub - brendangregg/perf-tools: performance analysis tools based on Linux perf_events (aka perf) and ftrace; used to measure cache usage.
- GitHub - tobert/pcstat: Page Cache stat, gets page cache stats for files on Linux; used to find out what is actually in the cache.