Memory consumption grows steadily during most operations

Please describe what you are doing to trigger the bug:
Run a long operation, such as a check on the entire datastore with the --files flag.

Please describe what you expect to happen (but doesn’t):
I expect memory consumption to stay level throughout the operation. There should be no difference in the resources needed to validate the last file versus the first.

Please describe what actually happens (the wrong behaviour):

Memory consumption grows steadily. On my few-TB datastore, a check slowly consumed up to 15 GB of RAM.

This looks like either a memory leak, or the garbage collector/reference counter simply never gets a chance to clean up temporary objects that are generated at an excessive rate. This is a well-known phenomenon in automatic-reference-counting systems, and it is usually solved by adding local pools that source temporary objects, facilitating better reuse. (I'm not familiar with Go terminology, but on macOS with AppKit and ARC I would wrap the inner loop that generates those temporary objects in a local autorelease pool scope; see "Use Local Autorelease Pool Blocks to Reduce Peak Memory Footprint". I'm sure something similar can be accomplished in Go; see the sketch below.)
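
For illustration, here is a minimal Go sketch of that pooling idea using the standard library's sync.Pool, which is the closest analogue to a local autorelease pool for short-lived allocations. Everything here (validateFile, checkAll, the choice of bytes.Buffer as the scratch object) is a hypothetical stand-in, not the project's actual code:

```go
package main

import (
	"bytes"
	"sync"
)

// bufPool reuses scratch buffers across loop iterations so each file
// validation does not allocate a fresh buffer that lingers until the
// next GC cycle.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// validateFile is a hypothetical stand-in for whatever per-file work
// the check operation performs with a scratch buffer.
func validateFile(path string, scratch *bytes.Buffer) {
	_ = path // read and hash the file into scratch here
}

func checkAll(paths []string) {
	for _, p := range paths {
		buf := bufPool.Get().(*bytes.Buffer)
		buf.Reset() // clear any leftover contents from the previous use
		validateFile(p, buf)
		bufPool.Put(buf) // return the buffer for reuse instead of discarding it
	}
}

func main() {
	checkAll([]string{"a", "b", "c"})
}
```

Note that pooling like this only helps if the growth is allocation churn; if references are actually being retained somewhere (a true leak), the retention itself has to be fixed.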

This commit should fix the issue:
