Check whether we have enough docs covering metadata stream deduplication and (not) archiving atime.

If the atime of many filesystem objects changes between backups, deduplication suffers. However, we have an option to not store atime in an archive (if it is not needed).

This can also strongly affect the size of a caching repo when borg resyncs the chunks cache (reading all metadata streams of all archives via the caching repo): since all metadata stream chunks get written into that caching repo, there will be far more of them when deduplication is poor — which is what happens when the atimes of many filesystem objects keep changing between backups and thus produce different metadata stream chunks in each archive.
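To illustrate why changing atimes hurt metadata stream deduplication, here is a small sketch. It is not borg's actual implementation: the record format is made-up JSON, and it uses fixed-size chunking instead of borg's content-defined chunking, but the effect is the same — if every record embeds a fresh atime, almost no metadata chunk from one run matches a chunk from the next run.

```python
import hashlib
import json

def metadata_stream(files, store_atime=True):
    """Serialize per-file metadata records into one byte stream,
    loosely mimicking an archive's metadata stream."""
    records = []
    for path, mtime, atime in files:
        rec = {"path": path, "mtime": mtime}
        if store_atime:
            rec["atime"] = atime
        records.append(json.dumps(rec, sort_keys=True))
    return "\n".join(records).encode()

def chunk_ids(stream, chunk_size=64):
    """Split the stream into fixed-size chunks and hash each one.
    (borg uses content-defined chunking; fixed-size chunks are
    enough to show the dedup effect here.)"""
    return {
        hashlib.sha256(stream[i:i + chunk_size]).hexdigest()
        for i in range(0, len(stream), chunk_size)
    }

# Two backup runs: file contents unchanged, but every atime differs.
run1 = [("/data/file%d" % i, 1000, 2000 + i) for i in range(100)]
run2 = [("/data/file%d" % i, 1000, 3000 + i) for i in range(100)]

# Chunks shared between the two runs, with and without atime stored:
shared_with_atime = chunk_ids(metadata_stream(run1)) & chunk_ids(metadata_stream(run2))
shared_without_atime = chunk_ids(metadata_stream(run1, False)) & chunk_ids(metadata_stream(run2, False))

# Without atime the streams are byte-identical, so every chunk dedups;
# with atime, almost nothing does.
print(len(shared_without_atime), len(shared_with_atime))
```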
"With this option enabled, atime data is written to the disk only if the file has been modified since the atime data was last updated (mtime), or if the file was last accessed more than a certain amount of time ago (by default, one day)."
So if you back up once per day or more often, relatime does not help with this: atimes may still have changed since the previous backup.
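For reference, the option mentioned in the issue is a `borg create` flag. The exact flag depends on the borg version (in the 1.1.x series atime is stored by default and `--noatime` disables that; later versions flipped the default and added `--atime` to opt in), so check `borg create --help` for your installation:

```shell
# borg 1.1.x: explicitly skip storing atime in the archive,
# so unchanged files produce identical metadata records each run.
borg create --noatime ::archive-{now} /data
```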