[Borgbackup] Question about borg remote/local setup
Thomas Waldmann
tw at waldmann-edv.de
Tue Aug 22 13:37:36 EDT 2017
> I've got Borg working with all of the files (data, indexes, and
> everything) on an NFS share (actually, it's Amazon EFS). It's actually
> backing up image data from one NFS share (my WP Uploads files) to the
> borg repo on a different NFS share.
>
> It works fine, but it takes about 45 minutes to run through less than
> 3GB of image data, even on subsequent runs.
That's likely because borg reads/chunks everything again, thinking every
file has changed. This happens when one of mtime, inode number, or size
is not stable (on network filesystems, it is usually the inode number
that is not stable).
If mtime and size alone are trustworthy enough for the "has the file
changed" detection, you can tell borg to ignore the inode number (see
the --ignore-inode option of borg create).
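For borg 1.0.x/1.1.x that would look roughly like this (repo path,
archive name, and source directory below are placeholders for your
setup):

```shell
# Skip the unstable inode number in the "has the file changed" check;
# only mtime and size are then compared against the files cache.
borg create --ignore-inode /mnt/efs/borg-repo::uploads-{now} /mnt/nfs/wp-uploads
```

With that, unchanged files on NFS should be detected as unchanged and
not be re-read and re-chunked on every run.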
> I'm guessing the process
> would be much faster if I stored the Borg metadata on the local server
> and only the data blocks on NFS. Then, after the main borg backup, I
> could do a cron script that would zip up the remaining borg metadata
> files, then borg could back up that zip file as well.
Not sure what you mean by that, but in general, local storage is usually
faster than network storage.
Also have a look at the cache/index sizes, and consider increasing the
borg 1.0.x --checkpoint-interval from 300 (5 min) to 1800 (30 min). That
gives you fewer checkpoints and less overhead - especially if the
indexes/caches are large and reading/writing them is slow.
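For example (hypothetical repo path and sources again; the interval is
given in seconds):

```shell
# Write checkpoints every 30 minutes instead of the default 5, so the
# (possibly large) index/cache files are flushed to storage less often.
borg create --checkpoint-interval 1800 /mnt/efs/borg-repo::uploads-{now} /mnt/nfs/wp-uploads
```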
> If you agree, which files would be best kept on the local disk and which
> should be on NFS. Judging by the repo contents, I'm guessing the data
> folder on NFS and the rest on the local disk. Can you confirm?
Maybe try --checkpoint-interval first; then you may not need to split
the repository like that.
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393