From kramski at hoernle-marbach.de Mon Jan 6 06:11:03 2020 From: kramski at hoernle-marbach.de (Heinz Werner Kramski-Grote) Date: Mon, 06 Jan 2020 12:11:03 +0100 Subject: [Borgbackup] How to keep backup times short after a complete rebuild of the source filesystem? Message-ID: <526108508.DOFDKqvK0W@sandl> I have a data pool of approx. 3.6 TB which I backup daily to a remote system via ssh. The runtimes are ok for the daily differences, but because of its size, I did the inital backup locally on my LAN. Due to a failed disk, I had to copy all data from a degraded RAID to a new array of new disks (thereby moving from BTRFS to mdadm/lvm/EXT4, but that's another story). As a result, all ctimes now have changed to the the date of the copy event, like in this example: $ stat smm01.txt File: smm01.txt Size: 715 Blocks: 8 IO Block: 4096 regular file Device: fd00h/64768d Inode: 115409634 Links: 1 Access: (0744/-rwxr--r--) Uid: ( 1000/ kramski) Gid: ( 1000/ kramski) Access: 2020-01-04 21:58:36.555772941 +0100 Modify: 1999-10-31 18:50:24.000000000 +0100 Change: 2020-01-04 21:58:36.555772941 +0100 Birth: 2020-01-04 21:58:36.555772941 +0100 According to https://borgbackup.readthedocs.io/en/stable/usage/create.html, it's the ctime (Change) which used for identifying unmodified files. Should I move to "--files-cache=mtime,size,inode" (Modify) to avoid long initial backup times when I resume my daily backups over ssh? Regards, Heinz From tw at waldmann-edv.de Mon Jan 6 19:18:06 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Tue, 7 Jan 2020 01:18:06 +0100 Subject: [Borgbackup] How to keep backup times short after a complete rebuild of the source filesystem? In-Reply-To: <526108508.DOFDKqvK0W@sandl> References: <526108508.DOFDKqvK0W@sandl> Message-ID: > Should I move to "--files-cache=mtime,size,inode" (Modify) to avoid long initial backup times when I resume my daily backups over ssh? The problem is that ctime and inode number of all files have changed due to the copying to a new filesystem. So, what's left is only "size", which [if it is the only criteria used] is rather weak. If you are absolutely sure that all the files are identical as before (so even a weak "--files-cache=size" would be no problem), you could use that for the first backup. Note: it is also important that the absolute paths of the files do not change. https://github.com/borgbackup/borg/blob/1.1.9/src/borg/cache.py#L970 The code there deals with a change of the inode number in case of "cache hits" (read the comment above that line). It does not deal with the ctime change though, but we could think about whether it makes sense to add some "C" and "M" modes that ignore the ctime/mtime for change detection, but update the cmtime value in the cache to either the current ctime ("C") or the current mtime ("M") of the file. With that, the 2nd backup from the new filesystem could go back to the usual --files-cache=size,ctime,inode without triggering a full backup. Sadly, without that change and in a situation like yours, you could not enable ctime/mtime for change detection without triggering a full backup, but only --files-cache=inode,size (which also is a bit weak). -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From kramski at hoernle-marbach.de Wed Jan 8 13:37:44 2020 From: kramski at hoernle-marbach.de (Heinz Werner Kramski-Grote) Date: Wed, 08 Jan 2020 19:37:44 +0100 Subject: [Borgbackup] How to keep backup times short after a complete rebuild of the source filesystem? 
In-Reply-To: References: <526108508.DOFDKqvK0W@sandl> Message-ID: <4614161.31r3eYUQgx@sandl> On Dienstag, 7. Januar 2020 01:18:06 CET Thomas Waldmann wrote: > > Should I move to "--files-cache=mtime,size,inode" (Modify) to avoid long initial backup times when I resume my daily backups over ssh? > > The problem is that ctime and inode number of all files have changed due > to the copying to a new filesystem. [...] > Sadly, without that change and in a situation like yours, you could not > enable ctime/mtime for change detection without triggering a full > backup, but only --files-cache=inode,size (which also is a bit weak). I simply ran a full backup with the default options, which took approx. 12 h (mostly for local processing of every file on the client - very little data was actually transferred). I'm back on track now and re-enabled my cron jobs. Thanks, Heinz From thomas at portmann.org Thu Jan 9 11:42:52 2020 From: thomas at portmann.org (Thomas Portmann) Date: Thu, 09 Jan 2020 17:42:52 +0100 Subject: [Borgbackup] Isn't locking broken, because stale lock removal doesn't comply with the locking protocol? Message-ID: <3388BDF2-36D2-475F-A95C-9123AFF071C7@portmann.org> Hi, I'm currently writing a shell script for locking a borg repo with a (selectable) shared or exclusive lock while not touching the repo. Borg's with-lock command locks only exclusively and changes the repo after the target command has finished and before the exclusive lock is removed (for reasons I don't know yet). This is not what I need. So, in order to follow the protocol and to not damage my repos, I wanted to learn how exactly locking of a borg repo is accomplished. Based on what I learned and observed, I come to the preliminary assumption that borg locking is broken since introduction of the stale lock removal. The reason is as simple as already stated in the subject: The procedure of killing a stale exclusive lock violates the locking protocol as described in https://borgbackup.readthedocs.io/en/stable/internals/data-structures.html#lock-files "If the process can create the lock.exclusive directory for a resource, it has the lock for it. If creation fails (because the directory has already been created by some other process), lock acquisition fails." and https://borgbackup.readthedocs.io/en/stable/usage/general.html#file-systems "mkdir(2) should be atomic, since it is used for locking." I used inotify and strace to observe how stale exclusive locks are removed by borg. The behaviour of version 1.1.7 and 1.1.10 seems to be the same and is as follows: 1. Try to create directory lock.exclusive. 2. If that fails, because it's already present, 2.a (Look at the process's own lock indicator file...for whatever reason...in this situation, it doesn't exist.) 2.b Remove any stale lock indicator. 2.c Remove directory lock.exclusive. # At this point, the actual lock acquisition happens: 3. Again, try to create directory lock.exclusive. # If it was successful now... 4. Create the process's own lock indicator file. ... (work) 5. Remove it. 6. Remove directory lock.exclusive. The violation is--looking exactly at the owning criterion (having created the directory successfully), that the process not owning the lock may remove it, while the owning process cannot safely detect this removal. Let's assume that two borg processes A and B run on the same repo in parallel, for example, like this: A.1/A.3 => lock.exclusive was created and is still empty, B.1, B.2..B.3 => lock.exclusive has been removed and created again... 
A.4, B.4 => BANG!! At least at this point, both A and B are thinking they own the lock. My questions: 1. Was any measure implemented to safely prevent this situation? 2. If so, which one? Is there a secret protocol extension? If not, what about making the locking protocol safe? For example like this: AFAIK, on most reasonable OSes / local filesystems, not only mkdir(2) is atomic, but also rename(2). So instead of successful creation of the lock.exclusive directory being the criterion, one could define successful renaming of a randomly named temporary directory already prepared with the host/process identifier to lock.exclusive being the criterion. This way, there is no time gap between lock.exclusive coming to existence and creation of the identifier, where any other process could intervene. In a POSIX shell on a local repo, the following code would do the essence of this job: tempdir=$(mktemp -d -p "$BORG_REPO") touch "$tempfile/$BORG_HOST_ID.$$-0" if mv -T "$tempfile" "$BORG_REPO/lock.exclusive" then # I have the lock, so maintain lock.roster, remove lock.exclusive in case of shared locking, and do my work. else # remove stale exclusive lock, if any, and try again or tidy up the temp dir. fi ... This is, because mv -T calls rename, which succeeds if the source is a directory and the destination is an empty directory or doesn't exist. It would also be compatible with the current protocol. It would be safe when running concurrently with another process with this protocol, and would have the same current problem when running concurrently with a process which follows the current protocol. Cheers Thomas -- Diese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail gesendet. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at portmann.org Thu Jan 9 12:17:28 2020 From: thomas at portmann.org (Thomas Portmann) Date: Thu, 09 Jan 2020 18:17:28 +0100 Subject: [Borgbackup] Isn't locking broken, because stale lock removal doesn't comply with the locking protocol? In-Reply-To: <3388BDF2-36D2-475F-A95C-9123AFF071C7@portmann.org> References: <3388BDF2-36D2-475F-A95C-9123AFF071C7@portmann.org> Message-ID: <96B8FD4A-25DF-4740-BECC-5BF9E1D930F6@portmann.org> It must read "$tempdir" instead of "$tempfile". Am 9. Januar 2020 17:42:52 MEZ, schrieb Thomas Portmann : >Hi, > >I'm currently writing a shell script for locking a borg repo with a >(selectable) shared or exclusive lock while not touching the repo. >Borg's with-lock command locks only exclusively and changes the repo >after the target command has finished and before the exclusive lock is >removed (for reasons I don't know yet). This is not what I need. > >So, in order to follow the protocol and to not damage my repos, I >wanted to learn how exactly locking of a borg repo is accomplished. > >Based on what I learned and observed, I come to the preliminary >assumption that borg locking is broken since introduction of the stale >lock removal. > >The reason is as simple as already stated in the subject: The procedure >of killing a stale exclusive lock violates the locking protocol as >described in > >https://borgbackup.readthedocs.io/en/stable/internals/data-structures.html#lock-files > >"If the process can create the lock.exclusive directory for a resource, >it has the lock for it. If creation fails (because the directory has >already been created by some other process), lock acquisition fails." 
> >and > >https://borgbackup.readthedocs.io/en/stable/usage/general.html#file-systems > >"mkdir(2) should be atomic, since it is used for locking." > >I used inotify and strace to observe how stale exclusive locks are >removed by borg. The behaviour of version 1.1.7 and 1.1.10 seems to be >the same and is as follows: > >1. Try to create directory lock.exclusive. >2. If that fails, because it's already present, >2.a (Look at the process's own lock indicator file...for whatever >reason...in this situation, it doesn't exist.) >2.b Remove any stale lock indicator. >2.c Remove directory lock.exclusive. ># At this point, the actual lock acquisition happens: >3. Again, try to create directory lock.exclusive. ># If it was successful now... >4. Create the process's own lock indicator file. >... (work) >5. Remove it. >6. Remove directory lock.exclusive. > >The violation is--looking exactly at the owning criterion (having >created the directory successfully), that the process not owning the >lock may remove it, while the owning process cannot safely detect this >removal. > >Let's assume that two borg processes A and B run on the same repo in >parallel, for example, like this: > >A.1/A.3 => lock.exclusive was created and is still empty, >B.1, B.2..B.3 => lock.exclusive has been removed and created again... >A.4, B.4 => BANG!! At least at this point, both A and B are thinking >they own the lock. > >My questions: >1. Was any measure implemented to safely prevent this situation? >2. If so, which one? Is there a secret protocol extension? > >If not, what about making the locking protocol safe? For example like >this: > >AFAIK, on most reasonable OSes / local filesystems, not only mkdir(2) >is atomic, but also rename(2). So instead of successful creation of the >lock.exclusive directory being the criterion, one could define >successful renaming of a randomly named temporary directory already >prepared with the host/process identifier to lock.exclusive being the >criterion. This way, there is no time gap between lock.exclusive coming >to existence and creation of the identifier, where any other process >could intervene. In a POSIX shell on a local repo, the following code >would do the essence of this job: > >tempdir=$(mktemp -d -p "$BORG_REPO") >touch "$tempfile/$BORG_HOST_ID.$$-0" >if mv -T "$tempfile" "$BORG_REPO/lock.exclusive" >then ># I have the lock, so maintain lock.roster, remove lock.exclusive in >case of shared locking, and do my work. >else ># remove stale exclusive lock, if any, and try again or tidy up the >temp dir. >fi >... > >This is, because mv -T calls rename, which succeeds if the source is a >directory and the destination is an empty directory or doesn't exist. >It would also be compatible with the current protocol. It would be safe >when running concurrently with another process with this protocol, and >would have the same current problem when running concurrently with a >process which follows the current protocol. > >Cheers >Thomas > > > > > >-- >Diese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail >gesendet. > >------------------------------------------------------------------------ > >_______________________________________________ >Borgbackup mailing list >Borgbackup at python.org >https://mail.python.org/mailman/listinfo/borgbackup -- Diese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail gesendet. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tw at waldmann-edv.de Thu Jan 9 14:22:30 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 9 Jan 2020 20:22:30 +0100 Subject: [Borgbackup] Isn't locking broken, because stale lock removal doesn't comply with the locking protocol? In-Reply-To: <3388BDF2-36D2-475F-A95C-9123AFF071C7@portmann.org> References: <3388BDF2-36D2-475F-A95C-9123AFF071C7@portmann.org> Message-ID: <1b217954-5883-b247-d6cf-520cd0f2af49@waldmann-edv.de> Hi Thomas, > I'm currently writing a shell script for locking a borg repo with a > (selectable) shared or exclusive lock while not touching the repo. borg does not support shared locking. > Borg's with-lock command locks only exclusively Yeah, for a reason. Which was deadlocks, iirc. > and changes the repo after the target command has finished Yeah, for another reason (see source). > So, in order to follow the protocol borg's locking is not intended to be a public protocol, it is for borg's use only. > Based on what I learned and observed, I come to the preliminary > assumption that borg locking is broken since introduction of the stale > lock removal. Yeah, guess you are right. There is a small time frame between lock directory and info file creation and the code in kill_stale_lock did not consider that state. > 1. Was any measure implemented to safely prevent this situation? Guess not. Guess most people did not find that yet because the short timeframe. > If not, what about making the locking protocol safe? For example like this: > > AFAIK, on most reasonable OSes / local filesystems, not only mkdir(2) is > atomic, but also rename(2). Well, it should not get worse atomic or less compatible than it is now. So, if you like, do a comparison between mkdir and rename and also consider network fs and less reasonable OSes like windows. I don't remember precisely why I chose mkdir over rename, but the locking of attic (borg's predecessor, iirc it was posix locking) was rather problematic and not very compatible with misc. OSes and FSes - and I don't want to get into a similar situation again. > So instead of successful creation of the > lock.exclusive directory being the criterion, one could define > successful renaming of a randomly named temporary directory already > prepared with the host/process identifier to lock.exclusive being the > criterion. This way, there is no time gap between lock.exclusive coming > to existence and creation of the identifier, where any other process > could intervene. Good idea. Can you research rename (of a directory) vs. mkdir compat. properties? BTW, an issue on github about this would be better than a ML post. Cheers, Thomas -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From clemence.magnien at lip6.fr Tue Jan 14 12:30:55 2020 From: clemence.magnien at lip6.fr (Clemence Magnien) Date: Tue, 14 Jan 2020 18:30:55 +0100 Subject: [Borgbackup] Option to keep yearly backup counting from first backup? Message-ID: <20200114173055.4a6ljeshe5tgkylk@phoenix.rsr.lip6.fr> Hi, I hope this is the right place to post this question. I've been using borg for a while for backing up my home directory and am very satisfied with it. I'd like to keep: - 4 daily backups - 4 weekly backups - 12 monthly backups - 1 backup per year for all years since I've started using it. As I started using borg in may 2017, the keep-yearly option of prune wants to keep the latest backup in 2017 and remove the ones before that. 
I however would like to preserve as much of my history as possible, so I would like to keep: - one backup for may 2017 - one backup for may 2018 - 12 monthly backups for 2019 - (and 4 weekly and 4 daily recent bakcups) Is there a possibility to count the backups you want to keep from the time I started using borg rather than counting one at the end of each calendar year? I know that I can use the --keep-within option but it seems a bit convoluted. Best, Cl?mence From jdc at uwo.ca Tue Jan 14 13:28:31 2020 From: jdc at uwo.ca (Dan Christensen) Date: Tue, 14 Jan 2020 18:28:31 +0000 Subject: [Borgbackup] Option to keep yearly backup counting from first backup? In-Reply-To: <20200114173055.4a6ljeshe5tgkylk@phoenix.rsr.lip6.fr> (Clemence Magnien's message of "Tue, 14 Jan 2020 18:30:55 +0100") References: <20200114173055.4a6ljeshe5tgkylk@phoenix.rsr.lip6.fr> Message-ID: <875zhduc35.fsf@uwo.ca> If you use "borg rename" to rename the oldest backup to something with a different prefix, and then limit to the main prefix when pruning, the oldest backup will always stick around. Then you'd get May 2017 Dec 2017 Dec 2018 Dec 2019 etc, which might be good enough for what you want. Or, you could manually rename one May backup each year to ensure that those ones are saved. You might also find that keeping monthly backups is fairly space efficient. When I've tried to save space by pruning old backups, the repo size usually changes much less than I expected. Dan On Jan 14, 2020, Clemence Magnien wrote: > Hi, > > I hope this is the right place to post this question. > > I've been using borg for a while for backing up my home directory > and am very satisfied with it. > > I'd like to keep: > - 4 daily backups > - 4 weekly backups > - 12 monthly backups > - 1 backup per year for all years since I've started using it. > > As I started using borg in may 2017, the keep-yearly option of prune wants to keep > the latest backup in 2017 and remove the ones before that. > > I however would like to preserve as much of my history as possible, so > I would like to keep: > - one backup for may 2017 > - one backup for may 2018 > - 12 monthly backups for 2019 > - (and 4 weekly and 4 daily recent bakcups) > > Is there a possibility to count the backups you want to keep from the time > I started using borg rather than counting one at the end of each calendar > year? > > I know that I can use the --keep-within option but it seems a bit convoluted. > > Best, > Cl?mence > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From HMayer6 at gmx.net Wed Jan 15 09:08:33 2020 From: HMayer6 at gmx.net (Hans Mayer) Date: Wed, 15 Jan 2020 15:08:33 +0100 Subject: [Borgbackup] Option to keep yearly backup counting from first backup? In-Reply-To: <875zhduc35.fsf@uwo.ca> References: <20200114173055.4a6ljeshe5tgkylk@phoenix.rsr.lip6.fr> <875zhduc35.fsf@uwo.ca> Message-ID: --keep-yearly=-1 https://borgbackup.readthedocs.io/en/stable/usage/prune.html On 1/14/20 7:28 PM, Dan Christensen wrote: > If you use "borg rename" to rename the oldest backup to something with > a different prefix, and then limit to the main prefix when pruning, > the oldest backup will always stick around. Then you'd get > > May 2017 > Dec 2017 > Dec 2018 > Dec 2019 > > etc, which might be good enough for what you want. > > Or, you could manually rename one May backup each year to ensure that > those ones are saved. 
> > You might also find that keeping monthly backups is fairly space > efficient. When I've tried to save space by pruning old backups, > the repo size usually changes much less than I expected. > > Dan > > On Jan 14, 2020, Clemence Magnien wrote: > >> Hi, >> >> I hope this is the right place to post this question. >> >> I've been using borg for a while for backing up my home directory >> and am very satisfied with it. >> >> I'd like to keep: >> - 4 daily backups >> - 4 weekly backups >> - 12 monthly backups >> - 1 backup per year for all years since I've started using it. >> >> As I started using borg in may 2017, the keep-yearly option of prune wants to keep >> the latest backup in 2017 and remove the ones before that. >> >> I however would like to preserve as much of my history as possible, so >> I would like to keep: >> - one backup for may 2017 >> - one backup for may 2018 >> - 12 monthly backups for 2019 >> - (and 4 weekly and 4 daily recent bakcups) >> >> Is there a possibility to count the backups you want to keep from the time >> I started using borg rather than counting one at the end of each calendar >> year? >> >> I know that I can use the --keep-within option but it seems a bit convoluted. >> >> Best, >> Cl?mence >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From jdc at uwo.ca Wed Jan 15 10:08:34 2020 From: jdc at uwo.ca (Dan Christensen) Date: Wed, 15 Jan 2020 15:08:34 +0000 Subject: [Borgbackup] Option to keep yearly backup counting from first backup? In-Reply-To: (Hans Mayer's message of "Wed, 15 Jan 2020 15:08:33 +0100") References: <20200114173055.4a6ljeshe5tgkylk@phoenix.rsr.lip6.fr> <875zhduc35.fsf@uwo.ca> Message-ID: <87o8v4sqof.fsf@uwo.ca> As the OP said, that would cause the May 2017 backup to get deleted. I suggested a workaround for that issue. Dan On Jan 15, 2020, Hans Mayer wrote: > --keep-yearly=-1 > https://borgbackup.readthedocs.io/en/stable/usage/prune.html > > On 1/14/20 7:28 PM, Dan Christensen wrote: >> If you use "borg rename" to rename the oldest backup to something with >> a different prefix, and then limit to the main prefix when pruning, >> the oldest backup will always stick around. Then you'd get >> >> May 2017 >> Dec 2017 >> Dec 2018 >> Dec 2019 >> >> etc, which might be good enough for what you want. >> >> Or, you could manually rename one May backup each year to ensure that >> those ones are saved. >> >> You might also find that keeping monthly backups is fairly space >> efficient. When I've tried to save space by pruning old backups, >> the repo size usually changes much less than I expected. >> >> Dan >> >> On Jan 14, 2020, Clemence Magnien wrote: >> >>> Hi, >>> >>> I hope this is the right place to post this question. >>> >>> I've been using borg for a while for backing up my home directory >>> and am very satisfied with it. >>> >>> I'd like to keep: >>> - 4 daily backups >>> - 4 weekly backups >>> - 12 monthly backups >>> - 1 backup per year for all years since I've started using it. >>> >>> As I started using borg in may 2017, the keep-yearly option of prune wants to keep >>> the latest backup in 2017 and remove the ones before that. 
>>> >>> I however would like to preserve as much of my history as possible, so >>> I would like to keep: >>> - one backup for may 2017 >>> - one backup for may 2018 >>> - 12 monthly backups for 2019 >>> - (and 4 weekly and 4 daily recent bakcups) >>> >>> Is there a possibility to count the backups you want to keep from the time >>> I started using borg rather than counting one at the end of each calendar >>> year? >>> >>> I know that I can use the --keep-within option but it seems a bit convoluted. >>> >>> Best, >>> Cl?mence From dave at gasaway.org Thu Jan 16 16:15:33 2020 From: dave at gasaway.org (David Gasaway) Date: Thu, 16 Jan 2020 13:15:33 -0800 Subject: [Borgbackup] Option to keep yearly backup counting from first backup? In-Reply-To: <87o8v4sqof.fsf@uwo.ca> References: <20200114173055.4a6ljeshe5tgkylk@phoenix.rsr.lip6.fr> <875zhduc35.fsf@uwo.ca> <87o8v4sqof.fsf@uwo.ca> Message-ID: On Wed, Jan 15, 2020 at 11:54 AM Dan Christensen wrote: > As the OP said, that would cause the May 2017 backup to get deleted. > Why would it? "Specifying a negative number of archives to keep means that there is no limit." There would be no limit to the number of yearly backups. Is there a bug? -- -:-:- David K. Gasaway -:-:- Email: dave at gasaway.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at gasaway.org Thu Jan 16 16:59:43 2020 From: dave at gasaway.org (David Gasaway) Date: Thu, 16 Jan 2020 13:59:43 -0800 Subject: [Borgbackup] Option to keep yearly backup counting from first backup? In-Reply-To: References: <20200114173055.4a6ljeshe5tgkylk@phoenix.rsr.lip6.fr> <875zhduc35.fsf@uwo.ca> <87o8v4sqof.fsf@uwo.ca> Message-ID: After rereading the OP and the docs, I realize was wrong. Sorry about that. Dan's trick seems quite reasonable, though the --glob-archives options might work better as it doesn't restrict you to strictly prefixes. Keeping that singular first backup out-of-repo is another option I'd consider. Thinking a little bigger, something like a '--keep-archive ' option (applied last) would be cool. -- -:-:- David K. Gasaway -:-:- Email: dave at gasaway.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at portmann.org Thu Jan 16 18:55:14 2020 From: thomas at portmann.org (Thomas Portmann) Date: Fri, 17 Jan 2020 00:55:14 +0100 Subject: [Borgbackup] Isn't locking broken, because stale lock removal doesn't comply with the locking protocol? In-Reply-To: <1b217954-5883-b247-d6cf-520cd0f2af49@waldmann-edv.de> References: <3388BDF2-36D2-475F-A95C-9123AFF071C7@portmann.org> <1b217954-5883-b247-d6cf-520cd0f2af49@waldmann-edv.de> Message-ID: <52a71176-9d20-de20-c678-f4597bf709fc@portmann.org> On 09.01.20 20:22, Thomas Waldmann wrote: > BTW, an issue on github about this would be better than a ML post. I filed a bug, https://github.com/borgbackup/borg/issues/4923 Cheers Thomas From clemence.magnien at lip6.fr Fri Jan 17 05:12:27 2020 From: clemence.magnien at lip6.fr (Clemence Magnien) Date: Fri, 17 Jan 2020 11:12:27 +0100 Subject: [Borgbackup] Option to keep yearly backup counting from first backup? In-Reply-To: <20200114173055.4a6ljeshe5tgkylk@phoenix.rsr.lip6.fr> References: <20200114173055.4a6ljeshe5tgkylk@phoenix.rsr.lip6.fr> Message-ID: <20200117101227.zbl3o632znwn55oj@phoenix.rsr.lip6.fr> Hello, replying to myself since for some reason I didn't receive Dan's messages but was able to find them on the list archive. 
Thanks Dan for your answer, the renaming trick worked like a charm and I now have modified my backup scripts accordingly. :) Cl?mence On Tue, Jan 14, 2020 at 06:30:55PM +0100, Clemence Magnien wrote: > Hi, > > I hope this is the right place to post this question. > > I've been using borg for a while for backing up my home directory > and am very satisfied with it. > > I'd like to keep: > - 4 daily backups > - 4 weekly backups > - 12 monthly backups > - 1 backup per year for all years since I've started using it. > > As I started using borg in may 2017, the keep-yearly option of prune wants to keep > the latest backup in 2017 and remove the ones before that. > > I however would like to preserve as much of my history as possible, so > I would like to keep: > - one backup for may 2017 > - one backup for may 2018 > - 12 monthly backups for 2019 > - (and 4 weekly and 4 daily recent bakcups) > > Is there a possibility to count the backups you want to keep from the time > I started using borg rather than counting one at the end of each calendar > year? > > I know that I can use the --keep-within option but it seems a bit convoluted. > > Best, > Cl?mence > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From dac at conceptual-analytics.com Thu Jan 23 15:53:31 2020 From: dac at conceptual-analytics.com (Dave Cottingham) Date: Thu, 23 Jan 2020 15:53:31 -0500 Subject: [Borgbackup] Check if backup is running? Message-ID: In our setup I'm running unattended backups with borg via crontab, and I'd like my job to check if yesterday's backup is still running, so it can gracefully handle this condition. Anybody have a recommendation how I can do this? I see in the manual a "with-lock" that seems promising. What does "with-lock" do if it can't get the lock? Wait? Exit? If it exits, does it provide any indication that it couldn't get the lock? Thanks, Dave Cottingham -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at portmann.org Fri Jan 24 07:04:00 2020 From: thomas at portmann.org (Thomas Portmann) Date: Fri, 24 Jan 2020 13:04:00 +0100 Subject: [Borgbackup] Check if backup is running? In-Reply-To: References: Message-ID: <1bd713c3-4127-52c5-2ef1-3a86c1f7d621@portmann.org> Hi Dave, I also implemented a regular unattended borgbackup for a Linux server of a little company and it's been stable for 30 months. The main backup applies to the entire system and is done weekly. The amount of data is roughly about 100 GB. The changes over one week can vary up to 1 GB. The repository is located on a local file system as well as on a remote host. The bandwidth is ~5MBit/s. I never experienced that a remote archive creation took more than a day--except for the very first archive of the same file system in the same repository. In most cases, the backup took 5 minutes. Basically, borgbackup works gracefully already: Access to the repository is already locked by a shared lock during reading commands and by an exclusive lock otherwise. On lock acquisition, a timeout is applied by default (1 second), for the case that another borg command (e.g. the borg create still running since yesterday...) is already accessing the same repository. You can configure the timeout by the common option --lock-wait SECONDS. 
Therefore, wrapping any borg command with borg with-lock is not only unnecessary, if you deal with one and the same repository in your command--it will even fail, since the wrapped borg command cannot acquire the lock which is already owned by the wrapping borg with-lock command. (The with-lock command is used for wrapping non-borg commands, if you want to lock the repository while doing some stuff with it like copying the repository, for example.) What I did in my scripts: I observed the return code of the borg process in my crontab job. I deem it a good idea anyway, because there's a plenty of reasons why your job can fail.--Now, if an old borg process is still running, the new borg process on the same repository will fail anyway due to the locking, so you will get a non-zero exit code anyway, along with a meaningful error message on stderr. Cheers Thomas Am 23.01.20 um 21:53 schrieb Dave Cottingham: > In our setup I'm running unattended backups with borg via crontab, and > I'd like my job to check if yesterday's backup is still running, so it > can gracefully handle this condition. Anybody have a recommendation how > I can do this? > > I see in the manual a "with-lock" that seems promising. What does > "with-lock" do if it can't get the lock? Wait? Exit? If it exits, does > it provide any indication that it couldn't get the lock? > > Thanks, > Dave Cottingham > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From sberg at mississippi.com Fri Jan 24 12:49:54 2020 From: sberg at mississippi.com (sberg at mississippi.com) Date: Fri, 24 Jan 2020 11:49:54 -0600 Subject: [Borgbackup] getting backup status Message-ID: <52C7AE3896DF4746BAB3087E244FF61D.MAI@mississippi.com> I've got a quite a few systems at work backing up over ssh to a file server. Is there a suggested way to get the status of all those backups? Borg info and borg list are working on individual repos and I've started to work up a perl script to take a look at all the backups that I currently have. But that script chokes if one of those backups is actively working. I'd like to figure out a way to get a list of all the backups in a particular file location and have the listing show last time a backup completed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at portmann.org Sat Jan 25 12:41:40 2020 From: thomas at portmann.org (Thomas Portmann) Date: Sat, 25 Jan 2020 18:41:40 +0100 Subject: [Borgbackup] getting backup status In-Reply-To: <52C7AE3896DF4746BAB3087E244FF61D.MAI@mississippi.com> References: <52C7AE3896DF4746BAB3087E244FF61D.MAI@mississippi.com> Message-ID: <6be86b7d-0e9b-883d-64f7-6fb2d433ea47@portmann.org> Am 24.01.20 um 18:49 schrieb sberg at mississippi.com: > ... But that script chokes if one of those backups is actively > working. Do you mean that a call of `borg info` or `borg list` returns a non-zero exit code and an error message like 'Failed to create/acquire the lock ... (timeout)' while a backup to the related repository is in progress? This is the default locking behaviour: You get this if a reading borg command such as `borg info` cannot create a shared lock on the repository within one second, because another borg process is currently holding an exclusive lock for performing a modifying operation. I would suggest to increase the timeout until the `borg info/list` gives up. There is a common option --lock-wait SECONDS, see the documentation. 
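To make the suggestion above concrete, here is a minimal status-check sketch in POSIX shell. It relies only on documented behaviour (--lock-wait as a common option, --last/--format for borg list, and borg's exit codes 0/1/2); the repository paths and the 120-second timeout are assumptions for illustration, not recommended values:

#!/bin/sh
# Report the most recent archive in each repository, waiting up to
# 120 seconds if a still-running backup holds the lock.
for repo in /srv/borg/host1 /srv/borg/host2; do
    if last=$(borg list --lock-wait 120 --last 1 --format '{archive} {time}' "$repo"); then
        printf '%s: last archive %s\n' "$repo" "$last"
    else
        # rc 1 = warning, rc 2 = error (e.g. lock timeout)
        printf '%s: FAILED to read repository (rc=%s)\n' "$repo" "$?"
    fi
done

With a long --lock-wait the script simply blocks until the running backup releases the lock; with the default 1-second wait it fails fast with rc 2 instead.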
From paul at xk7.net Sun Feb 9 07:12:40 2020 From: paul at xk7.net (Paul Waring) Date: Sun, 9 Feb 2020 12:12:40 +0000 Subject: [Borgbackup] When should I run borg check? Message-ID: <20200209121240.7olvk3pcoq4j52jk@morbius> I have a wrapper script written in Bash which allows me to create a backup by running one command. The script does the following: 1. Check if a repository exists, if not create it. 2. Generate a unique archive script based on the date/time. 3. Run borg create with the relevant parameters. 4. Run borg check 5. Run borg prune 6. Run borg check (again) borg check takes about 10 minutes to run, so I was wondering if there was a recommended time to run it? Is there any benefit to running it twice, or should I just run it after borg prune? Thanks Paul -- Paul Waring Freelance PHP developer https://www.phpdeveloper.org.uk From tw at waldmann-edv.de Sun Feb 9 09:11:32 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 9 Feb 2020 15:11:32 +0100 Subject: [Borgbackup] When should I run borg check? In-Reply-To: <20200209121240.7olvk3pcoq4j52jk@morbius> References: <20200209121240.7olvk3pcoq4j52jk@morbius> Message-ID: <0f237cd2-3a55-542d-a192-1a30a4794b88@waldmann-edv.de> > 1. Check if a repository exists, if not create it. You maybe do not want to automate that. You can manually create repos as needed and then they are just there. If they happen to vanish unexpectedly, you do not just want to automatically recreate them, but rather get notified and look at what happened. If you monitor borg create rc / log, you will automatically notice if the repo is gone. > 2. Generate a unique archive script based on the date/time. borg can create archive names based on date / time via placeholders. > 4. Run borg check > 5. Run borg prune > 6. Run borg check (again) I would not run borg check for each borg create, maybe rather once a week or so. You could even run borg prune less frequently, if its runtime matters for you. But do run it regularly. -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From sb at plzk.de Wed Feb 19 04:56:43 2020 From: sb at plzk.de (Stefan Bauer) Date: Wed, 19 Feb 2020 09:56:43 +0000 Subject: [Borgbackup] No de-duplication on big VM raw-backups/images Message-ID: Hi, use case is sending proxmox backups to remote site for disaster-recovery. given is a single file in a directory: -rw-r--r-- 1 root root 5.5G Feb 19 10:36 vzdump-qemu-202-2020_02_19-10_02_38.vma It's uncompressed. First borg backup with --compression zlib,6 sends ~ 2,7GB to remote site. That is good. Next day, we have two files in the directory. Almost no changes between the files. -rw-r--r-- 1 root root 5.5G Feb 19 10:36 vzdump-qemu-202-2020_02_19-10_02_38.vma -rw-r--r-- 1 root root 5.5G Feb 20 10:36 vzdump-qemu-202-2020_02_20-10_02_38.vma Now, borg sends _again_ the complete ~ 2,7GB of the new file to remote site. Why? I would expect that _at least_, not all again, will be transfered. Am i doing something wrong? Thank you. Stefan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mateusz.kijowski at gmail.com Wed Feb 19 07:24:22 2020 From: mateusz.kijowski at gmail.com (Mateusz Kijowski) Date: Wed, 19 Feb 2020 13:24:22 +0100 Subject: [Borgbackup] No de-duplication on big VM raw-backups/images In-Reply-To: References: Message-ID: Hi, I have the same use-case, see this thread https://mail.python.org/pipermail/borgbackup/2017q4/000940.html TL;DR; version is: it's most likely the chunking parameters that prevent you from efficient dedupe. The chunker-params I currently use are: 10,22,16,4095 Regards, Mateusz ?r., 19 lut 2020 o 11:07 Stefan Bauer napisa?(a): > > Hi, > > > use case is sending proxmox backups to remote site for disaster-recovery. > > > given is a single file in a directory: > > > -rw-r--r-- 1 root root 5.5G Feb 19 10:36 vzdump-qemu-202-2020_02_19-10_02_38.vma > > > It's uncompressed. First borg backup with --compression zlib,6 sends ~ 2,7GB to remote site. That is good. > > > Next day, we have two files in the directory. Almost no changes between the files. > > > -rw-r--r-- 1 root root 5.5G Feb 19 10:36 vzdump-qemu-202-2020_02_19-10_02_38.vma > > -rw-r--r-- 1 root root 5.5G Feb 20 10:36 vzdump-qemu-202-2020_02_20-10_02_38.vma > > > Now, borg sends _again_ the complete ~ 2,7GB of the new file to remote site. Why? > > > I would expect that _at least_, not all again, will be transfered. > > > Am i doing something wrong? > > > Thank you. > > > Stefan > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From public at enkore.de Wed Feb 19 07:27:40 2020 From: public at enkore.de (Marian Beermann) Date: Wed, 19 Feb 2020 13:27:40 +0100 Subject: [Borgbackup] No de-duplication on big VM raw-backups/images In-Reply-To: References: Message-ID: With ~2 MiB chunks, changing one bit every few MiB is sufficient to require complete retransmission and redundant storage of the image. Reduce chunk size or don't send a disk image but rather core data. E.g. instead of an image of your database server, send a database dump. These still tend to suffer from a similar issue, so instead create a classic database backup and add differentials, clustering changes together, further reducing transmission needs. -Marian From sb at plzk.de Wed Feb 19 15:44:35 2020 From: sb at plzk.de (Stefan Bauer) Date: Wed, 19 Feb 2020 20:44:35 +0000 Subject: [Borgbackup] No de-duplication on big VM raw-backups/images Message-ID: Awesome, awesome, awesome! You folks really made my day! Thank you Mateusz and Marian. Now there is almost full-de-duplication. Only a few MBs have been transfered this time with --chunker-params 12,20,15,4095 --compression zlib,6. Thank you. Now i can go to bed and enjoy quick backups :) Stefan see this thread https://mail.python.org/pipermail/borgbackup/2017q4/000940.html TL;DR; version is: it's most likely the chunking parameters that prevent you from efficient dedupe. The chunker-params I currently use are: 10,22,16,4095 Regards, Mateusz -------------- next part -------------- An HTML attachment was scrubbed... URL: From tw at waldmann-edv.de Wed Feb 19 18:26:23 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Thu, 20 Feb 2020 00:26:23 +0100 Subject: [Borgbackup] No de-duplication on big VM raw-backups/images In-Reply-To: References: Message-ID: > Now there is almost full-de-duplication. Only a few MBs have been > transfered this time with --chunker-params 12,20,15,4095 --compression > zlib,6. 
Tipps: - don't go too low with the target chunk size or you will create a lot of chunks which need a lot of RAM to get managed. - lz4 and zstd are more modern algorithms than zlib, see docs -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From heiko.helmle at horiba.com Thu Feb 20 01:33:23 2020 From: heiko.helmle at horiba.com (Heiko Helmle) Date: Thu, 20 Feb 2020 06:33:23 +0000 Subject: [Borgbackup] No de-duplication on big VM raw-backups/images In-Reply-To: References: Message-ID: You're right. Borg's default chunker params and proxmox are not working together at all. I did some benchmarking when setting up our proxmox and we settled for 16,23,16,4095. Results were similar - gaining from almost no dedup to almost full dedup. The same values work pretty well for mongodump's output too! Would it make sense to gather experience values with the chunker params somewhere (wiki?), so people could look up some good starting params for their specific workload? The current default values seem to be optimized for vmdk and raw VM volume data by the looks of it. Best Regards Heiko -----Original Message----- From: Borgbackup On Behalf Of Mateusz Kijowski Sent: Mittwoch, 19. Februar 2020 13:24 To: Stefan Bauer Cc: borgbackup at python.org Subject: Re: [Borgbackup] No de-duplication on big VM raw-backups/images Hi, I have the same use-case, see this thread https://mail.python.org/pipermail/borgbackup/2017q4/000940.html TL;DR; version is: it's most likely the chunking parameters that prevent you from efficient dedupe. The chunker-params I currently use are: 10,22,16,4095 Regards, Mateusz ?r., 19 lut 2020 o 11:07 Stefan Bauer napisa?(a): > > Hi, > > > use case is sending proxmox backups to remote site for disaster-recovery. > > > given is a single file in a directory: > > > -rw-r--r-- 1 root root 5.5G Feb 19 10:36 > vzdump-qemu-202-2020_02_19-10_02_38.vma > > > It's uncompressed. First borg backup with --compression zlib,6 sends ~ 2,7GB to remote site. That is good. > > > Next day, we have two files in the directory. Almost no changes between the files. > > > -rw-r--r-- 1 root root 5.5G Feb 19 10:36 > vzdump-qemu-202-2020_02_19-10_02_38.vma > > -rw-r--r-- 1 root root 5.5G Feb 20 10:36 > vzdump-qemu-202-2020_02_20-10_02_38.vma > > > Now, borg sends _again_ the complete ~ 2,7GB of the new file to remote site. Why? > > > I would expect that _at least_, not all again, will be transfered. > > > Am i doing something wrong? > > > Thank you. > > > Stefan > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup _______________________________________________ Borgbackup mailing list Borgbackup at python.org https://mail.python.org/mailman/listinfo/borgbackup Als GmbH eingetragen im Handelsregister Bad Homburg v.d.H. HRB 9816, USt.ID-Nr. DE 114 165 789 Gesch?ftsf?hrer: Dr. Hiroshi Nakamura, Dr. Robert Plank, Markus Bode, Heiko Lampert, Takashi Nagano, Takeshi Fukushima. Junichi Tajika From devzero at web.de Thu Feb 20 07:31:54 2020 From: devzero at web.de (Roland @web.de) Date: Thu, 20 Feb 2020 13:31:54 +0100 Subject: [Borgbackup] No de-duplication on big VM raw-backups/images In-Reply-To: References: Message-ID: <191faae6-f033-773b-240e-c0fd0828b66b@web.de> Am 20.02.20 um 07:33 schrieb Heiko Helmle: > You're right. > > Borg's default chunker params and proxmox are not working together at all. 
> > I did some benchmarking when setting up our proxmox and we settled for 16,23,16,4095. > > Results were similar - gaining from almost no dedup to almost full dedup. > > The same values work pretty well for mongodump's output too! > > Would it make sense to gather experience values with the chunker params somewhere (wiki?), so people could look up some good starting params for their specific workload? yes, please, i'd also like some "real world data params advisor" also see https://mail.python.org/pipermail/borgbackup/2019q1/001280.html and https://pve.proxmox.com/pipermail/pve-user/2019-March/170454.html > > The current default values seem to be optimized for vmdk and raw VM volume data by the looks of it. are they? regards roland > > Best Regards > Heiko > > -----Original Message----- > From: Borgbackup On Behalf Of Mateusz Kijowski > Sent: Mittwoch, 19. Februar 2020 13:24 > To: Stefan Bauer > Cc: borgbackup at python.org > Subject: Re: [Borgbackup] No de-duplication on big VM raw-backups/images > > Hi, I have the same use-case, > > see this thread https://mail.python.org/pipermail/borgbackup/2017q4/000940.html > > TL;DR; version is: it's most likely the chunking parameters that prevent you from efficient dedupe. The chunker-params I currently use > are: 10,22,16,4095 > > Regards, > > Mateusz > > ?r., 19 lut 2020 o 11:07 Stefan Bauer napisa?(a): >> Hi, >> >> >> use case is sending proxmox backups to remote site for disaster-recovery. >> >> >> given is a single file in a directory: >> >> >> -rw-r--r-- 1 root root 5.5G Feb 19 10:36 >> vzdump-qemu-202-2020_02_19-10_02_38.vma >> >> >> It's uncompressed. First borg backup with --compression zlib,6 sends ~ 2,7GB to remote site. That is good. >> >> >> Next day, we have two files in the directory. Almost no changes between the files. >> >> >> -rw-r--r-- 1 root root 5.5G Feb 19 10:36 >> vzdump-qemu-202-2020_02_19-10_02_38.vma >> >> -rw-r--r-- 1 root root 5.5G Feb 20 10:36 >> vzdump-qemu-202-2020_02_20-10_02_38.vma >> >> >> Now, borg sends _again_ the complete ~ 2,7GB of the new file to remote site. Why? >> >> >> I would expect that _at least_, not all again, will be transfered. >> >> >> Am i doing something wrong? >> >> >> Thank you. >> >> >> Stefan >> >> _______________________________________________ >> Borgbackup mailing list >> Borgbackup at python.org >> https://mail.python.org/mailman/listinfo/borgbackup > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > Als GmbH eingetragen im Handelsregister Bad Homburg v.d.H. HRB 9816, USt.ID-Nr. DE 114 165 789 Gesch?ftsf?hrer: Dr. Hiroshi Nakamura, Dr. Robert Plank, Markus Bode, Heiko Lampert, Takashi Nagano, Takeshi Fukushima. Junichi Tajika > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup From tw at waldmann-edv.de Sat Mar 7 19:18:36 2020 From: tw at waldmann-edv.de (Thomas Waldmann) Date: Sun, 8 Mar 2020 01:18:36 +0100 Subject: [Borgbackup] BorgBackup 1.1.11 released! Message-ID: <6b1437cb-31a4-bb10-b29d-cd4dea0cf6c9@waldmann-edv.de> BorgBackup 1.1.11 released with important bug fixes! 
https://github.com/borgbackup/borg/releases/tag/1.1.11 -- GPG ID: 9F88FB52FAF7B393 GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393 From lazyvirus at gmx.com Mon Mar 9 18:43:33 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Mon, 9 Mar 2020 23:43:33 +0100 Subject: [Borgbackup] Odd error Message-ID: <20200309234333.3c4d09a9@msi.defcon1.lan> BB pip3 intall v1.1.11 Debian Stretch ===================================== Hi listers, on a BB server with a good disk, one of my old machine is making an odd error. As it's content isn't very important, I erased it's repo and recreated a brand new one : borg init -e keyfile-blake2 /BORG /BORG being a NFS directory mounted on the client (, just like all other clients. This machine has been memtest86+ tested (several full cycles) without any problem. However, even the first backup failed as follow : [?] A /usr/local/etc/ssl/cert.pem A /usr/local/etc/ssl/openssl.cnf A /usr/local/etc/ssl/x509v3.cnf A /usr/local/firefox-63.0.tar.bz2 Repository index missing or corrupted, trying to recover from: File failed integrity check: /BORG/index.19 Checking repository transaction due to previous error: File failed integrity check: /BORG/index.19 segment 20 not found, but listed in compaction data Local Exception Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/borg/repository.py", line 1347, in get_fd ts, fd = self.fds[segment] File "/usr/local/lib/python3.5/dist-packages/borg/lrucache.py", line 21, in __getitem__ value = self._cache[key] # raise KeyError if not found KeyError: 20 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 4529, in main exit_code = archiver.run(args) File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 4461, in run return set_ec(func(args)) File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 166, in wrapper return method(self, args, repository=repository, **kwargs) File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 574, in do_create create_inner(archive, cache) File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 537, in create_inner read_special=args.read_special, dry_run=dry_run, st=st) File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 651, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 651, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 651, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 651, in _process read_special=read_special, dry_run=dry_run) File "/usr/local/lib/python3.5/dist-packages/borg/archiver.py", line 625, in _process status = archive.process_file(path, st, cache) File "/usr/local/lib/python3.5/dist-packages/borg/archive.py", line 1071, in process_file self.chunk_file(item, cache, self.stats, backup_io_iter(self.chunker.chunkify(fd, fh))) File "/usr/local/lib/python3.5/dist-packages/borg/archive.py", line 1011, in chunk_file from_chunk, part_number = self.write_part_file(item, from_chunk, part_number) File "/usr/local/lib/python3.5/dist-packages/borg/archive.py", line 981, in write_part_file self.write_checkpoint() File "/usr/local/lib/python3.5/dist-packages/borg/archive.py", line 480, in write_checkpoint self.cache.chunk_decref(self.id, self.stats) File 
"/usr/local/lib/python3.5/dist-packages/borg/cache.py", line 926, in chunk_decref self.repository.delete(id, wait=wait) File "/usr/local/lib/python3.5/dist-packages/borg/repository.py", line 1155, in delete size = self.io.read(segment, offset, id, read_data=False) File "/usr/local/lib/python3.5/dist-packages/borg/repository.py", line 1467, in read fd = self.get_fd(segment) File "/usr/local/lib/python3.5/dist-packages/borg/repository.py", line 1349, in get_fd fd = open_fd() File "/usr/local/lib/python3.5/dist-packages/borg/repository.py", line 1330, in open_fd fd = open(self.segment_filename(segment), 'rb') FileNotFoundError: [Errno 2] No such file or directory: '/BORG/data/0/20' Platform: Linux apophis 4.9.0-12-rt-686-pae #1 SMP PREEMPT RT Debian 4.9.210-1 (2020-01-20) i686 Linux: debian 9.12 Borg: 1.1.11 Python: CPython 3.5.3 msgpack: 0.5.6 PID: 7671 CWD: /root sys.argv: ['/usr/local/bin/borg', 'create', '--verbose', '--exclude-caches', '--show-rc', '--filter', 'AME', '--list', '--stats', '--checkpoint-interval', '600', '--compression', 'auto,zlib,6', '--exclude-from', '/usr/local/sbin/BORG_EXCLUSIONS.list', '::{hostname}-{utcnow}Z', '/'] SSH_ORIGINAL_COMMAND: None terminating with error status, rc 2 ================================= Relaunching the same backup script also failed with the same "KeyError: 20" error :/ From here, I'm a bit lost. Jean-Yves From panayotis at panayotis.com Mon Mar 9 18:50:53 2020 From: panayotis at panayotis.com (Panayotis Katsaloulis) Date: Tue, 10 Mar 2020 00:50:53 +0200 Subject: [Borgbackup] How to test a remote fuse-based backup In-Reply-To: <64f0384b-aecb-4feb-98c0-589bd05b54f2@Spark> References: <64f0384b-aecb-4feb-98c0-589bd05b54f2@Spark> Message-ID: Hello people I am trying to find a solution for inexpensive cloud backup, mainly using my favorite borgbackup technology. The idea is to keep my backup on a onedrive server which, since it doesn?t support borgbackup (yet), I am going to use bcloud (as a fuse provider) and ?local? backup. The problem is that bcloud doesn?t seem really trustable (and rightly so). For this reason (and others) it recommends using caching. So the question is transferred ?how to be absolutely sure that what borgbackup sent, is what was really sent to the server, and *how to properly check it*. I tried using something like "borg check --verify-data? but it practically had to bring locally the whole backup, which of course doesn?t scale well. Any ideas? -- Panayotis -------------- next part -------------- An HTML attachment was scrubbed... URL: From lazyvirus at gmx.com Mon Mar 9 18:59:27 2020 From: lazyvirus at gmx.com (Bzzzz) Date: Mon, 9 Mar 2020 23:59:27 +0100 Subject: [Borgbackup] Odd error In-Reply-To: <20200309234333.3c4d09a9@msi.defcon1.lan> References: <20200309234333.3c4d09a9@msi.defcon1.lan> Message-ID: <20200309235927.55212248@msi.defcon1.lan> On Mon, 9 Mar 2020 23:43:33 +0100 Bzzzz wrote: Forget it, I checked and it is the same error I had in 2019-10-21, although I still do not understand why it fails as the NFS mount doesn't generate any error in usual exploitation. So I launch a : borg check --repair and will restart from here. Sorry for the noise. Jean-Yves From eric at in3x.io Mon Mar 9 19:36:39 2020 From: eric at in3x.io (Eric S. 
Johansson) Date: Tue, 10 Mar 2020 01:36:39 +0200 (EET) Subject: [Borgbackup] How to test a remote fuse-based backup In-Reply-To: References: <64f0384b-aecb-4feb-98c0-589bd05b54f2@Spark> Message-ID: <449430151.76884.1583796999616.JavaMail.zimbra@in3x.io> https://www.rsync.net/products/borg.html use rsync.net borg service. you can use borg check command to verify your backup made it. I've used rsync for multiple customers and I am very happy with the service --- eric > From: "Panayotis Katsaloulis" > To: borgbackup at python.org > Sent: Monday, March 9, 2020 6:50:53 PM > Subject: [Borgbackup] How to test a remote fuse-based backup > Hello people > I am trying to find a solution for inexpensive cloud backup, mainly using my > favorite borgbackup technology. > The idea is to keep my backup on a onedrive server which, since it doesn?t > support borgbackup (yet), I am going to use bcloud (as a fuse provider) and > ?local? backup. > The problem is that bcloud doesn?t seem really trustable (and rightly so). For > this reason (and others) it recommends using caching. So the question is > transferred ?how to be absolutely sure that what borgbackup sent, is what was > really sent to the server, and *how to properly check it*. > I tried using something like "borg check --verify-data? but it practically had > to bring locally the whole backup, which of course doesn?t scale well. > Any ideas? > -- > Panayotis > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkpapp at gmail.com Sun Mar 15 10:50:36 2020 From: tkpapp at gmail.com (Tamas Papp) Date: Sun, 15 Mar 2020 15:50:36 +0100 Subject: [Borgbackup] borgbackup, ZFS, and compression Message-ID: <87fte93c5f.fsf@gmail.com> Hi, I am transitioning my home backup server to use ZFS, which features transparent file-system level compression. Borgbackup archives will be on a dedicated volume. I am what the best approach is regarding compression: 1. enable for borgbackup, disable for ZFS (little point in trying to compress it further I guess) 2. disable for borgbackup, enable for ZFS (will this be suboptimal? I think that borg knows its data structure the best so maybe that will optimal for compression), 3. something else? Thanks, Tamas From imperator at jedimail.de Sun Mar 15 12:14:08 2020 From: imperator at jedimail.de (Sascha Ternes) Date: Sun, 15 Mar 2020 17:14:08 +0100 Subject: [Borgbackup] borgbackup, ZFS, and compression In-Reply-To: <87fte93c5f.fsf@gmail.com> References: <87fte93c5f.fsf@gmail.com> Message-ID: Hi Tamas, in ZFS I always enable compression for better performance of the file system. In BorgBackup I would enable compression always, too - for the same reason. If you create your backups from regular directories and files, ZFS decompresses them transparently, then Borg puts them in the archive, so you will always want to enable compression. Always use lz4 for highest compression speed. Sascha Am 15.03.20 um 15:50 schrieb Tamas Papp: > I am transitioning my home backup server to use ZFS, which features > transparent file-system level compression. Borgbackup archives will be > on a dedicated volume. > > I am what the best approach is regarding compression: > > 1. enable for borgbackup, disable for ZFS (little point in trying to > compress it further I guess) > > 2. disable for borgbackup, enable for ZFS (will this be suboptimal? 
Sascha

Am 15.03.20 um 15:50 schrieb Tamas Papp:
> I am transitioning my home backup server to use ZFS, which features
> transparent file-system level compression. Borgbackup archives will be
> on a dedicated volume.
>
> I am wondering what the best approach is regarding compression:
>
> 1. enable for borgbackup, disable for ZFS (little point in trying to
> compress it further, I guess),
>
> 2. disable for borgbackup, enable for ZFS (will this be suboptimal? I
> think that borg knows its data structure best, so maybe that will
> be optimal for compression),
>
> 3. something else?
>
> Thanks,
>
> Tamas

From amar at mailbox.org Mon Mar 16 00:24:35 2020
From: amar at mailbox.org (Amar)
Date: Mon, 16 Mar 2020 09:54:35 +0530
Subject: [Borgbackup] borg create is not accepting PATH with SPACE in it
Message-ID: 

I have tried using the path with a ~ or without it. I have also tried using it with \ as in ~/My\ Audio. Quotes (single or double) don't work either.

Creating archive at "XXXXX.rsync.net:sink::XXXX-XXXXXX"
/Users/amar/My\: [Errno 2] No such file or directory: '/Users/amar/My\\'
Audio: [Errno 2] No such file or directory: 'Audio'

The path is being read as two paths (error pasted above). This is my borg create:

borg create -x --verbose --progress --stats --show-rc \
    --filter AME \
    --compression auto,lzma,9 \
    --exclude-caches \
    --exclude-from $BORG_EXCLUDE_FILE \
    ::$BORG_LOCAL_HOSTNAME-$(date +"%d%m%y%H%M") $(cat $BORG_INCLUDE_FILE)

BORG_INCLUDE_FILE has the file/folder names to be backed up listed on separate lines. All folders/files which don't have a SPACE in their PATH are getting backed up successfully. Even if I try to pass a single path (instead of a cat command) after REPO::ARCHIVE with a space in it, it fails.

I was told on IRC it could work with patterns-from (I am trying to find a good tutorial on this; also, the docs mention it's an experimental feature). Is that the only way (I mean other than changing the PATH to remove the SPACE)? How do I fix this?

I am on Mac 10.15.3 and borg 1.1.10 (installed via Homebrew).

Regards.

Amar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dave at gasaway.org Mon Mar 16 01:39:26 2020
From: dave at gasaway.org (David Gasaway)
Date: Sun, 15 Mar 2020 22:39:26 -0700
Subject: [Borgbackup] borgbackup, ZFS, and compression
In-Reply-To: <87fte93c5f.fsf@gmail.com>
References: <87fte93c5f.fsf@gmail.com>
Message-ID: 

On Sun, Mar 15, 2020 at 7:51 AM Tamas Papp wrote:

> 2. disable for borgbackup, enable for ZFS (will this be suboptimal? I
> think that borg knows its data structure best, so maybe that will
> be optimal for compression),

If your borg archives are encrypted, they are not going to compress well at all, or may even come out larger than with no compression, due to overhead.

--
-:-:- David K. Gasaway
-:-:- Email: dave at gasaway.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From public at enkore.de Mon Mar 16 03:29:07 2020
From: public at enkore.de (Marian Beermann)
Date: Mon, 16 Mar 2020 08:29:07 +0100
Subject: [Borgbackup] borg create is not accepting PATH with SPACE in it
In-Reply-To: 
References: 
Message-ID: <4ea068c3-c03f-1088-5211-427592732957@enkore.de>

Hi,

you will have to understand the shell. This is not a Borg problem; it's a shell problem.

> Even if I try to pass a single path (instead of a *cat* command) after
> REPO::ARCHIVE with a space in it, it fails.

In this case you will need quotes or escaping, e.g.

borg create ... "$HOME/My Documents"
(~ is not expanded by shells inside quotes, but variables are)

borg create ... '/home/amar/My Documents'
(No variables are expanded by shells inside single quotes)

borg create ... /home/amar/My\ Documents

Any of these three passes the same path to borg.

Now for the file problem.

The tl;dr is that you don't want to do this yourself, and this is the exact reason Borg has --patterns-from and other applications have things like @file to read args from a file.
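
As a rough illustration, such a patterns file could look like the sketch below (the paths are made up here; see "borg help patterns" for the exact matching rules). One entry goes per line, and spaces need no quoting:

    R /home/amar
    # keep these
    + home/amar/My Documents
    + home/amar/My Pictures
    # drop caches and everything else under the root
    - home/amar/.cache
    - home/amar/*

The R line declares a backup root; the + and - lines are tried in order and the first match wins. If you would rather keep a plain list of paths instead of patterns, it gets trickier:
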
Without extra processing there is no way to pass the contents of a file as multiple command arguments if you need escaping.

However, there is a standard utility that can be made to do this for you. It's not really meant for this particular use case, but it'll work.

xargs borg create ... < $BORG_INCLUDE_FILE

The format of BORG_INCLUDE_FILE has to look like this:

'/home/amar/My Documents'
'/home/amar/My Pictures'
etc.

(If you add too many files to the list, xargs will start to run multiple borg processes, which will cause errors or multiple archives containing subsets.)

-Marian

From lazyvirus at gmx.com Mon Mar 16 08:43:16 2020
From: lazyvirus at gmx.com (Bzzzz)
Date: Mon, 16 Mar 2020 13:43:16 +0100
Subject: [Borgbackup] borgbackup, ZFS, and compression
In-Reply-To: <87fte93c5f.fsf@gmail.com>
References: <87fte93c5f.fsf@gmail.com>
Message-ID: <20200316134316.048b44c8@msi.defcon1.lan>

On Sun, 15 Mar 2020 15:50:36 +0100
Tamas Papp wrote:

> I am wondering what the best approach is regarding compression:

This is a matter of balance between ZFS and the machines you back up (computing power).

I'd say that with today's hardware you can go either way, but not both together, because, as David Gasaway said, that would be counterproductive and would take more disk space.

That said, whether you choose to lz4-compress on the clients or on the server doesn't matter very much, as lz4 is very fast, except if you back up a lot of clients at the same time, in which case the time/power taken by the server's compression might hamper your bandwidth.

Of course, if you choose a much harder compression (e.g. bzip2), the clients will have to do the job to avoid pushing up the load of the server excessively.

Jean-Yves

From amar at mailbox.org Mon Mar 16 11:11:39 2020
From: amar at mailbox.org (Amar)
Date: Mon, 16 Mar 2020 20:41:39 +0530
Subject: [Borgbackup] borg create is not accepting PATH with SPACE in it
In-Reply-To: <4ea068c3-c03f-1088-5211-427592732957@enkore.de>
References: <4ea068c3-c03f-1088-5211-427592732957@enkore.de>
Message-ID: <37315BFC-83B3-43FE-8F05-64D8C2F804FC@mailbox.org>

Thank you, Marian. I guess I will try to learn how the shell comes into the picture here. I didn't mean to say it was borg's problem. Maybe I should not have phrased it as "borg create is not accepting...".

Sending a path with a space in it inside single quotes, '/home/user/folder name', and passing it directly to the "borg create" command after REPO::ARCHIVE works fine (not if I use it in a ). Multiple paths worked too - so I guess I should be able to get it from a file that outputs a list of paths as in 'path 1' 'path 2' after ARCHIVE (so if nothing works I will just do this or change my folder names).

But is there a way where I can list "excludes" in one file and "includes" in another file (something like this: https://restic.readthedocs.io/en/latest/040_backup.html#including-files ; just an example)?

I also think (I may be wrong; I couldn't find detailed documentation about this) that if I use "--patterns-from" I will have to give a root directory in PATH (after ARCHIVE) and then in the patterns I will have to exclude everything I don't want to back up and include everything I want to back up (? I am not clear on this). Right now I am getting "--patterns-from: command not found" and also "borg: error: Need at least one PATH argument."

Is there comprehensive documentation with detailed examples? Or a tutorial that uses borg to back up multiple directories listed in a file? What combination of options can be used together? Which ones not?
If I use patterns-from, do I still have to supply a PATH, or do the paths listed by +/- alone take care of it? What's the R at the beginning of the pattern file?

Regards.

> On 16 Mar 20, at 12:59 PM, Marian Beermann wrote:
>
> Hi,
>
> you will have to understand the shell. This is not a Borg problem; it's a
> shell problem.
>
>> Even if I try to pass a single path (instead of a *cat* command) after
>> REPO::ARCHIVE with a space in it, it fails.
>
> In this case you will need quotes or escaping, e.g.
>
> borg create ... "$HOME/My Documents"
> (~ is not expanded by shells inside quotes, but variables are)
>
> borg create ... '/home/amar/My Documents'
> (No variables are expanded by shells inside single quotes)
>
> borg create ... /home/amar/My\ Documents
>
> Any of these three passes the same path to borg.
>
> Now for the file problem.
>
> The tl;dr is that you don't want to do this yourself, and this is the exact
> reason Borg has --patterns-from and other applications have things like @file
> to read args from a file.
>
> Without extra processing there is no way to pass the contents of a file
> as multiple command arguments if you need escaping.
>
> However, there is a standard utility that can be made to do this for
> you. It's not really meant for this particular use case, but it'll work.
>
> xargs borg create ... < $BORG_INCLUDE_FILE
>
> The format of BORG_INCLUDE_FILE has to look like this:
>
> '/home/amar/My Documents'
> '/home/amar/My Pictures'
> etc.
>
> (If you add too many files to the list, xargs will start to run multiple
> borg processes, which will cause errors or multiple archives containing
> subsets.)
>
> -Marian

Amar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dave at gasaway.org Tue Mar 17 17:23:20 2020
From: dave at gasaway.org (David Gasaway)
Date: Tue, 17 Mar 2020 14:23:20 -0700
Subject: [Borgbackup] borg create is not accepting PATH with SPACE in it
In-Reply-To: <37315BFC-83B3-43FE-8F05-64D8C2F804FC@mailbox.org>
References: <4ea068c3-c03f-1088-5211-427592732957@enkore.de> <37315BFC-83B3-43FE-8F05-64D8C2F804FC@mailbox.org>
Message-ID: 

On Mon, Mar 16, 2020 at 8:19 AM Amar wrote:

> But is there a way where I can list "excludes" in one file and "includes" in
> another file (something like this:
> https://restic.readthedocs.io/en/latest/040_backup.html#including-files ;
> just an example)?

You want includes and excludes in separate files? I'm not sure you'd really want that, since the order of include and exclude patterns makes a difference. See the pattern documentation for more about this. In any case, --patterns-from lets you put them both in the same file, and you could put them in different "sections" of the file, if that's what you really want.

> I also think (I may be wrong; I couldn't find detailed documentation about
> this) that if I use "--patterns-from" I will have to give a root directory
> in PATH (after ARCHIVE) and then in the patterns I will have to exclude
> everything I don't want to back up and include everything I want to back up
> (? I am not clear on this).

With --patterns-from, you specify the root (or multiple roots) in the pattern file using the 'R' prefix. No need to put a root on the command line.

--
-:-:- David K. Gasaway
-:-:- Email: dave at gasaway.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amar at mailbox.org Wed Mar 18 00:17:45 2020
From: amar at mailbox.org (Amar)
Date: Wed, 18 Mar 2020 09:47:45 +0530
Subject: [Borgbackup] borg create is not accepting PATH with SPACE in it
In-Reply-To: 
References: <4ea068c3-c03f-1088-5211-427592732957@enkore.de> <37315BFC-83B3-43FE-8F05-64D8C2F804FC@mailbox.org>
Message-ID: <2AC09B90-EE22-4D86-B75C-BF735560305A@mailbox.org>

Hi David,

Using (only) multiple roots worked, which I learned from your reply (I hadn't comprehended the docs, I believe). I had these to back up:

/folder/path0
/folder/path1/a
/folder/path2/b
/folder/path3/x/y
/folder/path3/x/z/a
/folder/path3/x/z/b
/folder/path4/folder name with a space
/folder/a_file

I wrote a PATTERN_FILE that looks like this:

R /folder/path0
R /folder/path1/a
R /folder/path2/b
... and so on ...

then added excludes

- *.cache
- .cache
- .DS_Store
- *.swp

Basically I added all the folders/files I had to back up as roots and nothing else, no other inclusions/exclusions (except the junk file exclusions). It worked.

PS. Adding only a few roots (or just one "R /folder") and removing the remaining ones would have meant hundreds of files/folders to exclude using '-'.

Thank you!

> With --patterns-from, you specify the root (or multiple roots) in the pattern file using the 'R' prefix. No need to put a root on the command line.

Amar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alexander_goetzenstein at web.de Wed Mar 25 08:56:16 2020
From: alexander_goetzenstein at web.de (Alexander Goetzenstein)
Date: Wed, 25 Mar 2020 13:56:16 +0100
Subject: [Borgbackup] Find a file to restore
Message-ID: 

Hi,

assuming I made daily backups of my /home directory all the time, and now I need a certain file in a specific version. All I know is part of its file name and that it must have been backed up within the last year. I need, let's say, the second or third last version of that file.

Is there a cross-archive function to get a list from which to find and pick the wanted file? If not, what would be the easiest and fastest way to find this file?

--
Gruß
Alex

From tw at waldmann-edv.de Wed Mar 25 09:01:48 2020
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Wed, 25 Mar 2020 14:01:48 +0100
Subject: [Borgbackup] Find a file to restore
In-Reply-To: 
References: 
Message-ID: <72bf246a-9f1c-c42f-9666-a7ae53312316@waldmann-edv.de>

> Is there a cross-archive function to get a list from which to find and pick
> the wanted file?

Have a look at borg mount --help, esp. the "versions" mount option.

If you have huge archives or a lot of archives, maybe limit the resource needs (time and memory) by using some of the other borg mount options so it does not have to process ALL the repo contents.
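
For example (repo path and mount point below are only placeholders), a "versions" mount merges all archives into one tree, with every version of a file appearing as a separate entry, so a plain find can locate them:

    borg mount -o versions /path/to/repo /mnt/borg
    find /mnt/borg -iname '*partofname*'
    # pick the version you want, copy it out, then:
    borg umount /mnt/borg

If your borg version supports the archive filter options for mount (see borg mount --help), something like --last 30 keeps the mount from having to look at every archive.
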
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From frederic.pavy at free.fr Thu Mar 26 07:02:33 2020
From: frederic.pavy at free.fr (Frédéric PAVY)
Date: Thu, 26 Mar 2020 12:02:33 +0100 (CET)
Subject: [Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: <1434805665.54430777.1585219904437.JavaMail.root@zimbra65-e11.priv.proxad.net>
Message-ID: <1765635777.54473199.1585220552053.JavaMail.root@zimbra65-e11.priv.proxad.net>

Hello,

I do borg backups automatically, silently, and recently I discovered that some backups returned in error (return code 1).

I launched 'borg list' for the repository and even 'borg info' for an individual archive and was surprised not to see the return code for the corresponding 'borg create' command. Where could I retrieve this information? Isn't it stored in the backup? If not, could I understand the reason?

Thanks for any input,

Frederic

From lazyvirus at gmx.com Thu Mar 26 10:35:41 2020
From: lazyvirus at gmx.com (Bzzzz)
Date: Thu, 26 Mar 2020 15:35:41 +0100
Subject: [Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: <1765635777.54473199.1585220552053.JavaMail.root@zimbra65-e11.priv.proxad.net>
References: <1434805665.54430777.1585219904437.JavaMail.root@zimbra65-e11.priv.proxad.net> <1765635777.54473199.1585220552053.JavaMail.root@zimbra65-e11.priv.proxad.net>
Message-ID: <20200326153541.13c7672e@msi.defcon1.lan>

On Thu, 26 Mar 2020 12:02:33 +0100 (CET)
Frédéric PAVY wrote:

> Hello,

Hi,

> I do borg backups automatically, silently, and recently I discovered
> that some backups returned in error (return code 1).
>
> I launched 'borg list' for the repository and even 'borg info' for an
> individual archive and was surprised not to see the return code for
> the corresponding 'borg create' command. Where could I retrieve this
> information? Isn't it stored in the backup? If not, could I understand
> the reason?

You can retrieve the system exit code of any executable in bash with $?

ie: ls ; echo $?

Jean-Yves

From lazyvirus at gmx.com Thu Mar 26 11:10:45 2020
From: lazyvirus at gmx.com (Bzzzz)
Date: Thu, 26 Mar 2020 16:10:45 +0100
Subject: [Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: 
References: <1434805665.54430777.1585219904437.JavaMail.root@zimbra65-e11.priv.proxad.net> <1765635777.54473199.1585220552053.JavaMail.root@zimbra65-e11.priv.proxad.net> <20200326153541.13c7672e@msi.defcon1.lan>
Message-ID: <20200326161045.611449d8@msi.defcon1.lan>

On Thu, 26 Mar 2020 16:01:32 +0100
Mario Emmenlauer wrote:

>
> Hi,
>
> On 26.03.20 15:35, Bzzzz wrote:
> > On Thu, 26 Mar 2020 12:02:33 +0100 (CET)
> > Frédéric PAVY wrote:
> >
> >> Hello,
> >
> > Hi,
> >
> >> I do borg backups automatically, silently, and recently I discovered
> >> that some backups returned in error (return code 1).
> >>
> >> I launched 'borg list' for the repository and even 'borg info' for
> >> an individual archive and was surprised not to see the return code
> >> for the corresponding 'borg create' command. Where could I retrieve
> >> this information? Isn't it stored in the backup? If not, could I
> >> understand the reason?
> >
> > You can retrieve the system exit code of any executable in bash
> > with $?
> >
> > ie: ls ; echo $?
>
> I think he is talking about where to get this status after the fact,
> like days or weeks after the creation of the archive. Frédéric PAVY,
> is that correct?
>
> All the best,
>
> Mario Emmenlauer

Oh, ok, but in this case (never had that), I suppose the command will fail, explaining by itself what was wrong?

JY

From mario at emmenlauer.de Thu Mar 26 11:15:36 2020
From: mario at emmenlauer.de (Mario Emmenlauer)
Date: Thu, 26 Mar 2020 16:15:36 +0100
Subject: [Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: <20200326161045.611449d8@msi.defcon1.lan>
References: <1434805665.54430777.1585219904437.JavaMail.root@zimbra65-e11.priv.proxad.net> <1765635777.54473199.1585220552053.JavaMail.root@zimbra65-e11.priv.proxad.net> <20200326153541.13c7672e@msi.defcon1.lan> <20200326161045.611449d8@msi.defcon1.lan>
Message-ID: <8ea37bf5-5877-f102-1b86-45c80667f5d6@emmenlauer.de>

On 26.03.20 16:10, Bzzzz wrote:
> On Thu, 26 Mar 2020 16:01:32 +0100
> Mario Emmenlauer wrote:
>> On 26.03.20 15:35, Bzzzz wrote:
>>> On Thu, 26 Mar 2020 12:02:33 +0100 (CET)
>>> Frédéric PAVY wrote:
>>>> I do borg backups automatically, silently, and recently I discovered
>>>> that some backups returned in error (return code 1).
>>>>
>>>> I launched 'borg list' for the repository and even 'borg info' for
>>>> an individual archive and was surprised not to see the return code
>>>> for the corresponding 'borg create' command. Where could I retrieve
>>>> this information? Isn't it stored in the backup? If not, could I
>>>> understand the reason?
>>>
>>> You can retrieve the system exit code of any executable in bash
>>> with $?
>>>
>>> ie: ls ; echo $?
>>
>> I think he is talking about where to get this status after the fact,
>> like days or weeks after the creation of the archive. Frédéric PAVY,
>> is that correct?
>
> Oh, ok, but in this case (never had that), I suppose the command will
> fail, explaining by itself what was wrong?

I may be completely on the wrong track, but I understood the question still differently. I think he created many repos, a longer time ago. And some of these creations failed.

Now it's long after the fact, but he is presented with the list of repo directories. And he's wondering which of these repos are trustworthy, in the sense that their creation was completed successfully. Is there some property that would ensure that a repo creation was completed all the way to the end, successfully?

Frédéric PAVY, is that your question?

Cheers,

Mario

From mario at emmenlauer.de Thu Mar 26 11:01:32 2020
From: mario at emmenlauer.de (Mario Emmenlauer)
Date: Thu, 26 Mar 2020 16:01:32 +0100
Subject: [Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: <20200326153541.13c7672e@msi.defcon1.lan>
References: <1434805665.54430777.1585219904437.JavaMail.root@zimbra65-e11.priv.proxad.net> <1765635777.54473199.1585220552053.JavaMail.root@zimbra65-e11.priv.proxad.net> <20200326153541.13c7672e@msi.defcon1.lan>
Message-ID: 

Hi,

On 26.03.20 15:35, Bzzzz wrote:
> On Thu, 26 Mar 2020 12:02:33 +0100 (CET)
> Frédéric PAVY wrote:
>
>> Hello,
>
> Hi,
>
>> I do borg backups automatically, silently, and recently I discovered
>> that some backups returned in error (return code 1).
>>
>> I launched 'borg list' for the repository and even 'borg info' for an
>> individual archive and was surprised not to see the return code for
>> the corresponding 'borg create' command. Where could I retrieve this
>> information? Isn't it stored in the backup? If not, could I understand
>> the reason?
>
> You can retrieve the system exit code of any executable in bash
> with $?
>
> ie: ls ; echo $?

I think he is talking about where to get this status after the fact, like days or weeks after the creation of the archive. Frédéric PAVY, is that correct?

All the best,

Mario Emmenlauer

From frederic.pavy at free.fr Thu Mar 26 19:03:11 2020
From: frederic.pavy at free.fr (Frédéric PAVY)
Date: Fri, 27 Mar 2020 00:03:11 +0100
Subject: [Borgbackup] Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: 
References: 
Message-ID: <0B75C82D-158D-4FB1-9760-5A535386BAF7@free.fr>

Hi all,

> Le 26 mars 2020 à 17:00, borgbackup-request at python.org a écrit :
>
> On 26.03.20 16:10, Bzzzz wrote:
>> On Thu, 26 Mar 2020 16:01:32 +0100
>> Mario Emmenlauer wrote:
>>> On 26.03.20 15:35, Bzzzz wrote:
>>>> On Thu, 26 Mar 2020 12:02:33 +0100 (CET)
>>>> Frédéric PAVY wrote:
>>>>> I do borg backups automatically, silently, and recently I discovered
>>>>> that some backups returned in error (return code 1).
>>>>>
>>>>> I launched 'borg list' for the repository and even 'borg info' for
>>>>> an individual archive and was surprised not to see the return code
>>>>> for the corresponding 'borg create' command. Where could I retrieve
>>>>> this information? Isn't it stored in the backup? If not, could I
>>>>> understand the reason?
>>>>
>>>> You can retrieve the system exit code of any executable in bash
>>>> with $?
>>>>
>>>> ie: ls ; echo $?
>>>
>>> I think he is talking about where to get this status after the fact,
>>> like days or weeks after the creation of the archive. Frédéric PAVY,
>>> is that correct?
>>
>> Oh, ok, but in this case (never had that), I suppose the command will
>> fail, explaining by itself what was wrong?
>
> I may be completely on the wrong track, but I understood the question
> still differently. I think he created many repos, a longer time ago. And
> some of these creations failed.
>
> Now it's long after the fact, but he is presented with the list of repo
> directories. And he's wondering which of these repos are trustworthy,
> in the sense that their creation was completed successfully. Is there
> some property that would ensure that a repo creation was completed all
> the way to the end, successfully?
>
> Frédéric PAVY, is that your question?
>
> Cheers,
>
> Mario

Yes, you're right! Here is my configuration:

I have some Raspberry Pis, each for one function (octopi, retropie, ...). I have automated their backup with Borg onto another Pi, always on and with a big disk (this one runs nextcloudPi too).

I didn't verify the backup process (I confess). Sometimes I watched the repositories, and seeing that there were always new backups in the repository gave me the illusion that all was OK.

When I had a problem on one machine, I wasn't able to restore the Pi correctly, and after having recreated the machine entirely, I realised that the backup wasn't complete: some files couldn't be saved, and the backup ended with a return code of 1.

I think it could be useful to have the information of the return code, for example in the 'borg list /path/to/repository' command.

But anyway, I will improve my script, and I think I'll log borg's return code after each backup, and append the log to a file on the drive containing the repository, to have an overview of these backups.

Cheers,

Frederic
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tw at waldmann-edv.de Thu Mar 26 19:13:12 2020
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 27 Mar 2020 00:13:12 +0100
Subject: [Borgbackup] Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: <0B75C82D-158D-4FB1-9760-5A535386BAF7@free.fr>
References: <0B75C82D-158D-4FB1-9760-5A535386BAF7@free.fr>
Message-ID: 

> But anyway, I will improve my script, and I think I'll log borg's return
> code after each backup, and append the log to a file on the drive
> containing the repository, to have an overview of these backups.

borg create --show-rc ...

And BTW, 1 is not "error", but just "warning", see the docs.
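
For instance, a minimal wrapper along these lines (repo, source paths and log location are just placeholders, and it assumes BORG_REPO etc. are exported as usual) would keep a per-run record of the rc on the drive holding the repository:

    #!/bin/sh
    # rc meaning: 0 = success, 1 = warning(s), 2 = error (see the borg docs)
    LOG=/path/to/backup-drive/borg-rc.log

    borg create --show-rc --stats ::'{hostname}-{now}' /etc /home
    rc=$?

    echo "$(date '+%F %T') borg create rc=$rc" >> "$LOG"
    exit $rc
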
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From andrea.francesconi at gmail.com Fri Mar 27 02:56:15 2020
From: andrea.francesconi at gmail.com (fRANz)
Date: Fri, 27 Mar 2020 07:56:15 +0100
Subject: [Borgbackup] Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: <0B75C82D-158D-4FB1-9760-5A535386BAF7@free.fr>
References: <0B75C82D-158D-4FB1-9760-5A535386BAF7@free.fr>
Message-ID: 

On Fri, Mar 27, 2020 at 12:03 AM Frédéric PAVY wrote:

> When I had a problem on one machine, I wasn't able to restore the Pi correctly, and after having recreated the machine entirely, I realised that the backup wasn't complete: some files couldn't be saved, and the backup ended with a return code of 1.
> I think it could be useful to have the information of the return code, for example in the 'borg list /path/to/repository' command.
> But anyway, I will improve my script, and I think I'll log borg's return code after each backup, and append the log to a file on the drive containing the repository, to have an overview of these backups.

Frederic,

as Thomas suggested, you can use --show-rc with borg create in order to get a complete report after each backup, something like this:

####
...
Archive name: x1c7-2020-03-01T10:29:40
Archive fingerprint: 5e182c1af492f9555aa155a9593f5d0bbb677a576308c06c9aa4a5758f8f7b15
Time (start): Sun, 2020-03-01 10:29:41
Time (end):   Sun, 2020-03-01 10:29:50
Duration: 8.19 seconds
Number of files: 18973
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:                2.26 GB              2.00 GB              5.41 MB
All archives:               13.09 GB             11.55 GB              1.74 GB

                       Unique chunks         Total chunks
Chunk index:                    8498               117293
------------------------------------------------------------------------------
terminating with success status, rc 0
####

The last line is the info you're looking for. Stream/copy all the borg logs into a collector that alerts you when rc != 0.

Whatever the logs say, test your backups with restores; this is the only safe way.

-f

From l0f4r0 at tuta.io Fri Mar 27 03:37:40 2020
From: l0f4r0 at tuta.io (l0f4r0 at tuta.io)
Date: Fri, 27 Mar 2020 08:37:40 +0100 (CET)
Subject: [Borgbackup] Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: 
References: <0B75C82D-158D-4FB1-9760-5A535386BAF7@free.fr>
Message-ID: 

Hi,

27 mars 2020 à 07:56 de andrea.francesconi at gmail.com:

> Whatever the logs say, test your backups with restores; this is the only safe way.
>
Yes of course, this is the best practice. And I think it could be relevant sometimes to do a 'borg check' just to make sure your repository/archive consistency looks good :)

Best regards,
l0f4r0

From l0f4r0 at tuta.io Fri Mar 27 03:55:44 2020
From: l0f4r0 at tuta.io (l0f4r0 at tuta.io)
Date: Fri, 27 Mar 2020 08:55:44 +0100 (CET)
Subject: [Borgbackup] Borgbackup] Retrieve the status (return code) of a backup?
In-Reply-To: <0B75C82D-158D-4FB1-9760-5A535386BAF7@free.fr>
References: <0B75C82D-158D-4FB1-9760-5A535386BAF7@free.fr>
Message-ID: 

Hi,

27 mars 2020 à 00:03 de frederic.pavy at free.fr:

> But anyway, I will improve my script, and I think I'll log borg's return code after each backup, and append the log to a file on the drive containing the repository, to have an overview of these backups.
>
Nonetheless, I like your suggestion to display return codes inside 'borg info'. Maybe you could share the idea on https://github.com/borgbackup/borg/issues?

Best regards,
l0f4r0