From eluther at smartleaf.com Thu Jul 18 14:10:43 2019
From: eluther at smartleaf.com (Eric Luther)
Date: Thu, 18 Jul 2019 14:10:43 -0400
Subject: [Borgbackup] Frequent MemoryError
Message-ID:

We've been running borg for several months in our environment, and the
most common failure we see is a "MemoryError". We use backupninja to
manage the backup jobs, and it produces the following output.

> Info: Repository was already initialized
> Error:
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/borg/remote.py", line 248, in serve
>     res = f(**args)
>   File "/usr/lib/python3/dist-packages/borg/repository.py", line 1118, in delete
>     self.prepare_txn(self.get_transaction_id())
>   File "/usr/lib/python3/dist-packages/borg/repository.py", line 498, in prepare_txn
>     self.index = self.open_index(transaction_id, auto_recover=False)
>   File "/usr/lib/python3/dist-packages/borg/repository.py", line 466, in open_index
>     return NSIndex.read(fd)
>   File "src/borg/hashindex.pyx", line 113, in borg.hashindex.IndexBase.read (src/borg/hashindex.c:1916)
>   File "src/borg/hashindex.pyx", line 100, in borg.hashindex.IndexBase.__cinit__ (src/borg/hashindex.c:1660)
>   File "/usr/lib/python3/dist-packages/borg/crypto/file_integrity.py", line 32, in read
>     return self.fd.read(n)
>   File "/usr/lib/python3/dist-packages/borg/crypto/file_integrity.py", line 82, in read
>     data = super().read(n)
>   File "/usr/lib/python3/dist-packages/borg/crypto/file_integrity.py", line 32, in read
>     return self.fd.read(n)
> MemoryError
>
> Borg server: Platform: Linux $HOSTNAME 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u3 (2019-06-16) x86_64
> Borg server: Linux: debian 9.9
> Borg server: Borg: 1.1.9  Python: CPython 3.5.3
> Borg server: PID: 32750  CWD: $BACKUP_PATH
> Borg server: sys.argv: ['/usr/bin/borg', 'serve', '--umask=077']
> Borg server: SSH_ORIGINAL_COMMAND: None
> Platform: Linux $HOSTNAME 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u3 (2019-06-16) x86_64
> Linux: debian 9.9
> Borg: 1.1.9  Python: CPython 3.5.3
> PID: 7116  CWD: /root
> sys.argv: ['/usr/bin/borg', 'create', '--stats', '--compression', 'lz4', '--exclude', '/home/postgresql', 'ssh://$USERNAME@$HOSTNAME/$BACKUP_PATH::{now:%Y-%m-%dT%H:%M:%S}', '/etc', '/home', '/var/backups/postgres']
> SSH_ORIGINAL_COMMAND: None

Google yields limited results for the above error, so it's not clear
what the issue is. We're assuming it means that the process is running
out of memory, but this does not match what we see in our monitoring:
while borg often pushes memory usage to its limit, we see this error on
machines that have not used up their buffers or begun using available
swap yet.

I would appreciate it if anyone has a better understanding of what is
happening here. Generally we can rerun the job and it'll be fine, but
we see streaks of failures with this error message, and we'd like to
minimize or avoid it entirely.

Thanks!

-- 
Eric Luther
Ops
Smartleaf Inc.

From tw at waldmann-edv.de Thu Jul 18 14:30:17 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 18 Jul 2019 20:30:17 +0200
Subject: [Borgbackup] Frequent MemoryError
In-Reply-To:
References:
Message-ID:

As you suspected, MemoryError means that the process ran out of memory.

> Google yields limited results for the above error, so it's not clear
> what the issue is. We're assuming it means that the process is
> running out of memory, but this does not match what we see in our
> monitoring.

Maybe it is too quick to get sampled by monitoring?

You can use the formula from the docs to roughly estimate memory needs.
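For a rough idea, that estimate can be computed in the shell. The
constants below are the ones from the 1.1 docs ("Indexes / Caches
memory usage") and may differ in other versions; the data sizes are
just placeholders, not your real numbers:

  # sketch only -- rough RAM estimate per the borg 1.1 docs
  total_file_size=$((2 * 1024**4))    # example: 2 TiB of source data
  total_file_count=1000000            # example: 1 million files
  chunk_count=$(( total_file_size / 2**21 ))   # default ~2 MiB chunks
  # client side: repo index + chunks cache + files cache
  echo "client: ~$(( (chunk_count * 164 + total_file_count * 240) / 1024**2 )) MiB"
  # server side (borg serve) holds the repository index only
  echo "server: ~$(( chunk_count * 40 / 1024**2 )) MiB"

Note that your traceback is from the server side reading the repository
index, so it is the "borg serve" end that ran out of memory here.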
-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From public at enkore.de Mon Jul 22 10:14:45 2019
From: public at enkore.de (Marian Beermann)
Date: Mon, 22 Jul 2019 16:14:45 +0200
Subject: [Borgbackup] Frequent MemoryError
In-Reply-To:
References:
Message-ID:

While hashindex doesn't copy when doing PyObject I/O (iirc I was
careful about that when I ported it), it still needs to load the
entire file into memory.

The problem here is that your repository is too big for the free
memory you have. The only fixes are to get more free memory, or to
make the repository smaller.

-Marian

From eluther at smartleaf.com Mon Jul 22 11:31:14 2019
From: eluther at smartleaf.com (Eric Luther)
Date: Mon, 22 Jul 2019 11:31:14 -0400
Subject: [Borgbackup] Frequent MemoryError
In-Reply-To:
References:
Message-ID:

I appreciate the replies, thanks for the info!

On 7/22/19 10:14 AM, Marian Beermann wrote:
> While hashindex doesn't copy when doing PyObject I/O (iirc I was
> careful about that when I ported it), it still needs to load the
> entire file into memory.
>
> The problem here is that your repository is too big for the free
> memory you have. The only fixes are to get more free memory, or to
> make the repository smaller.
>
> -Marian

-- 
Eric Luther
Ops
Smartleaf Inc.

From mail at tgries.de Tue Jul 23 11:04:01 2019
From: mail at tgries.de (Thomas Gries)
Date: Tue, 23 Jul 2019 17:04:01 +0200
Subject: [Borgbackup] Question: borgbackupping several USB disks to a single common repo
Message-ID: <3e8326c3-17d9-48da-2ac1-9bdc3a235186@tgries.de>

I found this relevant information:
https://borgbackup.readthedocs.io/en/stable/faq.html#can-i-backup-from-multiple-servers-into-a-single-repository

Scenario:

1. I wish to make an "intelligent, deduplicating, adding" backup of my
   (less than 10 TB) LUKS video and audio USB disks to a common
   (10 TB, RAID, LUKS) repo.
2. It is a dedicated system (running OpenMediaVault), and I use the
   web GUI to unlock/decrypt and mount all the drives, as usual.

Questions:

1. What is the "best practice" in my case?
2. Is there any known pitfall using Borgbackup (command line) to back
   up the several smaller disks to the new common repo?
3. Remark: I tried to use "Vorta", but this was not helpful in my case
   (because when I connect disk n+1, the files of the previous disks
   are not found any more and are therefore deleted in the repo.)

Regards
Tom

From gmatht at gmail.com Tue Jul 30 22:00:00 2019
From: gmatht at gmail.com (John McCabe-Dansted)
Date: Wed, 31 Jul 2019 10:00:00 +0800
Subject: [Borgbackup] Question: borgbackupping several USB disks to a single common repo
In-Reply-To: <3e8326c3-17d9-48da-2ac1-9bdc3a235186@tgries.de>
References: <3e8326c3-17d9-48da-2ac1-9bdc3a235186@tgries.de>
Message-ID:

> - What is the "best practice" in my case?

Borgbackup should be fine. No need to specify compression, since I
presume your video (and probably audio) files are already compressed.

> - Is there any known pitfall using Borgbackup (command line) to back
>   up the several smaller disks to the new common repo?

No. Multiple caches could cause speed issues, but they are per server,
not per USB disk.
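As a sketch, backing up several disks into the one repo could look like
this (repo path and disk names are made up):

  # one common repo for all disks:
  borg init --encryption=repokey /srv/backup/media-repo

  # one archive per disk per run; the disk name keeps them apart:
  borg create --stats /srv/backup/media-repo::videodisk1-{now:%Y-%m-%d} \
      /mnt/videodisk1
  borg create --stats /srv/backup/media-repo::audiodisk1-{now:%Y-%m-%d} \
      /mnt/audiodisk1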
> > - Remark: I tried to use "Vorta", but this was not helpful in my
> >   case (because when I connect disk n+1, the files of the previous
> >   disks are not found any more and are therefore deleted in the
> >   repo.)

I am not too sure what you mean by "deleted in the repo". Every borg
archive starts empty; the files wouldn't appear from other archives
unless you did something clever with `borg mount`. However, borg
shouldn't delete the old archives.

It is customary to name borg archives in the form basename-date, and
to purge archives with older dates. Some high-level tools like Vorta
might automate this. Did you give backups of different USB disks the
same base name?

-- 
John C. McCabe-Dansted

From mailinglists at lucassen.org Sat Aug 17 03:22:38 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Sat, 17 Aug 2019 09:22:38 +0200
Subject: [Borgbackup] round robin
Message-ID: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>

I'm new to borgbackup and I'd like to run a round robin backup, e.g. a
daily backup Sunday, Monday etc., but I can't find an option to
overwrite an archive. Something like:

borg overwrite /path/to/repo::Sunday /path/to/source

Or should I simply delete the archive and create a new one?

R.

-- 
richard lucassen
http://contact.xaq.nl/

From tw at waldmann-edv.de Sat Aug 17 11:36:08 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 17 Aug 2019 17:36:08 +0200
Subject: [Borgbackup] round robin
In-Reply-To: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
References: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
Message-ID: <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>

> I'm new to borgbackup and I'd like to run a round robin backup, e.g.
> a daily backup Sunday, Monday etc.

Why do you want to do it like that?

Considering the likely high deduplication between archives, there is
no reason to have just 5 or 7.

borg prune offers a good means to not let the amount of archives grow
while still keeping quite some history.

> but I can't find an option to overwrite an archive.

Because you can't. You need a unique new name for each archive.

The placeholder {now} is a nice way to achieve that.

Like: myserver-homes-{now} as archive name.

> Or should I simply delete the archive and create a new one?

Yes, that is of course possible, but not the usual way.

-- 
GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt.

From mailinglists at lucassen.org Sat Aug 17 16:28:40 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Sat, 17 Aug 2019 22:28:40 +0200
Subject: [Borgbackup] round robin
In-Reply-To: <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
References: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
 <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
Message-ID: <20190817222840.4bf9ed1530f9212b1d86343d@lucassen.org>

On Sat, 17 Aug 2019 17:36:08 +0200
Thomas Waldmann wrote:

> > I'm new to borgbackup and I'd like to run a round robin backup,
> > e.g. a daily backup Sunday, Monday etc.
>
> Why do you want to do it like that?

Because I'm lazy :)

> Considering the likely high deduplication between archives, there is
> no reason to have just 5 or 7.
>
> borg prune offers a good means to not let the amount of archives
> grow while still keeping quite some history.

ok, I'll have a look at that

> > but I can't find an option to overwrite an archive.
>
> Because you can't.
> You need a unique new name for each archive.
>
> The placeholder {now} is a nice way to achieve that.
>
> Like: myserver-homes-{now} as archive name.
>
> > Or should I simply delete the archive and create a new one?
>
> Yes, that is of course possible, but not the usual way.

Ok, thnx, I think, as a newbie to Borg, I missed something crucial
somewhere. I'll have a closer look at "prune" as a start :)

R.

-- 
richard lucassen
http://contact.xaq.nl/

From mailinglists at lucassen.org Sat Aug 17 16:34:24 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Sat, 17 Aug 2019 22:34:24 +0200
Subject: [Borgbackup] round robin
In-Reply-To: <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
References: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
 <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
Message-ID: <20190817223424.42dd92cd7fcf080cec19060d@lucassen.org>

On Sat, 17 Aug 2019 17:36:08 +0200
Thomas Waldmann wrote:

> > I'm new to borgbackup and I'd like to run a round robin backup,
> > e.g. a daily backup Sunday, Monday etc.
>
> Why do you want to do it like that?
>
> Considering the likely high deduplication between archives, there is
> no reason to have just 5 or 7.
>
> borg prune offers a good means to not let the amount of archives
> grow while still keeping quite some history.

Ok, I read about borg prune and that's the way to go. It's a different
way of thinking about backup (which is not my cup of tea)

Thnx!

R.

(my other reply was rejected due to a wrong From: address, that's why
this answer almost comes simultaneously)

-- 
richard lucassen
http://contact.xaq.nl/

From ngoonee.talk at gmail.com Sun Aug 25 23:00:15 2019
From: ngoonee.talk at gmail.com (Oon-Ee Ng)
Date: Mon, 26 Aug 2019 11:00:15 +0800
Subject: [Borgbackup] round robin
In-Reply-To: <20190817223424.42dd92cd7fcf080cec19060d@lucassen.org>
References: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
 <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
 <20190817223424.42dd92cd7fcf080cec19060d@lucassen.org>
Message-ID:

On Sun, Aug 18, 2019 at 4:34 AM richard lucassen wrote:

> Ok, I read about borg prune and that's the way to go. It's a
> different way of thinking about backup (which is not my cup of tea)

In addition to your primary question (which has been answered), I'd
just chime in that round robin daily is fairly dangerous in the case
of ransomware attacks, since there's a possibility all your good
backups get overwritten before you notice it (7 days response time).

Obviously if that's not important for your particular situation then
that's fine, but I think for most private internet-exposed systems
ransomware is likely to be one of the larger threat vectors.
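If you go the prune route Thomas suggested, a sketch (retention values
are examples only; unlike a 7-slot rotation, older archives survive
for weeks):

  # unique archive names via {now}, then prune instead of overwriting:
  borg create $REPO::myserver-{now:%Y-%m-%d} /home /etc
  borg prune $REPO --prefix myserver- \
      --keep-daily 7 --keep-weekly 4 --keep-monthly 6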
From mailinglists at lucassen.org Mon Aug 26 03:39:55 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Mon, 26 Aug 2019 09:39:55 +0200
Subject: [Borgbackup] round robin
In-Reply-To:
References: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
 <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
 <20190817223424.42dd92cd7fcf080cec19060d@lucassen.org>
Message-ID: <20190826093955.0df8bec73ef1f47191466adf@lucassen.org>

On Mon, 26 Aug 2019 11:00:15 +0800
Oon-Ee Ng wrote:

> > Ok, I read about borg prune and that's the way to go. It's a
> > different way of thinking about backup (which is not my cup of tea)
>
> In addition to your primary question (which has been answered), I'd
> just chime in that round robin daily is fairly dangerous in the case
> of ransomware attacks, since there's a possibility all your good
> backups get overwritten before you notice it (7 days response time).
>
> Obviously if that's not important for your particular situation then
> that's fine, but I think for most private internet-exposed systems
> ransomware is likely to be one of the larger threat vectors.

I already discovered the power of the prune command and its options,
so don't you worry! The backup has been running for a few days now and
it seems to work perfectly well :-)

R.

-- 
richard lucassen
http://contact.xaq.nl/

From mailinglists at lucassen.org Mon Aug 26 04:03:25 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Mon, 26 Aug 2019 10:03:25 +0200
Subject: [Borgbackup] round robin
In-Reply-To:
References: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
 <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
 <20190817223424.42dd92cd7fcf080cec19060d@lucassen.org>
Message-ID: <20190826100325.153ae1d35be8f737b8b3994b@lucassen.org>

On Mon, 26 Aug 2019 11:00:15 +0800
Oon-Ee Ng wrote:

> In addition to your primary question (which has been answered), I'd
> just chime in that round robin daily is fairly dangerous in the case
> of ransomware attacks, since there's a possibility all your good
> backups get overwritten before you notice it (7 days response time).
>
> Obviously if that's not important for your particular situation then
> that's fine, but I think for most private internet-exposed systems
> ransomware is likely to be one of the larger threat vectors.

For each user system I use a "canary.txt" file in a "canary" directory,
containing "Do not delete or alter this file". A canary was (or maybe
still is) used in coal mines to be able to detect mine gas:

https://en.wiktionary.org/wiki/canary_in_a_coal_mine

Somewhere in the user's share I place this canary file (r/w for that
user) and he/she should not touch it. When this user is hit by
ransomware, the file will be altered or renamed by the ransomware.

Every 5 minutes I collect all these canary files on another machine
and check them against the md5sums I have on that machine. If one of
these files is changed or has disappeared (renamed), the backup is
blocked for that user and all alarm bells will ring.

(Un)fortunately I have not yet been able to test it in practice, so I
don't know whether it is an effective system.

And BTW, maybe someone here knows this: does ransomware also encrypt
IMAP mail systems? Searching for this shows lots of articles saying
that ransomware gets in by IMAP, but not a word about ransomware
encrypting mails on an IMAP server.
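The check itself is tiny; a sketch (the paths and the alerting command
are made up):

  # run every 5 minutes from cron; canaries.md5 was generated earlier
  # with: md5sum /srv/canaries/*/canary.txt > canaries.md5
  if ! md5sum --quiet -c canaries.md5; then
      touch /var/run/backup.blocked   # the backup script checks this
      echo "canary changed" | mail -s "ALARM: possible ransomware" root
  fi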
R.

-- 
richard lucassen
http://contact.xaq.nl/

From mailinglists at lucassen.org Mon Aug 26 07:49:44 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Mon, 26 Aug 2019 13:49:44 +0200
Subject: [Borgbackup] round robin
In-Reply-To: <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
References: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
 <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
Message-ID: <20190826134944.21d20f3589d12984aee83a6e@lucassen.org>

On Sat, 17 Aug 2019 17:36:08 +0200
Thomas Waldmann wrote:

> borg prune offers a good means to not let the amount of archives
> grow while still keeping quite some history.

Just a newbie question about the shell script on this page:

https://borgbackup.readthedocs.io/en/stable/quickstart.html

If "borg create" exits non-zero, "borg prune" is invoked anyway.
Suppose "borg create" exits non-zero during a longer period, let's say
a week or so: would "borg prune --keep-hourly 24" leave at least the
last 24 hourly backups intact?

Or would it be better to invoke "borg prune" only after a zero exit of
"borg create"?

R.

-- 
richard lucassen
https://contact.xaq.nl/

From tw at waldmann-edv.de Mon Aug 26 09:41:54 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 26 Aug 2019 15:41:54 +0200
Subject: [Borgbackup] round robin
In-Reply-To: <20190826134944.21d20f3589d12984aee83a6e@lucassen.org>
References: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
 <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
 <20190826134944.21d20f3589d12984aee83a6e@lucassen.org>
Message-ID: <81f48b53-1544-2a1e-22e6-336df1bd960a@waldmann-edv.de>

> https://borgbackup.readthedocs.io/en/stable/quickstart.html
>
> If "borg create" exits non-zero, "borg prune" is invoked anyway.
> Suppose "borg create" exits non-zero during a longer period, let's
> say a week or so: would "borg prune --keep-hourly 24" leave at least
> the last 24 hourly backups intact?

Yes.
And in case borg create did not manage to create a new archive, the
amount of archives did not grow, so the set of archives borg prune is
seeing does not change, so it won't delete anything that it had not
deleted before already.

> Or would it be better to invoke "borg prune" only after a zero exit
> of "borg create"?

That would be somewhat cautious, but maybe it does not matter.

-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From mailinglists at lucassen.org Mon Aug 26 09:57:49 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Mon, 26 Aug 2019 15:57:49 +0200
Subject: [Borgbackup] round robin
In-Reply-To: <81f48b53-1544-2a1e-22e6-336df1bd960a@waldmann-edv.de>
References: <20190817092238.9212d032637ebeb066df9a0e@lucassen.org>
 <6c071ac2-61e5-831b-2a4d-a2bd4442888d@waldmann-edv.de>
 <20190826134944.21d20f3589d12984aee83a6e@lucassen.org>
 <81f48b53-1544-2a1e-22e6-336df1bd960a@waldmann-edv.de>
Message-ID: <20190826155749.c1bdcda50835e384b0a5852a@lucassen.org>

On Mon, 26 Aug 2019 15:41:54 +0200
Thomas Waldmann wrote:

> > https://borgbackup.readthedocs.io/en/stable/quickstart.html
> >
> > If "borg create" exits non-zero, "borg prune" is invoked anyway.
> > Suppose "borg create" exits non-zero during a longer period, let's
> > say a week or so: would "borg prune --keep-hourly 24" leave at
> > least the last 24 hourly backups intact?
>
> Yes. And in case borg create did not manage to create a new archive,
> the amount of archives did not grow, so the set of archives borg
> prune is seeing does not change, so it won't delete anything that it
> had not deleted before already.
>
> > Or would it be better to invoke "borg prune" only after a zero
> > exit of "borg create"?
>
> That would be somewhat cautious, but maybe it does not matter.

Ok thnx! It works like a charm btw :-)

-- 
richard lucassen
https://contact.xaq.nl/

From mailinglists at lucassen.org Mon Aug 26 12:53:51 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Mon, 26 Aug 2019 18:53:51 +0200
Subject: [Borgbackup] config
Message-ID: <20190826185351.77bb855add9289709bcc81f4@lucassen.org>

The Debian Stretch version is 1.0.9. Compared to Debian Buster, which
runs 1.1.9, the 1.0.9 version uses 5 MB segment files with a max of
10000 per dir.

The 1.1.9 version uses 500 MB segments and a max of 1000 per dir. Can
I use the 1.1.9 settings for 1.0.9, or should I stay with the
5 MB/10000 files settings?

1.0.9:

[repository]
version = 1
segments_per_dir = 10000
max_segment_size = 5242880
append_only = 0
id =

1.1.9:

[repository]
version = 1
segments_per_dir = 1000
max_segment_size = 524288000
append_only = 0
id =

The reason is that there is enough disk space, but I might run out of
inodes one day.

R.

-- 
richard lucassen
https://contact.xaq.nl/

From public at enkore.de Mon Aug 26 13:15:30 2019
From: public at enkore.de (Marian Beermann)
Date: Mon, 26 Aug 2019 19:15:30 +0200
Subject: [Borgbackup] config
In-Reply-To: <20190826185351.77bb855add9289709bcc81f4@lucassen.org>
References: <20190826185351.77bb855add9289709bcc81f4@lucassen.org>
Message-ID:

For writing (i.e.
borg create) it doesn't really matter much. You should see improved
throughput, although 1.1.x should do better than 1.0.x with the
equivalent settings (despite extra bloat).

For deleting archives, depending on the deduplication structure, you
will likely see much longer delete times involving much more I/O
compared to 5 MB segments. This is because deleting even a single-byte
chunk from a segment requires writing the other ~500 MB to a new
segment, compared to only writing ~5 MB with the 1.0 settings.

1.1 does some extra stuff that is basically meant to defer re-writing
of segments until there is a significant amount of deleted stuff in
them, which amortizes I/O at the expense of some storage overhead. The
algorithms used are quite simplistic but should usually work ok-ish.

-Marian

On 26.08.19 18:53, richard lucassen wrote:
> The Debian Stretch version is 1.0.9. Compared to Debian Buster, which
> runs 1.1.9, the 1.0.9 version uses 5 MB segment files with a max of
> 10000 per dir.
>
> The 1.1.9 version uses 500 MB segments and a max of 1000 per dir.
> Can I use the 1.1.9 settings for 1.0.9, or should I stay with the
> 5 MB/10000 files settings?
> [...]
> The reason is that there is enough disk space, but I might run out
> of inodes one day.

From mailinglists at lucassen.org Mon Aug 26 15:36:15 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Mon, 26 Aug 2019 21:36:15 +0200
Subject: [Borgbackup] config
In-Reply-To:
References: <20190826185351.77bb855add9289709bcc81f4@lucassen.org>
Message-ID: <20190826213615.3dba2b078006dddcb0a84c41@lucassen.org>

On Mon, 26 Aug 2019 19:15:30 +0200
Marian Beermann wrote:

> For writing (i.e. borg create) it doesn't really matter much. You
> should see improved throughput, although 1.1.x should do better than
> 1.0.x with the equivalent settings (despite extra bloat).
>
> For deleting archives, depending on the deduplication structure, you
> will likely see much longer delete times involving much more I/O
> compared to 5 MB segments. This is because deleting even a
> single-byte chunk from a segment requires writing the other ~500 MB
> to a new segment, compared to only writing ~5 MB with the 1.0
> settings.
>
> 1.1 does some extra stuff that is basically meant to defer
> re-writing of segments until there is a significant amount of
> deleted stuff in them, which amortizes I/O at the expense of some
> storage overhead. The algorithms used are quite simplistic but
> should usually work ok-ish.

Thnx for the explanation. As I'm at the start of backing up the
system, there's no problem if I delete everything and start a new
backup. It's a 100 GB system with lots of small files on an ext4 fs.

As I'm just backing up every hour, and as you said, "create" won't be
a problem. OTOH restoring is something that should not occur very
often. There is no borgbackup in Debian backports.

Given these facts, shall I leave it as it is now (500 MB/1000) or
should I switch back to the 5 MB/10000 system?

R.

-- 
richard lucassen
https://contact.xaq.nl/

From jdc at uwo.ca Mon Aug 26 15:47:01 2019
From: jdc at uwo.ca (Dan Christensen)
Date: Mon, 26 Aug 2019 19:47:01 +0000
Subject: [Borgbackup] config
In-Reply-To: <20190826213615.3dba2b078006dddcb0a84c41@lucassen.org>
 (richard lucassen's message of "Mon, 26 Aug 2019 21:36:15 +0200")
References: <20190826185351.77bb855add9289709bcc81f4@lucassen.org>
 <20190826213615.3dba2b078006dddcb0a84c41@lucassen.org>
Message-ID: <87a7bvzp3f.fsf@uwo.ca>

On Aug 26, 2019, richard lucassen wrote:

> Given these facts, shall I leave it as it is now (500 MB/1000) or
> should I switch back to the 5 MB/10000 system?

I think you should switch to using the latest borg. There's a
single-file download that you can simply run in place.

Dan

From mailinglists at lucassen.org Mon Aug 26 16:10:49 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Mon, 26 Aug 2019 22:10:49 +0200
Subject: [Borgbackup] config
In-Reply-To: <87a7bvzp3f.fsf@uwo.ca>
References: <20190826185351.77bb855add9289709bcc81f4@lucassen.org>
 <20190826213615.3dba2b078006dddcb0a84c41@lucassen.org>
 <87a7bvzp3f.fsf@uwo.ca>
Message-ID: <20190826221049.6e429b5493101368cc93484a@lucassen.org>

On Mon, 26 Aug 2019 19:47:01 +0000
Dan Christensen wrote:

> > Given these facts, shall I leave it as it is now (500 MB/1000) or
> > should I switch back to the 5 MB/10000 system?
>
> I think you should switch to using the latest borg. There's a
> single-file download that you can simply run in place.

That is just what came to my mind; I read there are statically linked
binaries.

R.

-- 
richard lucassen
https://contact.xaq.nl/

From mailinglists at lucassen.org Tue Aug 27 06:54:34 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Tue, 27 Aug 2019 12:54:34 +0200
Subject: [Borgbackup] config
In-Reply-To: <87a7bvzp3f.fsf@uwo.ca>
References: <20190826185351.77bb855add9289709bcc81f4@lucassen.org>
 <20190826213615.3dba2b078006dddcb0a84c41@lucassen.org>
 <87a7bvzp3f.fsf@uwo.ca>
Message-ID: <20190827125434.4b8b31a89ea96da875f3d90c@lucassen.org>

On Mon, 26 Aug 2019 19:47:01 +0000
Dan Christensen wrote:

> I think you should switch to using the latest borg. There's a
> single-file download that you can simply run in place.
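For the record, fetching the standalone binary looked like this (the
exact file name and version on the release page may differ):

  wget https://github.com/borgbackup/borg/releases/download/1.1.10/borg-linux64
  chmod +x borg-linux64
  ./borg-linux64 --version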
Binary 1.1.10: just eager to know: why is there this "gap"? Is that on
purpose?

[..]
-rw------- 1 root root 524296600 Aug 27 12:41 51
-rw------- 1 root root 524318526 Aug 27 12:41 52
-rw------- 1 root root 524518238 Aug 27 12:42 53
-rw------- 1 root root 429395609 Aug 27 12:43 54
-rw------- 1 root root        17 Aug 27 12:43 55
-rw------- 1 root root        17 Aug 27 12:43 56
-rw------- 1 root root 524306941 Aug 27 12:45 57
-rw------- 1 root root 524307339 Aug 27 12:46 58
-rw------- 1 root root 525531434 Aug 27 12:47 59
[..]

R.

-- 
richard lucassen
https://contact.xaq.nl/

From mailinglists at lucassen.org Tue Aug 27 09:33:50 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Tue, 27 Aug 2019 15:33:50 +0200
Subject: [Borgbackup] one file system
Message-ID: <20190827153350.7aa3c015801924a4660586d2@lucassen.org>

I try to run a backup of a / fs with the option --one-file-system. The
/ fs is 3 GB big, but borg is creating an archive that is at least
10 GB big (after which it seems to do nothing, according to strace).

This is the command line. Borg is the Debian Buster version 1.1.9.

$ uname -a
Linux qnap2 4.19.0-5-marvell #1 Debian 4.19.37-5+deb10u2 (2019-08-08) armv5tel GNU/Linux

BORG_REPO is exported:

/usr/bin/python3 /usr/bin/borg create --stats --show-rc \
    --one-file-system --compression lz4 --numeric-owner \
    --exclude-caches --exclude-from /etc/borg/repo/system.exclude \
    ::system--Tuesday-27-Aug-2019--15h15 /

Did I do something wrong somewhere?

R.

-- 
richard lucassen
https://contact.xaq.nl/

From tw at waldmann-edv.de Tue Aug 27 10:56:41 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Tue, 27 Aug 2019 16:56:41 +0200
Subject: [Borgbackup] one file system
In-Reply-To: <20190827153350.7aa3c015801924a4660586d2@lucassen.org>
References: <20190827153350.7aa3c015801924a4660586d2@lucassen.org>
Message-ID:

On 27.08.19 15:33, richard lucassen wrote:

> I try to run a backup of a / fs with the option --one-file-system.
> The / fs is 3 GB big, but borg is creating an archive that is at
> least 10 GB big (after which it seems to do nothing, according to
> strace).

Sounds strange...

> /usr/bin/python3 /usr/bin/borg

Usually, that is just "borg". Why are you calling it via python3? And
isn't /usr/bin in $PATH anyway?

> --compression lz4

lz4 compression is the default in borg 1.1.

> --numeric-owner

Is there some special reason you're using that?

> Did I do something wrong somewhere?

I didn't see anything wrong (just some nitpicks).

You can just have a look into the archive after it has finished (borg
list), then you'll see what's in there.

You could also run borg using --list or --progress (not both), then
you see what it is doing while it is running.

-- 
GPG Fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
Encrypted E-Mail is preferred / Verschluesselte E-Mail wird bevorzugt.

From mailinglists at lucassen.org Tue Aug 27 11:30:44 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Tue, 27 Aug 2019 17:30:44 +0200
Subject: [Borgbackup] one file system
In-Reply-To:
References: <20190827153350.7aa3c015801924a4660586d2@lucassen.org>
Message-ID: <20190827173044.06100f94023a217ca8e9e4ca@lucassen.org>

On Tue, 27 Aug 2019 16:56:41 +0200
Thomas Waldmann wrote:

> On 27.08.19 15:33, richard lucassen wrote:
> > I try to run a backup of a / fs with the option --one-file-system.
> > The / fs is 3 GB big, but borg is creating an archive that is at
> > least 10 GB big (after which it seems to do nothing, according to
> > strace).
>
> Sounds strange...

Yep. I will umount the data part and retry.
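A quick sanity check of what --one-file-system should actually see
(sizes will differ per system, of course):

  du -xsh /   # root filesystem only; -x stays on one filesystem
  df -h /     # compare with df's view of /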
> > /usr/bin/python3 /usr/bin/borg
>
> Usually, that is just "borg". Why are you calling it via python3?

This is from a "ps fax | grep borg".

> And isn't /usr/bin in $PATH anyway?

Of course :)

> > --compression lz4
>
> lz4 compression is the default in borg 1.1.

Ok, I'll remove it.

> > --numeric-owner
>
> Is there some special reason you're using that?

It is an old rsync habit. When /etc/passwd is in the game I use
--numeric-ids: I often use rsync across machines for backups, and
--numeric-ids keeps the user.group ownership in line with the
/etc/passwd or group file of that machine. Indeed there is no reason
for it here, as the backup is invoked on the same machine. But I can
imagine that I will use borg to back up one machine on another.

> > Did I do something wrong somewhere?
>
> I didn't see anything wrong (just some nitpicks).
>
> You can just have a look into the archive after it has finished
> (borg list), then you'll see what's in there.
>
> You could also run borg using --list or --progress (not both), then
> you see what it is doing while it is running.

Yes, I need to run it from the command line; in the script I use fd 3
to write the output to a mailbody.txt file that is mailed when the
script finishes. I will try it again tonight.

-- 
richard lucassen
https://contact.xaq.nl/

From mailinglists at lucassen.org Tue Aug 27 12:18:48 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Tue, 27 Aug 2019 18:18:48 +0200
Subject: [Borgbackup] one file system
In-Reply-To: <20190827173044.06100f94023a217ca8e9e4ca@lucassen.org>
References: <20190827153350.7aa3c015801924a4660586d2@lucassen.org>
 <20190827173044.06100f94023a217ca8e9e4ca@lucassen.org>
Message-ID: <20190827181848.54523b848180f14416f444d9@lucassen.org>

On Tue, 27 Aug 2019 17:30:44 +0200
richard lucassen wrote:

> > > I try to run a backup of a / fs with the option
> > > --one-file-system. The / fs is 3 GB big, but borg is creating an
> > > archive that is at least 10 GB big (after which it seems to do
> > > nothing, according to strace).
> >
> > Sounds strange...
>
> Yep. I will umount the data part and retry.

First of all, my stupidity: I thought the segments were 5 GB, but they
are only 500 MB, so the collected data was 1 GB big and not 10 GB. I
tried to start it from an xterm and it finishes without a problem.

Anyway, there was another problem, because after the first two
segments borg was still running for at least half an hour and was
waiting for something according to strace. I just ^C'd the script
because of the 10 GB issue (which was in fact 1 GB). Perhaps something
is wrong with the LUKS encrypted disk I'm using as backup device. I'll
find that out later tonight.

R.

-- 
richard lucassen
https://contact.xaq.nl/

From mailinglists at lucassen.org Thu Aug 29 08:38:55 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Thu, 29 Aug 2019 14:38:55 +0200
Subject: [Borgbackup] borg serve
Message-ID: <20190829143855.73ed20efa64e7533588eb42e@lucassen.org>

I have a fileserver with restricted CPU capabilities. On the same
network segment there is a host with enough CPU power. No transport
encryption is needed. I'd like to back up the data of that fileserver
on the host with enough CPU power.

I can use an ssh://path repository, but then the borg process runs on
the client with restricted CPU power, AFAIUI.

I can run borg on the server with enough CPU power, reading input over
NFS.

But somewhere I read that the best way to handle this is to run borg
on both sides. I can't remember if this is used for creating archives
or for some other functions.
Can anyone shine a light on this matter?

R.

-- 
richard lucassen
https://contact.xaq.nl/

From mailinglists at lucassen.org Thu Aug 29 08:45:32 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Thu, 29 Aug 2019 14:45:32 +0200
Subject: [Borgbackup] borg serve
In-Reply-To: <20190829143855.73ed20efa64e7533588eb42e@lucassen.org>
References: <20190829143855.73ed20efa64e7533588eb42e@lucassen.org>
Message-ID: <20190829144532.f88b729b4891f6659674a6c2@lucassen.org>

On Thu, 29 Aug 2019 14:38:55 +0200
richard lucassen wrote:

[addendum] I suppose "borg serve" is needed for this, but the "borg
serve" doc page is not really clear to me at the moment.

-- 
richard lucassen
https://contact.xaq.nl/

From gmatht at gmail.com Thu Aug 29 22:11:33 2019
From: gmatht at gmail.com (John McCabe-Dansted)
Date: Fri, 30 Aug 2019 10:11:33 +0800
Subject: [Borgbackup] borg serve
In-Reply-To: <20190829143855.73ed20efa64e7533588eb42e@lucassen.org>
References: <20190829143855.73ed20efa64e7533588eb42e@lucassen.org>
Message-ID:

On Thu, 29 Aug 2019 at 20:39, richard lucassen wrote:

> I can run borg on the server with enough CPU power, reading input
> over NFS.
>
> But somewhere I read that the best way to handle this is to run borg
> on both sides. I can't remember if this is used for creating
> archives or for some other functions.

IIRC running borg on both ends can reduce network usage. If you don't
care about that, I don't think it really matters. On a fast CPU and
slow network I'd expect running borg on both ends to be faster. I
presume you benchmarked both cases and found NFS to be faster?

From mailinglists at lucassen.org Fri Aug 30 02:41:27 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Fri, 30 Aug 2019 08:41:27 +0200
Subject: [Borgbackup] borg serve
In-Reply-To:
References: <20190829143855.73ed20efa64e7533588eb42e@lucassen.org>
Message-ID: <20190830084127.3f95a90d7e0802ce686a1312@lucassen.org>

On Fri, 30 Aug 2019 10:11:33 +0800
John McCabe-Dansted wrote:

> IIRC running borg on both ends can reduce network usage. If you
> don't care about that, I don't think it really matters. On a fast
> CPU and slow network I'd expect running borg on both ends to be
> faster. I presume you benchmarked both cases and found NFS to be
> faster?

No, I didn't install it yet; until now I only installed one-host-only
versions. I ask this just because "better ask one person who knows,
than twenty people to search".

R.

-- 
richard lucassen
https://contact.xaq.nl/

From eric at in3x.io Fri Aug 30 11:49:04 2019
From: eric at in3x.io (Eric S. Johansson)
Date: Fri, 30 Aug 2019 11:49:04 -0400
Subject: [Borgbackup] borg serve
In-Reply-To: <20190829143855.73ed20efa64e7533588eb42e@lucassen.org>
References: <20190829143855.73ed20efa64e7533588eb42e@lucassen.org>
Message-ID: <38ed3633-2d03-4883-bb0d-b81ea1af5d00@getmailbird.com>

I had a similar situation. I needed a NAS with NFS and I bought one
off the shelf. Turns out its NFS solution was not ideal. Instead of
having three different machines hitting the NAS via NFS, I decided to
route all of the borg traffic through one central machine running
borg. The central machine would then use NFS to store the borg
archives on the NAS.
This solution works really well, and the only downsides are that if
the central machine goes away, backups for everyone fail, and there is
network congestion because of traffic going into and out of the
central machine. It's not a huge problem, because I only have about
12 TB of data to back up and, thanks to borg deduplication, it's
nicely manageable.

Eric S. Johansson
eric at in3x.io
http://www.in3x.io
978-512-0272

On 8/29/2019 8:39:04 AM, richard lucassen wrote:

> I have a fileserver with restricted CPU capabilities. On the same
> network segment there is a host with enough CPU power. No transport
> encryption is needed. I'd like to back up the data of that
> fileserver on the host with enough CPU power.
>
> I can use an ssh://path repository, but then the borg process runs
> on the client with restricted CPU power, AFAIUI.
>
> I can run borg on the server with enough CPU power, reading input
> over NFS.
>
> But somewhere I read that the best way to handle this is to run borg
> on both sides. I can't remember if this is used for creating
> archives or for some other functions.
>
> Can anyone shine a light on this matter?

From mailinglists at lucassen.org Tue Sep 3 03:36:16 2019
From: mailinglists at lucassen.org (richard lucassen)
Date: Tue, 3 Sep 2019 09:36:16 +0200
Subject: [Borgbackup] borg serve
In-Reply-To: <38ed3633-2d03-4883-bb0d-b81ea1af5d00@getmailbird.com>
References: <20190829143855.73ed20efa64e7533588eb42e@lucassen.org>
 <38ed3633-2d03-4883-bb0d-b81ea1af5d00@getmailbird.com>
Message-ID: <20190903093616.a00b047ed316c748e7aa29cb@lucassen.org>

On Fri, 30 Aug 2019 11:49:04 -0400
"Eric S. Johansson" wrote:

> I had a similar situation. I needed a NAS with NFS and I bought one
> off the shelf. Turns out its NFS solution was not ideal. Instead of
> having three different machines hitting the NAS via NFS, I decided
> to route all of the borg traffic through one central machine running
> borg. The central machine would then use NFS to store the borg
> archives on the NAS.

I think I want this the other way around: I want to back up files of a
low-capacity-CPU NAS to a machine that has all the CPU power needed.

+-----------+            +-----------+
| NAS with  |            | PC with   |       +-----------+
| small CPU |------>-----|  big CPU  |-->----| Borg disk |
| 512MB RAM |            |  8GB RAM  |       +-----------+
+-----------+            +-----------+

What I want to know: can I run borg on both sides in order to reduce
borg's CPU power consumption on the NAS? IOW: share the work. Or
should I configure this in another way? Both machines are on the same
/24 subnet.

R.

-- 
richard lucassen
http://contact.xaq.nl/

From gordonm at zanotech.com.au Thu Sep 5 03:20:37 2019
From: gordonm at zanotech.com.au (Gordon Marinovich)
Date: Thu, 5 Sep 2019 15:20:37 +0800
Subject: [Borgbackup] Obtaining borg exit codes when using `timeout`
Message-ID:

Hi all

I've been using Borg for some months now and it's working well.
Recently I've tried to smarten up the daily backup checks by using
email subject lines based on the exit code of the borg backup. So I've
written a simple script in bash which is run from cron. Here is an
extract of it:

timeout --preserve-status 12h borg create \
    --stats \
    --show-rc \
    $REPO::fs1-{now:%Y-%m-%d} /mnt/nfs-fs1-data/ 2>&1 \
    | tee -a $LOGFILE $MONTHLY_REPORT
echo "BORG backup finished with exit code: $backup_exit.">> $LOGFILE timeout is used so that the job doesn't run into business hours, but the problem I'm having is that when timeout invokes and kills the backup, the exit code captured by *$backup_exit* is always 0. But if I browse the $LOGFILE, the output there shows the following: Received SIGTERM terminating with error status, rc 2 BORG backup finished at Thu 2019-Sep-05 06:00 with exit code: 0. Initially I didn't have the timeout option --preserve-status, which I thought would solve my problem, and allow the capture of borg's actual exit code, which should be "2", but alas not, it's still always zero. Has anyone used timeout --preserve status with borg, and could you obtain the exit code similar to what I am trying to do? If so, I would be grateful for some direction on how to solve this. Regards -- Gordon -------------- next part -------------- An HTML attachment was scrubbed... URL: From givens at cipsoft.com Thu Sep 5 04:57:12 2019 From: givens at cipsoft.com (Bruce Givens) Date: Thu, 5 Sep 2019 10:57:12 +0200 Subject: [Borgbackup] Obtaining borg exit codes when using `timeout` In-Reply-To: References: Message-ID: Hi Gordon, looks to me like you'll be capturing the exit status of tee in backup_exit. I'd suggest looking into the bash variable PIPESTATUS. Regards, Bruce On 9/5/19 9:20 AM, Gordon Marinovich wrote: > Hi all > > I've been using Borg for some months now and it's working well. > Recently I've tried to smarten up the daily backup checks by using email > subject lines based on the exit code of the borg backup. > So I've written a simple script in bash which is run from cron.? Here is > an extract of it: > | > > | > | > timeout --preserve-status 12h borg create \ > || > --stats \ > || > --show-rc \ > || > || > $REPO::fs1-{now:%Y-%m-%d} /mnt/nfs-fs1-data/ 2>&1 \ > || > | tee -a $LOGFILE $MONTHLY_REPORT \ > || > | > | > || > backup_exit=$? > || > || > echo "BORG backup finished with exit code: $backup_exit.">> $LOGFILE > | > | > > |timeout is used so that the job doesn't run into business hours, but > the problem I'm having is that when timeout invokes and kills the > backup, the exit code captured by *$backup_exit* is always 0. > > But if I browse the $LOGFILE, the output there shows the following: > Received SIGTERM > terminating with error status, rc 2 > BORG backup finished at Thu 2019-Sep-05 06:00 with exit code: 0. > > Initially I didn't have the timeout option --preserve-status, which I > thought would solve my problem, and allow the capture of borg's actual > exit code, which should be "2", but alas not, it's still always zero. > > Has anyone used timeout --preserve status with borg, and could you > obtain the exit code similar to what I am trying to do?? If so, I would > be grateful for some direction on how to solve this. > > Regards > > -- > Gordon > > > _______________________________________________ > Borgbackup mailing list > Borgbackup at python.org > https://mail.python.org/mailman/listinfo/borgbackup > From kevin.elliott at javajedi.com Thu Sep 5 16:57:48 2019 From: kevin.elliott at javajedi.com (Kevin Elliott) Date: Thu, 5 Sep 2019 21:57:48 +0100 Subject: [Borgbackup] borg check repository -> very slow Message-ID: Hi folks, I am running borg backup to a remote repo (a disk connected to a raspberry pi 3b+). Most operations are nice and fast, creating archives and restoring files are OK. However, borg check --repository-only is incredibly slow (taking several *hours* to complete!). 
By contrast, borg check --archives-only takes a few minutes to
complete. I thought this strange because it seems to conflict with the
documentation, which suggests the repository check should normally be
faster than the archive check. Am I understanding this correctly,
please?

https://borgbackup.readthedocs.io/en/stable/usage/check.html
https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/

borg check pi@raspberry:my_borg_repo --repository-only --debug --show-rc
using builtin fallback logging configuration
35 self tests completed in 0.13 seconds
SSH command line: ['ssh', 'pi@raspberry', 'borg', 'serve', '--umask=077', '--debug']
Remote: using builtin fallback logging configuration
Remote: 35 self tests completed in 0.66 seconds
Remote: using builtin fallback logging configuration
Remote: Initialized logging system for JSON-based protocol
Remote: Resolving repository path b'my_borg_repo'
Remote: Resolved repository path to '/media/pi/disk/my_borg_repo'
Remote: Starting repository check
Remote: Verified integrity of /media/pi/disk/my_borg_repo/index.3086
Remote: Read committed index of transaction 3086
Remote: Segment transaction is 3086
Remote: Determined transaction is 3086
Remote: Found 2985 segments

From dave at gasaway.org Fri Sep 6 02:10:15 2019
From: dave at gasaway.org (David Gasaway)
Date: Thu, 5 Sep 2019 23:10:15 -0700
Subject: [Borgbackup] borg check repository -> very slow
In-Reply-To:
References:
Message-ID:

On Thu, Sep 5, 2019 at 1:58 PM Kevin Elliott wrote:

> I am running borg backup to a remote repo (a disk connected to a
> Raspberry Pi 3B+). Most operations are nice and fast; creating
> archives and restoring files are OK. However, borg check
> --repository-only is incredibly slow (taking several *hours* to
> complete!). By contrast, borg check --archives-only takes a few
> minutes to complete. I thought this strange because it seems to
> conflict with the documentation, which suggests the repository check
> should normally be faster than the archive check. Am I understanding
> this correctly, please?

As the docs say, a repository check reads and checksums all the data
in the repo. Depending on configuration, it may be pulling all the
data over the network as well.

-- 
-:-:- David K. Gasaway
-:-:- Email: dave at gasaway.org

From kevin.elliott at javajedi.com Sat Sep 7 04:24:38 2019
From: kevin.elliott at javajedi.com (Kevin Elliott)
Date: Sat, 7 Sep 2019 09:24:38 +0100
Subject: [Borgbackup] Monitoring health of borg backups
Message-ID:

Hello,

What do people recommend for monitoring the health and ongoing
success/failure of borg backups? In short, I have mine running on a
daily crontab and want to configure some kind of alerting if there are
any problems. What tools are others using?

Am thinking healthchecks.io or home-assistant etc.

Kevin

From clickwir at gmail.com Sat Sep 7 09:53:18 2019
From: clickwir at gmail.com (Zack Coffey)
Date: Sat, 7 Sep 2019 07:53:18 -0600
Subject: [Borgbackup] Monitoring health of borg backups
In-Reply-To:
References:
Message-ID:

I use the borgmatic script to automate backups, checks etc. It emails
me daily.

On Sat, Sep 7, 2019, 2:24 AM Kevin Elliott wrote:

> Hello,
>
> What do people recommend for monitoring the health and ongoing
> success/failure of borg backups?
> In short, I have mine running on a daily crontab and want to
> configure some kind of alerting if there are any problems. What
> tools are others using?
>
> Am thinking healthchecks.io or home-assistant etc.
>
> Kevin

From eric at in3x.io Sat Sep 7 11:40:09 2019
From: eric at in3x.io (Eric S. Johansson)
Date: Sat, 7 Sep 2019 11:40:09 -0400
Subject: [Borgbackup] Monitoring health of borg backups
In-Reply-To:
References:
Message-ID:

I use borgmatic, send the output to telegram.

On 9/7/2019 4:24 AM, Kevin Elliott wrote:
> Hello,
>
> What do people recommend for monitoring the health and ongoing
> success/failure of borg backups? In short, I have mine running on a
> daily crontab and want to configure some kind of alerting if there
> are any problems. What tools are others using?
>
> Am thinking healthchecks.io or home-assistant etc.
>
> Kevin

-- 
Eric S. Johansson
eric at in3x.io
http://www.in3x.io
978-512-0272

From tw at waldmann-edv.de Sat Sep 7 18:48:07 2019
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sun, 8 Sep 2019 00:48:07 +0200
Subject: [Borgbackup] new borgbackup 1.2 alpha release for testing
Message-ID: <34fbccbf-c872-a033-6376-22ebf8f39184@waldmann-edv.de>

new borgbackup 1.2 alpha release for testing, see there:

https://github.com/borgbackup/borg/releases/tag/1.2.0a7

-- 
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393

From mats.lidell at lidells.se Sun Sep 8 10:52:23 2019
From: mats.lidell at lidells.se (Mats Lidell)
Date: Sun, 08 Sep 2019 16:52:23 +0200
Subject: [Borgbackup] Monitoring health of borg backups
In-Reply-To: (Kevin Elliott's message of "Sat, 7 Sep 2019 09:24:38 +0100")
References:
Message-ID: <87sgp68w8y.fsf@lidells.se>

> Kevin Elliott writes:
> What do people recommend for monitoring the health and ongoing
> success/failure of borg backups? In short, I have mine running on a
> daily crontab and want to configure some kind of alerting if there
> are any problems. What tools are others using?

I use email. Depending on the exit status of borg, I send myself a
fail or success mail. Not as powerful as getting an alert if
everything fails, but if there is no mail I know something is wrong
and I need to check things.

%% Mats

From archont at gmx.com Sun Sep 15 04:40:37 2019
From: archont at gmx.com (archont)
Date: Sun, 15 Sep 2019 08:40:37 +0000
Subject: [Borgbackup] borg serve
Message-ID:

`borg serve` helps only if you have a remote repository, as it runs on
that remote computer - AFAIK it does not do much processing, it merely
reads/writes the repo and sends the data stream to `borg create` over
ssh. The heavy work is done by `borg create`. BORG_CACHE_DIR resides
on the same computer that `borg create` is run on.

In your scenario, you can mount the NAS filesystem on the PC to make
it accessible locally, and then you can run `borg create` on that PC.
(So both source and repo are actually considered "local", and borg
serve will not be used at all.) That is, you do not need to install
borg on the NAS at all.
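For example (paths made up, assuming the NAS exports its data over
NFS):

  # on the PC with the big CPU:
  mount -t nfs nas:/export/data /mnt/nas-data
  borg create --stats /path/to/repo::nas-data-{now:%Y-%m-%d} /mnt/nas-data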
archont

--------

On Tue Sep 3 03:36:16 EDT 2019, richard lucassen wrote:

> I think I want this the other way around: I want to back up files of
> a low-capacity-CPU NAS to a machine that has all the CPU power
> needed.
>
> +-----------+            +-----------+
> | NAS with  |            | PC with   |       +-----------+
> | small CPU |------>-----|  big CPU  |-->----| Borg disk |
> | 512MB RAM |            |  8GB RAM  |       +-----------+
> +-----------+            +-----------+
>
> What I want to know: can I run borg on both sides in order to reduce
> borg's CPU power consumption on the NAS? IOW: share the work. Or
> should I configure this in another way? Both machines are on the
> same /24 subnet.
>
> R.

From archont at gmx.com Sun Sep 15 04:52:51 2019
From: archont at gmx.com (archont)
Date: Sun, 15 Sep 2019 08:52:51 +0000
Subject: [Borgbackup] Multiple clients to a single repo considerations
Message-ID: <9e361dad5e7b93b82fa59c9b2b0cd884@gmx.com>

Hi,

I'm about to add some automation to my home network PCs' backups. That
is: attach one HDD to my 24/7 server, run borgmatic overnight (server)
or when powered on for some time (HTPC, desktop). Then, rclone to
Backblaze B2. About 2 TB (deduplicated) in total. All Linux computers.
I use repokey and have a separate backup of both passphrase and key.

Now it comes to some security considerations:

* A single REPO for all computers is acceptable: it is only me who
  will recover data from archives.
* Borg docs say that it is not a good idea to have multiple remote
  clients accessing the same BORG_REPO, because the local cache gets
  invalidated any time the REPO is accessed by another client (cache
  rebuilding is slow) and multiple backups cannot run in parallel (I
  can live with that).
* What is worse, Borg docs also say: "When ... multiple clients
  independently updating the same repository, then Borg fails to
  provide confidentiality..."

Now, to resolve the issue with rebuilding the cache and the lack of
confidentiality, how about this solution:

* Run borgmatic (or `borg create` if that matters) individually on
  each computer, and share a single BORG_BASE_DIR (i.e. cache and
  configs) from the server via SMB; BORG_REPO would still be accessed
  via `borg serve` remotely, even when backing up the server itself
  (via localhost loopback). The BORG_REPO location would always be the
  same: 'root@server:/path/to/repo'.

**Would that resolve the limitations described in the Borg docs?**

Regards,
Archont

From gordonm at zanotech.com.au Mon Sep 16 04:46:38 2019
From: gordonm at zanotech.com.au (Gordon Marinovich)
Date: Mon, 16 Sep 2019 16:46:38 +0800
Subject: [Borgbackup] Obtaining borg exit codes when using `timeout`
In-Reply-To:
References:
Message-ID:

Good pointer Bruce. After having a read about it, I think that
PIPESTATUS will do the trick.

BASH - the endless journey! :-D

On Thu, 5 Sep 2019 at 17:05, Bruce Givens wrote:

> Hi Gordon,
>
> looks to me like you'll be capturing the exit status of tee in
> backup_exit. I'd suggest looking into the bash variable PIPESTATUS.
>
> Regards,
> Bruce

From panayotis at panayotis.com Tue Sep 24 13:21:50 2019
From: panayotis at panayotis.com (Panayotis Katsaloulis)
Date: Tue, 24 Sep 2019 20:21:50 +0300
Subject: [Borgbackup] Is it possible to show missing files?
In-Reply-To: <8d617598-27a6-47b5-9391-fefb430a0d1d@Spark>
References: <8d617598-27a6-47b5-9391-fefb430a0d1d@Spark>
Message-ID:

Hello

I am successfully using Borg backup in my everyday routine.
In order for me to feel safe, I am using the --list option, so that
I'd know if a new file was added or changed.

What I am wishing for is a similar option for missing files, so that
I'd know that (compared with the last backup) my new backup is missing
some files. This is crucial for me, and the way I solve it now is,
after backup, to restore to a demo place with borg mount & rsync, so
that I'd see what is really happening. But I wish there were a better
way than this.

Any hints on how to be able to monitor the missing files?

From clickwir at gmail.com Tue Sep 24 18:07:08 2019
From: clickwir at gmail.com (Zack Coffey)
Date: Tue, 24 Sep 2019 16:07:08 -0600
Subject: [Borgbackup] Is it possible to show missing files?
In-Reply-To:
References: <8d617598-27a6-47b5-9391-fefb430a0d1d@Spark>
Message-ID:

You don't have to mount and rsync. You can mount and just run an 'ls';
tie that together with a little 'uniq' and make a small script to
output the difference.

On Tue, Sep 24, 2019 at 11:22 AM Panayotis Katsaloulis wrote:

> Hello
>
> I am successfully using Borg backup in my everyday routine. In order
> for me to feel safe, I am using the --list option, so that I'd know
> if a new file was added or changed.
>
> What I am wishing for is a similar option for missing files, so that
> I'd know that (compared with the last backup) my new backup is
> missing some files. This is crucial for me, and the way I solve it
> now is, after backup, to restore to a demo place with borg mount &
> rsync, so that I'd see what is really happening. But I wish there
> were a better way than this.
>
> Any hints on how to be able to monitor the missing files?

From panayotis at panayotis.com Tue Sep 24 18:13:45 2019
From: panayotis at panayotis.com (Panayotis Katsaloulis)
Date: Wed, 25 Sep 2019 01:13:45 +0300
Subject: [Borgbackup] Is it possible to show missing files?
In-Reply-To:
References: <8d617598-27a6-47b5-9391-fefb430a0d1d@Spark>
Message-ID: <3aeb2c77-b006-4089-8a87-8e104aa3a755@Spark>

Thank you for the reply.

I'll need to rsync anyway. My question is whether there is a way, when
creating the archive, to also see the deleted files.

On 25 Sep 2019, 01:07 +0300, Zack Coffey wrote:

> You don't have to mount and rsync. You can mount and just run an
> 'ls'; tie that together with a little 'uniq' and make a small script
> to output the difference.
>
> On Tue, Sep 24, 2019 at 11:22 AM Panayotis Katsaloulis wrote:
>
> > What I am wishing for is a similar option for missing files, so
> > that I'd know that (compared with the last backup) my new backup
> > is missing some files.
> >
> > Any hints on how to be able to monitor the missing files?
From l0f4r0 at tuta.io Wed Sep 25 01:04:08 2019
From: l0f4r0 at tuta.io (l0f4r0 at tuta.io)
Date: Wed, 25 Sep 2019 07:04:08 +0200 (CEST)
Subject: [Borgbackup] Is it possible to show missing files?
In-Reply-To: <3aeb2c77-b006-4089-8a87-8e104aa3a755@Spark>
References: <8d617598-27a6-47b5-9391-fefb430a0d1d@Spark>
 <3aeb2c77-b006-4089-8a87-8e104aa3a755@Spark>
Message-ID:

Hello,

I'm not aware of such a directly accessible option in borg. However,
after your last borg create, why don't you run a diff (in a specific
script) on the borg create --list output compared to the output of the
second-to-last run?

Of course, you would need some (automatic) editing of these listings
before diffing (compare only the file status lines, throw away the
statuses themselves...).

l0f4r0

On 25 Sep 2019 at 00:13, panayotis at panayotis.com wrote:

> My question is whether there is a way, when creating the archive, to
> also see the deleted files.

From gait at ATComputing.nl Wed Sep 25 02:37:53 2019
From: gait at ATComputing.nl (Gerrit A. Smit)
Date: Wed, 25 Sep 2019 08:37:53 +0200
Subject: [Borgbackup] Is it possible to show missing files?
In-Reply-To:
References: <8d617598-27a6-47b5-9391-fefb430a0d1d@Spark>
 <3aeb2c77-b006-4089-8a87-8e104aa3a755@Spark>
Message-ID: <209a3069-c988-523c-ca0f-1087328d630f@ATComputing.nl>

Why make something new when it's already in your toolbox?

=> Mount two archives and do a diff -r on those.

Gerrit

-- 
Gerrit A. Smit
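P.S. A concrete version of that, as a sketch (archive names made up):

  mkdir -p /tmp/old /tmp/new
  borg mount /path/to/repo::monday  /tmp/old
  borg mount /path/to/repo::tuesday /tmp/new
  diff -rq /tmp/old /tmp/new   # lists files present in only one side
  borg umount /tmp/old
  borg umount /tmp/new

borg 1.1 also has `borg diff`, which compares two archives directly
and may avoid the mounting altogether.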