From mats.lidell at cag.se Mon Jan 1 17:47:22 2018
From: mats.lidell at cag.se (Mats Lidell)
Date: Mon, 01 Jan 2018 23:47:22 +0100
Subject: [Borgbackup] Possible to include sub folder of excluded folder?
Message-ID: <87tvw5ci2t.fsf@mail.contactor.se>
Hi,
Just started to explore borg and have done some initial backups. Seems to work fine. I would now like to come up with a way to exclude a folder but include one of its sub folders. I fail to find info about that in the FAQ or the general docs.
So here goes: will I get the desired behavior by including the sub folder in the list of folders to back up? Like this:
[...]
--exclude '/some/folder/to/exclude'
::...
'/some/folder/to/exclude/but/backup/this/subfolder'
[...]
Will this give the intended behavior and back up the sub folder, or will the exclude effectively hide all sub folders from the backup?
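Spelled out in full, the command I have in mind would look something like this (repo path and archive name are just placeholders):

```shell
# Placeholder repo path and archive name; the question is whether listing
# the subfolder as an extra root overrides the exclude of its parent.
borg create \
  --exclude '/some/folder/to/exclude' \
  /path/to/repo::archive-{now} \
  /some \
  /some/folder/to/exclude/but/backup/this/subfolder
```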
Yours
--
%% Mats
From joern.koerner at gmail.com Wed Jan 3 02:40:35 2018
From: joern.koerner at gmail.com (=?UTF-8?B?SsO2cm4gS8O2cm5lcg==?=)
Date: Wed, 3 Jan 2018 08:40:35 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
Message-ID: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
Hi
I'm trying to backup a CIFS mount into a borg repository which shows
errors on quite a lot files but not all.
/mnt/REMOTECIFS/var/lib/gems/1.9.1/doc/shoulda-matchers-2.8.0/rdoc/Shoulda/Matchers/ActiveRecord.html:
read: [Errno 5] Input/output error
I mounted the share with
mount -t cifs -o nofail,ro,username=root,password=SECRETPASS //FQDN/var /mnt/REMOTECIFS/var
and backup with
borg create --debug --filter AME --stats --show-rc --compression lz4 --exclude-caches /root/test/::{hostname}-{now} /mnt/REMOTECIFS/var/lib
I'm able to list, and even read the file but borg shows errors.
# ls -la
/mnt/REMOTECIFS/var/lib/gems/1.9.1/doc/shoulda-matchers-2.8.0/rdoc/Shoulda/Matchers/ActiveRecord.html
-rw-r--r-- 1 root root 148311 Mar  6  2015
/mnt/REMOTECIFS/var/lib/gems/1.9.1/doc/shoulda-matchers-2.8.0/rdoc/Shoulda/Matchers/ActiveRecord.html
# head
/mnt/REMOTECIFS/var/lib/gems/1.9.1/doc/shoulda-matchers-2.8.0/rdoc/Shoulda/Matchers/ActiveRecord.html
...
I also played with several mount options like
'cache=none,noacl,nouser_xattr,noperm' with no success at all.
Does anyone know how to solve this? I've no clue why I'm able to read
those files but unable to back them up...
From joern.koerner at gmail.com Wed Jan 3 04:32:06 2018
From: joern.koerner at gmail.com (=?UTF-8?B?SsO2cm4gS8O2cm5lcg==?=)
Date: Wed, 3 Jan 2018 10:32:06 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
In-Reply-To: <871sj7gyks.fsf@mail.contactor.se>
References: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
<871sj7gyks.fsf@mail.contactor.se>
Message-ID: <082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
> The option "ro" means read only. Have you tried without it?
>
Well I know, but ... no, I didn't try it with read/write yet. All I
need to do is back up (i.e. read only) those mountpoints, so I thought
ro should be enough.
Anyway, when mounting rw it seems to work. Does borg try to
access/update atime?
If so, why do I get this error only on some files rather than all of them?
Showing file stats
# stat
/mnt/REMOTECIFS/var/lib/gems/1.9.1/doc/shoulda-matchers-2.8.0/rdoc/Shoulda/Matchers/ActiveRecord.html
  File: /mnt/REMOTECIFS/var/lib/gems/1.9.1/doc/shoulda-matchers-2.8.0/rdoc/Shoulda/Matchers/ActiveRecord.html
  Size: 148311      Blocks: 2048       IO Block: 16384  regular file
Device: 31h/49d Inode: 1187416     Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2018-01-03 08:06:14.036727300 +0100
Modify: 2015-03-06 10:12:58.000000000 +0100
Change: 2017-05-12 09:58:08.281000000 +0200
 Birth: -
It turns out that adding '--noatime' to 'borg create' resolves this
behaviour.
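For reference, the working invocation is simply the one from my first mail plus '--noatime' (paths as before):

```shell
# Same command as in my first post, with --noatime added so borg
# never tries to preserve/avoid atime updates on the CIFS mount
borg create --debug --filter AME --stats --show-rc --compression lz4 \
  --exclude-caches --noatime \
  /root/test/::{hostname}-{now} /mnt/REMOTECIFS/var/lib
```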
Thanks a lot for pointing this out!
From tw at waldmann-edv.de Wed Jan 3 04:49:10 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Wed, 3 Jan 2018 10:49:10 +0100
Subject: [Borgbackup] Possible to include sub folder of excluded folder?
In-Reply-To: <87tvw5ci2t.fsf@mail.contactor.se>
References: <87tvw5ci2t.fsf@mail.contactor.se>
Message-ID: <1469b1ac-531e-2ba0-0766-ce0643a9be03@waldmann-edv.de>
> Will this give the intended behavior to backup the sub folder or will the exclude effectively hide all sub folders from the backup?
Just try it?
What you can also try (if using borg 1.1.x) is the "patterns"
include/exclude mechanism, which officially considers the case that
there can be something included inside some excluded directory.
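An untested sketch with placeholder paths; with patterns, order matters, so the include of the subfolder has to come before the exclude of its parent:

```shell
# Placeholder paths; the first matching pattern wins, so put the include first.
borg create \
  --pattern '+ /some/folder/to/exclude/but/backup/this/subfolder' \
  --pattern '- /some/folder/to/exclude' \
  /path/to/repo::archive-{now} /some
```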
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From tw at waldmann-edv.de Wed Jan 3 05:06:15 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Wed, 3 Jan 2018 11:06:15 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
In-Reply-To: <082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
References: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
<871sj7gyks.fsf@mail.contactor.se>
<082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
Message-ID: <44eedcda-819b-8fb7-2b59-8c7e860a3b35@waldmann-edv.de>
On 01/03/2018 10:32 AM, Jörn Körner wrote:
>
>> The option "ro" means read only. Have you tried without it?
Did you publicly reply to a private email here?
I did not receive that mail on the list.
To look at this, I am missing some info:
- your borg version and how you obtained/installed it, your python
version (if relevant)
- your OS / version (on the machine doing the mount)
- your OS / version (on the machine providing the smb share you mount)
If you have a github account, you could also file this as an issue on
our github issue tracker.
Does it only say "Backup error read: [Errno 5] Input/Output error"
or is there more info, like a python traceback?
Can you reproduce the error?
Can you run an strace and post the relevant part of it?
> Anyway, when mounting rw it seems to work. Does borg try to
> access/update atime?
Access: sure (atime comes as part of stat() result which is also needed
for other reasons).
Update: borg tries to open files in a mode that does not update atime,
but this mode is only available under specific circumstances (like when
running as root or owning the file). If the mode is not available, it
accesses the files normally, which usually leads to an atime update done
by the filesystem / OS itself (at least when mounted rw).
> It turns out that adding '--noatime' to 'borg create' resolves this
> behaviour.
That's interesting.
I can look up what's happening as soon as I know your precise borg version.
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From mats.lidell at cag.se Wed Jan 3 03:01:55 2018
From: mats.lidell at cag.se (Mats Lidell)
Date: Wed, 03 Jan 2018 09:01:55 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
In-Reply-To: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
(=?utf-8?Q?=22J=C3=B6rn_K=C3=B6rner=22's?=
message of "Wed, 3 Jan 2018 08:40:35 +0100")
References: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
Message-ID: <871sj7gyks.fsf@mail.contactor.se>
Hi Jörn
> Jörn Körner writes:
> I mounted the share with
> mount -t cifs -o nofail,ro,username=root,password=SECRETPASS //FQDN/var /mnt/REMOTECIFS/var
[...]
> Does anyone know how to solve this? I've no clue why I'm able to read those files but unable to back them up...
The option "ro" means read only. Have you tried without it?
Yours
--
%% Mats
From mats.lidell at cag.se Wed Jan 3 05:36:56 2018
From: mats.lidell at cag.se (Mats Lidell)
Date: Wed, 03 Jan 2018 11:36:56 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
In-Reply-To: <44eedcda-819b-8fb7-2b59-8c7e860a3b35@waldmann-edv.de> (Thomas
Waldmann's message of "Wed, 3 Jan 2018 11:06:15 +0100")
References: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
<871sj7gyks.fsf@mail.contactor.se>
<082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
<44eedcda-819b-8fb7-2b59-8c7e860a3b35@waldmann-edv.de>
Message-ID: <87mv1vfctz.fsf@mail.contactor.se>
Hi Thomas,
> Thomas Waldmann writes:
> Did you publically answer to a private email here?
> I did not receive that mail on the list.
No. I got it from the list.
Yours
--
%% Mats
From mats.lidell at cag.se Wed Jan 3 05:34:44 2018
From: mats.lidell at cag.se (Mats Lidell)
Date: Wed, 03 Jan 2018 11:34:44 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
In-Reply-To: <082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
(=?utf-8?Q?=22J=C3=B6rn_K=C3=B6rner=22's?=
message of "Wed, 3 Jan 2018 10:32:06 +0100")
References: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
<871sj7gyks.fsf@mail.contactor.se>
<082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
Message-ID: <87r2r7fcxn.fsf@mail.contactor.se>
Hi Jörn,
> Jörn Körner writes:
> Well I know, but ... no, I didn't try it with read/write yet. All I
> need to do is back up (i.e. read only) those mountpoints, so I thought
> ro should be enough.
Ah. My bad. I should have read your post better. I thought you were mounting the destination for the backup. Sorry for the confusion, but glad that it helped in some way.
Yours
--
%% Mats
From joern.koerner at gmail.com Wed Jan 3 06:55:48 2018
From: joern.koerner at gmail.com (=?UTF-8?B?SsO2cm4gS8O2cm5lcg==?=)
Date: Wed, 3 Jan 2018 12:55:48 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
In-Reply-To: <44eedcda-819b-8fb7-2b59-8c7e860a3b35@waldmann-edv.de>
References: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
<871sj7gyks.fsf@mail.contactor.se>
<082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
<44eedcda-819b-8fb7-2b59-8c7e860a3b35@waldmann-edv.de>
Message-ID: <8320408d-d364-6e1a-6cbe-067183400ca6@gmail.com>
On 03.01.2018 at 11:06, Thomas Waldmann wrote:
> On 01/03/2018 10:32 AM, Jörn Körner wrote:
>>> The option "ro" means read only. Have you tried without it?
>
> To look at this, I am missing some info.
>
> If you have a github account, you could also file this as an issue on
> our github issue tracker:
>
> - your borg version and how you obtained/installed it, your python
> version (if relevant)
> - your OS / version (on the machine doing the mount)
> - your OS / version (on the machine providing the smb share you mount)
I don't know if it's a bug or just a user error (mine).
If it turns out to be a bug, I'll create a new issue.
> Does it only say "Backup error read: [Errno 5] Input/Output error
> " or is there more info, like a python traceback?
There's no python traceback, just a borg error like the one quoted above.
> Can you reproduce the error?
Yes, kind of. Details follow.
>
> Can you run an strace and post the relevant part of it?
>
>> Anyway, when mounting rw it seems to work. Does borg try to
>> access/update atime?
> Access: sure (atime comes as part of stat() result which is also needed
> for other reasons).
>
> Update: borg tries to open files in a mode that does not update atime,
> but this mode is only available under specific circumstances (like when
> running as root or owning the file). If the mode is not available, it
> accesses the files normally, which usually leads to an atime update done
> by the filesystem / OS itself (at least when mounted rw).
>
>> It turns out that adding '--noatime' to 'borg create' resolves this
>> behaviour.
> That's interesting.
>
> I can look up what's happening as soon as I know your precise borg version.
>
I'll collect all information and also a way to reproduce in my next reply.
From public at enkore.de Wed Jan 3 07:05:48 2018
From: public at enkore.de (Marian Beermann)
Date: Wed, 3 Jan 2018 13:05:48 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
In-Reply-To: <8320408d-d364-6e1a-6cbe-067183400ca6@gmail.com>
References: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
<871sj7gyks.fsf@mail.contactor.se>
<082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
<44eedcda-819b-8fb7-2b59-8c7e860a3b35@waldmann-edv.de>
<8320408d-d364-6e1a-6cbe-067183400ca6@gmail.com>
Message-ID: <3180e677-ef6e-83d8-7430-0ed8d2d9423f@enkore.de>
Try the CIFS/Samba debugging/logging options; they might give you
details on what causes the error.
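Concretely, something like this (assuming a Linux cifs client; the exact knobs and log paths vary between versions):

```shell
# Client side: raise cifs debug verbosity (a bitmask; 7 is fairly verbose),
# then look for cifs messages in the kernel log
echo 7 > /proc/fs/cifs/cifsFYI
dmesg | tail -n 50

# Server side: raise smbd's log level at runtime and watch the log
smbcontrol smbd debug 4
tail -f /var/log/samba/log.smbd
```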
From joern.koerner at gmail.com Wed Jan 3 07:31:21 2018
From: joern.koerner at gmail.com (=?UTF-8?B?SsO2cm4gS8O2cm5lcg==?=)
Date: Wed, 3 Jan 2018 13:31:21 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
In-Reply-To: <3180e677-ef6e-83d8-7430-0ed8d2d9423f@enkore.de>
References: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
<871sj7gyks.fsf@mail.contactor.se>
<082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
<44eedcda-819b-8fb7-2b59-8c7e860a3b35@waldmann-edv.de>
<8320408d-d364-6e1a-6cbe-067183400ca6@gmail.com>
<3180e677-ef6e-83d8-7430-0ed8d2d9423f@enkore.de>
Message-ID: <0449e6cd-50ab-aeae-81a9-e4990524100b@gmail.com>
On 03.01.2018 at 13:05, Marian Beermann wrote:
> Try CIFS/SAMBA debugging/logging options, it might give you details what
> causes it to error.
Samba does not log anything interesting. As mentioned in my first post,
I am able to read/cat/less the file from bash.
From joern.koerner at gmail.com Wed Jan 3 08:51:44 2018
From: joern.koerner at gmail.com (=?UTF-8?B?SsO2cm4gS8O2cm5lcg==?=)
Date: Wed, 3 Jan 2018 14:51:44 +0100
Subject: [Borgbackup] Backup error read: [Errno 5] Input/output error on
CIFS mount
In-Reply-To: <44eedcda-819b-8fb7-2b59-8c7e860a3b35@waldmann-edv.de>
References: <5c85d345-7776-4cb7-32e1-6f219e8e9390@gmail.com>
<871sj7gyks.fsf@mail.contactor.se>
<082d3dd4-008e-9f69-b15d-d3920c2a3df5@gmail.com>
<44eedcda-819b-8fb7-2b59-8c7e860a3b35@waldmann-edv.de>
Message-ID:
Edit: Issue solved - scroll down to the end to read the solution.
Prerequisites:
I've a server I need to backup over CIFS, Debian 7.8 (Samba 3.6.6)
In /etc/samba/smb.conf I created a CIFS share (Debian default smb.conf):
[var]
    guest ok    = no
    valid users = root
    read only   = yes
    path        = /var
A side note: I noticed the Input/output error also on backups over NFS.
The Borg-machine is a recent Debian 9.2 running borg 1.1.3
I created a small script which excludes as much as possible:
#!/bin/bash
set -x
umount /mnt/REMOTECIFS/var
mkdir -p /mnt/REMOTECIFS/var
mount -t cifs -o ro,nofail,username=root,password=DPwl32768 //FQDN/var /mnt/REMOTECIFS/var
# borg delete --force /root/test
# rm -rf test ; borg init --encryption=none test
strace -f -e trace=all -o borg.strace borg create \
  --debug    \
  --progress \
  --filter AME \
  --stats    \
  --show-rc  \
  --compression lz4 \
  --exclude-caches \
  --exclude '/mnt/REMOTECIFS/var/lib/apt/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/aptitude/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/colord/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/dbus/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/dhcp/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/dictionaries-common/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/dpkg/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/exim4/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/gconf/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/gems/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/ghostscript/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/initramfs-tools/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/initscripts/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/insserv/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/ispell/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/jenkins/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/libuuid/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/logrotate/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/misc/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/mlocate/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/mysql-files/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/nfs/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/ntp/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/ntpdate/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/os-prober/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/pam/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/polkit-1/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/pycentral/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/python-support/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/security/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/sgml-base/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/sudo/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/tex-common/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/udisks/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/update-rc.d/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/urandom/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/usbutils/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/vim/*' \
  --exclude '/mnt/REMOTECIFS/var/lib/xml-core/*' \
  \
  /root/test/::{hostname}-{now} /mnt/REMOTECIFS/var/lib/
# Here are some problematic files in:
#  --exclude '/mnt/REMOTECIFS/var/lib/ucf/*' \
#  --exclude '/mnt/REMOTECIFS/var/lib/mysql/*' \
#  --exclude '/mnt/REMOTECIFS/var/lib/samba/*' \
read -e -p "Unmount ? [y/n]" ret
if [ "$ret" == "y" ]; then
    umount /mnt/REMOTECIFS/var
fi
set +x
The output is:
# ./borg-test
+ umount /mnt/REMOTECIFS/var
+ mkdir -p /mnt/REMOTECIFS/var
+ mount -t cifs -o ro,nofail,username=root,password=DPwl32768 //FQDN/var
/mnt/REMOTECIFS/var
+ strace -f -e trace=all -o borg.strace borg create --debug --progress
--filter AME --stats --show-rc --compression lz4 --exclude-caches
--exclude '/mnt/REMOTECIFS/var/lib/apt/*' --exclude
'/mnt/REMOTECIFS/var/lib/aptitude/*' --exclude
'/mnt/REMOTECIFS/var/lib/colord/*' --exclude
'/mnt/REMOTECIFS/var/lib/dbus/*' --exclude
'/mnt/REMOTECIFS/var/lib/dhcp/*' --exclude
'/mnt/REMOTECIFS/var/lib/dictionaries-common/*' --exclude
'/mnt/REMOTECIFS/var/lib/dpkg/*' --exclude
'/mnt/REMOTECIFS/var/lib/exim4/*' --exclude
'/mnt/REMOTECIFS/var/lib/gconf/*' --exclude
'/mnt/REMOTECIFS/var/lib/gems/*' --exclude
'/mnt/REMOTECIFS/var/lib/ghostscript/*' --exclude
'/mnt/REMOTECIFS/var/lib/initramfs-tools/*' --exclude
'/mnt/REMOTECIFS/var/lib/initscripts/*' --exclude
'/mnt/REMOTECIFS/var/lib/insserv/*' --exclude
'/mnt/REMOTECIFS/var/lib/ispell/*' --exclude
'/mnt/REMOTECIFS/var/lib/jenkins/*' --exclude
'/mnt/REMOTECIFS/var/lib/libuuid/*' --exclude
'/mnt/REMOTECIFS/var/lib/logrotate/*' --exclude
'/mnt/REMOTECIFS/var/lib/misc/*' --exclude
'/mnt/REMOTECIFS/var/lib/mlocate/*' --exclude
'/mnt/REMOTECIFS/var/lib/mysql-files/*' --exclude
'/mnt/REMOTECIFS/var/lib/nfs/*' --exclude
'/mnt/REMOTECIFS/var/lib/ntp/*' --exclude
'/mnt/REMOTECIFS/var/lib/ntpdate/*' --exclude
'/mnt/REMOTECIFS/var/lib/os-prober/*' --exclude
'/mnt/REMOTECIFS/var/lib/pam/*' --exclude
'/mnt/REMOTECIFS/var/lib/polkit-1/*' --exclude
'/mnt/REMOTECIFS/var/lib/pycentral/*' --exclude
'/mnt/REMOTECIFS/var/lib/python-support/*' --exclude
'/mnt/REMOTECIFS/var/lib/security/*' --exclude
'/mnt/REMOTECIFS/var/lib/sgml-base/*' --exclude
'/mnt/REMOTECIFS/var/lib/sudo/*' --exclude
'/mnt/REMOTECIFS/var/lib/tex-common/*' --exclude
'/mnt/REMOTECIFS/var/lib/udisks/*' --exclude
'/mnt/REMOTECIFS/var/lib/update-rc.d/*' --exclude
'/mnt/REMOTECIFS/var/lib/urandom/*' --exclude
'/mnt/REMOTECIFS/var/lib/usbutils/*' --exclude
'/mnt/REMOTECIFS/var/lib/vim/*' --exclude
'/mnt/REMOTECIFS/var/lib/xml-core/*' '/root/test/::{hostname}-{now}'
/mnt/REMOTECIFS/var/lib/
using builtin fallback logging configuration
35 self tests completed in 0.23 seconds
Verified integrity of /root/test/index.32
TAM-verified manifest
security: read previous location '/root/test'
security: read manifest timestamp '2018-01-03T12:26:10.730626'
security: determined newest manifest timestamp as 2018-01-03T12:26:10.730626
security: repository checks ok, allowing access
Verified integrity of
/root/.cache/borg/9ab7ea808040f6ebf6356a86cb4ce092ccca693626e8832922bc2fa2f14c3804/chunks
security: read previous location '/root/test'
security: read manifest timestamp '2018-01-03T12:26:10.730626'
security: determined newest manifest timestamp as 2018-01-03T12:26:10.730626
security: repository checks ok, allowing access
Reading files cache
...nt/REMOTECIFS/var/lib/insserv
Verified integrity of
/root/.cache/borg/9ab7ea808040f6ebf6356a86cb4ce092ccca693626e8832922bc2fa2f14c3804/files
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:apt:listchanges.conf: open:
[Errno 2] No such file or directory:
'/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:apt:listchanges.conf'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:ufw:after6.rules: open: [Errno 2]
No such file or directory:
'/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:ufw:after6.rules'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:gconf:2:path: open: [Errno 2] No
such file or directory:
'/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:gconf:2:path'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:samba:smb.conf: open: [Errno 2]
No such file or directory:
'/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:samba:smb.conf'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:default:grub: open: [Errno 2] No
such file or directory:
'/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:default:grub'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:ufw:before.rules: open: [Errno 2]
No such file or directory:
'/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:ufw:before.rules'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:idmapd.conf: open: [Errno 2] No
such file or directory: '/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:idmapd.conf'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:ufw:after.rules: open: [Errno 2]
No such file or directory:
'/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:ufw:after.rules'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:default:nfs-common: open: [Errno
2] No such file or directory:
'/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:default:nfs-common'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:ufw:before6.rules: open: [Errno
2] No such file or directory:
'/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:ufw:before6.rules'
/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:papersize: open: [Errno 2] No
such file or directory: '/mnt/REMOTECIFS/var/lib/ucf/cache/:etc:papersize'
/mnt/REMOTECIFS/var/lib/mysql/mysql/help_topic.MYD: read: [Errno 5] Input/output error
/mnt/REMOTECIFS/var/lib/mysql/ibdata1: read: [Errno 5] Input/output error
/mnt/REMOTECIFS/var/lib/samba/secrets.tdb: read: [Errno 5] Input/output error
/mnt/REMOTECIFS/var/lib/samba/passdb.tdb: read: [Errno 5] Input/output error
/mnt/REMOTECIFS/var/lib/samba/share_info.tdb: read: [Errno 5] Input/output error
/mnt/REMOTECIFS/var/lib/samba/account_policy.tdb: read: [Errno 5] Input/output error
/mnt/REMOTECIFS/var/lib/samba/registry.tdb: read: [Errno 5] Input/output error
Cleaned up 0 uncommitted segment files (== everything after segment 32).
Verified integrity of /root/test/hints.32
check_free_space: few segments, not requiring a full free segment
check_free_space: calculated working space for compact as 106484256 bytes
check_free_space: required bytes 109114244, free bytes 10742345728
security: saving state for
9ab7ea808040f6ebf6356a86cb4ce092ccca693626e8832922bc2fa2f14c3804 to
/root/.config/borg/security/9ab7ea808040f6ebf6356a86cb4ce092ccca693626e8832922bc2fa2f14c3804
security: current location?? /root/test
security: key type?????????? 2
security: manifest timestamp 2018-01-03T12:27:44.155892
------------------------------------------------------------------------------
Archive name: devnull-2018-01-03T13:27:10
Archive fingerprint: cc24ac999f299ffead66b89cc46b558a9f46f7be6fd8e7bd8c56a6847482282d
Time (start): Wed, 2018-01-03 13:27:10
Time (end):   Wed, 2018-01-03 13:27:44
Duration: 33.65 seconds
Number of files: 160
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               65.98 MB             14.19 MB              1.03 kB
All archives:                1.08 GB            373.93 MB            106.10 MB
                       Unique chunks         Total chunks
Chunk index:                    9432                30590
------------------------------------------------------------------------------
terminating with warning status, rc 1
+ read -e -p 'Unmount ? [y/n]' ret
Unmount ? [y/n]y
+ '[' y == y ']'
+ umount /mnt/REMOTECIFS/var
+ set +x
The strace log for the file that failed with
'/mnt/REMOTECIFS/var/lib/mysql/mysql/help_topic.MYD: read: [Errno 5]
Input/output error':
3158  lstat64("/mnt/REMOTECIFS/var/lib/mysql/mysql/help_topic.MYD", {st_mode=S_IFREG|0660, st_size=492348, ...}) = 0
3158  open("/mnt/REMOTECIFS/var/lib/mysql/mysql/help_topic.MYD", O_RDONLY|O_LARGEFILE|O_NOATIME|O_CLOEXEC) = 6
3158  fstat64(6, {st_mode=S_IFREG|0660, st_size=492348, ...}) = 0
3158  ioctl(6, TCGETS, 0xbf8bc50c)      = -1 ENOTTY (Inappropriate ioctl for device)
3158  _llseek(6, 0, [0], SEEK_CUR)      = 0
3158  read(6, 0xb4d3e008, 8388608)      = -1 EIO (Input/output error)
3158  close(6)                          = 0
As already mentioned, I can access and read the file from bash:
# head /mnt/REMOTECIFS/var/lib/mysql/mysql/help_topic.MYD
?JOINC
MySQL supports the following JOIN syntax for the table_references part
of SELECT statements and multiple-table DELETE and UPDATE statements:
table_references:
    escaped_table_reference [, escaped_table_reference] ...
escaped_table_reference:
    table_reference
Now, with Samba debugging turned on (which is hell on production
machines ;-), I found the following entry:
[2018/01/03 14:06:37.967032,  4] smbd/vfs.c:780(vfs_ChDir)
  vfs_ChDir to /var
[2018/01/03 14:06:37.967085,  2] smbd/open.c:1033(open_file)
  root opened file lib/mysql/mysql/help_topic.MYD read=Yes write=No (numopen=1)
[2018/01/03 14:06:37.967143,  3] smbd/oplock_linux.c:122(linux_set_kernel_oplock)
  linux_set_kernel_oplock: Refused oplock on file lib/mysql/mysql/help_topic.MYD, fd = 30, file_id = 801:122637:0. (Resource temporarily unavailable)
[2018/01/03 14:06:37.967215,  4] smbd/trans2.c:3948(store_file_unix_basic)
  store_file_unix_basic: st_mode=100660
[2018/01/03 14:06:37.967271,  9] smbd/trans2.c:982(send_trans2_replies)
  t2_rep: params_sent_thistime = 2, data_sent_thistime = 112, useable_space = 131010
[2018/01/03 14:06:37.967310,  9] smbd/trans2.c:984(send_trans2_replies)
  t2_rep: params_to_send = 2, data_to_send = 112, paramsize = 2, datasize = 112
[2018/01/03 14:06:37.967334,  5] lib/util.c:332(show_msg)
[2018/01/03 14:06:37.967368,  5] lib/util.c:342(show_msg)
A little bit of googling told me to turn off Samba kernel oplocks in
smb.conf with 'kernel oplocks = no', and voilà, the error was gone.
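For the record, this is the change on the server (in the [global] section of /etc/samba/smb.conf, followed by an smbd restart):

```
[global]
    kernel oplocks = no
```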
To cut a long story short:
----------------------------------------------------------------------------------
Better not to mount a backup source over the network at all. Instead,
install borg on every client to be backed up and set up a remote
repository, just as described in the docs:
https://borgbackup.readthedocs.io/en/stable/quickstart.html#remote-repositories
That's the one reliable way to back up securely from any client.
I've learned a lot because I initially was too lazy to do it right...
Lesson learned; apologies for the inconvenience, it was my fault ;-)
From andrea.francesconi at gmail.com Wed Jan 3 13:16:42 2018
From: andrea.francesconi at gmail.com (fRANz)
Date: Wed, 3 Jan 2018 19:16:42 +0100
Subject: [Borgbackup] Possible to include sub folder of excluded folder?
In-Reply-To: <87tvw5ci2t.fsf@mail.contactor.se>
References: <87tvw5ci2t.fsf@mail.contactor.se>
Message-ID:
On Mon, Jan 1, 2018 at 11:47 PM, Mats Lidell wrote:
> Will this give the intended behavior to backup the sub folder or will the exclude effectively hide all sub folders from the backup?
This didn't work for me:
...
--exclude /Users/xxx/yyy/apps/ff \
...
/Users/xxx/yyy/apps/ff/bookmarkbackups
...
borg 1.1.2 on macosx64
-f
From stefan at sbuehler.de Sat Jan 6 06:41:32 2018
From: stefan at sbuehler.de (Stefan Buehler)
Date: Sat, 6 Jan 2018 12:41:32 +0100
Subject: [Borgbackup] Mac Issues
Message-ID: <5D041C72-398F-419B-8D6F-D4757DAFFEC2@sbuehler.de>
Dear all,
I'm new to borg and have some issues and questions. I'm running on macOS 10.13.2, borg was installed by "pip install borg", and I have a repository on a local disk. Here it comes:
1. The link to the issue tracker on https://www.borgbackup.org/support/free.html does not lead anywhere but stays on the same page (at least for me).
2. When running create, I never see any files with M status. Instead, all modified files have status A. I found the item "I am seeing 'A' (added) status for an unchanged file!" in the FAQ, but it seems to not really match what I'm seeing.
3. And finally a more speculative question: Has anyone figured out a way to be more intelligent about what not to backup on a Mac? It would be great to somehow use Time Machine's exclusion list.
Cheers,
Stefan
From mats.lidell at cag.se Sat Jan 6 11:30:55 2018
From: mats.lidell at cag.se (Mats Lidell)
Date: Sat, 06 Jan 2018 17:30:55 +0100
Subject: [Borgbackup] Possible to include sub folder of excluded folder?
In-Reply-To: <1469b1ac-531e-2ba0-0766-ce0643a9be03@waldmann-edv.de> (Thomas
Waldmann's message of "Wed, 3 Jan 2018 10:49:10 +0100")
References: <87tvw5ci2t.fsf@mail.contactor.se>
<1469b1ac-531e-2ba0-0766-ce0643a9be03@waldmann-edv.de>
Message-ID: <87tvvzymo0.fsf@mail.contactor.se>
> Thomas Waldmann writes:
> What you can also try (if using borg 1.1.x) is the "patterns"
> include/exclude mechanism, which officially considers the case that
> there can be something included inside some excluded directory.
I tried it by using the --pattern option just before the old exclude line, like this:
[...]
--pattern '+ pp:/some/folder/to/exclude/but/backup/this/subfolder'
--exclude '/some/folder/to/exclude'
::...
[...]
That sort of worked, except that it only listed/backed up new files, i.e. files recently added to the subfolder, which surprised me. I have not verified this, but it looked like only files added since I made the first backup a few days ago. I was expecting a backup of all the files in the subfolder, since it had not been backed up before.
Do I need to do a "recreate" with the new set of options to get all files from the subfolder backed up?
Is this the expected behavior when changing the include/exclude options of a backup?
Yours
--
%% Mats
From giuseppe.arvati at gmail.com Wed Jan 10 12:27:54 2018
From: giuseppe.arvati at gmail.com (Giuseppe Arvati)
Date: Wed, 10 Jan 2018 18:27:54 +0100
Subject: [Borgbackup] performance problem
In-Reply-To:
References:
<48561fcc-ff39-f98b-748f-b9441fb30f1d@gmail.com>
<39c2bd07-820f-fd14-4327-438c28542ce2@waldmann-edv.de>
<69c7e572-61a0-67c5-4c54-b0b9a04fb97c@gmail.com>
<219b9977-37ba-d122-59c4-a9f15c76862f@waldmann-edv.de>
Message-ID: <01b235f4-d6e0-5d99-de0a-7478c783ead6@gmail.com>
Hello,
just to update the info about this issue: I found a workaround.
1) (190 GB, 11:00 hours to backup)
I tried to archive a samba share directly to a borg archive, but it was
very slow. I found that samba logs an error message for each file
handled by borg, and this slows down the backup.
Before, I used to take a "snapshot" of the samba share with rsync and
I did not have any problem.
http://samba.2283325.n4.nabble.com/Failed-to-find-domain-NT-AUTHORITY-td4726039.html
I talked with the samba list and they told me that this problem can
stem from my not-recommended configuration: I set up a file server and
an AD domain controller on the same machine.
As a workaround I avoided accessing the samba share directly from borg, so
2) now (190 GB, 2:20 hours to backup):
- snapshot of the samba share with rsync to an external NAS (20 mins)
- from a different machine, backup of the snapshot to a borg archive (2 hours)
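Roughly, the two steps look like this (all paths here are made up; the rsync flags are the usual archive ones):

```shell
# Step 1: snapshot the samba share onto the NAS (runs on the file server)
rsync -a --delete /srv/samba/share/ /mnt/nas/share-snapshot/

# Step 2: on a different machine, archive the snapshot with borg
borg create --stats --compression lz4 \
    /path/to/repo::share-{now} /mnt/nas/share-snapshot
```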
The problem is solved, but I do not understand why samba logs no errors
when rsync reads files from the share, yet logs errors when borg reads
the same files.
thank you
giuseppe
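The two-step workaround described above might look roughly like this (untested sketch; the share, snapshot, and repository paths are hypothetical placeholders):

```shell
# step 1: snapshot the samba share to the external NAS with rsync (~20 min)
rsync -a --delete /mnt/samba-share/ /mnt/nas/snapshot/

# step 2: from a different machine, back up the snapshot with borg (~2 h),
# so borg never touches the samba share directly
borg create --stats --compression lz4 \
    /path/to/repo::{hostname}-{now} /mnt/nas/snapshot
```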
From mlnl at mailbox.org Sat Jan 13 15:01:09 2018
From: mlnl at mailbox.org (mlnl)
Date: Sat, 13 Jan 2018 21:01:09 +0100
Subject: [Borgbackup] keyfile clarification
Message-ID:
Hi,
quickstart says:
$ borg init --encryption=keyfile PATH
man borg-init says:
"The key will be stored in your home directory (in .config/borg/keys)"
or ~/.config/borg/keys, but doesn't mention PATH
So, is PATH an optional argument for storing the keyfile in a
different directory than the default ~/.config/borg/keys, and could it
be substituted with BORG_KEYS_DIR/BORG_KEY_FILE?
--
mlnl
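For what it's worth: in `borg init --encryption=keyfile PATH`, PATH is the location of the repository, not of the key. The key file still goes to ~/.config/borg/keys, or wherever BORG_KEYS_DIR points. An untested sketch (directory names hypothetical):

```shell
# PATH is the repository location; the key file does not go there
export BORG_KEYS_DIR=/secure/borg-keys   # optional override of ~/.config/borg/keys
borg init --encryption=keyfile /path/to/repo
```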
From gait at atcomputing.nl Sun Jan 14 10:23:54 2018
From: gait at atcomputing.nl (Gerrit A. Smit)
Date: Sun, 14 Jan 2018 16:23:54 +0100
Subject: [Borgbackup] keyfile clarification
In-Reply-To:
References:
Message-ID:
borg(1) says:
$ borg init --encryption=repokey /path/to/repo
mlnl wrote on 13-01-18 at 21:01:
> quickstart says:
> $ borg init --encryption=keyfile PATH
>
From dave at gasaway.org Fri Jan 26 00:06:11 2018
From: dave at gasaway.org (David Gasaway)
Date: Thu, 25 Jan 2018 21:06:11 -0800
Subject: [Borgbackup] Pattern questions
Message-ID:
Hi,
I've been playing with using --pattern in an attempt to write a borg
handler for backupninja. I already got some questions answered in the bug
tracker, but still not quite grasping something. For example, let's say I
have this command line...
borg create --list --dry-run --pattern 'R/' --pattern '- /*' /backup/ninja-test::Test3
I would expect nothing to get listed, as everything should match the first
exclude pattern. Yet, it will start to list every file on the system. If
I try this...
borg create --list --dry-run --pattern 'R/' --exclude '/*' /backup/ninja-test::Test3
I get nothing, as expected. If I modify it to this...
borg create --list --dry-run --pattern 'R/' --pattern '+ /home/*' --exclude '/*' /backup/ninja-test::Test3
I still get nothing, which is not what I would expect. The documentation
states, "The first matching pattern is used so if an include pattern
matches before an exclude pattern, the file is backed up." It is probably
referring to an include --pattern followed by an exclude --pattern. But if
I try this...
borg create --list --dry-run --pattern 'R/' --pattern '+ /home/*' --exclude '/*' /backup/ninja-test::Test3
I'm back to where I started - a list of every file on the system. The
documentation also states, "Patterns (--pattern) and excludes (--exclude)
from the command line are considered first (in the order of appearance)."
This makes me think the third example should have worked. Are --excludes
always evaluated regardless of matches to any prior --pattern?
Let me set up a concrete scenario. Suppose I wanted a backup of all home
directories that start with 'a' using only patterns. How would this be
done? The following does not work...
borg create --pattern 'R/home' --pattern '+ /home/a*' --pattern '- /home/*' /backup::archive
Thanks.
--
-:-:- David K. Gasaway
-:-:- Email: dave at gasaway.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
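As a way to reason about first-match-wins evaluation, here is a toy model of the documented rule in plain shell. This is only an illustration, not borg's actual matcher, and it deliberately does not model the mixing of --pattern and --exclude that the examples above stumble over:

```shell
# toy first-match-wins matcher: patterns are tried in order, the first
# match decides, and an unmatched path falls through to the default (include)
decide() {
    case "$1" in
        /home/a*) echo include ;;   # models: + /home/a*
        /home/*)  echo exclude ;;   # models: - /home/*
        *)        echo include ;;   # default when nothing matches
    esac
}
decide /home/alice/notes.txt   # -> include (matches '+ /home/a*' first)
decide /home/bob/notes.txt     # -> exclude (falls through to '- /home/*')
decide /etc/hosts              # -> include (no pattern matches)
```

Under this model, putting the narrower include before the broader exclude is what selects only the 'a*' home directories.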
From howardm at xmission.com Sun Jan 28 21:51:06 2018
From: howardm at xmission.com (Howard Mann)
Date: Sun, 28 Jan 2018 19:51:06 -0700
Subject: [Borgbackup] Providing passphrase on the command line (Terminal)
Message-ID: <1E6573F4-02DC-4B96-9988-0CD9B7AC4387@xmission.com>
Hi,
I'm a new (non-techie) Borg user. I've successfully created a repository, with passphrase-associated encryption. I use macOS.
For each individual command I now issue in Terminal, such as "borg list", I have to enter the requested passphrase.
Is there a way I can avoid (or minimize) this requirement?
I know about the use of 'export BORG_PASSPHRASE="superawesomepassphrase"' in a script, which I've created and used successfully.
Thanks!
Howard
From azarus at posteo.net Mon Jan 29 00:12:58 2018
From: azarus at posteo.net (azarus)
Date: Mon, 29 Jan 2018 06:12:58 +0100
Subject: [Borgbackup] Providing passphrase on the command line (Terminal)
In-Reply-To: <1E6573F4-02DC-4B96-9988-0CD9B7AC4387@xmission.com>
References: <1E6573F4-02DC-4B96-9988-0CD9B7AC4387@xmission.com>
Message-ID:
On 29 January 2018 03:51:06 CET, Howard Mann wrote:
>Hi,
>
>I'm a new (non-techie) Borg user. I've successfully created a
>repository, with passphrase-associated encryption. I use macOS.
>
>For each individual command I now issue in Terminal, such as "borg
>list", I have to enter the requested passphrase.
>
>Is there a way I can avoid (or minimize) this requirement?
>
>I know about the use of 'export
>BORG_PASSPHRASE="superawesomepassphrase"' in a script, which I've
>created and used successfully.
What you've just mentioned is called an 'environment variable'; it can be used inside or outside a script.
Borg reads that environment variable either way, so I'd just export it before listing the repos.
Greetings,
--
azarus
From sitaramc at gmail.com Mon Jan 29 00:36:36 2018
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Mon, 29 Jan 2018 11:06:36 +0530
Subject: [Borgbackup] Providing passphrase on the command line (Terminal)
In-Reply-To:
References: <1E6573F4-02DC-4B96-9988-0CD9B7AC4387@xmission.com>
Message-ID: <20180129053636.GA16903@sita-lt.atc.tcs.com>
On Mon, Jan 29, 2018 at 06:12:58AM +0100, azarus wrote:
>
>
> On 29 January 2018 03:51:06 CET, Howard Mann wrote:
> >Hi,
> >
> >I'm a new (non-techie) Borg user. I've successfully created a
> >repository, with passphrase-associated encryption. I use macOS.
> >
> >For each individual command I now issue in Terminal, such as "borg
> >list", I have to enter the requested passphrase.
> >
> >Is there a way I can avoid (or minimize) this requirement?
> >
> >I know about the use of 'export
> >BORG_PASSPHRASE="superawesomepassphrase"' in a script, which I've
> >created and used successfully.
>
> That what you've just mentioned can be used inside a script or outside a script and is called an 'environment variable'.
>
> Borg regards that environment variable either way, so I'd just export it before listing the repos.
I'm also using that environment variable, but that is not ideal.
On multi-user systems where /proc is mounted by default, it can
reveal the passphrase to a "ps" command.
Many tools take input from STDIN, and -- when combined with a
gpg-based password manager like "pass" [1] -- that can make things
just a little bit safer.
I now see from 'man borg' that there is also a BORG_PASSCOMMAND
variable, which seems to me to be effectively the same as
passing in the passphrase via STDIN.
I will probably switch to that in my scripts when I get some
time. I suggest to the original poster that this may be a
better idea in the long run, even if there is a bit of upfront
effort in setting up either "pass" or something equivalent.
regards
sitaram
[1]: http://zx2c4.com/projects/password-store/
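A minimal sketch of the BORG_PASSCOMMAND approach mentioned above (the file path is hypothetical; with "pass" you would instead use something like BORG_PASSCOMMAND='pass show backup'). Borg runs the command and reads the passphrase from its stdout, so the passphrase itself never appears on borg's command line:

```shell
# store the passphrase in a mode-600 file (hypothetical path)
printf '%s\n' 'superawesomepassphrase' > /tmp/borg-pass
chmod 600 /tmp/borg-pass

# borg will run this command and use its stdout as the passphrase
export BORG_PASSCOMMAND='cat /tmp/borg-pass'

# simulate what borg does when it needs the passphrase:
sh -c "$BORG_PASSCOMMAND"    # -> superawesomepassphrase
```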
From howardm at xmission.com Mon Jan 29 00:42:47 2018
From: howardm at xmission.com (Howard Mann)
Date: Sun, 28 Jan 2018 22:42:47 -0700
Subject: [Borgbackup] Providing passphrase on the command line (Terminal)
References: <2ACD2023-17E1-4003-8E0C-38F2E00ED1DD@xmission.com>
Message-ID: <2512F22E-A22E-4006-9F09-2004A0E4608A@xmission.com>
>>
>>
>>
>>
>> On 29 January 2018 03:51:06 CET, Howard Mann > wrote:
>>> Hi,
>>>
>>> I'm a new (non-techie) Borg user. I've successfully created a
>>> repository, with passphrase-associated encryption. I use macOS.
>>>
>>> For each individual command I now issue in Terminal, such as "borg
>>> list", I have to enter the requested passphrase.
>>>
>>> Is there a way I can avoid (or minimize) this requirement?
>>>
>>> I know about the use of 'export
>>> BORG_PASSPHRASE="superawesomepassphrase"' in a script, which I've
>>> created and used successfully.
______
>>
>> That what you've just mentioned can be used inside a script or outside a script and is called an 'environment variable'.
>>
>> Borg regards that environment variable either way, so I'd just export it before listing the repos.
>
> ____
>
> @azarus
>
> If I understand you correctly:
>
> 1. When I start a Terminal session, I export the BORG_PASSPHRASE as my first action. Or does the export survive from a previous Terminal session?
> 2. When the passphrase is needed, the Terminal just provides it and I hit "return"? Or does the provision of the passphrase just happen in the "background"?
>
> (I'll do some reading on environment variables, in general.)
>
> Thanks!
>
> Howard
From public at enkore.de Mon Jan 29 04:32:47 2018
From: public at enkore.de (Marian Beermann)
Date: Mon, 29 Jan 2018 10:32:47 +0100
Subject: [Borgbackup] Providing passphrase on the command line (Terminal)
In-Reply-To: <20180129053636.GA16903@sita-lt.atc.tcs.com>
References: <1E6573F4-02DC-4B96-9988-0CD9B7AC4387@xmission.com>
<20180129053636.GA16903@sita-lt.atc.tcs.com>
Message-ID: <4285ecb9-1d59-074a-41b2-77e3464ecfcf@enkore.de>
On 29.01.2018 06:36, Sitaram Chamarty wrote:
> On Mon, Jan 29, 2018 at 06:12:58AM +0100, azarus wrote:
>>
>>
>> On 29 January 2018 03:51:06 CET, Howard Mann wrote:
>>> Hi,
>>>
>>> I'm a new (non-techie) Borg user. I've successfully created a
>>> repository, with passphrase-associated encryption. I use macOS.
>>>
>>> For each individual command I now issue in Terminal, such as "borg
>>> list", I have to enter the requested passphrase.
>>>
>>> Is there a way I can avoid (or minimize) this requirement?
>>>
>>> I know about the use of 'export
>>> BORG_PASSPHRASE="superawesomepassphrase"' in a script, which I've
>>> created and used successfully.
>>
>> That what you've just mentioned can be used inside a script or outside a script and is called an 'environment variable'.
>>
>> Borg regards that environment variable either way, so I'd just export it before listing the repos.
>
> I'm also using that environment variable, but that is not ideal.
> On multi user systems where /proc is mounted default, it can
> reveal the passphrase to a "ps" command.
Process environments are private. "export FOO=bar" can't be observed by
ps, because "export" is not an external command; it is always a shell
built-in.
Even if you do "FOO=bar some_command", the "FOO=bar" part is interpreted
by the shell and won't show up in ps.
-Marian
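Marian's point can be checked directly: the shell passes the variable to the child through its environment, not through its argument list, and ps shows the argument list. A quick demonstration:

```shell
# the child process sees FOO through its environment...
FOO=bar sh -c 'echo "$FOO"'    # -> bar
# ...but "FOO=bar" is consumed by the invoking shell, so it never appears
# in the child's argument list (which is what ps displays)
```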
From sitaramc at gmail.com Mon Jan 29 05:39:43 2018
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Mon, 29 Jan 2018 16:09:43 +0530
Subject: [Borgbackup] Providing passphrase on the command line (Terminal)
In-Reply-To: <4285ecb9-1d59-074a-41b2-77e3464ecfcf@enkore.de>
References: <1E6573F4-02DC-4B96-9988-0CD9B7AC4387@xmission.com>
<20180129053636.GA16903@sita-lt.atc.tcs.com>
<4285ecb9-1d59-074a-41b2-77e3464ecfcf@enkore.de>
Message-ID: <20180129103943.GA27805@sita-lt.atc.tcs.com>
On Mon, Jan 29, 2018 at 10:32:47AM +0100, Marian Beermann wrote:
> On 29.01.2018 06:36, Sitaram Chamarty wrote:
> > On Mon, Jan 29, 2018 at 06:12:58AM +0100, azarus wrote:
> >>
> >>
> >> On 29 January 2018 03:51:06 CET, Howard Mann wrote:
> >>> Hi,
> >>>
> >>> I'm a new (non-techie) Borg user. I've successfully created a
> >>> repository, with passphrase-associated encryption. I use macOS.
> >>>
> >>> For each individual command I now issue in Terminal, such as "borg
> >>> list", I have to enter the requested passphrase.
> >>>
> >>> Is there a way I can avoid (or minimize) this requirement?
> >>>
> >>> I know about the use of 'export
> >>> BORG_PASSPHRASE="superawesomepassphrase"' in a script, which I've
> >>> created and used successfully.
> >>
> >> That what you've just mentioned can be used inside a script or outside a script and is called an 'environment variable'.
> >>
> >> Borg regards that environment variable either way, so I'd just export it before listing the repos.
> >
> > I'm also using that environment variable, but that is not ideal.
> > On multi user systems where /proc is mounted default, it can
> > reveal the passphrase to a "ps" command.
>
> Process environments are private. "export FOO=bar" can't be observed by
> ps, because "export" can't be a command, but must always be a shell
> built-in.
>
> Even if you do "FOO=bar some_command", the "FOO=bar" part is interpreted
> by the shell and won't show up in ps.
You're right; I forgot that it's only root that can pull
environment variables from another user's process.
Still, using environment variables makes it too easy for root.
Using a pipe makes him work harder to get your password!
From public at enkore.de Mon Jan 29 11:20:00 2018
From: public at enkore.de (Marian Beermann)
Date: Mon, 29 Jan 2018 17:20:00 +0100
Subject: [Borgbackup] Providing passphrase on the command line (Terminal)
In-Reply-To: <20180129103943.GA27805@sita-lt.atc.tcs.com>
References: <1E6573F4-02DC-4B96-9988-0CD9B7AC4387@xmission.com>
<20180129053636.GA16903@sita-lt.atc.tcs.com>
<4285ecb9-1d59-074a-41b2-77e3464ecfcf@enkore.de>
<20180129103943.GA27805@sita-lt.atc.tcs.com>
Message-ID: <37033974-e6b8-d8f2-3e56-3600b03eecbc@enkore.de>
On 29.01.2018 11:39, Sitaram Chamarty wrote:
> On Mon, Jan 29, 2018 at 10:32:47AM +0100, Marian Beermann wrote:
>> On 29.01.2018 06:36, Sitaram Chamarty wrote:
>>> On Mon, Jan 29, 2018 at 06:12:58AM +0100, azarus wrote:
>>>>
>>>>
>>>> On 29 January 2018 03:51:06 CET, Howard Mann wrote:
>>>>> Hi,
>>>>>
>>>>> I'm a new (non-techie) Borg user. I've successfully created a
>>>>> repository, with passphrase-associated encryption. I use macOS.
>>>>>
>>>>> For each individual command I now issue in Terminal, such as "borg
>>>>> list", I have to enter the requested passphrase.
>>>>>
>>>>> Is there a way I can avoid (or minimize) this requirement?
>>>>>
>>>>> I know about the use of 'export
>>>>> BORG_PASSPHRASE="superawesomepassphrase"' in a script, which I've
>>>>> created and used successfully.
>>>>
>>>> That what you've just mentioned can be used inside a script or outside a script and is called an 'environment variable'.
>>>>
>>>> Borg regards that environment variable either way, so I'd just export it before listing the repos.
>>>
>>> I'm also using that environment variable, but that is not ideal.
>>> On multi user systems where /proc is mounted default, it can
>>> reveal the passphrase to a "ps" command.
>>
>> Process environments are private. "export FOO=bar" can't be observed by
>> ps, because "export" can't be a command, but must always be a shell
>> built-in.
>>
>> Even if you do "FOO=bar some_command", the "FOO=bar" part is interpreted
>> by the shell and won't show up in ps.
>
> You're right; I forgot that it's only root that can pull
> environment variables from another user's process.
>
> Still, using environment variables makes it too easy for root.
> Using a pipe makes him work harder to get your password!
>
Not really: root can just grep the memory of the borg process for
BORG_PASSPHRASE; the passphrase starts right after it (with a \0
terminator in front and back).
Failing that root could also attach a debugger and traverse CPython's
data structures (also possible without debugger).
-Marian
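The first attack needs no debugger at all: on Linux, a process's initial environment is visible in /proc. Demonstrated here against the current process's own environ (root can do the same read against any PID):

```shell
# a process's initial environment is in /proc/<pid>/environ (NUL-separated);
# it is readable by the process owner and by root
FOO=bar sh -c 'tr "\0" "\n" < /proc/$$/environ | grep "^FOO="'   # -> FOO=bar (on Linux)
```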
From tuomov at iki.fi Wed Jan 31 17:48:33 2018
From: tuomov at iki.fi (Tuomo Valkonen)
Date: Wed, 31 Jan 2018 22:48:33 +0000
Subject: [Borgbackup] Borgend - "dreamtime" scheduler and macOS tray icon
Message-ID:
After Crashplan ended their consumer product, I could not find another suitable "ready" backup service. Borg did most of what I needed, using my own server space. It was just lacking a scheduler that works reliably and without any user intervention on a laptop.
Well, here it is. Or will be: things are still pretty much in the early stages, and there are bound to be several issues to fix and improve. But it already works well enough for my daily use. I hope you don't mind me advertising it.
In any case, Borgend is a retrying and queuing scheduler as well as a macOS tray icon for BorgBackup. If you are not on macOS, no tray icon will be displayed, but you can still use Borgend as a scheduler.
* Designed with laptops in mind, Borgend works in "dreamtime": on macOS the scheduler discounts system sleep periods from the backup intervals. If you wish, you can also choose "realtime" scheduling.
* You can have multiple backups to the same repository; for example, you may back up a small subset of your files every couple of hours, and everything once a day or once a week. Borgend will ensure that only one backup is launched at a time, and queue the others until the repository is available.
* If there was an error, such as when you are offline and back up to a remote ssh location, Borgend will retry the backup at set shorter intervals.
You can get it at: https://bitbucket.org/tuomov/borgend/
Tuomo
From stefan at sbuehler.de Thu Feb 1 05:02:09 2018
From: stefan at sbuehler.de (Stefan Buehler)
Date: Thu, 1 Feb 2018 11:02:09 +0100
Subject: [Borgbackup] Borgend - "dreamtime" scheduler and macOS tray icon
In-Reply-To:
References:
Message-ID: <56F774EB-2D62-4259-8E55-01B0A882861C@sbuehler.de>
Dear Tuomo,
I'm very excited about your post, since this seems to exactly match my need. With a lot of trial and error, I have managed to set up a borg backup on my Mac, using Lingon to create a Launch Daemon for borg.
Something that needed quite a bit of puzzling is that I want to back up all users, so borg has to run as root. Is that possible with Borgend?
Best wishes,
Stefan
> On 31. Jan 2018, at 23:48, Tuomo Valkonen wrote:
>
> After Crashplan ended their consumer product, I could not find another suitable "ready" backup service. Borg did most of what I needed, using my own server space. It was just lacking a scheduler that works reliably and without any user intervention on a laptop.
>
> Well, here it is. Or will be: things are still pretty much in the early stages, and there are bound to be several issues to fix and improve. But it already works well enough for my daily use. I hope you don't mind me advertising it.
>
> In any case, Borgend is a retrying and queuing scheduler as well as a macOS tray icon for BorgBackup. If you are not on macOS, no tray icon will be displayed, but you can still use Borgend as a scheduler.
>
> * Designed with laptops in mind, Borgend works in "dreamtime": on macOS the scheduler discounts system sleep periods from the backup intervals. If you wish, you can also choose "realtime" scheduling.
> * You can have multiple backups to the same repository; for example, you may back up a small subset of your files every couple of hours, and everything once a day or once a week. Borgend will ensure that only one backup is launched at a time, and queue the others until the repository is available.
> * If there was an error, such as when you are offline and back up to a remote ssh location, Borgend will retry the backup at set shorter intervals.
>
> You can get it at: https://bitbucket.org/tuomov/borgend/
>
> Tuomo
>
>
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
From tuomov at iki.fi Thu Feb 1 13:03:19 2018
From: tuomov at iki.fi (Tuomo Valkonen)
Date: Thu, 1 Feb 2018 18:03:19 +0000
Subject: [Borgbackup] Borgend - "dreamtime" scheduler and macOS tray icon
In-Reply-To: <56F774EB-2D62-4259-8E55-01B0A882861C@sbuehler.de>
References:
<56F774EB-2D62-4259-8E55-01B0A882861C@sbuehler.de>
Message-ID: <9D176E39-0D2D-4043-85A4-AE91721B8057@iki.fi>
Hi Stefan,
> On 1 Feb 2018, at 10:02, Stefan Buehler wrote:
>
> Something that needed quite a bit of puzzling is that I want to backup all users, so borg has to run as root. Is that possible with Borgend?
It should be possible to run it as root, though without the tray icon (--no-tray option), so you would only be able to monitor what is happening from the log. It is on the todo list to separate the tray icon from the main scheduler into its own application, which would also make it possible to provide a command-line interface to the scheduler. However, the priority right now is to get the basics working reliably.
Tuomo
From stefan at sbuehler.de Sat Feb 3 10:16:57 2018
From: stefan at sbuehler.de (Stefan Buehler)
Date: Sat, 3 Feb 2018 16:16:57 +0100
Subject: [Borgbackup] Borg on Mac with Fusion Drive
Message-ID: <9E07E15E-44F6-4783-96C7-2C3F2A53E4F9@sbuehler.de>
Dear all,
I'm running borg backups on a Mac that has a "Fusion Drive" (a combination of SSD and spinning disk). What puzzles me is that I never see file status "M" in the log output, and also there always seem to be a lot of (seemingly random) old files being added again.
Is it possible that this behaviour is related to the Fusion Drive? Perhaps a file that is shuffled by its internal magic from SSD to spinning disk, or vice versa, will look new to borg?
I'm running with the defaults for the --files-cache option; should I use something else there?
All the best,
Stefan
From frank at free.de Wed Feb 14 03:34:47 2018
From: frank at free.de (Frank)
Date: Wed, 14 Feb 2018 02:34:47 -0600
Subject: [Borgbackup] Error during borg prune
Message-ID: <20180214023447.35fd77f6@free.de>
I got an error while doing borg prune. It seems this had been happening
for some weeks now, but I just noticed that the old archives were
accumulating and not being pruned. It seems new archives can be
created, but old ones cannot be pruned. I tried using the --force option
but the prune process exits before completing. Is there something I can
do to rescue my repository?
Frank
Data integrity error: Invalid segment entry size 2778763224 - too big [segment 90673, offset 3739400]
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/borg/archiver.py", line 4175, in main
    exit_code = archiver.run(args)
  File "/usr/lib/python3.6/site-packages/borg/archiver.py", line 4107, in run
    return set_ec(func(args))
  File "/usr/lib/python3.6/site-packages/borg/archiver.py", line 150, in wrapper
    return method(self, args, repository=repository, **kwargs)
  File "/usr/lib/python3.6/site-packages/borg/archiver.py", line 1562, in do_prune
    progress=args.progress).delete(stats, forced=args.forced)
  File "/usr/lib/python3.6/site-packages/borg/archive.py", line 786, in delete
    for (i, (items_id, data)) in enumerate(zip(items_ids, self.repository.get_many(items_ids))):
  File "/usr/lib/python3.6/site-packages/borg/remote.py", line 928, in get_many
    for resp in self.call_many('get', [{'id': id} for id in ids], is_preloaded=is_preloaded):
  File "/usr/lib/python3.6/site-packages/borg/remote.py", line 773, in call_many
    handle_error(unpacked)
  File "/usr/lib/python3.6/site-packages/borg/remote.py", line 735, in handle_error
    raise IntegrityError(args[0].decode())
borg.helpers.IntegrityError: Data integrity error: Invalid segment entry size 2778763224 - too big [segment 90673, offset 3739400]

Platform: Linux GNU 4.15.2-gnu-1 #1 SMP PREEMPT Mon Feb 12 19:40:37 CET 2018 x86_64 Linux: arch
Borg: 1.1.4 Python: CPython 3.6.4
PID: 2083 CWD: /home/user
sys.argv: ['/bin/borg', 'prune', '-v', '--list', '-s', '--force', '--prefix', '{hostname}-', '--keep-daily=7', '--keep-weekly=4', '--keep-monthly=1', 'ssh://user at 192.168.0.100:22//mnt/HD/Borg']
SSH_ORIGINAL_COMMAND: None
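A hedged sketch of the usual recovery steps for an integrity error like the one above (untested; it reuses the repository URL from the traceback). "borg check --repair" is documented as a potentially dangerous last resort, so take a copy of the repository before attempting it:

```shell
# read-only consistency check of the repository and its archives
borg check -v ssh://user@192.168.0.100:22//mnt/HD/Borg

# last resort, only after taking a copy of the repository:
borg check --repair ssh://user@192.168.0.100:22//mnt/HD/Borg
```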
From rob at coldstripe.net Thu Feb 15 18:09:34 2018
From: rob at coldstripe.net (Rob Klingsten)
Date: Thu, 15 Feb 2018 18:09:34 -0500
Subject: [Borgbackup] Hostname change, now there is a problem?
Message-ID: <838C6807-5B0F-46AF-95DE-BE9DF1EA6D6F@coldstripe.net>
Hi, I've been running Borg for about 5 months, backing all my home media and stuff up to a remote server; it's about 2.3 TB of stuff. After the initial backup, nightly runs have been taking about 15 minutes, assuming I add a few files here and there.
Last week I upgraded my server to new hardware, and without thinking I gave it a new hostname. I brought over all the data from the old server with rsync -av, and it's in the exact same path as it was on the old server (/data/*).
I added --files-cache=mtime,size to the borg create command, as I didn't want to re-backup everything, since certainly all the inodes are different.
So, the initial run on the new server took over 11 hours, and looked like this:
Synchronizing chunks cache...
Archives: 14, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 14.
Fetching and building archive index for kauai-2018-02-01 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2017-11-30 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2017-10-31 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2018-01-21 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2017-12-31 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2018-01-31 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2018-01-14 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2018-01-29 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2018-01-30 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2017-09-30 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2018-01-26 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2018-01-27 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2018-01-07 ...
Merging into master chunks index ...
Fetching and building archive index for kauai-2018-01-28 ...
Merging into master chunks index ...
Done.
------------------------------------------------------------------------------
Archive name: borneo-2018-02-13
Archive fingerprint: 3eac3565bdf9379ed5d5433e07c0de984cb6cf3cc9aaa2d0c5f95a54ad469319
Time (start): Tue, 2018-02-13 09:21:04
Time (end): Tue, 2018-02-13 20:33:00
Duration: 11 hours 11 minutes 56.53 seconds
Number of files: 40354
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
Original size Compressed size Deduplicated size
This archive: 2.52 TB 2.52 TB 5.76 GB
All archives: 37.67 TB 37.67 TB 2.52 TB
Unique chunks Total chunks
Chunk index: 1013776 15132838
------------------------------------------------------------------------------
Keeping archive: borneo-2018-02-13 Tue, 2018-02-13 09:21:04 [3eac3565bdf9379ed5d5433e07c0de984cb6cf3cc9aaa2d0c5f95a54ad469319]
The first thing I noticed was that it didn't list the previously retained archives with the old hostname. Then the second borg run, the next day, was still running after 24 hours and I finally killed it. It didn't seem to be transmitting any data, because I can tell when a big upload is going on: my connection gets crushed. Is my remote archive now hosed? Is there a way I can salvage it, or am I simply doing something wrong?
thanks for any help!
Rob K.
From gait at ATComputing.nl Fri Feb 16 03:47:23 2018
From: gait at ATComputing.nl (Gerrit A. Smit)
Date: Fri, 16 Feb 2018 09:47:23 +0100
Subject: [Borgbackup] Hostname change, now there is a problem?
In-Reply-To: <838C6807-5B0F-46AF-95DE-BE9DF1EA6D6F@coldstripe.net>
References: <838C6807-5B0F-46AF-95DE-BE9DF1EA6D6F@coldstripe.net>
Message-ID: <21bcfe4c-0303-2851-344b-fb03046237c8@ATComputing.nl>
Rob Klingsten wrote on 16-02-18 at 00:09:
> I brought over all the data from the old server with rsync -av and it's in the exact same path as it was on the old server (/data/*).
So this rsync included all local borg data?
And did you configure borg to not do compression?
--
Kind regards,
AT COMPUTING BV
Gerrit A. Smit
AT Computing Phone: +31 24 352 72 22
Dé one-stop-Linux-shop Phone (course office): +31 24 352 72 72
Kerkenbos 12-38 TI at ATComputing.nl
6546 BE Nijmegen www.atcomputing.nl
https://www.linkedin.com/in/gesmit/
From frank at free.de Fri Feb 16 17:23:35 2018
From: frank at free.de (Frank)
Date: Fri, 16 Feb 2018 16:23:35 -0600
Subject: [Borgbackup] Good-bye Borg
Message-ID: <20180216162335.11814c63@free.de>
This is the second time a repository has gotten corrupted, so I find
Borg to be unreliable as a backup solution. Honestly, I'm disappointed,
because Borg has all the features that I was looking for in a backup
tool, except for repositories getting corrupted. For the moment I will
have to go back to rsync as a backup solution, but I don't rule out the
possibility of using Borg in the future when it has become more
reliable.
Thank you anyway to the creators of Borg.
--
Frank
From clickwir at gmail.com Fri Feb 16 17:46:01 2018
From: clickwir at gmail.com (Zack Coffey)
Date: Fri, 16 Feb 2018 15:46:01 -0700
Subject: [Borgbackup] Good-bye Borg
In-Reply-To: <20180216162335.11814c63@free.de>
References: <20180216162335.11814c63@free.de>
Message-ID:
I've just started expanding my use of borg as a backup solution. I'm
interested in hearing more detail about what kind of problems and
corruption you've run into.
We've been using the borgmatic script to help manage borg and we've seen
quite nice results so far. So far we've not heard of many problems with
borg. Like I said, we'd be interested to know more about what you've seen.
On Fri, Feb 16, 2018 at 3:23 PM, Frank wrote:
> This is the second time a repository has gotten corrupted, so I find
> Borg to be unreliable as a backup solution. Honestly, I'm disappointed,
> because Borg has all the features that I was looking for in a backup
> tool, except for repositories getting corrupted. For the moment I will
> have to go back to rsync as a backup solution, but I don't rule out the
> possibility of using Borg in the future when it has become more
> reliable.
>
> Thank you anyway to the creators of Borg.
>
> --
> Frank
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
>
From frank at free.de Fri Feb 16 19:02:54 2018
From: frank at free.de (Frank)
Date: Fri, 16 Feb 2018 18:02:54 -0600
Subject: [Borgbackup] Good-bye Borg
In-Reply-To:
References: <20180216162335.11814c63@free.de>
Message-ID: <20180216180254.2b0264b7@free.de>
I explained what happened in this post
https://mail.python.org/pipermail/borgbackup/2018q1/000987.html but I
received no response.
--
Frank
--Previous message--
From: Zack Coffey
Sent on Friday, 16 February 2018 at 03:46 pm (UTC -0700)
To: Frank borgbackup at python.org
Subject: Re: [Borgbackup] Good-bye Borg
> I've just started expanding my use of borg as a backup solution. I'm
> interested in hearing more detail about what kind of problems and
> corruption you've run into.
>
> We've been using the borgmatic script to help manage borg and we've
> seen quite nice results so far. So far we've not heard of many
> problems with borg. Like I said, we'd be interested to know more
> about what you've seen.
>
> On Fri, Feb 16, 2018 at 3:23 PM, Frank wrote:
>
> > This is the second time a repository has gotten corrupted, so I find
> > Borg to be unreliable as a backup solution. Honestly, I'm
> > disappointed, because Borg has all the features that I was looking
> > for in a backup tool, except for repositories getting corrupted. For
> > the moment I will have to go back to rsync as a backup solution,
> > but I don't rule out the possibility of using Borg in the future
> > when it has become more reliable.
> >
> > Thank you anyway to the creators of Borg.
> >
> > --
> > Frank
> > _______________________________________________
> > Borgbackup mailing list
> > Borgbackup at python.org
> > https://mail.python.org/mailman/listinfo/borgbackup
> >
From frank at free.de Fri Feb 16 19:23:04 2018
From: frank at free.de (Frank)
Date: Fri, 16 Feb 2018 18:23:04 -0600
Subject: [Borgbackup] Good-bye Borg
In-Reply-To: <87sha0h4y5.fsf@curie.anarc.at>
References: <20180216162335.11814c63@free.de> <87sha0h4y5.fsf@curie.anarc.at>
Message-ID: <20180216182304.04576c5a@free.de>
I just checked the drive with smartmontools and it just shows the usual
wear. The problem is that Borg repositories are so
sensitive that a small failure in a single sector affects the entire
repository - and I consider that unsuitable for a backup solution.
Thank you for the link. I will check it right away!
--
Frank
--Previous message--
From: Antoine Beaupré
Sent on Friday, 16 February 2018 at 06:48 pm (UTC -0500)
To: Frank
Subject: Re: [Borgbackup] Good-bye Borg
> On 2018-02-16 16:23:35, Frank wrote:
> > This is the second time a repository gets corrupted, thus I find
> > Borg to be unreliable as a backup solution. Honestly I'm
> > disappointed because Borg has all the features that I was looking
> > for in a backup tool except for repositories getting corrupted. For
> > the moment I will have to go back to rsync as a backup solution,
> > but I don't rule out the possibility of using Borg in the future
> > when it has become more reliable.
>
> Hi Frank!
>
> Did you consider disk corruption issues?
>
> If you're going to switch away from borg, I would encourage you to
> take a look at the Golang equivalent: https://restic.net/
>
> I'm thinking of switching myself because the community is so much more
> welcoming... I would probably write a converter as well...
>
> Good luck!
>
> A.
>
> --
> You will know the truth of your path by what makes you happy.
> - Aristotle
From anarcat at debian.org Fri Feb 16 20:09:51 2018
From: anarcat at debian.org (=?utf-8?Q?Antoine_Beaupr=C3=A9?=)
Date: Fri, 16 Feb 2018 20:09:51 -0500
Subject: [Borgbackup] Good-bye Borg
In-Reply-To: <20180216182304.04576c5a@free.de>
References: <20180216162335.11814c63@free.de> <87sha0h4y5.fsf@curie.anarc.at>
<20180216182304.04576c5a@free.de>
Message-ID: <87k1vch15s.fsf@curie.anarc.at>
On 2018-02-16 18:23:04, Frank wrote:
> I just checked the drive with smartmontools and it just shows the usual
> wear. The problem is that Borg repositories are very
> sensitive that a small failure in a sector affects the entire
> repository - and I consider that unsuitable for a backup solution.
error-correction options have been discussed in the past:
https://github.com/borgbackup/borg/issues/225
> Thank you for the link. I will check it right away!
I'm not sure why you forwarded my message to the mailing list: this was
sent in private on purpose... :) But whatever, hope that helps!
A.
PS: you will probably have the same issues with restic, if you have
bit-wise errors in the repository. no error correction there either:
https://github.com/restic/restic/issues/804
--
The sea, the great unifier, is man's only hope. Today more than
ever before, the old saying has a literal meaning: we are all in the
same boat.
- Jacques Yves Cousteau - Oceanographer
From frank at free.de Fri Feb 16 20:28:40 2018
From: frank at free.de (Frank)
Date: Fri, 16 Feb 2018 19:28:40 -0600
Subject: [Borgbackup] Good-bye Borg
In-Reply-To: <87k1vch15s.fsf@curie.anarc.at>
References: <20180216162335.11814c63@free.de> <87sha0h4y5.fsf@curie.anarc.at>
<20180216182304.04576c5a@free.de> <87k1vch15s.fsf@curie.anarc.at>
Message-ID: <20180216192840.73fb438f@free.de>
Sorry about the forwarding. I thought all messages in the mailing list
were public. My bad.
> PS: you will probably have the same issues with restic, if you have
> bit-wise errors in the repository. no error correction there either:
>
> https://github.com/restic/restic/issues/804
A good reason to stick to plain rsync and what could be called
a sneakernet metabackup, i.e. a backup of the backup drive every three
months to a newer drive that stays safely stored and disconnected the
rest of the time.
Anyway thanks for the info and good luck!
--
Frank
--Previous message--
From: Antoine Beaupré
Sent on Friday, 16 February 2018 at 08:09 pm (UTC -0500)
To: Frank , borgbackup at python.org
Subject: Re: [Borgbackup] Good-bye Borg
> On 2018-02-16 18:23:04, Frank wrote:
> > I just checked the drive with smartmontools and it just shows the
> > usual wear. The problem is that Borg repositories are so
> > sensitive that a small failure in a single sector affects the entire
> > repository - and I consider that unsuitable for a backup solution.
>
> error-correction options have been discussed in the past:
>
> https://github.com/borgbackup/borg/issues/225
>
> > Thank you for the link. I will check it right away!
>
> I'm not sure why you forwarded my message to the mailing list: this
> was sent in private on purpose... :) But whatever, hope that helps!
>
> A.
>
> PS: you will probably have the same issues with restic, if you have
> bit-wise errors in the repository. no error correction there either:
>
> https://github.com/restic/restic/issues/804
> --
> The sea, the great unifier, is man's only hope. Today more than
> ever before, the old saying has a literal meaning: we are all in the
> same boat.
> - Jacques Yves Cousteau - Oceanographer
From public at enkore.de Sat Feb 17 05:02:34 2018
From: public at enkore.de (Marian Beermann)
Date: Sat, 17 Feb 2018 11:02:34 +0100
Subject: [Borgbackup] Good-bye Borg
In-Reply-To: <20180216192840.73fb438f@free.de>
References: <20180216162335.11814c63@free.de> <87sha0h4y5.fsf@curie.anarc.at>
<20180216182304.04576c5a@free.de> <87k1vch15s.fsf@curie.anarc.at>
<20180216192840.73fb438f@free.de>
Message-ID: <62266040-60db-40b4-8489-5eb6b972a980@enkore.de>
On 17.02.2018 02:28, Frank wrote:
> Sorry about the forwarding. I thought all messages in the mailing list
> were public. My bad.
>
>> PS: you will probably have the same issues with restic, if you have
>> bit-wise errors in the repository. no error correction there either:
>>
>> https://github.com/restic/restic/issues/804
>
> A good reason to stick to plain rsync and what could be called
> a sneakernet metabackup, i.e. a backup of the backup drive every three
> months to a newer drive that stays safely stored and disconnected the
> rest of the time.
>
> Anyway thanks for the info and good luck!
>
The "metabackup" approach should work with most of the "other" backup
tools as well.
Going where the community is better makes a ton of sense, especially if
you think you're going to stick around or contribute. People > code.
-Marian
PS: The problem of "what is the worst thing that can happen if I flip a
single BIT?" affects a lot of software, even file systems; it is quite
hard to deal with. A much easier-to-fix question is "what is the thing
that *probably* happens if I flip a single bit?".
From sitaramc at gmail.com Sat Feb 17 05:58:26 2018
From: sitaramc at gmail.com (Sitaram Chamarty)
Date: Sat, 17 Feb 2018 16:28:26 +0530
Subject: [Borgbackup] Good-bye Borg
In-Reply-To: <62266040-60db-40b4-8489-5eb6b972a980@enkore.de>
References: <20180216162335.11814c63@free.de> <87sha0h4y5.fsf@curie.anarc.at>
<20180216182304.04576c5a@free.de> <87k1vch15s.fsf@curie.anarc.at>
<20180216192840.73fb438f@free.de>
<62266040-60db-40b4-8489-5eb6b972a980@enkore.de>
Message-ID:
On 02/17/2018 03:32 PM, Marian Beermann wrote:
> On 17.02.2018 02:28, Frank wrote:
>> Sorry about the forwarding. I thought all messages in the mailing list
>> were public. My bad.
>>
>>> PS: you will probably have the same issues with restic, if you have
>>> bit-wise errors in the repository. no error correction there either:
>>>
>>> https://github.com/restic/restic/issues/804
>>
>> A good reason to stick to plain rsync and what could be called
>> a sneakernet metabackup, i.e. a backup of the backup drive every three
>> months to a newer drive that stays safely stored and disconnected the
>> rest of the time.
>>
>> Anyway thanks for the info and good luck!
>>
>
> The "metabackup" approach should work with most of the "other" backup
> tools as well.
>
> Going where the community is better makes a ton of sense, especially if
> you think you're going to stick around or contribute. People > code.
>
> -Marian
>
> PS: The problem of "what is the worst thing that can happen if I flip a
> single BIT?" affects a lot of software, even file systems; it is quite
> hard to deal with. A much easier-to-fix question is "what is the thing
> that *probably* happens if I flip a single bit?".
If you have a single bit flip, rsync won't catch it (modulo things
like ZFS etc., which I am not too familiar with), and you may end up
propagating a bad copy.
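Sitaram's point can be made concrete with a checksum manifest, which catches silent corruption that rsync's timestamp/size heuristic misses. A minimal sketch (function names and the workflow are illustrative, not part of any borg tooling):

```python
# Sketch: keep a SHA-256 manifest of a tree so silent corruption is
# caught before the next sync propagates it. Names are illustrative.
import hashlib
import os

def manifest(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    out = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                out[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return out

def verify(root, saved):
    """Return paths whose current content no longer matches the manifest."""
    current = manifest(root)
    return [p for p, digest in saved.items() if current.get(p) != digest]
```

Regenerate the manifest after each known-good backup and run verify() before the next sync; a non-empty result means a copy rotted and should not be propagated.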
My solution is to take multiple backups for the really important stuff.
I have 3 cron-ed backups (once an hour to an SD card permanently
inserted in the SD card slot of my laptop, once a day to two different
servers at work if I am in the office and connected) and 3 manual,
approximately once/month, backups (two to external HDs, one to a VPS in
Europe (I live in India)).
I am pretty sure I am the only person I know who is this paranoid. As
further evidence, I should mention that, when passing through airport
security, I take out the SD card and put it in my pocket. You know...
just in case someone walks off with the laptop (intentionally or
otherwise) and I am unable to catch him and retrieve it!
regards
sitaram
From tw at waldmann-edv.de Sat Feb 17 08:18:27 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 17 Feb 2018 14:18:27 +0100
Subject: [Borgbackup] Good-bye Borg
In-Reply-To: <20180216162335.11814c63@free.de>
References: <20180216162335.11814c63@free.de>
Message-ID: <753f763c-40ba-f136-ba0f-641b7ab7a5cc@waldmann-edv.de>
On 16.02.2018 23:23, Frank wrote:
> This is the second time a repository gets corrupted, thus I find Borg
> to be unreliable as a backup solution.
Yeah, shoot the messenger, that is a historically proven solution.
No, just joking.
Find the root cause of the corruption and fix that. Likely it is some of
your hardware and we even have docs about that, read them.
> For the moment I will have to go back to rsync as a backup solution,
rsync will likely complain less as it does not compute/store strong
cryptographic hashes / MACs of your data, so a lot of corruption might
just go unnoticed.
If you use the popular rsync+hardlinks approach (and timestamps to
detect changes), a silently corrupted file in the backup will also
affect all backups of that file (with that timestamp).
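The sharing Thomas describes is easy to demonstrate: with rsync --link-dest (or cp -al), an unchanged file in every snapshot is the same inode, so one damaged copy damages them all. A small sketch (paths are made up):

```python
# Demonstrates why hardlink-based snapshots share corruption:
# both "snapshots" of an unchanged file point at one inode.
import os
import tempfile

root = tempfile.mkdtemp()
snap1 = os.path.join(root, "snap1")
snap2 = os.path.join(root, "snap2")
os.mkdir(snap1)
os.mkdir(snap2)

with open(os.path.join(snap1, "file"), "w") as f:
    f.write("payload")

# What rsync --link-dest does for an unchanged file: hard link, not copy.
os.link(os.path.join(snap1, "file"), os.path.join(snap2, "file"))

# Damage the file via one snapshot...
with open(os.path.join(snap2, "file"), "w") as f:
    f.write("garbage")

# ...and the "old" snapshot shows the same damage.
with open(os.path.join(snap1, "file")) as f:
    print(f.read())  # garbage
```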
From tw at waldmann-edv.de Sat Feb 17 08:32:15 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 17 Feb 2018 14:32:15 +0100
Subject: [Borgbackup] Error during borg prune
In-Reply-To: <20180214023447.35fd77f6@free.de>
References: <20180214023447.35fd77f6@free.de>
Message-ID:
On 14.02.2018 09:34, Frank wrote:
> I got an error while doing borg prune. It seems this had been happening
> for some weeks now, but I just noticed that the old archives were
> accumulating and not being pruned.
Well, you need to check your logs more often.
Did you check prune's return code and react somehow if it is not 0?
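The return-code check Thomas asks about can be wrapped in a few cron-friendly lines. This is a sketch; the wrapper name, the example command, and the alerting are placeholders, not borg APIs:

```python
# Sketch: run a backup step and complain loudly on a non-zero exit
# code instead of letting failures accumulate unnoticed.
import subprocess
import sys

def run_step(cmd):
    """Run cmd; print an alert to stderr if it exits non-zero."""
    rc = subprocess.run(cmd).returncode
    if rc != 0:
        # In a real cron job, send mail or hit a monitoring endpoint here.
        print(f"backup step {cmd[0]!r} failed with rc={rc}", file=sys.stderr)
    return rc

# e.g.: run_step(["borg", "prune", "--show-rc", "--keep-monthly", "1", "repo"])
```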
> Is there something I can do to rescue my repository?
The rather obvious answer is "use borg check [--repair]".
And take seriously what --repair tells you when you invoke it.
> Data integrity error: Invalid segment entry size 2778763224 - too big
> [segment 90673, offset 3739400]
That is one of the many integrity / validity checks in borg. It fails
because the data is invalid.
borg does not create such big chunks, so there is obvious corruption
going on:
- either it is corrupted segment metadata (additionally to the data,
borg stores tag, size, crc32 and id on repo level)
- or it is a corrupted repo index pointing to a random offset somewhere
into the middle of arbitrary data.
Could be faulty RAM or something else, check your hardware.
> '--keep-monthly=1', 'ssh://user at 192.168.0.100:22//mnt/HD/Borg']
Didn't I tell you already that the 2nd "//" is wrong and should be "/"?
---
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From gait at ATComputing.nl Mon Feb 19 07:13:39 2018
From: gait at ATComputing.nl (Gerrit A. Smit)
Date: Mon, 19 Feb 2018 13:13:39 +0100
Subject: [Borgbackup] Number of slashes in a pathname
In-Reply-To:
References: <20180214023447.35fd77f6@free.de>
Message-ID: <0f1bd49f-f14d-f8ad-9044-40966ee168bf@ATComputing.nl>
On 17-02-18 at 14:32, Thomas Waldmann wrote:
>> '--keep-monthly=1','ssh://user at 192.168.0.100:22//mnt/HD/Borg']
> Didn't I tell you already that the 2nd "//" is wrong and should be "/"?
IMHO, in a pathname, the number of consecutive slashes doesn't matter:
when you need this pathname separator, it can be 1 or more.
--
Kind regards,
AT COMPUTING BV
Gerrit A. Smit
AT Computing Phone: +31 24 352 72 22
Dé one-stop-Linux-shop Course secretariat phone: +31 24 352 72 72
Kerkenbos 12-38 TI at ATComputing.nl
6546 BE Nijmegen www.atcomputing.nl
https://www.linkedin.com/in/gesmit/
From public at enkore.de Mon Feb 19 10:28:01 2018
From: public at enkore.de (Marian Beermann)
Date: Mon, 19 Feb 2018 16:28:01 +0100
Subject: [Borgbackup] Number of slashes in a pathname
In-Reply-To: <0f1bd49f-f14d-f8ad-9044-40966ee168bf@ATComputing.nl>
References: <20180214023447.35fd77f6@free.de>
<0f1bd49f-f14d-f8ad-9044-40966ee168bf@ATComputing.nl>
Message-ID:
That's actually not true: in POSIX, runs of slashes collapse (and three
or more leading slashes collapse to one), but exactly two slashes (//)
at the beginning of a path *do not collapse* and invoke
implementation-defined behaviour instead.
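Python's posixpath follows the POSIX rule, so the distinction is easy to check:

```python
# The POSIX rule, checked with Python's posixpath: exactly two leading
# slashes are special; every other run of slashes collapses.
import posixpath

print(posixpath.normpath("/mnt//HD/Borg"))   # /mnt/HD/Borg   (interior // collapses)
print(posixpath.normpath("///mnt/HD/Borg"))  # /mnt/HD/Borg   (3+ leading slashes collapse)
print(posixpath.normpath("//mnt/HD/Borg"))   # //mnt/HD/Borg  (leading // is preserved)
```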
On 19.02.2018 13:13, Gerrit A. Smit wrote:
> On 17-02-18 at 14:32, Thomas Waldmann wrote:
>>> '--keep-monthly=1', 'ssh://user at 192.168.0.100:22//mnt/HD/Borg']
>> Didn't I tell you already that the 2nd "//" is wrong and should be "/"?
>
> IMHO, in a pathname, the number of consecutive slashes doesn't matter:
>
> when you need this pathname separator, it can be 1 or more.
>
> --
> Kind regards,
>
> AT COMPUTING BV
>
> Gerrit A. Smit
>
> AT Computing Phone: +31 24 352 72 22
> Dé one-stop-Linux-shop Course secretariat phone: +31 24 352 72 72
> Kerkenbos 12-38 TI at ATComputing.nl
> 6546 BE Nijmegen www.atcomputing.nl
>
> https://www.linkedin.com/in/gesmit/
>
>
>
From gait at ATComputing.nl Tue Feb 20 03:01:25 2018
From: gait at ATComputing.nl (Gerrit A. Smit)
Date: Tue, 20 Feb 2018 09:01:25 +0100
Subject: [Borgbackup] Number of slashes in a pathname
In-Reply-To:
References: <20180214023447.35fd77f6@free.de>
<0f1bd49f-f14d-f8ad-9044-40966ee168bf@ATComputing.nl>
Message-ID: <18afe75e-00e0-9acf-a626-fb12e57584ae@ATComputing.nl>
On 19-02-18 at 16:28, Marian Beermann wrote:
> That's actually not true, in Posix ///[/*] collapse, but specifically //
> at the beginning of a path *does not collapse* and invokes
> implementation-defined behaviour instead.
Oh, my shell collapses them. But that is just one implementation. Thanks!
By coincidence I happen to have // at the start of the pathname in the destination,
leading to good results.
So Borg's behavior is to collapse them?
What's wrong with that?
--
Kind regards,
AT COMPUTING BV
Gerrit A. Smit
AT Computing Phone: +31 24 352 72 22
Dé one-stop-Linux-shop Course secretariat phone: +31 24 352 72 72
Kerkenbos 12-38 TI at ATComputing.nl
6546 BE Nijmegen www.atcomputing.nl
https://www.linkedin.com/in/gesmit/
From t.guillet at gmail.com Wed Feb 21 05:59:52 2018
From: t.guillet at gmail.com (Thomas Guillet)
Date: Wed, 21 Feb 2018 11:59:52 +0100
Subject: [Borgbackup] Modified ("M") file status with --list
Message-ID:
Hi all,
I noticed that when running borg (v1.1.4) with --list, I never see any
file marked M for modified; files that were indeed modified seem
always listed as A.
Here's a simple example script illustrating it. The files being backed
up live on an ext4 partition, with mount options:
rw,relatime,errors=remount-ro,data=ordered
and running on kernel:
4.4.0-112-generic #135-Ubuntu SMP on x86_64.
Here's the script:
#!/bin/bash
set -x
borg --version
borg init -e none repo.borg
mkdir files
function backup() {
sleep 5
touch files/.empty
borg create --list 'repo.borg::{now}' files/
}
date > files/A
date > files/B
stat files/A
backup
sleep 5
date >> files/A
stat files/A
backup
and the results:
+ borg --version
borg 1.1.4
+ borg init -e none repo.borg
+ mkdir files
+ date
+ date
+ stat files/A
File: 'files/A'
Size: 29 Blocks: 8 IO Block: 4096 regular file
Device: fc01h/64513d Inode: 8130154 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ thomas) Gid: ( 1000/ thomas)
Access: 2018-02-21 11:25:27.053753395 +0100
Modify: 2018-02-21 11:25:27.053753395 +0100
Change: 2018-02-21 11:25:27.053753395 +0100
Birth: -
+ backup
+ sleep 5
+ touch files/.empty
+ borg create --list 'repo.borg::{now}' files/
A files/A
A files/B
A files/.empty
d files
+ sleep 5
+ date
+ stat files/A
File: 'files/A'
Size: 58 Blocks: 8 IO Block: 4096 regular file
Device: fc01h/64513d Inode: 8130154 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ thomas) Gid: ( 1000/ thomas)
Access: 2018-02-21 11:25:27.053753395 +0100
Modify: 2018-02-21 11:25:37.905755879 +0100
Change: 2018-02-21 11:25:37.905755879 +0100
Birth: -
+ backup
+ sleep 5
+ touch files/.empty
+ borg create --list 'repo.borg::{now}' files/
A files/A
U files/B
A files/.empty
d files
On the second backup, files/A is listed with status A again, not M.
Do you have any idea why?
Thanks!
Thomas
From services at ianholden.com Wed Feb 21 14:30:31 2018
From: services at ianholden.com (Ian Holden)
Date: Wed, 21 Feb 2018 19:30:31 +0000
Subject: [Borgbackup] exception from prune
Message-ID:
Hi, I've started getting an exception when pruning. Not clear what could be
the cause.
Any ideas?
Pruning archive: tsuki2-projects-ct-2018-02-07_22-32-56 Wed, 2018-02-07
22:32:59 [5babcade7a044b92f4778640d857f8ed5e7b0b0ea526e4eb0a71b9aa3a482358]
(14/14)
Local Exception
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/borg/archiver.py", line 4175, in main
exit_code = archiver.run(args)
File "/usr/lib/python3/dist-packages/borg/archiver.py", line 4107, in run
return set_ec(func(args))
File "/usr/lib/python3/dist-packages/borg/archiver.py", line 150, in
wrapper
return method(self, args, repository=repository, **kwargs)
File "/usr/lib/python3/dist-packages/borg/archiver.py", line 1562, in
do_prune
progress=args.progress).delete(stats, forced=args.forced)
File "/usr/lib/python3/dist-packages/borg/archive.py", line 794, in delete
item = Item(internal_dict=item)
File "src/borg/item.pyx", line 44, in borg.item.PropDict.__init__
(src/borg/item.c:1463)
File "src/borg/item.pyx", line 54, in borg.item.PropDict.update_internal
(src/borg/item.c:1966)
AttributeError: 'int' object has no attribute 'items'
Platform: Linux tsuki2-ubuntu 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19
11:48:36 UTC 2018 x86_64 x86_64
Linux: Ubuntu 16.04 xenial
Borg: 1.1.4 Python: CPython 3.5.2
PID: 11312 CWD: /root
sys.argv: ['/usr/bin/borg', 'prune', '--list', '--show-rc',
'--keep-within', '6H', '--keep-hourly', '12', '--keep-daily', '7',
'--keep-weekly', '4', '--keep-monthly', '6', '--keep-yearly', '6',
'--prefix', 'tsuki2-projects-ct-']
SSH_ORIGINAL_COMMAND: None
terminating with error status, rc 2
From tw at waldmann-edv.de Thu Feb 22 08:58:23 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 22 Feb 2018 14:58:23 +0100
Subject: [Borgbackup] exception from prune
In-Reply-To:
References:
Message-ID:
On 21.02.2018 20:30, Ian Holden wrote:
> Hi, I've started getting an exception when pruning. Not clear what could
> be the cause.
I'd say corruption - check your hardware (see our docs).
>   File "/usr/lib/python3/dist-packages/borg/archive.py", line 794, in delete
>     item = Item(internal_dict=item)
>   File "src/borg/item.pyx", line 44, in borg.item.PropDict.__init__
> (src/borg/item.c:1463)
>   File "src/borg/item.pyx", line 54, in
> borg.item.PropDict.update_internal (src/borg/item.c:1966)
> AttributeError: 'int' object has no attribute 'items'
It tried to make an Item instance from the low-level item dict data
(internal_dict) it read/unserialized from the repo.
But internal_dict was not a dictionary here, but an int(eger) object,
which is unexpected and crashed the code.
Such stuff can happen, if the serialized data is corrupted and msgpack
creates a wrong data type when unserializing a corrupted data type byte.
Another cause could be that msgpack was somehow out of sync and did not
unserialize from the start of a serialized dict, but from somewhere in
the middle of one. But I guess that just means another type of corruption.
So, first make sure your hardware works correctly, then run borg check
--repair and take the hint/warning it gives about being experimental
seriously.
BTW, what encryption mode (borg init --encryption x) are you using for
that repo?
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From devzero at web.de Thu Feb 22 12:42:51 2018
From: devzero at web.de (devzero at web.de)
Date: Thu, 22 Feb 2018 18:42:51 +0100
Subject: [Borgbackup] Modified ("M") file status with --list
In-Reply-To:
References:
Message-ID:
I've also been wondering about this for a while now and cannot find any option (besides diff) to see the difference between added and modified files.
Roland
> Sent: Wednesday, 21 February 2018 at 11:59
> From: "Thomas Guillet"
> To: borgbackup at python.org
> Subject: [Borgbackup] Modified ("M") file status with --list
>
> Hi all,
>
> I noticed that when running borg (v1.1.4) with --list, I never see any
> file marked M for modified; files that were indeed modified seem
> always listed as A.
>
> Here's a simple example script illustrating it. The files being backed
> up live on an ext4 partition, with mount options:
> rw,relatime,errors=remount-ro,data=ordered
> and running on kernel:
> 4.4.0-112-generic #135-Ubuntu SMP on x86_64.
>
>
> Here's the script:
>
>
> #!/bin/bash
>
> set -x
>
> borg --version
>
> borg init -e none repo.borg
> mkdir files
>
> function backup() {
> sleep 5
> touch files/.empty
> borg create --list 'repo.borg::{now}' files/
> }
>
> date > files/A
> date > files/B
> stat files/A
> backup
>
> sleep 5
>
> date >> files/A
> stat files/A
> backup
>
>
> and the results:
>
>
> + borg --version
> borg 1.1.4
> + borg init -e none repo.borg
> + mkdir files
> + date
> + date
> + stat files/A
> File: 'files/A'
> Size: 29 Blocks: 8 IO Block: 4096 regular file
> Device: fc01h/64513d Inode: 8130154 Links: 1
> Access: (0664/-rw-rw-r--) Uid: ( 1000/ thomas) Gid: ( 1000/ thomas)
> Access: 2018-02-21 11:25:27.053753395 +0100
> Modify: 2018-02-21 11:25:27.053753395 +0100
> Change: 2018-02-21 11:25:27.053753395 +0100
> Birth: -
> + backup
> + sleep 5
> + touch files/.empty
> + borg create --list 'repo.borg::{now}' files/
> A files/A
> A files/B
> A files/.empty
> d files
> + sleep 5
> + date
> + stat files/A
> File: 'files/A'
> Size: 58 Blocks: 8 IO Block: 4096 regular file
> Device: fc01h/64513d Inode: 8130154 Links: 1
> Access: (0664/-rw-rw-r--) Uid: ( 1000/ thomas) Gid: ( 1000/ thomas)
> Access: 2018-02-21 11:25:27.053753395 +0100
> Modify: 2018-02-21 11:25:37.905755879 +0100
> Change: 2018-02-21 11:25:37.905755879 +0100
> Birth: -
> + backup
> + sleep 5
> + touch files/.empty
> + borg create --list 'repo.borg::{now}' files/
> A files/A
> U files/B
> A files/.empty
> d files
>
>
> On the second backup, files/A is listed with status A again, not M.
> Do you have any idea why?
>
> Thanks!
> Thomas
From services at ianholden.com Thu Feb 22 13:47:37 2018
From: services at ianholden.com (Ian Holden)
Date: Thu, 22 Feb 2018 18:47:37 +0000
Subject: [Borgbackup] exception from prune
In-Reply-To: <5b757cde-1151-f3d8-4417-560d3187f59e@waldmann-edv.de>
References:
<5b757cde-1151-f3d8-4417-560d3187f59e@waldmann-edv.de>
Message-ID:
Thanks.
Hopefully this will go to the mailing list. Still getting my head around
how to use the mailing list.
Cheers!
Ian
On 22 February 2018 at 14:50, Thomas Waldmann wrote:
> On 22.02.2018 15:33, Ian Holden wrote:
> > Thanks Thomas, I guessed it was probably a corrupted repos.
> > I'll try a check --repair
> > And I'll check the hardware which is an old laptop with a USB connected
> > disk.
> > I am using repokey-blake2 encryption
>
> Should this go to me privately or to the list?
>
>
> If you use repokey-blake2 there is an authentication (blake2 keyed hash
> check) happening making sure the data is authentic and valid before it
> touches / unserializes any data.
>
> As you did not get an IntegrityError (authentication hash check
> failure), there seems to be no bit rot on disk.
>
> But maybe your RAM or sth else is faulty.
>
> --
>
> GPG ID: 9F88FB52FAF7B393
> GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
>
>
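The authenticate-before-parse behaviour described in the quoted reply can be illustrated with Python's stdlib keyed BLAKE2b. This is a sketch of the idea only, not borg's actual on-disk format or key handling; key and payload are made-up placeholders:

```python
# A keyed BLAKE2b MAC detects a single flipped bit *before* any
# deserialization runs. Key and payload are made-up placeholders.
import hashlib

key = b"\x00" * 32                   # borg would derive this from the repo key
payload = b"serialized item dict"
mac = hashlib.blake2b(payload, key=key).digest()

corrupted = b"serialized item dicu"  # one byte of bit rot
ok = hashlib.blake2b(corrupted, key=key).digest() == mac
print("authentic:", ok)              # authentic: False
```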
From tw at waldmann-edv.de Mon Feb 26 02:58:32 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 26 Feb 2018 08:58:32 +0100
Subject: [Borgbackup] Modified ("M") file status with --list
In-Reply-To:
References:
Message-ID: <6d8cbed7-e73e-b137-5cfc-e3dcc10da229@waldmann-edv.de>
> I noticed that when running borg (v1.1.4) with --list, I never see any
> file marked M for modified; files that were indeed modified seem
> always listed as A.
I had (another) look at the related code and in fact found a problem there.
Can you try this?:
https://github.com/borgbackup/borg/pull/3633
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From tw at waldmann-edv.de Mon Feb 26 03:10:13 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 26 Feb 2018 09:10:13 +0100
Subject: [Borgbackup] exception from prune
In-Reply-To:
References:
<5b757cde-1151-f3d8-4417-560d3187f59e@waldmann-edv.de>
Message-ID:
> Hopefully this will go to the mailing list.
It did. :)
> Still getting my head around how to use the mailing list.
Just send mail to borgbackup at python.org - that is the ML.
Some mail clients like thunderbird have a special "reply list" function
that makes sure the reply goes to the list and not as private mail to the
author of the post you are responding to.
Did you check your RAM meanwhile (memtest86+)?
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From tw at waldmann-edv.de Mon Feb 26 03:52:07 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 26 Feb 2018 09:52:07 +0100
Subject: [Borgbackup] Hostname change, now there is a problem?
In-Reply-To: <838C6807-5B0F-46AF-95DE-BE9DF1EA6D6F@coldstripe.net>
References: <838C6807-5B0F-46AF-95DE-BE9DF1EA6D6F@coldstripe.net>
Message-ID: <98a05a3a-56f3-109a-5f0a-f4afe0bbde14@waldmann-edv.de>
> I added --files-cache=mtime,size to the borg create command, as I didn't want to re-backup everything, since certainly all the inodes are different.
Assuming you did the previous backups with borg 1.1 and default options,
that should have been ctime,size (as the default is ctime,size,inode).
Changing to mtime likely caused some waste of time.
Changing back to ctime will waste some more time, but after that, it'll
be ok again.
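The difference between the two cache modes is easy to see on Linux: a metadata-only change (chmod, chown, hard-linking) advances ctime but not mtime, so only the ctime-based default notices it. A quick illustration (the temp path is arbitrary):

```python
# Metadata-only changes advance ctime but not mtime, which is why the
# ctime,size,inode default detects changes that mtime,size can miss.
import os
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "f")
with open(path, "w") as f:
    f.write("data")
before = os.stat(path)

time.sleep(1.1)        # step past 1-second timestamp granularity
os.chmod(path, 0o600)  # metadata change only; content untouched
after = os.stat(path)

print("mtime changed:", after.st_mtime != before.st_mtime)  # False
print("ctime changed:", after.st_ctime != before.st_ctime)  # True
```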
> So, the initial run on the new server took over 11 hours, and looked like this:
Yeah, because likely the ctime in the cache was different from the mtime
read from the fs rather frequently, so it re-chunked a lot of files.
> Synchronizing chunks cache...
> Archives: 14, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 14.
> Fetching and building archive index for kauai-2018-02-01 ...
^ Also that likely took a bit.
> The first thing I noticed was that it didn't list the previous retained archives with the old hostname,
Hm? Not sure what you mean.
Archive names (historically stored into the repo) do not change by
moving the repo.
> and then, the second borg run the next day was still running after 24 hours and I finally killed it.
Each time you change the --files-cache value, it may trigger a full
rechunking. So if you removed your mtime,size, it went back to the
default ctime,size,inode.
The inode part should be no problem as the first run updated the inodes
in the cache. But the mtime -> ctime change likely made it rechunk often.
> It didn't seem to be transmitting any data,
Yes, because the chunks it made were already in the repo IF the file did
not really change.
> is my remote archive now hosed?
Well, the backup run you cancelled is of course incomplete, so maybe
there is a checkpoint archive and some after-commit stuff that will get
cleaned up automatically the next time borg uses the repo.
So, guess the best option now is to remove the --files-cache option to
go back to its defaults and just let it complete, even if it takes long.
You can remove the .checkpoint archive(s) AFTER you did a complete
backup. Prune should also do that automatically.
> Or is there a way I can salvage it or am I simply doing something wrong?
Your change to mtime was causing the waste of time, but everything else
should be fine.
borg 1.0 used mtime by default, but we switched to ctime in 1.1 for
better / safer change detection.
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From tw at waldmann-edv.de Mon Feb 26 04:05:06 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Mon, 26 Feb 2018 10:05:06 +0100
Subject: [Borgbackup] Borg on Mac with Fusion Drive
In-Reply-To: <9E07E15E-44F6-4783-96C7-2C3F2A53E4F9@sbuehler.de>
References: <9E07E15E-44F6-4783-96C7-2C3F2A53E4F9@sbuehler.de>
Message-ID:
> I'm running borg backups on a Mac that has a "fusion drive". (A combination of SSD and spinning disk.)
> What puzzles me is that I never see file status "M" in the log output,
See one of my previous posts from today. It's a (cosmetic) bug.
> and also there seem to always be a lot of (seemingly random) old files being added again.
That might be covered by the FAQ already:
http://borgbackup.readthedocs.io/en/stable/faq.html#i-am-seeing-a-added-status-for-an-unchanged-file
> Is it possible that this behaviour is related to the fusion drive?
No.
At least not if the hardware is working as expected (which means that it
looks like any other block storage device and the optimization of the
slow hdd using the fast flash memory is fully transparent to the user).
Borg usually does not even talk to the block storage device directly,
borg just uses filesystem calls.
Exception is when you use --read-special and directly read from
/dev/thatblockdevice, then borg reads blocks directly from that device
(via the kernel, so nothing unusual here either).
> I'm running with the defaults for the --files-cache variable, should I use something else there?
No, the defaults are usually fine (and safe).
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From t.guillet at gmail.com Mon Feb 26 04:48:14 2018
From: t.guillet at gmail.com (Thomas Guillet)
Date: Mon, 26 Feb 2018 10:48:14 +0100
Subject: [Borgbackup] Modified ("M") file status with --list
In-Reply-To: <6d8cbed7-e73e-b137-5cfc-e3dcc10da229@waldmann-edv.de>
References:
<6d8cbed7-e73e-b137-5cfc-e3dcc10da229@waldmann-edv.de>
Message-ID:
Hi Thomas,
On 26 February 2018 at 08:58, Thomas Waldmann wrote:
> Can you try that?:
>
> https://github.com/borgbackup/borg/pull/3633
This seems to fix it, thanks. I tested your PR with the simple test
script from my original question, and also with an actual backup repo.
I now indeed see files marked as M, and from a cursory look at
the files list, the status flags (M vs A) look reasonable.
Thanks a lot for looking into this!
Thomas
From stefan at sbuehler.de Thu Mar 1 05:42:04 2018
From: stefan at sbuehler.de (Stefan Buehler)
Date: Thu, 1 Mar 2018 11:42:04 +0100
Subject: [Borgbackup] Borg on Mac with Fusion Drive
In-Reply-To:
References: <9E07E15E-44F6-4783-96C7-2C3F2A53E4F9@sbuehler.de>
Message-ID:
Dear Thomas,
thanks for the explanations and for the bug fix. Even if it was just the info output that was wrong, it really is useful to see which files are truly new.
All the best, and keep up the good borg work
Stefan
> On 26. Feb 2018, at 10:05, Thomas Waldmann wrote:
>
>
>> I'm running borg backups on a Mac that has a "fusion drive". (A combination of SSD and spinning disk.)
>> What puzzles me is that I never see file status "M" in the log output,
> See one of my previous posts from today. It's a (cosmetic) bug.
>
>
>> and also there seem to always be a lot of (seemingly random) old files being added again.
>
> That might be covered by the FAQ already:
>
> http://borgbackup.readthedocs.io/en/stable/faq.html#i-am-seeing-a-added-status-for-an-unchanged-file
>
>
>> Is it possible that this behaviour is related to the fusion drive?
>
> No.
>
> At least not if the hardware is working as expected (which means that it
> looks like any other block storage device and the optimization of the
> slow hdd using the fast flash memory is fully transparent to the user).
>
> Borg usually does not even talk to the block storage device directly,
> borg just uses filesystem calls.
>
> Exception is when you use --read-special and directly read from
> /dev/thatblockdevice, then borg reads blocks directly from that device
> (via the kernel, so nothing unusual here either).
>
>
>> I'm running with the defaults for the --files-cache variable, should I use something else there?
>
> No, the defaults are usually fine (and safe).
>
>
> --
>
> GPG ID: 9F88FB52FAF7B393
> GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
>
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
From andreas at 2li.ch Sun Mar 11 06:42:29 2018
From: andreas at 2li.ch (Andreas Zweili)
Date: Sun, 11 Mar 2018 11:42:29 +0100
Subject: [Borgbackup] How to implement borg in a GUI?
Message-ID: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
Hello Everyone,
To get my diploma I have to do a bigger project. I've decided that I
would like to start developing a GUI for borg. It should provide normal
users with an interface to use borg. It's not really difficult to use
borg. However, I still think that they would benefit quite a lot from
having a GUI to guide them. The main language I'm going to code in will
be Python.
One question which I couldn't figure out so far is how I should
implement the borg functions.
Should I clone the repository and try to implement borg as a submodule
of the GUI? That is, work directly with the source.
Or
Should I install the binary and just call it from my GUI to
provide the functionality I would like to support in the GUI?
Best Regards and thanks a ton for borg :).
Andreas Zweili
From imperator at jedimail.de Sun Mar 11 07:32:58 2018
From: imperator at jedimail.de (Sascha Ternes)
Date: Sun, 11 Mar 2018 12:32:58 +0100
Subject: [Borgbackup] How to implement borg in a GUI?
In-Reply-To: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
References: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
Message-ID: <97cd5eb2-c0da-0bcb-4ac2-3252ee4436f9@jedimail.de>
Hi Andreas,
Am 11.03.2018 um 11:42 schrieb Andreas Zweili:
> To get my diploma I have to do a bigger project. I've decided that I
> would like to start developing a GUI for borg. It should provide normal
> users with an interface to use borg. It's not really difficult to use
> borg. However I still think that they would benefit quite a lot from
> having a GUI to guide them. The main language I'm going to code in will
> be Python.
I hope you did not choose Python because of BorgBackup.
> Should I clone the repository and try to implement borg as a sub module
> in the GUI? Means work directly with the source.
>
> Or
>
> Should I install the binary and just implement the binary in my GUI to
> provide the functionality I would like to support in the GUI?
You could do that research yourself as part of your project; it
deserves a closer look. My first shot would be "stay independent" and
call the Borg binary.
Best regards
Sascha
From public at enkore.de Sun Mar 11 07:42:08 2018
From: public at enkore.de (Marian Beermann)
Date: Sun, 11 Mar 2018 12:42:08 +0100
Subject: [Borgbackup] How to implement borg in a GUI?
In-Reply-To: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
References: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
Message-ID: <3cdc5347-856a-70f1-6894-e1b866f4a8c8@enkore.de>
The Python API is unstable and basically breaks compatibility every five
minutes, so that's probably not a good way to do it.
Take a look at
https://borgbackup.readthedocs.io/en/stable/internals/frontends.html
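In practice that means spawning the borg binary with --json and parsing its stdout from Python. A minimal sketch (the "archives"/"name" JSON layout below is an illustrative assumption; check the frontends page above for the exact schema):

```python
import json
import subprocess

def list_archives(repo):
    """Ask the borg binary (assumed to be on PATH) for the archive names
    in a repo, using the machine-readable --json output."""
    out = subprocess.run(["borg", "list", "--json", repo],
                         check=True, capture_output=True, text=True).stdout
    return [a["name"] for a in json.loads(out)["archives"]]

# The parsing can be exercised without a repo, on a hypothetical payload:
sample = '{"archives": [{"name": "host-2018-03-11", "time": "2018-03-11T11:42:00"}]}'
print([a["name"] for a in json.loads(sample)["archives"]])
```

Driving the binary this way keeps a GUI decoupled from borg's internal (unstable) Python modules.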
On 11.03.2018 11:42, Andreas Zweili wrote:
> Hello Everyone,
>
> To get my diploma I have to do a bigger project. I've decided that I
> would like to start developing a GUI for borg. It should provide normal
> users with an interface to use borg. It's not really difficult to use
> borg. However I still think that they would benefit quite a lot from
> having a GUI to guide them. The main language I'm going to code in will
> be Python.
>
> One question which I couldn't figure out so far is how I should
> implement the borg functions.
>
> Should I clone the repository and try to implement borg as a sub module
> in the GUI? Means work directly with the source.
>
> Or
>
> Should I install the binary and just implement the binary in my GUI to
> provide the functionality I would like to support in the GUI?
>
> Best Regards and thanks a ton for borg :).
>
> Andreas Zweili
From andreas at 2li.ch Sun Mar 11 08:32:53 2018
From: andreas at 2li.ch (Andreas Zweili)
Date: Sun, 11 Mar 2018 13:32:53 +0100
Subject: [Borgbackup] How to implement borg in a GUI?
In-Reply-To: <97cd5eb2-c0da-0bcb-4ac2-3252ee4436f9@jedimail.de>
References: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
<97cd5eb2-c0da-0bcb-4ac2-3252ee4436f9@jedimail.de>
Message-ID: <8e7e060a-891f-19fe-6724-599207b012fa@2li.ch>
No, I would've chosen Python regardless of the language borg uses. At
the beginning it was even a possibility that I would write a GUI for
restic, but I went with borg because it worked better for me.
I must admit I don't have that much experience with programming; that's
why I thought it would be better to ask the project what it
recommends. In the end my work might not even be useful outside the
project :).
On 11.03.2018 12:32, Sascha Ternes wrote:
> Hi Andreas,
>
> Am 11.03.2018 um 11:42 schrieb Andreas Zweili:
>> To get my diploma I have to do a bigger project. I've decided that I
>> would like to start developing a GUI for borg. It should provide normal
>> users with an interface to use borg. It's not really difficult to use
>> borg. However I still think that they would benefit quite a lot from
>> having a GUI to guide them. The main language I'm going to code in will
>> be Python.
>
> I hope you did not choose Python because of BorgBackup.
>
>> Should I clone the repository and try to implement borg as a sub module
>> in the GUI? Means work directly with the source.
>>
>> Or
>>
>> Should I install the binary and just implement the binary in my GUI to
>> provide the functionality I would like to support in the GUI?
>
> You could do that research yourself as part of your project; it
> deserves a closer look. My first shot would be "stay independent" and
> call the Borg binary.
>
> Best regards
>
> Sascha
From andreas at 2li.ch Sun Mar 11 08:35:40 2018
From: andreas at 2li.ch (Andreas Zweili)
Date: Sun, 11 Mar 2018 13:35:40 +0100
Subject: [Borgbackup] How to implement borg in a GUI?
In-Reply-To: <3cdc5347-856a-70f1-6894-e1b866f4a8c8@enkore.de>
References: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
<3cdc5347-856a-70f1-6894-e1b866f4a8c8@enkore.de>
Message-ID: <24c38f43-7197-7ecf-b5bf-5bde8e1bdd2c@2li.ch>
Oh, thank you very much for the link.
No idea how I missed that; I explicitly checked the docs before
writing here...
Ah well, that's exactly what I was looking for.
On 11.03.2018 12:42, Marian Beermann wrote:
> The Python API is unstable and basically breaks compatibility every five
> minutes, so that's probably not a good way to do it.
>
> Take a look at
> https://borgbackup.readthedocs.io/en/stable/internals/frontends.html
>
> On 11.03.2018 11:42, Andreas Zweili wrote:
>> Hello Everyone,
>>
>> To get my diploma I have to do a bigger project. I've decided that I
>> would like to start developing a GUI for borg. It should provide normal
>> users with an interface to use borg. It's not really difficult to use
>> borg. However I still think that they would benefit quite a lot from
>> having a GUI to guide them. The main language I'm going to code in will
>> be Python.
>>
>> One question which I couldn't figure out so far is how I should
>> implement the borg functions.
>>
>> Should I clone the repository and try to implement borg as a sub module
>> in the GUI? Means work directly with the source.
>>
>> Or
>>
>> Should I install the binary and just implement the binary in my GUI to
>> provide the functionality I would like to support in the GUI?
>>
>> Best Regards and thanks a ton for borg :).
>>
>> Andreas Zweili
From imperator at jedimail.de Sun Mar 11 08:37:17 2018
From: imperator at jedimail.de (Sascha Ternes)
Date: Sun, 11 Mar 2018 13:37:17 +0100
Subject: [Borgbackup] How to implement borg in a GUI?
In-Reply-To: <8e7e060a-891f-19fe-6724-599207b012fa@2li.ch>
References: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
<97cd5eb2-c0da-0bcb-4ac2-3252ee4436f9@jedimail.de>
<8e7e060a-891f-19fe-6724-599207b012fa@2li.ch>
Message-ID:
Hi Andreas,
Am 11.03.2018 um 13:32 schrieb Andreas Zweili:
> I must admit I don't have that much experience with programming that's
> why I thought it would be better to ask the project on what it
> recommends. In the end my work might not even be useful outside the
> project :).
then you should definitely not mess with BorgBackup source code and be
happy with the frontend JSON API Borg gives you.
Best regards
Sascha
From andreas at 2li.ch Sun Mar 11 08:47:00 2018
From: andreas at 2li.ch (Andreas Zweili)
Date: Sun, 11 Mar 2018 13:47:00 +0100
Subject: [Borgbackup] How to implement borg in a GUI?
In-Reply-To:
References: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
<97cd5eb2-c0da-0bcb-4ac2-3252ee4436f9@jedimail.de>
<8e7e060a-891f-19fe-6724-599207b012fa@2li.ch>
Message-ID:
Aye, will definitely do that.
No idea how I missed that in the first place.
I hope I didn't bother anyone with my question.
Thank you Sascha and Marian for your quick answers!
Best Regards
Andreas
On 11.03.2018 13:37, Sascha Ternes wrote:
> Hi Andreas,
>
> Am 11.03.2018 um 13:32 schrieb Andreas Zweili:
>> I must admit I don't have that much experience with programming that's
>> why I thought it would be better to ask the project on what it
>> recommends. In the end my work might not even be useful outside the
>> project :).
>
> then you should definitely not mess with BorgBackup source code and be
> happy with the frontend JSON API Borg gives you.
>
> Best regards
>
> Sascha
From draget at speciesm.net Sun Mar 11 09:47:15 2018
From: draget at speciesm.net (Draget)
Date: Sun, 11 Mar 2018 14:47:15 +0100
Subject: [Borgbackup] How to implement borg in a GUI?
In-Reply-To:
References: <5e80b236-68fe-0562-2c30-748621e6c404@2li.ch>
<97cd5eb2-c0da-0bcb-4ac2-3252ee4436f9@jedimail.de>
<8e7e060a-891f-19fe-6724-599207b012fa@2li.ch>
Message-ID:
Hi Andreas,
try to define for yourself what your goals are and what you need for
your project. Though using the JSON API is definitely the way to go. :)
There have been some ideas bouncing around for a QT GUI. If you want to
take a look: https://github.com/borgbackup/borg/issues/2960
Best wishes,
Michael
Am 11.03.2018 um 13:47 schrieb Andreas Zweili:
> Aye, will definitely do that.
> No idea how I missed that in the first place.
> I hope I didn't bother anyone with my question.
>
> Thank you Sascha and Marian for your quick answers!
>
> Best Regards
>
> Andreas
From dchebota at gmu.edu Tue Mar 13 10:31:08 2018
From: dchebota at gmu.edu (Dmitri Chebotarov)
Date: Tue, 13 Mar 2018 14:31:08 +0000
Subject: [Borgbackup] BorgBackup and GlusterFS storage ?
Message-ID:
Hello
Are there any known issues/concerns with storing Borg repositories on a GlusterFS volume (Erasure Coded)?
Any limits/recommendations for size of repos and data? Some groups may have ~300TB of data to back up. Is it advisable to split it between multiple repos?
I plan to populate 1PB with Borg backups (multiple repos) and am considering my options.
If I understand it correctly, BorgBackup is a good fit for GlusterFS's EC volumes - the segments don't change much (at all?) once created and are only used for RO operations if data needs to be restored or accessed via a FUSE mount.
I'd also like to increase max_segment_size from 512MB to a larger value (2GB); is it as simple as 524288000*4? My goal is to have fewer files on the GlusterFS volume(s).
I've been using Borg for at least a year now and it seems to work very well for all my other projects involving backing up data on Linux systems...
Thank you,
--
Dmitri Chebotarov.
George Mason University,
4400 University Drive,
Fairfax, VA, 22030
GPG Public key# 5E19F14D: [https://goo.gl/SlE8tj]
From howardm at xmission.com Wed Mar 14 00:23:48 2018
From: howardm at xmission.com (Howard Mann)
Date: Tue, 13 Mar 2018 22:23:48 -0600
Subject: [Borgbackup] Are named archives not made if no source file changes
occur ?
Message-ID:
Hi,
Here is an excerpt of a Borg list command:
[my username]MacBook.local-2018-03-09T20:30:01 Fri, 2018-03-09 20:30:08 [6dc66bf20f4cde8606aff520f08c2656bddef60f0ec48f01e1fec7e0bd7ba1cb]
[my username]MacBook.local-2018-03-12T20:30:02 Mon, 2018-03-12 20:30:08 [27f753940ecd04cdc91f5ca261d4787f8626d2c6826f2dbf93dd54daa9eecb86]
I run the backup script once a day. I received the usual notifications for 3-10 and 3-11.
Yet, there are no archives for 3-10 and 3-11.
Is that perhaps because no (source) changes occurred on those days ?
Thanks,
Howard
From me at blufinney.com Wed Mar 14 22:01:20 2018
From: me at blufinney.com (Blu Finney)
Date: Wed, 14 Mar 2018 19:01:20 -0700
Subject: [Borgbackup] Pruning Schedule Not Working as Expected
Message-ID: <1521079280.3.16.camel@blufinney.com>
Hi,
I've been using Borg backup since Mar 5th and am not clear why the
pruning schedule isn't working as expected.
Details:
- Running backup every 3 hours (8 archives created per day)
- Pruning command used: borg prune :: --keep-within 3H --keep-minutely
60 --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --
keep-yearly 1 --prefix {hostname}- --debug --stats --list
Using these prune settings I was expecting some of the 8 daily archives
to have been pruned by now. Yet none have been pruned.
Have I misunderstood or misconfigured something?
Thanks,
Blu
From elladan at eskimo.com Wed Mar 14 22:42:22 2018
From: elladan at eskimo.com (Elladan)
Date: Wed, 14 Mar 2018 19:42:22 -0700
Subject: [Borgbackup] Pruning Schedule Not Working as Expected
In-Reply-To: <1521079280.3.16.camel@blufinney.com>
References: <1521079280.3.16.camel@blufinney.com>
Message-ID:
On Wed, Mar 14, 2018 at 7:01 PM, Blu Finney wrote:
>
> Hi,
>
> I've been using Borg backup since Mar 5th and am not clear why the
> pruning schedule isn't working as expected.
>
> Details:
> - Running backup every 3 hours (8 archives created per day)
> - Pruning command used: borg prune :: --keep-within 3H --keep-minutely
> 60 --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --
> keep-yearly 1 --prefix {hostname}- --debug --stats --list
>
> Using these prune settings I was expecting some of the 8 daily archives
> to have been pruned by now. Yet none have been pruned.
>
> Have I misunderstood or misconfigured something?
You've basically asked borg prune to do this (rules applied one after another):
1. Keep all archives in the last 3 hours.
2. Keep the last 60 archives, but not more than one per minute.
3. Keep the last 24 archives, but not more than one per hour.
And so on. So basically, you've asked it to keep a very large number
of archives.
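The compounding effect is easy to see on synthetic data. A rough Python sketch of the rule semantics (a simplification for illustration, not borg's exact prune algorithm):

```python
from datetime import datetime, timedelta

# One archive every 3 hours for 10 days, newest first.
now = datetime(2018, 3, 15, 0, 0)
archives = [now - timedelta(hours=3 * i) for i in range(80)]

kept = set()

def apply_rule(bucket, n):
    """Keep the newest not-yet-kept archive in each time bucket,
    up to n buckets (simplified model)."""
    periods = []
    for ts in archives:                 # newest first
        if ts in kept:
            continue
        b = bucket(ts)
        if b in periods:
            continue
        periods.append(b)
        kept.add(ts)
        if len(periods) == n:
            break

# --keep-within 3H
kept.update(ts for ts in archives if now - ts <= timedelta(hours=3))
# --keep-minutely 60, --keep-hourly 24, --keep-daily 7
apply_rule(lambda t: t.strftime("%Y-%m-%d %H:%M"), 60)
apply_rule(lambda t: t.strftime("%Y-%m-%d %H"), 24)
apply_rule(lambda t: t.strftime("%Y-%m-%d"), 7)

print(len(kept), "of", len(archives), "archives kept")
```

With archives only 3 hours apart, --keep-minutely 60 alone retains 60 archives (7.5 days' worth), so in this model all 80 archives end up kept and nothing would be pruned yet.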
From me at blufinney.com Thu Mar 15 00:06:21 2018
From: me at blufinney.com (Blu Finney)
Date: Wed, 14 Mar 2018 21:06:21 -0700
Subject: [Borgbackup] Pruning Schedule Not Working as Expected
In-Reply-To:
References: <1521079280.3.16.camel@blufinney.com>
Message-ID: <1521086781.3.9.camel@blufinney.com>
Thank you for the simple explanation. I totally misunderstood the
documentation - I think because of my "snapshot" mindset (like good
old Sun ZFS).
Here I thought "--keep-hourly 24" meant keep archives from the past 24
hours, and "--keep-minutely 60" meant keep archives from the past 60
minutes, and so on.
Is there a way to implement the snapshot mentality using "--keep-
within"?
e.g.
--keep-within 60M --keep-within 24H --keep-within 7D ...
Sorry for the sophomoric question, I'm really having a hard time
wrapping my head around the "--keep-hourly X" way of thinking in order
to achieve the results I'm looking for.
-----Original Message-----
From: Elladan
To: Blu Finney
Cc: borgbackup at python.org
Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
Date: Wed, 14 Mar 2018 19:42:22 -0700
On Wed, Mar 14, 2018 at 7:01 PM, Blu Finney wrote:
>
> Hi,
>
> I've been using Borg backup since Mar 5th and am not clear why the
> pruning schedule isn't working as expected.
>
> Details:
> - Running backup every 3 hours (8 archives created per day)
> - Pruning command used: borg prune :: --keep-within 3H --keep-
> minutely
> 60 --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 12
> --
> keep-yearly 1 --prefix {hostname}- --debug --stats --list
>
> Using these prune settings I was expecting some of the 8 daily
> archives
> to have been pruned by now. Yet none have been pruned.
>
> Have I misunderstood or misconfigured something?
You've basically asked borg prune to do this (rules applied one after
another):
1. Keep all archives in the last 3 hours.
2. Keep the last 60 archives, but not more than one per minute.
3. Keep the last 24 archives, but not more than one per hour.
And so on. So basically, you've asked it to keep a very large number
of archives.
From me at blufinney.com Thu Mar 15 00:17:09 2018
From: me at blufinney.com (Blu Finney)
Date: Wed, 14 Mar 2018 21:17:09 -0700
Subject: [Borgbackup] Pruning Schedule Not Working as Expected
In-Reply-To: <1521086781.3.9.camel@blufinney.com>
References: <1521079280.3.16.camel@blufinney.com>
<1521086781.3.9.camel@blufinney.com>
Message-ID: <1521087429.3.11.camel@blufinney.com>
You'll have to pardon me. After thinking about this more I can
understand why the "snapshot" mentality really doesn't apply here.
I'll have to spend some time doing some math to get the results I'm
looking for.
Thanks for your help.
-----Original Message-----
From: Blu Finney
To: Elladan
Cc: borgbackup at python.org
Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
Date: Wed, 14 Mar 2018 21:06:21 -0700
Thank you for the simple explanation. I totally misunderstood the
documentation - I think because of my "snapshot" mindset (like good ole
sun zfs).
Here I thought "--keep-hourly 24" meant keep archives from the past 24
hours, and "--keep-minutely 60" meant keep archives from the past 60
minutes, and so on.
Is there a way to implement the snapshot mentality using "--keep-
within"?
e.g.
--keep-within 60M --keep-within 24H --keep-within 7D ...
> Sorry for the sophomoric question, I'm really having a hard time
wrapping my head around the "--keep-hourly X" way of thinking in order
to achieve the results I'm looking for.
-----Original Message-----
From: Elladan
To: Blu Finney
Cc: borgbackup at python.org
Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
Date: Wed, 14 Mar 2018 19:42:22 -0700
On Wed, Mar 14, 2018 at 7:01 PM, Blu Finney wrote:
>
> Hi,
>
> I've been using Borg backup since Mar 5th and am not clear why the
> pruning schedule isn't working as expected.
>
> Details:
> - Running backup every 3 hours (8 archives created per day)
> - Pruning command used: borg prune :: --keep-within 3H --keep-
> minutely
> 60 --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 12
> --
> keep-yearly 1 --prefix {hostname}- --debug --stats --list
>
> Using these prune settings I was expecting some of the 8 daily
> archives
> to have been pruned by now. Yet none have been pruned.
>
> Have I misunderstood or misconfigured something?
You've basically asked borg prune to do this (rules applied one after
another):
1. Keep all archives in the last 3 hours.
2. Keep the last 60 archives, but not more than one per minute.
3. Keep the last 24 archives, but not more than one per hour.
And so on. So basically, you've asked it to keep a very large number
of archives.
From elladan at eskimo.com Thu Mar 15 02:15:00 2018
From: elladan at eskimo.com (Elladan)
Date: Wed, 14 Mar 2018 23:15:00 -0700
Subject: [Borgbackup] Pruning Schedule Not Working as Expected
In-Reply-To: <1521087429.3.11.camel@blufinney.com>
References: <1521079280.3.16.camel@blufinney.com>
<1521086781.3.9.camel@blufinney.com> <1521087429.3.11.camel@blufinney.com>
Message-ID:
I'm not sure what your exact needs are, but my quick impression is
that just removing the --keep-minutely option and keeping the rest the
same will probably give you something approximating what you want.
On Wed, Mar 14, 2018 at 9:17 PM, Blu Finney wrote:
> You'll have to pardon me. After thinking about this more I can
> understand why the "snapshot" mentality really doesn't apply here.
>
> I'll have to spend some time doing some math to get the results I'm
> looking for.
>
> Thanks for your help.
>
>
> -----Original Message-----
> From: Blu Finney
> To: Elladan
> Cc: borgbackup at python.org
> Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
> Date: Wed, 14 Mar 2018 21:06:21 -0700
>
> Thank you for the simple explanation. I totally misunderstood the
> documentation - I think because of my "snapshot" mindset (like good ole
> sun zfs).
>
> Here I thought "--keep-hourly 24" meant keep archives from the past 24
> hours, and "--keep-minutely 60" meant keep archives from the past 60
> minutes, and so on.
>
> Is there a way to implement the snapshot mentality using "--keep-
> within"?
>
> e.g.
> --keep-within 60M --keep-within 24H --keep-within 7D ...
>
>
> Sorry for the sophomoric question, I'm really having a hard time
> wrapping my head around the "--keep-hourly X" way of thinking in order
> to achieve the results I'm looking for.
>
>
> -----Original Message-----
> From: Elladan
> To: Blu Finney
> Cc: borgbackup at python.org
> Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
> Date: Wed, 14 Mar 2018 19:42:22 -0700
>
> On Wed, Mar 14, 2018 at 7:01 PM, Blu Finney wrote:
>>
>> Hi,
>>
>> I've been using Borg backup since Mar 5th and am not clear why the
>> pruning schedule isn't working as expected.
>>
>> Details:
>> - Running backup every 3 hours (8 archives created per day)
>> - Pruning command used: borg prune :: --keep-within 3H --keep-
>> minutely
>> 60 --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 12
>> --
>> keep-yearly 1 --prefix {hostname}- --debug --stats --list
>>
>> Using these prune settings I was expecting some of the 8 daily
>> archives
>> to have been pruned by now. Yet none have been pruned.
>>
>> Have I misunderstood or misconfigured something?
>
> You've basically asked borg prune to do this (rules applied one after
> another):
>
> 1. Keep all archives in the last 3 hours.
> 2. Keep the last 60 archives, but not more than one per minute.
> 3. Keep the last 24 archives, but not more than one per hour.
>
> And so on. So basically, you've asked it to keep a very large number
> of archives.
From me at blufinney.com Thu Mar 15 02:57:56 2018
From: me at blufinney.com (Blu Finney)
Date: Wed, 14 Mar 2018 23:57:56 -0700
Subject: [Borgbackup] Pruning Schedule Not Working as Expected
In-Reply-To:
References: <1521079280.3.16.camel@blufinney.com>
<1521086781.3.9.camel@blufinney.com> <1521087429.3.11.camel@blufinney.com>
Message-ID: <1521097076.3.4.camel@blufinney.com>
Yep, in addition to a couple other fine tunings. After spending some
time thinking about it I used the following. After pruning with these
new settings the results matched what I was ultimately shooting for.
keep_within: 3H
keep_hourly: 16
keep_daily: 7
keep_weekly: 4
keep_monthly: 12
keep_yearly: 5
-----Original Message-----
From: Elladan
To: Blu Finney
Cc: borgbackup at python.org
Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
Date: Wed, 14 Mar 2018 23:15:00 -0700
I'm not sure what your exact needs are, but my quick impression is
that just removing the --keep-minutely option and keeping the rest the
same will probably give you something approximating what you want.
On Wed, Mar 14, 2018 at 9:17 PM, Blu Finney wrote:
> You'll have to pardon me. After thinking about this more I can
> understand why the "snapshot" mentality really doesn't apply here.
>
> I'll have to spend some time doing some math to get the results I'm
> looking for.
>
> Thanks for your help.
>
>
> -----Original Message-----
> From: Blu Finney
> To: Elladan
> Cc: borgbackup at python.org
> Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
> Date: Wed, 14 Mar 2018 21:06:21 -0700
>
> Thank you for the simple explanation. I totally misunderstood the
> documentation - I think because of my "snapshot" mindset (like good
> ole
> sun zfs).
>
> Here I thought "--keep-hourly 24" meant keep archives from the past
> 24
> hours, and "--keep-minutely 60" meant keep archives from the past 60
> minutes, and so on.
>
> Is there a way to implement the snapshot mentality using "--keep-
> within"?
>
> e.g.
> --keep-within 60M --keep-within 24H --keep-within 7D ...
>
>
> Sorry for the sophomoric question, I'm really having a hard time
> wrapping my head around the "--keep-hourly X" way of thinking in
> order
> to achieve the results I'm looking for.
>
>
> -----Original Message-----
> From: Elladan
> To: Blu Finney
> Cc: borgbackup at python.org
> Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
> Date: Wed, 14 Mar 2018 19:42:22 -0700
>
> On Wed, Mar 14, 2018 at 7:01 PM, Blu Finney wrote:
> >
> > Hi,
> >
> > I've been using Borg backup since Mar 5th and am not clear why the
> > pruning schedule isn't working as expected.
> >
> > Details:
> > - Running backup every 3 hours (8 archives created per day)
> > - Pruning command used: borg prune :: --keep-within 3H --keep-
> > minutely
> > 60 --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly
> > 12
> > --
> > keep-yearly 1 --prefix {hostname}- --debug --stats --list
> >
> > Using these prune settings I was expecting some of the 8 daily
> > archives
> > to have been pruned by now. Yet none have been pruned.
> >
> > Have I misunderstood or misconfigured something?
>
> You've basically asked borg prune to do this (rules applied one after
> another):
>
> 1. Keep all archives in the last 3 hours.
> 2. Keep the last 60 archives, but not more than one per minute.
> > 3. Keep the last 24 archives, but not more than one per
> hour.
>
> And so on. So basically, you've asked it to keep a very large number
> of archives.
From tw at waldmann-edv.de Thu Mar 15 09:57:58 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Thu, 15 Mar 2018 14:57:58 +0100
Subject: [Borgbackup] BorgBackup and GlusterFS storage ?
In-Reply-To:
References:
Message-ID: <1579be52-9a71-ca30-e4d5-71c0fb0ceb18@waldmann-edv.de>
> Are there any known issues/concerns with storing Borg repositories on a GlusterFS volume (Erasure Coded)?
I haven't used / tested GlusterFS yet. But the general rule for a borg
repo FS is that borg expects sane and consistent POSIX-like fs
behaviour. The docs have some info about what we expect from the FS.
> Any limits/recommendations for size of repos and data?
The biggest segment file number can be ~ 2^32.
The max. configurable segment file size is ~4GB, default is 500MB (borg
1.1). So, multiplied, we are at ~ 2^64B.
That's the only "repo limit" I am aware of right now.
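As a quick sanity check of those numbers (taking "~4GB" as 2^32 bytes):

```python
max_segments = 2**32             # biggest segment file number
max_segment_size = 4 * 1024**3   # ~4 GB maximum segment size, in bytes
print(max_segments * max_segment_size == 2**64)  # True
```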
But there is an archive limit:
http://borgbackup.readthedocs.io/en/stable/internals/data-structures.html#archives
Besides these, the more files you have in the backup set and the more
total chunks you have in the repo, the more memory you will need for the
files cache and chunks index.
http://borgbackup.readthedocs.io/en/stable/internals/data-structures.html#indexes-caches-memory-usage
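As a rough order-of-magnitude illustration for the 300TB case (the average chunk size and bytes-per-entry figures below are assumptions for the sketch; the linked docs have the real formulas):

```python
data = 300 * 1024**4        # 300 TiB of source data
avg_chunk = 2 * 1024**2     # assumed ~2 MiB average chunk size
entry = 100                 # assumed bytes per chunks-index entry

chunks = data // avg_chunk
print(chunks)                        # ~157 million chunks
print(chunks * entry / 1024**3)      # chunks index size in GiB, roughly 15
```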
> Some groups may have ~300TB of data to backup. Is it advisable to split it between multiple repos?
If you want to run borg check, that would take rather long for 300TB.
So yes, better make multiple smaller repos.
It's also better for concurrent access as borg will lock repos when
working with them.
It is also faster not to back up into the same repo from multiple
client machines, because borg then does not need to resync its chunks
cache.
Considering the scale (and FS), I am not sure how many people have used
borg with repos that large (on GlusterFS) before, so be careful.
> I plan to populate 1PB with Borg backups (multiple repos) and am considering my options.
Quite a lot of data.
Did you estimate how much you would save by using borg and its
compression/dedup?
In any case, I would be very much interested in the outcome of this, so
keep us updated.
> If I understand it correctly BorgBackup is a good fit for GlusterFS's
> EC volumes - the segments don't change much (at all?) once created and
> are only used for RO operations
This is completely true for append_only mode (== never effectively deleting
anything).
For normal mode, borg will run compact_segments() after doing write /
delete operations to the repo. This will read segment files with unused
entries and rewrite the used entries to new segment files. For borg 1.1
this will only happen above some hardcoded threshold unused/used ratio.
> I would also like to increase max_segment_size from 512MB to a large
> value (2GB), is it as simple as 524288000*4?
Yes.
> My goal is to have fewer files on GlusterFS volume(s).
Be aware that for compacting such a large segment file, it will read it
completely and write the new compacted one to storage again.
The threshold will make sure that this won't happen for only tiny
amounts of unused entries though.
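To make the arithmetic above concrete (the repository path is hypothetical; borg 1.1 ships a `borg config` command that can set repo options, but double-check against your version's docs):

```shell
# 524288000 bytes is the borg 1.1 default max_segment_size (500 MB);
# quadrupling it gives the ~2 GB target discussed above.
new_size=$((524288000 * 4))
echo "$new_size"  # prints 2097152000

# Hypothetical repo path; with borg 1.1 this could be applied via:
#   borg config /path/to/repo max_segment_size "$new_size"
```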
> I've been using Borg for at least a year now and it seems to work
> very well for all my other projects involving backing up data or Linux
> systems...
Great. We always try to fix severe bugs ASAP.
Cheers, Thomas
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From liori at exroot.org Fri Mar 16 17:32:57 2018
From: liori at exroot.org (Tomasz Melcer)
Date: Fri, 16 Mar 2018 22:32:57 +0100
Subject: [Borgbackup] TMPDIR and "large files"
Message-ID:
Hello!
Borg documentation states:
> TMPDIR - where temporary files are stored (might need a lot of
> temporary space for some operations)
(http://borgbackup.readthedocs.io/en/stable/usage/general.html)
How much space can we expect to be required there, and which operations
are the ones that will require it?
My concern is that I would like to make some of my systems as small as
possible, but I would like to avoid a surprise of running out of local
disk space during a normal operation of the system.
--
Tomasz Melcer
From me at blufinney.com Wed Mar 21 00:56:44 2018
From: me at blufinney.com (Blu Finney)
Date: Tue, 20 Mar 2018 21:56:44 -0700
Subject: [Borgbackup] Pruning Schedule Not Working as Expected
In-Reply-To: <1521097076.3.4.camel@blufinney.com>
References: <1521079280.3.16.camel@blufinney.com>
<1521086781.3.9.camel@blufinney.com> <1521087429.3.11.camel@blufinney.com>
<1521097076.3.4.camel@blufinney.com>
Message-ID: <1521608204.3.10.camel@blufinney.com>
Now that I've been running backups with pruning using the configuration
mentioned in this thread, I'm seeing unexpected results again - based
on my new understanding.
To recap, I've been running the backup/prune every three hours since
March 5th. Borg list currently gives the results below. I don't
understand why at least one day from the week of Mar 4th wasn't kept
(based on "keep_weekly: 4").
borg-2018-03-11T21:10:14 Sun, 2018-03-11 21:10:20
borg-2018-03-12T22:31:26 Mon, 2018-03-12 22:31:33
borg-2018-03-13T21:10:09 Tue, 2018-03-13 21:10:15
borg-2018-03-14T21:11:14 Wed, 2018-03-14 21:11:20
borg-2018-03-15T21:14:58 Thu, 2018-03-15 21:15:05
borg-2018-03-16T21:11:07 Fri, 2018-03-16 21:11:13
borg-2018-03-17T21:11:35 Sat, 2018-03-17 21:11:40
borg-2018-03-18T06:11:52 Sun, 2018-03-18 06:11:57
borg-2018-03-18T09:10:08 Sun, 2018-03-18 09:10:14
borg-2018-03-18T12:11:18 Sun, 2018-03-18 12:11:24
borg-2018-03-18T15:10:59 Sun, 2018-03-18 15:11:05
borg-2018-03-18T18:11:03 Sun, 2018-03-18 18:11:09
borg-2018-03-18T21:10:59 Sun, 2018-03-18 21:11:05
borg-2018-03-19T00:06:05 Mon, 2018-03-19 00:06:11
borg-2018-03-19T06:11:47 Mon, 2018-03-19 06:11:53
borg-2018-03-19T09:10:16 Mon, 2018-03-19 09:10:22
borg-2018-03-19T12:11:14 Mon, 2018-03-19 12:11:20
borg-2018-03-19T15:11:03 Mon, 2018-03-19 15:11:08
borg-2018-03-19T18:10:55 Mon, 2018-03-19 18:11:01
borg-2018-03-19T21:11:12 Mon, 2018-03-19 21:11:18
borg-2018-03-20T00:06:05 Tue, 2018-03-20 00:06:12
borg-2018-03-20T12:11:39 Tue, 2018-03-20 12:11:45
borg-2018-03-20T15:10:14 Tue, 2018-03-20 15:10:19
borg-2018-03-20T18:11:10 Tue, 2018-03-20 18:11:16
borg-2018-03-20T21:11:10 Tue, 2018-03-20 21:11:16
-----Original Message-----
From: Blu Finney
To: Elladan
Cc: borgbackup at python.org
Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
Date: Wed, 14 Mar 2018 23:57:56 -0700
Yep, in addition to a couple other fine tunings. After spending some
time thinking about it I used the following. After pruning with these
new settings the results matched what I was ultimately shooting for.
keep_within: 3H
keep_hourly: 16
keep_daily: 7
keep_weekly: 4
keep_monthly: 12
keep_yearly: 5
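Those keys look like borgmatic-style retention settings; for reference, the equivalent plain borg invocation would be along these lines (repository path hypothetical):

```shell
# Hypothetical repository path; mirrors the keep_* settings above.
borg prune /path/to/repo \
    --keep-within 3H \
    --keep-hourly 16 \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 12 \
    --keep-yearly 5
```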
-----Original Message-----
From: Elladan
To: Blu Finney
Cc: borgbackup at python.org
Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
Date: Wed, 14 Mar 2018 23:15:00 -0700
I'm not sure what your exact needs are, but my quick impression is
that just removing the --keep-minutely option and keeping the rest the
same will probably give you something approximating what you want.
On Wed, Mar 14, 2018 at 9:17 PM, Blu Finney wrote:
> You'll have to pardon me. After thinking about this more I can
> understand why the "snapshot" mentality really doesn't apply here.
>
> I'll have to spend some time doing some math to get the results I'm
> looking for.
>
> Thanks for your help.
>
>
> -----Original Message-----
> From: Blu Finney
> To: Elladan
> Cc: borgbackup at python.org
> Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
> Date: Wed, 14 Mar 2018 21:06:21 -0700
>
> Thank you for the simple explanation. I totally misunderstood the
> documentation - I think because of my "snapshot" mindset (like good
> ole
> sun zfs).
>
> Here I thought "--keep-hourly 24" meant keep archives from the past
> 24
> hours, and "--keep-minutely 60" meant keep archives from the past 60
> minutes, and so on.
>
> Is there a way to implement the snapshot mentality using "--keep-
> within"?
>
> e.g.
> --keep-within 60M --keep-within 24H --keep-within 7D ...
>
>
> Sorry for the sophomoric question, I'm really having a hard time
> wrapping my head around the "--keep-hourly X" way of thinking in
> order
> to achieve the results I'm looking for.
>
>
> -----Original Message-----
> From: Elladan
> To: Blu Finney
> Cc: borgbackup at python.org
> Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
> Date: Wed, 14 Mar 2018 19:42:22 -0700
>
> On Wed, Mar 14, 2018 at 7:01 PM, Blu Finney wrote:
> >
> > Hi,
> >
> > I've been using Borg backup since Mar 5th and am not clear why the
> > pruning schedule isn't working as expected.
> >
> > Details:
> > - Running backup every 3 hours (8 archives created per day)
> > - Pruning command used: borg prune :: --keep-within 3H --keep-
> > minutely
> > 60 --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly
> > 12
> > --
> > keep-yearly 1 --prefix {hostname}- --debug --stats --list
> >
> > Using these prune settings I was expecting some of the 8 daily
> > archives
> > to have been pruned by now. Yet none have been pruned.
> >
> > Have I misunderstood or misconfigured something?
>
> You've basically asked borg prune to do this (rules applied one after
> another):
>
> 1. Keep all archives in the last 3 hours.
> 2. Keep the last 60 archives, but not more than one per minute.
> 3. Keep the last 24 archives, but not more than one per
> hour.
>
> And so on. So basically, you've asked it to keep a very large number
> of archives.
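A rough simulation makes this concrete. The following is a simplified model of the per-rule bucketing, not borg's actual implementation: with one archive every 3 hours, `--keep-minutely 60` alone already retains the 60 newest archives, i.e. 7.5 days' worth.

```python
from datetime import datetime, timedelta

# Simplified model (not borg's real code): a keep rule walks archives
# newest-first and keeps at most one archive per time bucket, up to n buckets.
def keep(archives, bucket_fmt, n):
    kept, seen = [], set()
    for a in sorted(archives, reverse=True):
        bucket = a.strftime(bucket_fmt)
        if bucket not in seen:
            seen.add(bucket)
            kept.append(a)
        if len(seen) == n:
            break
    return kept

# 8 archives per day (every 3 hours) for 10 days
start = datetime(2018, 3, 5)
archives = [start + timedelta(hours=3 * i) for i in range(80)]

# --keep-minutely 60: all archives land in distinct minute buckets,
# so this rule alone keeps the newest 60 archives (7.5 days' worth).
minutely = keep(archives, "%Y-%m-%d %H:%M", 60)
print(len(minutely))  # 60
```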
> _______________________________________________
> Borgbackup mailing list
> Borgbackup at python.org
> https://mail.python.org/mailman/listinfo/borgbackup
From me at blufinney.com Wed Mar 21 01:27:42 2018
From: me at blufinney.com (Blu Finney)
Date: Tue, 20 Mar 2018 22:27:42 -0700
Subject: [Borgbackup] Pruning Schedule Not Working as Expected
In-Reply-To: <1521608204.3.10.camel@blufinney.com>
References: <1521079280.3.16.camel@blufinney.com>
<1521086781.3.9.camel@blufinney.com> <1521087429.3.11.camel@blufinney.com>
<1521097076.3.4.camel@blufinney.com> <1521608204.3.10.camel@blufinney.com>
Message-ID: <1521610062.3.12.camel@blufinney.com>
I *think* I figured out the discrepancy.
It seems borg pruning uses a Mon-Sun week vs a Sun-Sat week. We'll see
tomorrow. =)
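That guess matches Python's ISO calendar: if I read the 1.1 sources right, borg's weekly rule buckets archives by ISO year/week (strftime `%G-%V`), and ISO weeks run Monday through Sunday.

```python
from datetime import date

# Sun 2018-03-11 and Mon 2018-03-12 fall into different ISO weeks
# (week 10 vs week 11), so --keep-weekly treats them as separate buckets.
print(date(2018, 3, 11).isocalendar()[:2])  # (2018, 10)
print(date(2018, 3, 12).isocalendar()[:2])  # (2018, 11)
```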
-----Original Message-----
From: Blu Finney
To: Elladan
Cc: borgbackup at python.org
Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
Date: Tue, 20 Mar 2018 21:56:44 -0700
Now that I've been running backups with pruning using the configuration
mentioned in this thread, I'm seeing unexpected results again - based
on my new understanding.
To recap, I've been running the backup/prune every three hours since
March 5th. Borg list currently gives the results below. I don't
understand why at least one day from the week of Mar 4th wasn't kept
(based on "keep_weekly: 4").
borg-2018-03-11T21:10:14 Sun, 2018-03-11 21:10:20
borg-2018-03-12T22:31:26 Mon, 2018-03-12 22:31:33
borg-2018-03-13T21:10:09 Tue, 2018-03-13 21:10:15
borg-2018-03-14T21:11:14 Wed, 2018-03-14 21:11:20
borg-2018-03-15T21:14:58 Thu, 2018-03-15 21:15:05
borg-2018-03-16T21:11:07 Fri, 2018-03-16 21:11:13
borg-2018-03-17T21:11:35 Sat, 2018-03-17 21:11:40
borg-2018-03-18T06:11:52 Sun, 2018-03-18 06:11:57
borg-2018-03-18T09:10:08 Sun, 2018-03-18 09:10:14
borg-2018-03-18T12:11:18 Sun, 2018-03-18 12:11:24
borg-2018-03-18T15:10:59 Sun, 2018-03-18 15:11:05
borg-2018-03-18T18:11:03 Sun, 2018-03-18 18:11:09
borg-2018-03-18T21:10:59 Sun, 2018-03-18 21:11:05
borg-2018-03-19T00:06:05 Mon, 2018-03-19 00:06:11
borg-2018-03-19T06:11:47 Mon, 2018-03-19 06:11:53
borg-2018-03-19T09:10:16 Mon, 2018-03-19 09:10:22
borg-2018-03-19T12:11:14 Mon, 2018-03-19 12:11:20
borg-2018-03-19T15:11:03 Mon, 2018-03-19 15:11:08
borg-2018-03-19T18:10:55 Mon, 2018-03-19 18:11:01
borg-2018-03-19T21:11:12 Mon, 2018-03-19 21:11:18
borg-2018-03-20T00:06:05 Tue, 2018-03-20 00:06:12
borg-2018-03-20T12:11:39 Tue, 2018-03-20 12:11:45
borg-2018-03-20T15:10:14 Tue, 2018-03-20 15:10:19
borg-2018-03-20T18:11:10 Tue, 2018-03-20 18:11:16
borg-2018-03-20T21:11:10 Tue, 2018-03-20 21:11:16
-----Original Message-----
From: Blu Finney
To: Elladan
Cc: borgbackup at python.org
Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
Date: Wed, 14 Mar 2018 23:57:56 -0700
Yep, in addition to a couple other fine tunings. After spending some
time thinking about it I used the following. After pruning with these
new settings the results matched what I was ultimately shooting for.
keep_within: 3H
keep_hourly: 16
keep_daily: 7
keep_weekly: 4
keep_monthly: 12
keep_yearly: 5
-----Original Message-----
From: Elladan
To: Blu Finney
Cc: borgbackup at python.org
Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
Date: Wed, 14 Mar 2018 23:15:00 -0700
I'm not sure what your exact needs are, but my quick impression is
that just removing the --keep-minutely option and keeping the rest the
same will probably give you something approximating what you want.
On Wed, Mar 14, 2018 at 9:17 PM, Blu Finney wrote:
> You'll have to pardon me. After thinking about this more I can
> understand why the "snapshot" mentality really doesn't apply here.
>
> I'll have to spend some time doing some math to get the results I'm
> looking for.
>
> Thanks for your help.
>
>
> -----Original Message-----
> From: Blu Finney
> To: Elladan
> Cc: borgbackup at python.org
> Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
> Date: Wed, 14 Mar 2018 21:06:21 -0700
>
> Thank you for the simple explanation. I totally misunderstood the
> documentation - I think because of my "snapshot" mindset (like good
> ole
> sun zfs).
>
> Here I thought "--keep-hourly 24" meant keep archives from the past
> 24
> hours, and "--keep-minutely 60" meant keep archives from the past 60
> minutes, and so on.
>
> Is there a way to implement the snapshot mentality using "--keep-
> within"?
>
> e.g.
> --keep-within 60M --keep-within 24H --keep-within 7D ...
>
>
> Sorry for the sophomoric question, I'm really having a hard time
> wrapping my head around the "--keep-hourly X" way of thinking in
> order
> to achieve the results I'm looking for.
>
>
> -----Original Message-----
> From: Elladan
> To: Blu Finney
> Cc: borgbackup at python.org
> Subject: Re: [Borgbackup] Pruning Schedule Not Working as Expected
> Date: Wed, 14 Mar 2018 19:42:22 -0700
>
> On Wed, Mar 14, 2018 at 7:01 PM, Blu Finney wrote:
> >
> > Hi,
> >
> > I've been using Borg backup since Mar 5th and am not clear why the
> > pruning schedule isn't working as expected.
> >
> > Details:
> > - Running backup every 3 hours (8 archives created per day)
> > - Pruning command used: borg prune :: --keep-within 3H --keep-
> > minutely
> > 60 --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly
> > 12
> > --
> > keep-yearly 1 --prefix {hostname}- --debug --stats --list
> >
> > Using these prune settings I was expecting some of the 8 daily
> > archives
> > to have been pruned by now. Yet none have been pruned.
> >
> > Have I misunderstood or misconfigured something?
>
> You've basically asked borg prune to do this (rules applied one after
> another):
>
> 1. Keep all archives in the last 3 hours.
> 2. Keep the last 60 archives, but not more than one per minute.
> 3. Keep the last 24 archives, but not more than one per
> hour.
>
> And so on. So basically, you've asked it to keep a very large number
> of archives.
From devzero at web.de Sat Mar 24 07:25:08 2018
From: devzero at web.de (devzero at web.de)
Date: Sat, 24 Mar 2018 12:25:08 +0100
Subject: [Borgbackup] recreate --recompress incredibly slow
Message-ID:
Hi,
I'm trying to get a rough idea of how much space I would save by transitioning from lz4 to zstd, so I'm converting a 15GB archive like this:
/backup/bin/borg-1.1.4 recreate /iscsi/lun1/borg-repos/host1 --recompress --compression zstd
The process is incredibly slow, so slow that I possibly cannot convert all existing repos (despite the danger...). Maybe I need to start from scratch...
Can someone explain why it is so slow (maybe it's by design?), or is there room for optimization?
regards
roland
From manas.nagpure at gmail.com Mon Mar 26 03:59:57 2018
From: manas.nagpure at gmail.com (Manas Nagpure)
Date: Mon, 26 Mar 2018 13:29:57 +0530
Subject: [Borgbackup] hi : )
Message-ID:
I am a new member here : ) I love Borg
From dac at conceptual-analytics.com Fri Mar 30 12:16:17 2018
From: dac at conceptual-analytics.com (Dave Cottingham)
Date: Fri, 30 Mar 2018 12:16:17 -0400
Subject: [Borgbackup] When is the cache rebuilt?
Message-ID:
borg is spending this week doing a cache rebuild, and I'm trying to figure
out the cause so I can avoid it in the future. I did notice that due to
ntpd failure the system time on the client was quite different from the
time on the server. Would this be expected to trigger a cache rebuild? If
so, how different would the time have to be to cause this problem?
Thanks,
Dave Cottingham
From tw at waldmann-edv.de Fri Mar 30 14:01:53 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 30 Mar 2018 20:01:53 +0200
Subject: [Borgbackup] When is the cache rebuilt?
In-Reply-To:
References:
Message-ID:
On 30.03.2018 18:16, Dave Cottingham wrote:
> borg is spending this week doing a cache rebuild, and I'm trying to
> figure out the cause so I can avoid it in the future. I did notice that
> due to ntpd failure the system time on the client was quite different
> from the time on the server. Would this be expected to trigger a cache
> rebuild? If so, how different would the time have to be to cause this
> problem?
It is not time based, but based on the manifest hash.
If that differs from what the cache was built from, it will rebuild.
You can avoid that by having a separate repo per borg client.
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From dac at conceptual-analytics.com Fri Mar 30 14:29:30 2018
From: dac at conceptual-analytics.com (Dave Cottingham)
Date: Fri, 30 Mar 2018 14:29:30 -0400
Subject: [Borgbackup] When is the cache rebuilt?
In-Reply-To:
References:
Message-ID:
Separate repo per client is pretty much automatic in my case, since I have
only one client.
How is the client identified? By host name, or IP address, or some hash of
something, or what? I'm trying to figure out what might have changed.
Thanks,
Dave Cottingham
On Fri, Mar 30, 2018 at 2:01 PM, Thomas Waldmann wrote:
> On 30.03.2018 18:16, Dave Cottingham wrote:
> > borg is spending this week doing a cache rebuild, and I'm trying to
> > figure out the cause so I can avoid it in the future. I did notice that
> > due to ntpd failure the system time on the client was quite different
> > from the time on the server. Would this be expected to trigger a cache
> > rebuild? If so, how different would the time have to be to cause this
> > problem?
>
> It is not time based, but based on the manifest hash.
>
> If that differs from what the cache was built from, it will rebuild.
>
> You can avoid that by having a separate repo per borg client.
>
>
> --
>
> GPG ID: 9F88FB52FAF7B393
> GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
>
From tw at waldmann-edv.de Fri Mar 30 14:46:29 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Fri, 30 Mar 2018 20:46:29 +0200
Subject: [Borgbackup] When is the cache rebuilt?
In-Reply-To:
References:
Message-ID:
On 30.03.2018 20:29, Dave Cottingham wrote:
> Separate repo per client is pretty much automatic in my case, since I
> have only one client.
>
> How is the client identified?
Not at all.
The client either has a cache for that repo manifest hash or not.
> I'm trying to figure out what might have changed.
Did you use different users (home dirs)?
Did you lose / delete the cache?
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From dac at conceptual-analytics.com Fri Mar 30 16:50:49 2018
From: dac at conceptual-analytics.com (Dave Cottingham)
Date: Fri, 30 Mar 2018 16:50:49 -0400
Subject: [Borgbackup] When is the cache rebuilt?
In-Reply-To:
References:
Message-ID:
On Fri, Mar 30, 2018 at 2:46 PM, Thomas Waldmann wrote:
> On 30.03.2018 20:29, Dave Cottingham wrote:
> > Separate repo per client is pretty much automatic in my case, since I
> > have only one client.
> >
> > How is the client identified?
>
> Not at all.
>
> The client either has a cache for that repo manifest hash or not.
>
> > I'm trying to figure out what might have changed.
>
> Did you use different users (home dirs)?
>
> Did you lose / delete the cache?
>
I don't use different users, and my script sets BORG_CACHE_DIR so the cache
location wouldn't depend on that (I think).
It occurs to me that maybe I have misidentified what's going on as a cache
rebuild. What I see is, I'm backing up when very little has changed, but
borg is spending a lot of time on files that haven't changed. Usually it
goes like greased lightning through unchanged files. So I'm guessing that
it's checksumming all the files, which it doesn't usually do. So maybe the
cache is fine, but something else is making borg think it needs to checksum
files that haven't changed? How does borg decide which files to checksum?
From tw at waldmann-edv.de Fri Mar 30 20:27:28 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sat, 31 Mar 2018 02:27:28 +0200
Subject: [Borgbackup] When is the cache rebuilt?
In-Reply-To:
References:
Message-ID: <4233e733-51c0-e3fc-9059-623ded08b6c8@waldmann-edv.de>
> I don't use different users, and my script sets BORG_CACHE_DIR so the
> cache location wouldn't depend on that (I think).
>
> It occurs to me that maybe I have misidentified what's going on as a
> cache rebuild. What I see is, I'm backing up when very little has
> changed, but borg is spending a lot of time on files that haven't
> changed.
Use -v --list to clearly see how they are classified (see the docs for
what the letters mean).
> So I'm guessing that it's checksumming all the files, which it doesn't
> usually do.
When you did an upgrade from 1.0 to 1.1, that will happen once for all
files due to the changed files cache content (1.0: mtime,inode,size;
1.1: ctime,inode,size).
> So maybe the cache is fine, but something else is making
> borg think it needs to checksum files that haven't changed?
If it is happening within 1.1 backups (not directly after switching
from 1.0), it could be that you did e.g. some chown -R and that changed
the ctime.
Or you have a filesystem that has no stable inode numbers, like some
network filesystems.
Change detection is based on the 3 values I mentioned above. If they did
not change, borg won't chunk. If at least one of them changed, it will
assume that the file might have changed contents.
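To see why a chown/chmod forces re-chunking even though the contents are identical: a metadata-only change updates ctime, one of the three values compared. A small sketch, assuming a filesystem with fine-grained timestamps:

```python
import os
import tempfile
import time

# A metadata-only change (chmod/chown) bumps st_ctime, so a file with
# identical contents still fails a (ctime, inode, size) comparison.
fd, path = tempfile.mkstemp()
os.write(fd, b"unchanged contents")
os.close(fd)

before = os.stat(path)
time.sleep(0.05)             # ensure a measurable ctime difference
os.chmod(path, 0o600)        # contents untouched, metadata changed

after = os.stat(path)
print(after.st_size == before.st_size)         # size is unchanged
print(after.st_ctime_ns > before.st_ctime_ns)  # ctime moved forward
os.unlink(path)
```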
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
From tw at waldmann-edv.de Sat Mar 31 18:18:28 2018
From: tw at waldmann-edv.de (Thomas Waldmann)
Date: Sun, 1 Apr 2018 00:18:28 +0200
Subject: [Borgbackup] borgbackup 1.1.5 released!
Message-ID:
Released borgbackup 1.1.5 with misc. bug fixes.
https://github.com/borgbackup/borg/releases/tag/1.1.5
--
GPG ID: 9F88FB52FAF7B393
GPG FP: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393