[Borgbackup] Test : Borg vs Restic

Fabio Pedretti fabio.pedretti at unibs.it
Tue Sep 12 04:11:23 EDT 2017


[...]

2017-09-12 4:12 GMT+02:00 Melkor Lord <melkor.lord at gmail.com>:

> Features I *DISLIKE* in BOTH tools:
> ===================================
>
> - Their design is geared toward "backup-and-push-to-repository", which is
> nice but not desired in my environment. I need a
> "repository-pulls-backup-from-agent" design. Both tools could have an
> additional "agent" command that would:
>   * Use SSH transport by default to contact a host and get the benefits of
> SSH keys (authorized keys, etc.)
>   * Spawn a Borg/Restic instance to make the backup on the remote host (like
> a normal Borg call) but feed the result back to the calling Borg, which
> holds the repository
>   * A way to securely transmit the repokey data to the remote instance so
> the local Borg can mount/check the local repository
>
>   Of course, it would be the administrator's responsibility to set
> everything up accordingly: either use one repokey for every remote host, or
> script something a bit smarter to use a repokey per host or group of hosts,
> whatever suits the needs.
>
>   Why such a setup?
>
>   Because, in my case at least, the backup server is of critical importance
> and network-isolated from the other hosts. I really don't want the
> "all-hosts-can-contact-the-backup-server" style but rather the
> "only-backup-server-can-contact-hosts" kind of behavior. This also helps to
> limit the strain on the backup server. Having all the hosts, with no
> predictable backup size, hammering the backup server at the same time
> (cronjob) is not desirable, especially on sites where storage is on a
> budget :-)
>
>   For instance, I currently use a very spartan/crude system, but one that is
> rock solid and has never failed once in over two decades: a simple script
> which, in sequence, connects via SSH to each host and uses the remote tar
> command to perform the backup. Piping stdout/stderr over SSH makes it
> possible to retrieve the tarball as well as the errors and act accordingly.
> This is not scalable but highly effective, battle-tested and
> disaster-recovery proven! Booting a new server with some rescue OS and
> restoring from a tarball works in ALL conditions, no matter how long it
> takes :-) But now I need encryption and deduplication given the huge size
> of the data to back up, hence my tests with Borg/Restic, which both have
> nice features *AND* provide a single-file binary for disaster scenarios.
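
For reference, the pull-style tar loop described above boils down to
roughly the following (untested sketch; host names, destination path
and exclude list are placeholders):

  #!/bin/sh
  # Sketch of a pull-style tar-over-ssh backup: the backup server connects
  # out to each host in sequence; the hosts never contact the backup server.
  # Assumes passwordless root SSH access from the backup server to each host.
  HOSTS="web1 db1 mail1"
  DEST=/srv/backups
  DATE=$(date +%Y-%m-%d)

  for host in $HOSTS; do
      # The tar stream arrives on stdout (saved as the tarball), errors on
      # stderr (saved as a per-host log), so failures can be acted upon.
      if ssh "root@$host" "tar -czf - --exclude=/proc --exclude=/sys --exclude=/dev /" \
            > "$DEST/$host-$DATE.tar.gz" 2> "$DEST/$host-$DATE.log"; then
          echo "OK: $host"
      else
          echo "FAILED: $host, see $DEST/$host-$DATE.log" >&2
      fi
  done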

I also had that requirement, and it can be done with borg. I set up a
borg server that mounts the hosts to back up via NFS/sshfs; all the
backups can then be scheduled sequentially from the borg server with
crontab. With only a single backup running at a time, you can also use
a shared repository for all the hosts, potentially increasing
deduplication.
There is also no need for a borg client on the hosts being backed up,
just SSH (for sshfs) or an NFS server; another benefit is that you only
have to update one borg binary when a new release is out.
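
For illustration, a minimal sketch of that setup (untested; host names,
mount point, repository path and script name are placeholders; assumes
the repository was created once with "borg init" and that the
passphrase is available non-interactively, e.g. via BORG_PASSPHRASE):

  #!/bin/sh
  # Runs on the borg server, which pulls from each host in sequence.
  HOSTS="web1 db1 mail1"
  REPO=/srv/borg/shared-repo      # one repository shared by all hosts
  MNT=/mnt/backup-src

  mkdir -p "$MNT"
  for host in $HOSTS; do
      # Mount the remote root read-only over sshfs; no borg needed on the host.
      sshfs -o ro "root@$host:/" "$MNT" || { echo "mount of $host failed" >&2; continue; }
      # Back up relative to the mount point so archive paths don't embed $MNT;
      # the archive name encodes the host so list/prune can filter per host.
      ( cd "$MNT" && borg create --stats "$REPO::$host-$(date +%Y-%m-%d)" . )
      fusermount -u "$MNT"
  done

Scheduled from the borg server's crontab, e.g. one nightly run walking
all hosts sequentially:

  0 2 * * * /usr/local/sbin/pull-backup.sh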


-- 
ing. Fabio Pedretti
Head of U.O.C. "Reti e Sistemi" (Networks and Systems unit)
http://www.unibs.it/organizzazione/amministrazione-centrale/servizio-servizi-ict/uoc-reti-e-sistemi
Università degli Studi di Brescia
Via Valotti, 9 - 25121 Brescia
E-mail: fabio.pedretti at unibs.it


