
Thanks for asking, it's the way to learn :) The documentation of BackInTime could be better, so it's good that you ask.

There is one big misunderstanding that is the source of most of your confusion. buhtz has already mentioned this in the other thread: BackInTime does not make "incremental" backups in the way that you understand the word. BackInTime uses hardlinks to "store" unchanged files without using any space. It makes copies of all the files that have changed between one backup and another.

If you're not sure what a hardlink is, how to recognize one, and why hardlinks "save space" when used for backups, you should make yourself familiar with them. You can start here: https://www.redhat.com/sysadmin/linking-linux-explained – but most of all, play around with a few small files with unimportant content and try out BackInTime on them.

Imagine you have three files, and their content is listed in brackets here. If the file named "foo" contains the letters "ABC", then I'll write foo[ABC]. So you have three files:

- foo[ABC]
- bar[DEF]
- baz[XYZ]

You make a BackInTime backup of these files, and they get copied to the backup location. Now you change the content of baz from [XYZ] to [999]. The next time you make a backup with BackInTime, your backup will contain this:

- foo[ABC] <-- not a copy, just a hardlink to the same file from the last backup!
- bar[DEF] <-- not a copy, just a hardlink to the same file from the last backup!
- baz[999] <-- a new file!

Your backup is now "incremental" in the sense that there are NO NEW COPIES of foo[ABC] and bar[DEF] in the backup location. They're just hardlinks, because the files are identical to before.

BUT: BackInTime's backups are never "incremental" in the sense that the CONTENTS of a file are updated from one backup to another. If you change one byte in a 5GB video file, your backup will contain a completely new copy of that file. BackInTime will use 10GB of space to represent the fact that a 5GB file has changed by one byte.
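If you want to see this with your own eyes before touching BackInTime, here is a minimal sketch you can run in a throwaway directory (the file names are just examples, not anything BackInTime creates). It shows that a hardlink is the same inode under a second name, and that replacing a file with a new one (as rsync-style backups do) leaves the old link untouched:

```shell
#!/bin/sh
set -e
tmpdir=$(mktemp -d)
cd "$tmpdir"

printf 'ABC' > foo          # the "original" file
ln foo foo-snapshot         # a hardlink, NOT a copy: no extra data on disk

# Both names point at the same inode; the link count is 2.
stat -c 'inode=%i links=%h' foo
stat -c 'inode=%i links=%h' foo-snapshot

# Writing a NEW file and moving it over the old name breaks the link:
printf '999' > foo.new
mv foo.new foo

cat foo-snapshot            # the snapshot still holds the old content
```

`stat -c` is the GNU coreutils syntax (Linux); on BSD/macOS you would use `stat -f` instead. The key observation is that the last `cat` still shows "ABC": the snapshot name kept the old inode, while "foo" now points at a new one.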
That's the way it's designed. I hope this makes things clearer. If not, feel free to ask again.

Also: Play around with BackInTime! Use data that doesn't matter, a few small files or a few large ones, and just see what happens. Check the space usage of the backup location to learn how hardlinks are used.

Cheers
Michael

On 30.11.2023 17:58, LateJunction wrote:
I am having difficulty in setting up backintime-qt to function as I expect; I would appreciate advice from experienced users. I’m running BIT 1.4.1-1 under Linux Mint 21.2 Cinnamon (based on Ubuntu Jammy), with Kernel 6.2.0-37. Hardware is an 11th Gen Intel Core i5-11600 with 32 GiB of RAM. /Root and /Home are on separate NVMe SSDs. Backup is to a 1TB 7200 rpm HDD. An nVidia 3060 GPU with 12 GiB of memory is installed.
This post is about the process of restore, which seems to operate differently from my expectation.
That expectation is that a file which exists in the ‘original’ location in the file system at the time of restore will be overwritten by the file of the same name from the ‘full’ backup. This will in turn be overwritten by any later versions of that file in the sequence of incremental backups, from oldest to most recent, no matter how many incremental backups there are. The final state of the file will be identical to the newest incremental backup prior to the restore operation. There will be just a single file ‘A’ in the original location.
This is how every backup application that uses incremental backups appears to work, according to my understanding of the application descriptions that I have read on Google and YouTube.
In contrast, the man page for BIT, under the heading ‘DESCRIPTION’, says: “When you restore a file ‘A’, if it already exists on the file system it will be renamed to ‘A.backup.currentdate’.” I understand ‘currentdate’ to mean ‘current date and time, to a resolution sufficient for the file name to be unique’ – probably to the millisecond. The man page statement then implies that after the restore, my data set will contain a file ‘A’ plus as many versions of the file ‘A.backup.currentdate’ as there are unique versions of file ‘A’ in the incremental backups. This could be tens to hundreds of additional file versions for EVERY file in my data set which has been changed in the period from the initial, or ‘full’, backup to the last incremental backup. This could result in the original file location being filled to capacity, at which point I assume the restore will be terminated.
This is not what I want and, more to the point, doesn't seem to be generally desirable. What have I misunderstood?