The Tahoe BackupDB

Overview
To speed up backup operations, Tahoe maintains a small database known as the "backupdb". This is used to avoid re-uploading files which have already been uploaded recently.
This database lives in ~/.tahoe/private/backupdb.sqlite, and is a single-file SQLite database. It is used by the "tahoe backup" command. In the future, it may optionally be used by other commands such as "tahoe cp".
The purpose of this database is twofold: to manage the file-to-cap translation (the "upload" step) and the directory-to-cap translation (the "mkdir-immutable" step).
The overall goal of optimizing backup is to reduce the work required when the source disk has not changed (much) since the last backup. In the ideal case, running "tahoe backup" twice in a row, with no intervening changes to the disk, will not require any network traffic. Minimal changes to the source disk should result in minimal traffic.
This database is optional. If it is deleted, the worst effect is that a subsequent backup operation may use more effort (network bandwidth, CPU cycles, and disk IO) than it would have without the backupdb.
The database uses sqlite3, which is included as part of the standard Python library with Python 2.5 and later. For Python 2.4, Tahoe will try to install the "pysqlite" package at build time, but this will succeed only if sqlite3 with development headers is already installed. On Debian and Debian derivatives you can install the "python-pysqlite2" package (which, despite the name, actually provides sqlite3 rather than sqlite2). On old distributions such as Debian etch (4.0 "oldstable") or Ubuntu Edgy (6.10) the "python-pysqlite2" package won't work, but the "sqlite3-dev" package will.
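Since the backupdb is an ordinary single-file SQLite database, it can be inspected directly with the standard sqlite3 module. The following snippet is purely illustrative and not part of Tahoe itself; it assumes the default base directory:

import os, sqlite3

# Open the backupdb and confirm its schema version (expected to be 1).
# Note: sqlite3.connect() will create an empty file if the path is wrong,
# so double-check the location before poking at it.
db_path = os.path.expanduser("~/.tahoe/private/backupdb.sqlite")
conn = sqlite3.connect(db_path)
(version,) = conn.execute("SELECT version FROM version").fetchone()
print("backupdb schema version:", version)
conn.close()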
Schema
The database contains the following tables:
CREATE TABLE version
(
 version integer  -- contains one row, set to 1
);
CREATE TABLE local_files
(
 path   varchar(1024) PRIMARY KEY, -- index, this is an absolute UTF-8-encoded local filename
 size   integer,  -- os.stat(fn)[stat.ST_SIZE]
 mtime  number,   -- os.stat(fn)[stat.ST_MTIME]
 ctime  number,   -- os.stat(fn)[stat.ST_CTIME]
 fileid integer
);
CREATE TABLE caps
(
fileid integer PRIMARY KEY AUTOINCREMENT,
filecap varchar(256) UNIQUE -- URI:CHK:...
);
CREATE TABLE last_upload
(
fileid INTEGER PRIMARY KEY,
last_uploaded TIMESTAMP,
last_checked TIMESTAMP
);
CREATE TABLE directories
(
dirhash varchar(256) PRIMARY KEY,
dircap varchar(256),
last_uploaded TIMESTAMP,
last_checked TIMESTAMP
);
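As an illustration of how these tables relate (this query is not taken from Tahoe's source), the filecap previously recorded for a local path can be retrieved by joining local_files against caps and last_upload:

import os, sqlite3

# Illustrative lookup: map an absolute local pathname to its stored filecap
# and the time it was last checked.
db_path = os.path.expanduser("~/.tahoe/private/backupdb.sqlite")
conn = sqlite3.connect(db_path)
row = conn.execute(
    "SELECT caps.filecap, last_upload.last_checked"
    " FROM local_files"
    " JOIN caps ON local_files.fileid = caps.fileid"
    " JOIN last_upload ON caps.fileid = last_upload.fileid"
    " WHERE local_files.path = ?",
    (os.path.abspath(os.path.expanduser("~/.emacs")),),
).fetchone()
if row:
    print("filecap:", row[0], "last checked:", row[1])
conn.close()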
Upload Operation
The upload process starts with a pathname (like ~/.emacs) and wants to end up with a file-cap (like URI:CHK:...).
The first step is to convert the path to an absolute form (/home/warner/.emacs) and do a lookup in the local_files table. If the path is not present in this table, the file must be uploaded. The upload process (sketched in code after the list below) is:
- record the file's size, ctime (which is the directory-entry change time or file creation time depending on OS) and modification time
- upload the file into the grid, obtaining an immutable file read-cap
- add an entry to the "caps" table, with the read-cap, to get a fileid
- add an entry to the "last_upload" table, with the current time
- add an entry to the "local_files" table, with the fileid, the path, and the local file's size/ctime/mtime
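A minimal sketch of that bookkeeping, assuming an already-open sqlite3 connection and a hypothetical upload_file() helper that returns the read-cap (Tahoe's real implementation differs in detail):

import os, stat, time

def record_upload(conn, abspath, upload_file):
    # upload_file() is a hypothetical helper that pushes the file into the
    # grid and returns its immutable read-cap (URI:CHK:...).
    s = os.stat(abspath)
    filecap = upload_file(abspath)
    # caps.filecap is UNIQUE, so a convergent re-upload reuses the old row.
    row = conn.execute("SELECT fileid FROM caps WHERE filecap = ?",
                       (filecap,)).fetchone()
    if row:
        fileid = row[0]
    else:
        fileid = conn.execute("INSERT INTO caps (filecap) VALUES (?)",
                              (filecap,)).lastrowid
    now = time.time()
    conn.execute("INSERT OR REPLACE INTO last_upload"
                 " (fileid, last_uploaded, last_checked) VALUES (?, ?, ?)",
                 (fileid, now, now))
    conn.execute("INSERT OR REPLACE INTO local_files"
                 " (path, size, mtime, ctime, fileid) VALUES (?, ?, ?, ?, ?)",
                 (abspath, s[stat.ST_SIZE], s[stat.ST_MTIME],
                  s[stat.ST_CTIME], fileid))
    conn.commit()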
If the path is present in "local_files", the easy-to-compute identifying information is compared: file size and ctime/mtime. If these differ, the file must be uploaded. The row is removed from the local_files table, and the upload process above is followed.
If the path is present but ctime or mtime differs, the file may have changed. If the size differs, then the file has certainly changed. At this point, a future version of the "backup" command might hash the file and look for a match in an as-yet-undefined table, in the hopes that the file has simply been moved from somewhere else on the disk. This enhancement requires changes to the Tahoe upload API before it can be significantly more efficient than simply handing the file to Tahoe and relying upon the normal convergence to notice the similarity.
If ctime, mtime, or size is different, the client will upload the file, as above.
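That decision can be sketched as follows (again illustrative, using the schema above):

import os, stat

def must_upload(conn, abspath):
    # True when the path is new or its cheap identifiers have changed.
    row = conn.execute("SELECT size, mtime, ctime FROM local_files"
                       " WHERE path = ?", (abspath,)).fetchone()
    if row is None:
        return True  # never seen this path before
    size, mtime, ctime = row
    s = os.stat(abspath)
    return (size != s[stat.ST_SIZE] or
            mtime != s[stat.ST_MTIME] or
            ctime != s[stat.ST_CTIME])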
If these identifiers are the same, the client will assume that the file is unchanged (unless the --ignore-timestamps option is provided, in which case the client always re-uploads the file), and it may be allowed to skip the upload. For safety, however, we require that the client periodically perform a filecheck on these probably-already-uploaded files, and re-upload anything that doesn't look healthy. The client looks the fileid up in the "last_upload" table, to see how long it has been since the file was last checked.
A "random early check" algorithm should be used, in which a check is performed with a probability that increases with the age of the previous results. E.g. files that were last checked within a month are not checked, files that were checked 5 weeks ago are re-checked with 25% probability, 6 weeks with 50%, and more than 8 weeks are always checked. This reduces the "thundering herd" of filechecks-on-everything that would otherwise result when a backup operation is run one month after the original backup. If a filecheck reveals the file is not healthy, it is re-uploaded.
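One possible shape for that probability curve, using the example numbers above (the exact schedule is a policy choice, not something this document prescribes):

import random, time

WEEK = 7 * 24 * 3600

def should_check(last_checked, now=None):
    # Linear ramp matching the example: no checks within 4 weeks, 25% at
    # 5 weeks, 50% at 6 weeks, always check once 8 weeks have passed.
    now = time.time() if now is None else now
    age_weeks = (now - last_checked) / WEEK
    if age_weeks <= 4:
        return False
    if age_weeks >= 8:
        return True
    return random.random() < (age_weeks - 4) / 4.0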
If the filecheck shows the file is healthy, or if the filecheck was skipped, the client gets to skip the upload, and uses the previous filecap (from the "caps" table) to add to the parent directory.
If a new file is uploaded, a new entry is put in the "caps" and "last_upload" tables, and an entry is made in the "local_files" table to reflect the mapping from local disk pathname to uploaded filecap. If an old file is re-uploaded, the "last_upload" entry is updated with the new timestamps. If an old file is checked and found healthy, the last_checked timestamp in its "last_upload" entry is updated.
Relying upon timestamps is a compromise between efficiency and safety: a file which is modified without changing the timestamp or size will be treated as unmodified, and the "tahoe backup" command will not copy the new contents into the grid. The --no-timestamps option can be used to disable this optimization, forcing every byte of the file to be hashed and encoded.
Directory Operations
Once the contents of a directory are known (a filecap for each file, and a dircap for each directory), the backup process must find or create a Tahoe directory node with the same contents. The contents are hashed, and the hash is queried in the "directories" table. If found, the last-checked timestamp is used to perform the same random-early-check algorithm described for files above, but no new upload is performed. Since "tahoe backup" creates immutable directories, it is perfectly safe to re-use a directory from a previous backup.
If not found, the web-API "mkdir-immutable" operation is used to create a new directory, and an entry is stored in the table.
The comparison operation ignores timestamps and metadata, and pays attention solely to the file names and contents.
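One illustrative way to compute such a contents hash (Tahoe's actual dirhash format is internal to its implementation and may differ):

import hashlib

def dirhash(children):
    # children: dict mapping child name -> cap string. Hashing names and
    # caps in sorted order means timestamps and other metadata cannot
    # affect the result; only the names and contents do.
    h = hashlib.sha256()
    for name in sorted(children):
        h.update(name.encode("utf-8") + b"\x00")
        h.update(children[name].encode("ascii") + b"\x00")
    return h.hexdigest()

The resulting value is looked up in the "directories" table; a hit means the mkdir-immutable step can be skipped and the stored dircap reused.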
By using a directory-contents hash, the "tahoe backup" command is able to re-use directories from other places in the backed up data, or from old backups. This means that renaming a directory and moving a subdirectory to a new parent both count as "minor changes" and will result in minimal Tahoe operations and subsequent network traffic (new directories will be created for the modified directory and all of its ancestors). It also means that you can perform a backup ("#1"), delete a file or directory, perform a backup ("#2"), restore it, and then the next backup ("#3") will re-use the directories from backup #1.
The best case is a null backup, in which nothing has changed. This will result in minimal network bandwidth: one directory read and two modifies. The Archives/ directory must be read to locate the latest backup, and must be modified to add a new snapshot, and the Latest/ directory will be updated to point to that same snapshot.