\h'-\w'\fB\\$1\ \fP'u'\fB\\$1\ \fP\c
.TH rsync-backup 8 "7 October 2012" rsync-backup
rsync-backup \- back up files using rsync
script is a backup program of the currently popular
ability to create hardlinks from (apparently) similar existing local
trees to make incremental dumps efficient, even from remote sources.
Restoring files is easy because the backups created are just directories
full of files, exactly as they were on the source \(en and this is
The script does more than just run
It is also responsible for creating and removing snapshots of volumes to
be backed up, and expiring old dumps according to a user-specified
script should be installed and run on a central backup server with local
access to the backup volumes.
The script should be run with full (root) privileges, so that it can
correctly record file ownership information. The server should also be
to the client machines, and run processes there as root. (This is not a
security disaster. Remember that the backup server is, in the end,
responsible for the integrity of the backup data. A dishonest backup
server can easily compromise a client which is being restored from
.SS Command-line options
Most of the behaviour of
is controlled by a configuration file, described starting with the
.B Configuration commands
But a few features are controlled by command-line options.
Show a brief help message for the program, and exit successfully.
version number and some choice pieces of build-time configuration, and
instead of the default configuration file (shown as
Don't actually take a backup, or write proper logs: instead, write a
description of what would be done to standard error.
Produce verbose progress information on standard output while the backup
is running. This keeps one amused while running a backup
interactively. In any event,
will report failures to standard error, and otherwise run silently, so
it doesn't annoy unnecessarily if run by
Backing up a filesystem works as follows.
Make a snapshot of the filesystem on the client, and ensure that the
snapshot is mounted. There are some `trivial' snapshot types which use
the existing mounted filesystem, and either prevent processes writing to
it during the backup, or just hope for the best. Other snapshot types
require the snapshot to be mounted somewhere distinct from the main
filesystem, so that the latter can continue being used.
to copy the snapshot to the backup volume \(en specifically, to
.IB host / fs / new \fR.
If this directory already exists, then it's presumed to be debris from a
previous attempt to dump this filesystem:
will update it appropriately, by adding, deleting or modifying the
files. This means that retrying a failed dump \(en after fixing whatever
caused it to go wrong, obviously! \(en is usually fairly quick.
on the client to generate a `digest' describing the contents of the
filesystem, and send this to the server as
.IB host / fs / new .fshash \fR.
Release the snapshot: we don't need it any more.
over the new backup; specifically, to
.BI tmp/fshash. host . fs . date \fR.
This gives us a digest for what the backup volume actually stored.
digests. If they differ then dump the differences to the log file and
report a backup failure. (Backups aren't any good if they don't
actually back up the right thing. And you stand a better chance of
fixing them if you know that they're going wrong.)
Commit the backup, by renaming the dump directory to
.IB host / fs / date .fshash \fR.
The backup is now complete.
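The verify-and-commit steps above can be sketched as a shell fragment.
Everything here is illustrative: the paths follow the layout described
above, with made-up host, filesystem, and date names, and stand-in
digest contents; the real script is considerably more careful.

```shell
# Illustrative sketch of the verify-and-commit steps (invented names).
store=$(mktemp -d); host=myhost; fs=home; date=2012-10-07
mkdir -p "$store/$host/$fs/new" "$store/tmp"
# Stand-ins for the digests computed on the client and on the server.
echo "digest" >"$store/$host/$fs/new.fshash"
echo "digest" >"$store/tmp/fshash.$host.$fs.$date"

if cmp -s "$store/$host/$fs/new.fshash" "$store/tmp/fshash.$host.$fs.$date"
then
  # Digests agree: commit by renaming the dump and its manifest.
  mv "$store/$host/$fs/new" "$store/$host/$fs/$date"
  mv "$store/$host/$fs/new.fshash" "$store/$host/$fs/$date.fshash"
  echo "committed $host:$fs as $date"
else
  echo "verification failed for $host:$fs" >&2
fi
```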
.SS Configuration commands
The configuration file is simply a Bash shell fragment: configuration
commands are shell functions.
.BI "backup " "fs\fR[:\fIfsarg\fR] ..."
Back up the named filesystems. The corresponding
may be required by the snapshot type.
commands will back up filesystems on the named
To back up filesystems on the backup server itself, use its hostname:
will avoid inefficient and pointless messing about
This command clears the
list, and resets the retention policy to its default (i.e., the
policy defined prior to the first
.BI "like " "host\fR ..."
Declare that subsequent filesystems are `similar' to like-named
filesystems on the named
should use those trees as potential sources of hardlinkable files. Be
careful when using this option without
option: an erroneous hardlink will cause the backup to fail. (The
backup won't be left silently incorrect.)
.BI "retain " frequency " " duration
Define part of a backup retention policy: backup trees of the
should be kept for the
which means the same); the
Expiry considers each existing dump against the policy lines in order:
the last applicable line determines the dump's fate \(en so you should
probably write the lines in decreasing order of duration.
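The `last applicable line wins' rule can be illustrated with a small
shell sketch. The policy lines and the frequency/age encoding here are
invented purely for the example; this is not the script's actual expiry
code.

```shell
# Invented policy: one line per rule, `frequency age-limit-in-days'.
policy='daily 14
weekly 91'

# Decide a dump's fate: scan the lines in order and let the last
# applicable (frequency-matching) line have the final say.
fate() {
  _freq=$1 _age=$2 _verdict=expire
  while read -r f d; do
    if [ "$f" = "$_freq" ]; then
      if [ "$_age" -le "$d" ]; then _verdict=keep; else _verdict=expire; fi
    fi
  done <<EOF
$policy
EOF
  echo "$_verdict"
}
```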
commands collectively define a retention policy. Once a policy is
operations use the policy. The first
command clears the policy and starts defining a new one. The policy
defined before the first
policy: at the start of each
stanza, the policy is reset to the default.
snapshot type (see below) doesn't prevent a filesystem from being
modified while it's being backed up. If this happens, the
pass will detect the difference and fail. If the filesystem in question
is relatively quiescent, then maybe retrying the backup will result in a
successful consistent copy. Following this command, a backup which
mismatch will be retried up to
times before being declared a failure.
.BI "snap " type " " \fR[\fIargs\fR...]
for subsequent backups. Some snapshot types require additional
arguments, which may be supplied here. This command clears the
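Since the configuration file is just a Bash fragment, the commands
above can simply be written one after another. The following sketch
uses only commands named in this section, with invented host names,
volume group, and durations; consult the individual command
descriptions for the exact argument syntax.

```shell
## Example configuration fragment (all names and durations invented).

# Retention: the duration arguments here are illustrative only.
retain daily 14
retain weekly 91

# Take LVM snapshots in the (hypothetical) volume group `vg0'.
snap lvm vg0

# `home' here resembles `home' on the (hypothetical) host `alice';
# let rsync hardlink against alice's existing trees.
like alice
backup home:/home usr:/usr
```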
.SS Configuration variables
The following shell variables may be overridden by the configuration
The number of log files to be kept for each filesystem. Old logfiles
are deleted to keep the total number below this bound. The default
Command-line options to pass to
in addition to the basic set:
.B \-\-one-file-system
.BR "\-\-filter=""dir-merge .rsync-backup""" .
snapshots are mounted on subdirectories below the
.IR "on backup clients" .
is the backup mount directory configured at build time.
The volume size option to pass to
when creating a snapshot. The default is
which seems to work fairly well.
Where the actual backup trees should be stored. See the section on
is the backup mount directory configured at build time.
The hash function to use for verifying archive integrity. This is
so it must name one of the hash functions supported by your Python's
module. The default is
The configuration file may define shell functions to perform custom
actions at various points in the backup process.
.BI "backup_precommit_hook " host " " fs " " date
Called after a backup has been verified complete and is about to be
committed. The backup tree is in
in the current directory, and the
A typical action would be to create a digital signature on the
.BI "backup_commit_hook " host " " fs " " date
Called during the commit procedure. The backup tree and manifest have
been renamed into their proper places. Typically one would use this
hook to rename files created by the
.B backup_precommit_hook
.BR "whine " [ \-n ] " " \fItext\fR...
Called to report `interesting' events when the
option is in force. The default action is to echo the
to (what was initially) standard output, followed by a newline unless
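As a concrete (and entirely hypothetical) example of overriding one of
these hooks, a configuration file could redefine
.B whine
to timestamp each message, preserving the \-n convention described
above:

```shell
# Hypothetical whine override: prefix each progress message with the
# time of day, keeping the `-n' (no trailing newline) convention.
whine() {
  nonl=
  if [ "$1" = "-n" ]; then nonl=-n; shift; fi
  echo $nonl "$(date +%H:%M:%S) $*"
}
```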
The following snapshot types are available.
A trivial snapshot type: attempts to back up a live filesystem. How
well this works depends on how active the filesystem is. If files
change while the dump is in progress then the
verification will likely fail. Backups using this snapshot type must
specify the filesystem mount point as the
A slightly less trivial snapshot type: make the filesystem read-only
while the dump is in progress. Backups using this snapshot type must
specify the filesystem mount point as the
Create snapshots using LVM. The snapshot argument is interpreted as the
relevant volume group. The filesystem name is interpreted as the origin
volume name; the snapshot will be called
.IB SNAPDIR / fs \fR;
space will be allocated to it according to the
.BI "rfreezefs " client " " vg
This gets complicated. Suppose that a server has an LVM volume group,
and exports (somehow) a logical volume to a client. Examples are a host
providing a virtual disk to a guest, or a server providing
network-attached storage to a client. The server can create a snapshot
of the volume using LVM, but must synchronize with the client to ensure
that the filesystem image captured in the snapshot is clean. The
program should be installed on the client to perform this rather
delicate synchronization. Declare the server using the
command as usual; pass the client's name as the
server's volume group name as the
snapshot arguments. Finally, backups using this snapshot type must
specify the filesystem mount point (or, actually, any file in the
filesystem) on the client, as the
Additional snapshot types can be defined in the configuration file. A
snapshot type requires two shell functions.
.BI snap_ type " " snapargs " " fs " " fsarg
Create the snapshot, and write the mountpoint (on the client host) to
standard output, in a form suitable as an argument to
.BI unsnap_ type " " snapargs " " fs " " fsarg
There are a number of utility functions which can be used by snapshot
type handlers: please see the script for details. Please send the
author interesting snapshot handlers for inclusion in the main
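As a minimal sketch of the required pair of functions, here is an
invented `null' type which takes no real snapshot at all: it assumes
the fsarg is the filesystem's mount point and simply reports it.

```shell
# Invented snapshot type `null': nothing is actually snapshotted.
# snap_TYPE gets the snapshot args, filesystem name, and fs argument,
# and must print the client-side mountpoint of the (pseudo-)snapshot.
snap_null() {
  snapargs=$1 fs=$2 fsarg=$3
  echo "$fsarg"
}

# unsnap_TYPE undoes whatever snap_TYPE did: nothing, in this case.
unsnap_null() {
  :
}
```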
.SS Archive structure
Backup trees are stored in a fairly straightforward directory tree.
At the top level is one directory for each client host. There are also
some special entries:
.B \&.rsync-backup-store
This file must be present in order to indicate that a backup volume is
present (and not just an empty mount point).
The cache database used for improving performance of local file
hashing. There may be other
files used by SQLite for its own purposes.
Part of the filesystem used on the backup volume. You don't want to
Used to store temporary files during the backup process. (Some of them
want to be on the same filesystem as the rest of the backup.) When
things go wrong, files are left behind in the hope that they might help
someone debug the mess. It's always safe to delete the files in here
when no backup is running.
So don't use those names for your hosts.
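Putting this together, a small backup volume might look like the
following sketch, which builds and lists a miniature example of the
layout using an invented host `gracious' with one filesystem `home'
and invented dump dates:

```shell
# Build and list a miniature example of the archive layout
# (host, filesystem, and dates are invented).
root=$(mktemp -d)
touch "$root/.rsync-backup-store"        # marks a usable backup volume
mkdir -p "$root/tmp"                     # scratch space for dumps
mkdir -p "$root/gracious/home/2012-10-06" "$root/gracious/home/2012-10-07"
touch "$root/gracious/home/2012-10-06.fshash" \
      "$root/gracious/home/2012-10-07.fshash"
( cd "$root" && find . | sort )
```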
The next layer down contains a directory for each filesystem on the given host.
The bottom layer contains a directory for each dump of that filesystem,
named with the date at which the dump was started (in ISO8601
.IB yyyy \(en mm \(en dd
format), together with associated files named
Mark Wooding, <mdw@distorted.org.uk>