Cumulus: Efficient Filesystem Backup to the Cloud
Dependencies:
  - libuuid (sometimes part of e2fsprogs)
  - Python (2.7 or later, or 3.2 or later)
  - Python six, a Python 2/3 compatibility library
      https://pypi.python.org/pypi/six
  - boto, the Python interface to Amazon's Web Services (for S3 storage)
      http://code.google.com/p/boto
  - paramiko, SSH2 protocol for Python (for sftp storage)
      http://www.lag.net/paramiko/
Building should be a simple matter of running "make". This will produce
an executable called "cumulus".
Two directories are needed for backups: one for storing the backup
snapshots themselves, and one for storing bookkeeping information to go
with the backups. In this example, the first will be "/cumulus", and
the second "/cumulus.db", but any directories will do. Only the first
directory, /cumulus, needs to be stored somewhere safe. The second is
only used when creating new snapshots, and is not needed when restoring.
1. Create the snapshot directory and the local database directory:
       $ mkdir /cumulus /cumulus.db
2. Initialize the local database using the provided script schema.sql:
       $ sqlite3 /cumulus.db/localdb.sqlite
       sqlite> .read schema.sql
3. If encrypting or signing backups with gpg, generate appropriate
   keypairs. The keys can be kept in a user keyring or in a separate
   keyring just for backups; this example does the latter.
       $ mkdir /cumulus.db/gpg; chmod 700 /cumulus.db/gpg
       $ gpg --homedir /cumulus.db/gpg --gen-key
         (generate a keypair for encryption; enter a passphrase to
         protect the secret key)
       $ gpg --homedir /cumulus.db/gpg --gen-key
         (generate a second keypair for signing; for automatic
         signing do not use a passphrase to protect the secret key)
   Be sure to store the secret key needed for decryption somewhere
   safe, perhaps with the backup itself (the key protected with an
   appropriate passphrase). The secret signing key need not be stored
   with the backups (since in the event of data loss, it probably
   isn't necessary to create future backups signed with the same
   key).
   To achieve better compression, the encryption key can be edited to
   alter the preferred compression algorithms, listing bzip2 before
   zlib:
       $ gpg --homedir /cumulus.db/gpg --edit-key <encryption key>
       gpg> showpref
         (prints a terse listing of preferences associated with the
         key)
       gpg> setpref
         (allows preferences to be changed; copy the same preferences
         list printed out by the previous command, but change the
         order of the compression algorithms, which start with "Z",
         to be "Z3 Z2 Z1", which stands for "BZIP2, ZLIB, ZIP")
   Copy the provided encryption filter program, cumulus-filter-gpg,
   somewhere it may be run from.
4. Create a script for launching the Cumulus backup process. A
   simple example:

       #!/bin/sh
       export LBS_GPG_HOME=/cumulus.db/gpg
       export LBS_GPG_ENC_KEY=<encryption key>
       export LBS_GPG_SIGN_KEY=<signing key>
       cumulus --dest=/cumulus --localdb=/cumulus.db --scheme=test \
           --filter="cumulus-filter-gpg --encrypt" --filter-extension=.gpg \
           --signature-filter="cumulus-filter-gpg --clearsign" \
           /etc /home /other/paths/to/store
   Make appropriate substitutions for the key IDs and any relevant
   paths. Here "--scheme=test" gives a descriptive name ("test") to
   this collection of snapshots. It is possible to store multiple sets
   of backups in the same directory, using different scheme names to
   distinguish them. The --scheme option can also be left out.
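Once the script runs correctly by hand, it can be scheduled with cron.
A hypothetical crontab entry (the script path is illustrative; adjust
it to wherever the script from step 4 was saved) running a nightly
backup at 3 a.m.:

```
0 3 * * * /usr/local/sbin/cumulus-backup
```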
Segment cleaning must periodically be done to identify backup segments
that are mostly unused, but are storing a small amount of useful data.
Data in these segments will be rewritten into new segments in future
backups to eliminate the dependence on the almost-empty old segments.
The provided cumulus-util tool can perform the necessary cleaning. Run
it with
    $ cumulus-util --localdb=/cumulus.db clean
Cleaning is still under development, and so may be improved in the
future, but this version is intended to be functional.
Old backup snapshots can be pruned from the snapshot directory
(/cumulus) to recover space. A snapshot which is still referenced by
the local database should not be deleted, however. Deleting an old
backup snapshot is a simple matter of deleting the appropriate snapshot
descriptor file (snapshot-*.lbs) and any associated checksums
(snapshot-*.sha1sums). Segments used by that snapshot, but not by any
other snapshot, can be identified by running the clean-segments.pl
script from the /cumulus directory; this will scan the current
directory to identify unreferenced segments, and will print a list to
stdout. Assuming the list looks reasonable, the segments can be
deleted with
    $ rm `./clean-segments.pl`
A tool to make this easier will be implemented later.
The clean-segments.pl script will also print out a warning message if
any snapshots appear to depend upon segments which are not present; this
is a serious error which indicates that some of the data needed to
recover a snapshot appears to be lost.
Listing and Restoring Snapshots
-------------------------------
A listing of all currently-stored snapshots (and their sizes) can be
produced with
    $ cumulus-util --store=/cumulus list-snapshot-sizes

If data from a snapshot needs to be restored, this can be done with
    $ cumulus-util --store=/cumulus restore-snapshot \
        test-20080101T121500 /dest/dir <files...>
Here, "test-20080101T121500" is the name of the snapshot (consisting of
the scheme name and a timestamp; this can be found from the output of
list-snapshot-sizes) and "/dest/dir" is the path under which files
should be restored (this directory should initially be empty).
"<files...>" is a list of files or directories to restore. If none are
specified, the entire snapshot is restored.
The cumulus-util command can operate directly on remote backups. The
--store parameter accepts, in addition to a raw disk path, a URL.
Supported URL forms are
    file:///path        Equivalent to /path
    s3://bucket/path    Storage in Amazon S3
        (Expects the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
        environment variables to be set appropriately.)
    sftp://server/path  Storage on an sftp server
        (Note that password authentication and password-protected
        authorization keys are not currently supported; configuration
        options such as the port or individual authorization keys
        must be set in ~/.ssh/config, and the public key of the
        server has to be in ~/.ssh/known_hosts.)
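For example, to operate on an S3 store, the credentials can be exported
before invoking cumulus-util (the bucket name and key values below are
placeholders):

```
export AWS_ACCESS_KEY_ID=<access key id>
export AWS_SECRET_ACCESS_KEY=<secret key>
cumulus-util --store=s3://my-bucket/cumulus list-snapshot-sizes
```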
To copy backup snapshots from one storage area to another, the
cumulus-sync command can be used, as in
    $ cumulus-sync file:///cumulus s3://my-bucket/cumulus
Support for directly writing backups to a remote location (without using
a local staging directory and cumulus-sync) is slightly more
experimental, but can be achieved by replacing the --dest option in the
backup script with, for example,
    --upload-script="cumulus-store s3://my-bucket/cumulus"
Alternate Restore Tool
----------------------
The contrib/restore.pl script is a simple program for restoring the
contents of a Cumulus snapshot. It is not as full-featured as the
restore functionality in cumulus-util, but it is far more compact. It
could be stored with the backup files so a tool for restores is
available even if all other data is lost.
The restore.pl script does not know how to decompress segments, so this
step must be performed manually. Create a temporary directory for
holding all decompressed objects. Copy the snapshot descriptor file
(*.lbs) for the snapshot to be restored to this temporary directory.
The snapshot descriptor includes a list of all segments which are needed
for the snapshot. For each of these segments, decompress the segment
file (with gpg or the appropriate program based on whatever filter was
used), then pipe the resulting data through "tar -xf -" to extract. Do
this from the temporary directory; the temporary directory should be
filled with one directory for each segment decompressed.
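The decompression step can be sketched as a small shell helper. This is
illustrative only and not part of Cumulus: the function name and the
segment-file extensions it matches are assumptions about how the
segments were filtered when the backup was made, and the gpg invocation
assumes the keyring layout from the setup example above.

```shell
# Illustrative helper: unpack every segment file from a store directory
# into a staging directory, ready for restore.pl. Handles gpg-encrypted
# segments as well as plain gzip/bzip2-compressed ones.
decompress_segments() {
    store=$1
    staging=$2
    mkdir -p "$staging"
    for seg in "$store"/*.tar.gpg "$store"/*.tar.gz "$store"/*.tar.bz2; do
        [ -e "$seg" ] || continue    # skip patterns that matched nothing
        case "$seg" in
            *.gpg) gpg --homedir /cumulus.db/gpg --decrypt "$seg" \
                       | (cd "$staging" && tar -xf -) ;;
            *.gz)  gzip -dc "$seg" | (cd "$staging" && tar -xf -) ;;
            *.bz2) bzip2 -dc "$seg" | (cd "$staging" && tar -xf -) ;;
        esac
    done
}

# Example: decompress_segments /cumulus /tmp/restore-staging
```

After running it, the staging directory holds one subdirectory per
segment, which is the layout restore.pl expects.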
Run restore.pl giving two arguments: the snapshot descriptor file
(*.lbs) in the temporary directory, and a directory where the restored
files should be written.