Cumulus: Efficient Filesystem Backup to the Cloud

How to Build
------------

Dependencies:

  - libuuid (sometimes part of e2fsprogs)
  - sqlite3
  - Python (2.7 or later, or 3.2 or later)
  - Python six, a Python 2/3 compatibility library
      https://pypi.python.org/pypi/six
  - boto, the Python interface to Amazon's Web Services (for S3 storage)
      http://code.google.com/p/boto
  - paramiko, SSH2 protocol for Python (for sftp storage)
      http://www.lag.net/paramiko/

Building should be a simple matter of running "make".  This will produce
an executable called "cumulus".
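
For example (a minimal sketch, assuming the dependencies above are
already installed and the commands are run from the top of the source
tree):

    $ make
    $ ls -l cumulus        # the freshly built executable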

Setting up Backups
------------------

Two directories are needed for backups: one for storing the backup
snapshots themselves, and one for storing bookkeeping information to go
with the backups.  In this example, the first will be "/cumulus", and
the second "/cumulus.db", but any directories will do.  Only the first
directory, /cumulus, needs to be stored somewhere safe.  The second is
only used when creating new snapshots, and is not needed when restoring.

  1. Create the snapshot directory and the local database directory:
        $ mkdir /cumulus /cumulus.db

  2. Initialize the local database using the provided script schema.sql:
        $ sqlite3 /cumulus.db/localdb.sqlite
        sqlite> .read schema.sql

  3. If encrypting or signing backups with gpg, generate appropriate
     keypairs.  The keys can be kept in a user keyring or in a separate
     keyring just for backups; this example does the latter.
        $ mkdir /cumulus.db/gpg; chmod 700 /cumulus.db/gpg
        $ gpg --homedir /cumulus.db/gpg --gen-key
            (generate a keypair for encryption; enter a passphrase for
            the secret key)
        $ gpg --homedir /cumulus.db/gpg --gen-key
            (generate a second keypair for signing; for automatic
            signing do not use a passphrase to protect the secret key)
     Be sure to store the secret key needed for decryption somewhere
     safe, perhaps with the backup itself (the key protected with an
     appropriate passphrase).  The secret signing key need not be stored
     with the backups (since in the event of data loss, it probably
     isn't necessary to create future backups that are signed with the
     same key).

     To achieve better compression, the encryption key can be edited to
     alter the preferred compression algorithms to list bzip2 before
     zlib.  Run
        $ gpg --homedir /cumulus.db/gpg --edit-key <encryption key>
     then, at the gpg prompt, enter "pref"
            (prints a terse listing of preferences associated with the
            key)
     followed by "setpref"
            (allows preferences to be changed; copy the same preferences
            list printed out by the previous command, but change the
            order of the compression algorithms, which start with "Z",
            to be "Z3 Z2 Z1" which stands for "BZIP2, ZLIB, ZIP")
     and finally "save" to commit the change.

     Copy the provided encryption filter program, cumulus-filter-gpg,
     somewhere it may be run from.

  4. Create a script for launching the Cumulus backup process.  A simple
     version is:

        #!/bin/sh
        export LBS_GPG_HOME=/cumulus.db/gpg
        export LBS_GPG_ENC_KEY=<encryption key>
        export LBS_GPG_SIGN_KEY=<signing key>
        cumulus --dest=/cumulus --localdb=/cumulus.db --scheme=test \
            --filter="cumulus-filter-gpg --encrypt" --filter-extension=.gpg \
            --signature-filter="cumulus-filter-gpg --clearsign" \
            /etc /home /other/paths/to/store

     Make appropriate substitutions for the key IDs and any relevant
     paths.  Here "--scheme=test" gives a descriptive name ("test") to
     this collection of snapshots.  It is possible to store multiple sets
     of backups in the same directory, using different scheme names to
     distinguish them.  The --scheme option can also be left out
     entirely.
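
For example, two backup sets could share the same snapshot directory by
running the backup with different scheme names (a sketch only; the
scheme names and paths are illustrative, and both runs are assumed to
share the same local database):

    $ cumulus --dest=/cumulus --localdb=/cumulus.db --scheme=system /etc
    $ cumulus --dest=/cumulus --localdb=/cumulus.db --scheme=home /home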

Backup Maintenance
------------------

Segment cleaning must be performed periodically to identify backup
segments that are mostly unused but still hold a small amount of useful
data.  Data in these segments will be rewritten into new segments in
future backups to eliminate the dependence on the almost-empty old
segments.

The provided cumulus-util tool can perform the necessary cleaning.  Run
it with
    $ cumulus-util --localdb=/cumulus.db clean
Cleaning is still under development and may be improved in the future,
but this version is intended to be functional.

Old backup snapshots can be pruned from the snapshot directory
(/cumulus) to recover space.  A snapshot which is still referenced by
the local database should not be deleted, however.  Deleting an old
backup snapshot is a simple matter of deleting the appropriate snapshot
descriptor file (snapshot-*.lbs) and any associated checksums
(snapshot-*.sha1sums).  Segments used by that snapshot, but not by any
other snapshot, can be identified by running the clean-segments.pl
script from the /cumulus directory; this will scan the current directory
for unreferenced segments and print a list of them to stdout.  Assuming
the list looks reasonable, the segments can then be deleted with
    $ rm `./clean-segments.pl`
A tool to make this easier will be implemented later.
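
A worked example of the pruning procedure (the snapshot and file names
are illustrative, and clean-segments.pl is assumed to have been copied
into /cumulus):

    $ cd /cumulus
    $ rm snapshot-test-20080101T121500.lbs \
         snapshot-test-20080101T121500.sha1sums
    $ ./clean-segments.pl            # review the printed list first
    $ rm `./clean-segments.pl`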

The clean-segments.pl script will also print out a warning message if
any snapshots appear to depend upon segments which are not present; this
is a serious error which indicates that some of the data needed to
recover a snapshot appears to be lost.

Listing and Restoring Snapshots
-------------------------------

A listing of all currently-stored snapshots (and their sizes) can be
produced with
    $ cumulus-util --store=/cumulus list-snapshot-sizes

If data from a snapshot needs to be restored, this can be done with
    $ cumulus-util --store=/cumulus restore-snapshot \
        test-20080101T121500 /dest/dir <files...>
Here, "test-20080101T121500" is the name of the snapshot (consisting of
the scheme name and a timestamp; this can be found from the output of
list-snapshot-sizes) and "/dest/dir" is the path under which files
should be restored (this directory should initially be empty).
"<files...>" is a list of files or directories to restore.  If none are
specified, the entire snapshot is restored.
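
For instance, to restore just the /etc tree from the example snapshot
into a fresh directory (illustrative paths; depending on how paths are
recorded in the snapshot, the final argument may need to be given
relative to the snapshot root, e.g. "etc" rather than "/etc"):

    $ mkdir /tmp/restored
    $ cumulus-util --store=/cumulus restore-snapshot \
        test-20080101T121500 /tmp/restored etc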

Remote Backups
--------------

The cumulus-util command can operate directly on remote backups.  The
--store parameter accepts, in addition to a raw disk path, a URL.
Supported URL forms are
    file:///path        Equivalent to /path
    s3://bucket/path    Storage in Amazon S3
        (expects the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
        environment variables to be set appropriately)
    sftp://server/path  Storage on an sftp server
        (note that password authentication and password-protected
        authorization keys are not currently supported; options such as
        the port or individual authorization keys must be configured in
        ~/.ssh/config, and the server's public key must be present in
        ~/.ssh/known_hosts)
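
For example, to list the snapshots stored under an S3 bucket (the bucket
name and credentials here are placeholders):

    $ export AWS_ACCESS_KEY_ID=<access key id>
    $ export AWS_SECRET_ACCESS_KEY=<secret access key>
    $ cumulus-util --store=s3://my-bucket/cumulus list-snapshot-sizes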

To copy backup snapshots from one storage area to another, the
cumulus-sync command can be used, as in
    $ cumulus-sync file:///cumulus s3://my-bucket/cumulus

Support for directly writing backups to a remote location (without using
a local staging directory and cumulus-sync) is slightly more
experimental, but can be achieved by replacing
    --dest=/cumulus
with
    --upload-script="cumulus-store s3://my-bucket/cumulus"
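
Applied to the backup script from step 4, the invocation would then look
roughly like this (a sketch; the bucket name and paths are illustrative):

    cumulus --upload-script="cumulus-store s3://my-bucket/cumulus" \
        --localdb=/cumulus.db --scheme=test \
        --filter="cumulus-filter-gpg --encrypt" --filter-extension=.gpg \
        --signature-filter="cumulus-filter-gpg --clearsign" \
        /etc /home /other/paths/to/store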

Alternate Restore Tool
----------------------

The contrib/restore.pl script is a simple program for restoring the
contents of a Cumulus snapshot.  It is not as full-featured as the
restore functionality in cumulus-util, but it is far more compact.  It
could be stored with the backup files so a tool for restores is
available even if all other data is lost.

The restore.pl script does not know how to decompress segments, so this
step must be performed manually.  Create a temporary directory for
holding all decompressed objects.  Copy the snapshot descriptor file
(*.lbs) for the snapshot to be restored to this temporary directory.
The snapshot descriptor includes a list of all segments which are needed
for the snapshot.  For each of these segments, decompress the segment
file (with gpg or the appropriate program based on whatever filter was
used), then pipe the resulting data through "tar -xf -" to extract it.
Do this from the temporary directory; the temporary directory should end
up containing one directory for each segment decompressed.
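
A sketch of this manual preparation, assuming gpg-encrypted segments
stored as <segment>.tar.gpg directly in /cumulus (the directory layout,
segment naming, and snapshot name are assumptions and may differ):

    $ mkdir /tmp/cumulus-restore && cd /tmp/cumulus-restore
    $ cp /cumulus/snapshot-test-20080101T121500.lbs .
    $ for seg in <segments listed in the descriptor>; do
          gpg --homedir /cumulus.db/gpg --decrypt /cumulus/$seg.tar.gpg \
              | tar -xf -
      done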

Run restore.pl giving two arguments: the snapshot descriptor file
(*.lbs) in the temporary directory, and a directory where the restored
files should be written.
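
For example, continuing with the illustrative names above (adjust the
path to wherever restore.pl is kept):

    $ mkdir /tmp/restored-files
    $ perl /path/to/contrib/restore.pl snapshot-test-20080101T121500.lbs \
        /tmp/restored-files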