This optimization is aimed at large files that are composed of many
blocks--including the size of each block allows a restore program to
determine the offset at which each block begins in the output file (by
summing the sizes of all previous blocks). This can make restores more
efficient: file data can be filled in as blocks are encountered, instead
of having to locate the blocks in the order they appear in the data
list.
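As a rough sketch of the idea (the function name and types here are
hypothetical, not part of the actual code): the offset of each block is
just a prefix sum over the stored block sizes.

```cpp
#include <cstdint>
#include <vector>

// Given the stored size of each block in data-list order, compute the
// offset at which each block begins in the restored output file.
// Block i starts where block i-1 ended, so offsets are a running sum.
std::vector<int64_t> BlockOffsets(const std::vector<int64_t> &sizes) {
    std::vector<int64_t> offsets;
    int64_t pos = 0;
    for (int64_t s : sizes) {
        offsets.push_back(pos);  // start of this block
        pos += s;                // next block begins after this one
    }
    return offsets;
}
```

For example, a file stored as blocks of sizes 4096, 4096, and 1024
would yield offsets 0, 4096, and 8192, letting the restore program
write each block directly into place as it is encountered.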
A future change might be to include the sizes only when necessary--files
which are composed of a single object do not need a size, nor does the
last block of a large file. But for now, simply include the size on all
objects.
This is part of a recommended format change, but one that is both forward-
and backward-compatible.
} else if (rc == SQLITE_ROW) {
ref = ObjectReference(IdToSegment(sqlite3_column_int64(stmt, 0)),
(const char *)sqlite3_column_text(stmt, 1));
+ ref.set_range(0, size);
} else {
fprintf(stderr, "Could not execute SELECT statement!\n");
ReportError(rc);

o->write(tss);
ref = o->get_ref();
db->StoreObject(ref, block_csum, bytes, block_age);
+ ref.set_range(0, bytes);
delete o;
}