[Tux3] Comparison to Hammer fs design

Daniel Phillips phillips at phunq.net
Sun Jul 27 04:51:34 PDT 2008


Subscribed now, everything should be OK.

On Friday 25 July 2008 19:02, Matthew Dillon wrote:
> :Yes, that is the main difference indeed, essentially "log everything" vs
> :"commit" style versioning.  The main similarity is the lifespan oriented
> :version control at the btree leaves.
> 
>     Reading this and a little more that you describe later let me make
>     sure I understand the forward-logging methodology you are using.
>     You would have multiple individually-tracked transactions in
>     progress due to parallelism in operations initiated by userland and each
>     would be considered committed when the forward-log logs the completion
>     of that particular operation?

Yes.  Writes tend to be highly parallel in Linux because they are
mainly driven by the VMM attempting to clean cache dirtied by active
writers, who generally do not wait for syncing.  So this will work
really well for buffered IO, which is most of what goes on in Linux.
I have not thought much about how well this works for O_SYNC or
O_DIRECT from a single process.  I might have to do it slightly
differently to avoid performance artifacts there, for example, guess
where the next few direct writes are going to land based on where the
most recent ones did and commit a block that says "the next few commit
blocks will be found here, and here, and here...".

When a forward commit block is actually written it contains a sequence
number and a hash of its transaction in order to know whether the
commit block write ever completed.  This introduces a risk that data
overwritten by the commit block might contain the same hash and same
sequence number in the same position, causing corruption on replay.
The chance of this happening is inversely related to the size of the
hash, times the chance of colliding with the same sequence number in
random data, times the chance of rebooting randomly.  So the risk can
be set arbitrarily small by selecting the size of the hash, and using
a good hash.  (Incidentally, TEA was not very good when I tested it in
the course of developing dx_hack_hash for HTree.)
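To make that concrete, here is a rough sketch of a forward commit block
and the replay-time validity test, with the field names, sizes and
helper invented for illustration rather than taken from the actual Tux3
disk format:

   struct commit_block {
       u64 sequence;      /* position in the forward log chain */
       u64 next_commit;   /* preallocated location of the next commit block */
       u64 body_hash;     /* hash over the transaction body blocks */
       u32 body_blocks;   /* number of body blocks in this transaction */
       /* ... logical redo records follow ... */
   };

   /* Replay accepts a candidate commit block only if both the expected
      sequence number and the recomputed body hash match; anything else
      is stale data that happens to occupy the probed location. */
   static int commit_is_valid(struct commit_block *cb, u64 want_seq, u64 body_hash)
   {
       return cb->sequence == want_seq && cb->body_hash == body_hash;
   }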

Note: I am well aware that a debate will ensue about whether there is
any such thing as "acceptable risk" in relying on a hash to know if a
commit has completed.  This occurred in the case of Graydon Hoare's
Monotone version control system and continues to this day, but the fact
is, the cool modern version control systems such as Git and Mercurial
now rely very successfully on such hashes.  Nonetheless, the debate
will keep going, possibly as FUD from parties who just plain want to
use some other filesystem for their own reasons.  To quell that
definitively I need a mount option that avoids all such commit risk,
perhaps by providing modest sized journal areas salted throughout the
volume whose sole purpose is to record log commit blocks, which are
then not forward logged.  This would be only slightly less efficient
than forward logging and still better than journalling, which has to
seek far away to the journal and has to provide journal space for the
biggest possible journal transaction, as opposed to at most the commit
blocks needed for the largest possible VFS transaction (probably one).

>     If the forward log entries are not (all) cached in-memory that would mean
>     that accesses to the filesystem would have to be run against the log
>     first (scanning backwards), and then through to the B-Tree?  You
>     would solve the need for having an atomic commit ('flush groups' in
>     HAMMER), but it sounds like the algorithmic complexity would be
>     very high for accessing the log.

Actually, the btree node images are kept fully up to date in the page
cache which is the only way the high level filesystem code accesses
them.  They do not reflect exactly what is on the disk, but they do
reflect exactly what would be on disk if all the logs were fully
rolled up ("replay").

The only operations on forward logs are:

  1) Write them
  2) Roll them up into the target objects
  3) Wait on rollup completion

The rollup operation is also used for replay after a crash.

A forward log that carries the edits to some dirty cache block pins
that dirty block in memory and must be rolled up into a physical log
before the cache block can be flushed to disk.  Fortunately, such a
rollup requires only a predictable amount of memory: space to load
enough of the free tree to allocate space for the rollup log, enough
available cache blocks to probe the btrees involved, and a few
cache blocks to set up the physical log transaction.  It is the
responsibility of the transaction manager to ensure that sufficient
memory to complete the transaction is available before initiating it,
otherwise deadlock may occur in the block writeout path.  (This
requirement is the same as for any other transaction scheme).

One traditional nasty case that becomes really nice with logical
forward logging is truncate of a gigantic file.  We just need to
commit a logical update like ['resize', inum, 0] then the inode data
truncate can proceed as convenient.  Another is orphan inode handling
where an open file has been completely unlinked, in which case we
log the logical change ['free', inum] then proceed with the actual
delete when the file is closed or when the log is replayed after a
surprise reboot.
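Just to show the scale, a hypothetical encoding of those two records
(not the actual Tux3 record layout) comes to a couple of dozen bytes in
a commit block, after which the gigabytes of affected data blocks can
be recovered at leisure during rollup:

   /* Hypothetical logical redo records; the real format may differ. */
   struct log_resize { u8 opcode; u8 len; u64 inum; u64 newsize; };  /* ['resize', inum, 0] */
   struct log_free   { u8 opcode; u8 len; u64 inum; };               /* ['free', inum] */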

Logical log replay is not idempotent, so special care has to be taken
on replay to ensure that specified changes have not already been
applied to the target object.  I choose not to go the traditional route
of providing special case tests for "already applied" which get really
ugly or unreliable when there are a lot of stacked changes.  Instead I
introduce the rule that a logical change can only be applied to a known
good version of the target object, a promise that is fulfilled by the
physical logging layer.

For example, on replay, if we have some btree index on disk for which
some logical changes are outstanding then first we find the most recent
physically logged version of the index block and read it into cache,
then apply the logical changes to it there.  Where interdependencies
exist between updates, for example the free tree should be updated to
reflect a block freed by merging two btree nodes, the entire collection
of logical and physical changes has to be replayed in topologically
sorted order, the details of which I have not thought much about other
than to notice it is always possible.
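In outline, replay under that rule might be sketched like this, with
the names and the change list structure invented, and the dependency
ordering being exactly the part that still needs detailed design:

   /* Sketch: apply every outstanding change, physical or logical, to
      cached block images in dependency order, so a logical change only
      ever sees a block image that physical logging has already made
      known good. */
   static void replay(struct change *changes, int count)
   {
       topo_sort(changes, count);   /* e.g. free tree edits before btree merges */
       for (int i = 0; i < count; i++) {
           if (changes[i].kind == PHYSICAL)
               load_logged_block(&changes[i]);  /* newest logged image into cache */
           else
               apply_logical(&changes[i]);      /* edit the cached image */
       }
   }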

When replay is completed, we have a number of dirty cache blocks which
are identical to the unflushed cache blocks at the time of a crash,
and we have not yet flushed any of those to disk.  (I suppose this gets
interesting and probably does need some paranoid flushing logic in
replay to handle the bizarre case where a user replays on a smaller
memory configuration than they crashed on.)  The thing is, replay
returns the filesystem to the logical state it was in when the crash
happened.  This is a detail that journalling filesystem authors tend
to overlook: actually flushing out the result of the replay is
pointless and only obscures the essential logic.  Think about a crash
during replay, what good has the flush done?

>     And even though you wouldn't have to group transactions into larger
>     commits the crash recovery code would still have to implement those
>     algorithms to resolve directory and file visibility issues.  The problem
>     with namespace visibility is that it is possible to create a virtually
>     unending chain of separate but inter-dependant transactions which either
>     all must go, or none of them.  e.g. creating a, a/b, a/b/c, a/b/x, a/b/c/d,
>     etc etc.

I do not see why this example cannot be logically logged in pieces:

   ['new', inum_a, mode etc] ['link', inum_parent, inum_a, "a"]
   ['new', inum_b, mode etc] ['link', inum_a, inum_b, "b"]
   ['new', inum_c, mode etc] ['link', inum_b, inum_c, "c"]
   ['new', inum_x, mode etc] ['link', inum_b, inum_x, "x"]
   ['new', inum_d, mode etc] ['link', inum_c, inum_d, "d"]

Logical updates on one line are in the same logical commit.  The
logical updates that allocate blocks and record the possibly split
btree leaves are omitted for clarity; the number omitted is bounded by
the depth of the btrees.  To keep things simple, the logical
log format should be such that it is impossible to overflow one commit
block with the updates required to represent a single vfs level
transaction.

I suspect there may be some terminology skew re the term "transaction".
Tux3 understands this as "VFS transaction", which does not include
trying to make an entire write(2) call atomic for example, but only
such things as allocate+write for a single page cache page or
allocate+link for a sys_link call.  Fsync(2) is not a transaction but
a barrier that Tux3 is free to realize via multiple VFS transactions,
which the Linux VFS now handles pretty well after many years of
patching up the logic.

>     At some point you have to be able to commit so the whole mess 
>     does not get undone by a crash, and many completed mini-transactions
>     (file or directory creates) actually cannot be considered complete until
>     their governing parent directories (when creating) or children (when
>     deleting) have been committed.  The problem can become very complex.

Indeed.  I think I have identified a number of techniques for stripping
away much of that complexity.  For example, the governing parent update
can be considered complete as soon as the logical 'link' log commit has
completed.

>     Last question here:  You are forward-logging high level operations.
>     You are also going to have to log meta-data (actual B-Tree manipulation)
>     commits in order to recover from a crash while making B-Tree
>     modifications.  Correct?

Correct!  That is why there are two levels of logging: logical and
physical.  The physical logging level takes care of updating the cache
images of disk blocks to match what the logical logging level expects
before it can apply its changes.  These two logging levels are
interleaved: where a logical change requires splitting a btree block,
the resulting blocks are logged physically, but linked into the parent
btree block using a logical update stored in the commit block of the
physical log transaction.  How cool is that?  The physical log
transactions are not just throwaway things, they are the actual new
data.  Only the commit block is discarded, which I suppose will leave
a lot of one block holes around the volume, but then I do not have to
require that the commit block be immediately adjacent to the body of
the transaction, which will allow me to get good value out of such
holes.  On modern rotating media, strictly linear transfers are not
that much more efficient than several discontiguous transfers that all
land relatively close to each other.

>     So your crash recovery code will have to handle 
>     both meta-data undo and completed and partially completed transactions.

Tux3 does not use undo logging, only redo, so a transaction is complete
as soon as there is enough information on durable media to replay the
redo records.

>     And there will have to be a tie-in between the meta-data commits and
>     the transactions so you know which ones have to be replayed.  That
>     sounds fairly hairy.  Have you figured out how you are going to do that?

Yes.  Each new commit records the sequence number of the oldest commit
that should be replayed.  So the train lays down track in front of
itself and pulls it up again when the caboose passes.  For now, there
is just one linear sequence of commits, though I could see elaborating
that to support efficient clustering.

One messy detail: each forward log transaction is written into free
space wherever physically convenient, but we need to be sure that that
free space is not allocated for data until log rollup has proceeded
past that transaction.  One way to do this is to make a special check
against the list of log transactions in flight at the point where
extent allocation thinks it has discovered a suitable free block, which
is the way ddsnap currently implements the idea.  I am not sure whether
I am going to stick with that method for Tux3 or just update the disk
image of the free tree to include the log transaction blocks and
somehow avoid logging those particular free tree changes to disk.  Hmm,
a choice of two ugly but workable methods, but thankfully neither
affects the disk image.
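For what it is worth, the first option looks roughly like this in
ddsnap terms, with all the names invented for this sketch:

   /* Sketch: refuse to hand out a block that an in-flight forward log
      transaction still occupies, even though the on-disk free tree says
      it is free. */
   static int block_is_usable(struct sb *sb, block_t block)
   {
       for (struct txn *txn = sb->logs_in_flight; txn; txn = txn->next)
           if (txn_owns_block(txn, block))
               return 0;    /* still pinned until rollup passes it */
       return 1;
   }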

> :Once I get down to the leaf level I binary search on logical address in
> :the case of a file index btree or on version in the case of an inode
> :table block, so this cost is still Log(N) with a small k.  For a
> :heavily versioned inode or file region this might sometimes result in
> :an overflow block or two that has to be linearly searched which is not
> :a big problem so long as it is a rare case, which it really ought to be
> :for the kinds of filesystem loads I have seen.  A common example of a
> :worst case is /var/log/messages, where the mtime and size are going to
> :change in pretty much every version, so if you have hourly snapshots
> :and hold them for three months it adds up to about 2200 16 byte inode
> :table attributes to record, about 8 inode table leaf blocks.  I really
> :do not see that as a problem.  If it goes up to 100 leaf blocks to
> :search then that could be a problem.
>     
>     I think you can get away with this as long as you don't have too many
>     snapshots, and even if you do I noticed with HAMMER that only a small
>     percentage of inodes have a large number of versions associated with
>     them from normal production operation.

Yes, that is my expectation.  I think everything will perform fine up
to a few hundred snapshots without special optimization, which is way
beyond current expectations.  Most users have only recently upgraded
to journalling filesystems, let alone had any exposure to snapshots
at all.

>     /var/log/messages 
>     is an excellent example of that.  Log files were affected the most though
>     I also noticed that very large files also wind up with multiple versions
>     of the inode, such as when writing out a terabyte-sized file. 

Right, when writing the file takes longer than the snapshot interval.

>     Even with a direct bypass for data blocks (but not their meta-data,
>     clearly), HAMMER could only cache so much meta-data in memory before
>     it had to finalize the topology and flush the inode out.  A
>     terabyte-sized file wound up with about 1000 copies of the inode
>     prior to pruning (one had to be written out about every gigabyte or so).

For Tux3 with an hourly snapshot schedule that will only be four or
five versions of the [mtime, size] attribute to reflect the roughly
4.7 hour write time, and just one version of each block pointer.

> :The penultimate inode table index block tells me how many blocks a
> :given inode lives in because several blocks will have the same inum
> :key.  So the lookup algorithm for a massively versioned file becomes:
> :1) read the first inode table block holding that inum; 2) read the last
> :block with the same inum.  The latter operation only needs to consult
> :the immediate parent index block, which is locked in the page cache at
> :that point.
> 
>     How are you dealing with expansion of the logical inode block(s) as
>     new versions are added?  I'm assuming you are intending to pack the
>     inodes on the media so e.g. a 128-byte inode would only take up
>     128 bytes of media space in the best case.  Multiple inodes would be
>     laid out next to each other logically (I assume), but since the physical
>     blocks are larger they would also have to be laid out next to each
>     other physically within any given backing block.  Now what happens
>     when one has to be expanded?

The inode table block is split at a boundary between inodes.

An "inode" is broken up into attribute groups arranged so that it makes
sense to update or version all the members of an attribute group
together.  The "standard" attribute group consists of mode, uid, gid,
ctime and version, which adds up to 16 bytes.  The smallest empty inode
(touch foo) is 40 bytes and a file with "foo" in it as immediate data
is 64 bytes.  This has a standard attribute, a link count attribute, a
size/mtime attribute, and an immediate data attribute, all versioned.
An inode with heavily versioned attributes might overflow into the next
btree leaf as described elsewhere.

Free inodes lying between active inodes in the same leaf use two bytes
each, a consequence of the inode leaf directory, which has a table
at the top of the leaf of two byte pointers giving the offset of each
inode in the leaf, stored backwards and growing down towards the inode
attributes.  (This is a slight evolution of the ddsnap leaf format.)
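Sketching that leaf layout in C, with member names mine and only the
two byte offset table and the packed attribute groups taken from the
description above:

   /* Inode table leaf: attribute groups pack upward from the bottom of
      the block; a table of two byte offsets, one per inode, grows down
      from the top.  A free inode between two active ones costs only its
      two byte table entry. */
   struct ileaf {
       u16 count;     /* inodes indexed by the offset table */
       u16 pad;
       char attrs[];  /* attribute groups; the offset table sits at the block top */
   };

   static inline u16 *ileaf_table(struct ileaf *leaf, unsigned blocksize)
   {
       return (u16 *)((char *)leaf + blocksize) - leaf->count;
   }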

>     I'm sure this ties into the forward-log but even with the best algorithms
>     you are going to hit limited cases where you have to expand the inode.
>     Are you just copying the inode block(s) into a new physical allocation
>     then?

Split the inode in memory, allocating a new buffer; forward log the two
new pieces physically out to disk with a logical record in the commit
block recording where the two new pointers are to be inserted without
actually inserting them until a logical rollup episode is triggered.

> :>     I couldn't get away from having a delete_tid (the 'death version
> :>     numbers').  I really tried :-)  There are many cases where one is only
> :>     deleting, rather then overwriting.
> :
> :By far the most common case I would think.  But check out the versioned
> :pointer algorithms.  Surprisingly that just works, which is not exactly
> :obvious:
> :
> :   Versioned pointers: a new method of representing snapshots
> :   http://lwn.net/Articles/288896/
> 
>     Yes, it makes sense.  If the snapshot is explicitly taken then you
>     can store direct references and chain, and you wouldn't need a delete
>     id in that case.  From that article though the chain looks fairly
>     linear.  Historical access could wind up being rather costly.

I can think of three common cases of files that get a lot of historical
modifications:

  * Append log
      - The first iteration in each new version generates a new size
        attribute

  * Truncate/rewrite
     - The first iteration in each new version generates a new size
       attribute and either a new file index root or a new immediate
       data attribute

  * Database
     - The first file change in each new version generates a new
       versioned pointer at the corresponding logical address in the
       file btree and a new size attribute (just because mtime is
       bundled together with the size in one attribute group).

So the only real proliferation is the size/mtime attributes, which gets
back to what I was thinking about providing quick access for the
"current" version (whatever that means).

> :I was originally planning to keep all versions of a truncate/rewrite
> :file in the same file index, but recently I realized that that is dumb
> :because there will never be any file data shared by a successor version
> :in that case.  So the thing to do is just create an entirely new
> :versioned file data attribute for each rewrite, bulking up the inode
> :table entry a little but greatly constraining the search for versions
> :to delete and reducing cache pressure by not loading unrelated version
> :data when traversing a file.
> 
>     When truncating to 0 I would agree with your assessment.  For the 
>     topology you are using you would definitely want to use different
>     file data sets.  You also have to deal with truncations that are not
>     to 0, that might be to the middle of a file.  Certainly not a
>     common case, but still a case that has to be coded for.  If you treat
>     truncation to 0 as a special case you will be adding considerable
>     complexity to the algorithm.

Yes, the proposal is to treat truncation to exactly zero as a special
case.  It is a tiny amount of extra code for what is arguably the most
common file rewrite case.

>     With HAMMER I chose to keep everything in one B-Tree, whether historical
>     or current.  That way one set of algorithms handles both cases and code
>     complexity is greatly reduced.  It isn't optimal... large amounts of
>     built up history still slow things down (though in a bounded fashion).
>     In that regard creating a separate topology for snapshots is a good
>     idea.

Essentially I chose the same strategy, except that I have the file
trees descending from the inode table instead of stretching out to the
side.  I think this gives a more compact tree overall, and since I am
using just one generic set of btree operations to handle these two
variant btrees, additional code complexity is minimal.

> :>     Both numbers are then needed to 
> :>     be able to properly present a historical view of the filesystem.
> :>     For example, when one is deleting a directory entry a snapshot after
> :>     that deletion must show the directory entry gone.
> :
> :...which happens in Tux3 courtesy of the fact that the entire block
> :containing the dirent will have been versioned, with the new version
> :showing the entry gone.  Here is one of two places where I violate my
> :vow to avoid copying an entire block when only one data item in it
> :changes (the other being the atime table).  I rely on two things to
> :make this nice: 1) Most dirent changes will be logically logged and
> :only rolled up into the versioned file blocks when there are enough to
> :be reasonably sure that each changed directory block will be hit
> :numerous times in each rollup episode.  (Storing the directory blocks
> :in dirent-create order as in PHTree makes this very likely for mass
> :deletes.)  2) When we care about this is usually during a mass delete,
> :where most or all dirents in each directory file block are removed
> :before moving on to the next block.
> 
>     This could wind up being a sticky issue for your implementation.
>     I like the concept of using the forward-log but if you actually have
>     to do a version copy of the directory at all you will have to update the
>     link (or some sort of) count for all the related inodes to keep track
>     of inode visibility, and to determine when the inode can be freed and
>     its storage space recovered.

There is a versioned link count attribute in the inode.  When a link
attribute goes to zero for a particular version it is removed from the
inode.  When there are no more link attributes left, that means no
directory block references the inode for any version and the inode may
be reused.
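A sketch of that bookkeeping, with the helpers invented and glossing
over the detail that a version which merely inherits its link count
first gets its own attribute before the decrement:

   /* Sketch: versioned link count handling on unlink. */
   static void drop_link(struct inode_attrs *attrs, unsigned version)
   {
       struct link_attr *link = own_link_attr(attrs, version);  /* create if only inherited */
       if (!--link->count)
           remove_attr(attrs, link);      /* zero for this version: attribute goes away */
       if (!any_link_attrs(attrs))
           mark_inode_reusable(attrs);    /* no directory references it in any version */
   }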

Note: one slightly bizarre property of this scheme is that an inode
can be reused in any version in which its link count is zero, and the
data blocks referenced by different versions in the same file index
can be completely unrelated.  I doubt there is a use for that.

>     Directories in HAMMER are just B-Tree elements.  One element per
>     directory-entry.  There are no directory blocks.   You may want to
>     consider using a similar mechanism.  For one thing, it makes lookups
>     utterly trivial... the file name is hashed and a lookup is performed
>     based on the hash key, then B-Tree elements with the same hash key
>     are iterated until a match is found (usually the first element is the
>     match).  Also, when adding or removing directory entries only the
>     directory inode's mtime field needs to be updated.  Its size does not.

Ext2 dirents are 8 bytes + name + round up to 4 bytes, very tough to
beat that compactness.  We have learned through bitter experience that
anything other than an Ext2/UFS style physically stable block of
dirents makes it difficult to support NFS telldir cookies accurately
because NFS gives us only a 31 bit cookie to work with, and that is
not enough to store a cursor for, say, a hash order directory
traversal.  This is the main reason that I have decided to go back to
basics for the Tux3 directory format, PHTree, and make it physically
stable.

In the PHTree directory format lookups are also trivial: the directory
btree is keyed by a hash of the name, then each dirent block (typically
one) that has a name with that hash is searched linearly.  Dirent block
pointer/hash pairs are at the btree leaves.  A one million entry
directory has about 5,000 dirent blocks referenced by about 1000 btree
leaf blocks, in turn referenced by three btree index blocks (branching
factor of 511 and 75% fullness).  These blocks all tend to end up in
the page cache for the directory file, so searching seldom references
the file index btree.
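In outline, a lookup then reads something like the following sketch,
where the helpers are invented but the shape follows the description
above:

   /* Sketch: PHTree lookup.  The directory btree maps name hash to
      dirent block; each block with a matching hash (typically one) is
      searched linearly for the exact name. */
   static struct dirent *phtree_lookup(struct inode *dir, const char *name, int len)
   {
       u32 hash = dir_name_hash(name, len);
       struct cursor *c = probe_dir_btree(dir, hash);
       for (; cursor_key(c) == hash; cursor_advance(c)) {
           struct page *block = read_dirent_block(dir, cursor_blocknum(c));
           struct dirent *found = scan_dirents(block, name, len);
           if (found)
               return found;
       }
       return NULL;    /* no entry with this name */
   }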

I did some back of the envelope calculations of the number of cache
lines that have to be hit for a lookup by Hammer with its fat btree
keys and lower branching factor, vs Tux3 with its high branching
factor, small keys, small pointers, extra btree level for dirent
offset stability and stupidly wasteful linear dirent search.  I will
not bore the list with the details, but it is obvious that Tux3/PHTree
will pass Hammer in lookup speed for some size of directory because of
the higher btree fanout.  PHTree starts from way behind courtesy of the
32 cache lines that have to be hit on average for the linear search,
amounting to more than half the CPU cost of performing a lookup in a
million element directory, so the crossover point is somewhere up in
the millions of entries.

Thus annoyed, I cast about for a better dirent leaf format than the
traditional Ext2 directory block, and found one after not too much
head scratching.  I reuse the same leaf directory format as for inode
leaf blocks, but this time the internal offsets are sorted by lexical
name order instead of inode number order.  The dirents become Ext2
dirents minus the record number format, and minus the padding to
four byte alignment, which does not do anything useful.  Dirent inum
increases to 6 bytes, balanced by saving 1.5 pad bytes on average,
so the resulting structure stores about 2% fewer dirents than the
Ext2 format against a probable 50% reduction in CPU latency per lookup.
A fine trade indeed.

The new format is physically stable and suitable for binary searching.
When we need to manage space within the leaf for creating or deleting
an entry, a version of the leaf directory ordered by offset can be
created rapidly from the lexically sorted directory, which occupies at
most about six cache lines.  The resulting small structure can be
cached to take advantage of the fact that most mass creates and deletes
keep hitting the same dirent block repeatedly, because Tux3 hands the
dirents to getdents in physical dirent order.
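A rough sketch of a name probe against such a leaf, with the struct,
the table placement and the comparison helper all invented:

   /* Sketch: the per-leaf offset table is ordered by the names it points
      at, so a probe is a binary search over two byte entries with one
      name comparison per step. */
   static int dleaf_probe(struct dleaf *leaf, u16 *table, const char *name, int len)
   {
       int lo = 0, hi = leaf->count;
       while (lo < hi) {
           int mid = (lo + hi) / 2;
           struct dirent *d = dirent_at(leaf, table[mid]);
           if (compare_name(d, name, len) < 0)
               lo = mid + 1;
           else
               hi = mid;
       }
       return lo;    /* match or insertion point; caller verifies the name */
   }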

I concluded that both Hammer and PHTree are exceptionally fast at name
resolution.  When I have the new dirent block format in place I expect
to really hammer Hammer.  But then I do not expect Hammer to stand
still either ;-)

>     Your current directory block method could also represent a problem for
>     your directory inodes... adding and removing directory entries causing
>     size expansion or contraction could require rolling new versions
>     of the directory inode to update the size field.  You can't hold too
>     much in the forward-log without some serious indexing.  Another
>     case to consider along with terabyte-sized files and log files.

A PHTree directory grows like an append-only file, a block at a time,
though every time a new entry is added, mtime has to change, so mtime
changes more often than size.  They are grouped together in the same
attribute, so the distinction is moot.  If the directory is modified at
least once after every snapshot (quite likely) then the snapshot
retention policy governs how many mtime/size attributes are stored in
the inode.

Say the limit is 1,024 snapshots, then 16K worth of those attributes
have to be stored, which once again encourages me to store the "current"
mtime specially where it can be retrieved quickly.  It would also be a
good idea to store the file data attribute in front of the mtime/size
attributes so only stat has to worry about the bulky version info, not
lookup.

Note that there is no such thing as revving an entire inode, only bits
and pieces of it.

For truly massive number of versions, the plan is to split up the
version tree into subtrees, each having a thousand versions or so.  For
each version subtree (hmm, subversion tree...) there is a separate
inode btree carrying only attributes and file data owned by members of
that subtree.  At most log(subtrees) of those tables need to be
accessed by any versioned entity algorithm.  (Note the terminology
shift from versioned pointers to versioned entities to reflect the fact
that the same algorithms work for both pointers and attributes.)  I
think this approach scales roughly to infinity, or at least to the
point where log(versions) goes vertical which is essentially never.  It
requires a bounded amount of implementation effort, which will probably
be deferred for months or years.

Incidentally, PHTree directories are pretty much expand-only, which
is required in order to maintain physical dirent stability.  Compaction
is not hard, but it has to be an admin-triggered operation so it will
not collide with traversing the directory.

> :>     Both numbers are 
> :>     also needed to be able to properly prune out unwanted historical data
> :>     from the filesystem.  HAMMER's pruning algorithm (cleaning out old  
> :>     historical data which is no longer desired) creates holes in the
> :>     sequence so once you start pruning out unwanted historical data
> :>     the delete_tid of a prior record will not match the create_tid of the
> :>     following one (historically speaking).
> :
> :Again, check out the versioned pointer algorithms.  You can tell what
> :can be pruned just by consulting the version tree and the create_tids
> :(birth versions) for a particular logical address.  Maybe the hang is
> :that you do not organize the btrees by logical address (or inum in the
> :case of the inode table tree).  I thought you did but have not read
> :closely enough to be sure.
> 
>     Yes, I see.  Can you explain the versioned pointer algorithm a bit more,
>     it looks almost like a linear chain (say when someone is doing a daily
>     snapshot).  It looks great for optimal access to the HEAD but it doesn't
>     look very optimal if you want to dive into an old snapshot. 

It is fine for diving into old snapshots.  Lookup is always O(elements)
where an element is a versioned pointer or (later) extent, which are
very compact at 8 bytes each.  Because the truncate/rewrite file update
case is so common, you will rarely see more than one version at any
given logical address in a file.  A directory growing very slowly could
end up with about 200 different versions of each of its final few
blocks, one for each added entry.  It takes on the order of a
microsecond to scan through a few hundred versioned pointers to find
the one that points at the version in which we are interested, and that
only happens the first time the directory block is read into the page
cache.  After that, each reference to the block costs a couple of
lookups in a page cache radix tree.

The version lookup algorithm is not completely obvious: we scan through
the list of pointers looking for the version label that is nearest the
version being accessed on the path from that version to the root of the
version tree.  This potentially expensive search is accelerated using a
bitmap table to know when a version is "on the path" and a pre-computed
ord value for each version, to know which of the versions present and
"on the path" for a given logical address is furthest from the root
version.  This notion of furthest from the root on the path from a given
version to the root implements data inheritance, which is what yields
the compact representation of versioned data, and is also what
eliminates the need to explicitly represent the death version.  You have
discovered this too, in a simpler form.  I believe that once you get the
aha you will want to add versions of versions to your model.
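For concreteness, the lookup just described can be sketched as below,
assuming the per-version "on the path" bitmap and precomputed ord table
are already set up; the data layout is invented:

   /* Sketch: among the elements stored at one logical address, choose
      the one whose version lies on the path from the accessed version
      to the root and is furthest from the root (largest ord).  No match
      means nothing is inherited at this address for that version. */
   static struct element *version_lookup(struct element *elem, int count,
                                         const u8 *on_path, const u16 *ord)
   {
       struct element *best = NULL;
       for (int i = 0; i < count; i++) {
           unsigned v = elem[i].version;
           if (on_path[v] && (!best || ord[v] > ord[best->version]))
               best = &elem[i];
       }
       return best;
   }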

>     For informational purposes: HAMMER has one B-Tree, organized using
>     a strict key comparison.  The key is made up of several fields which
>     are compared in priority order:
> 
> 	localization	- used to localize certain data types together and
> 			  to separate pseudo filesystems created within
> 			  the filesystem.
> 	obj_id		- the object id the record is associated with.
> 			  Basically the inode number the record is 
> 			  associated with.
> 
> 	rec_type	- the type of record, e.g. INODE, DATA, SYMLINK-DATA,
> 			  DIRECTORY-ENTRY, etc.
> 
> 	key		- e.g. file offset
> 
> 	create_tid	- the creation transaction id
> 
>     Inodes are grouped together using the localization field so if you
>     have a million inodes and are just stat()ing files, the stat
>     information is localized relative to other inodes and doesn't have to
>     skip file contents or data, resulting in highly localized accesses
>     on the storage media.
> 
>     Beyond that the B-Tree is organized by inode number and file offset.
>     In the case of a directory inode, the 'offset' is the hash key, so
>     directory entries are organized by hash key (part of the hash key is
>     an iterator to deal with namespace collisions).
> 
>     The structure seems to work well for both large and small files, for
>     ls -lR (stat()ing everything in sight) style traversals as well as 
>     tar-like traversals where the file contents for each file is read.

That is really sweet, provided that you are ok with the big keys.  It
was smart to go with that regular structure, giving you tons of control
over everything and getting the system up and running early, thus beating the
competition.  I think you can iterate from there to compress things and
be even faster.  Though if I were to presume to choose your priorities
for you, it would be to make the reblocking optional.

>     The create_tid is the creation transaction id.  Historical accesses are
>     always 'ASOF' a particular TID, and will access the highest create_tid
>     that is still <= the ASOF TID.  The 'current' version of the filesystem
>     uses an asof TID of 0xFFFFFFFFFFFFFFFF and hence accesses the highest
>     create_tid.

So you have to search through a range of TIDs to find the operative one
for a particular version, just as I have to do with versioned pointers.
Now just throw in a bitmap like I use to take the version inheritance
into account and you have versioned pointers, which means snapshots of
snapshots and other nice things.  You're welcome :-)

I think that versioned pointer lookup can be implemented in O(log(E))
where E is the number of versioned pointers (entities) at the same
logical address, which would put it on a par with btree indexing, but
more compact.  I have sketched out an algorithm for this but I have not
yet tried to implement it.  

>     There is also a delete_tid which is used to filter out elements that
>     were deleted prior to the ASOF TID.  There is currently an issue with
>     HAMMER where the fact that the delete_tid is *not* part of the B-Tree
>     compare can lead to iterations for strictly deleted elements, versus
>     replaced elements which will have a new element with a higher create_tid
>     that the B-Tree can seek to directly.  In fact, at one point I tried
>     indexing on delete_tid instead of create_tid, but created one hell of a
>     mess in that I had to physically *move* B-Tree elements being deleted
>     instead of simply marking them as deleted.

This is where I do not quite follow you.  File contents are never
really deleted in subsequent versions, they are just replaced.  To
truncate a file, just add or update the size attribute for the target
version and recover any data blocks past the truncate point that
belong only to the target version.

Extended attributes and dirents can be truly deleted.  To delete an
extended attribute that is shared by some other version I will insert
a new versioned attribute that says "this attribute is not here" for
the current version, which will be inherited by its child versions.
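For illustration, such a marker could be as simple as a versioned
element with zero length, an encoding I am inventing here just to show
the idea:

   /* Sketch: a whiteout is just another versioned element, so the usual
      "furthest from root on the path" lookup makes the attribute read as
      absent in the marked version and its children, while other versions
      keep inheriting the shared value. */
   struct xattr_elem {
       u16 version;
       u16 attr_id;
       u16 len;       /* 0 means "this attribute is not here" */
       char data[];
   };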

Dirent deletion is handled by updating the dirent block, possibly
versioning the entire block.  Should the entire block ever be deleted
by a directory repacking operation (never actually happens on the vast
majority of Linux systems) that will be handled as a file truncate.

I see that in Hammer, the name is deleted from the btree instead, so
fair enough, that is actual deletion of an object.  But it could be
handled alternatively by adding a "this object is not here" object,
which might be enough to get rid of delete_tid entirely and just have
create_tid.

> :Fair enough.  I have an entirely different approach to what you call
> :mirroring and what I call delta replication.  (I reserve the term
> :mirroring to mean mindless duplication of physical media writes.)  This
> :method proved out well in ddsnap:
> :
> :   http://phunq.net/ddtree?p=zumastor/.git;a=tree;h=fc5cb496fff10a2b03034fcf95122f5828149257;hb=fc5cb496fff10a2b03034fcf95122f5828149257
> :
> :(Sorry about the massive URL, you can blame Linus for that;-)
> :
> :What I do in ddsnap is compute all the blocks that differ between two
> :versions and apply those to a remote volume already holding the first
> :of the two versions, yielding a replica of the second version that is
> :logically but not physically identical.  The same idea works for a
> :versioned filesystem: compute all the leaf data that differs between
> :two versions, per inum, and apply the resulting delta to the
> :corresponding inums in the remote replica.  The main difference vs a
> :ddsnap volume delta is that not all of the changes are physical blocks,
> :there are also changed inode attributes, so the delta stream format
> :has to be elaborated accordingly.
> 
>     Are you scanning the entire B-Tree to locate the differences?  It
>     sounds you would have to as a fall-back, but that you could use the
>     forward-log to quickly locate differences if the first version is
>     fairly recent.

I plan to scan the whole btree initially, which is what we do in ddsnap
and it works fine.  But by looking at the mtime attributes in the inode
I can see in which versions the file data was changed and thus not have
to scan most files.  Eventually, building in some accelerator to be able
to skip most inode leaves as well would be a nice thing to do.  I will
think about that in the background.

>     HAMMER's mirroring basically works like this:  The master has synchronized
>     up to transaction id C, the slave has synchronized up to transaction id A.
>     The mirroring code does an optimized scan of the B-Tree to supply all
>     B-Tree elements that have been modified between transaction A and C.
>     Any number of mirroring targets can be synchronized to varying degrees,
>     and can be out of date by varying amounts (even years, frankly).
> 
>     I decided to use the B-Tree to optimize the scan.  The B-Tree is
>     scanned and any element with either a creation or deletion transaction
>     id >= the slave's last synchronization point is then serialized and
>     piped to the slave.

That is nearly identical to the ddsnap replication algorithm, and we
also send the list over a pipe.  Ddsnap does not deal with attributes,
only volume blocks, otherwise I think we arrived at the same thing.

>     To avoid having to scan the entire B-Tree I perform an optimization
>     whereby the highest transaction id laid down at a leaf is propagated
>     up the B-Tree all the way to the root.  This also occurs if a B-Tree
>     node is physically deleted (due to pruning), even if no elements are
>     actually present at the leaf within the transaction range.
>     Thus the mirroring scan is able to skip any internal node (and its
>     entire sub-tree) which has not been modified after the synchronization
>     point, and is able to identify any leaf for which real, physical deletions
>     have occured (occurred) (in addition to logical deletions which simply set the
>     delete_tid field in the B-Tree element) and pass along the key range
>     and any remaining elements in that leaf for the target to do a
>     comparative scan with.

Hmm.  I have quite a few bits available in the inode table btree node
pointers, which are currently only going to be free inode density
hints.  Something to think about.

>     This allows incremental mirroring, multiple slaves, and also allows
>     a mirror to go offline for months and then pop back online again
>     and optimally pick up where it left off.  The incremental mirroring
>     is important, the last thing I want to do is have to scan 2 billion
>     B-Tree elements to do an incremental mirroring batch.

Very, very nice.

>     The advantage is all the cool features I got by doing things that way.
>     The disadvantage is that the highest transaction id must be propagated
>     up the tree (though it isn't quite that bad because in HAMMER an entire
>     flush group uses the same transaction id, so we aren't constantly
>     repropagating new transaction id's up the same B-Tree nodes when
>     flushing a particular flush group).

This is a job for forward logging :-)

>     You may want to consider something similar.  I think using the
>     forward-log to optimize incremental mirroring operations is also fine
>     as long as you are willing to take the 'hit' of having to scan (though
>     not have to transfer) the entire B-Tree if the mirror is too far
>     out of date.

There is a slight skew in your perception of the function of forward
logging here, I think.  Forward log transactions are not long-lived
disk objects, they just serve to batch together lots of little updates
into a few full block "physical" updates that can be written to the
disk in seek-optimized order.  It still works well for this application.
In short, we propagate dirty bits up the btree while updating the btree
disk blocks only rarely.  Instead, the node dirty bits are tucked into
the commit block of the write transaction or namespace edit that caused
the change, and are in turn propagated into the commit block of an
upcoming physical rollup, eventually working their way up the on-disk
btree image.  But they are present in the cached btree image as soon as
they are set, which is the structure consulted by the replication
process.

> :I intend to log insertions and deletions logically, which keeps each
> :down to a few bytes until a btree rollup episode comes along to perform
> :updating of the btree nodes in bulk.  I am pretty sure this will work
> :for you as well, and you might want to check out that forward logging
> :trick.
> 
>     Yes.  The reason I don't is because while it is really easy to lay down
>     a forward-log, integrating it into lookup operations (if you don't
>     keep all the references cached in-memory) is really complex code.
>     I mean, you can always scan it backwards linearly and that certainly
>     is easy to do, but the filesystem's performance would be terrible.
>     So you have to index the log somehow to allow lookups on it in
>     reverse to occur reasonably optimally.  Have you figured out how you
>     are going to do that?

I hope that question is cleared up now.

>     With HAMMER I have an in-memory cache and the on-disk B-Tree.  Just
>     writing the merged lookup code (merging the in-memory cache with the
>     on-disk B-Tree for the purposes of doing a lookup) was fairly complex.
>     I would hate to have to do a three-way merge.... in-memory cache, on-disk
>     log, AND on-disk B-Tree.  Yowzer!

Tux3 also has an in-memory cache and an on-disk btree, and in addition,
the in-memory cache is identical to some future version of the on-disk
btree, which ought to be good for losing a lot of corner cases.

> :That reminds me, I was concerned about the idea of UNDO records vs
> :REDO.  I hope I have this right: you delay acknowledging any write
> :transaction to the submitter until log commit has progressed beyond the
> :associated UNDO records.  Otherwise, if you acknowledge, crash, and
> :prune all UNDO changes, then you would end up with applications
> :believing that they had got things onto stable storage and be wrong
> :about that.  I have no doubt you did the right thing there, but it is
> :not obvious from your design documentation.
> 
>     The way it works is that HAMMER's frontend is almost entirely
>     disconnected from the backend.  All meta-data operations are cached
>     in-memory.  create, delete, append, truncate, rename, write...
>     you name it.  Nothing the frontend does modifies any meta-data
>     or meta-data buffers.
> 
>     The only synchronization point is fsync(), the filesystem syncer, and
>     of course if too much in-memory cache is built up.  To improve
>     performance, raw data blocks are not included... space for raw data
>     writes is reserved by the frontend (without modifying the storage
>     allocation layer) and those data buffers are written to disk by the
>     frontend directly, just without any meta-data on-disk to reference
>     them so who cares if you crash then!  It would be as if those blocks
>     were never allocated in the first place.

Things are a little different in Linux.  The VFS takes care of most of
what you describe as your filesystem front end, and in fact, the VFS is
capable of running as a filesystem entirely on its own, just by
supplying a few stub methods to bypass the backing store (see ramfs).
I think this is pretty much what you call your front end, though I
probably missed some major functionality you provide there that the
VFS does not directly provide.

A filesystem backend on Linux would be the thing that implements the
prepare_write/commit_write calls that come in from the one-size-fits-all
generic_file_buffered_write function that most filesystems use to
implement write(2).  These functions generally just call back into the
block library, passing a filesystem-specific get_block callback which
the library uses to assign physical disk sector addresses to buffers
attached to the page.  Ext3 opens a journal transaction in prepare_write
and finishes it in commit_write.  Not much of a transaction if you ask
me.  Tux3 wants to do things in a more direct way and in bigger chunks.

Actually, all Tux3 needs to do with the prepare_write/commit_write
calls that generic_file_buffered_write throws at it is remember which
inodes were involved, because the inode keeps a list of dirty pages,
the same ones that prepare_write/commit_write is trying to tell us
about.  When there are enough of them, on one of the commit_write calls
Tux3 can go delving into its file index btree to find or create a place
to store some.  This will generate changes to the btree that must be
logged logically in the commit block of the write transaction that is
being set up, which now gets its sequence number so that the logical
changes can be applied in the same order if replay is needed.
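Against the 2008-era address_space operations that might look roughly
like the sketch below; the tux3_* helpers and the dirty threshold are
my own placeholders, not actual Tux3 code:

   /* Sketch: let the generic write path fill page cache pages, note the
      dirty inode, and only delve into the file btree to assign space
      once enough dirty pages have piled up. */
   static int tux3_commit_write(struct file *file, struct page *page,
                                unsigned from, unsigned to)
   {
       struct inode *inode = page->mapping->host;

       set_page_dirty(page);
       tux3_note_dirty_inode(inode);        /* remember it for the next transaction */
       if (tux3_enough_dirty(inode->i_sb))
           tux3_begin_write_transaction(inode->i_sb);  /* allocate and log logically */
       return 0;
   }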

At the point Tux3 is asked to generate some new logical changes, it may
decide that it has already sent enough logical changes onto disk and
now would be a good time to roll some of them up.  The rolled up block
images are sitting in the buffer cache where Tux3 has prudently locked
them by incrementing the page use count.  Tux3 now sets up some physical
log transactions to write out the images.  Before doing so, it copies
each image to a new location in the buffer cache and modifies pointers
in the parent images to point at the new images (possibly having to read
in the parents first).  This generates new logical changes (the changes
to the parent nodes and changes to the free tree to allocate the new
buffer cache positions) which will be logically logged in the commit
blocks of the physical transaction about to be committed.

When the rollup transactions have been sent to disk, Tux3 can continue
allocating disk space etc by updating the new images of the disk blocks
and setting up new transactions, but no transactions that depend on the
rolled up state of the disk blocks can be allowed to complete until the
rollup transactions have completed, which Tux3 learns of by counting the
interrupt mode ->endio calls that come back from the bio transfers.
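Condensed into pseudo-C, that rollup path might look like the sketch
below, with every helper name invented:

   /* Sketch: copy each dirty btree block to a freshly allocated
      position, record the parent pointer and free tree edits as logical
      records riding in the commit block of the physical transaction,
      then send the new images to disk. */
   static void rollup(struct sb *sb, struct buffer *dirty[], int count)
   {
       struct txn *txn = begin_physical_txn(sb);
       for (int i = 0; i < count; i++) {
           block_t newpos = allocate_block(sb);      /* logical: free tree change */
           struct buffer *copy = copy_to_position(dirty[i], newpos);
           log_parent_pointer(txn, dirty[i], copy);  /* logical: reparent to the new image */
           txn_add_body(txn, copy);
       }
       submit_txn(txn);   /* completion arrives via the interrupt mode ->endio calls */
   }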

Note: no other filesystem on Linux currently works this way.  They all
pretty much rely on the block IO library which implements the fairly
goofy one-block-at-a-time transfer regimen.  The idea of setting up bio
transfers directly from the filesystem is unfortunately something new.
We should have been doing this for a long time, because the interface
is quite elegant, and actually, that is just what the block IO library
is doing many levels down below.  It is time to strip away some levels
and just do what needs to be done in the way it ought to be done.

Notice how recursive the whole idea is.  Logical transactions cause
physical transactions which cause logical transactions etc.  I am
taking it on faith that the recursion terminates properly and does not
get into cycles.  Over time I will give concrete reasons why I think it
all just works.

Digression done, back to the Hammer algorithms...

>     When the backend decides to flush the cached meta-data ops it breaks
>     the meta-data ops into flush-groups whose dirty meta-data fits in the 
>     system's buffer cache.  The meta-data ops are executed, building the
>     UNDO, updating the B-Tree, allocating or finalizing the storage, and
>     modifying the meta-data buffers in the buffer cache.  BUT the dirty
>     meta-data buffers are locked into memory and NOT yet flushed to
>     the media.  The UNDO *is* flushed to the media, so the flush groups
>     can build a lot of UNDO up and flush it as they go if necessary.

Hmm, and I swear I did not write the above before reading this paragraph.
Many identical ideas there.

>     When the flush group has completed any remaining UNDO is flushed to the
>     media, we wait for I/O to complete, the volume header's UNDO FIFO index
>     is updated and written out, we wait for THAT I/O to complete, *then*
>     the dirty meta-data buffers are flushed to the media.  The flushes at
>     the end are done asynchronously (unless fsync()ing) and can overlap with
>     the flushes done at the beginning of the next flush group.  So there
>     are exactly two physical synchronization points for each flush.
> 
>     If a crash occurs at any point, upon remounting after a reboot
>     HAMMER only needs to run the UNDOs to undo any partially committed
>     meta-data.
> 
>     That's it.  It is fairly straight forward.
>
>     For your forward-log approach you would have to log the operations 
>     as they occur, which is fine since that can be cached in-memory. 
>     However, you will still need to synchronize the log entries with
>     a volume header update to update the log's index, so the recovery
>     code knows how far the log extends.  Plus you would also have to log
>     UNDO records when making actual changes to the permanent media
>     structures (your B-Trees), which is independent of the forward-log
>     entries you made representing high level filesystem operations, and
>     would also have to lock the related meta-data in memory until the
>     related log entries can be synchronized.  Then you would be able
>     to flush the meta-data buffers.

That is pretty close to the mark, except there are no UNDO records in
Tux3, only logical and physical REDO records.

The forward log is supposed to try to avoid syncing the beginning of
the chain to a known location as much as possible, which it does by
having at least one transaction in its pipeline that has not been
committed to disk yet and is part of an existing forward chain.  The
chain is extended by allocating a location for the commit block of a
new transaction being formed, storing that location in the commit
block of the existing transaction, then committing the existing
transaction to disk.
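A sketch of that handoff, with invented helpers:

   /* Sketch: the commit block going to disk names the location reserved
      for the commit block of the transaction still being formed, so the
      chain can be followed on replay without syncing a header. */
   static void commit_and_extend(struct sb *sb, struct txn *going, struct txn *forming)
   {
       forming->commit_pos = allocate_block(sb);
       going->commit->next_commit = forming->commit_pos;
       going->commit->sequence = sb->commit_seq++;
       going->commit->oldest_live = oldest_unrolled_commit(sb);  /* where replay must start */
       write_commit_block(going, going->commit_pos);
   }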

>     The forward-log approach is definitely more fine-grained, particularly
>     for fsync() operations... those would go much faster using a forward
>     log then the mechanism I use because only the forward-log would have
>     to be synchronized (not the meta-data changes) to 'commit' the work.
>     I like that, it would be a definite advantage for database operations.

I think you will like it even more when you realize that updating the
filesystem header is not required for most transactions.

> :>     HAMMER's B-Tree elements are probably huge compared to Tux3, and that's
> :>     another limiting factor for the fan-out I can use.  My elements
> :>     are 64 bytes each.
> :
> :Yes, I mostly have 16 byte elements and am working on getting most of
> :them down to 12 or 8.
> 
>     I don't know how you can make them that small.  I spent months just
>     getting my elements down to 64 bytes.  The data reference alone for
>     data blocks is 12 bytes (64 bit media storage offset and 32 bit length).

A versioned extent:

   struct extent { unsigned version:10, count:6, block:24; };

Directory format to index the extent within a leaf:

   struct entry { unsigned loglo:24, offset:8; };
   struct group { unsigned loghi:24, count:8; };

Leaf layout (abridged):

   struct leaf { unsigned groups; struct group[]; struct entry[]; struct elements[] };

Extents are divided into groups, all of which have the same upper 24 bits
of logical address.  The 8 bit offset gives the position of the first
extent with that logical offset, measured in 8 byte units from the base
of the extent group.  The difference between two successive offset
bytes is the number of extents with the same logical offset.  The first
offset byte gives the total size of the group, which would otherwise
always be zero and that would be a waste.  The base of the ith group is
determined by totalling the zeroth offset bytes.

The 48 bit logical address is split between the 32 bit dictionary and
the 32 bit group entry.  This format gives the ability to binsearch on
both entries and groups.  The number of groups per leaf should be just 
one or two at deep leaves where considerable splitting of the logical
address has already occurred, so the overhead of the dictionary is just
32 bits per extent roughly, giving a total overhead of 12 bytes per
extent.  Shallow btrees will approach 16 bytes per extent, but the tree
is shallow anyway so it does not matter.  The shallowest tree consists
of just a few extents recorded as an attribute in the inode btree.  I
introduce the rule that for such an extent attribute, the extents
belonging to each version are presumed to be logically contiguous, so
no directory is needed and the extent overhead drops to 8 bytes in this
common case.  Actually, most files should just consist of a single
extent.  That could be a new attribute form, a size/extent attribute,
with the mtime assumed to be the same as the ctime, which I think would
cover a majority of files on the system I am using right now.
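Putting the pieces together, a lookup in such a leaf might go roughly
as follows; the offset arithmetic is abstracted behind invented helpers
because that is exactly the fiddly part:

   /* Sketch: binary search the groups on the upper 24 logical address
      bits, then that group's entries on the lower 24 bits; successive
      offset bytes bound the run of extents sharing a logical address. */
   static struct extent *extent_lookup(struct leaf *leaf, u64 logical, int *count)
   {
       u32 hi = logical >> 24, lo = logical & 0xffffff;
       int g = binsearch_groups(leaf, hi);
       if (g < 0)
           return NULL;
       int e = binsearch_entries(leaf, g, lo);
       if (e < 0)
           return NULL;
       struct extent *base = group_base(leaf, g);  /* total of the zeroth offset bytes */
       *count = entry_run(leaf, g, e);             /* difference of successive offsets */
       return base + entry_offset(leaf, g, e);
   }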

See, I am really obsessive when it comes to saving bits.  None of these
compression hacks costs a lot of code or has a big chance of hiding a
bug, because the extra boundary conditions are strictly local.

>     I think the cpu overhead is going to be a bit worse then you are
>     contemplating, but your access footprint (with regards to system memory
>     use caching the meta-data) for non-historical accesses will be better.

We shall see.  Factors working in my favor are:

  * Snapshots are explicit, so you have to create lots of them plus
    heavily update the data in order to make the versioned data bushy.

  * The linux page cache not only caches data but provides an efficient
    access path to logical blocks that avoids having to consult the
    filesystem metadata repeatedly under heavy traffic.

  * Compact metadata entities including versioned attributes and extents
    (pointers for now)

  * The kind of load that people put on a Netapp is a trivially small
    number of versions for Tux3.

>     Are you going to use a B-Tree for the per-file layer or a blockmap?

Btree, which I call the file index btree.

>     And how are you going to optimize the storage for small files?

Immediate data is allowed in an inode as a versioned attribute.

>     Are you just going to leave them in the log and not push a B-Tree
>     for them? 

It did not occur to me to leave them in the logical log, but yes, that
is a nice optimization.  They will only stay there for a while, until a
rollup is triggered, moving them into the inode table proper.  It gets
a little tricky to figure out whether the amortization due to the
logical logging step is worth the extra write.  There is a crossover
somewhere, maybe around a hundred bytes, which makes it ideal for
symlinks and ACLs.
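
In code it amounts to nothing more than a size test along these lines;
the names are made up and the threshold is the rough guess above, to
be settled by measurement:

   /* Sketch only: decide whether a small file's data rides in the
      logical log until the next rollup or goes straight into the
      inode table. */
   enum { LOG_DATA_CROSSOVER = 100 };

   static int log_file_data(unsigned size)
   {
           return size <= LOG_DATA_CROSSOVER;
   }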

> :I might eventually add some explicit cursor caching, but various
> :artists over the years have noticed that it does not make as much
> :difference as you might think.
> 
>      For uncacheable data sets the cpu overhead is almost irrelevant.

For rotating media, true.  There is also flash, and Ramback...

>      But for cached data sets, watch out!  The cpu overhead of your B-Tree
>      lookups is going to be 50% of your cpu time, with the other 50% being
>      the memory copy or memory mapping operation and buffer cache operations.
>      It is really horrendous.  When someone is read()ing or write()ing a
>      large file the last thing you want to do is traverse the same 4 nodes
>      and do four binary searches in each of those nodes for every read().

I found that out from my HTree experience: pretty much the entire cost
of the bad old linear dirops was CPU, and I was able to show a
ridiculous speedup by cutting the cost of a run of ops down to
O(ops*log511(ops)) from O(ops^2).

>      For large fully cached data sets not caching B-Tree cursors will
>      strip away 20-30% of your performance once your B-Tree depth exceeds
>      4 or 5.  Also, the fan-out does not necessarily help there because
>      the search within the B-Tree node costs almost as much as moving
>      between B-Tree nodes.
>  
>      I found this out when I started comparing HAMMER performance with
>      UFS.  For the fully cached case UFS was 30% faster until I started
>      caching B-Tree cursors.  It was definitely noticable once my B-Tree
>      grew past a million elements or so.  It disappeared completely when
>      I started caching cursors into the B-Tree.

OK, you convinced me.  I already have btree cursors (see ddsnap path[])
that are so far only used for per-operation btree access and editing.
So these will eventually become cache objects.
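
For reference, the shape I have in mind is roughly the following; the
names are made up and this is not the ddsnap code, just the path[]
idea dressed up as a cache object:

   typedef unsigned long long u64;
   struct buffer;  /* whatever handle the block cache hands back */

   struct cursor_level {
           struct buffer *nodebuf;   /* btree node held by the cursor */
           int slot;                 /* current position in that node */
   };

   struct cached_cursor {
           u64 key;                        /* key the path points at */
           unsigned depth;                 /* levels filled, root to leaf */
           struct cursor_level path[8];    /* the ddsnap-style path[] */
   };

The struct is the easy part; the real work is the invalidation rule,
since a cached cursor has to be dropped or repaired whenever a node on
its path gets split or merged.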
 
> :>     If I were to do radix compression I would also want to go with a
> :>     fully dynamic element size and fully dynamic fan-out in order to
> :>     best-compress each B-Tree node.  Definitely food for thought.
> :
> :Indeed.  But Linux is braindamaged about large block size, so there is
> :very strong motivation to stay within physical page size for the
> :immediate future.  Perhaps if I get around to a certain hack that has
> :been perenially delayed, that situation will improve:
> :
> :   "Variable sized page objects"
> :   http://lwn.net/Articles/37795/
> 
>     I think it will be an issue for people trying to port HAMMER.  I'm trying
>     to think of ways to deal with it.  Anyone doing an initial port can 
>     just drop all the blocks down to 16K, removing the problem but
>     increasing the overhead when working with large files.

Do the variable sized page hack :-]

Or alternatively look at the XFS buffer cache shim layer, which
Christoph manhandled into the kernel after years of XFS not being accepted
into mainline because of it.

> :>     I'd love to do something like that.  I think radix compression would
> :>     remove much of the topological bloat the B-Tree creates verses using
> :>     blockmaps, generally speaking.
> :
> :Topological bloat?
> 
>     Bytes per record.  e.g. the cost of creating a small file in HAMMER
>     is 3 B-Tree records (directory entry + inode record + one data record),
>     plus the inode data, plus the file data.  For HAMMER that is 64*3 +
>     128 + 112 (say the file is 100 bytes long, round up to a 16 byte
>     boundary)... so that is 432 bytes.

Let me see, for Tux3 that is 64 bytes for the inode part including a
tiny amount of data, plus 32 bytes or so for the dirent part, say 100
bytes.  This is my reward for being an obsessive bit miser.

>     The bigger cost is when creating and managing a large file.  A 1 gigabyte
>     file in HAMMER requires 1G/65536 = 16384 B-Tree elements, which comes
>     to 1 megabyte of meta-data.  If I were to radix-compress those 
>     elements the meta-data overhead would probably be cut to 300KB,
>     possibly even less.
>
>     Where this matters is that it directly effects the meta-data footprint 
>     in the system caches which in turn directly effects the filesystem's
>     ability to cache information without having to go to disk.  It can
>     be a big deal. 

With pointers it is about a megabyte per gigabyte for Tux3 too (and
Ext2 and Ext3), so going to extents is not optional.  I plan a limit of
64 blocks per extent, cutting the metadata overhead down to 16K per
gigabyte, which is hard to complain about.  Larger extents in the file
index would not buy much and would mess up the nice, 8 byte
one-size-fits-all versioned extent design.
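
The arithmetic, for anyone who wants to check it.  This assumes 4K
blocks and, to keep it simple, 4 bytes of mapping metadata per pointer
or per extent; a bigger per-extent record scales the second number
accordingly:

   #include <stdio.h>

   int main(void)
   {
           unsigned blocks = (1u << 30) / 4096;   /* blocks per gigabyte */
           printf("pointers:         %u KB/GB\n", blocks * 4 / 1024);
           printf("64 block extents: %u KB/GB\n", blocks / 64 * 4 / 1024);
           return 0;
   }

which prints 1024 KB/GB for plain pointers and 16 KB/GB for 64 block
extents.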

>     Adding a forward log to HAMMER is possible, I might do it just for
>     quick write()/fsync() style operations.  I am still very wary of the
>     added code complexity.

Waiting for me to prove the idea seems reasonable.

> :He did remind me (via time travel from year 2000) of some details I
> :should write into the design explicitly, for example, logging orphan
> :inodes that are unlinked while open, so they can be deleted on replay
> :after a crash.  Another nice application of forward logging, which
> :avoids the seek-happy linked list through the inode table that Ext3
> :does.
> 
>     Orphan inodes in HAMMER will always be committed to disk with a
>     0 link count.  The pruner deals with them after a crash.  Orphan
>     inodes can also be commited due to directory entry dependancies.

That is nice, but since I want to run without a pruner I had better
stick with the logical logging plan.  The link count of the inode will
also be committed as zero, or rather, the link attribute will be
entirely missing at that point, which allows for a cross check.
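
The orphan record itself can be tiny, something along these lines (a
hypothetical layout, just to show how little needs to be logged):

   typedef unsigned long long u64;

   /* Logical log record for an inode unlinked while still open:
      replay checks each inode named by such a record and deletes it
      if it is still orphaned. */
   struct log_orphan {
           unsigned char type;   /* record type: orphan inode */
           unsigned char len;    /* record length, for walking the log */
           u64 inum;             /* the orphaned inode number */
   };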

>     The budding operations is very interesting to me...
>     well, the ability to fork the filesystem and effectively write to
>     a snapshot.  I'd love to be able to do that, it is a far superior
>     mechanism to taking a snapshot and then performing a rollback later.

I was thinking more about the operation of taking somebody's home
directory and turning it into a separate volume, for whatever reason.
Tux3 snapshots (versions) have that write/rollback ability, and also
have it for snapshots of snapshots, just in case that was not clear.

>     Hmm.  I could definitely manage more then one B-Tree if I wanted to.
>     That might be the ticket for HAMMER... use the existing snapshot
>     mechanic as backing store and a separate B-Tree to hold all changes
>     made to the snapshot, then do a merged lookup between the new B-Tree
>     and the old B-Tree.  That would indeed work.

Yes, it sounds nice.

>     I can tell you've been thinking about Tux for a long time.  If I
>     had one worry about your proposed implementation it would be in the
>     area of algorithmic complexity.  You have to deal with the in-memory 
>     cache, the log, the B-Tree, plus secondary indexing for snapshotted
>     elements and a ton of special cases all over the place.  Your general
>     lookup code is going to be very, very complex.

The lookup path is actually quite simple; I am far enough in to know
that.  The versioning algorithms are the most complex piece from an
algorithmic point of view, and as you can see from the sample
implementation, they have already been boiled down to nice readable
code.

The details of logging are also less complex than they appear at first
blush, due to a number of simplifications I have discovered.  You
should see the mess that the Ext3 guys had to deal with as a result of
journalling limitations.

I do not plan to provide secondary indexing for any versioned elements,
except as described in the btree leaf formats, which have obligingly
arranged themselves into a nice pattern: fixed sized elements at the
bottom of the leaf grouped by a little directory vector growing down
from the top of the leaf.  I am pleased with the way those details get
more regular as implementation proceeds, opposite to the usual trend.

>     My original design for HAMMER was a lot more complex (if you can
>     believe it!) then the end result.

My design for Tux2 was gross in retrospect.

>     A good chunk of what I had to do 
>     going from concept to reality was deflate a lot of that complexity.
>     When it got down to brass tacks I couldn't stick with using the 
>     delete_tid as a B-Tree search key field, I had to use create_tid.  I
>     couldn't use my fine-grained storage model concept because the
>     performance was terrible (too many random writes interfering with
>     streaming I/O).  I couldn't use a hybrid B-Tree, where a B-Tree element
>     could hold the base of an entirely new B-Tree (it complicated the pruning
>     and reblocking code so much that I just gave up trying to do it in
>     frustration).  I couldn't implement file extents other then 16K and 64K
>     blocks (it really complicated historical lookups and the buffer cache
>     couldn't handle it) <--- that one really annoyed me.  I had overly
>     optimized the storage model to try to get block pointers down to 32 bits
>     by localizing B-Tree elements in the same 'super clusters' as the data
>     they referenced.  It was a disaster and I ripped it out.  The list goes
>     on :-)

I do not rely on reblocking, and pruning of discarded versions is done
by the same somewhat complex but already-in-production btree traversal
for both the inode table and file btree levels, so I thankfully did not
run into the reasons you discarded the hybrid btree.

File extents are indeed a nasty issue because of the complexity they
impose on the versioned entity algorithms.  But this code will be
implemented and debugged in isolation along the same lines as the
versioned pointer fuzz tester.  I am actually thinking of a list member
who I know is entirely capable of doing that elegantly, and who I hope
will raise his hand soon; otherwise I will have to spend a significant
number of weeks on it.  For the time being I have chosen the path of
wisdom and will stick with versioned pointers until the filesystem is
basically up, versioning, atomically committing, and not deadlocking.

>     I do wish we had something like LVM on BSD systems.  You guys are very
>     lucky in that regard.  LVM is really nice.

But I thought you BSD guys have GEOM, which I think is really nice.
Linux LVM, on the other hand, has so many problems that I have started
work on a replacement, LVM3.

>     BTW it took all day to write this!

It took two days to write this ;-)

I hope the fanout effect does not mean the next post will take four days.

Regards,

Daniel
