Tux3 Report: Faster than tmpfs, what?
Andreas Dilger
adilger at dilger.ca
Mon May 13 17:08:16 PDT 2013
On 2013-05-13, at 17:22, Daniel Phillips <daniel.raymond.phillips at gmail.com> wrote:
> Hi Ted,
> You said:
>> ...any advantage of decoupling the front/back end
>> is nullified, since fsync(2) requires a temporal coupling
>
> After pondering it for a while, I realized that is not
> completely accurate. The reduced delete latency will
> allow the dbench process to proceed to the fsync point
> faster; then, if our fsync is reasonably efficient (not the
> case today, but planned) we may still see an overall
> speedup.
Ages ago, before we implemented extents for ext3, we had an asynchronous unlink/truncate-to-zero thread that was handling the busywork of traversing the indirect tree and updating all of the bitmaps.
This was transactionally safe, since the blocks were moved over to a temporary inode in the main process' transaction, and the unlinked inode was on the orphan list.
With extent-mapped inodes the latency of unlink/truncate-to-zero was greatly reduced, so we dropped that code. If anyone is interested in reviving this for some reason, the newest version I could find was for 2.4.24:
http://git.whamcloud.com/?p=fs/lustre-release.git;a=blob_plain;f=lustre/kernel_patches/patches/ext3-delete_thread-2.4.24.patch;hb=21420e6d66eaaf8de0342beab266460c207c054d
IIRC, it only pushed the unlink/truncate to the thread if the inode had indirect blocks, since the effort of allocating a separate inode and transferring over the allocated blocks wasn't worthwhile otherwise.
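Purely as an illustration (and emphatically not the ext3/JBD code in the patch above), a minimal user-space model of that pattern might look like the following: the unlink fast path moves ownership of the costly structure onto an orphan list under a lock (standing in for the journal transaction), a background thread does the slow freeing off the fast path, and small files skip the handoff entirely, mirroring the heuristic described above. All names and types here are invented for the sketch:

/*
 * Toy model of deferred delete: NOT kernel code, no real ext3 APIs.
 * The caller hands expensive-to-free structures to a worker thread.
 */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

struct orphan {
    struct orphan *next;
    void *blocks;            /* stands in for the indirect-block tree */
};

static struct orphan *orphan_list;    /* the "orphan list" */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work = PTHREAD_COND_INITIALIZER;

/* Fast path seen by unlink(): O(1) handoff under the lock,
 * which plays the role of the main process' transaction. */
static void defer_delete(void *blocks, int has_indirect)
{
    if (!has_indirect) {     /* small file: free inline, no handoff */
        free(blocks);
        return;
    }
    struct orphan *o = malloc(sizeof(*o));
    o->blocks = blocks;
    pthread_mutex_lock(&lock);
    o->next = orphan_list;   /* ownership moves to the orphan list */
    orphan_list = o;
    pthread_cond_signal(&work);
    pthread_mutex_unlock(&lock);
}

/* Background thread: the slow traversal/bitmap work happens here. */
static void *delete_thread(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!orphan_list)
            pthread_cond_wait(&work, &lock);
        struct orphan *o = orphan_list;
        orphan_list = o->next;
        pthread_mutex_unlock(&lock);

        free(o->blocks);     /* the expensive part, off the fast path */
        free(o);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, delete_thread, NULL);
    defer_delete(malloc(1 << 20), 1);  /* "big" file: handed off    */
    defer_delete(malloc(64), 0);       /* small file: freed inline  */
    sleep(1);                          /* let the worker drain; the
                                          toy just exits afterwards */
    return 0;
}

The point of the sketch is only the shape of the design: the caller's critical section does a constant amount of bookkeeping to make the delete crash-safe, and everything proportional to file size happens asynchronously.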
Cheers, Andreas