[Tux3] Deferred namespace operations, change return type of fs create method

Mike Snitzer snitzer at gmail.com
Mon Dec 8 17:50:20 PST 2008


On 12/8/08, Daniel Phillips <phillips at phunq.net> wrote:
> On Monday 08 December 2008 16:38, Mike Snitzer wrote:
>  > On Mon, Dec 8, 2008 at 5:20 PM, Daniel Phillips <phillips at phunq.net> wrote:
>  > > On Monday 08 December 2008 13:02, Mike Snitzer wrote:
>
> > >> So my question is, how might tux3 be trained to _not_ cleanup orphaned
>  > >> inodes on re-mount like conventional Linux filesystems?  Could a
>  > >> re-mount filter be added that would trap and then somehow reschedule
>  > >> tux3's deferred delete of orphan inodes?  This would leave a window of
>  > >> time for an exposed hook to be called (by an upper layer) to
>  > >> reconstitute a reference on each orphaned inode that is still open.
>  > >
>  > > Something like the NFS silly rename problem.  There, the client avoids
>  > > closing a file by renaming it instead, which creates a cleanup problem.
>  > > Something more elegant ought to be possible.
>  > >
>  > > If the dirent is gone, leaving an orphaned inode, and the filesystem
>  > > has been convinced not to delete the orphan on restart, how would you
>  > > re-open the file?  Open by inode number from within kernel?
>  >
>  > Well, in a distributed filesystem the server-side may not even have
>  > the notion of open or closed; the client is concerned with such
>  > details.
>  >
>  > But yes, some mechanism to read the orphaned inode off the disk into
>  > memory.  E.g. with iget5_locked(), Linux gives you enough rope to
>  > defeat i_nlink == 0, combined with a call to read_inode()
>  > (ext3_read_inode() became ext3_iget()).  Unfortunately, reading
>  > orphaned inodes with ext3 requires clearing the EXT3_ORPHAN_FS flag
>  > in the super_block's s_mount_state.
>  >
>  > It is all quite ugly, and maybe a corner case that tux3 doesn't
>  > need/want to worry about?
>
>
> Would a mount option to move orphans into lost+found on remount do the
>  trick?

Yes, I think it could.  Good call.
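
To make the re-open path concrete, here is a rough kernel-side sketch of
open-by-inode-number using iget5_locked(), along the lines discussed
above.  All of the tux3_* names (tux3_iget_orphan, tux3_test_inode,
tux3_set_inode, tux3_read_inode) are hypothetical, purely for
illustration; they are not real tux3 API, and a real implementation
would live in the filesystem's own iget path.

```c
/* Hypothetical sketch: re-reading an orphaned inode by number.
 * The tux3_* helpers below are illustrative names only.
 */
static int tux3_test_inode(struct inode *inode, void *data)
{
	return inode->i_ino == *(unsigned long *)data;
}

static int tux3_set_inode(struct inode *inode, void *data)
{
	inode->i_ino = *(unsigned long *)data;
	return 0;
}

struct inode *tux3_iget_orphan(struct super_block *sb, unsigned long ino)
{
	struct inode *inode;

	/* Look up or allocate the in-core inode for this number. */
	inode = iget5_locked(sb, ino, tux3_test_inode, tux3_set_inode, &ino);
	if (!inode)
		return ERR_PTR(-ENOMEM);
	if (!(inode->i_state & I_NEW))
		return inode;	/* already in cache, reference taken */

	/* Read the on-disk inode even though i_nlink may be zero; the
	 * caller takes responsibility for the orphan's lifetime, e.g.
	 * the upper layer reconstituting references after remount. */
	if (tux3_read_inode(inode)) {
		iget_failed(inode);
		return ERR_PTR(-EIO);
	}
	unlock_new_inode(inode);
	return inode;
}
```

The point of the sketch is just that iget5_locked() hands back a locked,
I_NEW inode that the filesystem can fill from disk without any dirent or
i_nlink check standing in the way, which is exactly the rope mentioned
above.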

Mike

_______________________________________________
Tux3 mailing list
Tux3 at tux3.org
http://mailman.tux3.org/cgi-bin/mailman/listinfo/tux3


