[Tux3] Cool features

Michael Keulkeul kriptomik at gmail.com
Thu Jan 8 02:21:12 PST 2009


Very good news!

 It would be so cool to be able to use tux3 as a backend for a SAN!
Journaling data + metadata is one solution, or sync writes (I don't know
whether that works on OCFS2), but then I miss journaling, and the
performance, well, just sucks... Last thing: to my knowledge even ZFS is
not ready for that, and I would love to see tux3 beat it on that
ground :)

Just a note: about clustered filesystems, the "no data loss" part is
crucial if you host LUNs on the filesystem, because if a takeover occurs
and the LUN does not contain exactly what the initiator expects, it's
likely that bad things will happen, and that your takeover was pretty
useless (or worse than a target service shutdown).

And thanks again for your attention!


On 1/7/09, Daniel Phillips <phillips at phunq.net> wrote:
>
> On Tuesday 06 January 2009 06:47, Michael Keulkeul wrote:
> > Hi
> > First I must say that tux3 is the coolest and cleanest (in many ways)
> > filesystem project I've seen so far. I've seen a discussion thread
> > about cool features, so I'll add mine.
>
> Wow, geek praise doesn't come any higher than cool and clean, thanks
> for that :-)
>
> > Clustered filesystem:
> > No grid or things like that, but the ability to maintain a coherent
> > cache on a single other host that has access to the same disk
> > backend, in order to get read-only access from this other host and
> > enable fast filesystem takeover (make the filesystem read/write on
> > the "passive" host in 100 ms or less) with no data loss. It would be
> > nice to be able to use tux3 as a backend for high availability LUNs,
> > and it might provide some "NVRAM" if you assume that the hosts of the
> > cluster will not all fail at the same time (each single host could
> > provide some memory to the other).
>
> Extending the Tux3 atomic commit to a cluster will be a fine project
> for later on.  I don't think there is much point in doing a read-only
> partial cluster implementation.  The techniques for keeping a coherent
> cache on a cluster are well established by now.
>
> More immediately, we will have replication, just as ddsnap/Zumastor
> has it now, except more efficient.  This does not meet your 100 ms
> takeover goal, but it will be useful for many common situations, like
> serving home directories.  It does not necessarily fail over with the
> very latest data, which might not have been replicated yet.
>
> The proper way to do what you want is with a cluster filesystem.  It
> would be lots of fun to turn Tux3 into a cluster filesystem.  There
> are plenty of interesting puzzles to solve.  For now, OCFS2 is pretty
> good.
>
> > Filesystem freeze:
> > Get a utility that flushes the cache and returns something when it's
> > done, then freezes IO to disk and throttles/stacks writes in a memory
> > buffer until it's full. When it's full, it returns something again
> > and resumes normal operation, or freezes IO until we ask it to
> > resume. This is in order to take clean snapshots when the backend
> > supports versioning. Even if it's not necessary due to the tux3
> > design, it would be nice to be able to do this in order to ensure
> > that some IO is committed to disk, then get some time to do something
> > to the disk backend, with no impact on the filesystem side.
>
> I think all you want there is the ability to treat a snapshot as a
> barrier: user asks for a snapshot, Tux3 starts a new delta and sets a
> flag on it; when that snapshot has committed, the snapshot request
> is acknowledged.  That way, the user gets a snapshot of what has been
> sent to the filesystem most recently, without needing to stall the
> filesystem throughput.
>
> Tux3 does not need a new memory buffer for this; the needed mechanism
> is just what has already been designed.
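>
> To make the barrier idea concrete, here is a minimal sketch in C. All
> of the names in it (struct sb, struct delta, start_new_delta,
> delta_set_flag, wait_delta_committed, DELTA_SNAPSHOT) are hypothetical
> illustrations, not the actual Tux3 interfaces:
>
>     /* Hypothetical interfaces, for illustration only: */
>     struct sb;                              /* superblock */
>     struct delta;                           /* one atomic update group */
>     struct delta *start_new_delta(struct sb *sb);
>     void delta_set_flag(struct delta *delta, int flag);
>     void wait_delta_committed(struct delta *delta);
>     enum { DELTA_SNAPSHOT = 1 };
>
>     /* Take a snapshot that acts as a barrier: only the caller blocks,
>      * while normal traffic keeps flowing into the next delta. */
>     int snapshot_barrier(struct sb *sb)
>     {
>             /* Close the current delta and open a fresh one, so the
>              * snapshot covers everything sent to the fs so far. */
>             struct delta *delta = start_new_delta(sb);
>
>             /* Flag it: committing this delta creates the snapshot. */
>             delta_set_flag(delta, DELTA_SNAPSHOT);
>
>             /* When the flagged delta has reached disk, acknowledge
>              * the user's snapshot request. */
>             wait_delta_committed(delta);
>             return 0;
>     }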
>
> > Choose bitmaps or extents at filesystem creation time:
> > Because you sometimes know that fragmentation will be your worst foe
> > (that can happen if you keep a lot of versions), and you don't really
> > care about metadata weight. If we could just choose, even without any
> > chance to change this after creation time, it would be very very
> > nice.
>
> I think we will be able to make that decision automatically, pretty
> reliably.  There is a crossover point where extents become more compact
> than bitmaps.  The plan is to convert automatically on crossing the
> threshold, being a little lazy to avoid too many conversions.  If that
> proves too hard to implement, then a mount option might be a reasonable
> approach.
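>
> As a rough sketch of what the lazy crossover could look like in C (the
> names and the factor of four are made up for illustration, not the
> planned Tux3 code):
>
>     /* An extent record costs several bytes while a bitmap costs one
>      * bit per block, so extents only win when free space is grouped
>      * into long runs.  Requiring a clear margin in both directions
>      * adds hysteresis, so we do not flip back and forth every time
>      * the sizes drift across the crossover point. */
>     enum alloc_form { ALLOC_BITMAP, ALLOC_EXTENTS };
>
>     enum alloc_form choose_form(enum alloc_form current,
>                                 unsigned long bitmap_bytes,
>                                 unsigned long extent_bytes)
>     {
>             /* Convert only when the other form is clearly smaller. */
>             if (current == ALLOC_BITMAP && extent_bytes * 4 < bitmap_bytes)
>                     return ALLOC_EXTENTS;
>             if (current == ALLOC_EXTENTS && bitmap_bytes * 4 < extent_bytes)
>                     return ALLOC_BITMAP;
>             return current;  /* being a little lazy */
>     }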
>
> > Thanks for your time reading this, and tell me if something does not
> > make sense; English is not my first language!
> > And thanks for all your efforts providing us with a real modern Linux
> > filesystem that deserves the name!
> >
> > Michael
>
> I read it a couple of times and it made more sense the second time :-)
>
> You may have to wait a while for cluster Tux3, but the other two should
> arrive over the next few months.
>
> Regards,
>
> Daniel
>