[Tux3] Cont: a few thoughts about REALLY outstanding features

myLC at gmx.net
Sat Dec 20 09:41:38 PST 2008


'lo again, =)

I can currently get online only sporadically, so I'll
give a three-in-one reply here:


Philipp Marek replied:
> > - insert/prepend a chunk of data into a file
> > - remove a chunk from a file
> > - move a chunk from one file into another
> Well, if such things are possible, it would only be a small
> step to make "copy this chunk of data to the other file, and
> let it start from some offset", with COW semantics.
>
> And IIRC there was some talk about using extents in several
> "versions" of the filesystem in parallel (with COW) - so
> this idea would simply say that we need COW within a
> *single* version, too.

That's a roundabout way of looking at it, "from the back of
the head through the eye" so to speak. ;-)
To me, all that's needed to make this possible is support
for sparse blocks anywhere within a file (beginning, middle,
or end). I'm not sure how COW helps here; you can modify
blocks, but not their size (except at the end), right?
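To make the chunk operations under discussion concrete, here is a minimal sketch of their byte-level semantics. This is purely illustrative: the function names are made up, and a real filesystem would splice extent or block pointers at block-aligned offsets instead of moving data. (For what it's worth, later Linux kernels did grow comparable fallocate(2) flags, FALLOC_FL_COLLAPSE_RANGE and FALLOC_FL_INSERT_RANGE.)

```python
# Illustrative byte-level semantics of the proposed chunk operations.
# A real filesystem would manipulate extent/block pointers instead of
# copying bytes, and the offsets would be block-aligned.

def insert_range(data: bytes, offset: int, chunk: bytes) -> bytes:
    """Insert a chunk at offset, shifting the tail up."""
    return data[:offset] + chunk + data[offset:]

def collapse_range(data: bytes, offset: int, length: int) -> bytes:
    """Remove length bytes at offset, shifting the tail down."""
    return data[:offset] + data[offset + length:]

def move_range(src: bytes, s_off: int, length: int,
               dst: bytes, d_off: int):
    """Move a chunk out of one 'file' and into another."""
    chunk = src[s_off:s_off + length]
    return (collapse_range(src, s_off, length),
            insert_range(dst, d_off, chunk))
```

For example, `insert_range(b"abcdef", 2, b"XY")` yields `b"abXYcdef"`, and collapsing those two bytes again restores the original.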



Daniel Phillips replied (making me rather angry ;-):
> If you are writing a video editing application, you probably
> should design an intermediate representation that uses
> multiple files and keeps track of them with an index.

Yes, that's what's being done currently: reinventing the
wheel in thousands of applications...


> Just another filesystem I am afraid, with one big advance
> (versioning) and a number of incremental ones.  This time
> around, we aim to implement existing practice better, keep
> the code base tight, and add some features that are show
> stoppers if Linux doesn't have them.
>
> My theory is, if we really focus on just a few things and do
> those as well as we can, and satisfy the really common use
> cases like desktop and NFS server, that puts us in a
> position to play, and for other people to play with our
> code.  And other people will be able to play a lot more
> easily if we continue our focus on small and tight and
> really convenient to work on.

Take a look at XFS. Isn't XFS + xfsdump great compared to
ext3? Yet ext3 is used on perhaps more than 90% of Linux
boxes. One reason surely is a general dislike among
distributors toward XFS, because of where it came from and
what it did to the kernel. The other, though, is one you
might encounter as well: as long as there is no graphical
tool for doing all the magic (and not just some crappy Tcl
app that blindly calls the command-line tools), you may find
your filesystem being ignored, as demand for something new
almost always comes from the users...


> Then just grab the code and start playing with pointers.  We
> will do this much for you:  we will take special care to
> make it easy to change block pointers around on the fly.
> That's needed for base capabilities like volume shrink.
> You can use it for your magic pointers project.

Heh, the last time I wrote something like a (very basic,
memory-based) filesystem, it was on a 68k Amiga. ;-)

I look at it this way:
The representation of a file is somewhat like a linked list.
You (and everybody else, for that matter, as I've never seen
such an API) are currently treating it like a stack: it can
only grow or shrink at the end. To me that looks like a
waste.
Clearly there can be no "demand" for such functionality,
since it has never existed; programmers have always had to
work around these deficiencies. Once it existed, with an
API, it would surely be used, and it would quickly generate
growing demand...
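The linked-list view above can be sketched concretely. Here is a hypothetical file kept as an ordered list of extents, where a mid-file insert splits one extent and splices in the new chunk, i.e. pointer work, instead of rewriting everything behind the insertion point. All names are illustrative, not any real API:

```python
# Hypothetical file-as-extent-list. Each extent is a bytes chunk; a
# mid-file insert is a list splice, not a rewrite of the whole tail.

class ExtentFile:
    def __init__(self):
        self.extents = []            # ordered list of bytes chunks

    def append(self, chunk: bytes):
        self.extents.append(chunk)

    def insert(self, offset: int, chunk: bytes):
        pos = 0
        for i, ext in enumerate(self.extents):
            if pos + len(ext) >= offset:
                cut = offset - pos
                # split the extent at the cut, splice the chunk in,
                # dropping any empty pieces
                self.extents[i:i + 1] = [e for e in
                                         (ext[:cut], chunk, ext[cut:])
                                         if e]
                return
            pos += len(ext)
        self.extents.append(chunk)   # offset at or past EOF

    def read(self) -> bytes:
        return b"".join(self.extents)
```

With extents `[b"hello", b"world"]`, inserting `b"XX"` at offset 7 splits the second extent and reads back as `b"hellowoXXrld"`; no data before or after the cut is copied.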


Peter Kelemen wrote:
> History has shown that it is a reasonably well-working
> abstraction and anything more complex just blows
> maintainability costs through the roof.

And it does blow those costs through the roof, even more so;
just not on your end! ;-)


> >  Let's take a real world example: Video editing. [...]
>
> Every serious video editing software works with individual
> intermediary frames as single files for various reasons, one
> of them being what you cite.

And not only those; yes, that's my point exactly:
reinventing the wheel over and over. Plus there are
limitations to that approach (no knowledge of the underlying
filesystem or ways to manipulate it), making it far from
perfect each time.

I'm not sure why sparse blocks within files would make life
that much harder. Sure, if abused, you can get a new form of
fragmentation, but that can be fixed much like the ordinary
kind. Apart from that, what would be so problematic about
it?


Again, sorry for my dead slow replies!

                                        LC (myLC at gmx.net)



_______________________________________________
Tux3 mailing list
Tux3 at tux3.org
http://mailman.tux3.org/cgi-bin/mailman/listinfo/tux3
