[Tux3] Tux3 Univ - 25th Sept.

Pranith Kumar bobby.prani at gmail.com
Sat Sep 27 13:08:29 PDT 2008


2008-09-25 20:13 <flipsout> where were we
2008-09-25 20:13 <flipsout> any requests?
2008-09-25 20:13 <MaZe> last one was bio xfrs
2008-09-25 20:13 <MaZe> then last tuesday we skipped
2008-09-25 20:13 <flipsout> right, conducted by maze
2008-09-25 20:13 <flipsout> it was good
2008-09-25 20:14 <flipsout> now tux3fs has a rather nice generic set of bio
fns
2008-09-25 20:14 <flipsout> an async and a sync bio transfer flavor
2008-09-25 20:14 <MaZe> right
2008-09-25 20:14 <flipsout> fully general, except maybe it could take some
alloc flags
2008-09-25 20:14 <MaZe> alloc flags?
2008-09-25 20:14 <flipsout> yes, like how hard the kernel should try to
satisfy a request
2008-09-25 20:15 <flipsout> you will see that functions like kmalloc take
gfp flags
2008-09-25 20:15 <MaZe> memory wise or io wise?
2008-09-25 20:15 <flipsout> "gfp: get free pages"
2008-09-25 20:15 <flipsout> memory wise
2008-09-25 20:15 <flipsout> well
2008-09-25 20:15 <flipsout> is coupled to io
2008-09-25 20:15 <flipsout> in an incestuous way
2008-09-25 20:15 <flipsout> most of the time, the kernel cache will be just
about full
2008-09-25 20:16 <MaZe> what we have now I believe asks for memory in a 'can
sleep' way
2008-09-25 20:16 <flipsout> except for right after boot, or after unmounting
a volume, say, which invalidates a bunch of cache
2008-09-25 20:16 <flipsout> unless you specify GFP_ATOMIC, it is always "can
sleep"
2008-09-25 20:16 <MaZe> also when you delete a dvd you just checksummed ;-)
2008-09-25 20:16 <flipsout> so that io transfers can take place and other
things can run while waiting for memory to get free
2008-09-25 20:17 <flipsout> we have __GFP_NOFAIL as a gfp flag
2008-09-25 20:17 <MaZe> just means it will try for infinity...
2008-09-25 20:17 <flipsout> it means, under no circumstances return without
completing the allocation
2008-09-25 20:17 <MaZe> until it succeeds
2008-09-25 20:18 <flipsout> yes
2008-09-25 20:18 <flipsout> and what could prevent it from succeeding?
2008-09-25 20:18 <MaZe> asking for 100M on 50M machine
2008-09-25 20:18 <flipsout> true
2008-09-25 20:18 <MaZe> or 120M machine with 20+M already allocated
2008-09-25 20:18 <MaZe> or not enough memory of a specific type
2008-09-25 20:18 <flipsout> or on a 200M machine on which 195M has leaked
2008-09-25 20:19 <MaZe> (ie. asking for low memory, when only high mem is
free)
2008-09-25 20:19 <flipsout> also true
2008-09-25 20:19 <MaZe> [or dma16 or dma32]
2008-09-25 20:19 <flipsout> but the most common reason is: when memory is
full of dirty pages that cannot be written out for some reason
2008-09-25 20:19 <flipsout> in general it is a bug
2008-09-25 20:20 <flipsout> in general, memory can always be allocated in
kernel, by kicking out some cache
2008-09-25 20:20 <MaZe> so writing out dirty pages should not need to
allocate memory, since otherwise it can deadlock?
2008-09-25 20:20 <MaZe> or you need to have a pre-allocated pool of
temporary pages
2008-09-25 20:20 <MaZe> I believe the kernel even provides such features
2008-09-25 20:21 <flipsout> exactly
2008-09-25 20:21 <flipsout> you nailed that
2008-09-25 20:21 <flipsout> in fact, this is an unsolved problem in linux
kernel
2008-09-25 20:21 <flipsout> or it is solved, but the fix is not in mainline
2008-09-25 20:21 <flipsout> see bio-throttle
2008-09-25 20:22 <flipsout> there has been an attempt to fix the problem by
limiting total memory that is allowed to be dirty in kernel
2008-09-25 20:22 <flipsout> "dirty limits"
2008-09-25 20:22 <flipsout> complex, fragile, and doesn't work
2008-09-25 20:22 <flipsout> but has been good for creating lots of bugfixing
activity lately
2008-09-25 20:22 <flipsout> anyway
2008-09-25 20:22 <flipsout> enough on memory for now?
2008-09-25 20:23 <MaZe> I think so...
2008-09-25 20:23 <MaZe> this is just something to be aware of?
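[For readers of the log: the gfp semantics discussed above can be sketched in userspace C. The flag names mirror the kernel's GFP_ATOMIC and __GFP_NOFAIL, but toy_alloc and its retry loop are purely illustrative; in the real kernel, reclaim runs between attempts instead of a bare retry.]

```c
#include <stdlib.h>

/* Toy analogues of the kernel's gfp flags (names echo the real ones;
 * the allocator below is a hypothetical userspace sketch). */
#define TOY_GFP_ATOMIC  0x1   /* may not sleep: fail fast if no memory */
#define TOY_GFP_NOFAIL  0x2   /* never return NULL: retry until success */

static void *toy_alloc(size_t size, int flags)
{
    void *p = malloc(size);
    if (p || (flags & TOY_GFP_ATOMIC))
        return p;            /* atomic callers must tolerate failure */
    while (!p && (flags & TOY_GFP_NOFAIL))
        p = malloc(size);    /* the kernel would kick out cache here */
    return p;
}
```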
2008-09-25 20:23 <flipsout> let's get back to __copy2
2008-09-25 20:23 <flipsout> get you thinking about it, yes
2008-09-25 20:23 <flipsout> and some practical facts about GFP_ flags to
memory allocators
2008-09-25 20:24 <flipsout> when you allocate a bio, there is an attempt
made to provide a pre-allocated pool, so in theory a bio alloc will never
fail
2008-09-25 20:25 <flipsout> in practice, it can slow to a crawl as the
pre-allocated pool only guarantees 2 bios
2008-09-25 20:25 <flipsout> and it often gets into that corner
2008-09-25 20:25 <MaZe> hmm, so should you keep a couple pre-alloced bios
for yourself?
2008-09-25 20:26 <flipsout> you can maintain your own pool, yes
2008-09-25 20:26 <flipsout> perhaps a good idea when the kernel is in the
broken state it is
2008-09-25 20:26 <flipsout> extra complexity
2008-09-25 20:26 <flipsout> see the "mempool" mechanism
2008-09-25 20:27 <MaZe> is it worth it?
2008-09-25 20:27 <flipsout> better is to fix the bug
2008-09-25 20:27 <flipsout> it works for some situations
2008-09-25 20:27 <flipsout> it's messy
2008-09-25 20:28 <MaZe> much harder to fix bugs than to work around them
;-)
2008-09-25 20:28 <MaZe> the first requires understanding the entire system
2008-09-25 20:28 <MaZe> the second only the way it affects you
2008-09-25 20:28 <flipsout> true
2008-09-25 20:28 <flipsout> we can return to that issue
2008-09-25 20:29 <flipsout> it is fully understood, but not by everybody
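[For readers of the log: the mempool idea MaZe guessed at is a real kernel mechanism (see mm/mempool.c). The sketch below is a hypothetical userspace miniature showing only the reserve-pool fallback, with POOL_MIN = 2 matching the "2 bios" guarantee mentioned above.]

```c
#include <stdlib.h>

#define POOL_MIN 2   /* like the bio pool: guarantee a couple of objects */

struct toy_mempool {
    void *reserve[POOL_MIN];  /* pre-allocated emergency objects */
    int nr_free;              /* how many reserves remain unused */
    size_t objsize;
};

static int toy_mempool_init(struct toy_mempool *mp, size_t objsize)
{
    mp->objsize = objsize;
    for (mp->nr_free = 0; mp->nr_free < POOL_MIN; mp->nr_free++) {
        mp->reserve[mp->nr_free] = malloc(objsize);
        if (!mp->reserve[mp->nr_free])
            return -1;        /* cannot even build the reserve */
    }
    return 0;
}

/* Try the normal allocator first; fall back to the reserve so the
 * caller can always make forward progress (e.g. writing out dirty
 * pages, which in turn frees memory). */
static void *toy_mempool_alloc(struct toy_mempool *mp)
{
    void *p = malloc(mp->objsize);
    if (p)
        return p;
    return mp->nr_free ? mp->reserve[--mp->nr_free] : NULL;
}
```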
2008-09-25 20:29 <flipsout>
http://lxr.linux.no/linux+v2.6.26.5/mm/filemap.c#L2063 <- _2copy
2008-09-25 20:29 <flipsout> I'm totally mystified by the name, 2copy
2008-09-25 20:29 <MaZe> as we all are
2008-09-25 20:29 <MaZe> speaking of all
2008-09-25 20:29 <MaZe> how many of us are here?
2008-09-25 20:29 <MaZe> I'm feeling lonely
2008-09-25 20:30 <MaZe> [and drunk...]
2008-09-25 20:30 <flipsout> heh
2008-09-25 20:30 <flipsout> we'll keep it light then
2008-09-25 20:30 <flipsout> and short
2008-09-25 20:30 <flipsout> I'm attempting to get feeling drunk ;)
2008-09-25 20:30 <flipsout> you're well ahead it would seem
2008-09-25 20:30 <MaZe> heh
2008-09-25 20:31 <MaZe> yea, 2 glasses of white (some pinot), and 2 of red
(not sure what it was), plus steak, plus an afternoon at a gun range
2008-09-25 20:31 <flipsout> the gun range made you drunk I presume
2008-09-25 20:31 <flipsout> wine before or after shooting?
2008-09-25 20:31 <MaZe> nah, that was first, and was fun ;-)
2008-09-25 20:31 <MaZe> the range was first
2008-09-25 20:32 <flipsout> afterward you rode around and shot up a few stop
signs?
2008-09-25 20:32 <MaZe> nah, we left the range gun-less
2008-09-25 20:32 <flipsout> just checking
2008-09-25 20:32 <MaZe> we then invaded an italian restaurant in downtown
mountain view
2008-09-25 20:33 <flipsout> castro street?
2008-09-25 20:33 <MaZe> yep
2008-09-25 20:34 <flipsout> ok, 2copy
2008-09-25 20:34 <MaZe> right
2008-09-25 20:34 <MaZe> seems we're pretty much alone
2008-09-25 20:35 <flipsout> the basic scheme is: alloc page;
->prepare_write; copy data onto it; ->commit_write
2008-09-25 20:35 <flipsout> the -> are calls into the filesystem
2008-09-25 20:36 <MaZe> interesting
2008-09-25 20:36 <MaZe> what's the purpose of the prepare?
2008-09-25 20:36 <flipsout> the channel log will be preserved for posterity
2008-09-25 20:36 <MaZe> verify there's enough disk space, etc?
2008-09-25 20:36 <flipsout> I've always wondered about that
2008-09-25 20:36 <MaZe> yes including the comments about wine
2008-09-25 20:36 <flipsout> for a partial page write, the prepare does a
read before write
2008-09-25 20:36 <flipsout> otherwise it seems pretty useless
2008-09-25 20:36 <flipsout> I think it is useless
2008-09-25 20:37 <flipsout> but it has been in linux since eternity, which
is an argument for it staying another eternity
2008-09-25 20:37 <MaZe> how do you know if it's a partial or full page
write?
2008-09-25 20:37 <flipsout> see the parameters passed to it
2008-09-25 20:37 <flipsout> and where they come from
2008-09-25 20:37 <MaZe> ah
2008-09-25 20:37 <flipsout> this comes from the file pos and write len
2008-09-25 20:38 <MaZe> 2159                status =
a_ops->prepare_write(file, page, offset, offset+bytes);
2008-09-25 20:38 <flipsout> so there may be a partial page at the beginning
and one at the end
2008-09-25 20:38 <MaZe> so I'm assuming that the 3rd and 4th params
2008-09-25 20:38 <MaZe> are 0,4096 if we're writing a full page
2008-09-25 20:38 <flipsout> pretty dumb to have the ->prepare on every page
when only two per transfer need the special treatment
2008-09-25 20:38 <MaZe> oh, moment
2008-09-25 20:38 <flipsout> 3rd 0
2008-09-25 20:38 <MaZe> do we call prepare_write, commit_write per page
2008-09-25 20:38 <flipsout> otherwise right
2008-09-25 20:38 <MaZe> or on page ranges
2008-09-25 20:39 <flipsout> per page
2008-09-25 20:39 <flipsout> dumb
2008-09-25 20:39 <flipsout> actually, this whole part of the kernel sucks
pretty hard
2008-09-25 20:39 <MaZe> why just 3rd 0, why not 4th PAGE_SIZE (4096)?
2008-09-25 20:40 <flipsout> 4th is normally page_size, yes
2008-09-25 20:40 <MaZe> ok
2008-09-25 20:40 <flipsout> 3rd is zero normally because it's an offset
2008-09-25 20:40 <flipsout> in the page
2008-09-25 20:40 <MaZe> right hence the 0,4096 above I was asking about
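[For readers of the log: the per-page loop just discussed can be mirrored in a small userspace sketch. toy_write_chunks is a hypothetical function that splits a (pos, len) write into the per-page (offset, offset + bytes) pairs that ->prepare_write()/->commit_write() receive; only the first and last page of a transfer can be partial, and a full page comes out as (0, 4096) exactly as MaZe asked.]

```c
#define TOY_PAGE_SIZE 4096u

/* Split a buffered write at byte position pos of length len into
 * per-page chunks; records each page's (offset, offset + bytes).
 * Returns the number of pages touched. */
static unsigned toy_write_chunks(unsigned long pos, unsigned long len,
                                 unsigned offs[], unsigned ends[])
{
    unsigned n = 0;
    while (len) {
        unsigned offset = pos & (TOY_PAGE_SIZE - 1);  /* offset in page */
        unsigned bytes = TOY_PAGE_SIZE - offset;
        if (bytes > len)
            bytes = len;
        offs[n] = offset;
        ends[n] = offset + bytes;   /* (0, 4096) for a full page */
        n++;
        pos += bytes;
        len -= bytes;
    }
    return n;
}
```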
2008-09-25 20:40 <flipsout> see the flush_dcache page
2008-09-25 20:41 <flipsout> ah
2008-09-25 20:41 <flipsout> sorry, read wrong
2008-09-25 20:41 <MaZe> oki
2008-09-25 20:41 <flipsout> the dcache flush is a noop on x86
2008-09-25 20:41 <flipsout> some arches need it
2008-09-25 20:41 <flipsout> mips I think
2008-09-25 20:41 <MaZe> what's the purpose?
2008-09-25 20:41 <MaZe> tlb hackery?
2008-09-25 20:41 <flipsout> could not swear to that
2008-09-25 20:41 <flipsout> also not really clear to me
2008-09-25 20:41 <flipsout> it's like L1 cache
2008-09-25 20:42 <flipsout> that has to be explicitly flushed
2008-09-25 20:42 <flipsout> why... another matter
2008-09-25 20:42 <flipsout> seems like braindamage to design a processor
that doesn't know how to flush its cache
2008-09-25 20:42 <flipsout> but people do it, they have their reasons I
suppose
2008-09-25 20:42 <MaZe> maybe the asm code can be much more efficient on
some archs if you assume explicit flushes on any change
2008-09-25 20:43 <flipsout> put that one aside to bother the mips maintainer
about
2008-09-25 20:43 <flipsout> there is some sparse kernel doc on the subject
2008-09-25 20:43 <flipsout> but there is a general principle here: just
because your code works on x86 does not mean it works
2008-09-25 20:44 <MaZe> hmm
2008-09-25 20:44 <flipsout> same is true if all your spinlocks work, because
you compiled with smp disabled
2008-09-25 20:44 <MaZe> so how do you test on the dozen+ archs linux
supports?
2008-09-25 20:44 <MaZe> get users to report errors?
2008-09-25 20:44 <flipsout> that's the question isn't it?
2008-09-25 20:44 <MaZe> after testing on the 2-3 you have access to?
2008-09-25 20:44 <MaZe> well, I can test smp 32 and 64 bit x86
2008-09-25 20:44 <flipsout> you try to be aware of the issues and write
using the generic apis that work on every arch
2008-09-25 20:44 <MaZe> I could probably get my hands on power32 and maybe
alpha
2008-09-25 20:45 <flipsout> and eventually, somebody with that arch will hit
your bug and complain
2008-09-25 20:45 <MaZe> but that's about it
2008-09-25 20:45 <MaZe> right...
2008-09-25 20:45 <MaZe> but...
2008-09-25 20:45 <flipsout> it's good to test on a couple different arches
2008-09-25 20:45 <MaZe> bugs like that are damn near impossible to trace
down
2008-09-25 20:45 <flipsout> big/little endian
2008-09-25 20:45 <MaZe> is any of the archs the most difficult to program
for?
2008-09-25 20:45 <flipsout> and if one can find it, maybe something that has
to do explicit dcache flush and other such horrors
2008-09-25 20:45 <MaZe> (I know alpha has the most lenient memory cache
coherency model)
2008-09-25 20:45 <flipsout> sparc maybe
2008-09-25 20:46 <flipsout> sparc is pretty horrible
2008-09-25 20:46 <MaZe> who still has sparc machines?
2008-09-25 20:46 <flipsout> pretty much complete absence of atomic
instructions
2008-09-25 20:46 <flipsout> dave miller
2008-09-25 20:46 <flipsout> sparc maintainer
2008-09-25 20:46 <flipsout> sun has the niagara box
2008-09-25 20:46 <MaZe> well, hopefully the maintainer does ;-)
2008-09-25 20:46 <flipsout> but it's true, sparc is nearly dead
2008-09-25 20:46 <flipsout> arm
2008-09-25 20:47 <MaZe> arm is embedded
2008-09-25 20:47 <flipsout> on the rise
2008-09-25 20:47 <flipsout> it's a big constituency these days
2008-09-25 20:47 <MaZe> easy to find, hard to find with a lot of ram or
power or disk
2008-09-25 20:47 <MaZe> would testing in emulators work?
2008-09-25 20:47 <flipsout> if it has such a great mips/watt ratio you'd
expect to see it in hpc
2008-09-25 20:48 <MaZe> some sort of qemu or something?
2008-09-25 20:48 <flipsout> but it's not there
2008-09-25 20:48 <flipsout> makes me wonder
2008-09-25 20:48 <flipsout> about that mips/watt ratio
2008-09-25 20:48 <MaZe> of arm?
2008-09-25 20:48 <flipsout> yes
2008-09-25 20:48 <MaZe> arm is good for stuff which needs high mips
2008-09-25 20:48 <MaZe> but rarely
2008-09-25 20:48 <flipsout> possibly you can test in emulation
2008-09-25 20:48 <MaZe> ie. high peak, but mostly idle
2008-09-25 20:48 <flipsout> I think qemu is x86 only
2008-09-25 20:48 <Bushman> i've been doing a lot of stuff on amd geode and
intel atom if that helps, i can test something, they're low end but still
powerful
2008-09-25 20:48 <MaZe> but those are x86 aren't they?
2008-09-25 20:49 <flipsout> bushman, they're x86 arch
2008-09-25 20:49 <flipsout> but testing is _always_ useful
2008-09-25 20:49 <MaZe> x86 is a sick arch... but it's so dominant
2008-09-25 20:49 <flipsout> true
2008-09-25 20:49 <flipsout> POS86
2008-09-25 20:49 <MaZe> hmm no idea what that is
2008-09-25 20:50 <flipsout> you'll decode it eventually ;)
2008-09-25 20:50 <flipsout> are we done with _2copy?
2008-09-25 20:50 <MaZe> oh piece of
2008-09-25 20:50 <flipsout> balance_dirty_pages_ratelimited(mapping); <-
attempt to limit kernel dirty pages
2008-09-25 20:50 <MaZe> so it goes a page at a time right?
2008-09-25 20:50 <flipsout> nasty thing
2008-09-25 20:50 <flipsout> yes, yuck
2008-09-25 20:51 <flipsout> and even then it's a mess
2008-09-25 20:51 <flipsout> there are many different flavors of similar
kinds of io transfer loops
2008-09-25 20:51 <flipsout> in filemap.c
2008-09-25 20:51 <flipsout> take a browse and enjoy some of them
2008-09-25 20:52 <flipsout> oh, look at that vmtruncate at the end
2008-09-25 20:52 <flipsout> scary stuff
2008-09-25 20:52 <MaZe> why is there so much of it?
2008-09-25 20:52 <flipsout> much of what?
2008-09-25 20:52 <flipsout> copy loops?
2008-09-25 20:52 <MaZe> code;-)
2008-09-25 20:52 <flipsout> badly designed
2008-09-25 20:52 <flipsout> or not designed at all
2008-09-25 20:52 <flipsout> just grows
2008-09-25 20:53 <flipsout> changes in response to bug reports
2008-09-25 20:53 <MaZe> it feels like we have multiple interfaces/apis for
everything
2008-09-25 20:53 <flipsout> including performance bug reports
2008-09-25 20:53 <flipsout> you're starting to get a feeling for it
2008-09-25 20:53 <MaZe> and eventually none of them get fully tested
2008-09-25 20:53 <flipsout> it's not unmanageable, just unconscionable
2008-09-25 20:53 <MaZe> at least not in all the myriad of combinations
2008-09-25 20:54 <flipsout> they get pretty well tested
2008-09-25 20:54 <MaZe> hmm
2008-09-25 20:54 <flipsout> I _think_ pretty much all buffered writes get
funneled through _2copy
2008-09-25 20:54 <flipsout> though I haven't completely read through since
this thing landed
2008-09-25 20:54 <MaZe> here's a question then
2008-09-25 20:54 <MaZe> how would I go about tracing a syscall
2008-09-25 20:55 <MaZe> seeing exactly which kernel funcs
2008-09-25 20:55 <flipsout> linux trace toolkit
2008-09-25 20:55 <MaZe> got called in what order with what params?
2008-09-25 20:55 <flipsout> puts probes into the kernel
2008-09-25 20:55 <MaZe> dprobe? kprobe?
2008-09-25 20:55 <flipsout> http://www.opersys.com/LTT/
2008-09-25 20:55 <flipsout> kprobe
2008-09-25 20:55 <flipsout> now part of ltt I think
2008-09-25 20:55 <MaZe> hmm, so that's the 2nd time you've mentioned ltt
2008-09-25 20:56 <MaZe> it's good I take it?
2008-09-25 20:56 <flipsout> I haven't used it
2008-09-25 20:56 <flipsout> I should
2008-09-25 20:56 <flipsout> but it's the only game in town
2008-09-25 20:56 <flipsout> I think
2008-09-25 20:56 <flipsout> latest news is 2004
2008-09-25 20:56 <flipsout> I think that may be because it got at least
partially merged
2008-09-25 20:57 <flipsout> http://ltt.polymtl.ca/
2008-09-25 20:57 <flipsout> moved
2008-09-25 20:58 <MaZe> right
2008-09-25 20:58 <flipsout> it's current
2008-09-25 20:58 <flipsout> to 2.6.27-rc7
2008-09-25 20:58 <flipsout> current to yesterday or so ;)
2008-09-25 20:58 <flipsout> I should try it, there are no doubt many times
when it could have saved me time
2008-09-25 20:59 <MaZe> patch-2.6.27-rc7-lttng-0.26.tar.bz2  25-Sep-2008
16:05  177K - right
2008-09-25 21:00 <flipsout> we should have looked at grab_cache_page in
_2copy
2008-09-25 21:00 <flipsout> another poorly named function
2008-09-25 21:00 <flipsout> but important, and it will serve as our
introduction to the page cache api
2008-09-25 21:00 <MaZe> Find or create a page at the given pagecache
position. Return the locked 2038 * page. This function is specifically for
buffered writes.
2008-09-25 21:00 <flipsout> one of the worst apis in the kernel ;)
2008-09-25 21:01 <flipsout> 2038?
2008-09-25 21:01 <MaZe> line #
2008-09-25 21:01 <flipsout> oh
2008-09-25 21:01 <MaZe> what does page cache position mean?
2008-09-25 21:01 <flipsout> index within the cache for a particular inode
2008-09-25 21:01 <flipsout> so many logical pages offset in the file
2008-09-25 21:01 <MaZe> so a page cache position is a
superblock:inode:offset triplet?
2008-09-25 21:02 <flipsout> just inode:offset
2008-09-25 21:02 <flipsout> because inode->sb
2008-09-25 21:02 <MaZe> ah
2008-09-25 21:02 <MaZe> so now inode #, but instead inode ptr
2008-09-25 21:02 <flipsout> yes
2008-09-25 21:02 <MaZe> s/now/not/
2008-09-25 21:02 <flipsout> the "page cache" is in fact not a single cache
2008-09-25 21:02 <flipsout> maybe it was at one time
2008-09-25 21:03 <flipsout> but now it is a radix tree that hangs off of
each inode
2008-09-25 21:03 <flipsout> giving you an idea maybe how bloated things get
with lots of small files
2008-09-25 21:03 <MaZe> so page cache is effectively per inode?
2008-09-25 21:03 <flipsout> and what a bad idea sysfs is, which uses files
and all the cache stuff that goes with it, to communicate tiny, 4 byte
quantities, to the kernel
2008-09-25 21:04 <flipsout> page cache is per inode
2008-09-25 21:04 <flipsout> not effectively, absolutely
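[For readers of the log: the two points just made, a pagecache position is inode:index, and the lookup structure hangs off each inode, can be sketched in userspace C. toy_inode, toy_page and toy_grab_cache_page are hypothetical stand-ins; the real per-inode structure is a radix tree, shrunk here to a small array, and the real function is grab_cache_page() with its find-or-create-and-lock semantics.]

```c
#include <stdlib.h>

#define TOY_MAX_PAGES 16   /* stand-in for the per-inode radix tree */

struct toy_page { int locked; };

/* Each inode carries its own page lookup structure; the "page cache"
 * is the union of these per-inode trees, not one global table. */
struct toy_inode { struct toy_page *pages[TOY_MAX_PAGES]; };

/* Sketch of grab_cache_page() semantics: find or create the page at
 * the given index for this inode, and return it locked. */
static struct toy_page *toy_grab_cache_page(struct toy_inode *inode,
                                            unsigned long index)
{
    struct toy_page *page = inode->pages[index];
    if (!page) {
        page = calloc(1, sizeof(*page));
        if (!page)
            return NULL;
        inode->pages[index] = page;
    }
    page->locked = 1;   /* caller unlocks after e.g. ->commit_write */
    return page;
}
```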
2008-09-25 21:04 <MaZe> why this split to per inode level?
2008-09-25 21:05 <flipsout> we think it's a good idea
2008-09-25 21:05 <MaZe> doesn't it make it harder to find what to free when
memory runs low?
2008-09-25 21:05 <flipsout> no, because all the pages are linked together
via a lru list
2008-09-25 21:05 <flipsout> but anyway
2008-09-25 21:05 <flipsout> lru is probably a bad idea
2008-09-25 21:05 <MaZe> lru = least recently used
2008-09-25 21:05 <flipsout> yes
2008-09-25 21:05 <flipsout> self organizing list
2008-09-25 21:06 <flipsout> simple minded
2008-09-25 21:06 <MaZe> just for the folks reading this later
2008-09-25 21:06 <flipsout> not very effective, especially since we mostly
bypass it
2008-09-25 21:06 <MaZe> who bypasses it?
2008-09-25 21:06 <flipsout> we do
2008-09-25 21:06 <flipsout> in writeout for example
2008-09-25 21:06 <flipsout> it's mostly per-inode using the inode dirty
lists
2008-09-25 21:07 <MaZe> we, as in tux
2008-09-25 21:07 <MaZe> or we as in fs drivers?
2008-09-25 21:07 <flipsout> there as in linuxen
2008-09-25 21:07 <flipsout> we as in linux penguins
2008-09-25 21:08 <MaZe> hmm
2008-09-25 21:08 <flipsout> the lru has exactly one purpose: to decide which
page to evict next
2008-09-25 21:08 <flipsout> we mess with the lru idea so much that we don't
get good decisions on that
2008-09-25 21:09 <MaZe> what do you mean mess?
2008-09-25 21:09 <flipsout> all kinds of mess
2008-09-25 21:09 <flipsout> there is the concept of hot and cold end of the
lru list
2008-09-25 21:09 <MaZe> does it evict both clean and dirty pages?
2008-09-25 21:10 <flipsout> and there is code to try to move pages to the
hot or cold end of the list according to whether we think the page is hot or
cold
2008-09-25 21:10 <flipsout> both clean and dirty
2008-09-25 21:10 <flipsout> actually only clean
2008-09-25 21:10 <flipsout> it cleans dirty pages
2008-09-25 21:10 <flipsout> and evicts clean pages
2008-09-25 21:10 <MaZe> yes, to prefer using hot pages, since those are
likely to be in cache
2008-09-25 21:10 <MaZe> so we won't be wasting cache by using hot pages
2008-09-25 21:10 <flipsout> right, except I think we mostly blow chunks in
deciding what will be hot
2008-09-25 21:11 <MaZe> since the cache lines of a hot page can replace
those of a previous page
2008-09-25 21:11 <flipsout> yes, evicting pages that will be faulted in again
immediately does no good, quite the contrary
2008-09-25 21:12 <flipsout> or read via filesystem operations
2008-09-25 21:12 <MaZe> ok, it's getting late
2008-09-25 21:12 <flipsout> true
2008-09-25 21:12 <MaZe> and I'm falling asleep
2008-09-25 21:12 <flipsout> see you
2008-09-25 21:12 <MaZe> ;-)
2008-09-25 21:13 <MaZe> should be time for questions now
2008-09-25 21:13 <flipsout> overtime
2008-09-25 21:13 <MaZe> and since I've been asking questions all session...
2008-09-25 21:13 <MaZe> I'll let other folks ask questions now
2008-09-25 21:13 <flipsout> next tuesday we will continue with
grab_cache_page
2008-09-25 21:14 <MaZe> finally
2008-09-25 21:14 <MaZe> ;-)
2008-09-25 21:14 -!- ChanServ changed mode/#tux3 -> +o flips
2008-09-25 21:14 <flips> Topic for #tux3 is: Tux3 list membership roars past
100! ~ http://tux3.org ~ Tux3 U, right here Tuesdays and Thursdays at 8 p.m.
Pacific Time ~ Next session: grab_cache_page and friends
2008-09-25 21:14 -!- flips changed mode/#tux3 -> -o shapor
2008-09-25 21:15 -!- flips changed mode/#tux3 -> -o flips
2008-09-25 21:15 <MaZe> yeah, tried changing that earlier and failed
2008-09-25 21:15 <flips> I'll unlock the topic
2008-09-25 21:15 <MaZe> nah
2008-09-25 21:15 <MaZe> anyway, let's find a bed

