<div dir="ltr">Also I think it's spelled "inodes", not "indoes"<div><br></div><div>If you have this incorrect spelling anywhere in a script or document I suggest fixing it.</div></div><div class="gmail_extra">
<br><br><div class="gmail_quote">On Mon, Jun 24, 2013 at 4:06 PM, Daniel Phillips <span dir="ltr"><<a href="mailto:daniel.raymond.phillips@gmail.com" target="_blank">daniel.raymond.phillips@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>A beautifully clear explanation and big efficiency gain from a tiny core patch.</p><div class="HOEnZb"><div class="h5">
<div class="gmail_quote">On Jun 24, 2013 3:03 AM, "OGAWA Hirofumi" <<a href="mailto:hirofumi@mail.parknet.co.jp" target="_blank">hirofumi@mail.parknet.co.jp</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
<br>
Under the following stress test, sync(2) became the top cpu load:<br>
<br>
fsstress -s 1 -n 50000 -p 3 -d `pwd`<br>
<br>
Profiling with perf showed that the cpu load was lock contention on inode_sb_list_lock.<br>
<br>
sync(2) is a data integrity operation, so it has to make sure that all<br>
dirty data at the sync(2) point has been written. The bdi flusher<br>
flushes the current dirty data and waits for it. But if there is I/O<br>
already in flight when sync_inodes_sb() starts, sync_inodes_sb() can<br>
return before that prior in-flight I/O completes.<br>
<br>
So wait_sb_inodes() walks all inodes on the sb to make sure the<br>
in-flight I/O has completed too. While it walks the inodes,<br>
inode_sb_list_lock contends with other operations such as<br>
create(2). This is the cause of the cpu load.<br>
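The walk in question looks roughly like the following simplified sketch of wait_sb_inodes() (not the exact kernel code; the real function has additional i_lock/I_FREEING handling and details vary by kernel version):<br>

```c
/* Simplified sketch of wait_sb_inodes(). Every iteration takes the
 * global inode_sb_list_lock, so a concurrent create(2) or unlink(2),
 * which must modify sb->s_inodes under the same lock, contends with
 * the sync(2) walk. */
static void wait_sb_inodes(struct super_block *sb)
{
	struct inode *inode;

	spin_lock(&inode_sb_list_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		struct address_space *mapping = inode->i_mapping;

		if (mapping->nrpages == 0)
			continue;	/* no pages, nothing in flight */

		__iget(inode);
		spin_unlock(&inode_sb_list_lock);

		/* wait for writeback already in flight on this inode */
		filemap_fdatawait(mapping);

		iput(inode);
		spin_lock(&inode_sb_list_lock);
	}
	spin_unlock(&inode_sb_list_lock);
}
```
<br>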
<br>
From another point of view, wait_sb_inodes() is (arguably) necessary<br>
for legacy FSes. But, for example, with data=journal on ext*,<br>
wait_sb_inodes() is worse than useless, because ext* can do the same<br>
job via its own transaction list (and in a more efficient way).<br>
<br>
Likewise, tux3 is in the same situation as data=journal.<br>
<br>
Also, even with data=ordered, ext* might be able to check for<br>
in-flight I/O via its ordered data list (with some new additional<br>
check; I'm not sure).<br>
<br>
<br>
So this patch adds an sb->s_op->wait_inodes() handler to replace<br>
wait_sb_inodes(). With this, FSes can optimize the wait using their<br>
own internal knowledge.<br>
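For instance, a filesystem whose ->sync_fs() already waits for all data I/O by the time it returns (as tux3's does) could wire up a no-op handler. This is a hypothetical sketch, not part of the patch; "examplefs" and its names are made up for illustration:<br>

```c
/* Hypothetical filesystem: its transaction commit in ->sync_fs()
 * already waited for every in-flight data I/O, so the per-inode
 * wait is unnecessary and can be a no-op. */
static void examplefs_wait_inodes(struct super_block *sb)
{
	/* Nothing to do: ->sync_fs() already waited for all I/O. */
}

static const struct super_operations examplefs_sops = {
	/* ... other handlers ... */
	.wait_inodes	= examplefs_wait_inodes,
};
```
<br>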
<br>
[An alternative way to optimize this: push wait_sb_inodes() down into<br>
the ->sync_fs() handler of every FS.]<br>
<br>
<br>
The following is the resulting profile on tux3 (where the<br>
->wait_inodes() handler is a no-op function):<br>
<br>
- 13.11% fsstress [kernel.kallsyms] [k] sync_inodes_sb<br>
- sync_inodes_sb<br>
- 99.97% sync_inodes_one_sb<br>
iterate_supers<br>
sys_sync<br>
system_call<br>
sync<br>
- 9.39% fsstress [kernel.kallsyms] [k] _raw_spin_lock<br>
- _raw_spin_lock<br>
+ 62.72% sync_inodes_sb<br>
+ 7.46% _atomic_dec_and_lock<br>
+ 6.15% inode_add_lru<br>
+ 4.43% map_region<br>
+ 3.07% __find_get_buffer<br>
+ 2.71% sync_inodes_one_sb<br>
+ 1.77% tux3_set_buffer_dirty_list<br>
+ 1.43% find_inode<br>
+ 1.02% iput<br>
+ 0.69% dput<br>
+ 0.57% __tux3_get_block<br>
+ 0.53% shrink_dcache_parent<br>
+ 0.53% __d_instantiate<br>
<br>
Before patch:<br>
<br>
real 2m40.994s<br>
user 0m1.344s<br>
sys 0m15.832s<br>
<br>
After patch:<br>
<br>
real 2m34.748s<br>
user 0m1.360s<br>
sys 0m5.356s<br>
<br>
Signed-off-by: OGAWA Hirofumi <<a href="mailto:hirofumi@mail.parknet.co.jp" target="_blank">hirofumi@mail.parknet.co.jp</a>><br>
---<br>
<br>
fs/fs-writeback.c | 5 ++++-<br>
include/linux/fs.h | 1 +<br>
2 files changed, 5 insertions(+), 1 deletion(-)<br>
<br>
diff -puN include/linux/fs.h~tux3-fix-wait_sb_inodes include/linux/fs.h<br>
--- tux3fs/include/linux/fs.h~tux3-fix-wait_sb_inodes 2013-06-24 18:36:45.000000000 +0900<br>
+++ tux3fs-hirofumi/include/linux/fs.h 2013-06-24 18:36:45.000000000 +0900<br>
@@ -1595,6 +1595,7 @@ struct super_operations {<br>
int (*write_inode) (struct inode *, struct writeback_control *wbc);<br>
int (*drop_inode) (struct inode *);<br>
void (*evict_inode) (struct inode *);<br>
+ void (*wait_inodes)(struct super_block *);<br>
void (*put_super) (struct super_block *);<br>
int (*sync_fs)(struct super_block *sb, int wait);<br>
int (*freeze_fs) (struct super_block *);<br>
diff -puN fs/fs-writeback.c~tux3-fix-wait_sb_inodes fs/fs-writeback.c<br>
--- tux3fs/fs/fs-writeback.c~tux3-fix-wait_sb_inodes 2013-06-24 18:36:45.000000000 +0900<br>
+++ tux3fs-hirofumi/fs/fs-writeback.c 2013-06-24 18:36:45.000000000 +0900<br>
@@ -1401,7 +1401,10 @@ void sync_inodes_sb(struct super_block *<br>
bdi_queue_work(sb->s_bdi, &work);<br>
wait_for_completion(&done);<br>
<br>
- wait_sb_inodes(sb);<br>
+ if (sb->s_op->wait_inodes)<br>
+ sb->s_op->wait_inodes(sb);<br>
+ else<br>
+ wait_sb_inodes(sb);<br>
}<br>
EXPORT_SYMBOL(sync_inodes_sb);<br>
<br>
_<br>
<br>
--<br>
OGAWA Hirofumi <<a href="mailto:hirofumi@mail.parknet.co.jp" target="_blank">hirofumi@mail.parknet.co.jp</a>><br>
<br>
_______________________________________________<br>
Tux3 mailing list<br>
<a href="mailto:Tux3@phunq.net" target="_blank">Tux3@phunq.net</a><br>
<a href="http://phunq.net/mailman/listinfo/tux3" target="_blank">http://phunq.net/mailman/listinfo/tux3</a><br>
</blockquote></div>
</div></div><br>
<br></blockquote></div><br></div>