libceph: fix corruption when using page_count 0 page in rbd
author Chunwei Chen <tuxoko@gmail.com>
Wed, 23 Apr 2014 04:35:09 +0000 (12:35 +0800)
committer Jiri Slaby <jslaby@suse.cz>
Mon, 9 Jun 2014 13:53:58 +0000 (15:53 +0200)
commit 92014a624d433fe66fc6c2e012b5d1320b39d69f
tree 641cadc7ddc324809c35bf77e8ebd6325790e4e8
parent 4ee1108042b15955583703494965934c36b16c32
libceph: fix corruption when using page_count 0 page in rbd

commit 178eda29ca721842f2146378e73d43e0044c4166 upstream.

It has been reported that using ZFSonLinux on rbd will result in memory
corruption. The bug report can be found here:

https://github.com/zfsonlinux/spl/issues/241
http://tracker.ceph.com/issues/7790

The reason is that ZFS will send pages with a page_count of 0 into rbd, which in
turn sends them to tcp_sendpage. However, tcp_sendpage cannot deal with a
page_count of 0: it does a get_page() and put_page() on the page, and the
put_page() erroneously frees it while the caller still owns it.

This type of issue has been noted before and handled in iscsi, drbd, etc., so
rbd should handle it as well. This fix addresses the issue by falling back to
the slower sendmsg path when a page with page_count 0 is detected, as sketched
below.
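A minimal sketch of the approach in net/ceph/messenger.c, assuming the existing
ceph_tcp_sendmsg() helper and a __ceph_tcp_sendpage() wrapper around
kernel_sendpage() (the helper names are illustrative, not a verbatim copy of
the upstream diff):

	static int ceph_tcp_sendpage(struct socket *sock, struct page *page,
				     int offset, size_t size, bool more)
	{
		struct kvec iov;
		int ret;

		/* sendpage cannot safely handle a page with page_count 0;
		 * fall back to sendmsg in that case */
		if (page_count(page) >= 1)
			return __ceph_tcp_sendpage(sock, page, offset, size, more);

		iov.iov_base = kmap(page) + offset;
		iov.iov_len = size;
		ret = ceph_tcp_sendmsg(sock, &iov, 1, size, more);
		kunmap(page);

		return ret;
	}

The sendmsg path copies the data into socket buffers instead of taking a page
reference, so the TCP stack never get_page()/put_page()'s a caller-owned page
whose page_count is 0.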

Cc: Sage Weil <sage@inktank.com>
Cc: Yehuda Sadeh <yehuda@inktank.com>
Signed-off-by: Chunwei Chen <tuxoko@gmail.com>
Reviewed-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
net/ceph/messenger.c