btrfs: compression: don't try to compress if we don't have enough pages
author David Sterba <dsterba@suse.com>
Mon, 14 Jun 2021 10:45:18 +0000 (12:45 +0200)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wed, 28 Jul 2021 11:31:02 +0000 (13:31 +0200)
commit 11561d2f7b9dc943035ddbffa6840c0e7d34b935
tree 12090575db1d499854f135676ed6c853816bdb16
parent 4980301e1c1fe69176d90254f92451f8f27fe8f5
btrfs: compression: don't try to compress if we don't have enough pages

commit f2165627319ffd33a6217275e5690b1ab5c45763 upstream

The early check for whether we should attempt compression does not take
the number of input pages into account. It can happen that there's only
one page, e.g. a tail page left after ranges of BTRFS_MAX_UNCOMPRESSED
size have been processed, or an isolated page that won't be converted
to an inline extent.
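
The fix is to gate the early decision on the page count as well. A
minimal sketch of that idea (hedged, not the verbatim patch: the helper
name and the inode/start/end/nr_pages parameters are illustrative; the
real change sits around inode_need_compress() in compress_file_range()
in fs/btrfs/inode.c):

  /*
   * Hedged sketch only: skip the whole compression attempt when there
   * is just one input page.  Names are assumptions for illustration.
   */
  static bool should_attempt_compression(struct btrfs_inode *inode, u64 start,
                                         u64 end, unsigned long nr_pages)
  {
          /* one page can never end up a full block shorter, don't bother */
          if (nr_pages <= 1)
                  return false;

          return inode_need_compress(inode, start, end);
  }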

The single page would be compressed but a later check would drop it
again because the result size must be at least one block shorter than
the input. That can never work with just one page.
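
To make the arithmetic concrete, here is a small standalone sketch
modeled on the "at least one block shorter" requirement described
above (assuming 4 KiB pages and a 4 KiB block size; PAGE_SZ, BLOCK_SZ
and the variable names are assumptions for this example only, not the
kernel's code):

  #include <stdio.h>

  #define PAGE_SZ  4096UL
  #define BLOCK_SZ 4096UL

  /* round x up to the next multiple of a */
  #define ALIGN_UP(x, a) ((((x) + (a) - 1) / (a)) * (a))

  int main(void)
  {
          for (unsigned long nr_pages = 1; nr_pages <= 3; nr_pages++) {
                  unsigned long total_in = nr_pages * PAGE_SZ;
                  /* best case: the data compresses down to a single byte */
                  unsigned long total_out = ALIGN_UP(1UL, BLOCK_SZ);
                  int win = total_out + BLOCK_SZ <= total_in;

                  printf("%lu page(s): in=%lu best-case out=%lu -> %s\n",
                         nr_pages, total_in, total_out,
                         win ? "kept" : "dropped");
          }
          return 0;
  }

With one page even a perfectly compressible input rounds up to a full
block and is always dropped; with two or more pages a win is at least
possible, so only then is the compression attempt worth the work.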

CC: stable@vger.kernel.org # 4.4+
Signed-off-by: David Sterba <dsterba@suse.com>
[sudip: adjust context]
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
fs/btrfs/inode.c