btrfs: compression: don't try to compress if we don't have enough pages
author David Sterba <dsterba@suse.com>
Mon, 14 Jun 2021 10:45:18 +0000 (12:45 +0200)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wed, 28 Jul 2021 09:12:20 +0000 (11:12 +0200)
commit 94e5a12759552aa60d6576f773d0e7a9242594de
tree 57e6b6c0df23550261fb56504f13417209b8edaf
parent 654d3d1e010702abfe63d9bb033843c3529da5b7
btrfs: compression: don't try to compress if we don't have enough pages

commit f2165627319ffd33a6217275e5690b1ab5c45763 upstream

The early check for whether we should attempt compression does not take
into account the number of input pages. It can happen that there's only
one page, e.g. a tail page left after some ranges of BTRFS_MAX_UNCOMPRESSED
size have been processed, or an isolated page that won't be converted to
an inline extent.
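
A minimal sketch of the kind of early gate this describes, in kernel
style; compress_file_range(), inode_need_compress() and nr_pages are
names from the btrfs code paths, but the exact placement shown here is
an assumption, not the literal diff:

	/*
	 * Sketch: skip the compression attempt entirely when the range
	 * spans at most one page, since such a range can never satisfy
	 * the "one block shorter" requirement checked later.
	 */
	if (nr_pages > 1 && inode_need_compress(BTRFS_I(inode), start, end)) {
		/* ... allocate pages and run the compressor ... */
	}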

The single page would be compressed, but a later check would drop it
again because the result size must be at least one block shorter than
the input. That can never work with just one page.
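
To see why, here is a small self-contained userspace sketch of the size
test described above, assuming 4 KiB pages and 4 KiB blocks and a
hypothetical compression_wins() helper that mirrors the "at least one
block shorter" rule; it illustrates the arithmetic only, not the kernel
code:

	#include <stdbool.h>
	#include <stdio.h>

	/* Assumed geometry: 4 KiB pages and 4 KiB blocks. */
	#define PAGE_SIZE	4096UL
	#define BLOCK_SIZE	4096UL

	/* Round up to a block boundary; space is allocated in whole blocks. */
	static unsigned long block_align(unsigned long bytes)
	{
		return (bytes + BLOCK_SIZE - 1) / BLOCK_SIZE * BLOCK_SIZE;
	}

	/* Compression only wins if it frees at least one full block. */
	static bool compression_wins(unsigned long nr_pages,
				     unsigned long compressed_bytes)
	{
		unsigned long in_bytes = nr_pages * PAGE_SIZE;

		return block_align(compressed_bytes) + BLOCK_SIZE <= in_bytes;
	}

	int main(void)
	{
		/* Even a perfect compressor (1 byte out) loses for one page... */
		printf("1 page  -> %s\n", compression_wins(1, 1) ? "win" : "lose");
		/* ...while two pages squeezed into one block do win. */
		printf("2 pages -> %s\n", compression_wins(2, 4096) ? "win" : "lose");
		return 0;
	}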

CC: stable@vger.kernel.org # 4.4+
Signed-off-by: David Sterba <dsterba@suse.com>
[sudip: adjust context]
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
fs/btrfs/inode.c