hwpoison, memcg: forcibly uncharge LRU pages
author Michal Hocko <mhocko@suse.com>
Fri, 12 May 2017 22:46:26 +0000 (15:46 -0700)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wed, 31 Jan 2018 11:06:09 +0000 (12:06 +0100)
commit 18365225f0440d09708ad9daade2ec11275c3df9 upstream.

Laurent Dufour has noticed that hwpoisoned pages are kept charged.  In
his particular case he has hit a bad_page("page still charged to
cgroup") when onlining a hwpoison page.  While this looks like something
that shouldn't happen in the first place, because onlining hwpoison
pages and returning them to the page allocator makes little sense, it
shows a real problem.

hwpoison pages usually do not get freed, so we do not uncharge them (at
least not since commit 0a31bc97c80c ("mm: memcontrol: rewrite uncharge
API")).  Each charge also pins the memcg (since e8ea14cc6ead ("mm:
memcontrol: take a css reference for each charged page")), so the
mem_cgroup and its associated state will never go away.  Fix this leak
by forcibly uncharging an LRU hwpoisoned page in
delete_from_lru_cache().  We also have to tweak uncharge_list() because
it cannot rely on a zero ref count for these pages.

[akpm@linux-foundation.org: coding-style fixes]
Fixes: 0a31bc97c80c ("mm: memcontrol: rewrite uncharge API")
Link: http://lkml.kernel.org/r/20170502185507.GB19165@dhcp22.suse.cz
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Tested-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
mm/memcontrol.c
mm/memory-failure.c

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e25b93a4267dc1682fc6189d0010780595557445..55a9facb8e8ddd40df49fce1d2c509320177f6f8 100644
@@ -5576,7 +5576,7 @@ static void uncharge_list(struct list_head *page_list)
                next = page->lru.next;
 
                VM_BUG_ON_PAGE(PageLRU(page), page);
-               VM_BUG_ON_PAGE(page_count(page), page);
+               VM_BUG_ON_PAGE(!PageHWPoison(page) && page_count(page), page);
 
                if (!page->mem_cgroup)
                        continue;
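
For context, after this change the head of the uncharge_list() loop
reads roughly as follows.  This is a sketch against the kernel tree of
that era, with the per-memcg statistics accumulation elided; it is not
the literal file contents.

static void uncharge_list(struct list_head *page_list)
{
	struct list_head *next = page_list->next;

	/*
	 * The list can be a single page->lru, hence the do-while loop
	 * instead of list_for_each_entry().
	 */
	do {
		struct page *page = list_entry(next, struct page, lru);

		next = page->lru.next;

		VM_BUG_ON_PAGE(PageLRU(page), page);
		/*
		 * Pages being uncharged here are normally already free, but a
		 * hwpoisoned page keeps its elevated reference count, so only
		 * insist on a zero count for everything else.
		 */
		VM_BUG_ON_PAGE(!PageHWPoison(page) && page_count(page), page);

		if (!page->mem_cgroup)
			continue;

		/* ... accumulate counts and uncharge against page->mem_cgroup ... */
	} while (next != page_list);
}
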
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 091fe9b06663362d8b106c805279426eb9b9baf7..92a647957f91a12489d69ba54fe0d783792bbb20 100644
@@ -539,6 +539,13 @@ static int delete_from_lru_cache(struct page *p)
                 */
                ClearPageActive(p);
                ClearPageUnevictable(p);
+
+               /*
+                * Poisoned page might never drop its ref count to 0 so we have
+                * to uncharge it manually from its memcg.
+                */
+               mem_cgroup_uncharge(p);
+
                /*
                 * drop the page count elevated by isolate_lru_page()
                 */
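
Putting the second hunk in context, the fixed helper in
mm/memory-failure.c reads roughly as below.  This is a sketch assuming
the surrounding lines match the tree this was applied to, not a literal
copy of the file.

static int delete_from_lru_cache(struct page *p)
{
	if (!isolate_lru_page(p)) {
		/*
		 * Clear sensible page flags, so that the buddy system won't
		 * complain when the page is unpoison-and-freed.
		 */
		ClearPageActive(p);
		ClearPageUnevictable(p);

		/*
		 * Poisoned page might never drop its ref count to 0 so we have
		 * to uncharge it manually from its memcg.
		 */
		mem_cgroup_uncharge(p);

		/*
		 * drop the page count elevated by isolate_lru_page()
		 */
		put_page(p);
		return 0;
	}
	return -EIO;
}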