From 4d61851d140e6cd540b1defc545069a5d5e4641d Mon Sep 17 00:00:00 2001
From: "Matthew Wilcox (Oracle)"
Date: Wed, 6 Mar 2024 21:27:30 +0000
Subject: [PATCH] UPSTREAM: mm: fix list corruption in put_pages_list

My recent change to put_pages_list() dereferences folio->lru.next after
returning the folio to the page allocator. Usually this is now on the
pcp list with other free folios, so we try to free an already-free
folio. This only happens with lists that have more than 15 entries, so
it wasn't immediately discovered. Revert to using list_for_each_safe()
so we dereference lru.next before disposing of the folio.

Link: https://lkml.kernel.org/r/20240306212749.1823380-1-willy@infradead.org
Fixes: 24835f899c01 ("mm: use free_unref_folios() in put_pages_list()")
Change-Id: I9f9e2d354da7b51c5e47d3c1dff3df44460b6df8
Signed-off-by: Matthew Wilcox (Oracle)
Reported-by: "Borah, Chaitanya Kumar"
Closes: https://lore.kernel.org/intel-gfx/SJ1PR11MB61292145F3B79DA58ADDDA63B9232@SJ1PR11MB6129.namprd11.prod.outlook.com/
Signed-off-by: Andrew Morton
(cherry picked from commit b555895c313511830762dbb2f469587a822c1759)
Bug: 419599659
Signed-off-by: Kalesh Singh
---
 mm/swap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 665bc70192b5..1dbcd6fdb54c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -134,10 +134,10 @@ EXPORT_SYMBOL(__folio_put);
 void put_pages_list(struct list_head *pages)
 {
 	struct folio_batch fbatch;
-	struct folio *folio;
+	struct folio *folio, *next;
 
 	folio_batch_init(&fbatch);
-	list_for_each_entry(folio, pages, lru) {
+	list_for_each_entry_safe(folio, next, pages, lru) {
 		if (!folio_put_testzero(folio))
 			continue;
 		if (folio_test_large(folio)) {
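
Editor's note: for readers unfamiliar with the distinction the commit message
relies on, the following is a minimal userspace sketch of the same
use-after-free pattern. It is not kernel code; "struct node",
unsafe_free_all() and safe_free_all() are illustrative names only. It shows
the point the commit makes: the iterator must read the next pointer before
the current entry is handed back to the allocator, which is what the extra
"next" cursor in list_for_each_entry_safe() provides.

/*
 * Minimal userspace sketch of the bug class fixed above -- not kernel code.
 * "struct node", unsafe_free_all() and safe_free_all() are illustrative names.
 */
#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int value;
};

/*
 * Broken pattern: the loop reads cur->next *after* cur has been freed,
 * the same class of bug as reading folio->lru.next after the folio has
 * been returned to the page allocator.  (Shown for contrast; not called.)
 */
static void __attribute__((unused)) unsafe_free_all(struct node *head)
{
	for (struct node *cur = head; cur; cur = cur->next)	/* use-after-free */
		free(cur);
}

/*
 * Fixed pattern: cache the next pointer before disposing of the current
 * entry -- the role of the extra cursor in list_for_each_entry_safe().
 */
static void safe_free_all(struct node *head)
{
	struct node *cur = head;

	while (cur) {
		struct node *next = cur->next;	/* read the link first... */
		free(cur);			/* ...then free the entry  */
		cur = next;
	}
}

int main(void)
{
	struct node *head = NULL;

	/* Build a list longer than 15 entries, mirroring the report above. */
	for (int i = 0; i < 20; i++) {
		struct node *n = malloc(sizeof(*n));

		if (!n)
			abort();
		n->value = i;
		n->next = head;
		head = n;
	}

	safe_free_all(head);
	printf("list freed without touching freed memory\n");
	return 0;
}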