From: Christian Borntraeger
Date: Sun, 9 Apr 2017 20:09:38 +0000 (+0200)
Subject: s390/mm: fix CMMA vs KSM vs others
X-Git-Tag: release-20171130_firefly~4^2~100^2~1^2~12^2~20
X-Git-Url: http://demsky.eecs.uci.edu/git/?a=commitdiff_plain;ds=sidebyside;h=702db976b8574745b20492fc2a89abfa4ec2bef1;p=firefly-linux-kernel-4.4.55.git

s390/mm: fix CMMA vs KSM vs others

commit a8f60d1fadf7b8b54449fcc9d6b15248917478ba upstream.

On heavy paging with KSM I see guest data corruption. Turns out that
KSM will add pages to its tree, where the mapping returns true for
pte_unused (or might become so later). KSM will unmap such pages and
reinstantiate them with different attributes (e.g. write-protected or
special), for example in replace_page or write_protect_page. This
uncovered a bug in our pagetable handling: we must remove the unused
flag as soon as an entry becomes present again.

Signed-off-by: Christian Borntraeger
Signed-off-by: Martin Schwidefsky
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 024f85f947ae..e2c0e4eab037 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -829,6 +829,8 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 {
 	pgste_t pgste;
 
+	if (pte_present(entry))
+		pte_val(entry) &= ~_PAGE_UNUSED;
 	if (mm_has_pgste(mm)) {
 		pgste = pgste_get_lock(ptep);
 		pgste_val(pgste) &= ~_PGSTE_GPS_ZERO;
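
[Editor's sketch] To make the two added lines concrete, below is a minimal
user-space model of the fix. It is illustrative only, not kernel code:
pte_t is reduced to a plain unsigned long, and the _PAGE_PRESENT and
_PAGE_UNUSED values are stand-in bit assignments, not necessarily the
real s390 definitions. It replays the KSM scenario from the commit
message: an entry that once carried the unused flag is re-established,
and only the guarded clear keeps pte_unused() from later reporting a
live mapping as discardable.

/*
 * Minimal user-space model of the fix (illustrative only, not the
 * kernel implementation). Bit values and helper names loosely mirror
 * the s390 software PTE bits.
 */
#include <stdio.h>

typedef unsigned long pte_t;

#define _PAGE_PRESENT	0x001UL	/* SW pte present bit (illustrative value) */
#define _PAGE_UNUSED	0x080UL	/* SW pte unused bit (illustrative value) */

static int pte_present(pte_t pte) { return (pte & _PAGE_PRESENT) != 0; }
static int pte_unused(pte_t pte)  { return (pte & _PAGE_UNUSED) != 0; }

/* Models the patched set_pte_at(): a present entry never stays "unused". */
static void set_pte_at(pte_t *ptep, pte_t entry)
{
	if (pte_present(entry))
		entry &= ~_PAGE_UNUSED;	/* the two lines the patch adds */
	*ptep = entry;
}

int main(void)
{
	pte_t pte = 0;

	/*
	 * KSM scenario from the commit message: the old mapping had the
	 * unused flag set; KSM unmaps the page and reinstantiates the
	 * entry, here with the stale flag still in the raw value.
	 */
	set_pte_at(&pte, _PAGE_PRESENT | _PAGE_UNUSED);

	/*
	 * With the fix this prints "present=1 unused=0". Without the
	 * guarded clear, unused would remain set and the page could be
	 * discarded under memory pressure, corrupting guest data.
	 */
	printf("present=%d unused=%d\n", pte_present(pte), pte_unused(pte));
	return 0;
}

One plausible reading of the pte_present() guard (not spelled out in the
commit): non-present PTE values encode other state such as swap entries,
so the flag must only be masked when the entry is actually present.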