mips/atomic: Fix smp_mb__{before,after}_atomic()
author		Peter Zijlstra <peterz@infradead.org>
		Thu, 13 Jun 2019 13:43:20 +0000 (15:43 +0200)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
		Mon, 7 Oct 2019 16:59:29 +0000 (18:59 +0200)
commit		cf4e9c2472acf9e49377b8702028f822c7eff403
tree		e340ec6d4f75a1fd97fd24fdefebfbd752c9d190
parent		a7ef43bf90640761584ef7f618e99cd277167b91
mips/atomic: Fix smp_mb__{before,after}_atomic()

[ Upstream commit 42344113ba7a1ed7b5654cd5270af0d5698d8521 ]

Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as MIPS
without WEAK_REORDERING_BEYOND_LLSC) fail for:

*x = 1;
atomic_inc(u);
smp_mb__after_atomic();
r0 = *y;
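
For reference, the guarantee being relied on here can be written as a
litmus test against the Linux Kernel Memory Model. The sketch below is
illustrative only and not part of this commit (the test name, the second
thread and the WRITE_ONCE()/READ_ONCE() wrappers are added for the litmus
form); the memory model should flag the "exists" state as forbidden,
while the pre-fix code allows it as explained next:

  C SB+mb__after_atomic+mb

  (*
   * Illustrative sketch only.  If atomic_inc() followed by
   * smp_mb__after_atomic() provides full ordering, the final
   * state in the "exists" clause below must be forbidden.
   *)

  {
  	atomic_t u = ATOMIC_INIT(0);
  }

  P0(int *x, int *y, atomic_t *u)
  {
  	int r0;

  	WRITE_ONCE(*x, 1);
  	atomic_inc(u);
  	smp_mb__after_atomic();
  	r0 = READ_ONCE(*y);
  }

  P1(int *x, int *y)
  {
  	int r1;

  	WRITE_ONCE(*y, 1);
  	smp_mb();
  	r1 = READ_ONCE(*x);
  }

  exists (0:r0=0 /\ 1:r1=0)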

Because, while the atomic_inc() implies memory ordering, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

atomic_inc(u);
*x = 1;
smp_mb__after_atomic();
r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

atomic_inc(u);
r0 = *y;
*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.
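
As an illustration only, and not the literal diff to the files listed
below, the usual way to make a GCC inline asm double as a compiler
barrier is a "memory" clobber. A simplified LL/SC add along these lines
shows the idea (example_atomic_add() and example_atomic_t are made-up
names for this sketch, not the actual MIPS implementation):

  /*
   * Simplified sketch only.  The "memory" clobber forbids the compiler
   * from caching memory values in registers across the asm or moving
   * memory accesses around it, so the RmW op now also acts as a
   * compiler barrier.
   */
  typedef struct { int counter; } example_atomic_t;	/* stand-in for atomic_t */

  static inline void example_atomic_add(int i, example_atomic_t *v)
  {
  	int temp;

  	__asm__ __volatile__(
  	"1:	ll	%0, %1		\n"	/* load-linked		*/
  	"	addu	%0, %2		\n"	/* add the increment	*/
  	"	sc	%0, %1		\n"	/* store-conditional	*/
  	"	beqz	%0, 1b		\n"	/* retry if sc failed	*/
  	: "=&r" (temp), "+m" (v->counter)
  	: "Ir" (i)
  	: "memory");	/* the added clobber = compiler barrier */
  }

barrier() itself is typically just an empty asm with the same "memory"
clobber, which is why adding that clobber to the RmW asm is what
prevents the compiler re-ordering shown above.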

Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
arch/mips/include/asm/atomic.h
arch/mips/include/asm/barrier.h
arch/mips/include/asm/bitops.h
arch/mips/include/asm/cmpxchg.h