hppa: fix pthread spinlock

URL: https://bugs.debian.org/725508
Author:    John David Anglin <dave.anglin@bell.net>  2014-05-04 14:02:30 -04:00
Committer: Mike Frysinger
parent db2f6f4794
commit d7f914848b
3 changed files with 47 additions and 8 deletions

ChangeLog

@@ -1,3 +1,10 @@
+2016-01-06  John David Anglin  <dave.anglin@bell.net>
+
+	* sysdeps/hppa/nptl/pthread_spin_init.c (pthread_spin_init): Replace
+	asm stw with atomic_exchange_rel. Add explanatory comment.
+	* sysdeps/hppa/nptl/pthread_spin_unlock.c (pthread_spin_unlock):
+	Likewise.
+
 2016-01-05  H.J. Lu  <hongjiu.lu@intel.com>
 
 	[BZ #19122]

sysdeps/hppa/nptl/pthread_spin_init.c

@@ -20,9 +20,25 @@
 int
 pthread_spin_init (pthread_spinlock_t *lock, int pshared)
 {
-  int tmp = 0;
-  /* This should be a memory barrier to newer compilers */
-  __asm__ __volatile__ ("stw,ma %1,0(%0)"
-                        : : "r" (lock), "r" (tmp) : "memory");
+  /* CONCURRENCY NOTES:
+
+     The atomic_exchange_rel synchronizes-with the atomic_exchange_acq in
+     pthread_spin_lock.
+
+     On hppa we must not use a plain `stw` to reset the guard lock.  This
+     has to do with the kernel compare-and-swap helper that is used to
+     implement all of the atomic operations.
+
+     The kernel CAS helper uses its own internal locks and that means that
+     to create a true happens-before relationship between any two threads,
+     the second thread must observe the internal lock having a value of 0
+     (it must attempt to take the lock with ldcw).  This creates the
+     ordering required for a second thread to observe the effects of the
+     RMW of the kernel CAS helper in any other thread.
+
+     Therefore if a variable is used in an atomic macro it must always be
+     manipulated with atomic macros in order for memory ordering rules to
+     be preserved.  */
+  atomic_exchange_rel (lock, 0);
   return 0;
 }
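
To make the synchronizes-with pairing described in the new comment concrete, here is a minimal sketch (not part of this commit) of the same idea built on GCC's __atomic builtins rather than glibc's internal atomic_exchange_acq/atomic_exchange_rel macros; the toy_* names are hypothetical stand-ins for the pthread_spin_lock/pthread_spin_unlock pair.

/* Minimal model of the acquire/release pairing: the release exchange in
   unlock synchronizes-with the acquire exchange in lock.  Hypothetical
   sketch only; not glibc's implementation.  */

typedef int toy_spinlock_t;

static void
toy_spin_lock (toy_spinlock_t *lock)
{
  /* Acquire side: an atomic RMW that pairs with the release exchange
     in toy_spin_unlock.  */
  while (__atomic_exchange_n (lock, 1, __ATOMIC_ACQUIRE) != 0)
    /* Spin with relaxed loads until the lock looks free, then retry
       the exchange.  */
    while (__atomic_load_n (lock, __ATOMIC_RELAXED) != 0)
      ;
}

static void
toy_spin_unlock (toy_spinlock_t *lock)
{
  /* Release side: a release RMW rather than a plain store, mirroring
     the atomic_exchange_rel (lock, 0) above, so that on a target whose
     atomics go through a kernel CAS helper the unlock still takes part
     in the helper's internal locking protocol.  */
  __atomic_exchange_n (lock, 0, __ATOMIC_RELEASE);
}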

sysdeps/hppa/nptl/pthread_spin_unlock.c

@@ -20,9 +20,25 @@
 int
 pthread_spin_unlock (pthread_spinlock_t *lock)
 {
-  int tmp = 0;
-  /* This should be a memory barrier to newer compilers */
-  __asm__ __volatile__ ("stw,ma %1,0(%0)"
-                        : : "r" (lock), "r" (tmp) : "memory");
+  /* CONCURRENCY NOTES:
+
+     The atomic_exchange_rel synchronizes-with the atomic_exchange_acq in
+     pthread_spin_lock.
+
+     On hppa we must not use a plain `stw` to reset the guard lock.  This
+     has to do with the kernel compare-and-swap helper that is used to
+     implement all of the atomic operations.
+
+     The kernel CAS helper uses its own internal locks and that means that
+     to create a true happens-before relationship between any two threads,
+     the second thread must observe the internal lock having a value of 0
+     (it must attempt to take the lock with ldcw).  This creates the
+     ordering required for a second thread to observe the effects of the
+     RMW of the kernel CAS helper in any other thread.
+
+     Therefore if a variable is used in an atomic macro it must always be
+     manipulated with atomic macros in order for memory ordering rules to
+     be preserved.  */
+  atomic_exchange_rel (lock, 0);
   return 0;
 }
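
For context, the user-visible interface touched by this change is the standard POSIX spinlock API. The small test program below is hypothetical and not part of the commit; it simply exercises pthread_spin_init, pthread_spin_lock, pthread_spin_unlock and pthread_spin_destroy, and relies on the acquire/release pairing above for the final count to be exact.

#include <pthread.h>
#include <stdio.h>

/* Hypothetical example: two threads increment a shared counter under
   the pthread spinlock whose init and unlock paths are fixed above.  */

static pthread_spinlock_t lock;
static long counter;

static void *
worker (void *arg)
{
  (void) arg;
  for (int i = 0; i < 1000000; i++)
    {
      pthread_spin_lock (&lock);     /* acquire: pairs with unlock's release */
      counter++;
      pthread_spin_unlock (&lock);   /* release: atomic_exchange_rel on hppa */
    }
  return NULL;
}

int
main (void)
{
  pthread_t t1, t2;

  pthread_spin_init (&lock, PTHREAD_PROCESS_PRIVATE);
  pthread_create (&t1, NULL, worker, NULL);
  pthread_create (&t2, NULL, worker, NULL);
  pthread_join (t1, NULL);
  pthread_join (t2, NULL);
  pthread_spin_destroy (&lock);

  /* With correct memory ordering the final value is exactly 2000000.  */
  printf ("%ld\n", counter);
  return 0;
}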