Re: spinlocks (was Re: [f-cpu] another DATE report)
On Tue, Mar 19, 2002 at 12:10:03AM +0100, Christophe wrote:
[...]
> - With two concurrent processors, the first processor to write always passes,
> so the second one retries and hits on the second attempt, because it has time
> to write before the former processor can pass a second time. So both
> processors end up sharing the hits at about 1/2 each.
Without conflict, best case:
time  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
CPU A R--------------W
CPU B                 R--------------W
With conflict (first write attempt on CPU B fails), worst case:
time  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
CPU A R--------------W
CPU B  R--------------R--------------W
The final (succeeding) operations, counted from the initial read, will
never overlap. The same is true for the spinlock variant, except that
the `worst case' situation is much better (less delay in case of
failure):
Spinlock, worst case:
time  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
CPU A LR--------------WU
CPU B                   LR--------------WU
The letters mean: R(ead), W(rite), L(ock), U(nlock)
[...]
> By the way, if you don't call this with_cas in an eternal loop, your
> statement "repeat an indefinite number of times" becomes false: once a
> processor hits, it won't come back for a very long time (at least I hope
> so!), so it is no longer a possible source of contention for the other
> processors waiting for their hits.
Things like push/pop operations on a stack happen all the time, not once
in a millennium :)
> But okay, if long_computation is nondeterministic, I wouldn't use CAS or CAS2!
I wouldn't use a spinlock either, then. Never block if you have no idea
how long the block will persist.
> Anyway, CAS shouldn't be something that occurs in 99% of the running code,
> so such a situation should be avoided.
>
> > Note that this does not mean that I don't like CAS. Au contraire -- we
> > definitely need an atomic read-modify-write operation, and this one
> > looks pretty good to me, although it doesn't solve all problems.
>
> Of course, it doesn't solve all problems. But I prefer it to semaphores
> when possible.
Semaphores are slightly different, and usually not present in hardware.
When a task requests a semaphore that is in use, it is suspended (there
will be a task switch). As soon as the semaphore is freed, the first
process waiting for it is woken up, or maybe all of them, and probably
another task switch is performed.
A simple semaphore can be written using CAS:
void down(struct sem *sem) {
	while (!CAS(&sem->count, 1, 0)) {
		wait_on_semaphore(sem);	/* does a task switch */
		/* we come back here when the task is woken up again */
	}
}

void up(struct sem *sem) {
	sem->count = 1;
}
A real semaphore (with an initial count > 1) is a little harder...
We'd need another instruction in order to implement that efficiently,
like `atomic test-for-zero-and-decrement-otherwise'.
--
Michael "Tired" Riepe <Michael.Riepe@stud.uni-hannover.de>
"All I wanna do is have a little fun before I die"
*************************************************************
To unsubscribe, send an e-mail to majordomo@seul.org with
unsubscribe f-cpu in the body. http://f-cpu.seul.org/