Implementing Locks: How to Write Correct Concurrent Programs Without Races

These lecture notes discuss techniques for implementing locks to support mutual exclusion in concurrent programs. They describe Dekker's algorithm and Peterson's simplification, then lay out the requirements of mutual exclusion, progress, and bounded waiting. The remainder covers ways to implement locks using hardware support -- atomic loads and stores, disabling interrupts, and atomic read-modify-write instructions -- together with a wait queue to avoid busy-waiting.


Implementing Locks

Arvind Krishnamurthy
Spring 2001

Implementing Mutual Exclusion

- Dekker's algorithm, later simplified by Peterson
- No hardware support required

    Thread A:
      lockedA = true;
      turn = B;
      while (lockedB && turn != A);
      <critical section>
      lockedA = false;

    Thread B:
      lockedB = true;
      turn = A;
      while (lockedA && turn != B);
      <critical section>
      lockedB = false;
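Below is a minimal sketch of the algorithm above (Peterson's algorithm) in C11, assuming two threads identified as 0 and 1. The names peterson_acquire, peterson_release, and worker are illustrative and not from the slides; on modern hardware the plain loads and stores of the pseudocode can be reordered, so the sketch uses sequentially consistent atomics.

  /* Build with: cc -pthread peterson.c */
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <pthread.h>
  #include <stdio.h>

  atomic_bool locked[2];     /* lockedA, lockedB in the pseudocode */
  atomic_int  turn;          /* whose turn it is to wait */
  long counter = 0;          /* shared state protected by the algorithm */

  void peterson_acquire(int me) {
      int other = 1 - me;
      atomic_store(&locked[me], true);        /* lockedA = true  */
      atomic_store(&turn, other);             /* turn = B        */
      while (atomic_load(&locked[other]) &&   /* while (lockedB && turn != A); */
             atomic_load(&turn) == other)
          ;
  }

  void peterson_release(int me) {
      atomic_store(&locked[me], false);       /* lockedA = false */
  }

  void *worker(void *arg) {
      int me = (int)(long)arg;
      for (int i = 0; i < 100000; i++) {
          peterson_acquire(me);
          counter++;                          /* <critical section> */
          peterson_release(me);
      }
      return NULL;
  }

  int main(void) {
      pthread_t a, b;
      pthread_create(&a, NULL, worker, (void *)0L);
      pthread_create(&b, NULL, worker, (void *)1L);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      printf("counter = %ld\n", counter);     /* expect 200000 */
      return 0;
  }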

The big picture: Requirements

- How to write correct concurrent programs? No race condition, no deadlock, no starvation:
  - Mutual exclusion: at most one process at a time is in the critical section.
  - Progress (deadlock free): if several simultaneous requests, we must allow one to proceed.
  - Bounded (starvation free): a process attempting to enter the critical section will eventually succeed.
- Other requirements:
  - fair: don't make some wait longer than others
  - efficient: don't waste resources waiting
  - simple and easy-to-read programs

Supporting mutual exclusion

- Atomicity is the key: instruction sequence guaranteed to execute indivisibly ("critical section")
- Hardware support for synchronization:
  - atomic load/store
  - interrupt disable
  - atomic read-modify-write instructions
- HW methods are difficult to use. We need better, higher-level primitives.
- This lecture: locks
- Upcoming lectures: semaphores, condition variables (monitors)
- Future: message passing (send & receive)
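To make the "race condition" requirement concrete, here is a short, hypothetical C example (not from the slides): two threads increment a shared counter with no synchronization, and the interleaved read-modify-write sequences lose updates.

  /* Build with: cc -pthread race.c */
  #include <pthread.h>
  #include <stdio.h>

  #define ITERS 1000000

  long counter = 0;                 /* shared state, unprotected: a data race */

  void *worker(void *arg) {
      for (int i = 0; i < ITERS; i++)
          counter++;                /* load, add, store: not atomic */
      return NULL;
  }

  int main(void) {
      pthread_t a, b;
      pthread_create(&a, NULL, worker, NULL);
      pthread_create(&b, NULL, worker, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      /* Expected 2 * ITERS, but interleaved increments usually lose updates. */
      printf("counter = %ld (expected %d)\n", counter, 2 * ITERS);
      return 0;
  }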

Locks: making code atomic

- Lock: shared variable, two operations:
  - Acquire --- wait until the lock is free, then grab it
  - Release --- unlock, waking up a waiter if any
  - essential property: two threads can't hold the same lock at the same time
- How to use? Bracket the critical section in lock/unlock:

    lock hit_lock;
    ...
    acquire(hit_lock); hit = hit + 1; release(hit_lock);

- Result: only one thread updating the counter at a time. Access is "mutually exclusive".
- What have we done? Bootstrap big atomic units from smaller ones (locks).

How to use locks?

- Every shared variable is protected by a lock
  - shared = touched by more than one thread
  - Must hold the lock for a shared variable before you touch it

    int stack[ ], n;
    lock s_l, n_l;

- Atomic operation on several shared variables:
  - acquire all locks before touching, don't release until done

    acquire(s_l); acquire(n_l); stack[n++] = v; release(s_l); release(n_l);
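As a rough illustration of the bracket pattern, here is a sketch using POSIX mutexes in place of the slides' generic acquire/release; the helper names (count_hit, push) are made up for the example.

  #include <pthread.h>

  #define STACK_MAX 128

  int hit = 0;
  pthread_mutex_t hit_lock = PTHREAD_MUTEX_INITIALIZER;

  int stack[STACK_MAX], n = 0;
  pthread_mutex_t s_l = PTHREAD_MUTEX_INITIALIZER;  /* protects stack[] */
  pthread_mutex_t n_l = PTHREAD_MUTEX_INITIALIZER;  /* protects n       */

  void count_hit(void) {
      pthread_mutex_lock(&hit_lock);     /* acquire(hit_lock) */
      hit = hit + 1;                     /* critical section: one thread at a time */
      pthread_mutex_unlock(&hit_lock);   /* release(hit_lock) */
  }

  void push(int v) {
      /* Touching two shared variables atomically: hold both locks for the
       * whole operation, and always acquire them in the same order so two
       * threads cannot deadlock waiting for each other. */
      pthread_mutex_lock(&s_l);
      pthread_mutex_lock(&n_l);
      stack[n++] = v;
      pthread_mutex_unlock(&n_l);
      pthread_mutex_unlock(&s_l);
  }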

How to implement locks?

All require some level of hardware support:

- Atomic memory load and store
  - too complex (see the too-much-milk problem)
- Directly implement lock primitives in hardware
  - makes hardware slower --- not practical
- Disable interrupts (uniprocessor only)
  - the standard solution --- used by Nachos
  - don't give away the CPU and don't respond to any hardware interrupts
  - for multiprocessors, requires some fancy HW support (e.g., read-modify-write instructions)

Using interrupts (1)

- Flawed but simple:

    Lock::Acquire() { disable interrupts; }
    Lock::Release() { enable interrupts; }

- Problems:
  - we often need to support synchronization operations at user level --- the above may make some user thread never give back the CPU
  - critical sections can be arbitrarily long --- it may take too long to respond to an interrupt --- a real-time system won't be happy
  - this won't work for higher-level primitives such as semaphores and condition variables

Using interrupts (2)

Key idea: maintain a lock variable and impose mutual exclusion only on the operations of testing and setting that variable.

    class Lock { int value = FREE; }

    Lock::Acquire() {
      Disable interrupts;
      while (value != FREE) {
        Enable interrupts;
        Disable interrupts;
      }
      value = BUSY;
      Enable interrupts;
    }

    Lock::Release() {
      Disable interrupts;
      value = FREE;
      Enable interrupts;
    }

Using interrupts (3)

Key idea: use a queue to maintain a list of threads waiting for the lock. Avoid busy-waiting:

    class Lock { int value = FREE; }

    Lock::Acquire() {
      Disable interrupts;
      if (value == BUSY) {
        Put on queue of threads waiting for lock;
        Go to sleep;
        // Enable interrupts? No!
      } else {
        value = BUSY;
      }
      Enable interrupts;
    }

    Lock::Release() {
      Disable interrupts;
      if anyone on wait queue {
        Take a waiting thread off the wait queue and put it
        at the front of the ready queue;
      } else {
        value = FREE;
      }
      Enable interrupts;
    }
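The sleeping lock of "Using interrupts (3)" cannot be reproduced directly at user level, since we cannot disable interrupts there. As a rough user-space analogue (an assumption for illustration, not the kernel implementation the slide describes), a pthread mutex can play the role of interrupt disabling around the lock's internal state, and a condition variable can play the role of the wait queue:

  #include <pthread.h>
  #include <stdbool.h>

  typedef struct {
      pthread_mutex_t guard;   /* protects 'busy'; stands in for interrupt disable */
      pthread_cond_t  waiters; /* threads sleeping until the lock is released      */
      bool            busy;    /* value == BUSY / FREE in the slide's pseudocode   */
  } sleep_lock_t;

  void sleep_lock_init(sleep_lock_t *l) {
      pthread_mutex_init(&l->guard, NULL);
      pthread_cond_init(&l->waiters, NULL);
      l->busy = false;
  }

  void sleep_lock_acquire(sleep_lock_t *l) {
      pthread_mutex_lock(&l->guard);                 /* "disable interrupts" */
      while (l->busy)                                /* lock held: join the wait queue */
          pthread_cond_wait(&l->waiters, &l->guard); /* sleep; guard released atomically */
      l->busy = true;                                /* lock is ours */
      pthread_mutex_unlock(&l->guard);               /* "enable interrupts" */
  }

  void sleep_lock_release(sleep_lock_t *l) {
      pthread_mutex_lock(&l->guard);
      l->busy = false;
      pthread_cond_signal(&l->waiters);              /* wake one waiter, if any */
      pthread_mutex_unlock(&l->guard);
  }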

When to re-enable interrupts?

- Before putting the thread on the wait queue?
  - Then Release can check the queue and not wake the thread up.
- After putting the thread on the wait queue, but before going to sleep?
  - Then Release puts the thread on the ready queue, but the thread still thinks it needs to go to sleep! It will go to sleep and miss the wakeup from Release.
- In Nachos, interrupts are disabled when you call Thread::Sleep; it is the responsibility of the next thread-to-run to re-enable interrupts.

Interrupt disable/enable pattern

Time flows downward; each "switch" is a context switch between the two threads:

    Thread A                       Thread B
    .
    .
    Disable interrupts
    Sleep
                   -- switch -->
                                   Sleep returns
                                   Enable interrupts
                                   .
                                   .
                                   Disable interrupts
                                   Sleep
                   <-- switch --
    Sleep returns
    Enable interrupts
    .
    .

Atomic read-modify-write

- On a multiprocessor, interrupt disable does not provide atomicity
  - other CPUs could still enter the critical section
  - disabling interrupts on all CPUs would be expensive
- Solution: HW provides some special instructions
  - test&set (most architectures) --- read the value, write 1 back to memory
  - exchange (x86) --- swaps a value between register and memory
  - compare&swap (68000) --- read the value; if it matches the register, do the exchange
  - load linked and conditional store (MIPS R4000, Alpha) --- read the value in one instruction, do some operations; when the store occurs, check if the value has been modified in the meantime. If not, ok; otherwise, abort and jump back to the start.

Locks using test&set (1)

- Flawed but simple:

    lock value = 0;

    Lock::Acquire() {
      while (test&set(value) == 1)   // while BUSY
        ;
    }

    Lock::Release() { value = 0; }

- Problems:
  - busy-waiting --- the thread consumes CPU while it is waiting
  - could cause problems if threads have different priorities
- Also known as a "spin" lock
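A minimal sketch of this spin lock using C11's atomic_flag, whose atomic_flag_test_and_set behaves like the test&set instruction above (return the old value, write 1); the function names are illustrative.

  #include <stdatomic.h>
  #include <stdbool.h>

  atomic_flag lock_value = ATOMIC_FLAG_INIT;   /* clear == FREE */

  void spin_acquire(atomic_flag *l) {
      while (atomic_flag_test_and_set(l))      /* returns old value: true == BUSY */
          ;                                    /* busy-wait (spin) */
  }

  void spin_release(atomic_flag *l) {
      atomic_flag_clear(l);                    /* value = 0 (FREE) */
  }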

Locks using test&set (2)

Key idea: only busy-wait to atomically check the lock value --- if the lock is busy, give up the CPU. Use a guard on the lock itself.

    Lock::Acquire() {
      while (test&set(guard))   // short wait time
        ;
      if (value == BUSY) {
        Put on queue of threads waiting for lock;
        Go to sleep and set guard to 0;
      } else {
        value = BUSY;
        guard = 0;
      }
    }

    Lock::Release() {
      while (test&set(guard))
        ;
      if anyone on wait queue {
        Take a waiting thread off the wait queue and put it
        at the front of the ready queue;
      } else {
        value = FREE;
      }
      guard = 0;
    }

Test-and-Set on Multiprocessors

- Each processor repeatedly executes a test_and_set
- In hardware, it is implemented as:
  - Fetch the old value
  - Write a "1" blindly
- A write in a cached system results in invalidations to other caches
- The simple algorithm results in a lot of bus traffic
- Wrap an extra test (test-and-test-and-set):

    lock: if (!location)
            if (!test-and-set(location))
              return;
          goto lock;
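A sketch of the test-and-test-and-set loop above using C11 atomics; atomic_exchange stands in for the hardware test&set, and the read-only inner loop is what keeps the spinning in the local cache. The names ttas_acquire and ttas_release are illustrative.

  #include <stdatomic.h>

  atomic_int location = 0;   /* 0 == FREE, 1 == BUSY */

  void ttas_acquire(atomic_int *l) {
      for (;;) {
          /* "test": read-only spin; no bus invalidations while the lock is held */
          while (atomic_load(l) != 0)
              ;
          /* "test-and-set": one atomic exchange; we win if we saw the 0 -> 1 transition */
          if (atomic_exchange(l, 1) == 0)
              return;
      }
  }

  void ttas_release(atomic_int *l) {
      atomic_store(l, 0);
  }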

Ticket Lock for Multiprocessors

- Hardware support: fetch-and-increment
- Obtain a ticket number and wait for your turn

    Lock:
      my_ticket = fetch_and_increment(next_ticket);
      while (my_ticket != now_serving);

    Unlock:
      now_serving++;

- Ensures fairness
- Still could result in a lot of bus transactions
- Fetch-and-increment can also be used to build concurrent queues

Array Locks

- Problem with ticket locks: everyone is polling the same location
- Distribute the shared value, and do directed "unlocks"

    Lock:
      my_slot = fetch_and_increment(next_slot);
      if (my_slot % numProcs == 0)
        fetch_and_add(next_slot, -numProcs);
      my_slot = my_slot % numProcs;
      while (slots[my_slot] == must_wait);
      slots[my_slot] = must_wait;

    Unlock:
      slots[(my_slot + 1) % numProcs] = has_lock;
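A sketch of the ticket lock using C11's atomic_fetch_add as the fetch-and-increment; the field names follow the pseudocode (next_ticket, now_serving), while the type and function names are illustrative.

  #include <stdatomic.h>

  typedef struct {
      atomic_uint next_ticket;   /* next ticket number to hand out */
      atomic_uint now_serving;   /* ticket currently allowed into the critical section */
  } ticket_lock_t;

  void ticket_lock_init(ticket_lock_t *l) {
      atomic_init(&l->next_ticket, 0);
      atomic_init(&l->now_serving, 0);
  }

  void ticket_acquire(ticket_lock_t *l) {
      unsigned my_ticket = atomic_fetch_add(&l->next_ticket, 1);  /* take a ticket */
      while (atomic_load(&l->now_serving) != my_ticket)
          ;                                                       /* wait for your turn */
  }

  void ticket_release(ticket_lock_t *l) {
      atomic_fetch_add(&l->now_serving, 1);                       /* serve the next ticket */
  }

Because tickets are handed out in order, this lock is FIFO-fair, but every waiter still polls the single now_serving location -- the motivation for the array lock above.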

Summary

- Many threads + shared state = race conditions
  - one thread modifying while others are reading/writing
- How to solve?
  - To make multiple threads behave like one safe sequential thread, force only one thread at a time to use shared state.
  - Pure load/store methods are too complex and difficult.
  - The general solution is to use high-level primitives, e.g., locks. They let us bootstrap to arbitrarily sized atomic units.
- Load/store, disabling/enabling interrupts, and atomic read-modify-write instructions are all ways that we can implement high-level atomic operations.
