07 Race Conditions

Race Conditions

Critical Sections
Dekker's Algorithm
Announcements
CS 414 Homework due this Wednesday, Feb 7th

CS 415 Project due the following Monday, February 12th

initial design documents were due last Friday, Feb 2nd

Indy won the Super Bowl!


Review: CPU Scheduling
Scheduling problem
Given a set of processes that are ready to run
Which one to select next

Scheduling criteria
CPU utilization, Throughput, Turnaround, Waiting, Response
Predictability: variance in any of these measures

Scheduling algorithms
FCFS, SJF, SRTF, RR
Multilevel (Feedback-)Queue Scheduling
Goals for Today
Introduction to Synchronization
…or: the trickiest bit of this course

Background
Race Conditions
The Critical-Section Problem

Dekker's Solution
Background
Concurrent access to shared data may result in data inconsistency

Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes

Suppose that we wanted to provide a solution to the producer-consumer problem that fills all the buffers.
Assume an integer count keeps track of the number of full buffers.
Initially, count is set to 0.
It is incremented by the producer after it produces a new buffer
It is decremented by the consumer after it consumes a buffer.
Producer-Consumer
Producer:

    while (true) {
        /* produce an item and put in nextProduced */
        while (count == BUFFER_SIZE)
            ; // do nothing b/c full
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        count++;
    }

Consumer:

    while (true) {
        while (count == 0)
            ; // do nothing b/c empty
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;
        /* consume the item in nextConsumed */
    }
Race Condition
count++ is not an atomic operation. It could be implemented as
register1 = count
register1 = register1 + 1
count = register1
count-- is not an atomic operation. It could be implemented as
register2 = count
register2 = register2 - 1
count = register2

Consider this execution interleaving with count = 5 initially:


S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
What just happened?
Threads share global memory
When a process contains multiple threads, they have
Private registers and stack memory (the context-switching mechanism needs to save and restore registers when switching from thread to thread)
Shared access to the remainder of the process state
This can result in race conditions
Two threads, one counter
Popular web server
Uses multiple threads to speed things up.
Simple shared state error:
each thread increments a shared counter to track number of hits

hits = hits + 1;

What happens when two threads execute concurrently?


Shared counters
Possible result: lost update!
hits = 0

T1: read hits (0)
T2: read hits (0)
T2: hits = 0 + 1
T1: hits = 0 + 1

hits = 1
One other possible result: everything works.
Difficult to debug
Called a race condition
Race conditions
Def: a timing dependent error involving shared state
Whether it happens depends on how threads scheduled
In effect, once thread A starts doing something, it needs to race to finish it, because if thread B looks at the shared memory region before A is done, it may see something inconsistent
Hard to detect:
All possible schedules have to be safe
Number of possible schedule permutations is huge
Some bad schedules? Some that will work sometimes?
They are intermittent
Timing dependent = small changes can hide bug

Celebrate if bug is deterministic and repeatable!


Scheduler assumptions
Process a:
    while (i < 10)
        i = i + 1;
    print "A won!";

Process b:
    while (i > -10)
        i = i - 1;
    print "B won!";

If i is shared, and initialized to 0


Who wins?
Is it guaranteed that someone wins?
What if both threads run on identical-speed CPUs, executing in parallel?
Scheduler Assumptions
Normally we assume that
A scheduler always gives every executable thread opportunities to run
In effect, each thread makes finite progress
But schedulers aren't always fair
Some threads may get more chances than others
To reason about worst-case behavior we sometimes think of the scheduler as an adversary trying to mess up the algorithm
Critical Section Goals
Threads do some stuff, but eventually might try to access shared data

T1:                      T2:
    CSEnter();               CSEnter();
    critical section         critical section
    CSExit();                CSExit();
Critical Section Goals
Perhaps they loop (perhaps not!)

T1:                      T2:
    CSEnter();               CSEnter();
    critical section         critical section
    CSExit();                CSExit();
Critical Section Goals
We would like
Safety (aka mutual exclusion)
No more than one thread can be in a critical section at any time.
Liveness (aka progress)
A thread that is seeking to enter the critical section will eventually succeed
Bounded waiting
A bound must exist on the number of times that other threads are allowed
to enter their critical sections after a thread has made a request to enter its
critical section and before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the N processes

Ideally we would like fairness as well
If two threads are both trying to enter a critical section, they have equal chances of success
In practice, fairness is rarely guaranteed
Solving the problem
A first idea:
Have a boolean flag, inside. Initially false.

    CSEnter()                          CSExit()
    {                                  {
        while (inside) continue;           inside = false;
        inside = true;                 }
    }

Code is unsafe: thread 0 could finish the while test when inside is false, but then thread 1 might call CSEnter() before 0 can set inside to true!

Now ask:
Is this Safe? Live? Bounded waiting?
Solving the problem: Take 2
A different idea (assumes just two threads):
Have a boolean flag per thread, inside[i]. Initially false.

    CSEnter(int i)                         CSExit(int i)
    {                                      {
        inside[i] = true;                      inside[i] = false;
        while (inside[i^1]) continue;      }
    }

Code isn't live: with bad luck, both threads could be looping, with 0 looking at 1, and 1 looking at 0

Now ask:
Is this Safe? Live? Bounded waiting?
Solving the problem: Take 3
Another broken solution, for two threads
Have a turn variable, turn, initially 1.

    CSEnter(int i)                         CSExit(int i)
    {                                      {
        while (turn != i) continue;            turn = i ^ 1;
    }                                      }

Code isn't live: one thread can't enter unless the other entered first, and vice-versa — entries must strictly alternate. But perhaps one thread needs to enter many times and the other fewer times, or not at all

Now ask:
Is this Safe? Live? Bounded waiting?
A solution that works
Dekker's Algorithm (1965)
(book: Exercise 6.1)
(J denotes the other thread, i ^ 1)

    CSEnter(int i)
    {
        inside[i] = true;
        while (inside[J])
        {
            if (turn == J)
            {
                inside[i] = false;
                while (turn == J) continue;
                inside[i] = true;
            }
        }
    }

    CSExit(int i)
    {
        turn = J;
        inside[i] = false;
    }
Napkin analysis of Dekker's algorithm:
Safety: No process will enter its CS without setting its inside flag. Every process checks the other process's inside flag after setting its own. If both are set, the turn variable is used to allow only one process to proceed.
Liveness: The turn variable is only considered when both processes are using, or trying to use, the resource
Bounded waiting: The turn variable ensures alternate access to the resource when both are competing for access
Why does it work?
Safety: Suppose thread 0 is in the CS.
Then inside[0] is true.
If thread 1 was simultaneously trying to enter, then turn must
equal 0 and thread 1 waits
If thread 1 tries to enter now, it sets turn to 0 and waits
Liveness: Suppose thread 1 wants to enter and can't (stuck in while loop)
Thread 0 will eventually exit the CS
When inside[0] becomes false, thread 1 can enter
If thread 0 tries to reenter immediately, it sets turn=1
and hence will wait politely for thread 1 to go first!
Postscript
Dekker's algorithm does not provide strict alternation
Initially, a thread can enter the critical section without accessing turn

Dekker's algorithm will not work on many modern CPUs
CPUs execute their instructions in an out-of-order (OOO) fashion
The algorithm won't work on Symmetric MultiProcessor (SMP) CPUs with OOO execution unless memory barriers are used

Additionally, Dekker's algorithm can fail regardless of platform due to many optimizing compilers
The compiler may hoist reads of the flag out of the busy-wait loop, since the loop body never writes it
Further, the compiler may do the same for turn
Creating an infinite loop!
