
Chapter 6

Synchronization
Tools
Edited by
Ghada Mohammed S AlOsaimi
CS Lecturer, CCIS, IMISIU
Outline
 Background
 Critical-Section Problem
 Peterson’s Solution
 Hardware Support for Synchronization

[Diagram: operating-system overview showing Process Management, Memory Management, Storage Management, and Protection and Security surrounding Introduction and Overview]

2
Objectives
 Describe the critical-section problem and illustrate a race condition.

 Illustrate hardware solutions to the critical-section problem using memory barriers, compare-and-swap operations, and atomic variables.

3
Background – Why we need synchronization

 Processes can execute concurrently or in parallel


 A process may be interrupted at any time, partially completing its execution
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
 Process synchronization means sharing system resources among processes in such a way that concurrent access to shared data is handled in an orderly fashion, minimizing the chance of data inconsistency
 The need for synchronization tools is illustrated by the following problem:
 Suppose that we want to provide a solution to the producer-consumer problem that fills all the buffers
 We can do so by having an integer counter that keeps track of the number of full buffers
 Initially, counter is set to 0
 It is incremented by the producer after it produces a new buffer and decremented by the consumer after it consumes a buffer
4
Producer – Consumer Problem

Producer:

    while (true) {
        while (counter == BUFFER_SIZE)
            ;   /* do nothing */
        buffer[in] = next_produced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Consumer:

    while (true) {
        while (counter == 0)
            ;   /* do nothing */
        next_consumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

5
Race Condition
 This conflict is called a race condition: a condition that occurs when two or more threads or processes access shared data and try to change its value at the same time. Because of this, the values of the variables may be unpredictable and may vary depending on the timing of context switches between the processes.

 counter++ could be implemented in machine language as

    register1 = counter
    register1 = register1 + 1
    counter = register1

 counter-- could be implemented in machine language as

    register2 = counter
    register2 = register2 - 1
    counter = register2

 Concurrent execution of counter++ and counter-- preserves the order of each process's own lower-level statements, but the statements of the two processes may be interleaved

 Consider this execution interleaving with "counter = 5" initially:

    S0: producer executes register1 = counter        {register1 = 5}
    S1: producer executes register1 = register1 + 1  {register1 = 6}
    S2: consumer executes register2 = counter        {register2 = 5}
    S3: consumer executes register2 = register2 - 1  {register2 = 4}
    S4: producer executes counter = register1        {counter = 6}
    S5: consumer executes counter = register2        {counter = 4}

6
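 A minimal sketch (not from the slides) that exhibits this race using POSIX threads: two threads run counter++ and counter-- concurrently with no synchronization, so the final value of counter is often not 0.

    /* race.c - minimal sketch of the counter race, assuming POSIX threads.
       Compile with: gcc race.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    int counter = 0;                        /* shared data, unprotected */

    void *producer(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            counter++;                      /* non-atomic read-modify-write */
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            counter--;                      /* interleaves with the producer */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        printf("counter = %d\n", counter);  /* expected 0, but often not 0 */
        return 0;
    }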
Race Condition (cont.)

 Processes P0 and P1 are creating child processes using the fork() system call
 Race condition on the kernel variable next_available_pid, which represents the next available process identifier (pid)

 The same pid could be assigned to two different processes!


 Unless there is mutual exclusion
7
Critical-section Problem

 Consider a system consisting of n processes {P0, P1, ..., Pn-1}

 Each process has a segment of code called its critical section

 In its critical section, the process may be changing common variables, updating a table, writing to a file, etc.

 When one process is executing in its critical section, no other process may be executing in its critical section

 The critical-section problem is to design a protocol that the cooperating processes use to synchronize their activity

 Each process must request permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes its remainder section

8
Critical-section Problem (cont.)

 General structure of process Pi :

Entry Section – the part of the process that requests permission and decides whether the process may enter its critical section
Critical Section – allows one process at a time to enter and modify the shared data
Exit Section – allows other processes waiting in the entry section to enter their critical sections; a process that has finished its critical section leaves through this section
Remainder Section – all other parts of the code that are not in the critical, entry, or exit sections
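A minimal pseudocode sketch of this general structure (the comments correspond to the sections described above):

    while (true) {
        /* entry section: request permission to enter */
            critical section
        /* exit section: signal that the critical section is free */
            remainder section
    }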
9
Solution to Critical-section Problem

Three requirements must be satisfied by a solution to the critical-section problem:
1| Mutual Exclusion
If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
2| Progress
If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then
the selection of the processes that will enter the critical section next cannot be postponed indefinitely
3| Bounded Waiting
A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process
has made a request to enter its critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n processes

10
Critical-Section Handling in OS

 Race conditions can appear in kernel code, for example when maintaining memory-allocation structures or when handling interrupts
 In a single-core environment, the critical-section problem could be solved simply by preventing interrupts while shared data is being modified
 The current process would proceed without preemption
 No unexpected modification of shared data could occur
 Disabling interrupts is not a feasible solution in a multiprocessor environment
 It would require passing a message to all processors to delay entry into their critical sections
 Message passing consumes time and decreases system efficiency
 Two approaches to handling the critical-section problem in the OS:
 Preemptive kernels – allow a process to be preempted while it is running in kernel mode
 Must be carefully designed to ensure that shared kernel data are free from race conditions
 Non-preemptive kernels – a process runs until it exits kernel mode, blocks, or voluntarily yields control of the CPU
 Essentially free of race conditions, as only one process is active in the kernel at a time
11
Peterson’s Solution

 Software-based solution to critical-section problem


 Not guaranteed to work on modern architectures! (But it is a good algorithmic description of how to solve the problem)
 The solution is restricted to two processes, Pi and Pj, that alternate execution between their critical sections and remainder sections
 Assume that the load and store machine-language instructions are atomic; that is, they cannot be interrupted
 The two processes share two variables:
 int turn;
 boolean flag[2];
 The variable turn indicates whose turn it is to enter the critical section
 The flag array is used to indicate if a process is ready to enter the critical section
 flag[i] = true implies that process Pi is ready!

12
Algorithm for Process Pi and Pj
Process Pi:

    while (true) {
        flag[i] = true;                 /* entry section */
        turn = j;
        while (flag[j] && turn == j)
            ;

        /* critical section */

        flag[i] = false;                /* exit section */

        /* remainder section */
    }

Process Pj:

    while (true) {
        flag[j] = true;                 /* entry section */
        turn = i;
        while (flag[i] && turn == i)
            ;

        /* critical section */

        flag[j] = false;                /* exit section */

        /* remainder section */
    }

One possible interleaving:

    T0 : flag[i] = true;                  (process Pi)
    T1 : turn = j;                        (process Pi)
    T2 : flag[j] = true;                  (process Pj)
    T3 : turn = i;                        (process Pj)
    T4 : while (flag[i] && turn == i);    (process Pj)  loops, doing nothing but waiting for Pi
    T5 : while (flag[j] && turn == j);    (process Pi)  condition is false, so Pi proceeds
    T6 : Pi executes its critical section
    T7 : flag[i] = false;                 (process Pi)
    T8 : while (flag[j] && turn == j);    (process Pi)  loops, doing nothing but waiting for Pj
    T9 : while (flag[i] && turn == i);    (process Pj)  condition is now false, so Pj proceeds
    T10: Pj executes its critical section
    T11: flag[j] = false;                 (process Pj)
    T12: Pi executes its remainder section
    T13: Pj executes its remainder section

13
Peterson’s Solution (Cont.)

 It can be proved that the three CS requirements are met:

 Mutual exclusion is preserved,
   since Pi enters its CS only if either flag[j] == false or turn == i
 The progress requirement is satisfied
 The bounded-waiting requirement is met

14
Peterson’s Solution (Cont.)

 Although useful for demonstrating an algorithm, Peterson's solution is not guaranteed to work on modern architectures
 Understanding why it will not work is also useful for better understanding race conditions
 To improve performance, processors and/or compilers may reorder read and write operations that have no dependencies
 Example: balancing a checkbook
 Memory ordering describes the order of accesses to computer memory by a CPU; the term can refer either to
 the memory ordering generated by the compiler at compile time, or to
 the memory ordering generated by a CPU at runtime
 For a single-threaded application this is fine, as the result will always be the same
 For a multithreaded application, the reordering may produce inconsistent or unexpected results!

15
Peterson’s Solution (cont.)

 Two threads share the data:

    boolean flag = false;
    int x = 0;

 Thread 1 performs

    while (!flag)
        ;
    print x

 Thread 2 performs

    x = 100;
    flag = true;

 What is the expected output?
 100 is the expected output
 If the two instructions of Thread 2 are reordered

    flag = true;
    x = 100;

 If this occurs, the output may be 0!
 Effects of instruction reordering in Peterson's solution: if the stores in the entry section are reordered, both processes may be in their critical sections at the same time!
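 A memory barrier (one of the hardware tools covered below) forces such reorderings not to cross the barrier. As a minimal sketch, assuming C11 atomics (<stdatomic.h>) and the shared data above, the two-thread example can be repaired like this:

    /* Sketch: preventing the reordering with memory barriers, assuming C11 <stdatomic.h>. */
    #include <stdatomic.h>
    #include <stdio.h>

    atomic_bool flag = false;      /* shared */
    int x = 0;                     /* shared */

    void thread2(void) {
        x = 100;
        atomic_thread_fence(memory_order_release);                 /* barrier: x is written before flag */
        atomic_store_explicit(&flag, true, memory_order_relaxed);
    }

    void thread1(void) {
        while (!atomic_load_explicit(&flag, memory_order_relaxed))
            ;                                                       /* busy-wait for flag */
        atomic_thread_fence(memory_order_acquire);                  /* barrier: flag is read before x */
        printf("%d\n", x);                                          /* now always prints 100 */
    }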
16
Synchronization Hardware

 Many systems provide hardware support for implementing the critical section code
 Uniprocessors – could disable interrupts
 Currently running code would execute without preemption
 Generally, too inefficient on multiprocessor systems
 Operating systems using this approach are not broadly scalable
 We will look at three forms of hardware support
 Memory barriers
 Hardware instructions
 Atomic variables

17
Hardware Instructions

 In general, any solution to the critical-section problem requires a simple tool — a lock
 Race conditions are prevented by requiring that critical regions be protected by locks
 That is, a process must acquire a lock before entering a critical section; it releases the lock when it exits the critical section
 Special hardware instructions allow us either to test-and-modify the content of a word or to swap the contents of two words atomically (uninterruptibly)
 Test-and-Set instruction
 Compare-and-Swap instruction

Solution structure using a lock:

    while (true) {
        acquire lock
            critical section
        release lock
            remainder section
    }

18
test_and_set Instruction

Definition

    boolean test_and_set(boolean *target)
    {
        boolean rv = *target;
        *target = true;
        return rv;
    }

1. Executed atomically (no interruption)
2. Returns the original value of the passed parameter
3. Sets the new value of the passed parameter to true

Solution: define a shared boolean variable lock, initialized to false

    do {
        while (test_and_set(&lock))
            ;   /* do nothing */

        /* critical section */

        lock = false;

        /* remainder section */
    } while (true);

 If two test_and_set() instructions are executed simultaneously, each on a different CPU, they will be executed sequentially in some arbitrary order, so mutual exclusion is still achieved through the lock
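 For reference (a sketch, not part of the original slides), C11 exposes this hardware primitive through atomic_flag, which behaves like the boolean lock above:

    /* Sketch: a spinlock built on C11 atomic_flag, assuming <stdatomic.h> is available. */
    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;     /* shared, initially clear (false) */

    void enter_critical(void) {
        while (atomic_flag_test_and_set(&lock))
            ;                                /* spin until the flag was previously clear */
    }

    void exit_critical(void) {
        atomic_flag_clear(&lock);            /* lock = false */
    }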
19
test_and_set Instruction (cont.)

20
compare_and_swap Instruction

Definition

    int compare_and_swap(int *value, int expected, int new_value) {
        int temp = *value;

        if (*value == expected)
            *value = new_value;

        return temp;
    }

1. Executed atomically
2. Returns the original value of the passed parameter value
3. Sets the variable value to new_value, but only if *value == expected is true; that is, the swap takes place only under this condition

Solution: define a shared integer variable lock, initialized to 0

    while (true) {
        while (compare_and_swap(&lock, 0, 1) != 0)
            ;   /* do nothing */

        /* critical section */

        lock = 0;

        /* remainder section */
    }
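 Again for reference (a sketch using C11 atomics, not part of the original slides), the same lock can be written with atomic_compare_exchange_strong():

    /* Sketch: the lock above using C11 compare-and-swap, assuming <stdatomic.h>. */
    #include <stdatomic.h>

    atomic_int lock = 0;                       /* shared, 0 = unlocked, 1 = locked */

    void acquire(void) {
        int expected = 0;
        /* if lock == 0, atomically set it to 1; otherwise retry */
        while (!atomic_compare_exchange_strong(&lock, &expected, 1))
            expected = 0;                      /* CAS stored the current value in expected */
    }

    void release(void) {
        atomic_store(&lock, 0);                /* lock = 0 */
    }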
21
Hardware Instructions (cont.)

 It can be proved that two of the CS requirements are met:

 Mutual exclusion is preserved,
   since Pi enters its CS only if the lock is off (false in test_and_set, 0 in compare_and_swap)
 The progress requirement is satisfied
 The bounded-waiting requirement is not met

22
Atomic Variables

 Typically, instructions such as compare-and-swap are used as building blocks for other synchronization tools
 One tool is an atomic variable that provides atomic (uninterruptible) updates on basic data types such as integers and
booleans
 For example, the increment() operation on the atomic variable sequence ensures sequence is incremented without
interruption

increment(&sequence);

23
Atomic Variables (cont.)

 The increment() function can be implemented as follows:

    void increment(atomic_int *v)
    {
        int temp;

        do {
            temp = *v;
        }
        while (temp != compare_and_swap(v, temp, temp + 1));
    }

 where compare_and_swap is defined as before:

    int compare_and_swap(int *value, int expected, int new_value) {
        int temp = *value;

        if (*value == expected)
            *value = new_value;

        return temp;
    }
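 For comparison (a sketch, assuming C11 <stdatomic.h>), the standard library provides this operation directly as atomic_fetch_add():

    /* Sketch: the same uninterruptible increment using a C11 atomic variable. */
    #include <stdatomic.h>

    atomic_int sequence = 0;

    void increment(atomic_int *v) {
        atomic_fetch_add(v, 1);    /* atomic read-modify-write, no explicit retry loop needed */
    }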

24
