Process Synchronization
SYNCHRONIZATION
OBJECTIVES
- To understand process synchronization and its importance.
- To address the critical-section problem and its solutions.
- To examine classical synchronization problems.
- To explore synchronization tools and hardware solutions.
WHAT IS PROCESS SYNCHRONIZATION?
Process synchronization is about making sure that multiple processes or threads work together correctly when they share resources. It prevents problems such as race conditions and data inconsistency, and keeps everything running efficiently and correctly.
KEY CONCEPTS
Concurrency: Processes execute concurrently, and uncontrolled access to shared data may lead to data inconsistency.
Requirements for a critical-section solution:
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
Mutual Exclusion
- A principle in process synchronization that ensures only one process can access a shared resource or critical section at a time, preventing conflicts and maintaining data consistency.
PROGRESS
- Ensures that if no process is in the critical section and some processes want to enter, the decision of which process enters next is made in a finite amount of time, without unnecessary delay.
BOUNDED WAITING
- A condition in process synchronization that guarantees no process waits indefinitely to enter its critical section, ensuring fairness by limiting the number of times other processes can enter the critical section before it gets a turn.
Solution to Critical Section Problem
Peterson’s Algorithm:
- A two-process solution using the shared variables `turn` and `flag[]` (see the sketch after this list).
Synchronization Hardware:
- Atomic instructions like `test_and_set` and
`compare_and_swap`.
Locks:
- Mutex Locks
- Spinlocks
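A minimal sketch of Peterson's algorithm for two processes (indices 0 and 1), using the shared variables `flag[]` and `turn` from the slide. The function names `enter_region` and `leave_region` are illustrative, and `volatile` stands in for the sequentially consistent atomics a real implementation would need.

```c
#include <stdbool.h>

/* Shared between the two processes (sketch only; real code needs
   sequentially consistent atomics, not just volatile). */
volatile bool flag[2] = { false, false };  /* flag[i]: process i wants to enter */
volatile int  turn    = 0;                 /* which process must yield */

void enter_region(int i)            /* i is 0 or 1 */
{
    int other = 1 - i;
    flag[i] = true;                 /* announce interest */
    turn = other;                   /* politely give priority to the other process */
    while (flag[other] && turn == other)
        ;                           /* busy-wait until it is safe to enter */
}

void leave_region(int i)
{
    flag[i] = false;                /* let the other process proceed */
}
```

Process `i` calls `enter_region(i)` before its critical section and `leave_region(i)` after it; the algorithm gives mutual exclusion, progress, and bounded waiting for exactly two processes.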
LOCKS
Mutex Locks
A mutex lock ensures that only one task can use a shared resource at a time, avoiding conflicts. It locks the resource while it is being used and unlocks it when the task is finished, so others can access it.
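A minimal sketch of mutex usage with POSIX threads, assuming a shared counter as the protected resource; the variable and function names are illustrative (compile with `-pthread`).

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;             /* shared resource */

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* acquire: only one thread enters at a time */
        counter++;                   /* critical section */
        pthread_mutex_unlock(&lock); /* release so other threads can enter */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000 with the mutex; unpredictable without it */
    return 0;
}
```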
Spin Locks
A spinlock is a synchronization mechanism where a task continuously checks for a lock until it becomes available. It is efficient for short waits but can waste CPU resources if held for longer durations.
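A minimal spinlock sketch built on C11's `atomic_flag`, whose `atomic_flag_test_and_set` plays the role of the `test_and_set` instruction mentioned earlier; the type and function names are illustrative.

```c
#include <stdatomic.h>

typedef struct {
    atomic_flag held;                    /* clear = unlocked, set = locked */
} spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(spinlock_t *s)
{
    /* test_and_set atomically sets the flag and returns its old value;
       keep spinning while the old value says the lock was already held. */
    while (atomic_flag_test_and_set(&s->held))
        ;                                /* busy-wait (wastes CPU if held long) */
}

static void spin_unlock(spinlock_t *s)
{
    atomic_flag_clear(&s->held);         /* release the lock */
}
```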
SEMAPHORE
- A synchronization tool used to manage access to shared resources in concurrent programming.
Operations (defined in the sketch below):
- `wait()` (decrement)
- `signal()` (increment)
Types:
- Binary Semaphore
- Counting Semaphore
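A minimal sketch of the classic busy-waiting definitions of `wait()` and `signal()` on an integer semaphore value; real implementations make these operations atomic and usually block the caller instead of spinning.

```c
typedef struct {
    volatile int value;          /* number of available "permits" */
} semaphore;

/* wait(): if no permit is available, busy-wait, then decrement.
   (Sketch only: the test and decrement must be atomic in practice.) */
void wait_sem(semaphore *s)
{
    while (s->value <= 0)
        ;                        /* spin until a permit is available */
    s->value--;
}

/* signal(): increment, releasing one permit (or waking one waiter). */
void signal_sem(semaphore *s)
{
    s->value++;
}
```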
BINARY SEMAPHORE
- A semaphore that can take only two values, 0 and 1. It is used to implement mutual exclusion, allowing only one process to access a critical section or shared resource at a time.
COUNTING SEMAPHORE
- A synchronization mechanism that manages access to a resource by keeping a count of available instances, allowing multiple processes to use it up to a specified limit.
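A minimal sketch using a POSIX counting semaphore to limit access to three identical resource instances; the count of 3, the five tasks, and the function names are illustrative (compile with `-pthread`).

```c
#include <semaphore.h>
#include <pthread.h>

static sem_t pool;                       /* counts free resource instances */

static void *task(void *arg)
{
    sem_wait(&pool);                     /* wait(): take one instance (blocks at 0) */
    /* ... use one of the resource instances ... */
    sem_post(&pool);                     /* signal(): return the instance */
    return NULL;
}

int main(void)
{
    sem_init(&pool, 0, 3);               /* counting semaphore initialized to 3 */
    pthread_t t[5];
    for (int i = 0; i < 5; i++)          /* 5 tasks, but at most 3 hold an instance at once */
        pthread_create(&t[i], NULL, task, NULL);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}
```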
CLASSICAL SYNCHRONIZATION PROBLEM
- BOUNDED-BUFFER PROBLEM
- READERS-WRITERS PROBLEM
- DINING-PHILOSOPHERS PROBLEM
BOUNDED-BUFFER PROBLEM
- Concerns how producers and consumers share a fixed-size buffer. Producers add items to the buffer and consumers remove them; the challenge is to ensure that producers never add to a full buffer and consumers never remove from an empty one, while keeping everything running smoothly.
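A minimal sketch of the classic semaphore solution: `empty_slots` counts free slots, `full_slots` counts filled slots, and a mutex protects the buffer itself. The buffer size, item type, and function names are illustrative.

```c
#include <semaphore.h>
#include <pthread.h>

#define N 8                              /* illustrative buffer size */

static int buffer[N];
static int in = 0, out = 0;

/* Before use: sem_init(&empty_slots, 0, N); sem_init(&full_slots, 0, 0); */
static sem_t empty_slots;                /* free slots remaining */
static sem_t full_slots;                 /* items available */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void producer_put(int item)
{
    sem_wait(&empty_slots);              /* block if the buffer is full */
    pthread_mutex_lock(&mutex);
    buffer[in] = item;
    in = (in + 1) % N;
    pthread_mutex_unlock(&mutex);
    sem_post(&full_slots);               /* one more item available */
}

int consumer_get(void)
{
    sem_wait(&full_slots);               /* block if the buffer is empty */
    pthread_mutex_lock(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&mutex);
    sem_post(&empty_slots);              /* one more free slot */
    return item;
}
```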
READERS-WRITERS PROBLEM
- Concerns how readers and writers access shared data. Many readers may read the data at the same time, but only one writer may modify it at a time, and no reader may read while a writer is writing, to avoid inconsistent results.
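A minimal sketch of the classic readers-preference solution: a counter of active readers, a mutex protecting that counter, and a `write_lock` semaphore held either by a writer or by the first reader on behalf of all readers. Names are illustrative, and this variant can starve writers.

```c
#include <semaphore.h>
#include <pthread.h>

static int read_count = 0;                       /* number of active readers */
static pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER;
static sem_t write_lock;                         /* sem_init(&write_lock, 0, 1) before use */

void reader(void)
{
    pthread_mutex_lock(&count_mutex);
    if (++read_count == 1)
        sem_wait(&write_lock);                   /* first reader blocks writers */
    pthread_mutex_unlock(&count_mutex);

    /* ... read the shared data (many readers may be here at once) ... */

    pthread_mutex_lock(&count_mutex);
    if (--read_count == 0)
        sem_post(&write_lock);                   /* last reader lets writers in */
    pthread_mutex_unlock(&count_mutex);
}

void writer(void)
{
    sem_wait(&write_lock);                       /* exclusive access */
    /* ... modify the shared data ... */
    sem_post(&write_lock);
}
```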
DINING-PHILOSOPHERS PROBLEM
- A classic example of managing shared resources. Philosophers sit at a table with a limited number of forks, and each philosopher needs two forks to eat. The challenge is to ensure they can all eat without getting stuck waiting forever (deadlock) or starving while others eat.
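A minimal sketch of one standard deadlock-avoidance strategy (not the only one): every philosopher picks up the lower-numbered fork first, so a circular wait cannot form. The number of philosophers and the function name are illustrative; each `fork_lock[i]` must be initialized with `pthread_mutex_init` before use.

```c
#include <pthread.h>

#define PHILOSOPHERS 5

static pthread_mutex_t fork_lock[PHILOSOPHERS];   /* one mutex per fork */

void philosopher_eats(int i)
{
    int left  = i;
    int right = (i + 1) % PHILOSOPHERS;
    int first  = left < right ? left : right;     /* always grab the lower-numbered */
    int second = left < right ? right : left;     /* fork first, breaking circular wait */

    pthread_mutex_lock(&fork_lock[first]);
    pthread_mutex_lock(&fork_lock[second]);

    /* ... eat: both forks are held ... */

    pthread_mutex_unlock(&fork_lock[second]);
    pthread_mutex_unlock(&fork_lock[first]);
}
```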
HIGH-LEVEL SYNCHRONIZATION TOOLS
Transactional Memory
- Ensures reliable task completion in parallel programming by avoiding conflicts and simplifying concurrency.
OpenMP
- A set of compiler directives and an API that support parallel programming.
Functional programming languages
- Offer a different paradigm than procedural languages in that they do not maintain state.
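A minimal sketch of OpenMP's directive style in C: a single pragma parallelizes the loop, and the `reduction` clause gives each thread a private partial sum that is combined at the end, avoiding a data race. The sum being computed is illustrative; compile with an OpenMP-capable compiler (e.g. `gcc -fopenmp`).

```c
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* The directive splits the loop iterations across threads; the
       reduction clause safely combines each thread's partial sum. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (i + 1);

    printf("harmonic sum = %f\n", sum);
    return 0;
}
```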
THANK YOU FOR LISTENING!