
Process Synchronisation
Part-2
3. Language support:

• Monitors are abstract data types for defining shared resources.
• A monitor consists of condition variables and procedures combined together in a special kind of module or package.
• Procedures are called by processes to access the resource.



Monitors
• The execution of a monitor obeys the following constraints:
• Only one process can be active within the monitor at a time.
• While a process is active within the monitor, processes trying to enter the monitor are placed in the monitor's entry queue.
• Procedures of a monitor can only access data local to the monitor; they cannot access outside variables.
• Variables and data local to the monitor cannot be directly accessed from outside the monitor.



Monitors
• To synchronize tasks within the monitor, condition variables are used to delay processes executing in the monitor.
• Two operations are performed on condition variables, each with its own queue:

1. wait() – suspends the caller: the caller relinquishes control of the monitor and is placed on the queue of the condition variable.

2. signal() – causes exactly one process waiting on the condition to immediately regain control of the monitor; the signalling process relinquishes the monitor and is placed on an urgent queue.

When a process relinquishes control of the monitor, processes in the urgent queue have a higher priority than processes trying to enter the monitor.

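As a concrete illustration of the constraints above, here is a minimal sketch of a monitor-style bounded buffer in Python. `threading.Condition` plays the role of a monitor condition variable (note that Python gives Mesa-style signalling, where the awakened process re-tests its condition, rather than the Hoare-style urgent-queue handover described above); the class and names are our own, not from the textbook.

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: the shared lock keeps at most one
    thread active 'inside the monitor' at a time."""
    def __init__(self, size):
        self.buf, self.size = [], size
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:                    # enter the monitor
            while len(self.buf) == self.size:
                self.not_full.wait()       # relinquish the monitor and sleep
            self.buf.append(item)
            self.not_empty.notify()        # signal one waiting consumer

    def get(self):
        with self.lock:
            while not self.buf:
                self.not_empty.wait()
            item = self.buf.pop(0)
            self.not_full.notify()         # signal one waiting producer
            return item
```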


Monitors: Advantages
• Flexibility in scheduling processes.

Disadvantages
• Only one process can be active inside a monitor: no concurrency.
• It is the programmer's responsibility to ensure proper synchronization.
• Nested monitor calls can lead to deadlocks.
• Responsibility for program validity shifts to programmers, making correctness difficult to validate.
Serializers

• Serializers are a mechanism proposed in 1979 by Hewitt and Atkinson to overcome some of the monitors' shortcomings; in particular, they allow concurrency inside.
• Their basic structure is similar to that of monitors.

Definition:
Serializers are abstract data types defined by a set of procedures (or operations); they can encapsulate shared resources to form a protected resource object.
• A more automatic, high-level mechanism.
• Automatic signalling: the condition for resuming the execution of a waiting process is explicitly stated when the process waits, so no explicit signal is needed.

Concurrency?
• As in a monitor, only one process can have access to (control over) a serializer at a given time.
• But within the procedures of a serializer there are certain regions in which multiple processes can be active.
• These regions are known as hollow regions.
• As soon as a process enters a hollow region, it releases the serializer so that some other process can access it.
• Thus concurrency is achieved in the hollow regions of a serializer.
• Note that in a hollow region the process only releases the serializer; it does not exit it, so the process can regain control when it leaves the hollow region.
Serializers – Hollow region
• A hollow region in a procedure is specified by a join-crowd operation. The syntax of the join-crowd command is

join-crowd (<crowd>) then <body> end

• On invocation of a join-crowd operation, possession of the serializer is released, the identity of the invoking process is recorded in the crowd, and the list of statements in the body is executed.
• When the process completes execution of the body, a leave-crowd operation is executed.
• As a result of the leave-crowd operation, the process regains control of the serializer.
• Note that if the serializer is currently in the possession of some other process, the process executing the leave-crowd operation waits in a queue.
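To make the join-crowd/leave-crowd behaviour concrete, here is a minimal sketch (our own construction, not textbook code) of a serializer whose join_crowd releases possession around the body and regains it afterwards:

```python
import threading

class Serializer:
    """Sketch: _lock represents possession of the serializer; join_crowd
    releases it around the body (the hollow region) and regains it after."""
    def __init__(self):
        self._lock = threading.Lock()
        self.crowd = set()               # identities of processes in the crowd

    def enter(self):                     # gain possession of the serializer
        self._lock.acquire()

    def exit(self):                      # give up possession entirely
        self._lock.release()

    def join_crowd(self, body):
        # Caller must hold the serializer. Record its identity, release
        # the serializer (hollow region), run the body, then regain it.
        me = threading.current_thread().name
        self.crowd.add(me)
        self._lock.release()             # hollow region begins
        try:
            return body()                # concurrent with other crowd members
        finally:
            self._lock.acquire()         # leave-crowd: may wait here if the
            self.crowd.discard(me)       # serializer is held by someone else
```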
(Figure: structure of a serializer procedure.)
Serializers – Queue variables
• Whenever a process requests to gain or regain possession of a serializer, certain conditions are checked.
• The process is held in a waiting queue until the condition is true. This is accomplished using an enqueue operation.
• The syntax of the enqueue command is

enqueue (<priority>, <queue-name>) until (<condition>)

• The queue name specifies the queue in which the process is to be held, and the priority option specifies the priority of the process to be delayed.
OPERATION OF SERIALIZER
As shown in the figure, every operation in a serializer can be identified as an event.
• An entry event is a request for the serializer, at which a condition is checked (e.g., is the serializer free for the process to enter?).
• If the condition is true, the process gains control of the serializer.
• Then, before the process accesses the resource, a guarantee event is executed.
• The guarantee event results in an established event if its condition is true; otherwise the process releases control of the serializer and waits in its queue.
• When the resource is available, the process enters the crowd (join-crowd event) and accesses the resource. After completing its work with the resource, the process leaves the crowd (leave-crowd event) and regains control of the serializer (if the serializer is available; otherwise it has to wait).
• Serializers also allow a timeout event, which prevents processes from waiting on a condition longer than a specified period.
Monitors vs Serializers

• Serializers have several advantages over monitors:
• Only one process can execute inside a monitor. Likewise, only one process can have possession of a serializer at a time, but in the hollow regions the process releases control, allowing several processes to execute concurrently inside the hollow regions of the serializer.
• Nesting of monitors can cause deadlock: if the inner process is waiting, the outer one is tied up. Nesting of serializers is allowed in the hollow regions.
• In monitors, the conditions for exiting a wait state are not clearly stated. In serializers, the conditions for an event to occur are clearly stated.
• Monitors use explicit signalling; serializers use implicit (automatic) signalling.
Drawbacks

• Serializers are more complex and hence less efficient.
• Automatic signalling increases overhead, since whenever a resource or the serializer is released, all the conditions of the waiting processes must be checked.
PATH EXPRESSION
• What is a path expression?
• It is a declarative specification of the synchronization between procedures (see the sketch below).
• Code generated automatically from the path expression uses semaphores to enforce the specified synchronization.
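As a hedged illustration of how generated code might enforce a simple sequencing path expression such as `path produce; consume end` (every consume must be preceded by a matching produce), a counting semaphore suffices; the names below are hypothetical, not from the textbook:

```python
import threading

# Hypothetical generated code for the sequencing path expression
#   path produce ; consume end
_produced = threading.Semaphore(0)   # counts produces not yet consumed

def produce():
    # ... body of produce ...
    _produced.release()              # enable one consume

def consume():
    _produced.acquire()              # block until a produce has occurred
    # ... body of consume ...
```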
(Figures: path expression syntax and operators.)
OTHER SYNCHRONISATION PROBLEMS
• One problem already studied: mutual exclusion.
• Other classical problems (see the sketch below):
• 1. Producer-Consumer Problem
• 2. Readers-Writers Problem
• 3. Dining Philosophers Problem
(Figures: textbook solutions to the producer-consumer, readers-writers, and dining philosophers problems.)
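For instance, the classic first-readers-preference solution to the readers-writers problem can be written with two semaphores. A minimal sketch of the standard algorithm, with our own naming:

```python
import threading

mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # writers and the first reader compete for this
read_count = 0

def reader(read_data):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:          # first reader locks out writers
        wrt.acquire()
    mutex.release()
    read_data()                  # many readers may run here concurrently
    mutex.acquire()
    read_count -= 1
    if read_count == 0:          # last reader readmits writers
        wrt.release()
    mutex.release()

def writer(write_data):
    wrt.acquire()
    write_data()                 # exclusive access
    wrt.release()
```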
Design issues of distributed operating systems
Module-1
Distributed System
A distributed system is a collection of autonomous computers connected by a communication network.

Properties:
1. Each computer has its own memory and clock.
2. Computers communicate with each other by exchanging messages.
System Architectures
• Minicomputer model
• Workstation model
• Processor pool model
Minicomputer model
• It consists of a few minicomputers
interconnected by a communication
network.
• Each minicomputer usually has several
interactive terminals attached to it.
• Each user is logged on to one specific
minicomputer, with remote access to
other minicomputers.
• Advantage: Resource sharing
Example:
The early ARPAnet is an example of a distributed
computing system based on the minicomputer model.
Workstation model
• It consists of several workstations interconnected by a
communication network.
• In this model, a user logs onto one of the workstations
called his or her “home” workstation and submits jobs for
execution.
• Normal computation activities required by the user's
processes are performed at the user's home workstation.
• When the system finds that the user's workstation does not
have sufficient processing power to execute the processes of
the submitted jobs efficiently, it transfers one or more of the
processes from the user's workstation to some other
workstation that is currently idle, gets the processes executed
there, and finally returns the result of execution to the user's
workstation.
Examples:
Athena and Andrew
Processor pool model
• Processors are pooled together to be shared by the users as
needed.
• In this model, there is no concept of home machine.
• Users login to the system as a whole through the terminals
and submit tasks.
• The run server allocates an appropriate number of processors
to each task.
• When the computation is completed, the processors are
returned to the pool for use by other users.

Examples:
Amoeba and Cambridge distributed computing systems
Advantages of distributed systems
• Main advantage: a better price/performance ratio.

• Other advantages:
• 1. Resource sharing: a computer can request resources (s/w or h/w) available elsewhere in the system.
• 2. Enhanced performance: shorter response times and higher system throughput.
• 3. Improved reliability and availability: the system is fault tolerant; failure of a single node does not affect the whole system.
• 4. Modular expandability: new resources can be added without replacing the existing resources.
Operating systems for distributed systems
• Network operating systems
• Distributed operating systems

• Three features are used to differentiate the two operating systems above:
• 1. System image
• 2. Autonomy
• 3. Fault tolerance capability
Difference
Network OS:
• Users are aware of the existence of multiple computers.
• Each computer functions independently and runs its own OS; resources are managed locally.
• No fault tolerance capability.

Distributed OS:
• Hides the existence of multiple computers and provides a single system image to its users (transparency).
• There is a single system-wide OS, and each computer runs a part of it or an identical copy of it; the computers work in close cooperation with each other, and resources are managed globally.
• Fault tolerant: users can complete their tasks with only a small loss in performance.
Issues in Distributed Operating Systems
1. Global knowledge
2. Naming
3. Scalability
4. Compatibility
5. Process synchronisation
6. Resource management
7. Security
8. Structure of the OS
9. Client-server computing model
1. Global knowledge
• Issue:
• It is practically impossible to collect up-to-date information about the global state of a distributed system.
• Reason:
• Lack of a global clock and lack of global memory.
• The design should incorporate:
• Temporal ordering of events, scheduling of jobs based on arrival times, etc.
• Techniques are needed to solve this problem.
2. Naming
A good naming system for a distributed system should have the features described below.
• 1. Location transparency
• means that the name of an object should not reveal any hint of the physical location of the object.
• 2. Location independence
• means that the name of an object need not be changed when the object's location changes.
• Furthermore, a user should be able to access an object by the same name irrespective of the node from which he or she accesses it.
Naming contd...
• A name server is a process that maintains information about named objects; it binds an object's name to the object's location.
• In distributed systems, name servers/lookup tables may be replicated and stored in different locations for reliability.
• A location-independent naming system must support a dynamic mapping scheme so that it can map the same object name to different locations at two different instants of time.
• Two drawbacks of replication are:
1. it requires more storage;
2. synchronisation requirements need to be met: when one entry is updated or deleted, the change must be made to all its copies.
3. Scalability
• Scalability refers to the capability of a system to adapt to increased service load.
• It is inevitable that a distributed system will grow with time, since it is very common to add new machines or an entire subnetwork to the system.
• A distributed operating system should be designed to easily cope with the growth of nodes and users in the system.
• That is, such growth should not cause serious disruption of service or significant loss of performance to users.
• Design suggestions: avoid centralised entities and centralised algorithms, and perform most computation on the client workstation itself.
4. Compatibility
• Compatibility refers to the notion of interoperability among the resources in a system.
• Three different levels exist:
1. Binary level
• All processors execute the same binary instruction set, even though the processors may differ in performance.
• Advantage: system development is easy. However, this level is not recommended for building distributed systems because it does not support heterogeneous systems.
Compatibility contd...
2. Execution level
• The same source code can be compiled and executed on any computer in the system.

3. Protocol level
• Requires all system components to support a common set of protocols.
• Advantage: individual computers can run different operating systems without sacrificing interoperability.
5. Process Synchronisation
• Synchronisation of processes is difficult because there is no shared memory.
• But it is essential when different processes try to access a shared resource (e.g., a file server).
• For correctness, it is necessary that a shared resource be accessed by a single process at a time.
• This problem is known as the mutual exclusion problem.
6. Resource Management
• Resource management is concerned with making both local and remote resources available to users.
• Users should be able to access remote resources as easily as they can access local resources.
• In other words, the specific location of resources should be hidden from users.
• This can be done in different ways, described below.
Resource Management contd...
1. Data migration
• Data is brought to the location where it is needed.
• The data may be a file or the contents of physical memory.
• If any changes are made, the original location has to be updated.
2. Computation migration
• The computation migrates to another location.
• The mechanism used is RPC (Remote Procedure Call).
3. Distributed scheduling
• Processes are transferred from one location to another by the distributed OS.
• This is desirable when the computer where a process originated is overloaded or does not have the resources required by the process.
7. Security
• Two issues must be considered in the design of security:
1. Authentication
• The process of guaranteeing that an entity is what it claims to be.
2. Authorisation
• The process of deciding what privileges an entity has and making only those privileges available.
8. Structuring
• Structuring defines how the various parts of the OS are organised.
• Different structures:
• 1. Collective kernel structure (microkernel)
• OS services such as distributed memory management, scheduling, name services, RPC, and time management are implemented as independent processes.
• The nucleus, also called the microkernel, supports the interaction between these processes.
• Other functions of the kernel include task management, processor management, etc.
• The microkernel runs on all computers in the distributed system.
• The other processes may or may not run on a given computer.
• Examples: Mach, Chorus
Structuring contd...
2. Object-oriented OS
• In the collective kernel structure, services are implemented as processes, whereas in an object-oriented OS, system services are implemented as objects.
• Each object encapsulates a data structure and defines a set of operations on that data structure.
• Examples: Amoeba, Clouds
9. Client-server model
• Processes are categorised as clients and servers.
• A process that needs a service (the client) sends a request to a server.
• Servers carry out the request, and the result may be sent back to the client.
• In systems with multiple servers, the locations of the servers and the communication among them are transparent to the clients.
Communication Networks and Primitives
• Reading Assignment:

Communication Networks
Communication models and the primitives
used for communication
There are two communication models that provide communication
primitives.
1. Message passing model
2. Remote Procedure call
1. Message Passing model
• A form of communication between two processes.
• A physical copy of the message is sent from one process to the other.
• Two primitives:
1. send(msg, destination)
2. receive(source, buf)
These two primitives can each be:
1. Blocking vs non-blocking
2. Synchronous vs asynchronous
Synchronization
• Message passing may be either blocking or non-blocking.
• Blocking is considered synchronous:
• Blocking send: the sender is blocked until the message is received.
• Blocking receive: the receiver is blocked until a message is available.
• Non-blocking is considered asynchronous:
• Non-blocking send: the sender sends the message and continues.
• Non-blocking receive: the receiver receives either a valid message or a null message.
• Different combinations are possible.
• If both send and receive are blocking, we have a rendezvous.
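A small sketch of the blocking vs non-blocking distinction, using Python's standard queue module as the mailbox (our own example, not from the textbook):

```python
import queue
import threading

mailbox = queue.Queue(maxsize=1)     # a one-slot mailbox between two threads

def blocking_receiver():
    msg = mailbox.get()              # blocking receive: waits for a message
    print("received:", msg)

def nonblocking_receiver():
    try:
        msg = mailbox.get_nowait()   # non-blocking receive
    except queue.Empty:
        msg = None                   # null message
    print("received:", msg)

threading.Thread(target=blocking_receiver).start()
mailbox.put("hello")                 # blocks only if the mailbox is full
```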
2. RPC
• RPC – Remote Procedure Call

Basic RPC Operation
Note: the communication between caller and callee can be hidden by using the procedure-call mechanism.

RPC Implementation
(Figure: steps involved in doing a remote "foo" operation.)
Remote Procedure Calls (1)
A remote procedure call occurs in the following
steps:
1.The client procedure calls the client stub in the normal
way.
2.The client stub builds a message and calls the local
operating system.
3.The client’s OS sends the message to the remote OS.
4.The remote OS gives the message to the server stub.
5.The server stub unpacks the parameters and calls the
server.
Continued ...
(Tanenbaum & Van Steen, Distributed Systems: Principles and Paradigms, 2e, © 2007 Prentice-Hall, ISBN 0-13-239227-5.)
Remote Procedure Calls (2)
A remote procedure call occurs in the following
steps (continued):
6.The server does the work and returns the result to the
stub.
7.The server stub packs it in a message and calls its local
OS.
8.The server’s OS sends the message to the client’s OS.
9.The client’s OS gives the message to the client stub.
10.The stub unpacks the result and returns to the client.
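The ten steps can be seen in miniature with Python's standard xmlrpc library, where the ServerProxy object plays the client stub and SimpleXMLRPCServer plays the server stub (a sketch of our own; the procedure name is arbitrary):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client

def add(a, b):                        # the remote procedure: "the server does the work"
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")  # the server stub dispatches to add()
threading.Thread(target=server.serve_forever, daemon=True).start()

# The proxy is the client stub: the call below looks like a local call,
# but the arguments are packed into a message, sent to the server's OS,
# unpacked by the server stub, executed, and the result travels back.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))                # prints 5
```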
Establishing Communication for RPC.
Clock Synchronization
In distributed systems, there is no global clock.
• How do we synchronize clocks with real-world time?
• How do we synchronize clocks with each other?
• How are the events in a distributed system ordered globally?
Lamport's Logical Clock
• Lamport proposed a scheme to order events in a distributed system using a logical clock concept.
• For partial ordering of events, Lamport defined a new relation called happened-before and introduced the concept of logical clocks for ordering events based on the happened-before relation.
The Happened-Before Relationship

The happened-before relation → on the set of events in a distributed system is the smallest relation satisfying:
• If a and b are two events in the same process, and a comes before b, then a → b (a happened before b).
• If a is the sending of a message and b is the receipt of that message, then a → b.
• If a → b and b → c, then a → c (transitivity).
Note: if two events x and y happen in different processes that do not exchange messages (directly or indirectly), then they are said to be concurrent.
Note: this introduces a partial ordering of the events in a system with concurrently operating processes.


Logical Clocks Concept

Problem:
To determine that an event a happened before an event b, either a common clock or a set of perfectly synchronized clocks is needed.
Lamport [1978] provided a solution to this problem by introducing the concept of logical clocks.
The logical-clocks concept is a way to associate a timestamp (which may be simply a number, independent of any clock time) with each system event so that events related to each other by the happened-before relation (directly or indirectly) can be properly ordered in that sequence.
Solution: attach a timestamp C(e) to each event e, satisfying the following properties:
P1: If a and b are two events in the same process, and a → b, then we demand that C(a) < C(b).
P2: If a corresponds to sending a message m, and b to the receipt of that message, then also C(a) < C(b).
Implementation of Logical Clocks using counters

Each process Pi maintains a local counter Ci and adjusts this counter according to the following rules:
(1) For any two successive events that take place within Pi, Ci is incremented by 1.
(2) Each time a message m is sent by process Pi, the message receives a timestamp Tm = Ci.
(3) Whenever a message m is received by a process Pj, Pj adjusts its local counter Cj to Cj := max(Cj, Tm) + 1.

This is called Lamport's Algorithm.
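The three rules transcribe directly into Python (our own sketch; the (C, pid) pair returned by timestamp() anticipates the total ordering described below):

```python
class LamportClock:
    """Lamport's algorithm: one local counter per process."""
    def __init__(self, pid):
        self.pid = pid
        self.c = 0

    def tick(self):                  # rule (1): any local event
        self.c += 1
        return self.c

    def send(self):                  # rule (2): sending is an event;
        self.c += 1                  # the message carries Tm = Ci
        return self.c

    def receive(self, tm):           # rule (3): Cj := max(Cj, Tm) + 1
        self.c = max(self.c, tm) + 1
        return self.c

    def timestamp(self):
        return (self.c, self.pid)    # pair used for total ordering

# Example: P2 (counter 1) receives a message timestamped 3 from P1.
p2 = LamportClock(pid=2)
p2.tick()
print(p2.receive(3))                 # prints 4
```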


Logical Clocks – Implementation using physical clocks

(Fig. 5-7: (a) Three processes, each with its own clock; the clocks run at different rates. (b) Lamport's algorithm corrects the clocks.)

(Timing diagram: three processes P1, P2, P3 with events a, b, c, d on P1; e, f, g, h, i on P2; and j, k, l on P3, connected by messages.)

• Exercise: assign Lamport logical clock values to all the events in the timing diagram above, assuming each process's local clock is initially 0.
(Solution, read from the worked diagram: on P1, a=1, b=2, c=3, d=4; on P2, e=1, f=3, g=4, h=5, i=6; on P3, j=1, k=2, l=3.)
From the above timing diagram, what can we say about the following pairs of events?
• between a and b: a → b
• between b and f: b → f
• between e and k: concurrent
• between c and h: concurrent
• between k and h: k → h
Total Ordering with Logical Clocks

Problem: it can still occur that two events are assigned the same timestamp. This is avoided by attaching a process number to each event:
Pi timestamps event e with the pair (Ci(e), i).
Then (Ci(a), i) happened before (Cj(b), j) if and only if:
1: Ci(a) < Cj(b); or
2: Ci(a) = Cj(b) and i < j.
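In Python, tuple comparison implements exactly these two rules, so the (counter, pid) pairs from the LamportClock sketch above can be compared directly:

```python
# Equal counters: the lower process id wins; otherwise the counter decides.
assert (3, 1) < (3, 2)
assert (2, 9) < (3, 1)
```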
Limitations of Lamport's logical clock
• C(a) < C(b) does not imply a → b: from the timestamps alone we cannot tell whether two events are causally related or concurrent. Vector clocks remove this limitation.
Causal Ordering of Messages

• Message delivery is said to be causal if the order in which messages are received is consistent with the order in which they are sent. That is, if Send(M1) → Send(M2), then every recipient of both messages receives M1 before M2.
• Basic idea: buffer each message until the message that immediately precedes it has been delivered.
• This makes use of vector clocks.
• In this example, sender S1 sends message m1 to receivers R1, R2, and R3, and sender S2 sends message m2 to receivers R2 and R3.
• On receiving m1, receiver R1 inspects it, creates a new message m3, and sends m3 to R2 and R3.
• Note that the event of sending m3 is causally related to the event of sending m1, because the contents of m3 might have been derived in part from m1; hence the two messages must be delivered to both R2 and R3 in the proper order, m1 before m3.
• Also note that since m2 is not causally related to either m1 or m3, m2 can be delivered at any time to R2 and R3, irrespective of m1 and m3.
Example: a vector timestamp (3, 2, 5, 1) means that, until now, A has sent three messages, B has sent two messages, C has sent five messages, and D has sent one message to other processes.
Birman-Schiper-Stephenson Causal Message Ordering
• Before Pi broadcasts m, it increments VTPi[i] and timestamps m. Thus VTPi[i] − 1 is the number of messages from Pi preceding m.
• When Pj (j ≠ i) receives message m with timestamp VTm from Pi, delivery is delayed locally until both of the following are satisfied:
• VTPj[i] = VTm[i] − 1
• VTPj[k] ≥ VTm[k] for all k ≠ i
• Delayed messages are queued at each process, sorted by their vector timestamps, with concurrent messages ordered by time of receipt.
• When m is delivered to Pj, VTPj is updated as usual for vector clocks.
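The delivery test and the clock update translate directly into code; a minimal sketch with our own function names, representing vector timestamps as Python lists:

```python
def can_deliver(vt_pj, vt_m, i):
    """Birman-Schiper-Stephenson test at Pj for a broadcast m from Pi."""
    if vt_pj[i] != vt_m[i] - 1:        # m must be the next message from Pi
        return False
    return all(vt_pj[k] >= vt_m[k]     # Pj must have seen everything Pi had
               for k in range(len(vt_m)) if k != i)

def deliver(vt_pj, vt_m):
    """Update Pj's vector clock when m is delivered (componentwise max)."""
    return [max(a, b) for a, b in zip(vt_pj, vt_m)]

# Example: P2's clock is [1, 0, 0]; m from P1 (i=0) has timestamp [2, 0, 0].
print(can_deliver([1, 0, 0], [2, 0, 0], 0))   # True: deliver now
print(can_deliver([0, 0, 0], [2, 0, 0], 0))   # False: buffer until [1, 0, 0]
```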
