
DAA- Unit VI

Case Studies
Outline
Distributed Operating System
Buddy Memory Management
Embedded Systems
Internet of Things (IoT)
Software Engineering

Distributed Operating Systems
Bully Algorithm
All Pair Shortest Path (Floyd-Warshall algorithm)
Process Termination (Dijkstra-Scholten)

Election Algorithms
Many distributed algorithms, such as mutual exclusion and deadlock detection, require a coordinator process.
When the coordinator process fails, the distributed group of processes must execute an election algorithm to determine a new coordinator process.
These algorithms assume that each active process has a unique priority id.

The Bully Algorithm
When any process P notices that the coordinator is no longer responding, it initiates an election:
P sends an election message to all processes with higher id numbers.
If no one responds, P wins the election and becomes coordinator.
If a higher-numbered process responds, it takes over. Process P's job is done.

The Bully Algorithm
At any moment, a process can receive an election message from one of its lower-numbered colleagues.
The receiver sends an OK back to the sender and conducts its own election.
Eventually only the bully process remains. The bully announces victory to all processes in the distributed group.

Bully Algorithm Example
Process 4 notices that process 7 (the coordinator) is down.
Process 4 holds an election.
Processes 5 and 6 respond, telling 4 to stop.
Now 5 and 6 each hold an election.

Bully Algorithm Example
Process 6 tells process 5 to stop.
Process 6 (the bully) wins and tells everyone.
If process 7 comes back up, it starts an election again.

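The exchange in this example can be sketched in code. The following is a minimal, single-threaded Python simulation of the Bully Algorithm, not an implementation from the slides: the Process class, the direct method calls standing in for network messages, and treating a dead process's silence as a timeout are all simplifying assumptions.

# Minimal simulation of the Bully Algorithm. A real system would use
# network messages and timeouts; here a "down" process simply never
# answers, which stands in for a timed-out election message.
class Process:
    def __init__(self, pid, group):
        self.pid = pid
        self.group = group            # shared dict: pid -> Process
        self.alive = True
        self.coordinator = None

    def start_election(self):
        if not self.alive:
            return
        higher = [p for pid, p in self.group.items()
                  if pid > self.pid and p.alive]
        if not higher:
            self.announce_victory()   # nobody higher answered: we win
            return
        for p in higher:              # send ELECTION to all higher ids
            p.on_election(self)       # each replies by taking over

    def on_election(self, sender):
        # The OK reply is implicit: taking over the election is the "OK".
        self.start_election()

    def announce_victory(self):
        for p in self.group.values():
            if p.alive:
                p.coordinator = self.pid

# Usage mirroring the slides: coordinator 7 is down, process 4 notices
# and starts an election; process 6 becomes the new coordinator.
group = {}
for pid in range(1, 8):
    group[pid] = Process(pid, group)
group[7].alive = False
group[4].start_election()
print(group[4].coordinator)           # -> 6
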
Distributed Operating Systems
Bully Algorithm
All Pair Shortest Path (Floyd-Warshall algorithm)
Process Termination (Dijkstra-Scholten)

All Pair Shortest Path Algorithm
Algorithm AllPairShortestPath(W, A)
{
    for i = 1 to n do
        for j = 1 to n do
            A[i, j] = W[i, j];
    for k = 1 to n do
        for i = 1 to n do
            for j = 1 to n do
                A[i, j] = min(A[i, j], A[i, k] + A[k, j]);
}

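For reference, here is a minimal runnable Python version of the same algorithm; it is a sketch, not code from the slides. The function name floyd_warshall, the 0-based indexing, and the use of float('inf') for ∞ are assumptions made for illustration, while the update rule is the one in the pseudocode above.

# Floyd-Warshall all-pairs shortest paths. After the k-th iteration of
# the outer loop, A[i][j] is the length of the shortest i -> j path that
# uses only intermediate vertices 0 .. k-1.
INF = float('inf')

def floyd_warshall(W):
    n = len(W)
    A = [row[:] for row in W]                      # A(0) = W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A

# Weight matrix of the 5-vertex example that follows (D(0) = W):
W = [[0,   3,   8,   INF, -4],
     [INF, 0,   INF, 1,   7],
     [INF, 4,   0,   INF, INF],
     [2,   INF, -5,  0,   INF],
     [INF, INF, INF, 6,   0]]

for row in floyd_warshall(W):
    print(row)
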
Example
(Figure: weighted directed graph on vertices 1-5 whose edge weights give the matrix below.)
D(0) = W
 0   3   8   ∞  -4
 ∞   0   ∞   1   7
 ∞   4   0   ∞   ∞
 2   ∞  -5   0   ∞
 ∞   ∞   ∞   6   0

Example
D(1)
 0   3   8   ∞  -4
 ∞   0   ∞   1   7
 ∞   4   0   ∞   ∞
 2   5  -5   0  -2
 ∞   ∞   ∞   6   0

Example
D(2)
 0   3   8   4  -4
 ∞   0   ∞   1   7
 ∞   4   0   5  11
 2   5  -5   0  -2
 ∞   ∞   ∞   6   0

Example
D(3)
 0   3   8   4  -4
 ∞   0   ∞   1   7
 ∞   4   0   5  11
 2  -1  -5   0  -2
 ∞   ∞   ∞   6   0

Example
D(4)
 0   3  -1   4  -4
 3   0  -4   1  -1
 7   4   0   5   3
 2  -1  -5   0  -2
 8   5   1   6   0

Time Complexity Analysis
The first (doubly nested) for loop takes O(n²).
The triply nested for loop takes O(n³).
Thus, the whole algorithm takes O(n³) time.

Distributed Operating Systems
Bully Algorithm
All Pair Shortest Path (Floyd-Warshall algorithm)
Process Termination (Dijkstra-Scholten)

Distributed Process Termination - Introduction
A fundamental problem: to determine if a distributed computation has terminated.
This is a non-trivial task, since no process has complete knowledge of the global state and global time does not exist.
A distributed computation is globally terminated if every process is locally terminated and there is no message in transit between any processes.
A "locally terminated" state is a state in which a process has finished its computation and will not restart any action unless it receives a message.
In the termination detection problem, a particular process (or all of the processes) must infer when the underlying computation has terminated.

Distributed Process Termination - System Model
At any given time, a process can be in only one of two states: active, where it is doing local computation, and idle, where the process has (temporarily) finished the execution of its local computation and will be reactivated only on the receipt of a message from another process.
An active process can become idle at any time.
An idle process can become active only on the receipt of a message from another process.
Only active processes can send messages.
A message can be received by a process in either of the two states, i.e., active or idle. On the receipt of a message, an idle process becomes active.
The sending of a message and the receipt of a message occur as atomic actions.

Termination Detection Algorithm [Dijkstra, Scholten]
A simple stable property detection problem.
Connected, undirected network graph G = (V, E).
Assume:
- Algorithm A begins with all nodes quiescent (only inputs enabled).
- An input arrives at exactly one node.
- The starting node need not be predetermined.
From there, the computation can "diffuse" throughout the network, or a portion of the network.
At some point, the entire system may become quiescent:
- No non-input actions enabled at any node.
- No messages in channels.
Termination detection problem:
- If A ever reaches a quiescent state, then the starting node should eventually output "done".
- Otherwise, no one ever outputs "done".
To be solved by a monitoring algorithm Mon(A).

Dijkstra-Scholten Algorithm
Augment A with extra pieces that construct and maintain a tree, rooted at the starting node, and including all the nodes currently active in A.
The tree grows, shrinks, grows, ... as nodes become active, quiescent, active, ...
Algorithm:
- Execute A as usual, adding acks for all messages.
- Messages of A are treated like search messages in AsynchSpanningTree.
- When a process receives an external input, it becomes the root and begins executing A.
- When any non-root process receives its first A message, it designates the sender as its parent in the tree and begins participating in A.
- The root process acks every message immediately.
- Other processes ack all but the first message immediately.
Convergecast for termination:
- If a non-root process finds its A-state quiescent and all its A-messages acked, then it cleans up: it acks the first A-message, deletes all info about the termination protocol, and becomes idle.
- If it later receives another A message, it treats it like the first A message (defines a new parent, etc.) and resumes participating in A.
- If the root process finds its A-state quiescent and all its A-messages acked, it reports done.

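To make the tree and ack bookkeeping concrete, here is a minimal single-threaded Python simulation of the rules above. It is a sketch, not code from the slides: the network topology, the hop-budget model of the underlying computation A, and the FIFO event queue are assumptions chosen only so that the example runs and terminates; the parent/ack logic is the Dijkstra-Scholten part.

# Single-threaded sketch of Dijkstra-Scholten termination detection.
# The underlying computation "A" is modeled as messages carrying a hop
# budget: a node that receives a message with hops > 0 forwards it to
# all of its neighbors with one hop less.
from collections import deque

class Node:
    def __init__(self, pid, net):
        self.pid, self.net = pid, net
        self.parent = None        # tree edge; None when not in the tree
        self.is_root = False
        self.unacked = 0          # messages I sent that are not yet acked
        self.done = False         # root only: has "done" been reported?

    def send(self, dst, hops):
        self.unacked += 1
        self.net.queue.append(('MSG', self.pid, dst, hops))

    def start(self, hops):
        # External input arrives: this node becomes the root of the tree.
        self.is_root = True
        for nbr in self.net.neighbors[self.pid]:
            self.send(nbr, hops)
        self.try_finish()

    def on_msg(self, src, hops):
        if not self.is_root and self.parent is None:
            self.parent = src     # first message: sender becomes parent
        else:
            self.net.queue.append(('ACK', self.pid, src, None))
        if hops > 0:              # do the "A" work this message triggers
            for nbr in self.net.neighbors[self.pid]:
                self.send(nbr, hops - 1)
        self.try_finish()

    def on_ack(self):
        self.unacked -= 1
        self.try_finish()

    def try_finish(self):
        # Locally quiescent and all of our own messages acked?
        if self.unacked > 0:
            return
        if self.is_root:
            self.done = True      # root reports "done"
        elif self.parent is not None:
            # Clean up: ack the first message and leave the tree.
            self.net.queue.append(('ACK', self.pid, self.parent, None))
            self.parent = None

class Net:
    def __init__(self, neighbors):
        self.neighbors = neighbors
        self.queue = deque()
        self.nodes = {p: Node(p, self) for p in neighbors}

    def run(self, start, hops):
        self.nodes[start].start(hops)
        while self.queue:
            kind, src, dst, h = self.queue.popleft()
            if kind == 'MSG':
                self.nodes[dst].on_msg(src, h)
            else:
                self.nodes[dst].on_ack()
        return self.nodes[start].done

net = Net({1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]})
print(net.run(start=1, hops=2))   # -> True once everything has been acked
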
Example
(Figure: a 5-node network p1-p5; the tree edges change as the computation diffuses.)
First, p1 gets awakened by an external A input, becomes the root, and sends A messages to p2 and p4; p2 sends an A message to p3; all set up parent pointers and start executing A.
Next, p4 sends an A message to p3, acked immediately.
p4 sends an A message to p1, acked immediately.
p1, p2, p3, and p4 send A messages to each other for a while; everything gets acked immediately. The tree remains unchanged.
Next, p2 and p3 quiesce locally; p3 cleans up and sends an ack to p2; p2 receives the ack, cleans up, and sends an ack to p1.
Next, p4 sends A messages to p2, p3, and p5, yielding a new tree.
Etc.

Complexity
Messages: 2m, where m is the number of messages sent in A.
Time from quiescence of A until the output "done": O(m·d), where d is an upper bound on message delay (local processing time ignored); this is the time to clean up the spanning tree.
The bounds are most interesting if m << n, e.g., algorithms that involve only a limited computation in a small portion of a large network.

Outline
Distributed Operating System
Buddy Memory Management
Embedded Systems
Internet of Things (IoT)
Software Engineering

Variable Size Partitions
Idea:
- Allocate memory in small units.
- Give each job as many units as it needs.
(Figure: memory laid out as variable-size regions A, B, C, D, E.)
Key challenges:
- Keep track of free / allocated memory regions.
- Allocation policy to assign free regions to jobs.

Malloc: Dynamic Allocation
Idea:
- Allocate memory in small units.
- Give each request as many units as it needs.
(Figure: heap laid out as variable-size objects A, B, C, D, E.)
Key challenges:
- Keep track of free / allocated memory regions.
- Allocation policy to assign free regions to objects.

Storage Allocation Policies
First fit
- Use the first hole whose size is large enough.
- Rationale?
Best fit
- Use an exact-size hole, or else the smallest hole that is larger.
- Rationale?
Worst fit
- Use the largest available hole.
- Rationale?

Storage Allocation Policies
Best fit
- Produces the smallest leftover hole.
- Creates small holes that cannot be used.
Worst fit
- Produces the largest leftover hole.
- Makes it difficult to run large programs.
First fit
- Creates average-size holes.
First-fit and best-fit are better than worst-fit in terms of speed and/or storage utilization.

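A small free-list sketch in Python makes the three policies concrete. The (start, size) hole representation and the function names are illustrative assumptions, not part of the slides.

# Each free hole is a (start, size) pair. Given a request, each policy
# picks a different hole; the chosen hole is split and any leftover
# stays on the free list.
def first_fit(holes, size):
    for i, (_, sz) in enumerate(holes):
        if sz >= size:
            return i                           # first large-enough hole
    return None

def best_fit(holes, size):
    fits = [(sz, i) for i, (_, sz) in enumerate(holes) if sz >= size]
    return min(fits)[1] if fits else None      # smallest hole that fits

def worst_fit(holes, size):
    fits = [(sz, i) for i, (_, sz) in enumerate(holes) if sz >= size]
    return max(fits)[1] if fits else None      # largest hole

def allocate(holes, size, policy):
    i = policy(holes, size)
    if i is None:
        return None                            # request cannot be satisfied
    start, sz = holes.pop(i)
    if sz > size:                              # keep the leftover hole
        holes.insert(i, (start + size, sz - size))
    return start

holes = [(0, 200), (250, 60), (350, 500)]
print(allocate(list(holes), 50, first_fit))    # -> 0   (first hole that fits)
print(allocate(list(holes), 50, best_fit))     # -> 250 (60 is the tightest fit)
print(allocate(list(holes), 50, worst_fit))    # -> 350 (500 is the largest hole)
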
Fragmentation
Internal fragmentation
- Allocated memory may be larger than the requested memory.
- The extra memory is internal to a partition/block, but not being used.
External fragmentation
- Enough memory space exists to satisfy a request, but it is not contiguous.
- Reduced by compaction or buddy allocation.

Deallocation Policies
Goals, given a deallocation pointer:
- Find the size of the target object.
- Restore the memory to the "free list".
- Minimize fragmentation.
Common mechanisms:
- Object metadata to find the size.
- Binary buddy system to reduce external fragmentation.

Binary Buddy Allocator
Memory is allocated using a power-of-2 allocator:
- Satisfy requests in units whose sizes are powers of 2.
- Each request is rounded up to the next highest power of 2.
- When a smaller allocation is needed than is available, the current chunk is split into two buddies of the next lower power of 2.
- Continue splitting until an appropriately sized chunk is available.

Binary Buddy System
Approach:
- Minimum allocation size = smallest frame.
- Use a bitmap to monitor frame use.
- Maintain a freelist for each possible frame size (power-of-2 frame sizes from min to max).
- Initially one block = the entire buffer.
- If two neighboring frames ("buddies") are free, combine them and add the result to the next larger freelist.

Buddy System Example

Initially: 128 Free

Process A requests 16:
128 Free
64 Free | 64 Free
32 Free | 32 Free | 64 Free
16 A | 16 Free | 32 Free | 64 Free

Process B requests 32:
16 A | 16 Free | 32 Free | 64 Free
16 A | 16 Free | 32 B | 64 Free

Process C requests 8:
16 A | 16 Free | 32 B | 64 Free
16 A | 8 C | 8 Free | 32 B | 64 Free

Process A exits:
16 Free | 8 C | 8 Free | 32 B | 64 Free

Process C exits:
16 Free | 8 Free | 8 Free | 32 B | 64 Free
16 Free | 16 Free | 32 B | 64 Free
32 Free | 32 B | 64 Free

Buddy System: Tradeoffs
Advantages:
- Very fast search for a suitable block (a best-fit-like allocation policy).
- Fast coalescing of free buddies.
- Minimizes external fragmentation.
Disadvantage:
- Internal fragmentation when the request is not a power of 2.

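The worked example above can be reproduced with a small free-list based buddy allocator. The sketch below is illustrative only: the BuddyAllocator class, the 128-unit pool, the 8-unit minimum block size, and the dictionary-of-freelists layout are assumptions, and the bitmap mentioned in the slides is omitted.

# Minimal binary buddy allocator over a pool of `total` units.
# free[s] is the list of free block offsets of size s (s a power of 2).
class BuddyAllocator:
    def __init__(self, total=128, min_block=8):
        self.total, self.min_block = total, min_block
        self.free = {total: [0]}            # initially one block = whole pool

    def _round_up(self, n):
        size = self.min_block
        while size < n:
            size *= 2
        return size

    def alloc(self, n):
        size = self._round_up(n)
        s = size
        while s <= self.total and not self.free.get(s):
            s *= 2                          # find a larger free block to split
        if s > self.total:
            return None                     # no block big enough
        start = self.free[s].pop()
        while s > size:                     # split into buddies
            s //= 2
            self.free.setdefault(s, []).append(start + s)
        return start, size

    def free_block(self, start, size):
        while size < self.total:
            buddy = start ^ size            # buddy address differs in one bit
            peers = self.free.get(size, [])
            if buddy not in peers:
                break
            peers.remove(buddy)             # coalesce with the free buddy
            start = min(start, buddy)
            size *= 2
        self.free.setdefault(size, []).append(start)

# Usage mirroring the slides: A requests 16, B requests 32, C requests 8,
# then A and C exit and their blocks coalesce back into a 32-unit block.
b = BuddyAllocator()
a = b.alloc(16)      # -> (0, 16)
bb = b.alloc(32)     # -> (32, 32)
c = b.alloc(8)       # -> (16, 8)
b.free_block(*a)     # A exits
b.free_block(*c)     # C exits: the 8s and 16s merge into a free 32 block
print(sorted(b.free.items()))
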
Outline
Distributed Operating System
Buddy Memory Management
Embedded Systems
Internet of Things (IoT)
Software Engineering

Embedded Systems
Introduction
Scheduling in Embedded Systems
Sorting in Embedded Systems

What is an Embedded System?
Definition of an embedded computer system:
- is a digital system
- uses a microprocessor (usually)
- runs software for some or all of its functions
- is frequently used as a controller

Definition: Embedded System
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints.
It is embedded as part of a complete device, often including hardware and mechanical parts.
Embedded systems control many devices in common use today.
What an embedded system is NOT:
- Not a computer system that is used primarily for processing.
- Not a software system on a PC or Unix box.
- Not a traditional business or scientific application.

Examples of Embedded Systems
Automotive systems: electronic controls for dashboards, ABS brakes, transmission controls.
Medical instruments: CAT scanners, implanted heart monitors, etc.
Controls for digital equipment: CD players, TV remotes, programmable sprinklers, household appliances, etc.

Why "embedded"?
Because the processor is "inside" some other system.
A microprocessor is "embedded" into your TV, car, or appliance.
The consumer does not think about performing processing; they consider it running a machine or "making something work".

Embedded Systems
Introduction
Scheduling in Embedded Systems
Sorting in Embedded Systems

Real-Time Systems
Two types exist:
Soft real-time
- Tasks are performed as fast as possible.
- Late completion of jobs is undesirable but not fatal.
- System performance degrades as more and more jobs miss deadlines.
- Example: online databases.
Hard real-time
- Tasks have to be performed on time.
- Failure to meet deadlines is fatal.
- Example: flight control systems.

Embedded & Real-Time Systems
Execute tasks correctly and IN time.
Systems with multiple tasks need scheduling.
Definitions:
- Ready time r: the task becomes available.
- Schedule: when the task is scheduled to run.
- Completed C: when the task finishes.
- Deadline D: the time by which the task must complete.

From C.W. Mercer
