
Harivandana College, Rajkot

Semester–IV
Subject: CS-22 Operating System Concepts with Unix/Linux

UNIT-2

Deadlocks & Memory management

 What is a Deadlock?

A process in an operating system uses a resource in the following way:

1. Requests the resource
2. Uses the resource
3. Releases the resource

A deadlock is a situation where a set of processes is blocked because each process is holding a resource while waiting for another resource acquired by some other process.

Consider two trains coming toward each other on a single track: once they are face to face, neither can move. A similar situation occurs in operating systems when two or more processes hold some resources and wait for resources held by the other(s). For example, in the diagram below, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.

Examples Of Deadlock

1. The system has 2 tape drives. P1 and P2 each hold one tape drive and
each needs another one.
2. Semaphores A and B are initialized to 1. P0 and P1 deadlock as follows (a C sketch of this pattern appears after these examples):
 P0 executes wait(A) and is then preempted.
 P1 executes wait(B).
 P0 and P1 are now deadlocked.

P0            P1

wait(A);      wait(B);

wait(B);      wait(A);

3. Assume 200 KB of space is available for allocation and the following sequence of requests occurs:

P0                P1

Request 80 KB;    Request 70 KB;

Request 60 KB;    Request 80 KB;

Deadlock occurs if both processes progress to their second request: only 50 KB remain free, which satisfies neither request.
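
To make example 2 concrete, here is a minimal C sketch using POSIX mutexes in place of semaphores A and B (the thread functions, the mutexes and the sleep() used to simulate preemption are illustrative assumptions, not part of the course text). P0 locks A and is "preempted", P1 locks B, and then each blocks on the lock the other holds, so the program normally hangs in exactly the way described above.

/* Hedged illustration of example 2: two threads that lock A and B in
 * opposite order can deadlock. Build with: gcc deadlock.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

static void *p0(void *arg) {
    pthread_mutex_lock(&A);      /* wait(A) */
    sleep(1);                    /* window for P1 to run (simulated preemption) */
    pthread_mutex_lock(&B);      /* wait(B): blocks, because P1 holds B */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

static void *p1(void *arg) {
    pthread_mutex_lock(&B);      /* wait(B) */
    sleep(1);
    pthread_mutex_lock(&A);      /* wait(A): blocks, because P0 holds A */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);      /* never returns once both threads block */
    pthread_join(t1, NULL);
    puts("finished (printed only if the deadlock did not occur)");
    return 0;
}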

Deadlock can arise if the following four conditions hold simultaneously (necessary conditions):

Mutual Exclusion: At least one resource is non-shareable (only one process can use it at a time).

Hold and Wait: A process is holding at least one resource while waiting for additional resources.

No Preemption: A resource cannot be taken away from a process; the process must release it voluntarily.

Circular Wait: A set of processes is waiting for each other in a circular chain.

Deadlock Prevention
If we picture deadlock as a table standing on four legs, the four legs are the four conditions that cause deadlock when they occur simultaneously.

If we break one of the legs, the table falls. The same holds for deadlock: if we can violate any one of the four necessary conditions and keep them from occurring together, we can prevent deadlock.

Let's see how we can prevent each of the conditions.

1. Mutual Exclusion

Mutual exclusion, from the resource's point of view, means that a resource can never be used by more than one process simultaneously. That is fair enough, but it is the main reason behind deadlock: if a resource could be used by more than one process at the same time, no process would ever have to wait for it.

However, if we can keep resources from behaving in a mutually exclusive manner, deadlock can be prevented.

Spooling

For a device like a printer, spooling can work. A memory area associated with the printer stores the jobs from each process. The printer then collects the jobs and prints them one by one in FCFS order. With this mechanism a process does not have to wait for the printer; it can continue with its work and collect the output later, when it is produced.

Although spooling can be an effective way to violate mutual exclusion, it suffers from two kinds of problems:

1. It cannot be applied to every resource.

2. After some time, a race condition may arise between the processes for space in the spool.

We cannot force a resource to be used by more than one process at the same time, since that would not be fair and could cause serious performance problems. Therefore, in practice we cannot violate mutual exclusion.

2. Hold and Wait

The hold and wait condition arises when a process holds one resource while waiting for some other resource to complete its task. Deadlock occurs because several processes may each be holding one resource and waiting for another in a cyclic order.

We therefore need a mechanism by which a process either does not hold any resource or does not wait. That means a process must be assigned all the necessary resources before its execution starts, and must not wait for any resource once execution has started.

!(Hold and wait) = !hold or !wait (the negation of hold and wait is: either you don't hold or you don't wait)

This could be implemented if a process declared all of its resources at the start. That sounds simple, but it cannot be done in a real computer system because a process cannot determine its necessary resources in advance.

A process is a set of instructions executed by the CPU. Each instruction may demand multiple resources at multiple times, so the need cannot be fixed by the OS in advance.

The problems with this approach are:

1. It is practically not possible.

2. The possibility of starvation increases, because some process may hold a resource for a very long time.

3. No Preemption

Deadlock arises partly because a resource, once allocated, cannot be taken back until the process releases it. If we could take a resource away from the process that is causing the deadlock, we could prevent the deadlock.

This is not a good approach in general: if we take away a resource that a process is actively using, all the work it has done so far may become inconsistent.

Consider a printer being used by some process. If we take the printer away and assign it to another process, the data already printed becomes inconsistent and useless, and the original process cannot resume printing from where it left off, which hurts performance.

4. Circular Wait

To violate circular wait, we can assign a priority (ordering) number to each resource. A process may only request resources in increasing order of priority and can never request a resource with a lower number than one it already holds. This ensures that no cycle of waiting processes can ever form.

Among all the methods, violating Circular wait is the only approach that
can be implemented practically.
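
A rough sketch of how resource ordering is applied in practice, reusing the same illustrative POSIX mutexes as before (A treated as resource number 1, B as resource number 2): every thread acquires the locks in the same global order, so the circular wait seen in the earlier example can no longer arise.

/* Hedged sketch: both threads follow one global order (A before B),
 * which removes the circular-wait condition from the earlier example. */
#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;  /* resource 1 */
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;  /* resource 2 */

static void *worker(void *arg) {
    pthread_mutex_lock(&A);    /* always take the lower-numbered resource first */
    pthread_mutex_lock(&B);
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);  /* release in reverse order */
    pthread_mutex_unlock(&A);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, NULL);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}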

Deadlock Avoidance

Deadlock avoidance is a strategy used by the operating system to avoid deadlock. Recall that deadlock is a situation in which a process enters a waiting state because another waiting process is holding the resource it demands.

Avoidance looks ahead: we must assume that all information about the resources a process will need is known before the process executes. The Banker's Algorithm (devised by Dijkstra) is used to avoid deadlock.

With both prevention and avoidance we get correctness of data, but performance decreases.

In deadlock avoidance, a request for a resource is granted only if the resulting state of the system does not lead to deadlock. The state of the system is continuously checked for safe and unsafe states.

To make avoidance possible, each process must tell the OS the maximum number of resources it may request to complete its execution.

The simplest and most useful approach requires each process to declare the maximum number of resources of each type it may ever need. The deadlock avoidance algorithm then examines resource allocations so that a circular wait condition can never arise.

Safe and Unsafe States

If there is no order in which the system can fulfill the (maximum) requests of all processes, the state of the system is called unsafe.

The key to the deadlock avoidance approach is that when a request for resources is made, it is approved only if the resulting state is also a safe state.
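
The following is a hedged sketch of the safety check at the heart of the Banker's Algorithm, using a small hard-coded state (3 processes, 3 resource types; the Available, Allocation and Need numbers are made up for illustration). A request would be granted only if the state that results from it still passes this check.

/* Hedged sketch of the Banker's safety algorithm: the state is safe
 * if every process can finish in some order using Work resources. */
#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes      (illustrative) */
#define R 3   /* resource types (illustrative) */

int main(void) {
    int available[R]     = {3, 3, 2};
    int allocation[P][R] = {{0, 1, 0}, {2, 0, 0}, {2, 1, 1}};
    int need[P][R]       = {{7, 4, 3}, {1, 2, 2}, {0, 1, 1}};

    int  work[R];
    bool finish[P] = {false};
    for (int j = 0; j < R; j++) work[j] = available[j];

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;                  /* need[i] <= work ? */
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < R; j++)       /* process i finishes and */
                    work[j] += allocation[i][j];  /* returns what it holds  */
                finish[i] = true;
                progress  = true;
                printf("P%d can finish\n", i);
            }
        }
    }

    bool safe = true;
    for (int i = 0; i < P; i++)
        if (!finish[i]) safe = false;
    printf("state is %s\n", safe ? "SAFE" : "UNSAFE");
    return 0;
}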

Deadlock Detection

Deadlock detection algorithms are used to identify the presence of deadlocks in computer systems. These algorithms examine the system's processes and resources to determine whether there is a circular wait situation that could lead to a deadlock.

Deadlock detection and recovery is the process of detecting and resolving deadlocks in an operating system. A deadlock occurs when two or more processes are blocked, each waiting for the other to release the resources it needs. This can lead to a system-wide stall, where no process can make progress.

There are two main approaches to dealing with deadlocks:

1. Prevention: The operating system takes steps to prevent deadlocks from occurring by ensuring that the system is always in a safe state, where deadlocks cannot occur. This is achieved through resource allocation algorithms such as the Banker's Algorithm.
2. Detection and Recovery: If deadlocks do occur, the operating system must detect and resolve them. Deadlock detection algorithms, such as the Wait-For Graph, are used to identify deadlocks, and recovery algorithms, such as rollback and abort, are used to resolve them. A recovery algorithm releases the resources held by one or more processes, allowing the system to continue to make progress.

Deadlock Detection:

1. If resources have a single instance –

In this case, deadlock detection can be done by running an algorithm that checks for a cycle in the Resource Allocation Graph. The presence of a cycle in the graph is a sufficient condition for deadlock.

In the above diagram, Resource 1 and Resource 2 have single instances, and there is a cycle P1 → R2 → P2 → R1 → P1, so deadlock is confirmed.

2. If there are multiple instances of resources –

Detection of a cycle is a necessary but not a sufficient condition for deadlock; the system may or may not be in deadlock, depending on the situation.

3. Wait-For Graph Algorithm –
The Wait-For Graph algorithm is a deadlock detection technique for systems in which each resource has a single instance. It works by constructing a wait-for graph, a directed graph that represents which processes are waiting for which other processes; a cycle in this graph indicates a deadlock. When resources have multiple instances, a detection algorithm similar to the Banker's safety check is used instead.
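
A hedged sketch of detection on a wait-for graph: processes are nodes, an edge i -> j means process i is waiting for a resource held by process j, and a cycle means deadlock. The 3-process adjacency matrix below is purely illustrative.

/* Hedged sketch: detect a cycle in a wait-for graph with DFS.
 * wfg[i][j] = 1 means process i waits for process j. */
#include <stdio.h>
#include <stdbool.h>

#define N 3  /* number of processes (illustrative) */

static int wfg[N][N] = {
    {0, 1, 0},   /* P0 waits for P1                         */
    {0, 0, 1},   /* P1 waits for P2                         */
    {1, 0, 0},   /* P2 waits for P0: a cycle, i.e. deadlock */
};

/* colour: 0 = unvisited, 1 = on the current DFS path, 2 = finished */
static bool has_cycle_from(int u, int colour[]) {
    colour[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!wfg[u][v]) continue;
        if (colour[v] == 1) return true;                   /* back edge */
        if (colour[v] == 0 && has_cycle_from(v, colour)) return true;
    }
    colour[u] = 2;
    return false;
}

int main(void) {
    int colour[N] = {0};
    bool deadlock = false;
    for (int i = 0; i < N && !deadlock; i++)
        if (colour[i] == 0) deadlock = has_cycle_from(i, colour);
    printf("%s\n", deadlock ? "deadlock detected" : "no deadlock");
    return 0;
}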

Deadlock Recovery:

A general-purpose operating system such as Windows does not usually attempt deadlock recovery, as it is a time- and space-consuming process. Real-time operating systems do use deadlock recovery.
1. Killing the process –
Either kill all the processes involved in the deadlock, or kill them one by one: after killing each process, check for deadlock again and repeat until the system recovers. Killing processes breaks the circular wait condition.
2. Resource Preemption –
Resources are preempted from the processes involved in the deadlock and allocated to other processes, so that the system has a possibility of recovering from the deadlock. In this case, a process whose resources are repeatedly preempted may starve.
3. Concurrency Control –
Concurrency control mechanisms are used to prevent data inconsistencies in systems with multiple concurrent processes. They ensure that concurrent processes do not access the same data at the same time, which could lead to inconsistencies and errors. Deadlocks can occur in concurrent systems when two or more processes are blocked, each waiting for the other to release the resources it needs, resulting in a system-wide stall. Concurrency control mechanisms help prevent such deadlocks by managing access to shared resources and ensuring that concurrent processes do not interfere with each other.

Physical Memory

 Random access memory (RAM) of a computer is known as physical memory, primary memory or main memory.
 It is a volatile memory: RAM loses its data when the power is switched off.
 The RAM chip is attached to the motherboard.
 The CPU takes less time to access data from physical memory than from the hard disk.
 So programs are loaded into physical memory (RAM) from secondary memory.
 The CPU directly accesses the RAM. Physical memory holds the instructions of the programs being executed (e.g. opening Chrome).
 Main memory is allocated to both the operating system and the user programs.
 Physical memory is linearly addressable: memory addresses increase in a linear fashion and each byte is directly addressable.
 A process must be in main memory for its execution.
 A process can be swapped between main memory and virtual memory and vice versa.
 In contiguous memory allocation, each process occupies a single section of main memory. A simple approach is to divide main memory into fixed-size blocks, where each block may contain only one process.

Virtual Memory

 Using secondary memory as if it were physical memory is called virtual memory.
 When the RAM (main memory) is full, the OS allocates a portion of the hard disk to act as RAM. This portion is called virtual memory.
 It is a memory management technique performed by the OS.
 Virtual memory combines the RAM space with hard disk space.
 It allows large programs to execute even when RAM is not enough.
 Virtual memory is comparatively slower than physical memory.
 The approach is to temporarily move unused data from the RAM to the disk.
 The user thinks they can write a very large program, that its entire process is in RAM, and that all the space allocated to them is contiguous.
 In reality, only a small portion of the user program is in main memory, while the rest is in virtual memory.
 All the decisions, such as which sections to bring into RAM, where to place them, and when to move them back to virtual memory, are made by the OS.
 When information is needed in RAM, pages (blocks) of memory are swapped between RAM and the hard disk.
 The details of virtual memory management are transparent to users.
 It is based on non-contiguous memory allocation.
 It makes physical memory appear limitless from the programmer's view.
 Virtual memory uses the concepts of paging and segmentation.
 Virtual memory increases the degree of multiprogramming.

Motivations behind this technique:

From real programs it is observed that the entire program does not need to be in memory for its execution. Consider the following observations:

1. Programs contain code to handle unusual error conditions. Such errors occur very rarely, so this code executes only in rare situations.
2. Arrays are generally allocated more memory than is actually required.
3. Programs contain certain operations and features that are used very rarely, so such code and data need not be in memory all the time.

Also, even when the entire program is needed, it may not all be needed at the same time.

All these factors motivate the technique of virtual memory.

Advantages:

There are two main advantages of virtual memory:

1. Programs (and so processes) are not constrained by the physical memory size. A process larger than memory can also execute.
2. The degree of multiprogramming can be varied over a large range. As there is no need to keep an entire process in memory, more processes can be accommodated in memory.

Implementation:

Virtual memory can be implemented in one of the following three ways:

1. Demand paging
2. Demand segmentation
3. Segmentation with paging

In demand paging, pages are moved into memory only as required. Similarly, in demand segmentation, segments are moved only as required; not all pages or all segments are moved into memory together at the same time. In segmentation with paging, segments are further divided into pages.

Physical Memory                           Virtual Memory

RAM of a computer is known as             Using secondary memory as physical
physical memory.                          memory is virtual memory.
Faster.                                   Slower.
The CPU directly accesses the RAM.        The CPU cannot directly access it.
Limited to the size of the RAM chip.      Limited by the size of the hard disk.
Uses the swapping technique.              Uses paging and segmentation.
Actual memory.                            Logical memory.

Memory Allocation

 Memory allocation is the process by which memory is assigned to computer programs.
 The OS keeps track of RAM: which portions of memory are used and which are free.

While allocating memory, two goals should be fulfilled:

1. High Utilization

- The maximum possible amount of memory should be utilized.
- No piece of memory should be wasted.

2. High Concurrency:

- The maximum possible number of processes should be in main memory.
- As more and more processes are kept in memory, the CPU remains busy most of the time executing one of them, resulting in better throughput.

 Main memory is divided into two partitions:

Low Memory - the operating system resides in low memory.
High Memory - user programs reside in high memory.

Contiguous Memory Allocation

 A method that allocates memory to each process in one continuous block.
 Two types: a.) fixed-sized partitions
b.) variable-sized partitions
Example-1: Contiguous blocks are assigned to each file as per its need.

Advantages:

 Easy to implement
 Excellent read performance due to sequential data.

Disadvantages:

 Disk will become fragmented.


 Memory gets wasted.

a.) Fixed-Sized Partition (Static partitioning):

 Main memory is pre-divided into fixed-size partitions.
 The partitions may or may not all be the same size.
 The size of each partition cannot be changed.
 The number of partitions is fixed.
 Each partition is allowed to store only one process.
(One partition → One process)
 When a process terminates, its partition becomes available for another process.

Advantages:

 Simple and easy to implement


 Multiple processes can be stored inside the main memory.

Disadvantages:

 Internal Fragmentation: The unused space of each partition cannot


be used to load another process.

 Limiting the size of the process: The size of the process cannot be
larger than the size of the partition.
 Degree of multiprogramming is less: number of processes depends
on number of partitions.

b.) Variable Size Partition (Dynamic partitioning):

 Main memory is not divided into fixed-size partitions in advance.
 When a process arrives, a partition whose size equals the size of the process is created and allocated to it.
 The partition size is decided according to the need of the process.
 So it overcomes the drawback of internal fragmentation.

Advantages:

 No Internal Fragmentation: size of partition = size of process.
 Degree of Multiprogramming is dynamic: with no unused space, more processes can be loaded into memory at the same time.
 No limitation on the size of a process.

Disadvantages:

 External Fragmentation:
The total unused space of RAM cannot be used to load processes, because the available space is not contiguous.

 Difficult Implementation:
Allocation and de-allocation are done very frequently and the partition sizes change each time, so it is difficult for the OS to manage everything.

Fragmentation

As processes are loaded into and removed from memory, the free memory space is broken into small pieces (holes) that are too small to assign to other processes, so those memory blocks stay unused. This is called fragmentation.

Types of Fragmentation:

1. Internal Fragmentation
 It occurs only in fixed-size partitioning.
 When the size of a process is smaller than the size of its partition, some space inside the partition is wasted and remains unused. This wastage inside a memory partition is called internal fragmentation.
 This space cannot be allocated to any other process,
 because static partitioning allows only one process to be stored in each partition.

2. External Fragmentation
 When the total unused space across the various partitions cannot be used to load a process because the available space is not contiguous, it is called external fragmentation.

Internal Fragmentation vs External Fragmentation

1. Definition: Internal fragmentation is the difference between the required memory space and the allotted memory space. External fragmentation refers to small, non-contiguous memory blocks that cannot be assigned to any process.
2. Memory block size: Internal fragmentation occurs when the allotted memory blocks are of fixed size. External fragmentation occurs when the allotted memory blocks are of varying size.
3. Occurrence: Internal fragmentation occurs when a process uses less space than the size of the allotted memory block. External fragmentation occurs when a process is removed from main memory.
4. Solution: Best-fit block search is the solution for internal fragmentation. Compaction is the solution for external fragmentation.
5. Process: Internal fragmentation occurs when paging is employed. External fragmentation occurs when segmentation is employed.

MEMORY ALLOCATION ALGORITHMS

1. First Fit: The process is allocated the first free partition that is large enough.

Adv: Fastest algorithm, because it searches as little as possible.
Disadv: The unused memory areas left after allocation go to waste.

2. Best Fit: The process is allocated the smallest free partition that is large enough.

Adv: Memory is better utilized.
Disadv: It is slower, because it must search for the smallest suitable partition.

3. Worst Fit: The process is allocated the largest available free partition.

Disadv: The largest hole gets split up and occupied, so no large hole is left for big processes.
It is the reverse of best fit.
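
The three placement policies can be compared with a small hedged sketch in C; the list of free partition sizes and the 212 KB request are made-up numbers used only for illustration.

/* Hedged sketch of first fit, best fit and worst fit over a list of
 * free partitions. Each function returns an index into holes[], or -1
 * if no partition is large enough. */
#include <stdio.h>

static int first_fit(const int holes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request) return i;        /* first one that fits    */
    return -1;
}

static int best_fit(const int holes[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;                             /* smallest one that fits */
    return best;
}

static int worst_fit(const int holes[], int n, int request) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;                            /* largest one that fits  */
    return worst;
}

int main(void) {
    int holes[] = {100, 500, 200, 300, 600};      /* free partitions in KB  */
    int n = 5, request = 212;
    printf("first fit -> hole %d\n", first_fit(holes, n, request)); /* 500 KB hole */
    printf("best fit  -> hole %d\n", best_fit(holes, n, request));  /* 300 KB hole */
    printf("worst fit -> hole %d\n", worst_fit(holes, n, request)); /* 600 KB hole */
    return 0;
}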

Non-Contiguous Memory Allocation

 The process is divided into multiple parts and these parts are allocated at different locations in memory, as required.
 Here, process P needs 4 KB. It is divided into two parts, P1 (2 KB) and P2 (2 KB).
 P1 and P2 are allocated at different places in memory.

Types: a.) Paging

b.) Segmentation

Advantages:

 Maximum utilization of CPU
 No memory is wasted
 No external fragmentation
Disadvantages:

 It executes slowly in comparison to contiguous memory.

a.) Paging

 RAM is divided into a number of fixed-size blocks called frames.
 Virtual memory is divided into blocks of the same size, called pages.
 The process is divided into pages.
 Page size = Frame size

 Whenever a process is created, paging is applied to it and a page table is created.
 Every process has its own page table.

Advantages:

 No External Fragmentation
 Swapping is easy (Frame size = Page size)

Disadvantages:

 Internal Fragmentation
 Page table occupies more memory.

b.) Segmentation

 Virtual memory is divided into variable (different) size parts called segments.
 A program is a collection of segments.
 A segment groups related data of the process together for faster processing.
 Types of segmentation: 1. Simple
2. Virtual

Virtual Memory using Paging

 When RAM is full, or a program larger than memory arrives, the concept of virtual memory is used.
Ex. loading a 5 GB game on a machine with 2 GB of RAM.
 It is a memory management scheme.
 It is a logical concept.

Process:

 RAM is divided into a number of fixed-size blocks called frames.
 Virtual memory is divided into blocks of the same size, called pages.
 The process is divided into pages.
 Page size = Frame size (so a page fits exactly into a frame)
 If a process requires n pages, at least n frames are required.

 The pages are then stored in different frames of the main memory.

Note: The CPU generates logical addresses for user programs, and the user thinks the program is running at these logical addresses. But the program actually resides in RAM and needs physical addresses for execution, so logical addresses must be mapped to physical addresses.

Logical (Virtual) Address: generated by the CPU while a program is running. The user can view the logical address of a program.
Logical Address Space: the set of all logical addresses.
Physical Address: the physical location of the required data in RAM. The user never deals with the physical address.
Physical Address Space: the set of all physical addresses.

Note:
If Address = 31 bits, then Address Space = 2^31 words = 2 G words (1 G = 2^30).
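As a further illustration with assumed numbers (not from the text): for a 32-bit logical address and a page size of 4 KB = 2^12 bytes, the page offset needs 12 bits, the page number takes the remaining 32 - 12 = 20 bits, so a process can have up to 2^20 pages and its logical address space is 2^32 bytes = 4 GB.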

Mapping: Converting a logical address (page) into a physical address (frame) is called mapping.
Memory-Management Unit (MMU): the hardware that maps logical addresses to physical addresses.

Page Map Table (PMT):

 Whenever a process is created, a page table is created for it.
 Every process has its own page table.
 The MMU needs the page table: it stores the frame number corresponding to each page number of the process.
 Page tables are stored in main memory.
 Number of entries in the page table = number of pages into which the process is divided.

Logical Address = (Page number, Page offset)
Page number: which page is to be searched.
Page offset: which byte/word of data is to be read from that page.

Physical Address = (Frame number, Frame offset)
Frame number: the frame corresponding to that page number.

Extra note:
 Page number(p): Number of bits required to represent the pages in
Logical Address Space or Page number
 Page offset(d): Number of bits required to represent particular
word in a page or page size of Logical Address Space or word
number of a page or page offset.
 Frame number(f): Number of bits required to represent the frame
of Physical Address Space or Frame number.
 Frame offset(d): Number of bits required to represent particular
word in a frame or frame size of Physical Address Space or word
number of a frame or frame offset.

Structure: Suppose the CPU wants to read the 5th word of the 3rd page. The logical address is then (page number = 3, offset = 5). If page 3 is stored in frame 1, the physical address is (frame number = 1, offset = 5).
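
A hedged sketch of this mapping in C, assuming a page size of 1 KB, a 16-entry page table and the illustrative mapping of page 3 to frame 1 used above. The MMU performs this translation in hardware; the code only imitates the arithmetic.

/* Hedged sketch of paging address translation: split the logical
 * address into (page number, offset), look the page up in the page
 * table, and rebuild the physical address. */
#include <stdio.h>

#define PAGE_SIZE 1024u                  /* 1 KB pages (assumed)         */
#define NUM_PAGES 16u                    /* page-table entries (assumed) */

static unsigned page_table[NUM_PAGES];   /* page number -> frame number  */

static unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;   /* which page               */
    unsigned offset = logical % PAGE_SIZE;   /* word within the page     */
    unsigned frame  = page_table[page];      /* page-table lookup        */
    return frame * PAGE_SIZE + offset;       /* physical address         */
}

int main(void) {
    page_table[3] = 1;                       /* page 3 lives in frame 1  */
    unsigned logical = 3 * PAGE_SIZE + 5;    /* 5th word of the 3rd page */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}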

Advantages:

 No External Fragmentation
 Swapping is easy (Frame size = Page size)

Disadvantages:

 Internal Fragmentation
 Page table occupies more memory.

Demand Paging:
The process of loading a page into main memory from secondary memory only when it is demanded is known as demand paging.

Page fault:
When a page referenced by the CPU is not found in main memory, it is called a page fault.
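
A hedged sketch of how a present (valid) bit turns the same lookup into demand paging: a reference to a page that is not in main memory raises a page fault, the page is (notionally) loaded from secondary memory, and the access then proceeds. The structure, the trivial frame counter and the function names are illustrative, not real kernel code.

/* Hedged sketch of demand paging: each page-table entry carries a
 * present bit; a reference to an absent page triggers a page fault
 * before the translation can complete. */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE 1024u
#define NUM_PAGES 16u

struct pte { bool present; unsigned frame; };
static struct pte page_table[NUM_PAGES];   /* all pages start absent  */
static unsigned next_free_frame = 0;       /* trivial frame allocator */

static void handle_page_fault(unsigned page) {
    /* a real OS would pick a free or victim frame and read the page in
     * from secondary memory; here we only pretend to do so */
    printf("page fault on page %u: loading it from secondary memory\n", page);
    page_table[page].frame   = next_free_frame++;
    page_table[page].present = true;
}

static unsigned access_byte(unsigned logical) {
    unsigned page = logical / PAGE_SIZE, offset = logical % PAGE_SIZE;
    if (!page_table[page].present)         /* page not in RAM yet */
        handle_page_fault(page);
    return page_table[page].frame * PAGE_SIZE + offset;
}

int main(void) {
    printf("physical = %u\n", access_byte(3 * PAGE_SIZE + 5)); /* faults first */
    printf("physical = %u\n", access_byte(3 * PAGE_SIZE + 9)); /* now resident */
    return 0;
}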

Virtual Memory using Segmentation

 Virtual memory is divided into variable (different) size parts called segments.
 The program is divided into variable-size segments which are loaded into main memory.
 A program is a collection of segments.
 Paging does not care about the user's view of the process: it may split the same function across different pages, and those pages may or may not be loaded into memory at the same time, which decreases the efficiency of the system.
 It is therefore better to have segmentation, which divides the process into segments. Each segment keeps related data together for faster processing.

A segment is a logical unit such as:

 main program
 function
 method
 object
 local variables
 global variables
 symbol table
 stack
 arrays

Process:

The CPU generates virtual addresses for running processes. Segmentation translates these CPU-generated virtual addresses into physical addresses.

Logical address = (Segment number (s), Segment offset (d))
Segment number: which segment is to be searched.
Segment offset: must lie between 0 and the segment limit.

Segment table:

 Stores information about all segments of the process.
 The mapping of the two-dimensional logical address into a one-dimensional physical address is done using the segment table.
 The table has: 1. Base: starting address of the segment
2. Limit: length of the segment.

 Physical address = segment base + offset (the offset must be less than the limit).

 For Segment 1:
Base address = 1500
Limit = 100
So Segment 1 occupies physical addresses 1500 to 1500 + 100 - 1 = 1599, and an offset d within the segment maps to physical address 1500 + d.

Advantages
 No Internal fragmentation.

 Segment Table consumes less space in comparison to Page table in
paging.
 Average Segment Size is larger than the actual page size.

Disadvantages:

 External fragmentation.

Structure:

 If the offset is not less than the limit, a trap (addressing error) is raised.
 If the offset is less than the limit, the offset is added to the base address of the segment to get the physical address.
 Suppose the limit is 500; then for that segment every offset must be less than 500.
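
A hedged sketch of this check in C, using made-up base/limit values for a small segment table (including the Segment 1 entry with base 1500 and limit 100 from the example above).

/* Hedged sketch of segmentation address translation: trap if the
 * offset is not less than the segment limit, otherwise the physical
 * address is segment base + offset. */
#include <stdio.h>

struct segment { int base; int limit; };

static struct segment seg_table[] = {
    {1000, 400},   /* segment 0 (illustrative)                         */
    {1500, 100},   /* segment 1: base 1500, limit 100 (as in the text) */
    {2000, 500},   /* segment 2 (illustrative)                         */
};

static int translate(int s, int offset) {
    if (offset >= seg_table[s].limit) {
        printf("trap: offset %d is out of range for segment %d\n", offset, s);
        return -1;                        /* addressing error */
    }
    return seg_table[s].base + offset;    /* physical address */
}

int main(void) {
    printf("(segment 1, offset 50)  -> %d\n", translate(1, 50));   /* 1550 */
    printf("(segment 1, offset 100) -> %d\n", translate(1, 100));  /* trap */
    return 0;
}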

Contiguous memory allocation             Non-contiguous memory allocation

Memory is allocated to a process in      The process is divided into multiple
a continuous block.                      parts which are allocated at different
                                         locations in memory.
Memory is wasted.                        No memory is wasted.
No overhead in address translation.      Overhead in address translation.
Easy for the OS.                         Difficult for the OS.
Faster (sequential blocks).              Slower (different locations).
Types:                                   Types:
1. Fixed size partition                  1. Paging
2. Variable size partition               2. Segmentation

Paging                                   Segmentation

Virtual memory is divided into           Virtual memory is divided into
same-size blocks called pages.           variable-size blocks called segments.
Faster.                                  Slower.
Internal fragmentation occurs.           External fragmentation occurs.
Logical address:                         Logical address:
(Page number, Page offset)               (Segment number, Segment offset)
Page size is determined by hardware.     Segment size is determined by the user.
Page table contains page number          Segment table contains base and
and frame number.                        limit.
