
Unit 3

SWAPPING:
 A process needs to be in memory to be executed. A process can be swapped temporarily
out of memory to a backing store, and then brought back into memory for continued
execution.
 For example, assume a multiprogramming environment with a round-robin CPU
scheduling algorithm. When a time quantum expires, the memory manager starts to swap
out the process that has just finished, and to swap in another process into the memory space
that has been freed.

 In the meantime, the CPU scheduler allocates a time slice to some other process in memory.
When each process finishes its quantum, it is swapped with another process. Ideally, the
memory manager can swap processes fast enough that some process is always in memory,
ready to execute, when the CPU scheduler wants to reschedule the CPU.

 A variant of this swapping policy is used for priority-based scheduling algorithms. If a
higher-priority process arrives and wants service, the memory manager can swap out a
lower-priority process so that it can load and execute the higher-priority process.
 When the higher-priority process finishes, the lower-priority process can be swapped back
in and continued. This variant of swapping is sometimes called roll out, roll in.

 Normally a process that is swapped out will be swapped back into the same memory space
that it occupied previously. This restriction is dictated by the method of address binding. If
binding is done at load time, the process cannot be moved to a different location. If
execution-time binding is used, a process can be swapped into a different memory
space.

 Swapping requires a backing store. The backing store is commonly a fast disk. It must be
large enough to accommodate copies of all memory images for all users, and it must provide
direct access to these memory images.

 The system maintains a ready queue consisting of all processes whose memory images are
on the backing store or in memory and are ready to run. When the CPU scheduler decides to
execute a process, it calls the dispatcher.

 The context-switch time in such a swapping system is fairly high.


 As an example of context-switch time:
 Assume the process is 1 MB in size and the backing store is a standard hard disk with a
transfer rate of 5 MB per second.
 The actual transfer of the 1 MB process to or from memory takes:

1000 KB / 5000 KB per second = 1/5 second = 200 milliseconds
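As a quick sanity check of this arithmetic, here is a minimal Python sketch; it uses the sizes and transfer rate assumed above and, like the text, ignores disk latency:

```python
# Back-of-the-envelope check of the transfer time above.
# Assumptions: 1 MB process, 5 MB/s transfer rate, disk latency ignored.

process_kb = 1000            # 1 MB process
rate_kb_per_s = 5000         # 5 MB per second

one_transfer_ms = process_kb / rate_kb_per_s * 1000
print(one_transfer_ms)       # 200.0 ms for a single swap in or out

# A context switch that swaps one process out and another in costs
# two such transfers:
print(2 * one_transfer_ms)   # 400.0 ms
```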

Swapping is constrained by other factors as well:


 If we want to swap a process, we must be sure that it is completely idle; in particular, that
no I/O operation is pending. If a process is waiting for an I/O operation when we want to
swap it out to free its memory, it cannot be swapped.
 Assume that the I/O operation was queued because the device was busy. If we were to
swap out process P1 and swap in process P2, the I/O operation might then attempt to use the
memory that now belongs to P2.
Figure: Schematic view of swapping

CONTIGUOUS MEMORY ALLOCATION:
Main memory is usually divided into two partitions:
1. The resident operating system, usually held in low memory with the interrupt vector
2. User processes, held in high memory
Memory Protection:
 Memory protection ensures that:
 The OS is protected from user processes.
 Each user process is protected from other user processes.
 A relocation register and a limit register are used for this memory protection.
 Relocation register: holds the smallest physical address of the process.
 Limit register: holds the range of legal logical addresses.
 Protection steps (a sketch in Python follows the list):
1. Every logical address is checked to verify that it is smaller than the value in the limit register.
2. If it is, the logical address is added to the relocation register.
3. This mapping is done by the MMU, and the mapped address is sent to memory.
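A minimal sketch of these steps; the register values below are illustrative assumptions, not from the text:

```python
# Relocation/limit protection check performed by the MMU (sketch).

RELOCATION = 14000   # assumed: smallest physical address of the process
LIMIT = 3000         # assumed: range of legal logical addresses

def translate(logical_address):
    # Step 1: every logical address must be smaller than the limit register.
    if logical_address >= LIMIT:
        raise MemoryError("trap: addressing error beyond limit register")
    # Step 2: legal addresses are relocated by adding the relocation register.
    return RELOCATION + logical_address

print(translate(100))    # 14100
# translate(3500) would trap: 3500 >= LIMIT
```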

Memory Allocation:
1. Multiple-partition allocation
 Memory is divided into several fixed-size partitions. Each partition may hold exactly
one process.
 This is the simplest method of memory allocation.
 When a partition is free, a process is selected from the input queue and loaded into the
free partition.
 When the process terminates, its partition becomes available for another process.
 This method was used in the IBM OS/360 operating system.

 Hole – a block of available memory; holes of various sizes are scattered throughout memory.
 When a process arrives, it is allocated memory from a hole large enough to accommodate it.
 The operating system maintains information about:
a) Allocated partitions
b) Free partitions (holes)

2. Dynamic Storage-Allocation Problem:

 First-fit: Allocate the first hole that is big enough.
 Best-fit: Allocate the smallest hole that is big enough; the entire list must be searched,
unless it is ordered by size. Produces the smallest leftover hole.
 Worst-fit: Allocate the largest hole; the entire list must also be searched. Produces the largest
leftover hole.
 First-fit and best-fit are better than worst-fit in terms of speed and storage utilization. (A
sketch of all three strategies follows.)
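A hedged sketch of the three strategies over a list of free hole sizes (plain KB sizes; a real allocator would also track hole addresses and coalesce neighbors). Running it with the partitions from Problem 1 below reproduces the best-fit placements shown there:

```python
# First-fit, best-fit, and worst-fit over a list of hole sizes (sketch).

def first_fit(holes, request):
    for i, h in enumerate(holes):
        if h >= request:
            return i
    return None  # request must wait

def best_fit(holes, request):
    fits = [i for i, h in enumerate(holes) if h >= request]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, request):
    fits = [i for i, h in enumerate(holes) if h >= request]
    return max(fits, key=lambda i: holes[i]) if fits else None

def place_all(strategy, holes, requests):
    holes = holes[:]                     # work on a copy
    for r in requests:
        i = strategy(holes, r)
        if i is None:
            print(f"{r}K must wait")
        else:
            print(f"{r}K placed in {holes[i]}K hole")
            holes[i] -= r                # leftover becomes a smaller hole

place_all(best_fit, [100, 500, 200, 300, 600], [212, 417, 112, 426])
```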
3. Fragmentation:
 External fragmentation:
Enough total memory space exists to satisfy a request, but it is not contiguous.
 Internal fragmentation:
Allocated memory may be slightly larger than the requested memory; this size difference is
memory internal to a partition that is not being used.
 External fragmentation can be reduced by compaction:
 Shuffle memory contents to place all free memory together in one large block.
 Compaction is possible only if relocation is dynamic and done at execution time.
 I/O problem:
1. Latch the job in memory while it is involved in I/O.
2. Do I/O only into OS buffers.

Simple approaches to compaction:
Solution 1:
 Simply move all processes toward one end of memory.
 All the holes move in the other direction, producing one large hole of available memory.
However, this is expensive.
Solution 2:
 Allow the logical address space of a process to be non-contiguous.
 There are two techniques for doing this:
1. Paging
2. Segmentation

Problem: Describe the following allocation algorithms: a. First fit b. Best fit c. Worst fit

Answer:
a. First-fit:
Search the list of available memory and allocate the first block that is big enough.
b. Best-fit:
Search the entire list of available memory and allocate the smallest block that is big enough.
c. Worst-fit:
Search the entire list of available memory and allocate the largest block.
(The justification for this scheme is that the leftover block produced would be larger and potentially
more useful than that produced by the best-fit approach.)

Problem: 1
Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would each
of the First-fit, Best-fit, and Worst-fit algorithms place processes of 212K, 417K, 112K, and
426K (in order)? Which algorithm makes the most efficient use of memory?
Solution:
First-fit:
 212K is put in 500K partition
 417K is put in 600K partition
 112K is put in 288K partition (new partition 288K = 500K - 212K)
 426K must wait
Best-fit:
 212K is put in 300K partition
 417K is put in 500K partition
 112K is put in 200K partition
 426K is put in 600K partition
Worst-fit:
 212K is put in 600K partition
 417K is put in 500K partition
 112K is put in 388K partition
 426K must wait
Note: In this example, best-fit turns out to be the best.
Problem: 2

Consider the following segment table:
Segment Base Length
0 219 600
1 2300 14
2 90 100
3 1327 580
4 1952 96
Problem:
What are the physical addresses for the following logical addresses?
a. 0,430
b. 1,10
c. 2,500
d. 3,400
e. 4,112
Answer:
a. 219 + 430 = 649
b. 2300 + 10 = 2310
c. illegal reference, trap to operating system
d. 1327 + 400 = 1727
e. illegal reference, trap to operating system
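A small Python sketch of this segment-table lookup, using the base/length values from the table above; it reproduces the answers:

```python
# Segment-table translation (sketch): physical = base + offset, if legal.

SEGMENT_TABLE = {  # segment: (base, length)
    0: (219, 600),
    1: (2300, 14),
    2: (90, 100),
    3: (1327, 580),
    4: (1952, 96),
}

def translate(segment, offset):
    base, length = SEGMENT_TABLE[segment]
    if offset >= length:  # offset must lie between 0 and the segment limit
        raise MemoryError("illegal reference, trap to operating system")
    return base + offset

print(translate(0, 430))   # 649
print(translate(1, 10))    # 2310
print(translate(3, 400))   # 1727
# translate(2, 500) and translate(4, 112) trap: offsets exceed the lengths
```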

Paging:
 Paging is an efficient memory-management scheme because it is a non-contiguous
memory allocation method.
 In paging, the process is divided into small parts (pages) that are loaded into whatever
frames are free in main memory.
Concept:
 Paging is a memory-management scheme.
 Paging reduces external fragmentation.
 Physical memory is divided into fixed-size blocks called page frames.
 The logical address space of a process is split into blocks of the same size, called
pages.
 The size of a frame is determined by the hardware.

Figure: Paging hardware


 When a process is loaded into memory, the OS loads each page into an unused page frame.
 The page frames used need not be contiguous.
 The OS maintains a page table for each process. The page table gives the frame location
of each page of the process.
Basic Idea:
 In paging, physical memory (main memory) is divided into fixed-size blocks called frames,
and logical memory is divided into blocks of the same size called pages.
 Page size and frame size are equal.
 The size of the page (and frame) depends on the OS and hardware; a typical page size is 4 KB.
 In this scheme, the OS maintains a data structure called the page table, which is used for
mapping purposes.
 The page table specifies useful information: it tells which frames are allocated,
which frames are available, and how many total frames there are.
 A basic page table consists of 2 fields:
1. Page number
2. Frame number
 Each OS has its own method for keeping page tables; most allocate a page table for each
process.
 The CPU-generated address is divided into two parts: the page number and the page offset
(displacement).
 The page number is used to index the page table.
 The page offset is the displacement within the page.
 The logical address space (i.e., the CPU-generated address space) is divided into pages,
each page having a page number and displacement.
 The pages are loaded into available free frames in physical memory.
 The mapping between page number and frame number is done by the page map table.
 The page table specifies which page is loaded in which frame; the displacement within the
page stays the same.
For example:
Two jobs are in the ready queue, with sizes 16 KB and 24 KB. The page size is 4 KB, and the
available main memory is 72 KB (i.e., 18 frames).
So job 1 is divided into 4 pages, and job 2 is divided into 6 pages.

Page table for Job 1:
P.No F.No
0 10
1 03
2 12
3 01

Page table for Job 2:
P.No F.No
0 08
1 16
2 13
3 09
4 17
5 19

 The four pages of job 1 are loaded at different locations in main memory.
 The OS provides a page table for each process; the page table specifies each page's location
in main memory.
 The capacity of main memory in this example is 18 frames.
 The two jobs occupy only 10 pages, so the remaining 8 frames are free.
 The scheduler can give the free frames to other jobs.

Paging model of memory:

 The page table stores the number of the frame allocated to each page.
 The page number is used to index into the page table.
 The page table contains the base address of each page in physical memory.
 This base address is combined with the page offset to define the physical memory address (i.e.,
the address sent to the memory unit).
Page Number and Page Offset Calculation
 The address generated by the CPU is divided into:
 Page number (p) – used as an index into the page table, which contains the base address of each
page in physical memory.
 Page offset (d) – combined with the base address to define the physical memory address that is
sent to the memory unit.
 For a logical address space of size 2^m and a page size of 2^n bytes:

Page number (p) = the high-order m − n bits of the logical address
Page offset (d) = the low-order n bits

 where p = index into the page table

d = displacement within the page
Example:
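A minimal Python sketch of this split and lookup; the values m = 16 and n = 12 (4 KB pages) and the page-table entries are assumptions for illustration, not from the text:

```python
# Splitting a logical address into page number p and offset d (sketch).

m, n = 16, 12                      # assumed: 2^16 address space, 4 KB pages
PAGE_TABLE = {0: 5, 1: 7}          # hypothetical page -> frame entries

def translate(logical):
    p = logical >> n               # high-order m - n bits: page number
    d = logical & ((1 << n) - 1)   # low-order n bits: page offset
    frame = PAGE_TABLE[p]          # index into the page table
    return (frame << n) | d        # frame base combined with the offset

print(hex(translate(0x1ABC)))      # page 1 -> frame 7: 0x7abc
```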
Memory Protection
 Memory protection in a paged environment is implemented by associating a protection bit
with each frame.
 A valid–invalid bit is attached to each entry in the page table:
 “valid” indicates that the associated page is in the process's logical address space, and
is thus a legal page.
 “invalid” indicates that the page is not in the process's logical address space.

Figure: valid or invalid bit in a page table


Shared Pages
 Shared code
 One copy of read-only (reentrant) code is shared among processes (e.g., text editors,
compilers, window systems).
 Shared code must appear in the same location in the logical address space of all
processes.
 Private code and data
 Each process keeps a separate copy of the code and data
 The pages for the private code and data can appear anywhere in the logical address
space


Hardware Support for Paging:

 The page table is kept in main memory.
 The page-table base register (PTBR) points to the page table.
 The page-table length register (PTLR) indicates the size of the page table.
 In this scheme every data/instruction access requires two memory accesses: one for the
page table and one for the data/instruction.
 The two-memory-access problem can be solved by using a special fast-lookup hardware
cache called associative memory or translation look-aside buffer (TLB).
 Some TLBs store address-space identifiers (ASIDs) in each TLB entry. An ASID uniquely
identifies each process and is used to provide address-space protection for that process.
Associative Memory
 Associative memory supports parallel search. Each entry holds a (page #, frame #) pair.

Address translation for (p, d):
 If p is in an associative register, get the frame # out.
 Otherwise, get the frame # from the page table in memory. (A toy model of this sequence
follows.)
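A small Python model of the lookup; a real TLB is an associative, parallel hardware search, with a dict standing in for it here, and the page-table entries are illustrative:

```python
# TLB in front of the page table (sketch).

PAGE_TABLE = {0: 5, 1: 7, 2: 9}   # hypothetical page -> frame entries
tlb = {}                          # page -> frame, initially empty

def lookup(p):
    if p in tlb:                  # TLB hit: frame comes straight out
        return tlb[p]
    frame = PAGE_TABLE[p]         # TLB miss: extra access to the page table
    tlb[p] = frame                # cache the translation for next time
    return frame

print(lookup(1))   # miss: consults the page table, fills the TLB
print(lookup(1))   # hit: answered from the TLB
```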

Figure: Paging Hardware with TLB
Structure of the Page Table
 Hierarchical Paging
 Hashed Page Tables
 Inverted Page Tables
Hierarchical Page Tables
 Break up the logical address space into multiple levels of page tables.
 A simple technique is a two-level page table.
 A logical address (on a 32-bit machine with a 1K page size) is divided into:
 a page number consisting of 22 bits
 a page offset consisting of 10 bits
 Since the page table itself is paged, the page number is further divided into:
 a 12-bit page number (p1)
 a 10-bit page offset (p2)
where p1 is an index into the outer page table, and p2 is the displacement within the page of
the outer page table. A sketch of this split follows.
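A minimal sketch of the three-field split for the 32-bit, 1K-page layout described above:

```python
# Two-level split of a 32-bit logical address with 1K pages (sketch):
# | p1: 12 bits | p2: 10 bits | d: 10 bits |

def split(addr):
    d = addr & 0x3FF             # low 10 bits: offset within the page
    p2 = (addr >> 10) & 0x3FF    # next 10 bits: index into the inner table
    p1 = addr >> 20              # top 12 bits: index into the outer table
    return p1, p2, d

print([hex(x) for x in split(0x12345678)])   # ['0x123', '0x115', '0x278']
```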

Hashed Page Tables
 Common for address spaces larger than 32 bits.
 The virtual page number is hashed into a hash table. Each entry in this table contains a chain of
elements that hash to the same location.
 Virtual page numbers in the chain are compared, searching for a match. If a match is found,
the corresponding physical frame is extracted. (A sketch follows.)
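A small Python sketch of a hashed page table with chaining; the table size and the inserted mapping are illustrative assumptions:

```python
# Hashed page table with chaining (sketch).

TABLE_SIZE = 8
buckets = [[] for _ in range(TABLE_SIZE)]   # each bucket: list of (vpn, frame)

def insert(vpn, frame):
    buckets[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    for v, frame in buckets[vpn % TABLE_SIZE]:   # walk the chain
        if v == vpn:                             # match found:
            return frame                         # extract the physical frame
    raise KeyError("page fault")

insert(0x12345, 7)
print(lookup(0x12345))   # 7
```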

Inverted Page Table

 One entry for each real page (frame) of memory.
 Each entry consists of the virtual address of the page stored in that real memory location,
together with information about the process that owns the page.
 This decreases the memory needed to store the page tables, but increases the time needed to
search the table when a page reference occurs.
 A hash table can be used to limit the search to one — or at most a few — page-table entries.

Advantages of paging:
 Paging eliminates external fragmentation.
 It supports time-sharing systems.
 It supports virtual memory.
 Paging increases memory and processor utilization.
Disadvantages of paging:
 The page-address mapping hardware usually increases the cost of the computer.
 Memory must be used to store the various tables, such as the page table and memory map table.

Segmentation:
 Segmentation is a memory-management scheme that supports the user view of memory.

 A program is a collection of segments.
 Segments are of variable size.
 A segment is a logical unit such as:
 main program,
 procedure,
 function,
 method,
 object,
 local variables, global variables,
 common block,
 stack,
 symbol table, arrays
Logical view of segmentation:

Segmentation Architecture
 A logical address consists of a two-tuple:

 <segment-number, offset>
 Segment table – maps two-dimensional user-defined addresses into one-dimensional
physical addresses; each table entry has:
 base – contains the starting physical address where the segment resides in memory
 limit – specifies the length of the segment
Figure: Segmentation Hardware

 A logical address consists of two parts: a segment number s and an offset into that segment,
d.
 The segment number is used as an index into the segment table. The offset d of the logical
address must be between 0 and the segment limit.
 If it is not, we trap to the OS (logical addressing attempt beyond the end of the segment).
 If the offset is legal, it is added to the segment base to produce the address in physical
memory of the desired byte.
 The segment table is thus essentially an array of base–limit register pairs.
 Segment-table base register (STBR): points to the segment table's location in
memory.
 Segment-table length register (STLR): indicates the number of segments used by a
program; segment number s is legal if s < STLR.

Protection:
 A particular advantage of segmentation is the association of protection with the segments.
 Because the segments represent a semantically defined portion of the program, it is likely
that all entries in a segment will be used in the same way.
 With each entry in the segment table we associate:
 a validation bit (= 0 ⇒ illegal segment)
 read/write/execute privileges
 Protection bits are associated with segments; code sharing occurs at the segment level.
 Since segments vary in length, memory allocation is a dynamic storage-allocation problem.
 A segmentation example is shown in the following diagram.

Sharing:
 Another advantage of segmentation involves the sharing of code or data.
 Each process has a segment table associated with it, which the dispatcher uses to define the
hardware segment table when this process is given the CPU.
 Segments are shared when entries in the segment tables of two different processes point to
the same physical location.
 Code sharing occurs at segment level.

 Any information can be shared if it is defined to be a segment. Several segments can be
shared, so a program composed of several segments can be shared.
 For example, consider the use of a text editor in a time-sharing system. A complete editor
might be quite large, composed of many segments. These segments can be shared among all
users, limiting the physical memory needed to support editing tasks.
 Rather than n copies of the editor, we need only one copy. For each user, we still need
separate, unique segments to store local variables. These segments, of course, would not be
shared.
Advantages:
 Segmentation eliminates internal fragmentation.
 It supports virtual memory.
 It allows dynamic segment growth.
 Segmentation is visible to the programmer.
Disadvantages:
 The maximum size of a segment is limited by the size of main memory.
 It is difficult to manage variable-size segments on secondary storage.
Segmentation with Paging:
 Both paging and segmentation have their own advantages and disadvantages.
 It is better to combine these two schemes to improve on each.
 The combined scheme is to page the segments: each segment is divided into pages, and each
segment maintains its own page table.
 The logical address is therefore divided into 3 parts:
 segment number, page number, and displacement (offset)

For example, a logical address space is divided into three segments, numbered 0 to 2 (Main,
Array, and Stack), each consisting of four pages; main memory has 18 frames.

Page table for Segment 0 (Main):
P.No F.No
0 5
1 7
2 9
3 11

Page table for Segment 1 (Array):
P.No F.No
0 2
1 4
2 6
3 8

Page table for Segment 2 (Stack):
P.No F.No
0 12
1 13
2 14
3 17

 Each segment maintains its own page table; the mapping between page and frame is done
by that page table.

For example, frame number 8 holds the address (1, 3), where
 1 stands for the segment number
 3 stands for the page number
The main advantages of this scheme are that it reduces fragmentation and that it supports the
user view of memory.

Fig: Hardware support for the combined scheme (segmentation with paging)
A logical address <segment no, page no, offset> is translated as follows: the segment number
indexes the segment table (located through the segment-table pointer); the selected entry holds
the limit and base of that segment's page table; the page number then indexes that page table to
obtain the frame number; finally, the frame number is combined with the offset to form the
physical address in main memory.
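A minimal Python sketch of translation under this combined scheme, using the three per-segment page tables from the example above; the 4 KB page size is an assumption, carried over from the earlier paging example:

```python
# (segment, page, offset) translation for the paged-segment example (sketch).

PAGE_TABLES = {                       # segment -> {page: frame}
    0: {0: 5, 1: 7, 2: 9, 3: 11},     # Main
    1: {0: 2, 1: 4, 2: 6, 3: 8},      # Array
    2: {0: 12, 1: 13, 2: 14, 3: 17},  # Stack
}
PAGE_SIZE = 4096                      # assumed 4 KB pages

def translate(segment, page, offset):
    frame = PAGE_TABLES[segment][page]   # per-segment page table lookup
    return frame * PAGE_SIZE + offset    # frame base plus displacement

print(translate(1, 3, 100))   # segment 1, page 3 -> frame 8 -> 32868
```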

Virtual Memory:
 Virtual memory is the separation of user logical memory from physical memory.
 This separation allows an extremely large virtual memory to be provided for programmers
when only a smaller physical memory is available.
 Virtual memory also allows files and memory to be shared by several different processes
through page sharing.
 Virtual memory is commonly implemented by demand paging.
 It can also be implemented in a segmentation system.
 Demand segmentation can also be used to provide virtual memory.

Demand Paging:
 A demand paging system is similar to a paging system with swapping.
 Processes reside on secondary memory (disk).
 When a process is to be executed, it is swapped into memory.
 Rather than swapping in the entire process, a lazy swapper is used.

 The lazy swapper never swaps a page into memory unless that page will be needed.
Basic Concepts:
 When a process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again.
 Instead of swapping in a whole process, the pager brings only those necessary pages into
memory.
 Thus it avoids reading into memory pages that will not be used, decreasing the swap time
and the amount of physical memory needed.
Fig: Transfer of paged memory to contiguous disk space.

Valid–Invalid Bit Method:

 We must now distinguish between the pages that are on the disk and those that are in memory.
 A valid–invalid bit is used: the valid bit indicates that the page is in memory.
 The invalid bit indicates that the page is not in memory but on the disk.

Fig: Page table when some pages are not in main memory.

Page Fault Trap:
 If a process attempts to use a page that was not fetched into memory, a page-fault trap
occurs.
 Access to a page marked invalid causes a page-fault trap.

Steps in handling a Page Fault:


1. We check an internal table to determine whether the reference was a valid or an invalid
memory access.
2. If the reference was invalid, we terminate the process. If it was valid, but we have not yet
fetched that page, we page it in.
3. We find a free frame.
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the
page table to indicate that the page is now in memory.

6. We restart the instruction that was interrupted by the illegal address trap.

Hardware Support:
 The hardware to support demand paging is the same as the hardware for paging and
swapping.
 Page Table: It has the ability to mark an entry invalid through a valid – Invalid bit.
 Secondary memory: This memory holds the pages that are not present in main memory. It
is usually a high-speed disk.
Demand Paging Example:
 Demand paging can have a significant effect on the performance of a computer system.
 Let us calculate the effective memory-access time for a demand-paged system.
 Let p be the probability of a page fault (0 ≤ p ≤ 1).
 Effective access time = (1 − p) × ma + p × page-fault time
Where, p = page-fault rate
ma = memory-access time
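A small Python sketch of this formula; the 200 ns memory-access time and 8 ms page-fault service time are assumed textbook-style values, not from this text:

```python
# Effective access time (EAT) for demand paging, per the formula above.

def effective_access_time(p, ma_ns, fault_ns):
    # p: page-fault probability, ma_ns: memory access time (ns),
    # fault_ns: page-fault service time (ns)
    return (1 - p) * ma_ns + p * fault_ns

ma = 200                 # assumed memory access time: 200 ns
fault = 8_000_000        # assumed page-fault service time: 8 ms, in ns
print(effective_access_time(0.001, ma, fault))   # 8199.8 ns
```

Even a fault rate of one in a thousand slows effective access by a factor of about 40 under these assumptions.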

Process Creation:
Virtual Memory allows other benefits during process creation:
Copy-on-Write & Memory-Mapped Files

Copy – On – Write (COW):
 COW allows both parent and child processes to initially share the same pages.
 These shared pages are marked as copy-on-write pages, meaning that if either process
writes to a shared page, a copy of the shared page is created.
Memory – Mapped Files:
 Allowing a part of the virtual address space to be logically associated with a file is called
memory mapped files.
 Memory mapping a file is possible by mapping a disk block to a page or pages in memory.
Page Replacement:
 Page replacement takes the following approach.
 If no frame is free, we find one that is not currently being used and free it.
 We can free a frame by writing its contents to swap, and changing the page table.
 We can now use the freed frame to hold the page for which the process faulted.
 We modify the page fault service routine to include page replacement:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page replacement algorithm to select a victim frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the free frame; change the page and frame tables.
4. Restart the user process.

Page Replacement Algorithms:
We shall discuss the following page-replacement algorithms:
 First-In-First-Out - FIFO
 The Least Recently Used – LRU
 The Optimal Algorithm
 The Counting based Algorithm
I. FIFO Page Replacement
 The simplest page-replacement algorithm is a FIFO algorithm.
 A FIFO replacement algorithm associates with each page the time when that page was
brought into memory.
 When a page must be replaced, the oldest page is chosen.
 We replace the page at the head of the queue.
 When a page is brought into memory, we insert it at the tail of the queue.
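A minimal simulation of this policy; the reference string and frame count below are illustrative inputs, not from the text:

```python
# FIFO page replacement (sketch).
from collections import deque

def fifo_faults(refs, nframes):
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == nframes:           # memory full:
                resident.discard(queue.popleft())  # evict the oldest page
            resident.add(page)
            queue.append(page)                     # newest page at the tail
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15 page faults with 3 frames
```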

II. Optimal Page Replacement:
 One result of the discovery of Belady's anomaly was the search for an optimal page-
replacement algorithm.
 An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms, and
will never suffer from Belady's anomaly.
 Such an algorithm does exist, and has been called OPT or MIN.
 It simply replaces the page that will not be used for the longest period of time.
 Use of this page-replacement algorithm guarantees the lowest possible page-fault rate for a
fixed number of frames.

III. LRU page replacement


 The key distinction between the FIFO and OPT algorithms is that the FIFO algorithm uses
the time when a page was brought into memory; the OPT algorithm uses the time when a
page is to be used.
 If we use the recent past as an approximation of the near future, then we will replace the
page that has not been used for the longest period of time.
 This approach is the least-recently-used (LRU) algorithm.
 LRU replacement associates with each page the time of that page's last use. When a page must
be replaced, LRU chooses the page that has not been used for the longest period.

Two implementations are feasible:

 Counters: In the simplest case, we associate with each page-table entry a time-of-use field, and
add to the CPU a logical clock or counter.
 The clock is incremented on every memory reference.
 Whenever a reference to a page is made, the contents of the clock register are copied to the
time-of-use field in the page-table entry for that page.
 In this way, we always have the “time” of the last reference to each page.
 We replace the page with the smallest time value.
 Stack: Another approach to implementing LRU replacement is to keep a stack of page
numbers.
 Whenever a page is referenced, it is removed from the stack and put on top.
 In this way, the top of the stack is always the most recently used page and the bottom is the
LRU page.
 Because entries must be removed from the middle of the stack, the stack is best implemented
as a doubly linked list, with head and tail pointers.
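A sketch of the stack approach, with an ordered dictionary standing in for the doubly linked list (most recently used page at the end, LRU page at the front); the reference string is the same illustrative one used for FIFO above:

```python
# LRU page replacement via an ordered dict acting as the stack (sketch).
from collections import OrderedDict

def lru_faults(refs, nframes):
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # referenced: move to the top
        else:
            faults += 1
            if len(resident) == nframes:
                resident.popitem(last=False)  # evict the LRU page (bottom)
            resident[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 page faults, vs. 15 for FIFO above
```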
IV. Counting-Based Page Replacement:
 There are many other algorithms that can be used for page replacement.
 For example, we could keep a counter of the number of references that have been made to
each page, and develop the following two schemes.
 The least frequently used (LFU) page-replacement algorithm requires that the page with the
smallest count be replaced.

 The reasoning behind this selection is that an actively used page should have a large reference
count.
 This algorithm suffers from the situation in which a page is used heavily during the initial
phase of a process, but then is never used again.
 Since it was used heavily, it has a large count and remains in memory even though it is no
longer needed.
 One solution is to shift the counts right by 1 bit at regular intervals, forming an
exponentially decaying average usage count.
 The most frequently used (MFU) page-replacement algorithm is based on the argument that
the page with the smallest count was probably just brought in and has yet to be used.

Allocation of Frames:
 Each process needs a minimum number of frames.
 Example: IBM 370 – 6 pages needed to handle the SS MOVE instruction:
 the instruction is 6 bytes and might span 2 pages
 2 pages to handle the from operand
 2 pages to handle the to operand
 Two major allocation schemes
 fixed allocation
 priority allocation
Fixed Allocation (proportional to process size):

s_i = size of process p_i
S = Σ s_i
m = total number of frames
a_i = allocation for p_i = (s_i / S) × m

Example: m = 64, s_1 = 10, s_2 = 127, so S = 137:
a_1 = (10 / 137) × 64 ≈ 5
a_2 = (127 / 137) × 64 ≈ 59
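As a quick check of this arithmetic, a minimal Python sketch:

```python
# Proportional allocation: a_i = (s_i / S) * m, rounded to whole frames.

def proportional(sizes, m):
    S = sum(sizes)                        # total size of all processes
    return [round(s / S * m) for s in sizes]

print(proportional([10, 127], 64))        # [5, 59], matching the example
```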
Priority Allocation:

 Use a proportional allocation scheme using priorities rather than size
 If process Pi generates a page fault,
 select for replacement one of its frames
 select for replacement a frame from a process with lower priority number
Global vs. Local Allocation:
 Global replacement – process selects a replacement frame from the set of all frames; one
process can take a frame from another
 Local replacement – each process selects from only its own set of allocated frames

Thrashing:
 If page faults occur again and again, the process continues to fault, replacing pages
that it then faults on and brings back in right away. This high paging activity is called
thrashing.
 A process is thrashing if it is spending more time paging than executing.
I. Cause of Thrashing:
Thrashing results in several performance problems.
CPU Utilization:
 The OS monitors CPU utilization.
 If CPU utilization is too low, we increase the degree of multiprogramming by introducing a
new process to the system.
 A global page-replacement algorithm is used: it replaces pages without regard to the process
to which they belong.
 Now suppose a process enters a new phase in its execution and needs more frames.
 It starts faulting and taking frames away from other processes.
 These processes need those pages, so they also fault, taking frames from other processes.
 These faulting processes must use the paging device to swap pages in and out.
 The scheduler sees the decreasing CPU utilization, and increases the degree of
multiprogramming as a result.
 The new process tries to get started by taking frames from running processes, causing more
page faults, and a longer queue for the paging device.
 The CPU scheduler tries to increase the degree of multiprogramming more.

 Thrashing has occurred, and memory-access time increases.
 No other work is getting done, because the processes are spending all their time paging.

Local Replacement Algorithm:


 Thrashing can be limited by using a local replacement algorithm.
 With local replacement, if one process starts thrashing, it cannot take frames from another
process.
 If processes are thrashing, they will be in the queue for the paging device most of the time.
 The average service time for a page fault will increase, due to the longer average queue for the
paging device. Thus, the effective access time will increase even for a process that is not
thrashing.
To prevent thrashing:
 The number of frames each process needs must be specified. But how do we know how many
frames a process “needs”?
 There are several techniques. The working-set strategy starts by looking at how many
frames a process is actually using. This approach defines the locality model of process
execution.
 The locality model states that, as a process executes, it moves from locality to locality. A
locality is a set of pages that are actively used together. A program is generally composed of
several different localities, which may overlap.

II. Working-Set Model:

 The working-set model is based on the assumption of locality.
 This model uses a parameter Δ to define the working-set window.
 The idea is to examine the most recent Δ page references.
 The set of pages in the most recent Δ page references is the working set.
 If a page is in active use, it will be in the working set.
 If it is no longer being used, it will drop from the working set Δ time units after its last
reference.
 For example, given the sequence of memory references shown in the figure, if Δ = 10
memory references, then the working set at time t1 is {1,2,5,6,7}. By time t2, the working set
has changed to {3,4}.
Figure: Working-set model.
reference string: ... 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 3 4 4 4 1 3 2 3 4 4 4 3 4 4 4 ...
(t1 falls after the first ten references; t2 falls after the run of 3s and 4s)
WS(t1) = {1,2,5,6,7}    WS(t2) = {3,4}
The most important property of the working set is its size. If we compute the working-set size
WSS_i for each process in the system, we can consider

D = Σ WSS_i

where D is the total demand for frames; process i needs WSS_i frames. If the total demand is
greater than the total number of available frames (D > m), thrashing will occur, because some
processes will not have enough frames.
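A small Python sketch that computes WS(t) for the reference string in the figure:

```python
# Working set WS(t): the set of pages among the most recent delta references.

def working_set(refs, t, delta):
    return set(refs[max(0, t - delta):t])

refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1,      # ... up to t1
        6, 2, 3, 4, 1, 2,
        3, 4, 4, 4, 3, 4, 3, 4, 4, 4,      # ... up to t2
        1, 3, 2, 3, 4, 4, 4, 3, 4, 4, 4]

print(working_set(refs, 10, 10))   # {1, 2, 5, 6, 7} = WS(t1)
print(working_set(refs, 26, 10))   # {3, 4} = WS(t2)
```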

Use of the working-set model:

 The OS monitors the working set of each process and allocates to that working set enough
frames to provide it with its working-set size.
 If there are enough extra frames, another process can be initiated.
 If the sum of the working-set sizes increases, exceeding the total number of available frames,
the operating system selects a process to suspend.
 The suspended process's pages are written out and its frames are reallocated to other processes.
 The suspended process can be restarted later.
 This working-set strategy prevents thrashing while keeping the degree of multiprogramming
as high as possible.

