Unit 1 Part 2-1 Memory Management

1) Memory management involves separating each process's memory space using base and limit registers to define the legal address range. 2) Address binding can occur at compile/load time or at run time, using virtual addresses and a memory-management unit to map virtual addresses to physical addresses. 3) Dynamic loading loads routines only when they are needed, improving memory utilization compared with statically linking all routines. 4) Swapping allows more processes to be kept than physical memory can hold by writing processes to disk when they are not in use. 5) Contiguous allocation places each process in a single contiguous block of memory to simplify allocation.


RECAP - Introduction to OS

Quiz 2

Dr. Neetu Narwal, MSI


Q1. A distributed system is a system whose components are located on
different computers on a network, which communicate and coordinate their
actions by passing messages to one another.
What is/are the main advantages of distributed systems?
a) Resource Sharing
b) Computation Speed
c) Hardware Abstraction
d) Reliability
Answer
1) a and b
2) a and c
3) c and d
4) All of the above



Q2. Which one of the following is false?
Answer
1) Monolithic Kernel has direct communication with all the modules.
2) Monolithic Kernel is faster than microkernel.
3) Microkernel is faster than monolithic kernels
4) Monolithic Kernel is more crashable compared to microkernel.



Q3. To extend the I/O address range, we use the low memory region of the
RAM as memory-mapped I/O.
Answer
1) True
2) False



Q4.

Answer
1) 1-d, 2-c, 3-b, 4-a
2) 1-c, 2-d, 3-b, 4-a
3) 1-b, 2-a, 3-c, 4-d
4) 1-a, 2-b, 3-d, 4-a



 Q5. For monolithic kernels, which of the following is false?
 i) The entire kernel runs in protected mode.
 ii) Communication between kernel modules is by system calls.
 iii) Is likely to be more crashable.
1) I only
2) II only
3) I and III only
4) II and III only



Q6. _____________ module in the operating system helps in communicating
with hardware devices.



Q7. In the absence of system calls, a user process can never dynamically
allocate memory.
Answer
1) True
2) False



Q8. What are the features of Contiki OS?
a) Provide good user interface features
b) Energy aware scheduling.
c) Small footprint
d) Zigbee protocol support
Answer
1) a , b and c
2) a , c and d
3) b, c and d
4) All of the above



Answers
 Answer 1 – option 4 - All of the above
 Answer 2 – option 1 - Monolithic Kernel has direct communication with all
the modules.
 Answer 3 – option 2 – False
 Answer 4 – option 1 1-d, 2-c, 3-b, 4-a
 Answer 5 – option 2 - II only
 Answer 6 – Device Drivers
 Answer 7 – option 1 –True
 Answer 8 – option 3 - b, c and d



Memory Management
Dr. Neetu Narwal
Assoc. Prof.
BCA(M)



Basic Hardware

Main memory and the registers built into the processor itself are the only general-purpose storage that the CPU can access directly. There are machine instructions that take memory addresses as arguments, but none that take disk addresses.



For proper system operation, we must protect the operating system from access by user processes, as well as protect user processes from one another. This protection must be provided by the hardware, because the operating system doesn't usually intervene between the CPU and its memory accesses.



We first need to make sure that each process has a
separate memory space.
Separate per-process memory space protects the
processes from each other and is fundamental to having
multiple processes loaded in memory for concurrent
execution.
To separate memory spaces, we need the ability to
determine the range of legal addresses that the process
may access and to ensure that the process can access
only these legal addresses.



We can provide this protection
by using two registers, usually a
base and a limit, as illustrated in
Figure 8.1.
The base register holds the
smallest legal physical memory
address; the limit register
specifies the size of the range.
For example, if the base register
holds 300040 and the limit
register is 120900, then the
program can legally access all
addresses from 300040 through
420939 (inclusive).



[Figure: memory layout showing three processes P1, P2, and P3, each with its own base and limit values]
Q1. What are the values of the base and limit registers of P1, P2, and P3?
Q2. What happens when P1 tries to access address 300100? At what stage is it trapped, and why?
Q3. What happens when P2 tries to access address 2561000?
Q4. What happens when P3 tries to access address 430100?
Address Binding

Usually, a program resides on a disk as a binary executable file. To run, the program must be brought into memory and placed within the context of a process, where it becomes eligible for execution on an available CPU. As the process executes, it accesses instructions and data from memory. Eventually, the process terminates, and its memory is reclaimed for use by other processes.



 Binding addresses at either compile or load time generates
identical logical and physical addresses. However, the execution-
time address-binding scheme results in differing logical and physical
addresses.
 In this case, we usually refer to the logical address as a virtual
address.
 We use logical address and virtual address interchangeably in this
text. The set of all logical addresses generated by a program is a
logical address space.
 The set of all physical addresses corresponding to these logical
addresses is a physical address space. Thus, in the execution-time
address-binding scheme, the logical and physical address spaces
differ.



 The run-time mapping from virtual to physical addresses
is done by a hardware device called the memory-
management unit (MMU)



 We can choose from many different methods to accomplish such mapping.
 Here we illustrate the mapping with a simple MMU scheme that is a generalization of the base-register scheme: the base register is now called a relocation register.
 The value in the relocation register is added to every address
generated by a user process at the time the address is sent to
memory.



Dynamic Loading
 It has been necessary for the entire program and all data of a
process to be in physical memory for the process to execute. The
size of a process has thus been limited to the size of physical
memory.
 To obtain better memory-space utilization, we can use dynamic
loading.
 With dynamic loading, a routine is not loaded until it is called. All
routines are kept on disk in a relocatable load format.
 The main program is loaded into memory and is executed. When a
routine needs to call another routine, the calling routine first checks
to see whether the other routine has been loaded. If it has not, the
relocatable linking loader is called to load the desired routine into
memory.
 The advantage of dynamic loading is that a routine is loaded
only when it is needed.
 Dynamically linked libraries (DLLs) are system libraries that are
linked to user programs when the programs are run.
 Some operating systems support only static linking, in which
system libraries are treated like any other object module and
are combined by the loader into the binary program image.
 Dynamic linking, in contrast, is similar to dynamic loading.
 A second advantage of DLLs is that these libraries can be
shared among multiple processes, so that there is only one
instance of the DLL in main memory. For this reason, DLLs are also
known as shared libraries, and are used extensively in
Windows and Linux systems.
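As an analogy for dynamic loading and dynamic linking on a Linux system (not part of the slides), the POSIX dlopen/dlsym interface brings a shared library into memory only when it is first needed. The library name libmathutil.so and the symbol cube below are hypothetical; only the dlfcn calls are real.

```c
#include <stdio.h>
#include <dlfcn.h>   /* dlopen, dlsym, dlclose, dlerror (link with -ldl) */

int main(void)
{
    /* Nothing is loaded yet; the library is brought in only when required. */
    void *handle = dlopen("libmathutil.so", RTLD_LAZY);   /* hypothetical library */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name, much as a relocatable linking loader
       resolves a routine that has not been loaded before. */
    double (*cube)(double) = (double (*)(double))dlsym(handle, "cube"); /* hypothetical symbol */
    if (!cube) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cube(3.0) = %f\n", cube(3.0));
    dlclose(handle);   /* the library mapping can be dropped when no longer used */
    return 0;
}
```

Because every process that opens libmathutil.so can share the same mapping of its code, only one copy of the library needs to be in main memory, which is the shared-library benefit described above.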



Swapping

A process can be swapped temporarily out of main memory to a backing store and then brought back into memory for continued execution. Swapping makes it possible to keep more processes than would otherwise fit in physical memory.


Time elapsed due to swapping (context-switch time)
 Let's assume that the user process is 100 MB in size and the backing store is a standard hard disk with a transfer rate of 50 MB per second. The actual transfer of the 100-MB process to or from main memory takes
100 MB / 50 MB per second = 2 seconds
 This transfer time is 2,000 milliseconds. Since we must swap both out and in, the total swap time is about 4,000 milliseconds (4 seconds).
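The same estimate, written as a formula (ignoring seek and rotational latency, as the slide does):

```latex
t_{\text{swap}} \approx 2 \times \frac{\text{process size}}{\text{transfer rate}}
               = 2 \times \frac{100\ \text{MB}}{50\ \text{MB/s}}
               = 4\ \text{seconds}
```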



Contiguous Memory Allocation

 The main memory must accommodate both the operating system and the various user processes.
 The memory is usually divided into two partitions: one for
the operating system and one for the user processes.
 We can place the operating system in either low
memory addresses or high memory addresses.



 We usually want several user processes to reside in
memory at the same time. We therefore need to
consider how to allocate available memory to the
processes that are waiting to be brought into memory.
 In contiguous memory allocation, each process is
contained in a single section of memory that is
contiguous to the section containing the next process.



Memory Protection
 We can prevent a process from accessing
memory that it does not own by combining
two ideas previously discussed.
 If we have a system with a relocation
register, together with a limit register, we
accomplish our goal.
 The relocation register contains the value of
the smallest physical address; the limit
register contains the range of logical
addresses (for example, relocation = 100040
and limit = 74600). Each logical address must
fall within the range specified by the limit
register.
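A minimal C sketch (not from the slides) of what the MMU does with the relocation and limit registers, using the numbers from this slide (relocation = 100040, limit = 74600); translate() is a hypothetical name for the hardware step.

```c
#include <stdio.h>
#include <stdlib.h>

static const unsigned long relocation_reg = 100040; /* smallest physical address of the partition */
static const unsigned long limit_reg      = 74600;  /* range of legal logical addresses           */

/* The MMU checks the logical address against the limit register and, if it
   is legal, adds the relocation register to form the physical address. */
static unsigned long translate(unsigned long logical)
{
    if (logical >= limit_reg) {
        fprintf(stderr, "trap: logical address %lu exceeds limit %lu\n",
                logical, limit_reg);
        exit(EXIT_FAILURE);
    }
    return logical + relocation_reg;
}

int main(void)
{
    printf("logical 0     -> physical %lu\n", translate(0));      /* 100040 */
    printf("logical 74599 -> physical %lu\n", translate(74599));  /* 174639 */
    /* translate(74600) would trap: it lies outside the legal range. */
    return 0;
}
```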



Memory Allocation
One of the simplest methods of allocating memory is to
assign processes to variably sized partitions in memory,
where each partition may contain exactly one process.
In this variable partition scheme, the operating system
keeps a table indicating which parts of memory are
available and which are occupied. Initially, all memory is
available for user processes and is considered one large
block of available memory space.



Eventually, as you will see, memory contains a set of holes
of various sizes. This procedure is a particular instance of
the general dynamic storage allocation problem, which
concerns how to satisfy a request of size n from a list of free
holes. There are many solutions to this problem.
The first-fit, best-fit , and worst-fit strategies are the ones
most commonly used to select a free space from the set of
available spaces.



First fit. Allocate the first space that is big enough.
Searching can start either at the beginning of the set of
holes or at the location where the previous first-fit search
ended. We can stop searching as soon as we find a free
space that is large enough.
Best fit . Allocate the smallest space that is big enough. We
must search the entire list, unless the list is ordered by size.
This strategy produces the smallest leftover space.
Worst fit. Allocate the largest space. Again, we must
search the entire list, unless it is sorted by size. This strategy
produces the largest leftover space, which may be more
useful than the smaller leftover space from a best-fit
approach.
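A minimal sketch of the three strategies over an array of free-hole sizes (the hole and request sizes in main() are made up for illustration); each function returns the index of the chosen hole, or -1 if no hole is large enough.

```c
#include <stdio.h>

/* First fit: take the first hole that is big enough. */
static int first_fit(const int holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;
    return -1;
}

/* Best fit: take the smallest hole that is big enough (search the whole list). */
static int best_fit(const int holes[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;
    return best;
}

/* Worst fit: take the largest hole that is big enough (again, search the whole list). */
static int worst_fit(const int holes[], int n, int request)
{
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;
    return worst;
}

int main(void)
{
    int holes[] = {400, 60, 250, 500};   /* free-hole sizes in KB (illustrative) */
    int n = sizeof holes / sizeof holes[0];
    int request = 200;                   /* request size in KB (illustrative)    */

    printf("first fit -> hole %d\n", first_fit(holes, n, request));  /* index 0 (400 KB) */
    printf("best fit  -> hole %d\n", best_fit(holes, n, request));   /* index 2 (250 KB) */
    printf("worst fit -> hole %d\n", worst_fit(holes, n, request));  /* index 3 (500 KB) */
    return 0;
}
```

In a real allocator the chosen hole would then be split: the requested amount is carved off and the leftover becomes a new, smaller hole in the free list.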
9.6 Given six memory partitions of 300 KB, 600 KB, 350 KB, 200 KB, 750 KB,
and 125 KB (in order), how would the first-fit, best-fit, and worst-fit
algorithms place processes of size 115 KB, 500 KB, 358 KB, 200 KB, and
375 KB (in order)?



Fragmentation
 Both the first-fit and best-fit strategies for memory allocation suffer from
external fragmentation. As processes are loaded and removed from
memory, the free memory space is broken into little pieces. External
fragmentation exists when there is enough total memory space to satisfy a
request but the available spaces are not contiguous: storage is fragmented
into a large number of small holes.
 The memory allocated to a process may be slightly larger than the requested memory; the difference between these two numbers is internal fragmentation: unused memory that is internal to a partition.
 One solution to the problem of external fragmentation is compaction. The
goal is to shuffle the memory contents so as to place all free memory
together in one large block. Compaction is not always possible, however. If
relocation is static and is done at assembly or load time, compaction
cannot be done. It is possible only if relocation is dynamic and is done at
execution time.
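A rough sketch of compaction under dynamic relocation (not from the slides): the allocated segments are slid down toward low memory so that all free space ends up in one large block. In a real system each process's data is copied and its relocation register updated, which is exactly why static relocation rules compaction out.

```c
#include <stdio.h>

struct segment {
    const char   *name;   /* process occupying the segment      */
    unsigned long base;   /* current starting physical address  */
    unsigned long size;   /* size of the segment                */
};

/* Slide every segment down so allocated memory is contiguous from
   next_free upward; each new base is the value the process's
   relocation register would be updated to after its memory is copied. */
static void compact(struct segment segs[], int n, unsigned long next_free)
{
    for (int i = 0; i < n; i++) {
        segs[i].base = next_free;
        next_free += segs[i].size;
    }
}

int main(void)
{
    /* Illustrative layout with holes between the processes. */
    struct segment segs[] = {
        {"P1", 100, 200},
        {"P2", 500, 100},
        {"P3", 900, 300},
    };
    int n = sizeof segs / sizeof segs[0];

    compact(segs, n, 0);   /* pack everything starting at address 0 */

    for (int i = 0; i < n; i++)
        printf("%s now at base %lu (size %lu)\n",
               segs[i].name, segs[i].base, segs[i].size);
    /* All free memory is now one large block above address 600. */
    return 0;
}
```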



9.4 Consider a logical address space of 64 pages of 1,024 words each,
mapped onto a physical memory of 32 frames.
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?



Questions for Students



Q1. Consider the following memory map using the multiprogramming-with-partitions model. Blue represents memory in use, while white represents free memory, as shown in the figure below.

Requests for memory arrive in the following order: 100k, 25k, 125k, 50k. Which of the following allocation strategies satisfies the above requests?
1) Best Fit 2) First Fit 3) Worst Fit

Answer

1) 1,2,3
2) 1,2
3) 2,3
4) None of these



Q2. Consider the following memory map using the
partition model, with Blue representing memory in use
and White representing free memory.

A new process Pnew is of size 10k. In which partition is Pnew placed for
a) Best Fit: _____________k
b) First Fit: _____________k
c) Worst Fit: _____________k

Answer
1) 10, 11, 11
2) 10, 10, 11
3) 11, 12, 40
4) 11, 10, 48



 Q3. Given memory partitions of 100K, 500K, 300K, and 600K (in order), how would each of the algorithms (first-fit, best-fit, worst-fit) in parts a, b, and c place processes of 212K, 417K, 112K, and 426K (in order)? Which algorithm makes the best use of memory?
 Answer_________



Answer

 Answer 1- option 2 – 1,2


 Answer 2 – option 3- 11, 12, 40
 Answer 3- Best-fit approach

