OS_CH-3-4
CHAPTER THREE
MEMORY MANAGEMENT
Outlines
• Physical memory and memory management hardware
• Overlays, swapping, and partitions
• Paging and segmentation
• Page replacement and replacement policies
• Working sets and thrashing
• Caching
Overview of memory management
• Memory management is a core function of an operating system (OS), ensuring that each
process has enough memory to execute efficiently while maximizing system performance
and resource utilization.
• Memory is one of the most vital resources in a computer system.
• Memory is a large array of words or bytes, each with its own address.
• The CPU fetches a program's instructions and data from memory; thus, both the program
and its data must reside in main memory (RAM and ROM).
• The purpose of memory management is to ensure fair, secure, orderly, and efficient use of
memory by providing:
• Sharing of memory: transparency, safety, and efficiency.
• Relocation: the ability of a program to run in different memory locations.
• Memory protection: address translation.
Cont..
Computers have :
• a few megabytes of very fast, expensive, volatile cache memory
Memories are made up of registers; each register in the memory is one storage
location/address.
Cont..
The OS has a component called the memory manager that manipulates and performs
operations on the memory hierarchy.
• Registers: < 1 nsec access time (the fastest level of the hierarchy)
• Just as processes share the CPU, they also share physical memory
Physical Memory
• Refers to the actual RAM (Random Access Memory) installed in the system/computer.
• It is the primary working memory that stores:
• Currently running processes.
• The operating system.
• Data that is being accessed and manipulated.
• Processes must reside in physical memory to execute.
• The OS must allocate memory space to processes, manage free memory, and handle
memory protection.
• Memory is divided into small fixed-size blocks called frames.
• Address Translation:
•Logical Address: Generated by the CPU.
•Physical Address: Actual address in memory (after translation by MMU).
• Example:
• Suppose: Base = 1000, and Limit = 500
• Then the process can access physical memory from address 1000 to 1499 only.
• If it tries to access address 1600, the OS throws an exception → memory protection fault.
Protection
Protection: protecting multiple processes from unwanted interference with each other
Address Protection With Base & Limit Registers
Cont..
How is protection implemented?
• Each logical address must fall within the range specified by the limit register.
• The MMU maps the logical address dynamically by adding the value in the relocation
register.
Example: If, in a dynamic partition memory management system, the current value of
the base register(base) is 52994 and the current value of the bounds register(Limit) is
5131, compute the physical addresses that correspond to the following logical addresses
A) 5130
B) 650
C) 5132
• In dynamic partition memory management, the system uses base and bounds registers
to perform address translation and protection.
• Base Register: Holds the starting physical address of the process’s allocated memory.
• Bounds Register: Holds the maximum size (limit) of the process's memory (i.e., valid logical
address range: 0 to bounds - 1).
Given:
•Base Register = 52994
•Bounds Register = 5131
•Therefore, valid logical addresses: 0 to 5130 (inclusive)
Let's compute (physical address = base + logical address; an address is valid only if the
logical address is less than the bounds value):
A) 5130 → valid (5130 ≤ 5130) → 52994 + 5130 = 58124
B) 650 → valid → 52994 + 650 = 53644
C) 5132 → invalid (5132 > 5130) → protection fault (trap to the OS)
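The translation above can be sketched in code (a minimal Python sketch, not from the slides; the `translate` helper name is mine):

```python
# Sketch: dynamic-partition address translation with the base and
# bounds registers from the example above.

BASE = 52994     # start of the process's partition in physical memory
BOUNDS = 5131    # partition size: valid logical addresses are 0..5130

def translate(logical: int) -> int:
    """Return the physical address, or raise to model a protection trap."""
    if not 0 <= logical < BOUNDS:      # hardware bounds check
        raise MemoryError(f"protection fault: logical address {logical}")
    return BASE + logical              # relocation: add the base register

print(translate(5130))   # 58124 (largest valid logical address)
print(translate(650))    # 53644
# translate(5132) would raise MemoryError -> the OS traps the process
```

Note that the check happens before the addition: a faulting process never sees any physical address outside its partition.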
• Q. What to do when program size is larger than the amount of memory/partition (that exists or
can be) allocated to it?
• Answer:- There are two basic solutions in real memory management: Overlays and Dynamic
Link Libraries (DLLs)
• Before virtual memory systems (like paging or segmentation) became widely used, physical
memory was very limited (often in the kilobyte range). Developers needed a way to:
• Run programs larger than RAM.
• Avoid loading the entire program into memory at once.
• Overlays: Keep in memory only the overlay (those instructions and data that are) needed at any given
phase/time.
• Overlays can be used only for programs that fit this model, i.e., multi-pass programs like compilers.
• Overlays are designed/implemented by the programmer and require an overlay driver.
• No special support is needed from the operating system, but the design of the overlay structure is complex.
How Hardware Overlays Work
• The idea is to: Divide a program into several independent parts called overlay modules.
• Load only one module (or a group) into a specific region of memory (overlay region).
• Replace it with another module when needed.
• Overlays are defined and managed using an overlay structure or tree which determines:
-Which modules can coexist.
-Which modules will replace each other.
• Early systems like MS-DOS and UNIX V6 supported overlays via linkers (e.g., ld) and
overlay management utilities.
Example2 Q: The overlay tree for a program is as shown below:
• Answer:
• Using the overlay concept, we need not have the entire program inside main
memory.
• We only need the parts required at that instant of time: either the Root-A-D,
Root-A-E, Root-B-F, or Root-C-G path.
• An address generated by the CPU is a logical address, whereas an address actually
available on the memory unit is a physical address (i.e., the physical address is the
address seen by the memory unit).
• A logical address is also known as a virtual address; it is independent of where in
physical memory the data lives.
• The set of all physical addresses corresponding to these logical addresses is referred to
as a physical address space.
• The set of all logical addresses generated by a program is referred to as a logical address
space.
• Logical/virtual addresses are translated into physical addresses by a hardware device
called the MMU (Memory Management Unit).
Logical vs physical address space….
The concept of a logical address space that is bound to a separate physical address space is
central to proper memory management.
Physical address: the actual location in the memory unit (the address seen by the memory unit).
The user never accesses the physical address directly.
The physical address space is the set of all physical addresses corresponding to the logical
addresses in the logical address space.
Swapping
To increase CPU utilization in multiprogramming, a memory management scheme known as
swapping can be used.
Swapping is a mechanism in which a process can be swapped (moved) temporarily out of main
memory to secondary storage (disk), making that memory available to other processes.
At a later time, the system swaps the process back from secondary storage into main memory
for continued execution.
A process that is swapped out will be swapped back into the same memory space it occupied
previously:
• If binding is done at assembly or load time, then the process cannot be moved to a different
location.
• If execution-time binding is being used, then it is possible to swap a
process back into a different memory space.
Example: Suppose a user process is 2048 KB in size and the standard hard disk where swapping
takes place has a data transfer rate of about 1 MB (1024 KB) per second.
Then transferring the process takes 2048 KB / 1024 KB per second = 2 seconds.
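The arithmetic can be checked in a few lines (a sketch; the variable names are mine, and the values are the assumed ones from the example):

```python
# Sketch: swap-time arithmetic from the example above.
process_size_kb = 2048        # size of the user process
transfer_rate_kb_s = 1024     # disk transfer rate: 1 MB/s = 1024 KB/s

swap_time_s = process_size_kb / transfer_rate_kb_s
print(swap_time_s)            # time for one transfer, in seconds
# A full swap (swap out + swap back in) would take twice this long.
```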
The purpose of swapping in an operating system is to move data between the hard disk and
RAM so that application programs can use it.
Cont..
• Processes:
• P1: 100 KB → fits in Partition 1
• P2: 90 KB → Partition 2
• P3: 130 KB → ❌ Too big (must wait or be denied)
Dynamic/Variable Partitioning
Internal fragmentation occurs when the allocated memory is slightly larger than the
requested memory, or when a small program occupies an entire partition.
It occurs in fixed partitioning: memory inside a partition but not used by the process is
wasted.
Solution: allocate a very small free partition as part of the larger request.
Paging
Paging is a memory management scheme by which a computer stores and retrieves data from
secondary storage for use in main memory.
It divides the logical memory into fixed-size blocks called pages and the physical memory into
blocks of the same size called frames. Paging avoids external fragmentation
In this scheme, the operating system retrieves data from secondary storage in same-size blocks
called pages.
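The page/frame mapping can be sketched as follows (a minimal Python sketch under assumed values: 1 KB pages and a made-up page table; none of these numbers come from the slides):

```python
# Sketch: paged address translation. A logical address is split into a
# page number and an offset; the page table maps pages to frames.

PAGE_SIZE = 1024                      # bytes per page (= bytes per frame)

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)  # split the logical address
    frame = page_table[page]                   # lookup (a missing key ~ page fault)
    return frame * PAGE_SIZE + offset          # physical address within the frame

print(translate(1030))   # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```

Because the offset is carried over unchanged, pages and frames must be the same size, which is exactly the point made in the next slide.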
Pages vs Frames
• A page and a page frame are the same size.
• A page fault happens when a program tries to access a page (a block of memory) that is not currently loaded into the
physical memory (RAM).
• In other words:
• If the page is already in one of the frames, there is no page fault.
• If the page is NOT in any frame, a page fault occurs, and the operating system must: locate
the page on disk, find a free frame (evicting a resident page via a replacement policy if none
is free), load the page into that frame, update the page table, and restart the faulting instruction.
• Analogy: Imagine a line of people waiting to use a single computer at a library.
• When a new person comes and the computer is full, the person who has been waiting the longest gets off, regardless of
whether they are about to finish or just started.
• How it works: FIFO is like a queue. The first page that enters the memory is the first one to be removed when a new
page needs to come in. It doesn't care how often a page is used; it only tracks when it arrived.
• Idea: When a page needs to be replaced, remove the page that was loaded earliest.
In all our examples, the reference string is 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0,1
For a memory with three frames (Buffer size = 3).
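FIFO on this reference string can be simulated in a few lines (a minimal Python sketch; the `fifo_faults` helper is mine, not from the slides):

```python
# Sketch: FIFO page replacement on the reference string above, 3 frames.
from collections import deque

def fifo_faults(refs, n_frames):
    frames = deque()                 # oldest page sits at the left end
    faults = 0
    for page in refs:
        if page in frames:           # hit: FIFO ignores usage, nothing changes
            continue
        faults += 1                  # miss: page fault
        if len(frames) == n_frames:  # memory full: evict the oldest arrival
            frames.popleft()
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))          # 15 page faults
```

Note how eviction depends only on arrival order, matching the library-queue analogy: the longest-resident page leaves regardless of how recently it was used.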
Causes of Thrashing:
• Too many processes.
• Insufficient memory allocation (Not enough RAM.)
• Poor page replacement policy.
• Ignoring the working set model.
Solutions
• Reduce degree of multiprogramming.
• Use working set-based memory allocation.
• Page fault frequency (PFF) control.
Example of Thrashing:
• Let’s say: 4 processes are running. Each needs 5 pages in its working set.
• Total memory = 16 pages
• But 4 × 5 = 20 pages > 16 → Not enough!
• The OS tries to keep all 4 processes running:
• Page faults increase.
• The CPU sits idle waiting on the disk → performance drops drastically.
CHAPTER FOUR
DEVICE MANAGEMENT
Outlines
• Serial and parallel devices
• Abstracting device differences
• Direct memory access
• Buffering strategies
• Recovery from failures
Device Management
After the completion of this chapter, students will be able to:
Understand I/O devices and I/O operations
Understand the way of communication in computer system
Differentiate Serial and Parallel Information Transfer
Understand the major category of I/O devices
Distinguish I/O hardware devices parts
Know Polling concepts
Know Direct Memory Access
Understand Buffering concepts and types of buffer
Know spooling
Identify Device drivers
Overview of Device management
• Device management is the part of an OS that manages all hardware devices like disks, printers, keyboards,
and network cards.
• The OS provides a uniform interface to interact with diverse hardware and ensures efficient, reliable, and
secure communication.
Keeps track of all the devices and the programs using them
Monitors the status of each device
Decides which process gets the device, when, and for how long
Efficiently allocates and deallocates the devices
Optimizes the performance of individual devices
Most fit into the general categories of the following:
storage devices (disks, tapes),
transmission devices (network cards, modems), and
human-interface devices (screen, keyboard, mouse).
Serial versus Parallel Information Transfer
• Parallel cables that are too long can cause signal skew, where the parallel signals
become "out of step" with each other.
Serial Transfers
• A serial transfer uses a single “lane” in the computer for information transfers.
• This sounds like a recipe for slowdowns, but it all depends on how fast the speed limit is on the
“data highway”.
• The following ports and devices in the computer use serial transfers:
• Serial (also called RS-232 or COM) ports and devices
• USB (Universal Serial Bus) 1.1 and 2.0 ports and devices
• Modems (which can be internal devices or can connect to serial or USB ports)
• IEEE-1394 (FireWire, i.Link) ports and devices
• Serial ATA (SATA) host adapters and drives
Cont
• Serial Devices:
• Transmit data one bit at a time over a single communication line.
• Common examples: Keyboard, mouse, USB devices, modem
• Low cost, simpler cabling
• Slower compared to parallel communication
Cont…
• Serial transfers have the following characteristics:
• One bit at a time is transferred to the device.
• Transmission speeds can vary greatly, depending on the sender and receiver.
• Very few connections are needed in the cable and ports (one transmit, one receive, and a
few control and ground wires).
• Cable lengths can be longer with serial devices. For example, a parallel UltraDMA/66 ATA/IDE
cable can be only 18 inches long for reliable data transmission, whereas a Serial ATA cable
can be almost twice as long.
• The extra speed is possible because serial transfers don’t have to worry about interference or other
problems caused by running so many data lines together.
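The one-bit-at-a-time idea can be modeled in a few lines (a conceptual Python sketch, not real I/O; LSB-first bit ordering is an assumption):

```python
# Sketch: serial vs parallel transfer, modeled in software. A serial link
# sends one bit per "clock tick" over a single line; a parallel link sends
# all 8 bits of a byte at once over 8 lines.

def send_serial(byte: int):
    """Yield one bit per tick, least-significant bit first (an assumption)."""
    for i in range(8):
        yield (byte >> i) & 1            # one bit at a time on a single line

def send_parallel(byte: int):
    """Deliver all 8 bits in a single tick, one bit per line."""
    return [(byte >> i) & 1 for i in range(8)]

print(list(send_serial(0b1010_0001)))    # 8 ticks: [1, 0, 0, 0, 0, 1, 0, 1]
print(send_parallel(0b1010_0001))        # 1 tick:  [1, 0, 0, 0, 0, 1, 0, 1]
```

Both deliver the same bits; the trade-off is ticks (time) versus lines (wires), which is why serial links compensate with a much faster clock.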
Comparison
Feature Serial Devices Parallel Devices