OS Important M-4
Uploaded by chota0101bheem

Operating Systems

3rd SEM Exam – Important Questions

MODULE – 4

1. Explain segmentation in detail with Diagram


Ans:
Segmentation is a memory-management scheme that supports the programmer's view of memory: a program is a collection of variable-sized segments (main program, functions, data, stack, and so on). A logical address is a two-part pair <segment-number, offset>. A segment table maps these pairs to physical memory: each entry holds a segment base (the segment's starting physical address) and a segment limit (its length). On every reference, the hardware checks that the offset is less than the limit; if it is not, the hardware traps to the operating system with an addressing error. Otherwise, the offset is added to the base to form the physical address.
Figure: Segmentation hardware
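The translation performed by the segmentation hardware can be sketched in a few lines (a toy model; the table values and function names are illustrative, not from any real system):

```python
# Segment table: segment number -> (base, limit); values are illustrative.
segment_table = {
    0: (1400, 1000),   # segment 0 starts at physical 1400, length 1000
    1: (6300, 400),
    2: (4300, 400),
}

def translate(segment, offset):
    """Map a logical <segment, offset> pair to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:                  # hardware compares offset to limit
        raise MemoryError("trap: addressing error")
    return base + offset                 # within bounds: add the base
```

For example, <2, 53> maps to 4300 + 53 = 4353, while <1, 400> traps because the offset equals the limit.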

2. Explain paging hardware with TLB


Ans:
With paging, each memory reference would normally require two memory accesses: one for the page-table entry and one for the data itself. To cut this cost, the hardware provides a translation look-aside buffer (TLB): a small, fast, associative cache of recently used page-table entries. On every reference, the page number is presented to the TLB. On a TLB hit, the frame number is available immediately and is combined with the page offset to form the physical address. On a TLB miss, the page table in memory is consulted, the address is formed, and the (page number, frame number) pair is loaded into the TLB (replacing an existing entry if it is full) so that the next reference to the same page hits. Some TLBs store an address-space identifier (ASID) in each entry so that entries for several processes can coexist; without ASIDs the TLB must be flushed on every context switch.
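The hit/miss path can be modelled with a tiny LRU-evicting cache in front of the page table (a sketch only; the capacity, page size, and table contents are illustrative):

```python
from collections import OrderedDict

PAGE_SIZE = 4096
page_table = {0: 5, 1: 9, 2: 1}          # page -> frame (illustrative)

class TLB:
    def __init__(self, capacity=2):
        self.entries = OrderedDict()      # page -> frame, in LRU order
        self.capacity = capacity
        self.hits = self.misses = 0

    def lookup(self, page):
        if page in self.entries:          # TLB hit: frame known at once
            self.hits += 1
            self.entries.move_to_end(page)
            return self.entries[page]
        self.misses += 1                  # TLB miss: walk the page table
        frame = page_table[page]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
        self.entries[page] = frame
        return frame

tlb = TLB()

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    return tlb.lookup(page) * PAGE_SIZE + offset
```

The first access to a page misses and fills the TLB; a repeat access to the same page hits without touching the page table.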
3. What is paging? Explain the structure of the page table
Ans:
Paging:

Paging is a memory management scheme used in operating systems to divide a process's logical
address space into fixed-sized blocks called pages. These pages are of equal size and are mapped
to physical frames in main memory. Paging provides a simple and efficient way to manage memory
by allowing processes to be allocated memory in smaller, fixed-size units.
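The page/offset arithmetic can be illustrated as follows (the page size and table contents are made up for the example):

```python
PAGE_SIZE = 1024                  # illustrative page size in bytes
page_table = [3, 7, 0, 2]         # page number -> frame number (illustrative)

def to_physical(logical):
    """Split the logical address into (page, offset), then relocate."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset
```

Logical address 1030 is page 1, offset 6, so it maps to frame 7: 7 × 1024 + 6 = 7174.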
Structure of the Page Table:
The structure of the page table varies depending on the memory management technique used.
Here are the most common techniques for structuring the page table:

Hierarchical Paging:

1. Problem Solving:
 Divides large page tables into smaller pieces to manage the excessively large logical-address
space.
2. Multi-Level Hierarchy:
 Utilizes a multi-level hierarchy where each level contains page table entries.
3. Reduced Memory Overhead:
 Decreases memory overhead by storing only the necessary portions of the page table.
4. Address Space Management:
 Facilitates efficient management of large address spaces by organizing page tables hierarchically.
5. Improved Access Time:
 Enhances access time by allowing for faster retrieval of page table entries through the hierarchical
structure.
6. Scalability:
 Offers scalability to handle increasingly large logical address spaces by dividing the page table
into smaller, manageable pieces.
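For instance, on a 32-bit address with 4 KiB pages, the 20-bit page number can be split into a 10-bit outer index p1 and a 10-bit inner index p2. A sketch of that split (the bit widths are chosen for this example):

```python
OFFSET_BITS, INNER_BITS = 12, 10       # 4 KiB pages, 10-bit inner index

def split(logical):
    """Return (p1, p2, offset) for a two-level page-table lookup."""
    offset = logical & ((1 << OFFSET_BITS) - 1)
    p2 = (logical >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    p1 = logical >> (OFFSET_BITS + INNER_BITS)
    return p1, p2, offset
```

p1 indexes the outer page table to find an inner page table, which p2 then indexes to find the frame; only the inner tables actually in use need to exist, which is where the memory saving comes from.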
Hashed Page-tables:

1. Address Space Size:


 Used for handling address spaces larger than 32 bits, where traditional page tables become
impractical.
2. Hash Function Usage:
 Employs a hash function to map virtual page numbers to hash table entries.
3. Collision Handling:
 Manages collisions by storing linked lists of elements in hash table entries.
4. Element Composition:
 Each element in the linked list contains virtual page numbers, mapped page-frame values, and
pointers.
5. Efficient Memory Usage:
 Optimizes memory usage by storing only the necessary page table entries in the hash table.
6. Performance Enhancement:
 Improves performance by providing a more efficient mechanism for accessing page table entries,
especially in systems with large address spaces.
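A chained hash table over virtual page numbers can be sketched like this (the table size and mappings are illustrative):

```python
TABLE_SIZE = 8
buckets = [[] for _ in range(TABLE_SIZE)]     # each bucket: a chained list

def insert(vpn, frame):
    buckets[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    for entry_vpn, frame in buckets[vpn % TABLE_SIZE]:
        if entry_vpn == vpn:                  # walk the chain on collision
            return frame
    return None                               # unmapped: page fault
```

Virtual page numbers 3 and 11 collide in bucket 3 here, so the lookup walks the chain until it finds the matching entry.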
Inverted Page Tables:

1. Memory Entry per Real Page:


 Represents one entry for each real page of memory instead of for each virtual page.
2. Entry Components:
 Each entry contains the virtual address of the page stored in that real memory location and
information about the owning process.
3. Virtual Address Format:
 Consists of a triplet including process ID, page number, and offset.
4. Reduced Memory Overhead:
 Decreases memory overhead by storing entries only for real pages of memory.
5. Process Identification:
 Associates each page with the process that owns it, enabling efficient memory management and
access control.
6. Address Resolution:
 Facilitates quick resolution of virtual addresses to physical addresses by directly mapping real
memory locations to virtual pages.

Figure: Inverted Page Tables
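A lookup then searches the table for the <process-id, page-number> pair; the frame number is simply the matching entry's position (a toy model with illustrative contents):

```python
# One entry per physical frame: (process id, virtual page), or None if free.
inverted = [("P1", 0), ("P2", 0), ("P1", 1), None]

def frame_of(pid, page):
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):       # match: the index IS the frame number
            return frame
    raise LookupError("page fault")    # no frame holds this virtual page
```

The linear search is the scheme's main cost, which is why real systems pair an inverted table with a hash table or rely on the TLB to absorb most lookups.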


5. Write a note on Contiguous Memory Allocation
Ans:
Contiguous Memory Allocation:

1. Memory Partitioning:
 Main memory is divided into two partitions: one for the operating system (OS) and the
other for user processes.
2. Single Contiguous Section:
 Each process occupies a single contiguous section of memory, ensuring efficient memory
management.
3. Memory Protection:
 Memory protection is crucial to prevent user processes from accessing the OS and each
other's memory areas.
 Implemented using relocation and limit registers:
 Relocation register stores the smallest physical address.
 Limit register defines the range of logical addresses accessible to a process.
4. Dynamic Address Mapping:
 Memory Management Unit (MMU) dynamically maps logical addresses by adding the
value in the relocation register.
 Ensures that each logical address falls within the defined limit register range.
5. CPU Scheduler Interaction:
 When a process is chosen for execution, the dispatcher sets the relocation and limit
registers with appropriate values.
 This mechanism protects the OS from the executing process by checking addresses against
these registers.
6. Dynamic OS Size:
 The relocation-register scheme enables the OS size to change dynamically as needed.
 Facilitates efficient memory utilization by allowing transient OS code to load and unload as
necessary.
7. Transient OS Code:
 Transient OS code refers to code that is loaded and unloaded as required to conserve
memory space and reduce unnecessary swapping overhead.
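The relocation/limit check performed by the MMU boils down to one comparison and one addition (the register values below are illustrative):

```python
def map_address(logical, relocation, limit):
    """MMU: trap if logical >= limit, else add the relocation register."""
    if logical >= limit:
        raise MemoryError("trap: addressing error")
    return logical + relocation
```

With relocation = 14000 and limit = 1200, logical address 300 maps to physical 14300, and any logical address from 1200 upward traps.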

Advantages:

1. Efficient Memory Utilization:


 Contiguous memory allocation ensures efficient use of memory by allocating each process
in a single, continuous block.
2. Simplified Memory Protection:
 Memory protection using relocation and limit registers provides a straightforward
mechanism to safeguard the OS and user processes.
3. Dynamic OS Adaptation:
 The relocation-register scheme allows the OS size to adapt dynamically, optimizing
memory usage and performance.

Disadvantages:

1. Fragmentation Concerns:
 Contiguous memory allocation may lead to fragmentation issues, particularly external
fragmentation, which can reduce memory utilization efficiency.
2. Limited Scalability:
 This approach may have limitations in handling dynamic memory requirements and
variable-sized memory allocations, affecting system scalability.
3. Complexity with Memory Protection:
 While relocation and limit registers offer memory protection, managing and updating
these registers for multiple processes may introduce complexity and overhead.

6. What is demand paging? Discuss with a neat diagram the steps to handle a page fault
Ans:
Demand Paging:

Demand paging is a memory management scheme used by operating systems to


efficiently manage memory resources by loading pages into memory only when they
are required. It allows programs to start execution with a portion of their code and
data in memory, loading additional pages as needed during runtime. Here's an
explanation of demand paging along with steps to handle a page fault:

Explanation:

In demand paging, the entire program or process does not need to be loaded into
memory before execution begins. Instead, only the necessary portions, typically
referred to as pages, are loaded into memory when required. This approach
optimizes memory usage by minimizing the amount of memory needed to initiate
program execution.

• Basic concept: Instead of swapping in the whole process, the pager brings only the necessary pages into memory. This avoids reading unused pages, decreasing both the swap time and the amount of physical memory needed.
Steps to handle a page fault:
1. A reference to a page whose valid bit is clear traps to the operating system.
2. The OS checks an internal table (kept with the process control block) to determine whether the reference was valid; an invalid reference terminates the process.
3. For a valid reference to a page not yet in memory, a free frame is located (for example, from the free-frame list).
4. A disk operation is scheduled to read the desired page into the chosen frame.
5. When the read completes, the internal table and the page table are updated to show that the page is now in memory (valid bit set).
6. The instruction interrupted by the trap is restarted; the process accesses the page as if it had always been resident.
Figure: Steps in handling a page fault [for reference]
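The page-fault path can be walked through in a small sketch (every name here — memory, disk, free_frames — is illustrative, not a real kernel API):

```python
memory = {}                              # frame number -> page contents
disk = {0: "code", 1: "data"}            # backing store (illustrative)
free_frames = [9, 8, 7]
page_table = {}                          # page -> frame, once resident

def handle_page_fault(page):
    if page not in disk:                 # steps 1-2: invalid reference
        raise RuntimeError("terminate process: invalid address")
    frame = free_frames.pop()            # step 3: find a free frame
    memory[frame] = disk[page]           # step 4: read the page from disk
    page_table[page] = frame             # step 5: update table, set valid bit
    return frame                         # step 6: restart the instruction

def read(page):
    if page not in page_table:           # valid bit clear: trap to the OS
        handle_page_fault(page)
    return memory[page_table[page]]
```

The first read of each page faults and pages it in; subsequent reads find the page resident and cost only the table lookup.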

7. Illustrate how demand paging affects system performance


Ans:
Demand paging performs well only if the page-fault rate is kept very low. If p is the probability of a page fault (0 ≤ p ≤ 1) and ma is the memory-access time, the effective access time is:
EAT = (1 − p) × ma + p × page-fault service time
Servicing a fault involves handling the interrupt, reading the page from disk, and restarting the process, which takes milliseconds, while a plain memory access takes nanoseconds. The disk term therefore dominates: even a very small p inflates the effective access time by orders of magnitude and visibly degrades system performance.
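As a quick numerical check, the effective access time EAT = (1 − p) × ma + p × fault-service-time can be evaluated with illustrative timings (200 ns memory access, 8 ms fault service):

```python
MA = 200                     # memory access time in ns (illustrative)
FAULT = 8_000_000            # page-fault service time in ns (8 ms)

def eat(p):
    """Effective access time for page-fault probability p."""
    return (1 - p) * MA + p * FAULT
```

Even p = 0.001 (one fault per thousand accesses) gives EAT ≈ 8200 ns, a roughly 40-fold slowdown relative to the 200 ns memory access.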
8. Explain the various attributes and operations performed on files?
Ans:

File Attributes:

1. Name: The name of the file, which uniquely identifies it within a directory.
2. Type: The type of the file, which determines its format and how it can be interpreted
by applications (e.g., text file, image file, executable file).
3. Location: The location of the file in the file system, specified by its path.
4. Size: The size of the file in bytes, indicating the amount of data it contains.
5. Timestamps:
 Creation Time: The time when the file was created.
 Last Modified Time: The time when the file's contents were last modified.
 Last Accessed Time: The time when the file was last accessed or read.
6. Permissions: The access permissions associated with the file, specifying which users
or groups are allowed to read, write, or execute the file.
7. Owner: The user or group that owns the file and has certain rights over it, such as
changing permissions or deleting the file.
File Operations:

1. Create: Make a new file.


2. Open: Access an existing file.
3. Read: Get data from a file.
4. Write: Add or modify data in a file.
5. Close: Finish using a file.
6. Delete: Remove a file.
7. Rename: Change a file's name.
8. Copy: Duplicate a file.
9. Move: Transfer a file to a different location.
10. Seek: Navigate to a specific position in a file.
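Several of these operations map directly onto ordinary system-call wrappers; a short sketch in Python (the file name and contents are arbitrary):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "notes.txt")

with open(path, "w") as f:             # create + open + write
    f.write("hello\n")

with open(path) as f:                  # open + read
    data = f.read()

os.rename(path, path + ".bak")         # rename
size = os.stat(path + ".bak").st_size  # size attribute, in bytes
os.remove(path + ".bak")               # delete
```

The close operation happens implicitly here when each `with` block exits.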
9. Explain the various access methods of files
Ans:
Access Methods for Files:

1. Sequential Methods:
 Simplest access method where information in the file is processed in order, one
record after another.
 Read operations (next-reads) retrieve the next portion of the file, automatically
advancing a file pointer.
 Write operations (write next) append data to the end of the file, advancing to the end
of the newly written material.
 Allows resetting the file to the beginning and potentially skipping forward or
backward by a specified number of records.

2. Direct Access:
 Utilizes fixed-length logical records for rapid reading and writing of records in any
order.
 Views the file as a numbered sequence of blocks or records, akin to a disk model,
enabling random access.
 Allows reading or writing blocks in any order, facilitating immediate access to large
amounts of data, such as databases.
 Requires modifications to file operations to include block numbers as parameters,
replacing read next and write next with read n and write n.
3. Other Access Methods:
 Constructed on top of direct access, typically involving the creation of an index for
the file.
 The index functions like a book index, containing pointers to various blocks within
the file.
 To locate a record, first search the index and then use the pointer to directly access
the desired record within the file.

Access Methods for Files Simplified:

1. Sequential Methods:
 Read one after another.
 Reading/writing in a fixed order.
 Can reset to start or skip ahead.
 Useful for processing data in order.
 Operations: Read next, write next.
2. Direct Access:
 Read/write in any order.
 Treats file like numbered blocks.
 No restriction on reading or writing order.
 Suitable for quick access to large data.
 Operations: Read block number, write block number.
3. Other Access Methods:
 Built on direct access.
 Involves creating an index.
 Index points to different blocks.
 Helps in finding records quickly.
 Search index first, then access record.
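The difference between sequential and direct access is easy to see with fixed-length records (the record size and contents are illustrative):

```python
import io

RECORD = 4                                # fixed-length records, in bytes
f = io.BytesIO(b"AAAABBBBCCCCDDDD")       # a "file" of four records

# Sequential access: read next; the file pointer advances automatically.
first = f.read(RECORD)                    # record 0

# Direct access: "read n" seeks straight to block n, in any order.
def read_block(n):
    f.seek(n * RECORD)
    return f.read(RECORD)
```

read_block(2) jumps straight to the third record without touching the ones before it; an indexed method would first look the key up in an index and then call read_block with the pointer found there.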
10. What is thrashing? How can it be controlled?
Ans:

Thrashing:

1. Definition:
 Thrashing occurs when a computer's operating system spends a significant amount of
time swapping data between memory (RAM) and storage (disk), instead of executing
useful tasks.
2. Causes:
 Insufficient physical memory to accommodate the working set of active processes.
 High demand for memory by multiple processes, leading to excessive paging activity.
 Ineffective memory management policies or allocation strategies.
3. Symptoms:
 Severe degradation in system performance.
 Excessive disk activity as the system continuously swaps data between RAM and disk.
 Increased response time for user interactions and application execution.
 System becomes unresponsive or slow to execute tasks.
Controlling Thrashing - Simple Points:

1. Increase Physical Memory:


 Add more RAM to the system.
 Provides additional memory resources.
 Helps accommodate active processes.
2. Optimize Process Management:
 Prioritize critical processes.
 Efficiently schedule CPU and memory allocation.
 Preload frequently used data into memory.
3. Tune Paging Algorithms:
 Adjust page replacement policies.
 Optimize selection of pages for eviction.
 Minimize disk latency during paging.
4. Allocate Resources Wisely:
 Limit concurrent processes.
 Control memory allocation per process.
 Prevent overcommitment of resources.
5. Dynamic Memory Management:
 Dynamically adjust memory allocation.
 Based on system load and demands.
 Ensure efficient resource utilization.
6. Monitor System Performance:
 Continuously track memory utilization.
 Monitor page fault rates and disk activity.
 Identify signs of thrashing for proactive measures.
7. Optimize Application Design:
 Reduce memory footprint.
 Minimize excessive paging.
 Optimize memory usage patterns.
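One concrete control strategy is page-fault frequency (PFF): keep each process's fault rate between two bounds by growing or shrinking its frame allocation. A sketch (the bounds are illustrative):

```python
UPPER, LOWER = 0.10, 0.02        # illustrative fault-rate bounds

def adjust_frames(frames, fault_rate):
    """Grow or shrink a process's frame allocation to avoid thrashing."""
    if fault_rate > UPPER:
        return frames + 1         # faulting too often: give it a frame
    if fault_rate < LOWER:
        return max(1, frames - 1) # plenty of headroom: reclaim a frame
    return frames                 # within bounds: leave it alone
```

If no free frame is available when a process needs one, a whole process may have to be swapped out, which is exactly how the thrashing cycle is broken.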
PROBLEMS
11. First Fit, Best Fit and Worst Fit

1. First Fit (F):


 Allocate memory to the first available block that is large enough to accommodate the process.
 Simply scan the memory blocks from the beginning and allocate to the first block that fits.
2. Best Fit (B):
 Allocate memory to the block that is the best fit for the process, i.e., closest in size but still large
enough.
 Compare the size of the process with all available blocks and allocate to the one with the
minimum wasted space.
3. Worst Fit (W):
 Allocate memory to the largest available block that can accommodate the process.
 Choose the block with the maximum free space, leaving the largest remaining fragment.
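The three strategies can be sketched in one function (the block sizes are a typical textbook example):

```python
def allocate(blocks, size, strategy):
    """Return the index of the chosen free block, or None if none fits."""
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    if not fits:
        return None                               # no hole large enough
    if strategy == "first":
        return min(fits, key=lambda c: c[1])[1]   # lowest-index hole
    if strategy == "best":
        return min(fits)[1]                       # smallest adequate hole
    if strategy == "worst":
        return max(fits)[1]                       # largest hole

blocks = [100, 500, 200, 300, 600]                # free holes in KB
```

A 212 KB request goes to block 1 (500 KB) under first fit, block 3 (300 KB) under best fit, and block 4 (600 KB) under worst fit.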
12. FIFO, LRU and Optimal
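A single simulator covers all three replacement policies; run on the classic textbook reference string with 3 frames it reproduces the well-known fault counts (FIFO 15, LRU 12, Optimal 9):

```python
def page_faults(refs, frames, policy):
    """Count page faults for FIFO, LRU or OPT on a reference string."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":            # a hit refreshes recency order
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1                        # page fault
        if len(memory) == frames:
            if policy == "OPT":            # evict page used farthest ahead
                future = refs[i + 1:]
                victim = max(memory, key=lambda p: future.index(p)
                             if p in future else len(future) + 1)
            else:                          # FIFO/LRU: head is the victim
                victim = memory[0]
            memory.remove(victim)
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
```

For FIFO the list order is arrival order; for LRU, moving a hit page to the end keeps the least recently used page at the head, so both evict `memory[0]`.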
