Solved Past Papers (Operating System)
MUHAMMAD BILAL RAFIQ
PAPER: Operating Systems Course Code: IT-306
Ans: In the realm of operating systems, schedulers play a crucial role in managing the allocation of
resources, particularly CPU time, among various processes. While both long-term and short-term
schedulers contribute to this task, they differ significantly in their scope, objectives, and decision-
making processes.
Long-Term Scheduler (Job Scheduler):
Scope: Determines which processes are admitted into the system and controls the degree of
multiprogramming.
Decision Factors: Process priority, memory requirements, resource usage history, and overall system
load.
Frequency: Invoked infrequently, typically when a new process arrives or when the system load
changes significantly.
Objectives: Maintain a stable degree of multiprogramming and a good mix of CPU-bound and
I/O-bound processes.
Short-Term Scheduler (CPU Scheduler):
Scope: Selects a process from the pool of ready processes to be allocated the CPU.
Decision Factors: Process priority, CPU burst time, I/O requirements, and the current state of the
system.
Frequency: Invoked very frequently, typically every few milliseconds or whenever a process changes
state (e.g., from running to waiting).
Objectives: Maximize CPU utilization and throughput while minimizing response and waiting times.
Q#4: Explain deadlock condition?
Ans: Deadlock, a dreaded state in concurrent systems, refers to a situation where a set of processes
are blocked indefinitely, preventing any of them from making progress. It's like a traffic jam where
cars are stuck, unable to move because they are all blocking each other's way.
To understand deadlock, let's consider the four necessary conditions that must hold simultaneously
for it to occur:
1. Mutual Exclusion: Each resource is held by only one process at a time. No sharing is allowed.
2. Hold and Wait: A process holding at least one resource is waiting to acquire additional
resources currently held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a process. They must be
released by the process holding them.
4. Circular Wait: There exists a circular chain of processes, each waiting for a resource held by
the next process in the chain.
If all four conditions are met, the processes involved are stuck in a deadlock, unable to proceed
further.
Q#5: What is dispatcher latency?
Ans: Dispatcher latency, also known as context switch latency, refers to the time it takes for a CPU to
switch from executing one process to another. It's the delay experienced between the moment a
process becomes ready to run and the moment it actually starts running on the CPU.
This latency arises due to the various tasks involved in switching contexts, such as:
Saving the state of the currently running process: This includes storing the process's
registers, program counter, and other relevant information in memory.
Loading the state of the new process: This involves retrieving the new process's state from
memory and loading it into the CPU registers.
Updating internal data structures: The operating system needs to update its internal data
structures to reflect the change in the running process.
The magnitude of this latency depends on several factors:
Hardware architecture: The complexity of the CPU and memory system can influence the
time it takes to save and load process states.
Operating system design: The efficiency of the context switching algorithm and the amount
of information that needs to be saved and loaded can impact latency.
Process characteristics: Processes with larger memory footprints or more complex state
information will take longer to switch contexts.
Q#6: Define demand paging.
Ans: Demand paging is a virtual-memory technique in which a page is loaded into physical memory
only when the process first references it, rather than loading the entire process at start-up. The
mechanism works as follows:
Process Address Space: The process's virtual address space is divided into fixed-size blocks
called pages.
Page Table: A page table is maintained for each process, mapping virtual pages to physical
memory frames.
Initial State: Initially, most pages are marked as "not present" in the page table, meaning
they are not loaded in memory.
Page Fault: When the process tries to access a page that is not present, a page fault occurs.
Demand Paging: The operating system then fetches the required page from secondary
storage (e.g., disk) and loads it into a free memory frame.
Page Table Update: The page table is updated to reflect the new mapping between the
virtual page and the physical frame.
Process Resumes: The process resumes execution, now with the required page loaded in
memory.
Q#7: What is a process address space?
Ans: The process address space is the set of logical (virtual) addresses that a process can use. Its
main aspects are:
Virtual Memory: The process address space is a key component of virtual memory, which
allows processes to have a larger address space than the available physical memory.
Logical View: The process address space provides a logical view of memory for the process,
hiding the complexities of physical memory management from the programmer.
Segmentation and Paging: The process address space can be organized using segmentation
or paging techniques.
Protection: The process address space provides a mechanism for memory protection,
preventing processes from accessing each other's memory or the operating system's
memory.
Sharing: The process address space can be shared among multiple processes, allowing them
to access common data structures or libraries.
Address Translation: The memory management unit (MMU) plays a crucial role in translating
virtual addresses to physical addresses.
Q#8: Difference between paging and segmentation.
Ans: Paging and segmentation are two distinct memory management techniques used in operating
systems to organize and allocate memory to processes. While both share the goal of efficient
memory utilization, they differ significantly in their approach and characteristics.
Paging:
Divides the process address space into fixed-size blocks called pages, which are mapped onto
equal-sized physical frames through a page table.
Less prone to external fragmentation but can suffer from internal fragmentation.
Segmentation:
Divides the process address space into variable-size segments based on logical units (e.g., code, data,
and heap).
Each segment can have different access permissions and protection levels.
Less prone to internal fragmentation but can suffer from external fragmentation.
Offers more granular protection as access control is applied at the segment level.
Q#10: Define two operations of semaphore.
Ans: A semaphore supports two atomic operations: wait() (also called P or down), which decrements
the semaphore value and blocks the caller if no resource is available, and signal() (also called V or
up), which increments the value and wakes one blocked process, if any.
Q#11: Differentiate between user mode and system mode.
Ans: In the realm of operating systems, processes operate in two distinct modes: user mode and
system mode. These modes define the level of access and privileges a process has to system
system mode. These modes define the level of access and privileges a process has to system
resources and functionality. Understanding the differences between these modes is crucial for
comprehending how operating systems manage processes and protect the system from
unauthorized access.
User Mode:
Limited Privileges: Processes running in user mode have restricted access to system
resources and functionality. They cannot directly access hardware devices, modify critical
system data, or perform privileged operations.
Protection Mechanism: User mode acts as a protective layer, preventing malicious or buggy
applications from harming the system or other processes.
Typical Operations: User mode processes typically perform tasks related to user
applications, such as running programs, displaying data, and interacting with input devices.
Example: A web browser, a text editor, or a media player running on your computer
operates in user mode.
System Mode:
Elevated Privileges: Processes running in system mode have full access to system resources
and functionality. They can access hardware devices, modify system data, and perform
privileged operations.
Kernel Access: System mode processes operate within the kernel, the core of the operating
system, giving them direct control over system resources and functionality.
Critical Operations: System mode processes handle critical tasks such as memory
management, process scheduling, device drivers, and file system operations.
Example: The operating system itself, device drivers, and system utilities run in system
mode.
Q#12: What is dispatch latency? What are its effects?
Ans: Dispatch latency, also known as context switch latency, refers to the time it takes for a CPU to
switch from executing one process to another. It's the delay experienced between the moment a
process becomes ready to run and the moment it actually starts running on the CPU.
Effects of Dispatch Latency:
Higher dispatch latency increases process response times, reduces effective CPU utilization and
throughput, and can cause real-time tasks to miss their deadlines.
Q#13: What is starvation?
Ans: Starvation, in the context of operating systems, refers to a situation where a process is
indefinitely prevented from accessing resources it needs to make progress. It's like being stuck in a
queue, waiting for your turn, but never actually getting served.
Causes of Starvation:
Strict priority scheduling that always favors higher-priority processes, and unfair
resource-allocation policies that repeatedly bypass the same process.
Examples of Starvation:
Priority Inversion: A low-priority process holds a resource that a high-priority process needs, while
medium-priority processes keep the CPU busy, so the high-priority process is indirectly starved.
Resource Deprivation: Under shortest-job-first or priority scheduling, a long or low-priority process
may wait indefinitely if shorter or higher-priority processes keep arriving.
Q#14: What is a PCB? What different types of information are stored in the PCB about a process?
What role does the PCB play at the time of a context switch?
Ans: The Process Control Block (PCB), also known as the Task Control Block (TCB), is a data structure
that contains all the information needed by the operating system to manage a process. It acts as the
central repository of a process's state, including its execution context, resource allocation, and
scheduling information.
1) Process Identification:
Process ID (PID): A unique identifier assigned to the process.
Process Name: A descriptive name for the process.
User ID (UID): The user who owns the process.
2) Process State:
Current state of the process (e.g., running, waiting, ready).
Program counter (PC): The memory address of the next instruction to be executed.
CPU registers: The values of the CPU registers at the time of context switch.
3) Process Scheduling Information:
Priority: The priority level of the process.
Scheduling queue: The queue in which the process is waiting for its turn to run.
Time slices: The amount of CPU time allocated to the process.
4) Memory Management Information:
Memory address space: The virtual memory allocated to the process.
Page table: A table that maps virtual memory addresses to physical memory addresses.
5) Accounting Information:
CPU time used by the process.
I/O statistics (e.g., number of bytes read/written).
Memory usage statistics.
6) I/O Information:
List of open files.
List of devices assigned to the process.
During a context switch, the operating system saves the state of the currently running process in its
PCB and loads the state of the new process from its PCB. This allows the operating system to quickly
switch between processes without losing track of their execution state.
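For illustration only, a PCB can be sketched in C roughly as follows; the field names and sizes below
are hypothetical, not taken from any particular kernel:

#include <stdint.h>

#define MAX_NAME 32
#define NUM_REGS 16

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int pid;                     /* process identification */
    int uid;
    char name[MAX_NAME];
    enum proc_state state;       /* process state */
    uint64_t program_counter;    /* saved execution context */
    uint64_t registers[NUM_REGS];
    int priority;                /* scheduling information */
    uint64_t time_slice;
    void *page_table;            /* memory-management information */
    uint64_t cpu_time_used;      /* accounting information */
    int open_files[16];          /* I/O information: open file descriptors */
    struct pcb *next;            /* link for the scheduler's queues */
};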
Q#15: Write one solution for the classical Producer-Consumer problem with only three shared
variables (in, out and buffer).
Ans: The classical producer-consumer problem involves two processes: a producer that generates
data and a consumer that consumes it. The challenge is to ensure that the producer and consumer
operate concurrently without data loss or corruption.
1. Shared Variables:
in: An integer variable indicating the index of the next empty slot in the buffer.
out: An integer variable indicating the index of the next full slot in the buffer.
buffer: An array of fixed size that stores the data items produced by the producer and
consumed by the consumer.
2. Producer Process:
While there is space in the buffer (i.e., (in + 1) % buffer_size != out):
Produce a data item.
Store the data item in the buffer at index in.
Increment in (wrap around if necessary).
Signal the consumer that a new data item is available.
3. Consumer Process:
While there is data in the buffer (i.e., in != out):
Retrieve the data item from the buffer at index out.
Increment out (wrap around if necessary).
Consume the data item.
Signal the producer that a slot is available in the buffer.
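A minimal busy-waiting sketch of steps 1-3 in C, using only the three shared variables named in the
question; BUFFER_SIZE, N_ITEMS, and the use of volatile are illustrative simplifications (production
code would use proper atomics or the semaphores described in step 4):

#include <stdio.h>
#include <pthread.h>

#define BUFFER_SIZE 8                 /* assumed capacity */
#define N_ITEMS 32                    /* arbitrary demo length */

int buffer[BUFFER_SIZE];              /* the three shared variables: */
volatile int in = 0;                  /* next empty slot */
volatile int out = 0;                 /* next full slot */

void *producer(void *arg) {
    for (int item = 0; item < N_ITEMS; item++) {
        while ((in + 1) % BUFFER_SIZE == out)
            ;                         /* busy-wait: buffer full */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < N_ITEMS; i++) {
        while (in == out)
            ;                         /* busy-wait: buffer empty */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}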
4. Synchronization:
To ensure proper synchronization between the producer and consumer, we can use
semaphores or mutexes.
empty: A semaphore initialized with the number of empty slots in the buffer.
full: A semaphore initialized with 0.
5. Producer Process:
Wait on empty.
Produce a data item and store it in the buffer.
Signal full.
6. Consumer Process:
Wait on full.
Retrieve and consume the data item from the buffer.
Signal empty.
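A sketch of steps 4-6 in C using POSIX semaphores, with a mutex added to protect the shared
indices when there may be multiple producers or consumers; the names and buffer size are
illustrative:

#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 8

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;
static sem_t empty_slots;                  /* counts empty slots */
static sem_t full_slots;                   /* counts full slots */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void buffer_init(void) {                   /* call once before starting threads */
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
}

void put(int item) {                       /* producer side */
    sem_wait(&empty_slots);                /* wait on empty */
    pthread_mutex_lock(&lock);
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&full_slots);                 /* signal full */
}

int get(void) {                            /* consumer side */
    sem_wait(&full_slots);                 /* wait on full */
    pthread_mutex_lock(&lock);
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&empty_slots);                /* signal empty */
    return item;
}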
Correctness:
This solution ensures that the producer and consumer operate concurrently without data loss or
corruption. The use of semaphores or mutexes guarantees mutual exclusion, preventing race
conditions and ensuring data consistency.
Q#16: Write code for wait() and signal() operations of counting semaphore.
Ans:
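The original listing is not reproduced in this copy. The sketch below is reconstructed to match the
explanation that follows: a value counter, a mutex guarding it, and a POSIX semaphore on which
waiting threads block. The waiting counter and all other details beyond the explanation are
assumptions:

#include <pthread.h>
#include <semaphore.h>

typedef struct {
    int value;                 /* semaphore value */
    int waiting;               /* number of blocked threads (assumed field) */
    pthread_mutex_t mutex;     /* protects value and waiting */
    sem_t queue;               /* parks waiting threads; starts at 0 */
} CountingSemaphore;

void counting_semaphore_init(CountingSemaphore *s, int initial) {
    s->value = initial;
    s->waiting = 0;
    pthread_mutex_init(&s->mutex, NULL);
    sem_init(&s->queue, 0, 0);
}

void counting_semaphore_wait(CountingSemaphore *s) {
    pthread_mutex_lock(&s->mutex);
    if (s->value > 0) {
        s->value--;                       /* a unit is free: take it */
        pthread_mutex_unlock(&s->mutex);
    } else {
        s->waiting++;                     /* join the waiting queue */
        pthread_mutex_unlock(&s->mutex);
        sem_wait(&s->queue);              /* block until signalled */
    }
}

void counting_semaphore_signal(CountingSemaphore *s) {
    pthread_mutex_lock(&s->mutex);
    if (s->waiting > 0) {
        s->waiting--;
        sem_post(&s->queue);              /* wake one waiting thread */
    } else {
        s->value++;
    }
    pthread_mutex_unlock(&s->mutex);
}

void counting_semaphore_destroy(CountingSemaphore *s) {
    sem_destroy(&s->queue);
    pthread_mutex_destroy(&s->mutex);
}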
Explanation:
The code defines a CountingSemaphore structure that includes the semaphore value, a
mutex for synchronization, and a semaphore for managing waiting processes.
The counting_semaphore_init() function initializes a counting semaphore with a given initial
value.
The counting_semaphore_wait() function decrements the semaphore value if it's positive.
Otherwise, it adds the current process to the waiting queue and waits for a signal.
The counting_semaphore_signal() function increments the semaphore value. If there are
waiting processes, it removes the first one from the queue and signals it, allowing it to
proceed.
The counting_semaphore_destroy() function destroys a counting semaphore and releases its
associated resources.
Usage:
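A brief, illustrative usage pattern:

CountingSemaphore sem;
counting_semaphore_init(&sem, 3);      /* up to 3 threads may proceed */

/* in each worker thread: */
counting_semaphore_wait(&sem);
/* ... access the shared resource ... */
counting_semaphore_signal(&sem);

/* when no longer needed: */
counting_semaphore_destroy(&sem);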
Note:
This code uses the POSIX semaphore API for synchronization. Ensure that you have the appropriate
libraries and headers installed for your operating system.
Q#17: What is difference between microkernel and layered operating system structures?
Ans: Both microkernel and layered operating system structures are designed to manage system
resources and provide services to applications. However, they differ significantly in their architecture
and approach.
Microkernel:
Size: Smaller footprint, containing only essential functionalities like memory management,
process management, and inter-process communication (IPC).
Modularity: Highly modular, with additional services like file systems and device drivers
implemented as separate user-space processes.
Security: Enhanced security due to smaller attack surface and isolation of services in user
space.
Flexibility: Easier to extend and customize by adding or removing modules.
Performance: Potentially slower due to increased context switching between kernel and
user space.
Examples: MINIX, L4, QNX
Layered Structure:
Size: Larger footprint, with multiple layers stacked on top of each other, each providing
specific functionalities.
Modularity: Less modular, with services tightly integrated within the kernel.
Security: Potentially less secure due to larger attack surface and increased complexity.
Flexibility: Less flexible, as modifying the kernel requires more effort.
Performance: Potentially faster due to reduced context switching within the kernel.
Examples: Unix, Linux, Windows
Q#18: In a multithreaded application, if we implement the Many-to-One model then how does this
application handle the threads?
Ans: In a multithreaded application, the Many-to-One model implies that multiple user-level threads are
mapped to a single operating system (OS) thread. This approach offers several advantages and
disadvantages compared to other models like One-to-One and Many-to-Many.
Advantages:
Lightweight thread management: Threads are created, scheduled, and switched entirely in
user space without kernel involvement, so these operations are fast and portable.
Disadvantages:
Limited parallelism: All threads share the same OS thread, limiting true parallelism and
potentially hindering performance on multi-core systems.
Blocking issues: If one thread blocks, all other threads sharing the same OS thread are also
blocked, potentially leading to performance bottlenecks.
Q#19: Why does every thread have its own stack?
Ans: In a multithreaded application, each thread has its own stack for several crucial reasons:
Independent execution: Each thread follows its own chain of function calls, so it needs a private
place to store local variables, parameters, and return addresses.
Isolation: If threads shared one stack, their stack frames would interleave and corrupt one another.
Scheduling: A private stack lets the scheduler suspend and resume each thread independently,
simply by saving and restoring its stack pointer.
Q#20: What is zombie state of a process?
Ans: The zombie state, also known as the "defunct" state, is a special state a process enters after it
has terminated but its parent process has not yet performed a wait() or waitpid() system call to
collect its exit status.
How it happens:
When a child process terminates (e.g., by calling exit()), the kernel keeps a small entry for it in the
process table, holding its exit status, until the parent calls wait() or waitpid(). Until the parent reaps
it, the child remains a zombie.
Why it exists:
Inform Parent Process: It allows the parent process to learn about the child's termination
and retrieve its exit status.
Resource Cleanup: The parent process can use the exit status to determine the reason for
termination and perform any necessary cleanup tasks.
Historical Information: The zombie state provides a record of the child's execution for
debugging or monitoring purposes.
Q#21: Name the inter-process communication tools used in UNIX operating system?
Ans: The UNIX operating system offers a variety of tools and mechanisms for inter-process
communication (IPC), allowing processes to exchange data and synchronize their actions. Here are
some of the most commonly used IPC tools:
1. Pipes:
Type: Unidirectional, one-way communication between two related processes.
Mechanism: Creates a virtual pipe in memory, where one process writes data and the
other reads it.
Use Cases: Simple data transfer between parent and child processes, command-line
filtering (e.g., ls | grep); a short example appears after this list.
2. FIFOs (Named Pipes):
Type: Unidirectional or bidirectional, named pipes accessible by any process.
Mechanism: Similar to pipes, but data persists in the file system until read.
Use Cases: Communication between unrelated processes on the same machine through a
name in the file system.
3. Sockets:
Type: Bidirectional, network-based communication between processes on the same or
different machines.
Mechanism: Uses IP addresses and port numbers to establish connections.
Use Cases: Network communication, client-server applications, distributed systems.
4. Shared Memory:
Type: Bidirectional, high-speed data sharing between processes.
Mechanism: Allocates a shared memory segment accessible by multiple processes.
Use Cases: Efficient data exchange between processes with high data transfer rates.
5. Semaphores:
Type: Synchronization mechanism for controlling access to shared resources.
Mechanism: Uses a counter to regulate the number of processes accessing a resource.
Use Cases: Mutual exclusion, resource allocation, process synchronization.
6. Message Queues:
Type: Asynchronous, message-passing mechanism for communication between
processes.
Mechanism: Stores messages in a queue, allowing processes to send and receive
messages independently.
Use Cases: Loosely coupled communication, buffering data, asynchronous processing.
7. Signals:
Type: Asynchronous notification mechanism to send signals between processes.
Mechanism: Sends a signal to a process, interrupting its execution and potentially
triggering a specific action.
Use Cases: Event notification, process termination, inter-process control.
8. Remote Procedure Calls (RPC):
Type: High-level mechanism for invoking procedures on remote machines.
Mechanism: Transparently calls procedures on remote systems as if they were local.
Use Cases: Distributed computing, client-server applications, network services.
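As mentioned under Pipes above, a minimal parent-child pipe example in C; the message text is
arbitrary:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {            /* child: reader */
        close(fd[1]);             /* close unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                 /* parent: writer */
    const char *msg = "hello via pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);                   /* reap the child */
    return 0;
}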
Q#22: What does a child process inherits from its parent process?
Ans: When a child process is created by forking a parent process, it inherits certain attributes and
resources from its parent. Here are some of the things that a child process typically inherits from its
parent:
Process ID (PID): The child process is assigned its own unique PID, which is different from the
parent's PID.
Parent Process ID (PPID): The child process is aware of its parent's PID and can access it
using system calls. The PPID helps establish the parent-child relationship.
Open file descriptors: The child process inherits the open file descriptors from the parent. It
means that if the parent had files or network connections open, the child process will have
access to them.
Environment variables: The child process inherits the environment variables of the parent
process. These variables include information such as paths, system configuration, and other
settings.
Working directory: The child process starts with the same working directory as the parent
process. It means that relative file paths will be resolved from the same location.
Signal handlers: A child process inherits the signal handlers set by the parent. Signal
handlers are used to handle events such as interrupts or termination signals.
User and group ID: The child process inherits the user and group ID from its parent. This
determines the permissions and access rights the child process has.
It's important to note that once the child process is created, it becomes an independent process and
can modify or change the inherited attributes as needed.
Q#23: What are the different techniques for the evaluation of scheduling algorithms?
Ans: Evaluating the performance of scheduling algorithms is crucial to determine their effectiveness
and suitability for different scenarios. Several techniques can be used to assess and compare
scheduling algorithms, each with its own strengths and limitations.
1. Simulation:
Concept: Simulating the execution of tasks on a virtual system with different scheduling
algorithms.
Advantages: Allows for controlled experimentation, testing various scenarios and
workloads.
Disadvantages: Can be time-consuming and may not accurately reflect real-world
behavior.
2. Analytical Modeling:
Concept: Using mathematical models to predict the performance of scheduling
algorithms under specific assumptions.
Advantages: Provides insights into theoretical behavior and can be computationally
efficient (a small worked sketch appears after this list).
Disadvantages: May require simplifying assumptions and may not capture all aspects of
real-world systems.
3. Benchmarking:
Concept: Running standard benchmarks or workloads on different systems with
different scheduling algorithms.
Advantages: Provides a practical comparison of performance under realistic conditions.
Disadvantages: May not be representative of all possible workloads and can be
influenced by hardware and system configuration.
4. Measurement and Tracing:
Concept: Collecting performance data from real-world systems running different
scheduling algorithms.
Advantages: Provides real-world performance insights and can identify bottlenecks and
inefficiencies.
Disadvantages: Can be challenging to set up and may require specialized tools and
expertise.
5. User Perception:
Concept: Evaluating the perceived performance of scheduling algorithms based on user
experience.
Advantages: Captures the subjective experience of users and can be valuable for
interactive systems.
Disadvantages: Can be subjective and difficult to quantify, and may not reflect objective
performance metrics.
6. Hybrid Approaches:
Concept: Combining multiple techniques, such as simulation and measurement, to
provide a more comprehensive evaluation.
Advantages: Leverages the strengths of different techniques and can provide a more
holistic understanding of performance.
Disadvantages: Can be more complex and require additional resources and expertise.
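As a tiny illustration of analytical (deterministic) modeling, the sketch below computes average
waiting and turnaround times under FCFS for three assumed processes with burst times 10, 5, and
2 that arrive together; the numbers are arbitrary:

#include <stdio.h>

int main(void) {
    int burst[] = {10, 5, 2};              /* assumed CPU bursts, FCFS order */
    int n = 3;
    int wait = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;                /* process i waits for all before it */
        total_turnaround += wait + burst[i];
        wait += burst[i];
    }
    printf("average waiting time    = %.2f\n", (double)total_wait / n);    /* 8.33 */
    printf("average turnaround time = %.2f\n", (double)total_turnaround / n); /* 14.00 */
    return 0;
}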
Q#24: Mention the three problems that may be caused by the wrong initialization or placement of
wait ( ) and signal ( ) operations in the use of semaphores.
Ans: Using semaphores for synchronization requires careful initialization and placement of wait()
and signal() operations to avoid potential problems. Here are three common issues that can arise
from incorrect usage:
1. Deadlock:
Cause: A deadlock occurs when two or more processes are waiting for each other to
release a resource they both need.
Example: Process A needs resource X and Y, while process B needs resource Y and X. If
process A acquires X and then waits for Y, while process B acquires Y and then waits for
X, both processes will be stuck waiting indefinitely, creating a deadlock.
Prevention: Ensure that processes acquire resources in a consistent order and release
them in the reverse order. Avoid placing wait() operations before acquiring resources or
signal() operations after releasing them. (A concrete code sketch of this failure appears
after this list.)
2. Starvation:
Cause: Starvation occurs when a process is repeatedly denied access to a resource due
to other processes being prioritized.
Example: A high-priority process repeatedly acquires a resource and signals it,
preventing a low-priority process from ever acquiring the resource, leading to
starvation.
Prevention: Use fair scheduling algorithms to ensure all processes have a chance to
acquire resources. Avoid holding resources for longer than necessary and use signal()
operations judiciously to allow other processes access.
3. Race Conditions:
Cause: A race condition occurs when the order of execution of wait() and signal()
operations is critical and can lead to unpredictable behavior.
Example: Two processes try to increment a shared counter. If one process increments
the counter before the other process waits for it, the final value may be incorrect due to
the race condition.
Prevention: Use semaphores in conjunction with other synchronization mechanisms like
mutexes or critical sections to ensure proper ordering of operations and avoid race
conditions.
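To make the deadlock case above concrete, here is a minimal sketch in C with POSIX threads: two
tasks acquire the same two locks in opposite orders, satisfying all four deadlock conditions.
Acquiring the locks in the same order in both tasks removes the circular wait:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t lock_x = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_y = PTHREAD_MUTEX_INITIALIZER;

void *task_a(void *arg) {
    pthread_mutex_lock(&lock_x);
    sleep(1);                       /* widen the window for the deadlock */
    pthread_mutex_lock(&lock_y);    /* blocks: B holds Y and waits for X */
    pthread_mutex_unlock(&lock_y);
    pthread_mutex_unlock(&lock_x);
    return NULL;
}

void *task_b(void *arg) {
    pthread_mutex_lock(&lock_y);    /* opposite order: circular wait */
    sleep(1);
    pthread_mutex_lock(&lock_x);
    pthread_mutex_unlock(&lock_x);
    pthread_mutex_unlock(&lock_y);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, task_a, NULL);
    pthread_create(&b, NULL, task_b, NULL);
    pthread_join(a, NULL);          /* with the opposite order, never returns */
    pthread_join(b, NULL);
    puts("done");                   /* fix: acquire X then Y in both tasks */
    return 0;
}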
Q#25: What is the difference between a busy waiting semaphore and a spin lock?
Ans: The terms "busy waiting semaphore" and "spin lock" are actually quite close, but not exactly
the same. Here's a breakdown:
Busy waiting: A general technique in which a process repeatedly tests a condition in a loop,
consuming CPU cycles while it waits. A semaphore whose wait() loops on the value in this way is a
busy waiting semaphore.
Spin lock: A specific mutual-exclusion lock in which a thread "spins" on an atomic test-and-set until
the lock becomes free. Spin locks are useful for very short critical sections on multiprocessors,
where spinning briefly is cheaper than blocking and rescheduling.
Q#26: In a 64-bit machine, with 16 GB RAM and 8 KB page size, how many entries will there be in
the page table if it is an inverted page table?
Ans:
Given:
64-bit machine
16 GB of physical memory (RAM)
8 KB page size
Calculation:
1. Number of Frames: An inverted page table has one entry per physical frame, so divide the
total RAM size by the page size:
16 GB / 8 KB = 2^34 bytes / 2^13 bytes = 2^21 frames
2. Number of Entries in Inverted Page Table: Since each entry corresponds to one frame in
physical memory, the number of entries equals the number of frames: 2^21.
Therefore, in a 64-bit machine with 16 GB RAM and 8 KB page size, an inverted page table will have
2^21 (2,097,152) entries. Note that the 64-bit logical address size does not affect this count, because
the inverted table is indexed by physical frames, not virtual pages.
Q#27: Explain the difference between layered and module based operating system structure?
Ans: Both layered and module-based operating system structures aim to organize the operating
system into smaller, manageable units. However, they differ in their approach and level of
modularity.
Layered Structure:
Concept: The operating system is divided into distinct layers, each with specific
functionalities and well-defined interfaces.
Characteristics:
Strict hierarchy: Each layer depends on the functionalities of the layer below it.
Limited modularity: Modifications to a layer often require changes in other layers due to
their interdependence.
Examples: Unix, Linux, Windows
Module-Based Structure:
Concept: The kernel has a small core, and additional services (device drivers, file systems,
schedulers) are implemented as loadable kernel modules.
Characteristics:
Flexible: Modules can be loaded and unloaded at runtime without rebuilding the kernel.
No strict hierarchy: Any module can call any other module, unlike the strict layering above.
Examples: Linux loadable kernel modules, Solaris
Q#28: Describe the difference among short-term, medium term, and long term scheduling.
Ans: Operating systems employ different scheduling algorithms at various levels to manage the
execution of processes. These levels are categorized as short-term, medium-term, and long-term
scheduling, each with distinct goals and responsibilities.
Long-term scheduler (job scheduler): Decides which jobs are admitted into the system,
controlling the degree of multiprogramming; it runs infrequently.
Medium-term scheduler: Temporarily swaps processes out of memory to reduce the degree of
multiprogramming and swaps them back in later; it runs at an intermediate frequency.
Short-term scheduler (CPU scheduler): Selects one of the ready processes and allocates the
CPU to it; it runs very frequently, on the order of milliseconds.
Q#29: What are two differences between kernel-level threads and user-level threads? Under
what circumstances is one type better than the other?
Ans:
1. Kernel-Level Threads:
Advantages: The kernel schedules each thread individually, so a blocking system call
blocks only the calling thread, and threads can run in parallel on multiple cores.
Disadvantages: More complex to implement and manage; thread creation and context
switching require kernel involvement (mode switches) and are therefore slower; less
portable across different operating systems.
Best for: Applications with frequent blocking I/O operations, and multi-core systems
where true parallelism matters.
2. User-Level Threads:
Advantages: Easier to implement and manage; thread operations happen entirely in
user space, so creation and context switching are fast; more portable; allows flexible,
application-specific scheduling policies.
Disadvantages: A blocking system call blocks the entire process, and the kernel sees
only one thread, so there is no true parallelism on multiple cores.
Best for: CPU-bound applications with many short-lived threads, and situations where
portability and ease of implementation are prioritized.
Q#30: In a multithreaded application, if we implement Many-to-Many model then how does this
application handle the threads?
Ans: In a multithreaded application, the Many-to-Many model implies that multiple threads are
mapped to multiple operating system (OS) threads. This approach offers several advantages and
disadvantages compared to other models like One-to-One and Many-to-One.
Advantages:
Increased parallelism: Multiple threads can execute concurrently on multiple CPU cores,
potentially leading to significant performance improvements.
Improved resource utilization: Threads can share resources like memory and file handles,
potentially reducing resource consumption.
Disadvantages:
Increased complexity: Managing multiple threads and their interactions becomes more
complex, requiring careful synchronization and coordination.
Overhead of context switching: Frequent context switching between threads can lead to
performance overhead, especially on systems with limited resources.
Q#31: Mention the three problems that may be caused by the wrong initialization or placement of
wait ( ) and signal ( ) operations in the use of semaphores.
Ans: Using semaphores for synchronization requires careful initialization and placement of wait()
and signal() operations to avoid potential problems. Here are three common issues that can arise
from incorrect usage:
1. Deadlock:
Cause: A deadlock occurs when two or more processes are waiting for each other to
release a resource they both need.
Example: Process A needs resource X and Y, while process B needs resource Y and X. If
process A acquires X and then waits for Y, while process B acquires Y and then waits for
X, both processes will be stuck waiting indefinitely, creating a deadlock.
Prevention: Ensure that processes acquire resources in a consistent order and release
them in the reverse order. Avoid placing wait() operations before acquiring resources
or signal() operations after releasing them.
2. Starvation:
Cause: Starvation occurs when a process is repeatedly denied access to a resource due
to other processes being prioritized.
Example: A high-priority process repeatedly acquires a resource and signals it,
preventing a low-priority process from ever acquiring the resource, leading to
starvation.
Prevention: Use fair scheduling algorithms to ensure all processes have a chance to
acquire resources. Avoid holding resources for longer than necessary and use signal()
operations judiciously to allow other processes access.
3. Race Conditions:
Cause: A race condition occurs when the order of execution of wait() and signal()
operations is critical and can lead to unpredictable behavior.
Example: Two processes try to increment a shared counter. If one process increments
the counter before the other process waits for it, the final value may be incorrect due
to the race condition.
Prevention: Use semaphores in conjunction with other synchronization mechanisms
like mutexes or critical sections to ensure proper ordering of operations and avoid race
conditions.
Q#32: What are the main differences between operating systems for mainframe computers and
personal computers?
Ans: While both mainframe and personal computer (PC) operating systems share some fundamental
functionalities, there are significant differences between them due to the distinct nature of their
intended use cases and hardware environments. Here's a breakdown of the key distinctions:
6. Cost:
Mainframe OS: Typically more expensive due to the higher cost of mainframe hardware
and specialized software licenses.
PC OS: Generally less expensive, with a wide range of options available at different
price points.
7. Applications:
Mainframe OS: Supports enterprise-level applications for transaction processing,
database management, data analytics, and mission-critical operations.
PC OS: Supports a wide variety of applications for individual productivity,
entertainment, communication, and general-purpose computing.
8. Open Source vs. Proprietary:
Mainframe OS: Traditionally proprietary, but some open-source options are emerging.
PC OS: Both open-source (e.g., Linux) and proprietary (e.g., Windows, macOS) options
are widely available.
9. Support and Maintenance:
Mainframe OS: Typically comes with dedicated support and maintenance services from
the vendor.
PC OS: Support options vary depending on the specific OS and vendor, with both free
and paid options available.
10. Community and Resources:
Mainframe OS: Smaller community compared to PC OS, but with specialized resources
and expertise available.
PC OS: Large and active community with extensive resources, documentation, and
support available online and through various channels.
Q#33: What are the four necessary conditions for a deadlock to occur?
Ans: A deadlock is a situation where two or more processes are blocked indefinitely, waiting for
resources that are held by other processes. For a deadlock to occur, four necessary conditions must
be met simultaneously:
1. Mutual Exclusion: Each resource can be held by only one process at a time. No other
process can access the resource while it is being held.
2. Hold and Wait: A process holding at least one resource is waiting to acquire additional
resources that are currently held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from a process. They must be
released by the process holding them.
4. Circular Wait: There exists a circular chain of processes, each waiting for a resource that is
held by the next process in the chain.
If any of these four conditions is not met, a deadlock cannot occur. For example, if resources can be
preempted, a process can be interrupted and its resources taken away, breaking the deadlock cycle.
Similarly, if there is no circular wait, processes can eventually acquire the resources they need and
avoid deadlock.
Q#34: Differentiate between internal and external fragmentation.
Ans:
Internal Fragmentation:
Concept: Occurs when allocated memory blocks are larger than the actual memory required
by the process, resulting in unused space within the block.
Causes:
Fixed-size allocation schemes (e.g., contiguous memory allocation).
Memory allocation algorithms that do not find the best fit for the process's memory
requirements.
Example: Allocating a 4KB memory block to a process that only needs 2KB, leaving 2KB of
unused space within the block.
Impact: Reduces memory utilization and can lead to memory exhaustion even when there is
sufficient free memory available.
External Fragmentation:
Concept: Occurs when free memory is broken into small, non-contiguous chunks that are
too small to be used by any process. This fragmented free memory cannot be allocated to
any process, even though the total amount of free memory may be sufficient.
Causes:
Frequent allocation and deallocation of memory blocks.
Compaction algorithms that are not efficient in merging free memory blocks.
Example: After several memory allocations and deallocations, the free memory is scattered
into small chunks of 1KB, 2KB, and 5KB, which are too small to accommodate a process
requiring 8KB.
Impact: Reduces memory utilization and can lead to memory exhaustion even when there is
enough total free memory.
Q#35: What is the copy-on-write feature and under what circumstances is it beneficial to use this
feature?
Ans: Copy-on-write is a memory optimization technique that allows multiple processes to share the
same pages of memory until one of them attempts to modify the data. At that point, a copy of the
page is created for the modifying process, and the original page remains unchanged for the other
processes.
Benefits:
Fast process creation: fork() can share the parent's pages instead of copying them, so creating a
child is cheap.
Reduced memory usage: Pages are duplicated only if and when they are actually modified.
It is especially beneficial when the child calls exec() immediately after fork(), since most shared
pages are never written and therefore never copied.
Q#36: Which system call is used to create a process in UNIX?
Ans: The fork() system call is used to create a new process in UNIX.
fork() creates a new process that is a copy of the calling process. The new process inherits the parent
process's memory space, file descriptors, and other resources. After the fork() call, the parent
process and the child process both continue execution concurrently.
1. The kernel allocates a new process control block (PCB) for the child process.
2. The kernel copies the parent process's memory space to the child process's memory space.
3. The child process inherits the parent process's file descriptors and other resources.
4. Both processes resume execution at the instruction following the fork() call.
Return value:
On success: fork() returns the child's PID to the parent, and 0 to the child.
On error: fork() returns -1 to the parent, and no child process is created.
Example:
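A minimal illustration of fork(); the exact interleaving of the two printf outputs may vary:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");           /* fork failed */
        return 1;
    } else if (pid == 0) {
        printf("child: pid=%d, parent=%d\n", getpid(), getppid());
    } else {
        printf("parent: pid=%d, child=%d\n", getpid(), pid);
        wait(NULL);               /* wait for the child to finish */
    }
    return 0;
}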
Q#37: How hardware devices use functionality of an operating system?
Ans: Hardware devices and operating systems have a symbiotic relationship. While hardware
provides the physical components for a computer system, the operating system acts as the
intermediary between the hardware and software applications, enabling them to interact and
function effectively. Here's how hardware devices utilize the functionalities of an operating system:
1. Device Drivers:
Operating systems provide device drivers, which are specialized software programs that
allow the OS to communicate with specific hardware devices.
Examples: Graphics card drivers, network card drivers, printer drivers, etc.
2. Interrupt Handling:
Hardware devices can generate interrupts, which are signals sent to the OS to notify it
of events requiring attention.
Examples: Keyboard input, mouse clicks, network data arrival, etc.
3. Memory Management:
The OS manages the allocation and deallocation of memory for both the device drivers
and the data buffers used by the devices.
Examples: Allocating memory for a graphics card's frame buffer, managing data buffers
for network communication, etc.
4. I/O Management:
The OS provides a unified interface for applications to perform input/output operations
with various devices.
Examples: Reading data from a keyboard, writing data to a printer, sending data over a
network, etc.
5. Power Management:
The OS can control the power state of hardware devices to optimize energy
consumption.
Examples: Putting the hard disk to sleep after a period of inactivity, dimming the display
when the computer is idle, etc.
6. Device Security:
The OS can implement security measures to protect devices from unauthorized access
and malicious attacks.
Examples: Restricting access to specific devices based on user privileges, encrypting
data transmitted over a network, isolating virtual machines from the host system's
hardware, etc.
Q#38: What is meant by CPU-protection?
Ans: CPU protection refers to a set of mechanisms and techniques implemented in hardware and
software to safeguard the central processing unit (CPU) from unauthorized access, malicious code
execution, and other security threats. It encompasses various aspects of CPU security, including:
1. Memory Protection:
Prevents unauthorized processes from accessing or modifying memory regions that
belong to other processes or the operating system.
Achieved through techniques like memory segmentation, paging, and access control
mechanisms.
2. Instruction Protection:
Safeguards the CPU from executing malicious instructions or unauthorized code.
Implemented through mechanisms like code signing, instruction set extensions for
security, and privilege levels.
3. Data Protection:
Protects sensitive data from unauthorized access or modification during processing
within the CPU.
Achieved through techniques like encryption, data isolation, and secure memory
management.
4. Exception Handling:
Provides a mechanism for the CPU to handle unexpected events or errors securely.
Prevents malicious code from exploiting exceptions to gain control or access sensitive
data.
5. Virtualization Support:
Enables the creation of isolated virtual environments that share the same physical CPU
resources.
Provides a layer of protection between different virtual machines and the host system.
Hardware mechanisms that support CPU protection include:
Memory Management Units (MMUs): Hardware units that enforce memory protection
by translating virtual addresses to physical addresses and checking access permissions.
x86 Privilege Levels: Different privilege levels for instructions and data, allowing the
operating system to control access to sensitive resources.
ARM TrustZone: A hardware-based security extension that creates isolated execution
environments for sensitive code and data.
Intel SGX (Software Guard Extensions): Provides enclaves for secure code execution
and data protection.
Q#39: What is meant by dual-mode operation?
Ans: Dual-mode operation is a fundamental concept in operating systems that refers to the ability of
the CPU to operate in two distinct modes: user mode, in which application code runs with restricted
privileges, and kernel (supervisor) mode, in which the operating system runs with full access to the
hardware. A mode bit maintained by the hardware indicates the current mode; privileged
instructions execute only in kernel mode, and a system call switches the CPU from user mode to
kernel mode and back. Examples:
x86 Architecture: Uses four privilege levels (rings), with ring 0 being the most privileged (kernel
mode) and ring 3 being the least privileged (user mode).
ARM Architecture: Uses two main privilege levels: privileged mode (kernel mode) and user
mode.
Q#40: What is the ready queue?
Ans: The ready queue, also known as the CPU ready queue, is a data structure in a computer's
operating system that holds all the processes that are currently in the ready state, waiting for their
turn to be executed by the central processing unit (CPU). It acts as a kind of staging area or buffer
between the processes that are running and the rest of the system, ensuring that processes are
managed fairly and efficiently.
The scheduler, a part of the operating system, uses the ready queue to determine which process is
next in line for execution based on a scheduling algorithm that considers factors like priority, time,
and other criteria. This ensures that the CPU is being used efficiently and that no processes are left
waiting indefinitely.
Q#41: In which memory management technique can a process be divided into equal parts?
Ans: The memory management technique that divides the process into equal-sized blocks is called
paging.
In paging, both the process address space and the physical memory are divided into fixed-size blocks
called pages and frames, respectively. The size of a page is typically a power of 2, such as 4 KB or 8
KB. This simplifies memory management and reduces external fragmentation.
Q#42: In which memory management techniques does the system suffer from external fragmentation?
Ans: There are two main memory management techniques that suffer from external fragmentation:
Contiguous Allocation with Variable-Sized Partitions: This technique allocates memory to processes
in contiguous blocks, but the size of the blocks can vary depending on the process's needs. While this
approach allows for better memory utilization compared to fixed-sized partitions, it leads to external
fragmentation over time. As processes are loaded and unloaded, free memory gets broken up into
smaller chunks that may not be large enough to fit new processes, even though there might be
sufficient total free space available.
Segmentation (Pure): Segmentation divides a process's logical address space into variable-sized
segments based on its logical structure (code, data, stack etc.). This offers flexibility in memory
allocation, but similar to variable-sized contiguous allocation, it can lead to external fragmentation.
Free memory becomes scattered in chunks of various sizes that may not be suitable for new
processes requiring memory.
Ans: In an operating system, a critical section refers to a specific block of code within a program that
deals with shared resources. These shared resources can be various things, like:
Memory locations: This could be a variable or data structure that multiple processes or
threads need to access and modify.
Files: When multiple processes need to read from or write to the same file concurrently.
Hardware devices: Imagine multiple processes trying to print to the same printer at the
same time.
The critical part is that if multiple processes or threads try to access and modify these shared
resources simultaneously, it can lead to inconsistencies and errors. This is called a race condition.
Q#43: What is meant by race condition?
Ans: A race condition is a situation in concurrent programming where the outcome of a computation
depends on the unpredictable timing of multiple processes or threads accessing shared resources.
This can lead to inconsistent and erroneous results, as the order in which processes access and
modify shared data becomes critical.
The key ingredients of a race condition are:
Shared resources: Multiple processes or threads accessing and modifying the same shared
data, such as global variables, data structures, or hardware devices.
Unpredictable timing: The timing of process execution and resource access is unpredictable,
leading to non-deterministic behavior.
Lack of synchronization: Processes or threads do not synchronize their access to shared
resources, resulting in potential conflicts and race conditions.
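A classic demonstration in C with POSIX threads: two threads increment a shared counter without
synchronization, so updates are lost. Protecting counter++ with a mutex removes the race; the
iteration count is arbitrary:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                  /* shared, unprotected */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                 /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but lost updates typically make it smaller. */
    printf("counter = %ld\n", counter);
    return 0;
}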
Q#45: What is difference between Load Time Dynamic Linking VS Run Time Dynamic Linking?
Ans: Dynamic linking is a technique that allows a program to load and link with libraries or modules
at runtime instead of compile time. This provides several advantages, including reduced memory
footprint, modularity, and easier updates. However, there are two main types of dynamic linking:
load-time dynamic linking and run-time dynamic linking.
Load-Time Dynamic Linking:
Binding: Libraries or modules are linked with the program at load time, typically when the
program starts.
Mechanism: The linker resolves external references and creates a table of function pointers
within the program's memory space.
Advantages:
Faster startup time compared to run-time linking.
Reduced memory footprint as only the required libraries are loaded.
Easier debugging as symbols are available at load time.
Disadvantages:
Requires static linking information about the libraries.
Libraries cannot be dynamically loaded or unloaded at runtime.
Run-Time Dynamic Linking:
Binding: Libraries or modules are linked with the program at runtime, typically when a
function from the library is first called.
Mechanism: The program dynamically loads the library and resolves external references
when needed.
Advantages:
Allows for dynamic loading and unloading of libraries at runtime.
Enables lazy loading of libraries, reducing startup time.
Provides greater flexibility for modularity and updates.
Disadvantages:
Slower startup time compared to load-time linking.
Requires additional runtime overhead for dynamic linking.
Debugging can be more challenging due to late binding.
Examples:
Load-time: ELF executables whose shared-library dependencies are resolved by the dynamic
loader when the program starts.
Run-time: dlopen()/dlsym() on POSIX systems and LoadLibrary()/GetProcAddress() on Windows.
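A short run-time linking sketch using dlopen()/dlsym(); the library name libm.so.6 is
platform-specific and may differ on your system:

#include <stdio.h>
#include <dlfcn.h>

/* compile with: cc demo.c -ldl */
int main(void) {
    /* load the math library at run time */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* look up cos() and call it through a function pointer */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}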
Long Questions
Q#46: Recall the reader-writer problem with readers' priority and check the following code.
Complete it by adding the missing lines.
Readers can access the shared resource concurrently as long as no writer is accessing it.
A writer has to wait for all readers to finish before it can enter the critical section.
This solution ensures that readers do not starve (wait indefinitely) but it can lead to writer starvation
if there are many readers and few writers.
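The code listing did not survive in this copy; for reference, the standard readers-priority solution
using two semaphores looks like this (a sketch, not the original listing):

#include <semaphore.h>

sem_t rw_mutex;      /* writer / first-reader lock; initialise to 1 */
sem_t mutex;         /* protects read_count; initialise to 1 */
int read_count = 0;

/* initialise once: sem_init(&rw_mutex, 0, 1); sem_init(&mutex, 0, 1); */

void reader(void) {
    sem_wait(&mutex);
    read_count++;
    if (read_count == 1)
        sem_wait(&rw_mutex);   /* first reader locks out writers */
    sem_post(&mutex);

    /* ... read the shared data ... */

    sem_wait(&mutex);
    read_count--;
    if (read_count == 0)
        sem_post(&rw_mutex);   /* last reader lets writers in */
    sem_post(&mutex);
}

void writer(void) {
    sem_wait(&rw_mutex);
    /* ... write the shared data ... */
    sem_post(&rw_mutex);
}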
Q#47: Consider a logical address space of 64 pages of 1024 words each, mapped onto a physical
memory of 32 frames.
Ans:
Page number (p): 64 pages per process means p = log2(64) = 6 bits.
Offset (d): 1024 words per page means d = log2(1024) = 10 bits.
Frame number (f): 32 frames means f = log2(32) = 5 bits.
Therefore:
Length of a logical address = p + d = 6 + 10 = 16 bits.
Length of a physical address = f + d = 5 + 10 = 15 bits.
Q#48: A system has four process and five resources. The current allocation and maximum needs
are as follows:
If the Available vector is [0, 0, 2, 1, 1], check whether the system is in a safe state or not. Also write a
safe sequence if the system is in a safe state.
Need:
P0: 1 2 1 3
P1: 2 2 1 0
P2: 1 3 1 1
P3: 1 2 2 0
Available = [0, 0, 2, 1, 1]
Iteration 1:
i=0
Finish[0] = true
Available = Available + Allocated[0]
Available = [0+1, 0+0, 2+2, 1+1, 1+1]
Available = [1, 0, 4, 2, 2]
Iteration 2:
i=1
Finish[1] = true
Available = Available + Allocated[1]
Available = [1+2, 0+0, 4+1, 2+1, 2+0]
Available = [3, 0, 5, 3, 2]
Iteration 3:
i=2
Finish[2] = true
Available = Available + Allocated[2]
Available = [3+1, 0+1, 5+0, 3+1, 2+1]
Available = [4, 1, 5, 4, 3]
Iteration 4:
i=3
Finish[3] = true
Available = Available + Allocated[3]
Available = [4+1, 1+1, 5+1, 4+1, 3+0]
Available = [5, 2, 6, 5, 3]
Since all the processes can finish in the order P0, P1, P2, P3, the system is in a safe state, and a safe
sequence is <P0, P1, P2, P3>.
Q#49: Consider the following set of Processes, with the length of CPU time given in microseconds.
Draw the Gantt chart, using Round Robin Algorithm when Time Slice = 3 microseconds. Also
compute the waiting and turnaround times of each process.
Ans: Here's the Gantt chart for the given set of processes using the Round Robin algorithm with a
time slice of 3 microseconds:
Q#50: Consider three resource types in a system with following number of instances.
System State is as shown below, find safe sequence using Bankers algorithms.
Iteration 1:
Iteration 2:
Iteration 3:
Iteration 4:
Process P0 is now finished, so it is removed from the system. The available resources are updated
as follows:
Iteration 5:
Q#51: Consider a system with 64-bit logical address. This system implements paging with 2 KB
page size. Compute: lengths of p and d fields in the logical address.
Ans: Compute the lengths of the page number (p) and offset (d) fields in the logical address for a
system with a 64-bit logical address and 2 KB page size:
1. Page Size: The page size is 2 KB = 2^11 bytes.
2. Offset (d): The offset must be able to address every byte within a page, so:
d = log2(page size) = log2(2^11) = 11 bits
3. Logical Address Size: The logical address is 64 bits, divided into the page number (p)
followed by the offset (d).
4. Computing p: The bits that are not used for the offset form the page number:
p = 64 - d = 64 - 11 = 53 bits
Therefore, the page number field (p) is 53 bits long and the offset field (d) is 11 bits long.
Q#52: In a paging system, logical address space is 4 GB and the page size is 4 KB. Available physical
memory is 16 GB. Compute: length of logical address and length of f field in the physical address.
Ans: Compute the length of the logical address and the frame number (f) field in the physical
address for a paging system with the given specifications:
1. Logical Address Space: The logical address space is 4 GB = 2^32 bytes, so the length of the
logical address is 32 bits.
2. Page Size: The page size is 4 KB = 2^12 bytes, so the offset (d) field is 12 bits long.
3. Physical Memory: The available physical memory is 16 GB = 2^34 bytes.
4. Relationship between Physical Address and Frame Number: A physical address consists of
the frame number (f) followed by the same 12-bit offset (d), and f must be able to identify
every frame of physical memory.
5. Computing f: The number of frames is the physical memory size divided by the page size:
Number of frames = 2^34 / 2^12 = 2^22 frames
f = log2(2^22) = 22 bits
Therefore, the logical address is 32 bits long, and the frame number field (f) in the physical address
is 22 bits long (a physical address is 22 + 12 = 34 bits).
Q#53: If the hit-ratio to a TLB is 98%, and it takes 10 nanoseconds (nsec) to search the TLB and 100
nsec to access the main memory, then what must be the Effective Memory Access Time in
nanoseconds?
Ans: Calculate the Effective Memory Access Time (EMAT) for the system:
1. Hit Ratio: We are given that the hit ratio to the TLB (Translation Lookaside Buffer) is 98%.
This means in 98% of cases, the memory address translation happens using the TLB, which is
faster than main memory access.
2. Miss Ratio: The remaining 2% of cases represent misses in the TLB, requiring access to the
main memory. So, the miss ratio is 100% - 98% = 2%.
3. Access Times:
TLB access time: 10 nanoseconds (nsec)
Main memory access time: 100 nanoseconds (nsec)
4. Weighted Average: The EMAT is a weighted average of the access times. On a TLB hit, an
access costs the TLB search plus one memory access; on a miss, it costs the TLB search plus a
page-table lookup in memory plus the memory access itself (assuming a single-level page
table). Note that the EMAT can never be less than one full memory access, since every
reference must ultimately touch memory:
EMAT = Hit ratio * (TLB time + memory time) + Miss ratio * (TLB time + 2 * memory time)
5. Plugging in the values:
EMAT = 0.98 * (10 + 100) nsec + 0.02 * (10 + 100 + 100) nsec
6. Calculation:
EMAT = 0.98 * 110 + 0.02 * 210 = 107.8 + 4.2 = 112 nsec
Therefore, the Effective Memory Access Time (EMAT) for this system is 112 nanoseconds.
Q#54: If the total number of available frame is 50, and there are 2 processes; one of 50 pages and
the other of 35 pages, then how much memory would be proportionally allocated to each of these
processes?
Ans: We can calculate the proportional memory allocation for each process using the following
steps:
1. Total Available Frames: We are given that there are 50 total available frames in the system.
2. Process Sizes: There are two processes:
Process 1: 50 pages
Process 2: 35 pages
3. Proportional Allocation: We want to allocate memory based on the ratio of each process's
size to the total process size.
4. Total Process Size:
Total process size = Size of process 1 + Size of process 2
= 50 pages + 35 pages
= 85 pages
5. Proportional Allocation Ratio:
For process 1: 50 pages / 85 pages ≈ 0.588
For process 2: 35 pages / 85 pages ≈ 0.412
6. Frames Allocated (ratio * total frames):
Process 1: 0.588 * 50 ≈ 29.4, so process 1 receives 29 frames.
Process 2: 0.412 * 50 ≈ 20.6, so process 2 receives 21 frames.
Together they receive 29 + 21 = 50 frames, exactly the number of available frames.
Q#55: What is meant by Belady’s Anomaly? Why stack based algorithms do not suffer from
Belady’s Anomaly?
Ans: Belady's Anomaly is a phenomenon in memory management that occurs in paging systems
using certain page replacement algorithms. It refers to the counterintuitive situation where
increasing the number of available page frames (memory) can actually lead to an increase in the
number of page faults (accesses to the disk to retrieve data).
1. Page Faults: When a program tries to access data that is not currently loaded in physical
memory, a page fault occurs. The operating system needs to fetch the required page from
disk and load it into a free frame.
2. Page Replacement Algorithms: To manage memory efficiently, operating systems use page
replacement algorithms. These algorithms decide which page frame to evict from memory
when a new page needs to be loaded and there are no free frames available.
3. Belady's Anomaly: This anomaly occurs with specific page replacement algorithms, most
notably the First-In-First-Out (FIFO) algorithm. In FIFO, the oldest page in memory (the one
loaded first) is evicted to make space for a new page.
FIFO doesn't consider the future usage of a page. It simply evicts the oldest page regardless
of whether it will be needed soon.
Belady's Anomaly can occur with FIFO when a sequence of memory accesses repeatedly uses
a set of pages that barely fits in the available frames. Increasing the number of frames
changes which pages FIFO happens to evict, and can evict pages that will be needed again
soon. The classic example is the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: FIFO incurs 9
page faults with 3 frames but 10 page faults with 4 frames.
Why stack-based algorithms do not suffer from Belady's Anomaly: a page replacement algorithm is
a stack algorithm if it has the inclusion property: the set of pages held in memory with n frames is
always a subset of the set that would be held with n + 1 frames. Adding a frame can therefore never
cause a page to be evicted that would otherwise have stayed, so the number of page faults can only
stay the same or decrease. Examples:
Least Recently Used (LRU): Keeps the n most recently used pages in memory. Since the n
most recently used pages are always a subset of the n + 1 most recently used pages, LRU
satisfies the inclusion property.
Optimal (OPT): Evicts the page that will not be used for the longest time in the future; it also
satisfies the inclusion property.
Because these algorithms rank pages by a priority (recency or future use) that does not depend on
the number of frames, increasing the number of frames cannot increase the number of page faults,
and Belady's Anomaly cannot occur.