
Q. (1) How would you apply a scheduling policy for each of the following cases?
Explain your reasons for choosing them:
a. The processes arrive at large time intervals
b. The system’s efficiency is measured by the percentage of jobs completed.
c. All the processes take almost equal amounts of time to complete

a. The processes arrive at large time intervals:

Scheduling Policy: First-Come, First-Served (FCFS)
Reasoning: Since processes arrive at large intervals, each process is likely to finish before the next one arrives, so a simple, non-preemptive algorithm like FCFS works well. Processes run in the order they arrive with little or no waiting, context switches are rare, and the scheduler itself stays simple and cheap. The usual drawback of FCFS, where long jobs delay the short jobs queued behind them (the convoy effect), rarely arises when arrivals are this widely spaced.
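
A minimal sketch (arrival and burst times are hypothetical) showing that under FCFS, widely spaced arrivals produce little or no waiting:

#include <stdio.h>

/* FCFS sketch: each process starts when it arrives or when the
   previous one finishes, whichever is later. */
int main(void) {
    int arrival[] = {0, 100, 250};  /* large gaps between arrivals */
    int burst[]   = {30, 40, 20};
    int n = 3, finish = 0;

    for (int i = 0; i < n; i++) {
        int start = arrival[i] > finish ? arrival[i] : finish;
        int wait  = start - arrival[i]; /* zero when gaps exceed bursts */
        finish = start + burst[i];
        printf("P%d: wait=%d, finish=%d\n", i + 1, wait, finish);
    }
    return 0;
}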

b. The system’s efficiency is measured by the percentage of jobs completed:

Scheduling Policy: Shortest Job First (SJF)
Reasoning: To maximize the percentage of jobs completed, SJF is an optimal choice. By prioritizing shorter jobs, the system minimizes average waiting and turnaround time, so more jobs finish in any given time frame and throughput rises. SJF does require the job lengths to be known or estimated in advance, but when that information is available it provably minimizes average waiting time and therefore maximizes the completion rate.
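
A minimal sketch (burst times are hypothetical, and all jobs are assumed to arrive at time 0) of how running the shortest job first keeps average waiting time low:

#include <stdio.h>
#include <stdlib.h>

/* SJF sketch: sort jobs by burst time, then accumulate waiting time. */
static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {8, 4, 1, 5};
    int n = 4;
    qsort(burst, n, sizeof burst[0], cmp); /* shortest job first */

    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;  /* each job waits for all shorter jobs */
        wait += burst[i];
    }
    printf("average wait = %.2f\n", (double)total_wait / n); /* 4.00 here */
    return 0;
}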

c. All the processes take almost equal amounts of time to complete:

Scheduling Policy: Round Robin (RR)
Reasoning: When processes have similar execution times, Round Robin is a natural fit. The algorithm gives each process a fixed time slice (quantum) in turn, so every process receives an equal share of CPU time and no single process monopolizes the CPU. Because all processes need roughly the same number of quanta, the schedule stays fair and predictable, and the system remains responsive throughout. RR thus balances the workload and maintains responsiveness in such scenarios.
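
A minimal sketch of RR time slicing (burst times and the quantum are hypothetical):

#include <stdio.h>

/* RR sketch: each pass gives every unfinished process at most one
   quantum of CPU time. */
int main(void) {
    int remaining[] = {10, 9, 10};  /* near-equal service demands */
    int n = 3, quantum = 4, t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d completes at t=%d\n", i + 1, t);
                done++;
            }
        }
    }
    return 0;
}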

Q.(2) Define WAIT & SIGNAL Operations.

WAIT and SIGNAL operations are fundamental synchronization mechanisms used in concurrent programming to manage access to shared resources and prevent race conditions among processes or threads.

WAIT Operation:
The WAIT operation (also known as P or down operation) is employed when a process
or thread wants to access a resource that may not be immediately available. When a
process invokes WAIT on a semaphore (a synchronization variable), it checks the
semaphore's value. If the value is greater than zero, the process decrements the
semaphore (indicating that a resource is being used) and continues execution. If
the value is zero, the process is blocked and placed in a waiting queue,
effectively yielding control until the resource becomes available. This operation
ensures that processes only access resources when they are free, preventing
conflicts.

SIGNAL Operation:
The SIGNAL operation (also known as V or up operation) is used to indicate that a
resource has been released or is now available. When a process completes its use of
a resource, it invokes SIGNAL on the corresponding semaphore. This operation
increments the semaphore's value, and if any processes are waiting in the queue,
one of them is unblocked and allowed to proceed. SIGNAL thus serves to notify
waiting processes that they can now access the resource.
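
As a concrete illustration, here is a minimal sketch of a counting semaphore offering these two operations, built on pthreads (the type and function names are our own for illustration, not the POSIX sem_t API):

#include <pthread.h>

/* Counting-semaphore sketch mirroring the WAIT/SIGNAL semantics
   described above. Illustrative only. */
typedef struct {
    int value;               /* number of available resources */
    pthread_mutex_t lock;    /* protects value */
    pthread_cond_t  nonzero; /* signalled when value becomes > 0 */
} semaphore;

void sem_init_op(semaphore *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void wait_op(semaphore *s) {            /* WAIT / P / down */
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)               /* nothing available: block */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;                         /* claim one resource */
    pthread_mutex_unlock(&s->lock);
}

void signal_op(semaphore *s) {          /* SIGNAL / V / up */
    pthread_mutex_lock(&s->lock);
    s->value++;                         /* release one resource */
    pthread_cond_signal(&s->nonzero);   /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}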

Q.(3) What is a critical section? Give solution to the critical section problem
using semaphores.

A critical section is a segment of code in a multi-threaded or multi-process environment where shared resources (such as variables or data structures) are accessed and manipulated. The critical section problem arises when multiple processes or threads attempt to enter their critical sections simultaneously, potentially leading to data inconsistency and unpredictable behavior.

Solution Using Semaphores:

Semaphores are synchronization tools that help manage access to shared resources and prevent race conditions. Here’s how to solve the critical section problem using semaphores:

Initialization: Define a semaphore variable, typically initialized to 1. This semaphore will control access to the critical section.

semaphore mutex = 1; // Binary semaphore

Entering the Critical Section: Each process must execute the following code before
entering its critical section:

wait(mutex); // Decrement the semaphore

If the semaphore value is greater than 0, the process enters the critical section and decrements the semaphore. If the value is 0, the process is blocked and waits.

Exiting the Critical Section: Once a process finishes executing its critical
section, it must signal the semaphore:

signal(mutex); // Increment the semaphore

This operation releases the semaphore, allowing other waiting processes to enter
their critical sections.
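
Putting the three steps together, here is a minimal runnable sketch using POSIX semaphores, where sem_wait and sem_post play the roles of wait(mutex) and signal(mutex) (the shared counter and iteration count are hypothetical):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;      /* binary semaphore guarding the critical section */
long counter = 0; /* shared resource */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);  /* wait(mutex): enter critical section */
        counter++;         /* critical section: touch shared data */
        sem_post(&mutex);  /* signal(mutex): leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1); /* binary semaphore, initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}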

Assignment III

Q. (1) What is fragmentation? What are its types? How are they minimized in different memory management schemes?

Fragmentation refers to the inefficient use of memory space that occurs when free
memory is divided into small, non-contiguous blocks. It can lead to reduced
available memory for new processes, ultimately impacting system performance.
Fragmentation is typically categorized into two types:

Types of Fragmentation:
Internal Fragmentation: This occurs when allocated memory blocks are larger than
the requested memory. For example, if a process requests 20 KB but is allocated 32
KB, the remaining 12 KB is wasted, leading to internal fragmentation.

External Fragmentation: This happens when free memory is split into small,
scattered blocks that are not usable for larger processes. Even if there is enough
total free memory, the non-contiguous nature prevents allocation for processes that
require larger contiguous blocks.

Minimizing Fragmentation:
Segmentation: This memory management scheme divides memory into segments based on
logical divisions (e.g., functions, arrays). By using segments, internal
fragmentation is reduced as memory allocation can be more closely aligned with the
actual size of data structures.

Paging: This method breaks physical memory into fixed-size pages and divides
logical memory into pages of the same size. Paging eliminates external
fragmentation, as any free page can be used regardless of its location.

Compaction: This technique involves reorganizing memory contents to eliminate external fragmentation. By moving processes closer together, larger contiguous free blocks can be created.

Buddy System: This approach allocates memory in powers of two, which helps reduce
internal fragmentation and allows for efficient merging of free blocks.
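
As a small illustration of the buddy system's rounding (request sizes here are hypothetical), this sketch rounds each request up to the next power of two and reports the resulting internal fragmentation:

#include <stdio.h>

/* Round a request up to the next power of two, as a buddy
   allocator does, and measure the bytes wasted. */
static unsigned next_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n) p <<= 1;
    return p;
}

int main(void) {
    unsigned requests[] = {20 * 1024, 9000, 65536};
    for (int i = 0; i < 3; i++) {
        unsigned block = next_pow2(requests[i]);
        printf("request %u -> block %u, wasted %u bytes\n",
               requests[i], block, block - requests[i]);
    }
    return 0;
}

For the 20 KB request this prints a 32 KB block with 12 KB wasted, matching the internal fragmentation example above.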

Q. (2) What are demand paging and Demand Segmentation?

Demand Paging and Demand Segmentation are memory management techniques used in
operating systems to optimize the use of physical memory while managing large
processes efficiently.

Demand Paging:
Demand Paging is a memory management scheme where pages of a process are loaded
into memory only when they are needed (i.e., when a page fault occurs). This allows
the operating system to keep physical memory usage low by loading only those pages
that are actively used, rather than the entire process. When a process tries to
access a page that is not currently in memory, a page fault is triggered, and the
required page is loaded from secondary storage (usually a disk) into physical
memory. This approach helps in utilizing memory more efficiently and reduces
loading time, as it only loads necessary pages, leading to improved performance for
large applications.
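
A minimal sketch of the idea (the reference string and frame assignment are hypothetical, and no page replacement is modeled):

#include <stdio.h>

/* Demand paging sketch: pages start out "not present" and are
   loaded only when first referenced, i.e. on a page fault. */
#define NPAGES 8

int main(void) {
    int present[NPAGES] = {0};        /* no page resident initially */
    int frame[NPAGES];
    int refs[] = {2, 3, 2, 5, 3, 2};  /* pages touched by the process */
    int nrefs = 6, next_frame = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int p = refs[i];
        if (!present[p]) {            /* page fault */
            frame[p] = next_frame++;  /* "load" the page from disk */
            present[p] = 1;
            faults++;
            printf("fault on page %d -> frame %d\n", p, frame[p]);
        }
    }
    printf("%d references, %d faults\n", nrefs, faults);
    return 0;
}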

Demand Segmentation:
Demand Segmentation operates similarly but is based on logical segments rather than
fixed-size pages. In this method, segments of a process (e.g., functions, arrays,
or data structures) are loaded into memory only when required. Each segment can be
of varying sizes, and when a segment fault occurs (when a segment is not in
memory), the operating system loads the needed segment from disk to memory. Demand
segmentation provides a more logical view of memory allocation and can improve the
organization of processes, as it allows for loading only the required segments.

Q.(3) Give the difference between physical & logical addresses. Also show how a logical address is translated into a physical address.

Physical Address vs. Logical Address

In computer systems, understanding the distinction between physical and logical addresses is essential for memory management and addressing mechanisms.

Physical Address:
Definition: A physical address refers to an actual location in the computer’s
physical memory (RAM). It is the address that the memory unit recognizes and uses
to access data.
Context: Physical addresses are used by the memory management unit (MMU) to access
the memory hardware directly.
Example: If the physical memory has a size of 4 GB, addresses range from 0x00000000 to 0xFFFFFFFF.

Logical Address:
Definition: A logical address, also known as a virtual address, is generated by the
CPU during program execution. It represents an address within a process's logical
address space, which may not correspond directly to a physical memory address.
Context: Logical addresses allow processes to use memory without knowing the actual
physical address. This abstraction enables easier memory allocation and process
isolation.
Example: In a system with virtual memory, a process might use a logical address of 0x00001234, which the MMU translates into a physical address.

Translation from Logical to Physical Address:
The translation from logical to physical addresses is managed by the MMU using a
page table in the case of paging systems. When a program accesses a logical
address, the MMU checks the page table to find the corresponding physical address.
For example, if a logical address maps to a page frame in physical memory, the MMU
retrieves the data from the physical address.
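
A minimal sketch of this translation with a single-level page table (the page size and page-table contents are hypothetical; the logical address reuses the 0x00001234 example above):

#include <stdio.h>

/* Split a logical address into page number and offset, then map
   the page to its frame via the page table. */
#define PAGE_SIZE 4096

int main(void) {
    unsigned page_table[] = {5, 9, 2, 7};  /* page -> frame mapping */
    unsigned logical = 0x00001234;

    unsigned page     = logical / PAGE_SIZE;  /* which page */
    unsigned offset   = logical % PAGE_SIZE;  /* where inside the page */
    unsigned physical = page_table[page] * PAGE_SIZE + offset;

    printf("logical 0x%08X -> page %u, offset 0x%03X -> physical 0x%08X\n",
           logical, page, offset, physical);
    return 0;
}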

Assignment IV

Q.(1) How would you compare different types of disk scheduling algorithms?

Disk scheduling algorithms are essential for managing how read and write requests are handled on a disk, optimizing performance and minimizing wait times. Here’s a comparison of several common disk scheduling algorithms:

FCFS (First-Come, First-Served): Services requests strictly in arrival order. It is simple and starvation-free, but the head may swing back and forth across the disk, giving the largest total head movement of the group.

SSTF (Shortest Seek Time First): Always services the pending request closest to the current head position. It substantially reduces average seek time compared with FCFS, but requests far from the head can starve.

SCAN (Elevator): The head sweeps from one end of the disk to the other, servicing requests along the way, then reverses. It gives low variance in waiting times and avoids starvation, though cylinders just behind the head wait a full sweep.

C-SCAN: Like SCAN, but the head services requests in one direction only and jumps back to the start after each sweep, giving more uniform wait times across all cylinders.

LOOK / C-LOOK: Variants of SCAN and C-SCAN in which the head reverses (or jumps back) at the last pending request rather than the physical end of the disk, saving unnecessary travel.

In short: FCFS is fairest but slowest; SSTF is fast but can starve distant requests; the SCAN family trades a little average performance for bounded waiting and predictable service.
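
To make this concrete, the sketch below computes total head movement under FCFS and SSTF for a hypothetical request queue and starting cylinder:

#include <stdio.h>
#include <stdlib.h>

#define N 8

/* Total head movement when requests are served in arrival order. */
static int fcfs(const int *req, int head) {
    int moved = 0;
    for (int i = 0; i < N; i++) {
        moved += abs(req[i] - head);
        head = req[i];
    }
    return moved;
}

/* Total head movement when the nearest pending request is always
   served next. */
static int sstf(const int *req, int head) {
    int pending[N], moved = 0;
    for (int i = 0; i < N; i++) pending[i] = req[i];
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)  /* find nearest pending request */
            if (pending[i] >= 0 && (best < 0 ||
                abs(pending[i] - head) < abs(pending[best] - head)))
                best = i;
        moved += abs(pending[best] - head);
        head = pending[best];
        pending[best] = -1;          /* mark as served */
    }
    return moved;
}

int main(void) {
    int req[N] = {98, 183, 37, 122, 14, 124, 65, 67};
    printf("FCFS head movement: %d\n", fcfs(req, 53)); /* 640 */
    printf("SSTF head movement: %d\n", sstf(req, 53)); /* 236 */
    return 0;
}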
