Explain your reasons for choosing them:
a. The processes arrive at large time intervals
b. The system’s efficiency is measured by the percentage of jobs completed.
c. All the processes take almost equal amounts of time to complete
WAIT Operation:
The WAIT operation (also known as P or down operation) is employed when a process
or thread wants to access a resource that may not be immediately available. When a
process invokes WAIT on a semaphore (a synchronization variable), it checks the
semaphore's value. If the value is greater than zero, the process decrements the
semaphore (indicating that a resource is being used) and continues execution. If
the value is zero, the process is blocked and placed in a waiting queue,
effectively yielding control until the resource becomes available. This operation
ensures that processes only access resources when they are free, preventing
conflicts.
SIGNAL Operation:
The SIGNAL operation (also known as V or up operation) is used to indicate that a
resource has been released or is now available. When a process completes its use of
a resource, it invokes SIGNAL on the corresponding semaphore. This operation
increments the semaphore's value, and if any processes are waiting in the queue,
one of them is unblocked and allowed to proceed. SIGNAL thus serves to notify
waiting processes that they can now access the resource.
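The decrement-or-block and increment-and-wake behaviour described above can be sketched as follows. This is a minimal illustrative semaphore built on a lock and condition variable (real programs should use `threading.Semaphore` directly); the class name `SimpleSemaphore` is invented for this example.

```python
import threading

class SimpleSemaphore:
    """Illustrative counting semaphore: WAIT (P/down) and SIGNAL (V/up)."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                  # P / down operation
        with self._cond:
            while self._value == 0:  # resource busy: block
                self._cond.wait()    # join the waiting queue
            self._value -= 1         # claim one unit of the resource

    def signal(self):                # V / up operation
        with self._cond:
            self._value += 1         # release one unit
            self._cond.notify()      # unblock one waiting process

sem = SimpleSemaphore(value=1)
sem.wait()      # value 1 -> 0: caller proceeds without blocking
sem.signal()    # value 0 -> 1: a waiter, if any, would be woken
```

The `while` loop (rather than a plain `if`) re-checks the value after waking, which is the standard guard against spurious wakeups.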
Q.(3) What is a critical section? Give solution to the critical section problem
using semaphores.
Entering the Critical Section: Before entering its critical section, each process executes wait(S) on a semaphore S initialized to 1. If the semaphore value is greater than 0, the process decrements the semaphore and enters the critical section. If the value is 0, the process is blocked and waits.
Exiting the Critical Section: Once a process finishes executing its critical
section, it executes signal(S). This operation releases the semaphore,
incrementing its value and allowing one of the waiting processes to enter its
critical section.
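The solution above can be demonstrated with a short sketch: four threads each increment a shared counter, and a binary semaphore (initialized to 1) guards the critical section so updates never interleave. The thread counts and iteration counts are arbitrary illustration values.

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore, S = 1
counter = 0

def worker(n):
    global counter
    for _ in range(n):
        mutex.acquire()          # wait(S): enter critical section
        counter += 1             # critical section: shared update
        mutex.release()          # signal(S): exit critical section

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4 threads x 10000 increments = 40000
```

Without the acquire/release pair, the read-modify-write on `counter` could interleave and lose updates; with it, mutual exclusion guarantees the final total.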
Assignment III
Q. (1) What is fragmentation? What are its types? How is it minimized in different memory management schemes?
Fragmentation refers to the inefficient use of memory space that occurs when free
memory is divided into small, non-contiguous blocks. It can lead to reduced
available memory for new processes, ultimately impacting system performance.
Fragmentation is typically categorized into two types:
Types of Fragmentation:
Internal Fragmentation: This occurs when allocated memory blocks are larger than
the requested memory. For example, if a process requests 20 KB but is allocated 32
KB, the remaining 12 KB is wasted, leading to internal fragmentation.
External Fragmentation: This happens when free memory is split into small,
scattered blocks that are not usable for larger processes. Even if there is enough
total free memory, the non-contiguous nature prevents allocation for processes that
require larger contiguous blocks.
Minimizing Fragmentation:
Segmentation: This memory management scheme divides memory into segments based on
logical divisions (e.g., functions, arrays). By using segments, internal
fragmentation is reduced as memory allocation can be more closely aligned with the
actual size of data structures.
Paging: This method breaks physical memory into fixed-size frames and divides
logical memory into pages of the same size. Paging eliminates external
fragmentation, as any free frame can be used regardless of its location, though
the partly filled last page of a process can still cause internal fragmentation.
Buddy System: This approach allocates memory in powers of two, which helps reduce
internal fragmentation and allows for efficient merging of free blocks.
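The wasted space under these schemes is easy to quantify. The sketch below, assuming 4 KB pages, computes internal fragmentation for paging (unused bytes in the last page) and for a buddy allocator (which rounds each request up to the next power of two); it reproduces the 20 KB / 32 KB / 12 KB figures used in the example above.

```python
import math

PAGE = 4096  # assumed 4 KB page size

def paging_waste(request_bytes):
    """Bytes left unused in the process's last page (internal fragmentation)."""
    pages = math.ceil(request_bytes / PAGE)
    return pages * PAGE - request_bytes

def buddy_waste(request_bytes):
    """Buddy system: the allocation is rounded up to a power of two."""
    alloc = 1 << math.ceil(math.log2(request_bytes))
    return alloc - request_bytes

print(paging_waste(20 * 1024))   # 20 KB fills exactly 5 pages -> 0 bytes wasted
print(buddy_waste(20 * 1024))    # rounded up to 32 KB -> 12 KB wasted
```

Note how the same 20 KB request wastes nothing under 4 KB paging but 12 KB under the buddy system, which is the trade-off the text describes: power-of-two allocation simplifies merging free blocks at the cost of more internal fragmentation.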
Demand Paging and Demand Segmentation are memory management techniques used in
operating systems to optimize the use of physical memory while managing large
processes efficiently.
Demand Paging:
Demand Paging is a memory management scheme where pages of a process are loaded
into memory only when they are needed (i.e., when a page fault occurs). This allows
the operating system to keep physical memory usage low by loading only those pages
that are actively used, rather than the entire process. When a process tries to
access a page that is not currently in memory, a page fault is triggered, and the
required page is loaded from secondary storage (usually a disk) into physical
memory. This approach helps in utilizing memory more efficiently and reduces
loading time, as it only loads necessary pages, leading to improved performance for
large applications.
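The load-on-first-access behaviour can be sketched with a toy simulator: a page is "loaded" only when first touched, and each first touch counts as a page fault. The reference string and the set-based page table are made up for illustration.

```python
resident = set()   # pages currently in physical memory
faults = 0

def access(page):
    """Touch a page; load it on demand if it is not resident."""
    global faults
    if page not in resident:   # page fault: page is not in memory
        faults += 1
        resident.add(page)     # "load" the page from secondary storage
    return page                # access then proceeds normally

for p in [0, 1, 0, 2, 1, 0]:   # reference string touching 3 distinct pages
    access(p)
print(faults)  # 3 faults: each distinct page is loaded exactly once
```

Only the three pages actually referenced ever occupy memory, which is the efficiency gain demand paging provides over loading the whole process up front.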
Demand Segmentation:
Demand Segmentation operates similarly but is based on logical segments rather than
fixed-size pages. In this method, segments of a process (e.g., functions, arrays,
or data structures) are loaded into memory only when required. Each segment can be
of varying sizes, and when a segment fault occurs (when a segment is not in
memory), the operating system loads the needed segment from disk to memory. Demand
segmentation provides a more logical view of memory allocation and can improve the
organization of processes, as it allows for loading only the required segments.
Q.(3) Give the difference between physical & logical addresses. Also show how a logical address is translated to a physical address.
Physical Address:
Definition: A physical address refers to an actual location in the computer’s
physical memory (RAM). It is the address that the memory unit recognizes and uses
to access data.
Context: Physical addresses are used by the memory management unit (MMU) to access
the memory hardware directly.
Example: If the physical memory has a size of 4 GB, addresses range from 0x00000000
to 0xFFFFFFFF.
Logical Address:
Definition: A logical address, also known as a virtual address, is generated by the
CPU during program execution. It represents an address within a process's logical
address space, which may not correspond directly to a physical memory address.
Context: Logical addresses allow processes to use memory without knowing the actual
physical address. This abstraction enables easier memory allocation and process
isolation.
Example: In a system with virtual memory, a process might use a logical address of
0x00001234, which the MMU translates into a physical address.
Translation from Logical to Physical Address:
The translation from logical to physical addresses is managed by the MMU using a
page table in the case of paging systems. When a program accesses a logical
address, the MMU checks the page table to find the corresponding physical address.
For example, if a logical address maps to a page frame in physical memory, the MMU
retrieves the data from the physical address.
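The lookup described above can be sketched in a few lines, assuming 4 KB pages. The page-table entries (page 0 in frame 5, page 1 in frame 9) are invented for illustration; the logical address 0x00001234 is the one used in the example above.

```python
PAGE_SIZE = 4096                 # assumed 4 KB pages
page_table = {0: 5, 1: 9}        # page number -> frame number (illustrative)

def translate(logical):
    """MMU-style translation: split into page + offset, look up the frame."""
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]     # page-table lookup
    return frame * PAGE_SIZE + offset

# Logical address 0x00001234 = page 1, offset 0x234.
# Page 1 maps to frame 9, so the physical address is 9 * 4096 + 0x234.
print(hex(translate(0x00001234)))  # -> 0x9234
```

The offset is carried over unchanged; only the page number is replaced by its frame number, which is why page and frame sizes must match.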
Assignment IV
Q.(1) How would you compare different types of disk scheduling algorithms?
Disk scheduling algorithms are essential for managing how read and write requests
are handled on a disk, optimizing performance and minimizing wait times. Here’s a
comparison of several common disk scheduling algorithms: